Hacker News

Doesn't matter. Whiteboard-as-IDE can throw you off so much that you can't think clearly about the big-picture idea, especially if you're talking in front of people you just met in a stressful interview. It's nothing at all like explaining an algorithm to a peer after you've been working there for a while and feel comfortable, etc.

Whiteboard-as-IDE is just bad, all the time.




What would you use to ascertain good coding skill? It is impractical to give someone a problem set and have them come back after a week. To be honest, that's the approach I'd really love to take.


Why not sit with them at a computer, let them set up their own preferred working environment, let them have an interactive shell prompt within which to execute snippets of code while they tinker and develop the solution, etc.?

I don't understand your thinking -- it seems like you picture it as a dichotomy between asking trivia questions which must be on a whiteboard, vs. assigning an extensive college homework problem set -- both of which seem like terrible ways of assessing on the job skill to me.

The questions you would ask at the whiteboard are probably fine questions. It's the way you allow them to be solved that's the problem.

For example, if someone asked me to write some code in Python that computes the median of a stream of numbers, I would probably do something using itertools-based generators, and/or something using the heapq library for a heap.

I do not have the APIs of these standard modules memorized. I absolutely could not write down their usage on a whiteboard. It wouldn't just be minor syntax issues. It would be so much of needing to look up which function argument goes where, which thing has no return value but mutates the underlying data type, etc., that it would just totally and completely prevent me from being able to fluidly solve the problem or explain what I'm doing. The whiteboard nature of the discussion would be a total hindrance, alien to the experience of actual day-to-day programming.

And I've used both heapq and itertools for many years, time and again, in easily many thousands of lines of code each -- and I still always need to look up some documentation, paste some snippet about itertools.starmap or itertools.cycle into IPython, test it on some small toy data, poke around with the output to verify I am thinking of the usage correctly, and then go back over to my code editor and write the code now that I've verified by poking around what it is that I need to do.
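By "poking around" I mean toy checks like these, pasted into IPython to confirm my mental model of the API (the toy data is arbitrary):

```python
import itertools

# Confirm starmap unpacks each tuple as positional arguments:
pairs = [(2, 3), (4, 5)]
print(list(itertools.starmap(pow, pairs)))

# Confirm cycle repeats indefinitely; islice takes a finite prefix:
print(list(itertools.islice(itertools.cycle("ab"), 5)))
```

Thirty seconds of that, and then back to the editor knowing exactly what the calls do.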

That's just how development works. It never works by starting with a blank editor screen and then writing code from top to bottom in a straightforward manner. It doesn't even happen by writing some code and then just going back through the same source file and revising.

100% of the time, you also have a browser with Stack Overflow open, google open, API documentation open, and you also have some sandbox environment for rapidly either pasting code into an interpreter and playing with it, or rapidly doing a compile workflow and running some code, possibly in a debugger, to see what's going on.

I do not understand why you wouldn't replicate that same kind of situation when you're testing someone. What you want to know is whether they can efficiently tinker with the problem, use their base knowledge of the relevant algorithm and data structure to get most of the way there, and then efficiently use other tools on the web or in a shell or whatever to smooth out the little odd bits that they don't have instantaneous recall or a photographic memory of.

In fact, if they do solve some algorithm question start to finish, it just means they have crammed for that kind of thing, spent a lot of time memorizing that kind of trivia, and practiced. That's not actually very related to on-the-job skill at all. By observing them complete it start to finish, you're not getting a signal that they are a good developer (nor a bad one) -- only that they are currently overfitted to this one kind of interview trivia problem. You do not know whether their skill will generalize to all the other odds-and-ends tasks that pop up as you're working, or to situations where they don't have 100% memory recall.

Anyway, the point is you can still ask development and algorithm questions, but you should offer the candidate a comfortable programming environment that is a perfect replica of the environment they will use on the job, with their own chosen editor, access to a browser, same kind of time constraints, comfortable seating, privacy, quiet, etc.

And you should care mostly about seeing the process at work, how they verify correctness, how they document and explain what they are doing. If you're asking problems where mere correctness is itself some kind of super rare occurrence, like some kind of esoteric graph theory problem or something, you're just wasting everyone's time.


My reply is about 2 days late. But thank you for the feedback... I am genuinely trying to improve the process since I've been at the receiving end of it at one time as well.

I'd definitely like to run something like this but I'd need folks to install a good screen sharing tool (join.me, webmeeting or some such thing...). But I'll definitely be open to asking the candidate's willingness to do so. That way they can get working code in an environment they are comfortable in...

We do most interviews remotely and offer a remote work setup as well. So it's not always practical to have the person physically code in front of me.


One of my most enjoyable experiences as a candidate was when a company shared login information with me for SSH-ing into a temporary virtual machine they had spun up at Amazon EC2 solely for the interview. They asked me what editor I'd like present, and separately made sure any environment details were taken care of ahead of time.

Then I was able to simply log in with my shell here at home, and the screen was shared with the interviewers. The whole interview took place in console Emacs, with the interviewer pasting in a question, me poking around and asking clarifying questions, then switching over to IPython, tinkering, and going back and writing code incrementally.

I think all of the modern front-end services that do this kind of thing are pretty terrible, like Coderpad, HackerRank, TripleByte, or more UI-focused screensharing tools. Heck, I'd even opt for just a Google Hangout if we had to do it by UI screen sharing.

I think the low tech route of SSH is vastly superior.
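For anyone who wants the low-tech route: a shared tmux session over SSH gets you there with no special tooling. A rough sketch, assuming a throwaway `interview` account and a hypothetical `box.example.com` host (both names are mine):

```shell
# Interviewer, on the box: start a named tmux session
tmux new-session -s interview

# Candidate, from home: SSH in as the same user and attach to it;
# both parties now see and can type into the same live terminal
ssh interview@box.example.com
tmux attach-session -t interview

# Optional: attach read-only for a silent observer
tmux attach-session -t interview -r
```

Everyone sees the same editor, the same shell, the same IPython session, and the candidate types at native latency in their own terminal.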



