I've seen folks with a lot of coding skill on their resume fumble simple whiteboard problems.
Solving a coding problem on a whiteboard tests your ability to solve coding problems on a whiteboard. That's a bias. It makes people who get nervous standing up and being the centre of attention less likely to pass the test. If coding on a whiteboard is a part of the job then fair enough, but if it isn't then you're introducing something to the interview that filters people out based on something other than their ability to do the job - and that means you're not necessarily recruiting the best person. I believe that's a good reason not to use whiteboard tests very often.
While that is in some ways fair, interviews inherently involve being the center of attention; people who do poorly at them because of nervousness will always have problems regardless of the format.
While it's true[0] that work samples are substantially better at evaluating candidates than informal interviews, they have their own downsides. For example, I have heard many people balk at multi-hour homework assignments as part of the interview process as too much of a time commitment to one company. In the end, any screening technique will be flawed. That doesn't mean that we shouldn't use them.
To be fair - we don't look for syntactic correctness in your solution. You miss a semicolon here and there - that's cool. You invent your own method/function to abstract out things like creating threads or communicating between processes - that's fine (in fact we provide examples of these and say feel free to use something like this).
What we are interested in is algorithmic correctness. I think for someone who programs professionally, writing an algorithm on a whiteboard shouldn't really be a big deal. Agree on the nervousness... I don't know a good way around it though... We normally do interviews on the phone using collabedit so the candidate can sit in their own comfort zone. I also make it a point to mute my phone and not to talk unless asked to.
There's a world of difference between being a skilled solver of problems, and being a skilled extemporaneous presenter of your problem-solving process on the fly. Most people are not good extemporaneous speakers, and the reason they're fumbling is not because they're incapable of solving the problem, but because they're juggling the following things:
* Presenting the solution.
* Determining the solution.
* Presenting themselves.
It's not really an accurate measure of how well they work day-to-day, because none of us show up to work and are given 15 minutes to present a solution to a problem we've not studied in years.
You're basically testing people's ability to improvise a solution while discussing it with two or three strangers. It's not surprising that there's a high failure rate in that.
Doesn't matter. White-board-as-IDE can throw you off so much that you can't think right about the big picture idea, especially if talking in front of people you just met in a stressful interview. It's nothing at all like explaining an algorithm to a peer after you've been working there and feel comfortable, etc.
What would you use to ascertain good coding skill? It is impractical to provide someone with a problem set and have them come back after a week. To be honest - that's the approach I'd really love to do.
Why not sit with them at a computer, let them set up their own preferred working environment, let them have an interactive shell prompt within which to execute snippets of code while they tinker and develop the solution, etc.?
I don't understand your thinking -- it seems like you picture it as a dichotomy between asking trivia questions which must be on a whiteboard, vs. assigning an extensive college homework problem set -- both of which seem like terrible ways of assessing on the job skill to me.
The questions you would ask at the whiteboard are probably fine questions. It's the way you allow them to be solved that's the problem.
For example, if someone asked me to write some code in Python that computes the median of a stream of numbers, I would probably do something using itertools-based generators, and/or something using the heapq library for a heap.
I do not have the APIs of these standard modules memorized. I absolutely could not write down their usage on a whiteboard. It wouldn't just be minor syntax issues. It would be so much of needing to look up which function argument goes where, which thing has no return value but mutates the underlying data type, etc., that it would just totally and completely prevent me from being able to fluidly solve the problem or explain what I'm doing. The whiteboard nature of the discussion would be a total hindrance, alien to the experience of actual day-to-day programming.
And I've used both heapq and itertools for many years, time and again, in easily many thousands of lines of code each -- and I still always need to look up some documentation, paste some snippet about itertools.starmap or itertools.cycle into IPython, test it on some small toy data, poke around with the output to verify I am thinking of the usage correctly, and then go back over to my code editor and write the code now that I've verified by poking around what it is that I need to do.
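To make this concrete, here's a rough sketch of the two-heap approach I'd reach for on the median-of-a-stream question (my own illustration of the idea, not something I'd reproduce from memory at a whiteboard - the exact `heapq` calling conventions are precisely what I'd be looking up in IPython):

```python
import heapq

def running_median(stream):
    """Yield the median after each number arrives, using two heaps.

    `lo` holds the smaller half as a max-heap (values negated, since
    heapq only provides a min-heap); `hi` holds the larger half as a
    plain min-heap.
    """
    lo, hi = [], []
    for x in stream:
        # Push onto the max-heap, then move its largest element over.
        heapq.heappush(lo, -x)
        heapq.heappush(hi, -heapq.heappop(lo))
        # Rebalance so lo has the same count as hi, or one more.
        if len(hi) > len(lo):
            heapq.heappush(lo, -heapq.heappop(hi))
        if len(lo) > len(hi):
            yield -lo[0]
        else:
            yield (-lo[0] + hi[0]) / 2
```

The signs alone - which heap gets negated values, which function mutates in place versus returning - are exactly the sort of detail that's trivial to verify in a shell and hopeless to get right cold on a whiteboard.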
That's just how development works. It does not ever work by starting with a blank editor screen and then writing code from top to bottom in a straightforward manner. It doesn't even happen by writing some then just going back in the same source file and revising.
100% of the time, you also have a browser with Stack Overflow open, google open, API documentation open, and you also have some sandbox environment for rapidly either pasting code into an interpreter and playing with it, or rapidly doing a compile workflow and running some code, possibly in a debugger, to see what's going on.
I do not understand why you wouldn't replicate that same kind of situation when you're testing someone. What you want to know is whether they can efficiently tinker around with the problem, use their base knowledge of the relevant algorithm and data structure to get most of the way there, and then efficiently use other tools on the web or in a shell or whatever to smooth out the little odd bits that they don't have instantaneous recall or photographic memory of.
In fact, if they do solve some algorithm question start to finish, it just means they have crammed for that kind of thing, spent a lot of time memorizing that kind of trivia, and practicing. That's not actually very related to on-the-job skill at all. By observing them complete it start to finish, you're not getting a signal that they are a good developer (nor a bad one) -- only that they are currently overfitted to this one kind of interview trivia problem. You do not know if their skill will generalize outside to all the other odds and ends tasks that pop up as you're working, or as you face something you don't have 100% memory recall over.
Anyway, the point is you can still ask development and algorithm questions, but you should offer the candidate a comfortable programming environment that is a perfect replica of the environment they will use on the job, with their own chosen editor, access to a browser, same kind of time constraints, comfortable seating, privacy, quiet, etc.
And you should care mostly about seeing the process at work, how they verify correctness, how they document and explain what they are doing. If you're asking problems where mere correctness is itself some kind of super rare occurrence, like some kind of esoteric graph theory problem or something, you're just wasting everyone's time.
My reply is about 2 days late. But thank you for the feedback... I am genuinely trying to improve the process since I've been at the receiving end of it at one time as well.
I'd definitely like to run something like this but I'd need folks to install a good screen sharing tool (join.me, webmeeting or some such thing...). But I'll definitely be open to asking the candidate's willingness to do so. That way they can get working code in an environment they are comfortable in...
We do most interviews remotely and offer a remote work setup as well. So it's not always practical to physically have the person code in front of me.
One of my most enjoyable experiences as a candidate was when a company shared login information with me for SSH-ing into a temporary virtual machine they had spun up at Amazon EC2 solely for the interview. They asked me what editor I'd like present, and separately made sure any environment details were taken care of ahead of time.
Then I was able to simply log in with my shell here at home, and the screen was shared with the interviewers. The whole interview took place in console Emacs, with the interviewer pasting in a question, me poking around and asking clarifying questions, then switching over to IPython, tinkering, and going back and writing code incrementally.
I think all of the modern front-end services that do this kind of thing are pretty terrible, like Coderpad, HackerRank, TripleByte, or more UI-focused screensharing tools. Heck, I'd even opt for just a Google Hangout if we had to do it by UI screen sharing.
I think the low tech route of SSH is vastly superior.
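For what it's worth, the low-tech route needs almost nothing beyond a shared tmux session over SSH. Something like the following (hostname and session name are just placeholders, not from the interview I described):

```shell
# Interviewer, on the throwaway VM: start a named session
tmux new-session -s interview

# Candidate, from home, using the temporary account:
ssh candidate@interview-vm.example.com
tmux attach-session -t interview

# Both sides now see the same terminal: editor, IPython, shell, all shared.
```

No plugins, no browser-based pad, and the candidate gets their real keybindings.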
> To be fair - we don't look for syntactic correctness of your solution.
Hey, that's great. But the thing is, you never know what you're going to get.
Some interviewers absolutely do insist on 100% syntactic correctness (along with optimal performance on some made-up combinatorial problem) -- even though they aren't giving you a shell or IDE to run your code iteratively. Sometimes they won't even give you a decent text editor -- though it may sound ridiculous, it's become very common, of late, for interviewers to ask you to just type directly into a Google Doc -- with variable-width fonts, autocapitalization, and other "helpful" features enabled by default -- even at places where you'd think they really, really ought to know better.
> I also make it a point to mute my phone and not to talk unless asked to.
Again, it sounds like you're hip as to the basics of how these sessions should be run, and that's great.
Unfortunately, it's not generally so, out there. Quite a few interviewers seem oblivious to the basics of phone etiquette (using speakerphones with an obvious echo behind them, for example). Or just aren't particularly communicative for one reason or another. And sometimes it turns out the person you're talking to doesn't really know the language you're coding in -- so you have to burn precious minutes explaining the basics of the language to them, along with the solution you're presenting.
That's the fun part about these sessions. You just never know what you're going to get!
Finding a whiteboard and brainstorming about how to solve a problem is absolutely part of the job. Unfortunately it's hard to find problems that can be explained to anyone with a reasonable programming background and solved in 30 minutes. So anything you do under those constraints is going to be artificial.
> Finding a whiteboard and brainstorming about how to solve a problem is absolutely part of the job.
But let's face it, this skill is trivial to an otherwise intelligent person, and it's not the reason whiteboard coding is done at interviews. It is to assess one's problem-solving or even specific coding skills. Unfortunately, in an almost quantum-mechanical way, the observation here affects the outcome.
I competed in CS competitions at the regional level, and to me they were less stressful than whiteboard tests. There you just have a console or a sheet of paper and a few hours to hash it over. No three pairs of eyes staring at your back. Guess it's the same for many others: the thing that turns reading a figurative newspaper chess column into a chessboxing tournament. One might be good at chess and OK at boxing, but not necessarily at the same time.
(and no, unfortunately I don't see a good way to fix this)
And if you overthink it (how can you not, with no idea what they're looking for?) you appear to choke or to ask too many questions. No shared context means a very artificial exchange.
My team has impromptu whiteboard meetings lasting from 5 to 60 minutes at least once per week. Being able to diagram and communicate your ideas is one of the biggest things we are looking for. We have already done phone screening and an in-office coding exercise by this point, so the whiteboard is a test of how you might integrate with the team.
> Being able to diagram and communicate your ideas is one of the biggest things we are looking for.
To me this seems like a basic requirement, like reading and writing - is it really such a hard skill that it's worth filtering for? I would think it's fairly easy to pick up by attending meetings and watching others, if somehow one is unfamiliar with the technique. Or is my expectation of what people generally can do way off the mark?
If that's the case then presenting on a whiteboard certainly should be part of your interview process. My post is a complaint about the ubiquity of it and that it's used inappropriately, not that it's a thing teams use at all. I understand that it has its place.