Blind Pair Programming Interviews (codemanship.co.uk)
104 points by swanson on March 17, 2014 | 43 comments



A 33% rate of developers just going offline after being "explained" the problem (through copy-pasting in instructions) seems pretty abysmal. I'd be really worried that you're losing quality candidates due to them being offended by having instructions pasted into a text file without any chance for a conversation.

I wouldn't be surprised if 33% of raw candidates for a dev position don't really know how to refactor properly, but in my experience, candidates who don't know what they're doing at least flounder around for a while doing the wrong thing, instead of suddenly leaving an interview.

It might be an easily solvable problem - I wonder how much the candidates were briefed about the blind nature of the screening, and why there wasn't a lot of interaction happening.

Another way to fix it would be to have a developer interact normally with the candidate - including voice chat, discussion, etc. but have the evaluation be done by another developer, who only has access to a recorded screen share without any audio. Seems like it gets most of the benefits without the drawback of seeming cold and impersonal to candidates.


You really shouldn't use percentages to talk about trends if your sample size is less than 100 - it's misleading.
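
To put a rough number on that (the 2-of-6 figures here are invented, purely to illustrate the spread): a 95% Wilson score interval around a "33%" observation at a tiny sample size stretches from roughly 10% to 70%. A quick sketch in Java:

    class SmallSampleSketch {
        // 95% Wilson score interval for an observed proportion.
        // The 2-of-6 numbers in main() are made up for illustration only.
        static double[] wilson95(int successes, int n) {
            double z = 1.96;
            double p = (double) successes / n;
            double denom = 1 + z * z / n;
            double centre = p + z * z / (2.0 * n);
            double margin = z * Math.sqrt(p * (1 - p) / n + z * z / (4.0 * n * n));
            return new double[] { (centre - margin) / denom, (centre + margin) / denom };
        }

        public static void main(String[] args) {
            double[] ci = wilson95(2, 6); // "33%" observed
            System.out.printf("approx. 95%% CI: %.0f%% to %.0f%%%n", ci[0] * 100, ci[1] * 100);
            // prints roughly: approx. 95% CI: 10% to 70%
        }
    }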

But I wouldn't be surprised if a large number of people who claim to be software developers can't refactor code. I've seen many of my peers (in a CS degree) become used to churning out code without looking back.


>> I'd be really worried that you're losing quality candidates due to them being offended by having instructions pasted into a text file without any chance for a conversation.

If a developer gets offended over something as simple as that AND goes offline without much of an explanation as to why, I'd be relieved, because I've just dodged a bullet.


That's probably true, but on the other hand - in my experience if you're really interested in attracting real talent, interviews are as much about selling your company to them as evaluating their skills. Building an interview process that doesn't allow interaction beyond pasting instructions might be a signal to the developer that they don't want to work there.


I am commenting based on my past experience. This sort of behaviour is more of a red flag against the developer rather than against the company. I knew someone who displayed the same behaviour in the past, before I learned my lesson. Happy to let other companies train and deal with these types of people.


I believe this would have been a better test if the interviewer had made his expectations clearer.

It was notable that everybody failed on one point. That failure wasn't actually relevant - competent people wouldn't fail on that point if they were told it was important (just as they wouldn't fail in a similar situation on the job). Letting them fail taught us nothing about how those people would do in a real job.

It seemed to me that this led to a situation where prejudices were still very present - whoever has previously worked in a shop with similar policies to the interviewer will pass. Whoever has not, cannot pass.

If an employee would be told what was expected, why should an interviewee be expected to read the interviewer's mind?


I for one would have failed the testing part entirely, for two reasons:

First, for a refactoring job, I would have assumed this was not running code, but a subset of something bigger. I would hardly expect it to even compile, let alone pass tests. If I took this interview, I might have noticed the presence of tests, and I might have run them if I did notice. Pretty unlikely.

Second, most of the refactoring I have done so far is based on local, semantics-preserving transformations. Those are extremely reliable (as in a couple of mistakes every 50 modifications). No point in running tests every few edits, unless they're less than two keystrokes (and seconds) away. If I do get in trouble, I still have Git.
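
For instance (invented code, just to show the scale of edit I mean), an "extract method" step is local and preserves semantics, so re-running the tests after it tells you very little:

    class ExtractMethodSketch {
        // Before: the tax calculation is inlined.
        static double totalBefore(double net, double taxRate) {
            return net + net * taxRate;
        }

        // After one local, semantics-preserving "extract method" step:
        // same observable behaviour, just a new named seam.
        static double totalAfter(double net, double taxRate) {
            return net + tax(net, taxRate);
        }

        private static double tax(double net, double taxRate) {
            return net * taxRate;
        }
    }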

---

When I think about it, I probably would have failed the code smell part as well: over the years, I have seen the utter superiority of the functional style over the imperative style in most cases. This has consequences:

I sometimes use the ternary operator to initialize variables (or should I say constants). The only thing ugly about this operator is its syntax. Its semantics are cleaner than those of if(){}else{}, its imperative equivalent.
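
Something like this (names invented): the ternary is an expression, so the value is produced in one place, while the if/else equivalent scatters the binding across assignments in each branch:

    class TernarySketch {
        // Expression form: the value is computed and returned directly.
        static int timeout(boolean batchJob) {
            return batchJob ? 3600 : 30;
        }

        // Statement form: same behaviour, but the declaration and the
        // branching are pulled apart into separate assignments.
        static int timeoutImperative(boolean batchJob) {
            final int t;
            if (batchJob) {
                t = 3600;
            } else {
                t = 30;
            }
            return t;
        }
    }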

I often use multiple returns. It's the only way I can use if(){}else{} without relying on side effects.
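
For example (invented code): guard clauses with multiple returns, instead of a result variable that each branch has to mutate:

    class EarlyReturnSketch {
        // Each branch returns its value directly; no mutable "result"
        // threaded through the if/else.
        static String classify(int n) {
            if (n < 0) return "negative";
            if (n == 0) return "zero";
            return "positive";
        }
    }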

I don't shy away from switch statements, or their cascaded if() equivalents. I may even use a cascaded ternary operator. See, most of the time, the Expression Problem leans heavily in favour of sum types (tagged unions) and pattern matching, instead of class hierarchies. Java lacks sum types, so this means emulating them with a switch statement. This is cumbersome, but less so than the equivalent class hierarchy.
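
Roughly like this (the Shape example is invented; it's just the usual tag-plus-switch encoding of a sum type in plain Java):

    final class Shape {
        enum Tag { CIRCLE, RECT }

        final Tag tag;
        final double radius;        // meaningful when tag == CIRCLE
        final double width, height; // meaningful when tag == RECT

        private Shape(Tag tag, double radius, double width, double height) {
            this.tag = tag;
            this.radius = radius;
            this.width = width;
            this.height = height;
        }

        static Shape circle(double r) { return new Shape(Tag.CIRCLE, r, 0, 0); }
        static Shape rect(double w, double h) { return new Shape(Tag.RECT, 0, w, h); }

        // "Pattern matching" by switching on the tag; clunky, but flatter
        // than the equivalent class hierarchy plus visitor.
        static double area(Shape s) {
            switch (s.tag) {
                case CIRCLE: return Math.PI * s.radius * s.radius;
                case RECT:   return s.width * s.height;
                default:     throw new AssertionError("unreachable");
            }
        }
    }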

I shun getters and setters. While I like my objects to be immutable, sometimes I do need a public mutable member variable. Well, I'm honest about it, and use just that. I don't hide it under the getter/setter carpet, unless it actually helps me enforce some specific invariant. And I call my getters "foo()", instead of the more customary "getFoo()". I want to emphasise the result, not the action of getting it.
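
Concretely, something like this (invented classes): an honest public field where there is no invariant to protect, and a "balance()" accessor rather than "getBalance()" where there is one:

    final class Widget {
        public int count; // honest public mutable state, not hidden behind get/setCount()
    }

    final class Account {
        private int balance; // hidden because an invariant actually matters here

        int balance() { return balance; } // "balance()", not the customary "getBalance()"

        void deposit(int amount) {
            if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
            balance += amount;
        }
    }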

I will probably end up writing ML in Java (except when it means fighting the interfaces around me, including the standard library). But I do believe this generally results in shorter, cleaner, more reliable code than idiomatic Java. This stays true even in the face of readability issues ("readability" meaning you can't use recursion, closures, or even first class booleans, because your poor colleagues lack some basic education; this is not a strawman, I have lived it).

So, your code smell may very well be my best practice.

This interview process would likely reject me. Unless the interviewer has a relatively solid understanding of functional programming, he will just mark me off as sloppy, too clever, ignorant of OO principles, or even all three. I can explain myself, but I only have half an hour, and this comment already took me twice that.


If anyone else is interested in conducting similar blind programming interviews, check out: https://coderpad.io.

It's a completely in-browser realtime code collaboration/execution sandbox. The candidate (or you) types code on the left and hits a button to run it. It can be quite a bit faster to set up than Skype screensharing, and it allows the interviewer to edit as well.


It would also prevent the interviewer from forming any prejudices based on the appearance of the candidate's screen. Esoteric window managers, that sort of thing.


That's a great point, thanks!


Wow, this is great! I'm sure the playback feature is very useful for interviews -- other coworkers could watch the interview after the fact.

This would be even better with an integrated chat, so you could watch the interviewer & interviewee discuss the code during the replay. Any plans to support that?


Not directly anytime soon - but we integrate with google hangouts, which you can use as an easy chat-only option. This will, however, reveal their name, which I figure you probably already know anyway.

Otherwise chatting through the code box works pretty well currently.


Coderpad is amazing. Three thumbs up for it. Have had zero issues from either side of it, and it's been a great experience on both.


I would like to take part in this kind of interview, not necessarily to take the job, but to experience an honest and objective evaluation from an experienced developer. Being a student and not actively looking for a job, it's hard for me to self-evaluate. When I hit roadblocks, I don't know whether the problem is genuinely hard to solve or whether I just bit off more than I could chew.


If you are a student, this sounds like a great project to do for a class: set up a service where two coders can get randomly paired up to program an exercise and then they can later evaluate each other!


Ok, a really naive question here - but how do you avoid cheating?

I can easily see authentication issues arising if this takes hold - how do you know that the person who is taking the test is actually the person who applied, or who will show up (even if it's remote) for the job?


There's no reason this has to actually be a remote interview. They can be in the same building as you, on your own hardware.

The value is that the evaluator is blinded. Doesn't mean you can't have someone else greet them and supervise them, as long as that person isn't scoring them.


Use a proctor who checks the ID of the person, and verifies that they are, in fact, participating in the interview and not using another screen sharing program to have a friend take the test for them.


I'm surprised that the whole process is based on evaluating how well the candidate can apply some TDD techniques. I'm sure that more than 99% of all software is developed without unit tests. This includes the Linux kernel and the rest of the OS, the other big OSes, databases and other commercial software. Even most Java code doesn't have detailed unit tests. To base your evaluation entirely on how well the candidate uses those techniques seems not very objective to me. I've seen people who are very good at working with very messy code which is impossible to refactor in a reasonable time, and they do a very good job without any unit tests. This skill is sometimes much more important than knowing how to run your tests and a few refactoring tricks.

TDD is a methodology - not an ability. A good developer can develop the habits in a very short time. If this is part of the culture, people can be trained to use it.


I was interested in the refactoring course mentioned in the post. What I turned up was a YouTube video showing someone refactoring code to music, instead of, like, a GitHub repo or something useful so I could try it out myself. Oh well. It is a nice idea.


I wanted the same thing - there is a zip file with some code on one of the YouTube video descriptions. I asked the creator and he said it was an older version of the material that he uses for his on-site training.


Not quite an experiment - the results were not measured or even measurable - you probably can't hire them all and compare the results vs the interview rating. But an interesting protocol, if they can somehow validate it.


Very good point, measure everything!

One way to start trying a quantitative approach would be to conduct normal interviews with the same candidates and compare the results. If you already have a decent interview process there should still be some similar results, indicating you are on to something.

Not as good as hiring them all and evaluating performance, but the real world sucks


Cool idea, and nice write-up, but now I want to find out how the guys hired with the help of this method fare in their jobs!


This seems to be an ill-thought-out technique. The exercise has been designed so that all the candidate has is a false task and an awkward channel for communication.

Coding at interview is essentially always a false task, as the context and realistic goals are stripped away and the problem scaled back to something that can fit into a short timeframe. Typically, enough interviewers are bad enough at setting problems that there's a "market for lemons" effect -- the candidate does not know ahead of time whether you are a good or poor interviewer. As their goal is to get the job (not to only get the job if this particular interviewer happens to be a good interviewer -- see note) that means a significant portion of their energies goes into second-guessing who they are dealing with. And stage fright -- I've watched candidates make a complete hash of programming tasks at interview, be hired nonetheless (because we saw something else in them), and turn out actually to be competent programmers too.

Strip the communication back to just slowly typed text, with a thirty minute timeframe so every communication is exceptionally expensive, and now we have an almost entirely artificial puzzle.

Suppose one starts by reading and understanding the code; another starts by running the tests. One decides, given the thirty minute timeframe, that he's going to have to batch his testing at the end because the number of smells identified sounds like it's the test measure; another decides to run the tests at every tiny edit because it sounds as if the interviewer wants him to show his test-driven credentials; another decides that interviewers like to "see how the candidate thinks" and spends most of his time asking text questions of the interviewer.

From this we could deduce, what? To be brutal, I wonder if frankly we might as well be deducing that the first one is a Sagittarius and this week's horoscope doesn't bode well for them. Because for all we know, the differences in behaviour are as likely to be about different guesses about the interview (it is a cut down task, so what has the interviewer decided to exclude and what have they decided to include, and does this interviewer even have any idea about that?) as about how the candidates actually go about programming on tasks in context.

But perhaps that's just my scientific curmudgeonliness about the "cult of hiring" -- the irrational belief that programmers interviewing candidates have, usually without evidence, that they are "able to ascertain how a candidate thinks" in a short space of time on an artificial task.

While it might be nice to hear how the candidates hired performed, at n=2, we still would not be able to make any reliable inferences.

note -- While you might think the candidate would only want the job if the interviewer is good, that is not true. First, because poor interviewers can nonetheless be good colleagues. But more importantly, because the candidate loses nothing in being offered a job he/she rejects, but does lose something in not being offered a job they might accept.


Agreed.

Sometimes I write nice code, I come back a few months later, understand how it works and it is easy to change / add features to it. Other times I write nasty hacks, because it is required quickly, and they just need something that works. Depends on time constraints and other factors. This interview technique obviously assumes that time is plentiful, and refactoring is the most important thing about the code. Usually it is not (at least for most of the places I have worked over the last ten years).

I personally find I get a good idea about someone's ability by talking to them. I have a good idea about my colleagues' ability, despite not having read much of their code. If they seem to have a passion for programming, understand a variety of concepts, and know what libraries are useful for what tasks, then I will have an idea that they are good.

Or are they still logging into MySQL from the command line, because anything else is too complex to set up? Print statements for debugging, because an IDE is too complex to set up with graphical debugging? Still using the same tools that we got taught at university ten years ago, because they have not bothered to keep up? Yes, I work with these people as well. If I was hiring, these ones would not get chosen.


This is a fantastic idea, like blind auditions. http://www.nber.org/papers/w5903


That's the first thing I thought of!

For the lazy, the linked paper (which is very famous) shows a significant jump in hiring women after orchestras started conducting blind auditions.


Seems like an interesting approach, but there is a lot to be said about how one communicates, not just codes.

Unless you just want a pure code monkey that never needs to talk to anyone or coordinate with anyone.. then this is a great way to find that monkey.


I would expect another interview purely to gauge likeability/communication/basic hygiene; this is a good approach just to gauge raw programming skills. It's no use getting to know someone just to find out they can't code at all.

That being said, the benefits of being able to discuss the requirements/expectations of the test would probably outweigh possible negative biases. Even if the candidate doesn't know what refactoring is, a discussion about it may give them insight on what they should research to increase their employability.


I like the idea of a blind evaluation, but I can't get past how impossibly awkward it would be to have to keep switching between coding and communicating with my "partner" via an IM window. Speaking a question out loud while I'm looking at code and my hands hover above the keyboard? Sure, that's using separate channels. Having to switch windows and type out my question? Huge context switch.

Not sure how badly that would affect the evaluation, but it seems like it would be significant.


This could easily be a useful service. It sounds like a good transition from a phone call with technical questions asked by someone who is not themselves technical, and a full interview. Some questions:

1. Could a non-technical person conduct this interview?

2. Could a technical person who is not familiar with the challenges pick them up quickly enough that they could proctor them for many companies who supply their own questions?


I do not think that a non-technical interviewer would be able to adequately judge how well the candidate found and fixed code smells, how well they were using their tool of choice, or how they went about testing the code.

Proctoring blind interviews might be really interesting, though.


I like the concept. It would take a lot of data (perhaps from a large software firm adopting this?) to see if this improved recruiting. Even then, it may only be applicable to the firm in question.

When I set up interviews, I purposefully don't allow interviewers to talk to each other. This reduces some bias, but not gender or race. I also speak last in verbal feedback sessions.


I would hang up on this guy, because he is a total d*ck and I would never work for this type of person. He just wants a machine to do his programming.


I would be delighted if all job interviews were done like that. I find all the social parts cumbersome and an unnecessary waste of time.


For a white man, this is reasonable.

For someone who is not a white man, this is an advantage because the interviewee is being tested on things that they control, rather than things they don't. (Gender, skin color, national origin, disability...)


Someone that cannot type fast is going to be selected against in this format. Someone for whom the interview language is not their native tongue is going to be selected against in this format.

Concrete examples: I work with someone who is a very bad typist. Yet she is very smart and a prolific contributor. (edit: the hint here is that a disability can make you a bad typist, yet a valuable contributor) Another person I worked with was French and found textual communication very challenging. I can't imagine either making it through this interview process.

Both are minorities by several measures.


I'd be very surprised to see someone who could comfortably speak in a second language, professionally, but have trouble typing. I say this as someone who has some experience both learning multiple languages and having gone through learning to teach English as a foreign language. These people would be exceptions to the rule. The blind audition equalizes for more people than for whom it disadvantages.


*The blind audition equalizes for more people than for whom it disadvantages.*

Maybe. Do you have evidence for this statement?

I haven't put myself to any sort of test, but my perception is that I am far more attuned to lapses in English usage in written form than in spoken. I don't think these interviews are very blinded at all. I can tell if you are ESL (unless you are very, very good), I can often tell if you are younger or older, I can tell if you come from my background or not. Can I tell as well as if I were meeting you face to face? Of course not! But how will I select people in each situation? Am I more or less biased in face-to-face or written communication?

We know tests like the SAT have a cultural and racial bias. I should hire programmers because they can deliver value to my company, not because they type like me. When I measure you by a proxy (how well you communicate via IM) I have the chance to make the wrong choice.

Don't get me wrong - I think there is a very good chance you are correct in that statement. But I don't believe these interviews are blinded (they are probably most blinded to gender, and least blinded to ESL), and no one has shown that the results are less biased than the alternatives.


Any sort of condition that affects fine motor control would make it hard to type but not to speak.


As a white man, and a software developer, I have never EVER found any prejudicial advantage in being white. Not once. In fact, I have felt distinctly disadvantaged when competing against Asians and Indians. Asians of any variety, south or east, very frequently experience preferential treatment over whites in technical fields. That's no exaggeration, based on my personal experience. Mind you, I don't begrudge anyone for it, but it's certainly a reality I'm tuned into.


But presumably you've never been any other colour, so it would be hard for you to identify if you were?



