I postulate that false negatives are far more expensive in a constricted talent pool. We keep hearing the adage that false positives are expensive, but honestly, if one cannot spot a false positive within 30 days of engagement with that person, then the person doing the hiring should really reconsider their own expertise in the field.
I saw a really good write-up (I think on here) where they basically did an initial "are you an idiot?" round of interviews and then contracted every person they thought might be a viable (not good, just viable) candidate to build a small piece of software over 2 weeks. This amounted to 3 candidates, IIRC. The TL;DR was that the person they thought was least likely to perform knocked their socks off, while the one who crushed the interview asked for more time and then just fell off the radar. The technical code test and trick-question interview is a tired fingerprint of the industry. Honestly, if one cannot spot technical talent by having a 30-minute conversation with a person, they may want to reassess where they think they are in their own technical and managerial skills, because they may want to consider the possibility that they are engaging in a wiener-waving contest to stroke their own ego, as opposed to conducting an interview.
Imagine that you passed your dream job interview, so you pack your things and move to another state! 30 days into the new job you get the news: you are fired for being a low performer! You go online and post about your experience and how horrible this company is for not looking out for its employees. You get a lot of responses saying they have experienced the same, and that a large share of new hires are fired within the first month at this company!
Now tell me, do you think that company will get many quality applicants in the future? Personally, I prefer harsh interviews followed by relative safety over the reverse.
I did not go into deep detail about the article because I did not want to misquote it, but my understanding was that they sent the prospects a VM with the entire dev environment running, gave them a simple but real task that needed to be accomplished, and contracted them remotely to complete that task (with compensation). They (the prospective developers) were given a full software spec and access to the development team. They did track the time the team spent with each developer and did review the returned code. At worst it was a side gig with some income for the two weeks; IIRC, $9K went to each candidate whether they completed the task or not. The process was structured so that there was little risk to the applicant. The whole experiment was meant to expose the company's bias in its hiring process, and the summation was that they found it eye-opening.
People keep talking about doing these "trial" periods, but the people I want to hire are the type of people who have enough options that they wouldn't agree to some two-week trial period when they have four other companies offering them a full-time job with no strings attached.
I wish I could do paid trial periods for everyone; I just think this will filter out far more people than having a restrictive interview process that works specifically to filter out false positives.
So with a paid trial you will generally filter out the best people, who have other options, while a more restrictive interview process would generally filter out the worst people, who couldn't make it past a difficult Skype interview and a difficult onsite two-hour pair-coding session.
The problem is, as the company in the article articulated, they thought they were filtering for the best but found out that they were not. The guy they all agreed would be the best selection submitted a ball of code that was neither cohesive nor complete. The one who actually made it through was the lowest on their list. IIRC their intent was to hire all three if they submitted good work.
The whole intent of their experiment was to show them exactly what we are talking about: you are selecting people based on their sales skills, not their technical skills. 99% of the time, people don't develop while someone is watching them over Skype or in a pair-coding session where they know they are being graded. They develop at their machine, and if they get stuck they know they can turn to a close confidant and ask a question without being graded on it. The point is that how well I perform under a microscope is a huge factor in your decision-making process, and it is not a normal working condition. Therefore it acts as a huge bias filter, and one that does not amount to a hill of beans in everyday development.
I am not saying that what the company in the article did is a workable or salable solution to hiring, but I am saying that it exposes what many of us with multiple decades in the industry have been saying: hiring is broken, and people in the industry are very bad at hiring because they let all kinds of bias in. One is salesmanship, as highlighted in the article; the best salesman usually gets the job. Another big one is developer ego, which sadly is a very real thing in our industry. I will see if I can dig up the article.
From my perspective there aren't any other methods that are less arbitrary than several interview coding sessions.
Doing no coding sessions is more arbitrary, because it is far easier to BS a non-coding interview than it is to BS a coding interview. Doing paid trials is more arbitrary, because you are excluding anyone who can't devote two weeks of their life to a trial that may not amount to anything.
So from my perspective, intensive coding sessions, although far from perfect, are the best system we have, so our effort should go into recognizing the weaknesses they present and constructing the interview in a way that minimizes them.
For our part, we very proactively (and without judgment) give advice throughout the coding process: if it is clear someone is stuck on something small or easily Google-able, we will simply tell them so they can move on to actually solving the problem. If they are stuck on something bigger, we will talk through the problem with them in a way that would make certain approaches apparent to any halfway decent programmer.
I am not saying that any of the above is perfect, but I think trying to construct a programming interview that tries as hard as possible to mitigate unimportant factors that could lower interview performance (such as nervousness or getting stuck on something simple and panicking) is a better approach than any other form of vetting I've ever seen.