Tech Interview Handbook (yangshun.github.io)
760 points by yangshun on Aug 17, 2019 | 325 comments



I realize everybody's going to jump in and rant about algorithms in interviews, but I wish you'd all add something constructive as well.

I just had to conduct a round of interviews in a non-SF large US city, and it was a hellish crapshoot. Resumes are meaningless, and often re-written by recruiters to match the job anyway. Everyone has the same canned answers to the stupid behavioral questions. And as for the code, we included what we thought was a trivial nested for-loop problem and virtually nobody could even get started on it.

Is this kind of code problem too complicated, in your opinion? For all that I join in the complaints about irrelevant algorithmic questions, I have to admit that they at least test something, even if it's just willingness to study for the interview.

Instead of reading everybody's complaints about interviewing, I'd love to hear how you think it should be done. Because I have to admit I'm pretty much lost right now.


I can tell you how I do it and would certainly recommend it as the way it should be done. For some context, I've been interviewing software engineers for about 25 years in companies ranging from established multi-nationals to tiny startups in very fast headcount-growth mode. I'm in Silicon Valley. I can say that I've never regretted a hire I said yes to, so the method works to my satisfaction.

It'd be nice to think I have some special skill here but I really don't. This is just how interviewing was done in the 90s. To some of the younger generations, I've been told it sounds crazy.

If you send me your resume I'll actually read it, carefully. If the person described in this resume fits the background experience the role needs, you get an interview.

During the interview we'll talk about all those projects you worked on that are relevant to this role. Which parts did you enjoy the most, and why? Which parts were boring, and why? Which parts were the most challenging, and why? What did you find too easy, and why? What would you have done differently? Could you have? If you were to do the same project all over, how would you approach it? Other open-ended conversations along these lines.

I don't ask anyone to whiteboard code; that's not part of the job, so it's not part of the interview. No puzzles, no Trivial Pursuit-style questions.

It works great. You can't BS your way through such a conversation with a senior technical peer if you didn't actually do the work described in the resume. You just can't.

It is, however, vital that the interviewer be an expert in the field.


> nested for-loop problem

> No puzzles, no trivia-pursuit style questions.

I swear HN technical interview threads are the poster child of talking past one another. First, a for-loop is nowhere near a Trivial Pursuit question. Second, different companies of different sizes/industries/goals have different requirements. Let's all move forward with this discussion and acknowledge that we can't all use the same process, because we're not all hiring for the same type of job. If the engineers you hire consider for-loops a "puzzle", that's totally fine, and OP using one doesn't invalidate your process for your multi-national or startup companies.

> I can say that I've never regretted a hire I said yes to

The real question is whether anyone else has regretted that hire, which unfortunately can't be answered, as they may not tell you.

> To some of the younger generations, I've been told it sounds crazy.

Doesn't sound crazy at all. A single process guaranteed to work for everyone? Now that sounds crazy.


Another question that has not been answered sufficiently: would the guy you ended up not hiring actually have been a great hire? Everyone panics about / focuses on eliminating false positives, and nobody studies or investigates the false negatives.


I have no data to back this up, but my expectation is that this is mostly a result of two things:

1. It's often surprisingly difficult to fire an underperformer (anecdata: a friend of mine has lost multiple talented members of his team in the past few months, all the result of having to work with one completely unqualified person; for some reason, that person has still not been fired). As a result, a bad hire can have an outsized and lasting effect on a company.

2. Unless you're looking at very senior positions, there are almost always more people available to fill a role. You may miss 9 out of 10 people who would be effective in the role, but you only need to hire one person.

The math changes a bit when you look at the mythical 10x developer, so maybe it would be worth looking into false negatives specifically in that case. Still, getting that 10x dev would be much more important in a more senior position.


It’s also very subjective and hard to train for or to audit externally. When a company gets too big, you can’t properly vet hiring personally anymore, so you have to scale it.

You don’t want ‘bozo cliques’ to form, so you make a semi-objective process like ‘solve this algorithmic question’ as part of the interview loop. I think execs and the founders doing a final review before hiring all engineers comes from that fear.

Then other companies cargo cult interview processes from larger companies and the trend propagates.

If you want to ‘hack hiring’ as a smaller company, you should use hard to scale processes like the one described in the parent post.


Well, if you get too big to get things done well, maybe not get too big?


> Well, if you get too big to get things done well, maybe not get too big?

I love this idea but my old history professor (who I pretty much disagree with on everything except this) used to always say "follow the money". What are the incentives that all the parties have for and against growing companies bigger?


These questions should definitely be part of the interview process, but not all of it. I've done a lot of these kinds of interviews and I've definitely seen candidates that speak impressively but fail basic technical tests.

If you don't actually verify the technical problem-solving ability of the candidate in some way you're forgoing signal that can massively increase the confidence you can have in your decision.


I’ve failed technical tests because I found the interview process stressful, or felt nervous or uncomfortable in the moment, or was having a bad day. I recently did a round of interviewing for jobs and realized the real key for me was handling my emotions in these situations so I can bring the same approach I bring to my work to the interview. And it’s not the same as your day-to-day work, because if I’m writing an algorithm or solving a problem at work, finding the solution usually happens in my head rather than out loud. This difference is meaningful.

After a few bad interviews I got the hang of it and aced a couple algos interviews. I’ve worked as a developer for 8 years, have tons of software in production, have some open source contributions and have worked productively on several teams.

Interviewing is a skill. I understand why a company wouldn’t hire someone who doesn’t pass a programming test but failing a programming test doesn’t mean you can’t do the job.


Yeah, I consider the demand for people to talk out loud during algorithms interviews to be an anti-pattern. It is something I can do (albeit with some difficulty) and excel at interviews as a result but I know engineers who are great at algorithms and at normal design communication who fail interviews because the interviewer doesn’t like their frequent silence while thinking or coding.


> was having a bad day

Then ask for a reschedule or try to get it as the first thing in a day.

> I found the interview process stressful, or felt nervous or uncomfortable in the moment

You need to grow confidence. Walk into the interview like you own it. You are the best. Think: you know everything {only during the interview :-) }, but also keep an open mind. Don't think of it as an interview - treat it as if you're going to teach them about whatever they ask.

Irrespective of this - mistakes will happen. Fail -> learn -> try again.


Anecdotal but we've hired an engineer who was pretty decent in the interview but couldn't figure out how an if statement worked in a legacy codebase.

Yes, I wrote that right. He struggled to understand conditionals in general when building his own logic. The guy even had a master's degree.


So I had the same problem for a while, and it was quite absurd, because I got through my uni courses and could code decently in Haskell and some other things, but never figured out the appropriate place to put an if clause.

I then did a course on assembly programming, and having to be extremely structured and use jump statements actually helped me a lot in easier-to-use programming languages.

Point of this comment is that people might be good at a lot of things, while just having some weird brainfuse related to a very specific thing. That said, not using if statements makes programming rather difficult.


No, see, I wish he'd had any redeeming skill; he always asked an engineer to assist him with everything and couldn't figure out anything on his own. It was shocking that someone could complete the interview (a small problem with a solution that is essentially explained: make x do y) and completely fail at building their own solutions without guidance.


We had an intern once who struggled with initializing a new object at a root component's initializer in file A, passing it down to the initializer in a child component in file B (required adding to a dictionary object also in A), and extracting and passing it down again to a child-child component's initializer in file C (required adding a new argument to that function, either the param itself or a dictionary object to follow convention) and extracting it for use there. Later I realized the issue was the person's prior experience had been on small programs, mostly written entirely by themselves, and thus they were able to keep them entirely in their head. Since then all my technical interviews try to additionally answer "can you reason about and make changes to code you didn't write, whose entire body is big enough you can't fit it all in your head at once (at least in the amount of time we have)?". A few times I've used a variant of the exact problem I described (of what's ultimately passing data to some nested functions) using simplified code from our codebase. I use a different problem most of the time now, regardless of level, but it's still about modifying existing code and using existing objects rather than cooking up something from scratch.
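
For illustration, a minimal sketch of the kind of plumbing being described (the class and field names here are hypothetical, not the actual codebase):

    # Hypothetical sketch: a value created in the root ("file A") is added to a
    # config dict, threaded through a child ("file B"), and finally consumed by
    # a grandchild ("file C") whose initializer gained a new argument.
    class GrandChild:
        def __init__(self, label):
            self.label = label          # the new argument added in "file C"

        def render(self):
            return f"grandchild sees: {self.label}"

    class Child:
        def __init__(self, config):
            # "file B": extract from the dict and pass it further down
            self.grandchild = GrandChild(config["label"])

    class Root:
        def __init__(self):
            # "file A": create the value and add it to the config dict
            config = {"label": "hello"}
            self.child = Child(config)

    print(Root().child.grandchild.render())   # grandchild sees: hello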


Can you elaborate?

What did he do or say that made you think he could do the job, even though it looks like he wasn't able to work with other people's code?


Not OP, but there's a group of people who have essentially a good manager's understanding of a project. They understand the trade-offs, they can talk intelligently about technical choices and architecture, and to some degree can even talk about individual modules and classes and language choices.

But they blank at code and struggle with the basics. They get lost for hours at the most trivial bugs. It's weird, but it's very real. Real coding takes a certain kind of abstract thinking that some people just don't have.


I have (half?) jokingly said that hiring could be done by selecting people who try to right-click something when confronted with a Windows task they don't know how to do.


I’ve known many smart IT folks that say they don’t or won’t code more than a batch script.


My immediate reaction is that the problem with that approach is that it’s simply too expensive for the hiring company, in the same way that take-home programming challenges are too expensive for the applicants.

A company trying to hire engineers can easily give a take-home programming challenge to a dozen engineer applicants and take very little time analyzing the submissions. That’s pretty unfair. But it also feels a bit untenable for a senior engineer at a hiring company to have very deep investigations of each applicant’s work history.

Another problem is objectivity. If your company’s engineering hiring process relies heavily on a senior engineer’s subjective impression of an applicant, you’re going to have big problems with your own engineers’ biases, whether subconscious or not. Expect to hear a lot of evaluations like “well, the applicant did seem to have good knowledge and experience, but I just wasn’t impressed for some reason.”


> My immediate reaction is that the problem with that approach is that it’s simply too expensive for the hiring company

Is it? The cost of hiring the wrong person can be huge. Not just agency fees if they came through a recruiter (those aren't cheap!), but also all the time people then spend on the bad employee and all the damage that person does before the mistake is rectified.

If this system of reading the resumes and then spending a few hours of senior employee time on an interview can reliably find good employees, that's an absolute bargain.


> The cost of hiring the wrong person can be huge.

This just isn’t true. Every company has a “probation period” usually 3 months where either party can terminate the agreement. That’s more than sufficient to cover this risk.

You don’t pay the recruiter until probation is passed - everyone knows this.

This meme comes from Spolsky who somehow also convinced the world that the hottest programming talent was beating down his door to work on a bug tracking tool for project managers. Maybe his talent was in blogging, and not actually in hiring?


> This just isn't true.

Sure it is. People on our team need to take time to onboard the new hire. That's good and expected. The new hire will work slower and that's ok; they will need extra help, time to learn the codebases, etc. After a couple of months, a particular new hire did not work out. Time had to be taken to document reasons, meet with HR, meeting to talk about expectations, etc. In the end, the new hire is gone and so is the time the team spent helping, and the slowdowns on real deliverables, and the hit to team morale. It sucks when someone is let go. It was a net negative for our team and thus the whole org. Depending on a combination of level (leadership position?) and toxicity, negative impact on the team or company culture can occur. Bad hires can have a real cost.


This all matches my experience, but it's not exclusive to hiring low performers (which I assume is what "wrong" means here); it also applies to an unsuitable fit, poor management, failure to motivate a new hire (especially when hiring senior developers), etc.


A real cost that can be mitigated by breaking work into smaller chunks at first, and firing quickly.


> This just isn’t true.

I believe that to be an incorrect statement, based on my own experience. I have seen companies hire the wrong people and pay the price.

> Every company has a “probation period” usually 3 months where either party can terminate the agreement.

I believe that to be an incorrect statement; while I've always seen probationary periods, in this very set of threads "akelly" says that nobody has probation periods. This evidence suggests that "akelly" has never worked at a company with a probationary period (and has foolishly extended their own personal experience into the universal, but that's a separate mistake).

I would also suggest that there is an opportunity cost attached; if someone is fired at the end of their three or six month probation period, the good candidate that replaces them is six months behind.

Some companies end up with drifting dead wood employees; not fired, just moved around from team to team, department to department, because the company makes firing people harder (or more costly to the manager / team-lead) than moving them. This can go on for years. That seems quite a large cost. I've also seen managers simply sideline bad employees rather than dismissing them, for reasons of company politics and face. I've also seen bad hires get promoted, in cases where promoting someone is easier and less costly (to the relevant team lead or manager) than dismissing them; that can be really damaging.


>This just isn’t true. Every company has a “probation period” usually 3 months where either party can terminate the agreement. That’s more than sufficient to cover this risk.

Yeaaaah no. I've seen people nope out of a job within anywhere between 30 mins to a couple of months. I've also seen people that knew it wasn't right for them stay 1 to 2 years.


If it really were that expensive, the current state of things would not lean toward making hiring cheap. I think for an entry-level employee, or just a regular employee, the cost of a bad hire is not that high. If turnover in those positions is high, then it's also cheaper just to fill the seat.

I have seen not-that-brilliant employees who were still earning the company money. You know, "we hire only the best of the best" is a scam.

Hiring a bad manager or a senior dev can be bad, I agree. So it's not great to use the same hiring process for seniors and juniors, but unfortunately companies treat senior devs like juniors too.


> This is just how interviewing was done in the 90s

I agree with your approach and use it myself, but one thing is different now, and that’s the proliferation of tiny skills. Back then you would have a few big skills: you would claim to know one or two main languages, one or two databases, and so on. Now people list hundreds - literally hundreds - of skills sometimes. And there’s no way to tell on reading whether they really know something, or just saw it once and maybe did a tutorial or “hello world”. People now will add a skill to their CV if they’ve done it for an hour total in their entire lives! Or read a blog post about it. It is a massive time sink to pick through that.


If we're talking about the same kind of thing, I don't consider those to be skills. For example, I've seen resumes which list every JavaScript library they've ever used, or every UNIX command they know how to run, or similar minutiae.

This tells me the person doesn't know the difference between skills and implementation details, so I don't need to proceed to an interview.


I'd argue that this is a resume over-indexed on getting past recruiting filters looking for specific JavaScript libraries or UNIX commands. Even the recruiter's action may not necessarily be bad - startups with a settled tech stack might decide they aren't able to provide the time for a new hire to ramp up.

Effectively, you penalise candidates for not tailoring their resume to what you are looking for or deem important.


Does anybody even know anything about these resume filters? They seem legendary, or I'd think there would be a best practice by now.


The trick is there are multiple resume filters.

One will filter you out for having too many keywords. One will filter you out for not having enough keywords. One will filter you out for including a picture. One will filter you out for not including a picture. One will ... ad infinitum

Your resume is being evaluated by different filters constantly. Different filters filter different things. There is no "right" answer. There are only answers you will or won't be filtered out for depending on which filter is being applied.

Which filter is being applied?

It's random.

Hiring is essentially random.


Various vendors of software in that space claim they are best practice, e.g. IBM: https://www.ibm.com/talent-management/hr-topic-hub/applicant..., https://appsource.microsoft.com/en-us/product/web-apps/cem_b... promotes keyword matching as a feature, ...


Oh sure, I know they exist, but on the resume side there's no common knowledge of anything besides keyword stuffing. I'm wondering if that's actually the best way, given how the filters actually work (which I don't know).


Algorithmic HR has ruined any chance you won't see an SEO centric resume ever again.

"I need my resume to show up in your filter, relevance to the actual job functions be damned."


As a very fresh junior developer, crafting a CV is extremely exhausting, because it is very hard to gauge what you can put in. Take git as an example: I used it for a couple of personal programs, read part of the documentation, had some errors and managed to get rid of them.

I know the theory of how to use it on large projects. I even know enough to know that I'm pretty much just scratching the surface, but so are probably most other people.

Same with most other skills. At how many lines of code can I call myself proficient in a language? And what if I copied large parts of code from stackoverflow?

All in all I think I spent something like 8 hours writing and formatting that freaking thing, and it still feels like some of my skills are a stretch.


Skills or tools are keyword dumps, use them as such (which in later revisions may include removing them or moving some to be contextualized in job descriptions). My own rule is to only list things I think I could adequately answer a bunch of random questions on; sometimes you might only realize you couldn't when you meet a line of questioning that totally defeats you. I don't put a proficiency level, if it's present then I'm proficient if perhaps also rusty. (Stating proficiency levels has a tendency to bring the knives out, "advanced" or "expert" begs to be challenged, and something like "mastery in C++" -- is your name Bjarne?)

> At how many lines of code can I call myself proficient in a language?

If you want a target to shoot for, I'll give you one, but don't really follow it... LoC is kind of meaningless on its own, in part because it's contextual on many things (you highlighted one context, lines copied; I'll use a different context). 1k lines if paid for them, 6k-10k if not. This doesn't include lines added and then deleted (i.e. if your only project in FooLang weighs 300 lines, it's 300 lines, even if making the project involved adding 100 lines, deleting 50, adding 500 lines, deleting 100, editing 300, deleting ...). You need projects whose sum mass is 1k. Probably better to have at least one 1k+ project too, especially if it's unpaid. But again, don't follow this, it's just a somewhat random target to answer the question. Better to think about what this could be a proxy for, and target that instead.


I've been around the block, and I hope I can give you some perspective from both sides.

I'm glad you brought up git. Source control is central to modern software development. If you don't have it on your resume, someone might think "hmm, have they really done much programming?" But if you do put it, it will be obvious from your resume that you are just out of school, so no one is going to expect you to teach a git class.

As a more experienced person, I won't expect you to know the internals of git.

Non-technical people screening your resume aren't going to know the difference between you and Linus, though, so you need to put it. Don't worry, it's not false advertising.

If the job description is back-end tooling at Github or Gitlab, I'll expect more. Similar to the difference between knowing how to use a spreadsheet, and having written a spreadsheet.

Now, if it's something crucial to the job, say your C skills for writing software for micro-controllers at an embedded electronics company (like what Nest or Pebble used to be), you can't fudge. Nothing will help but lots of practice.

Good luck!

P.S. For your first job, focus more on figuring out what you want from your working life. What kind of manager, what kind of work, etc. Yeah, you'll need to pick up some buzzwords to add to your resume, but it's not as important as becoming good at the fundamentals of what you are doing.


You should be able to talk fluently for at least 1 minute about everything you list as a skill. You should have enough material to be able to talk about your CV as a whole for 45 minutes. Not a cast iron rule but this will see you through most interviews. Good luck!


Thank you, that is an interesting way to look at it, and makes a whole lot of sense! I guess it should be amended by "meaningfully talk about", but I get that from context.

To be honest, the process of writing a CV is mostly painful for me because I have to evaluate myself and feel quite inadequate. At the same time, I have multiple companies wanting me despite not being very good, so my actual chances are really good. Software development in Germany is going crazy at the moment.


> At how many lines of code can I call myself proficient in a language?

I would say don't use adjectives to describe skill level. I put most languages I have experience with on my resume, and order them descending by how well I know them. I know the stuff at the top pretty well, and it's downhill from there. In interviews I'm very upfront that I'm incredibly rusty with the stuff on the bottom of the list, and interviewers understand and accept that.

The exception is if you feel comfortable putting "advanced" or "expert" on one of those, but then you had better be able to back it up.

LOC is a bad measure of proficiency. I would say you're proficient if you can write a simple program without looking up syntax, relying only on autocomplete for library references.


I just use a table with experience level columns like e.g. "Proficient", "Advanced", "Hobbyist". But use categories that make sense to _you_ and your skillset.


This works for non-technical positions too. Generally it requires an experienced interviewer to listen to the candidate, think on their feet, be fully engaged, and basically do the opposite of the one-size-fits-all coding or behavioral interview.

It takes a lot more effort on the part of the interviewer and is "harder to scale", in that you can't just train people to ask canned questions. But since it's open ended, it's a lot better at finding out what the candidate is really good at, and it's very useful to have one of these in every hiring loop, usually by a senior team member or exec.


>It is, however, vital that the interviewer be an expert in the field.

This is key. There are a lot of hiring managers masquerading as experts who get frustrated when cargo-culted hiring processes falter, and who lack the people skills to diagnose the situation. You'd also be amazed at the quality of resumes a high, advertised salary will bring.


> You'd also be amazed at the quality of resumes a high, advertised salary will bring.

This is so true. The best devs are making above market rate and are busy and content with their current role. If you don't include a salary, most of these people will simply ignore you, because of the time risk involved in finding out the number. Non-disclosure only works if your salary ranges are generally known (e.g. FAANG).


These are the kinds of interviews I love, and I appreciate that they're still given at some places. I agree, it's nearly impossible to bullshit your way through this kind of a conversational interview but people still think they need to ask trivia questions.


Been doing it the same way for over 20 years as well, and I've only had one bad hire, and that had to do with drugs and attitude more than skill set. He was a good developer and was sober when I hired him, but fell off the wagon and started missing work, no-showing, etc. I fully concur that if you can't spot a competent developer from a conversation, you probably should not be in the position of evaluating potential developers.

I don't disagree that there are a lot of fake-it-till-you-make-it developers who did a vo-tech class and are applying for jobs out of their league, but I just don't know how people don't spot them. It takes me about 5 phone calls to find a competent individual, and then I bring them in for an in-person. As a hiring manager, I don't find it cumbersome to weed through 5-10 people to find a good hire.


> I can say that I've never regretted a hire I said yes to, so the method works to my satisfaction.

You think this is a claim about how good your hiring practice is, but the only way I can think to read this is as a point about how little hiring you do or how little evaluation of hires you do. In the real world, perfection isn't possible, so claims of it are a sign of inexperience or naivete.


You're conflating perfection with satisfaction. Entirely different things.


Thank you for your refreshing point of view. I'm a mid-level software developer; I was on the other side of the table as a junior web developer, encountered this same style of questioning, and embraced it. This is my go-to style when interviewing, because if you are confident in the lingo and zeitgeist of development, there is no way they can bullshit their way through and lie through their teeth about their own experiences. It is very easy to tell. Whiteboarding doesn't give you that kind of detection, since it's too broad a spectrum to start with as an indicator of experience.


Exactly. If you were really working with the stuff you say you do, you will know all the common pain points, workarounds, etc. Unless the interviewer is alien to those things themselves, there can be instant "one of us" recognition.


> I can say that I've never regretted a hire I said yes to, so the method works to my satisfaction.

This may just mean that you say a lot of wrong “no”s. Getting very high precision or very high recall is really easy... what you must measure is your F-score.


The regret metric is different for a false "no" compared to a false "yes." I'd rather say "no" to someone who could've been great than "yes" to someone who wasn't, so I would say that it isn't that important.


Depends a lot on the company and situation (how easy is it to fire a bad hire? Does the company do it?) - but you can use something like F2 or F0.5 where relevant.

Remember Facebook rejected Brian Acton, and paid billions for that. An oversimplification, I agree, but the larger point is that people tend to dismiss too easily the impact of bad “no hire” decisions, compared to the bad “hire” decisions. Just because you can’t or don’t measure it, doesn’t mean that the impact may not be arbitrarily large.


This comment boils down precisely what I’ve always thought is the key problem in tech hiring. Companies overestimate the cost of a bad hire and underestimate the benefit of taking a risk and having it pay off.

To borrow from poker: it’s commonly understood that if you’re not “caught bluffing” at least a little, it means you’re not bluffing enough. If your hiring process results in zero bad hires, you are playing it way too safe and I guarantee you are missing out on phenomenal candidates.


I think you're wrong. I think companies are completely correct in assessing the cost of a bad hire to be much greater than the loss of a good hire, assuming you're getting enough good hires to fill the positions available.

Now, this assumes that you're not missing out on great hires; in other words, it assumes that the good hires you miss are at the bottom of your hiring range. Your point might be that this can't be guaranteed, and while that may be true, it's unlikely that any tweaks to a given hiring process are going to bring them in either.


I'm an engineer, and my contract is very explicit about which parts of my work I'm allowed to speak about. The answer to all your questions will be "I can't speak about it." That's precisely why Google interviewers always ask abstract puzzle questions and avoid like the plague any possibility of being exposed to protected IP.


There are definitely ways to talk about your work experience in enough detail without breaking NDAs.

I would be honest and say "I'm under NDA for a lot of that but I'll do my best". If neither of you are jerks, you should be able to manage it.


> I can say that I've never regretted a hire I said yes to

Have you let new hires go in the first 90 days? That's just one of the ways personal regret is probably not the best metric here.


If you haven't hired any people that you (ultimately) regretted, I wonder if you're being too conservative in your hiring? Sure, you don't fail, but are you ever surprised when a marginal, borderline, outside-the-box case turns out to be a 100x hire?


>If the person described in this resume fits the background experience the role needs, you get an interview.

This is one way the current status quo might be better than the past: you don't get pigeonholed so much by your past experience into being a "fit" only for similar roles. Sometimes the hiring manager is really looking for a specialist, but in general, we don't care what industry you were in or what tools you were using, as long as you can prove you're smart. Some of the most impressive people we have working on Go microservices were enterprise C# developers before.


I think fizz-buzz is still an appropriate screen.

While it may be easy to hire devs who have been on the market for 10 years, you still have to keep in mind that new developers make up a very large proportion of the dev population.
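
For reference, a minimal FizzBuzz of the kind such a screen asks for (any reasonable variant would pass):

    # Print 1..100, replacing multiples of 3 with "Fizz", multiples of 5 with
    # "Buzz", and multiples of both with "FizzBuzz".
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)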


This form of interviewing requires a certain degree of skill and lots of experience to keep the conversation objective and on track. I don't have that. I rely on a base technical question based on the candidate's past projects and then go on from there.


This is how most of my interviews have been, and how I wish (however unlikely) my future ones will be. Unfortunately I rarely get to the interview stage of things, as my resume is filtered out.


I absolutely agree, as an interviewee that's what I focus on, and you're right you can catch it if someone is bullshitting.


I wonder if there is a list of companies doing interviews this way.


I use the same technique (and I'm 26 now, so relatively young) and it's how I've always interviewed others. Granted, I've only worked at smallish startups, but, either way, the company didn't care about my technique for evaluation, and just my final "yes", "no," or "maybe." So, I think it depends on the engineer.

If someone can keep up in a technical conversation about their background with me and answer every question I have about a technical project they did, then basically they pass. It works especially well even if I'm not familiar with their project, because I have an opportunity to learn, so I can ask any question that comes to mind until they teach me what they learned.

I did hire someone that I regretted, though, but to be fair, this was among my first interviews. The mistake I made was getting too easily caught up talking about programming and technical things without specifically diving deep into his past projects. He and I vibed quickly and I liked him, and that felt like enough, but after only a week it seemed obvious that he wasn't going to be producing much code, and we let him go. Otherwise, I've been happy, and my ability to discern has only gotten better as I've become more experienced.

I got a little off topic, but my main answer to your question was: if the company leaves the decision up to a majority of engineers saying "yes", then a lot of companies do this. Google does this, the startups I've worked at do this, and some of my friends' companies do this.


Not exactly that, but you might like to peruse https://github.com/poteto/hiring-without-whiteboards

Reader beware, of course, I know at least one company on there shouldn't be.


Oh, I see, they use the term "whiteboard" as a synonym for leetcode-like riddles.


Personally I can't see an issue with very simple FizzBuzz-style programming interview questions. I used to ask a simple "count duplicate substrings" question [1]. Maybe some people consider this too hard? I never used to require exact syntax, and would have been happy with pseudo-code. Using libraries is fine, etc.

I also found very few people could solve this (similar non-SF large city location). Occasionally, people who could not solve this were hired for other teams. Based on their performance, I don't think I would have been comfortable working with them.

I don't think it's unreasonable. But I'm not sure I'd use it as a screen if I was hiring now. I think I'd just have a chat and try and discuss a previous project. After that I'd move to a paid take home project (ideally representing real, useful work).

[1] Take a string, for example "ABCCABC" and count the number of times each 3 character substring occurs. In this case the answer would be 2xABC 1xBCC 1xCCA 1xCAB.
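
For illustration, one possible shape of an answer in Python (the interviewer above says pseudo-code at roughly this level would have been fine):

    # Slide a 3-character window over the string and tally each substring in a
    # dict (a hash table), which is O(n) overall.
    def count_substrings(s, k=3):
        counts = {}
        for i in range(len(s) - k + 1):
            sub = s[i:i + k]
            counts[sub] = counts.get(sub, 0) + 1
        return counts

    print(count_substrings("ABCCABC"))
    # {'ABC': 2, 'BCC': 1, 'CCA': 1, 'CAB': 1}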


I never had any technical interview prep when I started. I could code just fine. I demonstrated that at my other jobs and in school well enough. Then I started interviewing for SF positions that did these interviews.

I froze up. I couldn't talk and think at the same time. I didn't have the skillset for doing this in a very intense scenario. In my case, I was homeless and needed a job ASAP. Every interview felt like life or death to me.

First one I was asked to reverse a string in C. I hadn't done C in a few months. I froze up on syntax. I looked like an idiot who couldn't do it.

I could imagine many people who have never experienced this format (or haven't experienced it much) would easily freak out and look stupid as bricks like I did.

I've since done over 200 technical interviews (as the interviewee) and I usually sweep. I still fail at FAANG, but I always get the solutions (even the leetcode hard ones). Just not sure why I fail, but c'est la vie.


Whenever I ask coding problems during an interview, I emphasize any language at all including pseudocode. And if they still struggle, I ask if they could just walk me through it verbally and we can work out the syntax together.

I realize people can still freeze up, but at some point I think there's just no solution to that unless the candidate can produce a significant portfolio. Also, I read your post, and it sounds to me like you gained confidence in this area as your coding skills improved, which I don't think is as much of a coincidence as you seem to believe.


This is a really interesting observation, thank you.

Is there anything they could have done differently to make the situation easier? (I would guess perhaps giving you more notice about language requirements?). Or is there a different interview format that you feel would have worked better?


Not really, innovative thought can’t be done with a gun to your head. Perhaps if you’d written this code multiple times over the years and had it memorized. But, that won’t help with random problem #2.

Stick to the basics and a trial period.


Is reversing a string not already "the basics"? What are you suggesting should be covered in an interview/screen prior to a trial period?


Your question was not reversing a string. You also mentioned fizzbuzz.


It’s the problem under discussion in the post you are replying to.


I was referring to your [1] in this thread.


Of course some people will consider that question to be too hard. But you may not want to hire those people. The point of an interview process is for you to pick the people you want to hire, not to ask questions that everyone agrees aren’t too hard.


I'm curious to hear the opinions of people who think that the question is too hard (i.e. it's too complex of a question to ask in an interview situation).

Perhaps some people might feel that while they could write code to solve this problem (or similar problems) outside of an interview, under the pressure of an interview they would not be able to (due to anxiety, stress, etc.).

If that were relatively common, then such a question would not be a useful interview question, or would at least not give an accurate estimate of a candidate's programming ability. This seems unlikely to me, but I would be interested in hearing different views.


The fizzbuzz test is not entirely about whether someone can do it. It's also about figuring out the style of the programmer. There's a surprising amount of flexibility to fizzbuzz; how is user input handled? Do they use a linter? How are variables managed/named? There's a lot going on, and it can be gamed just like anything else on both ends.

That said, the last time I had a fizzbuzz take-home test, it was infuriating -- my code is online... what does fizzbuzz prove, except to help make a company SOC 2 compliant?


No, a fizzbuzz test is entirely about whether somebody can do it. That's the definition of a fizzbuzz test. When you hear others talking about a fizzbuzz test, they're not talking about the specific algorithm, they're specifically talking about a simple single-function problem that should take no more than 5 or 10 minutes.

If you're dealing with user input or linting, then that's not a fizzbuzz test. I'm not doubting that maybe somebody asked you to do something larger than that including the fizzbuzz problem, I'm just saying that nobody else here is talking about take-home problems when we talk about fizzbuzz.


I guess it depends how you use it. I started hearing about FizzBuzz in the context of "why can't programmers program" [1]. I.e. that many interview candidates can't solve this problem.

It's also my experience interviewing programmers (outside SF) that most candidates can't solve simple FizzBuzz style questions. As a take-home test, I agree I can't really see the value in it.

[1] https://blog.codinghorror.com/why-cant-programmers-program/


Back when I did phone screens, I went even simpler: Write a program that counts from one to ten, printing each number out followed by your name followed by a newline. Do not print the line for number 4.

It’s astounding how many programming candidates can’t work their way through this.
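
A passing answer is only a few lines in any language; for example, in Python (the name is a placeholder):

    # Count 1..10, print each number followed by a name, skip the line for 4.
    for i in range(1, 11):
        if i != 4:
            print(f"{i} Ada")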


Also astounding is the number of programmers who program over the phone. Approximately zero.


A nested for loop is certainly not hard... however... they describe the problem as something "we thought was a trivial nested for-loop problem", which suggests that maybe it wasn't. If the problem was presented in a more abstract fashion, such as "print all filenames 'within' a directory", then a candidate may have stalled on which approach to take.

Also, someone who has only ever written code using lambda expressions might have difficulty writing a nested for loop.
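
If the problem really was something like the directory example, one literal nested-for-loop reading of it might be (a sketch, one level deep only; os.walk would be the idiomatic recursive version):

    import os

    # Print all filenames "within" a directory: for each subdirectory, loop
    # over the files inside it (only one level deep in this sketch).
    def print_filenames(root):
        for entry in os.listdir(root):
            path = os.path.join(root, entry)
            if os.path.isdir(path):
                for name in os.listdir(path):
                    print(os.path.join(entry, name))
            else:
                print(entry)

    print_filenames(".")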


Your problem is actually a good illustration of the problems with whiteboard tests.

I just tried it, and it was trivial to do in a minute or two on my laptop. However, I did take note of two syntax mistakes that I made in the Python REPL that were immediately obvious there and took seconds to fix, but which I most likely would not have noticed on a whiteboard.

So there's quite a bunch of problems where if the acceptable "format" of the answer is "hey, I'll just pull out my laptop from the backpack and push the solution to github in 15 minutes" then it'd be okay, but it'd be hard to do it while 'whiteboarding' without access to immediate feedback and easily accessible API documentation.

For example, I work in many languages, and for many APIs I can't remember whether in this particular language the same thing is called add or append or something else. E.g. I've worked on Java code for a dozen years and would be a quite productive Java developer, but since I haven't written any new Java code for quite some time, I can't remember off the top of my head the right boilerplate to open a text file for reading in Java - it's something a Java tutorial might have in the first pages right after Hello World, but I'd still have to look up the incantation to pass the encoding properly - there are, like, three Reader classes to instantiate and I don't recall their names off the top of my head.


Nobody is punishing trivial syntax errors in whiteboard interviews. Nobody is taking your code and checking it runs.

It’s about your approach to problem solving and your ability to communicate that.


Is this really true though? I mean, I've been in development long enough to know that people go just plumb ape shit over the silliest things.


A CS professor at Stanford who hired at Google said they _absolutely_ care whether your code compiles and works correctly. This is the “coding fluency” [1] metric. The author of Cracking the Coding Interview also touches on this.

1. https://www.programmercoach.com/2017/04/interview-insider-co...


Do you think you could rough out an answer on a whiteboard that would have roughly the right structure?

When I used this question I wasn't looking for accurate syntax. If it was a solution that looked like it would work, after some debugging etc. I'd consider that a pass.

Regardless most people couldn't answer it, which I considered surprising.


Sure, the structure is obvious; one was an issue of a method being named differently than I remembered, the other was an off-by-one error that'd show up in testing.


So, it wasn’t obvious to many candidates I interviewed. For example, one candidate tried to create a solution involving 3 nested for loops.


Assuming these candidates already made it past an initial filter, it would be interesting to ask the filtered-out applicants the same question and see how well they do.


I was also able to solve it in a couple/few minutes after seeing the comment. I did have a couple of issues - the index was off by one, and my brute-force nested loop didn't catch the case where I had already seen the substring - but otherwise it works (Python). Edit: I also did it without a nested loop, using a dictionary. Why I didn't think of this initially, I don't know.

I've also had a couple of FAANG onsites and get nervous as can be, but I feel that's just a practice-more-and-be-more-confident-in-my-abilities problem. No offers yet, though. Still trying and practicing.


If I were with you I'd not necessarily write anything.

I'd put out my fingers 3 characters wide at the first substring and step through the rest of it in 3-char 'spans' with my fingers. I'd say "extract each substring, put it into a map with that substring as the key, and either 1 as the value if it wasn't already present, or if it was, increment the value".

I'd probably not bother mentioning how to extract the results unless asked.

If someone said that to me it would show the solution, and I'd be 100% happy.

I'd assume that they could then render that into code - perhaps that would be a mistaken assumption, but in my experience solving the core problem is the thing I'm interested in, not the syntax. Those with more experience may say the former doesn't always imply the latter, and the code needs to be shown.


The translation to code is important. I have seen people who can describe a solution like that but when they come to write it as code struggle with managing simple mutability/immutability, scope, iteration, and conditional execution concerns. Better basic ability tests include a problem that requires an explicit nested iteration, because some people seem to struggle juggling two loop contexts in their head at once.

What I struggle to understand is why this sort of thing still serves as a useful weed-out screen in our industry, even for people whose resumes indicate masters degrees and multiple years of experience in development organizations. But experience shows that there really are people who are ‘faking it’.


That beats fizzbuzz for me - I think I shall steal it. It's simple enough that it can be done in five minutes, but it has enough edges that people could go down the wrong rabbit hole.


Am I misunderstanding your problem, or would this be a valid solution: https://pastebin.com/GWKr4AQ8

If so, then the company you're interviewing for must not be attractive to first-class CS grads (top 10-20% I think) from any non-online university.

As a comparison, any FAANG + Palantir/Jane Street/Two Sigma in the UK have tougher questions as their FIRST phone interview for INTERNSHIPS. (Palantir requires you to go through 6-8? interviews before getting an offer)

If this is a job that requires actual software engineers, I think rejecting everyone that failed this question would be perfectly reasonable.


Yes, that looks like a valid solution to me.

If you’re saying only the top 10-20% of CS grads can answer such questions then that would make sense. I’d guess I was getting a random selection of candidates.


> you’re saying only the top 10-20% of CS grad

I'm saying that ALL top 10-20% of CS grads will be able to answer questions at this difficulty level in an interview scenario (accounting for nerves and so on). I'm not limiting it to them (i.e. the "only" quantifier).

I know people at the bottom of the CS grad pool (judging by GPA), who worked remote jobs or on personal projects during uni, who are very talented and would be able to solve this in less than five minutes. But there are also people who merely "got by".

> I’d guess I was getting a random selection of candidates.

I stand by my opinion that if you were recruiting for a software eng role (and not some random front-end position), then blacklisting anyone who failed this problem would be reasonable.


+1. Anyone who is a programmer can solve that problem. If they can't solve it, they aren't a programmer, end of story.


So the weird thing is, that people who can’t solve these kinds of fizzbuzz type problems are often employed as programmers.

And, in my experience there are definitely situations where the majority of candidates can’t solve these problems.


Right! I can think of two explanations for this.

One, these non-programmers are somehow not actually doing their jobs, but getting away with it. I've worked with people like that, but only a tiny number. Maybe I've been lucky.

Two, you don't have to be a programmer to be a developer. In this day and age, our tools, frameworks, and resources (i.e. Stack Overflow) are developed enough that you can actually get some useful things done without having to think mechanically. Knowing some obscure boilerplate (e.g. Spring annotations), how to use your IDE's autocomplete, and where to go for copy-and-pasteable code for various kinds of problems actually makes you a useful team member! Even if you can't write a simple loop to save your life!


Good to know, I would tend to agree. I’m interested in any contrary opinions (because I may not sufficiently account for anxiety, or other issues).


It's not an unreasonable problem, but one way to smooth it out could be to provide a list of "potentially helpful" string functions relevant to the variety of ways one might solve this problem. Not all string libraries are equal among programming languages, and recognition is easier than recall. I guess it really depends what you want to measure with the problem, though.

I use a problem that at the end of the day requires them to output an edge (start point, end point) that is used to feed a line-drawing routine. I provide the standard junior-high formulas for euclidean distance, mid-point, slope, inverse slope, and two ways to represent a line, since I'm not testing for remembering those, but whether a subset of them can be used to solve the problem. I've still run into candidates who seemingly had never modeled a line mathematically before, and found myself hastily trying to explain how one would do so, like I were giving a junior-high student their first lecture in pre-algebra. Needless to say, they weren't able to then program a solution, or anything at all really... They would have been filtered out by your problem in less time.

I think your problem could also be made easier, to filter out the same people faster, but from my standpoint I'm usually mandated by policy to spend an hour with the person, so I try to make my problem answer a bit more than just the basic "can you code at all?"

Do you get candidates who ever ask "can I assume the strings are ASCII?" The usual solutions to this will break in amusing ways if you allow arbitrary Unicode. I know some interviewers who actually have "candidate asked about input encoding" as a hidden scoring criterion, and a lack of asking about that is a fail even if they correctly solve the problem for ASCII.

I myself disagree with using such hidden criteria -- if I'm going to score something and not tell the candidate exactly what I'm scoring, it's at least going to be something in the code, like "correctly avoided the divide-by-zero case without me pointing it out", and not an expectation that the candidate read my mind about things I haven't told them. (I do tell them to try to write code without errors (I help fix up basic syntax quirks, or if I spot a typo I'll point it out), or ask how confident in their code they are and whether they have any edge cases in mind to try -- I'd like to get a candidate who actually writes a unit test on their own; I always point out that junit is set up...)
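
For reference, the "junior high formulas" mentioned above, roughly as they might appear on such a handout (the actual line-drawing problem isn't reproduced here):

    import math

    # Standard helpers for points p and q given as (x, y) tuples.
    def distance(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])    # euclidean distance

    def midpoint(p, q):
        return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

    def slope(p, q):
        return (q[1] - p[1]) / (q[0] - p[0])           # undefined for vertical lines

    # Two common ways to represent a line through p with slope m:
    #   point-slope:      y - p[1] = m * (x - p[0])
    #   slope-intercept:  y = m * x + b, where b = p[1] - m * p[0]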


That seems like a nice, easy puzzle someone should be able to do on a whiteboard, but I think it would work better on a computer -- my first iteration of this included a bug, as well as some print statements intended to help detect and diagnose bugs. I could carefully think through the problem to ensure I get all the edge cases/bugs, but I do better playing with it interactively. (In this case, my first iteration included 'BC' and 'C' as substrings, each occurring once.)

You see my process better if you watch me write and correct this program on a computer. If my task is to think up a fully correct solution without iteratively trying incorrect solutions, you're missing an important part of my process, even with a trivial FizzBuzz. (Though with this one you could probably just act as the interpreter and point out the errors for me.)


>Take a string, for example "ABCCABC" and count the number of times each 3 character substring occurs. In this case the answer would be 2xABC 1xBCC 1xCCA 1xCAB.

Can be done in O(n) time with a hash table.


> how you think it should be done. Because I have to admit I'm pretty much lost right now.

- Treat recruiting in the same way as you do software development.

- Formulate a set of requirements.

- Define interview questions that give insight into whether or not the candidate meets those requirements. This is the equivalent of "tests" in the software process.

- Specific skills with your technology stack are good, but not necessarily essential.

- Ability to discuss sophisticated software concepts, and to explain software that they have built, and how they would build out ideas given to them is good.

- Evidence that this person gets stuff done is good (ref Joel Spolsky).

Coding tests are, for the most part, garbage. Not because the test is of no value, but because you, the employer, probably don't evaluate the result properly.


While I can't say anything from the prospect of an interviewer, as an interviewee I am a big advocate for this style. Give someone a set of requirements, and then talk through how they would solve it. Allow for further clarification, and just talk about tech. It's a little more unstructured and less formal, but surely you'll very quickly pick up what experience they have, whether or not they've made past solutions that'll fit, what new technology they'd like to use, what technology they'd end up actually using, and there's no set right or wrong answer. The interviewer might even end up learning about something they themselves weren't aware of.

If someone could blag that while not being able to even write FizzBuzz, all I can say is well played to them.


1) Resume screen.

2) 30-minute coderpad/codeshare exercise, over the phone, on a problem/pattern you actually use/encounter during the course of your work (no inverting binary trees). Expect a 20% pass rate here.

3) Reasonable take-home problem that you've timed 2 of your own staff completing well in 50 minutes. This is where you will get the most complaints from applicants, but that's OK. Let them select themselves out of the process. Expect a 30-40% pass rate here.

4) In-person interview. At this point, you should be mostly committed to hiring the candidate. Do a couple of livecoding exercises, but be extremely lenient in how you interpret results. The other week we had a candidate fail the problem, but they kept their composure and showed they knew what they were doing on the way to bombing it. The candidate seemed sad at the end of the interview and happily surprised when we extended an offer.

IMHO this process works fairly well and does a good job of being economical with people's time.


I'm curious about the codepad/codeshare approach right at the first touch point. I fully agree that screening actual tech skill early on is important. Do you not find that you commit a lot of engineer time to codeshare interviews that don't work out further down the line?


Codeshare portion is 1 (typically senior) engineer for 30 minutes. Not that bad on a large enough team with everyone taking turns. I should add that we have 3-4 "canned" questions that the HR rep writes down the answer to, so even in the "resume screen" there are a few super-lightweight technical questions.


Interesting. I've got a feeling early stages of tech hiring pipelines are very poorly optimized at the minute; it's something I'm working on. Individual experiences help; thank you.


Another idea I've heard about but never got motivated enough to set up is a system wherein the applicant submits their resume via an api with some kind of trivial challenge... like "here is an id and an integer, multiply it by 2 and submit it with the id and your resume as a json with these fields or whatever." Might try it some day, but that's another idea to get some of the technical probe out early.
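
A rough Python sketch of what the applicant's side might look like (the endpoint, field names, and the doubling rule are all just the hypothetical example from above, not a real API):

    import json
    import urllib.request

    # Hypothetical challenge the careers endpoint hands out with the job posting.
    challenge = {"id": "abc123", "value": 21}

    payload = {
        "id": challenge["id"],
        "answer": challenge["value"] * 2,  # "multiply it by 2"
        "resume_url": "https://example.com/resume.pdf",
    }

    req = urllib.request.Request(
        "https://careers.example.com/api/apply",  # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)  # would actually submit the application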


There's a company, I think it's Mythic Beasts in the UK, that has or had a job application page with something along the lines of (before you got to the form) 'first prove you're not a robot: do this simple maths...' but every time you submitted your answer by typing it in and clicking on the page, you got an error page saying 'too slow'. I wasn't looking for a job at the time but that has to easily be my favourite application process ever!


> Is this kind of code problem too complicated in your opinion? For all I join in when complaining about irrelevant algorithmic

No. It's just that programming is actually not easy, and a lot of people apply for jobs they can't do.

Plus, a nonzero number of people freeze up in any interview situation.

I once failed an interview loop because I forgot how bucket sort works.


Also 95% of job descriptions list skills candidates will never use and screen candidates with problems they will never encounter.

At the final interview to join the SRE team at Google I was asked to implement the kNN algorithm. I barfed at implementing a kD-tree after regurgitating the brute force solution.
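
(For context, the brute-force kNN mentioned above really is only a few lines. A sketch, assuming points are coordinate tuples and plain Euclidean distance:)

    import math

    def knn_brute_force(points, query, k):
        # Sort all points by distance to the query and keep the k closest: O(n log n).
        return sorted(points, key=lambda p: math.dist(p, query))[:k]

    # knn_brute_force([(0, 0), (1, 1), (5, 5)], (0.9, 1.2), k=2) -> [(1, 1), (0, 0)]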

Has any SRE ever had to implement a kD-tree in < 20 minutes or Google would go down?

I asked the interviewer at the end. They had never implemented one on the job.

As long as companies insist on these inane rituals I think it’s fair game to optimize for it as an interviewee.

It’s stupid but what else can you do?


The thing that really bugs me is that while Google has a reputation for asking these optimization questions, when it comes to programming artifacts one can inspect client side:

- Gmail loads slower than Eudora did on dial-up

- Chrome takes so much memory it’s basically a meme now

- The Google homepage (a text bar on a white background) is several hundred KB.

So what I want to know is, if they hire so many algorithmic whiz kids, where the fuck are they hiding?


The (free) Google desktop suite has so many visible fails, it constantly surprises me. Gmail search gives a superset of results. Contacts editing is so slow, it must be doing something artificial behind the scenes. Contacts and Calendar entries rescroll back to the beginning whenever you change context. Even Gmail Labels are sorted differently between desktop and mobile, and navigating within a Label contents can lead you to a dead end that makes you reload Gmail.com from scratch.


Very few at Google work on client-side stuff; the most sought-after positions are many layers down the stack, like machine learning, distributed infrastructure, programming languages or operating systems.


Right, when you phrase things like that it does sound silly, but that's not what the Google homepage is, nor is a literal webapp comparable to a desktop app in terms of load time (?!?).

Chromium at least is open source. Make it use less memory.


IMO, Google and similar companies require you to spend a lot of time preparing and training on stuff you won't need in order to select for:

a. hard workers, who will just do what is required without fussing

b. people who really want to work there, either because of the fantastic comp, or because they drank the Kool-Aid, or because they genuinely want to work on specific stuff that nobody else except the tech giants does.

I agree that it is basically hazing, and has similar purposes - it discourages people who are not hard workers, or who just don't care that much about working at a FAANG, from applying. As long as they still get tons of candidates, it's a great initial filter.


They could be looking for those who participate in competitive programming contests.


I've been through a few different types of interviews and the random algorithm style seems off to me. I recognize that there are companies like FAANGs that want a deep bench and will have expert algorithm people on-call for that one time when it's needed. Most companies should be OK to just present real problems either solved or being worked on which they would expect applicants to be able to solve.

This is more true for startups, which need less "how to reverse a binary string" and more "how to properly design a database with third normal form", etc. If you're presenting an interview question, it should have actual job relevance, and your co-workers should be able to solve it in the same time as the candidate. If you're drilling people on non-job qualities (e.g. invert a binary tree for a web dev role...) then you should expect a large difference between the audience that can pass your bad interview tests and the audience that will perform well at the actual job.

Not trying to complain. I think job interviews should focus more on the 99% of what you do in your job on a Tuesday. Poor interviews seem to be mostly gotchas and algorithm tricks that disqualify people from roles which have little actual use for algos + data structures. The tests might seem too easy, but as a front-end engineer I would rather work with a coworker who understands the CSS box model, knows semantic markup for accessibility, and similar web things than a person who is good at creating hash tables and doubly-linked lists in JS. Leetcode probably doesn't test vertical centering techniques with CSS, but if you're applying for a web dev position you'd better know them.


One company sent me (before the interview) a small technical assignment. After I had submitted the code, an interview was scheduled. The entire interview was an extended code review -- talking about trade-offs, about other potential solutions, etc.

I felt this was much better in that it was less stressful, yet allowed me to demonstrate both knowledge and design skills.

Another company did something similar but more thorough: They invite candidates for a full day of work where they try to solve a small problem. Then the code is reviewed and evaluated together. They also start the day with a 1-hour overview of their current architecture and you get to ask questions and talk about alternatives. I think this gives both sides a better chance of finding the right fit.

I realize this is not always reasonable.


I've been thinking for a while that companies should create their hiring tests from bugs and/or feature requests that came from their actual software in the past. Then they can gauge the quality of the employee for their purposes by comparing the candidate's solution(s) to those the actual employees wrote.


Bugs I have seen in actual code are divided into two types: those that are due to the way code inter-operates with other code (sort of integration bugs) and tricky bugs (usually edge cases) in some highly dense code.

The first type requires the candidate to learn a lot about the existing code base. The second requires more mental work but is atypical of real work.

Implementing features sounds like a good idea. Perhaps create a small well defined system which is a simplification of the real world system and let the candidate implement some additional features.


Resumes and prior experience should be taken more seriously, backed by more rigorous verification of said things and real reference checking. In a way, that is what already happens for the more technical positions (someone already knows you can do the job, the interview is just a formality) and sucks for those who don’t have well known enough reputations.


> And as for the code, we included what we thought was a trivial nested for-loop problem and virtually nobody could even get started on it.

It's possible that you asked the question poorly, or the solution wasn't as obvious as you thought.

Designing interview questions is hard[1]. I'll test out new questions on my peers at least two or three times before putting them in front of a candidate. And many don't make the cut. If a good engineer who's relaxed can't solve it easily, then a stressed out candidate will have no hope.

[1] This is why I hate seeing candidates share specific questions online. As an interviewer you'll have to scrub a good question, and switch to something you're not as familiar with. This hurts good candidates.


At my current position for a remote job, one of the interview assignments was reviewing a pull request for a very basic example app using their tech stack, and then implementing my own suggestions during a video call (while sharing my screen), updating/running tests, discussing trade-offs etc.

I think this was a great way to not only verify coding ability, but also to test teamwork and communication skills.


Where are you posting your jobs? Are you paying a recruiter? How much? Maybe you could have interviewers rank candidates (hire; competent, but no hire; slightly incompetent; how did they get here?), then see where the majority of your "how did they get here?" candidates come from.


For the code portion of things I stick to a set of increasingly difficult "real world" problems. The first one should be easily answerable by any potential candidate, the last one should be too hard for most people but I'm really looking at how they problem solve and handle themselves. I've recommended people for hiring who eventually gave up on that last one (and who went on to be fantastic in their jobs).


> Everyone has the same canned answers to the stupid behavioral questions.

The art of behavioral questions isn't "ask and answer"; it's the follow-up questions. As you astutely point out, the questions are 'stupid'. They might as well be "do you want a stick of gum?"

Next time you ask those questions do a couple of things. First keep in the forefront of your thoughts what information you’re trying to get out of it and keep the candidate on track answering your data point. Do that by relentlessly asking follow up questions. When you think you have everything ask more.

As an anecdote, I was being shadowed during an on-site recently. I asked some arbitrary 'dumb' behavioral question, went back and forth a bit, wasn't getting much out of it. I noticed my shadow clearly moving on to the next question in their notes and decided to keep pushing on the original question - why did you do this, what were you trying to solve, what motivated you. Turns out the candidate did all of this to generate new revenue for the company and ended up bringing in $10m a year extra at the small company they currently worked for. Loads of great data, and I would never have gotten there if I'd settled for the canned answer the candidate had.

Behavioral questions aren't comp-sci trivia questions; you can't just ask the behavioral equivalent of FizzBuzz/Fibonacci/flood fill and copy down the answer (and you should never be asking those questions either, but that's a separate rant).

Behavioral questions are stupid and to some degree that’s the point. When you ask your significant other or kids “how was your day?” — guess what, that’s a stupid question too. What matters is what follows from your line of interviewing.

If you want to get good at behavioral questions listen to Fresh Air and try to be like Terry Gross.


Nested for loop for the technical interview? Where do I apply?


> And as for the code, we included what we thought was a trivial nested for-loop problem and virtually nobody could even get started on it.

> Is this kind of code problem too complicated in your opinion?

Could you please post the problem so that we can provide you with meaningful feedback regarding it?


I like the for loop code thing. IMO, a good hiring process for programmers must involve a small piece of code. Not difficult or algorithmic, but something to distinguish those who can't do anything at all.


Seems like your problem is basic competency? Move the "can you even code?" question to as early in the filter stage as possible (first 'phone' screening). If you have a lot of applicants, you'll have to do some earlier filtering in the name of time (like on degrees, years experience, "the lucky half"), but don't pretend it's fair or very accurate since both the false positive and false negative rates will be high.

I agree that some coding problem needs to be used to try and answer the question, though with the right interviewer they can answer it without seeing code. The problems you use for that don't have to be at the octree-collision-detection level of challenge; a trivial nested for-loop is fine -- fizzbuzz level is fine. Sometimes you can rely on github or a strong internal referral to skip this, but watch out. And anyway it's worth giving your questions to people you're sure will do fine (you've timed at least yourself, right?), both for the benchmark data and because sometimes they don't do fine, perhaps because your question is too much. e.g. Floyd-Warshall can be done simply with a few nested loops, yet I would never give it as a problem, and I'd expect nearly everyone I've worked with to flunk it given only the standard hour (which really means 45 minutes).
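
(For reference, the Floyd-Warshall core really is just three nested loops over a distance matrix. A sketch, assuming dist is an n x n matrix with 0 on the diagonal and math.inf where there's no edge:)

    import math  # math.inf marks missing edges

    def floyd_warshall(dist):
        # All-pairs shortest paths: relax every pair (i, j) through every intermediate k.
        n = len(dist)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist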

Some jobs only need basic competence, so you might want to extend an offer if you've been convinced of its presence. At my last job, which ended up being more technically challenging / interesting than my current job, I was hired after posting my resume to Craigslist which led to exchanging some emails and having lunch with the startup founder to talk about my past work and whether I would be useful for his most pressing work. At my current job, I've been part of on-sites where I've established "can you even code?" is "no". Those were costly failures of not having that answered earlier. But we also like to believe we need more than basic competence, so rejections can still occur because of a lack of "testing mindset" or certain "behavioral answers". Only once you fix your "can you even code?" filter is it even worth considering what else you might want to justify an interview pipeline with more stages than a 'phone' screen or lunch conversation.


I couldn't agree more with the idea that you should move a 'can you code at all' test to as early as possible in your hiring pipeline.

I used to wait until the first in-person interview to try simple fizzbuzz-style questions (with the candidates on a machine and a compiler/interpreter). In about a third of cases that meant we'd committed a significant chunk of time to engineers who apparently couldn't solve trivial problems.

Now it's one of the first things I check. Done right, it's a relatively small hurdle for capable people to overcome, but really helps as a filter for those who aren't suited to the role.

I recently created a service (https://candidatecode.com) to help companies manage issuing and reviewing their coding challenges; I think it's got real potential to help some people out.


The way most fields that require some level of knowledge or ability handle this is by having industry-standard assessments that everyone takes. When doctors interview they get almost entirely behavioral questions because the hiring hospital / office merely needs to check that the doctor is a) licensed and b) certified in whatever specialty they are being hired for. They certainly never get asked whatever the medical equivalent of FizzBuzz might be.


I'm getting up the gumption to look for a different, hopefully better job in the nearer future (in automotive control software, looking to move to AV), and I had a bit of a revelation when talking to a friend of mine who does interviewing. I cannot talk about the things in my current job that would make me a good hire for the things I want to move into. It's all hunting down an obscure bug buried in layers of technical debt and overly complicated standards, but giving any depth beyond that bare platitude requires going into things that my NDA covers. I even work in driver assist technologies, but I can't go into detail about that because I'm working on unreleased, unannounced features. That means that the only thing left is side projects or whiteboard interviews. To be fair, this is at least partly an industry problem due to long product development cycles and a culture of secrecy, but it makes a lot of the better solutions to interviewing unworkable and causes companies to drop back to whiteboard interviews. And as a candidate, my personal maximizing function is to hit the books and be ready for curly braces and logic puzzles. Sometimes there really isn't a better way.


“I’m under NDA” is a valid response. Have some hobby project or FLOSS bug fix on GitHub if there is time to fill.


If you risk having a bullshitter you need some way of identifying them. The best way is through references, personal projects etc.

But if you need to recruit someone without any such credentials then you may need to do a simple coding aptitude test. Could be a code review or a simple exercise, but whatever you do, don't do whiteboard coding and don't have people recite/implement memorized CS textbook algorithms. Anyone can do that and still not be able to code.


> Everyone has the same canned answers to the stupid behavioral questions.

If the person conducting the interview thinks the behavioral questions are stupid, then perhaps they are. In that case, don't ask "stupid" behavioral questions.

> Resumes are meaningless, and often re-written by recruiters to match the job anyway

Was the position entry level? Students coming right out of compsci often have little to no practical experience. They may have difficulty thinking about what to put in their resume. After one or two years of full-time experience that should no longer be an issue.

> For all I join in when complaining about irrelevant algorithmic questions, I have to admit that they at least test something, even if it's just willingness to study for the interview.

Asking those "stupid" behavioural questions and receiving the same canned answers also demonstrates a willingness to study for an interview.

The coding problem should be testing a candidate's problem-solving capabilities as practically required by the role being interviewed for. The chosen problem should reflect the types of problems that they will actually need to solve if hired. For example, you could select a small PR from one of the projects being actively developed by the company. The selected PR should involve only one or two classes (assuming a language with classes) and require improvement. You can look through the history of a PR and just pull out a segment that was selected for improvement by the reviewer(s), or have the team select it for you. Then ask the candidate:

- to conduct a code review of the PR

- to improve the code


Is there any chance you could share the problem? Changed enough to protect your identity, of course.


It was very similar to the “given an array of stock prices, find the optimal buy and sell indices for the biggest profit” problem that somebody referenced above.

But we emphasized repeatedly that we weren't looking for the O(n) solution; the brute-force naive solution was 100% OK. And it definitely didn't look like people were freezing up trying to figure out the optimal solution; they were struggling with the basic nested loop.
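
For concreteness, the brute-force nested-loop answer to that style of question is roughly the following (a sketch; the exact wording of the original question isn't given here):

    def best_trade(prices):
        # Brute force: try every buy/sell index pair and keep the most profitable. O(n^2).
        # Returns (0, 0) if no profitable trade exists.
        best_buy, best_sell, best_profit = 0, 0, 0
        for buy in range(len(prices)):
            for sell in range(buy + 1, len(prices)):
                profit = prices[sell] - prices[buy]
                if profit > best_profit:
                    best_buy, best_sell, best_profit = buy, sell, profit
        return best_buy, best_sell

    # best_trade([7, 1, 5, 3, 6, 4]) -> (1, 4): buy at price 1, sell at price 6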


What works for me is to consider what work the engineer will be doing, and ask them to write code to prove they can do it.

For instance, a front-end engineer can be expected to be able to write a to-do list or similar app in a framework (ideally the one your team uses, but not a hard no-hire if not) with minimal googling (which is fine as long as it's not excessive) in ~45 minutes.

Then you have to look at what level of experience they have. Less experience requires more mentoring generally, which may be fine depending on how much time your team budgets for that work.

Lastly, measure their body language and tone of voice to check for red flags pointing to difficult communication styles or people who treat others poorly.

If all three match, hire!


How much were you paying for these positions?


Slightly above market rate for the area, all in the low-to-mid six figures.


Was this a position where they were expected to write code? I'm surprised that developers making six figures couldn't solve a problem that an interviewer considered trivial.

Any chance of a sample or maybe an alternate version of the problem?


I have had similar experiences as an interviewer. It’s one of those things that you really have to see to believe, but it’s definitely a real thing.


Like $110k-$150k?


I prefer take-home assignments. I know that they often get a bad rep and have the potential for abuse; for this reason I'd argue for implementing some solution or other to that in the field.

I recently had one requiring me to develop a native mobile application, which I enjoyed. It was interesting, the code is useful down the line, and if I don't land the job, it beefs up my portfolio.

Initial screening by recruiters is tough as my background's missing a degree and industry experience.

Context: self-taught, started out with game dev, tried going solo - not a runaway success. Looking to move away from the field.


I use basically the same process, but use a very simple programming problem at the end - simple, but amenable to discussing optimization and edge cases. I let the candidate choose the language (or just use pseudo code) and don't care at all about the syntax.

I find that this is very useful especially when interviewing juniors who don't have many projects under their belt for the first part. It's also useful when a candidate has good verbalisation skills, but poor programming ones (which happens).


I have a similar experience doing interviews in NYC. We started by asking algo questions, but quickly found out that 95% of the candidates can't answer them. My boss, an ex-programmer, was surprised too, and she asked me to dumb them down, a lot. We ended up with a bunch of really trivial stuff like "write a function to reverse a string" or its slightly harder version - "reverse an integer".
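
(Roughly this level of difficulty, sketched in Python; not the exact wording we used:)

    def reverse_string(s):
        # Reverse a string: "hello" -> "olleh".
        return s[::-1]

    def reverse_integer(n):
        # Reverse the decimal digits, preserving the sign: -123 -> -321, 1200 -> 21.
        sign = -1 if n < 0 else 1
        return sign * int(str(abs(n))[::-1])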

Could you give the exact wording of your "nested for-loop" question?


> we included what we thought was a trivial nested for-loop problem and virtually nobody could even get started on it.

If no one can answer the question, maybe they don't understand it? Maybe there's something unclear in the way it is worded?


I don’t think the problem lies with your interview if they can’t write a simple nested for loop...

Which I think is why a lot of startups locate in the expensive Bay Area - not a lot of cities have a similar concentration of decent talent.


Phone screens should prevent those candidates from ever getting onsite.

You'd be surprised (or maybe not, now) how many applicants for a senior frontend position can't build a progress bar for the phone screen.


I've given phone screens to individuals who turned out to be a different person when they showed up onsite. Not "nice on phone, jerk in real life" but rather "Bob does the phone screen for Charlie (and passes), Charlie shows up at the onsite."


I usually have video off during phone screens. Do you require them on?


This.

I loathe writing algorithms on a whiteboard, especially the 'catchy' type. But I've interviewed people who can't even write a for loop... the amount of brain drain in flyover country is insane.


> And as for the code, we included what we thought was a trivial nested for-loop problem and virtually nobody could even get started on it.

Could it be that you're having issues communicating the problem?


> And as for the code, we included what we thought was a trivial nested for-loop problem and virtually nobody could even get started on it.

How did you run these? While it sounds like something that should be ok even on paper, you can vary comfort a lot through the medium. E.g. for the last interview I had with a substantial coding part, I think being able to do it on my personal machine made a big difference. (I obviously was told before what kind of environment I'd need to have ready)


I agree a code test is necessary. I’ve seen several panels neglect to do a code test, the candidate was hired, then within a month fired because it was clear they couldn’t do anything (other than, well, argue).

I’ve been in a panel where I was the only person who asked a code question, the candidate flunked, and then the VP of Engineering went over my complaints and hired the guy anyways. He had been a Professor of Software Engineering and had a graduate degree from Princeton. Within three weeks, the VP of Engineering had to fire him because he couldn’t make it through a simple code review.

BUT the extremely negative sentiment here towards the technical interview process is very well-deserved.

Assessment of code (and the selection of problems) is most often no less subjective than any non-technical assessment. Sometimes the interviewer doing the grading is flat out wrong. Several times I’ve been asked the famous “given an array of stock prices, find the optimal buy and sell indices for the biggest profit.” One interviewer was not aware of the linear time solution to this problem, and didn’t believe me when I wrote and tried to explain it to him.
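
(For anyone curious, the linear-time version is a single pass that tracks the cheapest buy index seen so far. A sketch:)

    def best_trade_linear(prices):
        # O(n): remember the index of the lowest price so far and the best (buy, sell) pair.
        best = (0, 0)
        min_i = 0
        for i, price in enumerate(prices):
            if price < prices[min_i]:
                min_i = i
            elif price - prices[min_i] > prices[best[1]] - prices[best[0]]:
                best = (min_i, i)
        return best

    # best_trade_linear([7, 1, 5, 3, 6, 4]) -> (1, 4)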

But sometimes, the interviewer doesn't even want you to do well. One time I was interviewing with an injury that prevented me from typing efficiently. I had a doctor's note and the injury was quite conspicuous. Nevertheless, three start-ups made me solve problems by typing on a keyboard, which guaranteed an excessively long completion time. Those companies held those results against me. (And it's not like there is anybody to hold those panels accountable.)

And then there are those who just don’t care. I had a phone interview with Airbnb that was literally as bad as the stories on Glassdoor: the guy answered the phone in a noisy office (not a conference room), gave no introduction, then simply stated the problem and dumped me into Coderpad. I literally thought it was a prank, since I had met with people at Airbnb face-to-face prior to the call. But the recruiters confirmed the guy was a real employee.

The root problem here is there is no feedback loop back to interviewers. The candidates get “feedback,” but people asking code questions, especially new grads, typically get zero assessment on how well they are doing as interviewers. What’s worse is that recruiters and hiring managers both have incentives to deprive ICs of such feedback, since it would invariably make ICs more aware of opportunities outside the company.

Until the incentive structure of technical interviewing changes dramatically, we’re stuck with Leetcode and hope for the best. People like Gayle Laakmann are helpful (especially when Facebook gives candidates a live hour-long session with her for free), but these people are ultimately invested in their own income and not the task of fundamentally fixing this broken process.


what is everyone smoking? you're still trying to optimize for a test and not a job.

don't give them a fizzbuzz. give them an example of a real problem your engineers need / are trying to solve. how do they respond? thoughts / intuitions / pseudo code. do they show knowledge of the problem space / domain?

or if you just want a kid who can code and they pass the fizzbuzz but fail at the real job, what does your training/culture look like? who does that really reflect on?

it seems to me that interviewing is terribly cargo cult. the problem is real; the practices ostensibly supposed to be solutions are not.

/end rant


I realize everybody's going to jump in and rant about algorithms in interviews, but I wish you'd all add something constructive as well.

I was like this a couple years ago. I was a self-taught, "college is a scam", "practical experience" type guy. I now, however, do see immense value in the ability to work through these algorithm questions, especially if you ever want to do something besides web / app development.


Algorithms are great, but in the real world what matters is being able to recognise a class of problem and then go to the literature (e.g. Knuth) to find the right one. No working programmer knows every algorithm off the top of their head.

The classic detect a loop in a linked list question. The original guy took years to devise the algorithm for it. In an interview either you’ve seen it before, in which case you rattle it off, or you have to write a research paper effectively in 5 minutes.
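
(The "rattle it off" version, for reference: Floyd's tortoise and hare, a minimal sketch assuming nodes with a .next pointer:)

    def has_cycle(head):
        # Floyd's tortoise and hare: the fast pointer laps the slow one iff there's a loop.
        slow = fast = head
        while fast is not None and fast.next is not None:
            slow = slow.next
            fast = fast.next.next
            if slow is fast:
                return True
        return False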


This. The problem a programmer needs to solve isn’t remembering or applying the algorithm, it’s identifying what needs to be done.

“This problem is (analogous to) loop detection”

“This problem requires a sorted list”

“This problem can be solved by depth first traversal”

Actually sorting a list or doing a traversal is easy after that crucial step. You can look it up. You can’t look up the step that told you what algorithm or data structure to use though.


Yeah, but there's a difference. We're immersed in a state of better knowledge. It took centuries for us to go from forms of knowledge like "truth is self-evident" to "verification of consequents is the pre-eminent method of knowledge acquisition" to "failure to reject falsifiable ideas is the best method to acquire knowledge". But I took that same path, without reading the philosophers, in approximately ten years. How? Because the cultural milieu is different. I am a unit of a more capable civilization and I am therefore enhanced.

So am I smarter than Karl Popper? Not really. It's just easier to learn something than to find it. So, yes, I know the linked list answer.


This is surprisingly easy, but you have to understand what the problem is first or the solution won't be apparent. If this sounds patronizing it comes about from my frustration on being on the other side of the table too often.

What you want is very good football (soccer) players. Unfortunately you (or upper management) may not know all of the rules to soccer. You may not know the training regime that goes into winning a good soccer game - and it's a big risk spending money training for the big game on the off-season only to lose during the Big Show. So what do you do to test potential candidates? You see if you can get along with them, if they're a team player, and then see how well they play foosball.

It's perfect! There are soccer players on the field, there's a goal, it takes skill and coordination. But oh no, it turns out in the population at large really good soccer players really sort of suck at foosball. After all, they'd rather spend their time and energy playing soccer. So now they spend all their time reading up on books about foosball and what the best foosball strategies are.

You see where I'm going with this? No one uses algorithms in their day jobs. OK a few of you, but come on man, I make full stack web applications. As do most programmers. So why are you testing theory that has literally nothing to do with the job? Somehow someone thought this was a proxy for smart people, but I mean, if the guy who wrote this "Tech Interview Handbook" was really smart wouldn't he have spent his time writing a cool program? I mean how lame is this?

If you want to hire competent engineers tell them what you're building and ask them how they feel is best for them to show they're competent. If you have a big data pipeline in Scala ask if they can construct a data pipeline example that is cool over a few days (take home) or do something similar in the office. Some people like one, some like the other. But just communicate with the people you want to hire! And if not everyone's interview process is the same then maybe that's ok.

I mean I just got a guy who sent me some automated code interview program that had a timer that counted down from an hour at the top! WHO THINKS TREATING THEIR POTENTIAL EMPLOYEES LIKE STAR WARS DRONES IS A GOOD THING? All you have to do is treat the people you want to work with like you would want to be treated and demand that they know their shit.

This. Isn't. That. Hard.


I don't work in tech (though I sometimes think about it).


> nested for-loop problem

Ah, dynamic programming.


Given the current state of tech interviews, they have more or less become like standardized tests, such as SAT, ACT, GMAT, GRE... with guides, cheat sheets and perhaps neighborhood coaching institutes on the horizon with instructors who have cleared tech interviews in FAANGs.

Are we going to see tech recruitment become more and more like college admissions where a top score in the interview is just one of the criteria and no longer sufficient to get a job?

Perhaps next is asking people to write essays regarding their career, goals, why they want to work there, extracurriculars, etc.


Y'know, as someone who went to a very competitive public school, there were a particular brand of driven, studious kids who knew exactly how to paint along the lines, play the game, and get into the college of their choice. They understood the precise combo of grades, extracurriculars and essays to get into a "good" school.

I definitely see some of those kids doing the same tactics to get into a "good" company. The steps are slightly different but the philosophy is the same: grind Leetcode, participate in tech clubs and take bullshit jobs juuust long enough to get an internship, get said internship and parlay it into a return offer. Put your head down, play the game and reap the rewards.


I mean, it's all a game, as much as it sucks. One of my best friends in high school was salutatorian because the school didn't weight grades. She took zero AP classes and very few honors classes. She knew what she was doing.

Friends in college took less challenging majors (business admin, environmental policy, etc.) to boost their GPA for med school, and their core classes were easy, so it gave them more time to focus on med school pre-reqs.

It has been and always will be a game. Few things in this world come down to meritocracy because the way we socialize and the way we interview does not beget that in absolute terms.

Tech did a better-than-most job as a meritocracy for a while, but I believe those days are done. Most areas of tech are too mainstream, with too much obvious money at stake.


I don't understand why you are discrediting people who are taking the better path to med school. Work smarter, not harder.

There is also the short game vs. the long game. For example, if you cheat on a test without learning the material, you might win the short game, but if that material was important and you need it later, you're losing the long game. That said, if the material was just fluff and no one really cares, it's actually smart just to cheat and get by.


People who take the easy way or cheats to get around bureaucracy will do the same thing after you hire them at the job. They will do things like avoid password salting for your database unless you explicitly tell them to do it etc.

In other words, I don't want to hire "smart" people, I want to hire intelligent people who take pride in what they do and don't try to take shortcuts to make their lives easier at the expense of others.


That's how most kids get into prestigious universities. Rarely, if ever, do kids just "stumble" into HYPS or Oxbridge, out of sheer intellectual power and luck.

If you want to get into those schools, you gotta know how the game works, and practice specifically for that match.

I went to a school like that, and the majority of my classmates were from upper-middle to upper class families that had poured money into their education since they were young, with the specific goal of getting them into top schools.

Yeah, that's unfortunately the thing when you start throwing in a ton of different criteria / measurements. The people that know how the system works will study to maximize those points.

My honest opinion is that you kind of end up with smart people that are very good at test-taking, but may not be the best people when presented with a set of problems with non-obvious solutions and no guides or road maps for how to solve them.


Part of the problem is how little time one has to prepare for college. If we encouraged people to apply to college after a few years of adult life, I have a feeling the playing field would be a little better. But instead we expect kids to gain the credentials of much older adults while going to school and getting fantastic grades. Unless you understand the game going in, it's too easy to slip up and fall behind. Even then, you don't have time for mental health or sleep.


You're absolutely right, unfortunately here in the Bay area it's those kids winging it and getting in.


You joke, but I've seen several applications where I've been asked questions like "What achievement are you proudest of?" and "How would you contribute to the diversity of our team?".

Also cover letters cover some of this.

Actually, I wouldn't mind a standardized test like the GRE, where a good score might actually keep your resume from getting thrown out immediately.


Maybe I stand alone in feeling this, but I don't see those questions as absurd during an interview. Especially at smaller startups, where early hires are going to be interfacing heavily with the majority of other employees. Maybe this should be filed under things that comprise "culture fit", but I also think it's different than that.


I think it's a little odd to ask directly. If you are evaluating an interviewee for "culture fit" as in your example, would you directly ask them "how would you fit into our culture?" or try to evaluate their thought processes, demeanor, etc to determine it for yourself?

Perhaps it's just the context from which this conversation arises and it wouldn't strike me this way otherwise, but asking these questions directly feels a bit too much like college application boilerplate to me.

If a team wants to prioritize a diversity of perspectives, I feel the team should spend the interview trying to understand my perspective, analyze how that compares to their own perspectives, then synthesize the answer to the question themselves rather than just adding a "How will you contribute to a diversity of perspectives?" question.


If there were a standardized Programming GRE, it would end up increasing the competition for jobs thanks to the future Kaplan/Princeton/local training centers. What would companies do then? Start another layer of programming interview on top of this programming GRE. Eventually, we would reach a point where companies don't trust such GRE scores.


Bright high school students are vastly oversupplied compared to seats in elite colleges. Graduation rates are in the high 90s. There are many more than 5,000 kids who can handle the workload; which 5,000 you pick is arbitrary.

Engineering competence is not even slightly oversupplied compared to useful engineering work. Project failures, incompetent people, and systematically incompetent orgs are still very much alive at the most selective tech employers. There are real business needs to hire better engineers.


Admissions at Georgia Tech has spoken about this online. Per their analysis, they can definitely differentiate the top 30% of applicants from the rest, and it makes a difference in performance. Within that 30% they have found no differentiator that significantly impacts academic performance.

So they have a full 30% of their applicants qualified to be there but they have to narrow it to 1%. Whatever method they choose must have the appearance of meritocracy, abide by laws and regulations, be resistant to corruption, achieve goals other than academic performance such as culture, volunteering, and sports, and so on.

I think the same must be true for companies. At some point in the elimination process everyone is technically qualified, so they might as well hire someone "because we like his face". But that's demoralizing and corrupt so they invent some criteria that on its face seems useful, even though it's truly not.


Every person you hire is technically competent? So every system you build on or integrate with is high quality? Every architecture choice you have to live with is a good one? Every bug report you file is investigated well? Every coworker's code is a joy to read, and their reviews of your code and designs are insightful? Every person on your team is capable of the most difficult work in its pipeline? Every system you and your business could think to want is within your engineering team's capabilities to build quickly and operate well?

You have something really special there; hang on to it. People dream of working at a place like that. And have fun taking over the world, because with a team like that, you will.


Maybe I didn't phrase it well. Georgia Tech of course has students who perform poorly and fail so their point wasn't everyone is good enough.

Their point was after getting to the top 30% there's really no signifier of who's going to fail or how much they would succeed. After a certain level of basic qualification it's essentially random to them.

So those people of course will get through the hiring process. The issue is whether you actually have a model to filter them out at hiring. Since a workplace Utopia as you suggested doesn't exist yet, I suppose no one has that model yet. So picking the top 0.01% is essentially randomness because as far as we can tell the top 30% are all the same.


It's actually that you aren't qualified to say who's competent. The CEO can't hire a tech-competent CTO, and that multiplies down the line. In the absence of tech-savvy judgment they fall back on fluff like "devops culture fit" and "growing the team" to be like "silicon valley culture". Hundreds of recruiters will sell them that story (those recruiters are really in the PR industry) and you end up with incompetence all around, except everyone has become good at selling their own incompetence as the highest form of tech savvy. Most prevalent in older enterprisey firms.


Even worse, engineering management is desperately in short supply. However the beauty competition nature of the project and vendor selection process is the root cause of failure imho. And that's not fixable in tech, in fact, the problem is so meta (and metastatic) that only the PG recommended "end-run" to provide value to end users can cut through the BS.


Replace high school students with engineering graduates and elite colleges with elite employers in your first para.... And see..


Elite employers are nowhere close to getting all their projects done successfully or with reasonable quality. The subset of engineering graduates who can do that is small.


I was against the algorithm tests or something more like exams before.

Until I interviewed a lot of people with my colleagues during these years. Interview processes are highly biased based on the knowledge of the interviewer's background and value, or even mood.

Some interviewers are too sloppy about interviewing, asking ill-defined questions, demanding the answers they want, or just being in a hurry and wanting to get back to work. I often feel bad and angry for interviewees - they spent time and patience preparing themselves carefully, then were treated very casually. It isn't fair at all.

I hate to say it, but although standardized tests are bad, they are better than most interviews nowadays.


I went to a Cal State, and I recall in the last few years Google has taken professors to their campus and pretty much preached Cracking the Coding Interview. So it looks like schools are already making that their standard. I have yet to see what the classes look like, but I could see there being a required technical interview course you have to take in order to increase the school's numbers on where their alumni work.


That would be Stanford CS9 : https://web.stanford.edu/class/cs9/

> This course will prepare students to interview for software engineering and related internships and full-time positions in industry. Drawing on multiple sources of actual interview questions, students will learn key problem-solving strategies specific to the technical/coding interview. Students will be encouraged to synthesize information they have learned across different courses in the major. Emphasis will be on the oral and combination written-oral modes of communication common in coding interviews, but which are an unfamiliar setting for problem solving for many students.


There's a YC company that's literally making a standardized test for programmers: https://cspa.io/


I really hope this fails, something I have never wished for any other startup.

At least personally in my hirings I'll never use or trust anything like this.

There are so many things wrong with this approach that I'm kinda speechless as to where to start.


Having a way that coders can avoid doing these types of coding problems over and over seems like a positive. Companies that aren't interested in the approach could simply not use it.

Most companies today are already using a version of this that is way less respectful of applicant time.

But you’re speechless, so I guess there are strong points on both sides.


This is my thinking too. Having to prove, over and over, that I understand how linked lists and binary search trees work is just tiresome. Doing it once and then being able to refer to a trusted credential that certifies it would be a blessing.

I understand nobody is going to hire on the strength of one exam. Every job is a little bit different and some of them call for particular skill sets. That's fine. By all means ask questions tailored to the job at hand. But let's find a way to skip the really generic questions.


It would be a blessing, but I don't have any reason to expect its coming. I mean, I proved I understand how linked lists worked in my freshman year (well arguably earlier in HS CS classes) along with many other people who were enrolled in a CS or related (mine was CE) degree. Why can't I just point to my degree and never have to prove myself on that stuff again?

The issue is trust. No one trusts degrees anymore, and for very good reasons. I don't see any way for something else that relies on the same sort of credentialism to avoid corruption and lose trust unless it's measuring fairly static and not-very-gameable attributes like height or IQ (see: Wonderlic for basically IQ testing in a way that complies with laws against IQ testing..).

Where this gets particularly ugly is that even though no one trusts degrees anymore, they're still very popular, and some employers still require them anyway (or at least make it very hard to even get an interview without one). If a new "trusted credential" gets popular enough, or even a set of them, they'll eventually lose their trust, but because at some point they were popular people will still sink time into them and so the time cost we're inflicting increases when it'd be better to just get serious about hiring and tell smart and interested 18 year olds to just come start working already.


I like your thinking around the trust issue.

It's a totally different matter between hiring new grads and non-new-grads. cspa.io seems to only target new grads. But as you said, it's replacing one degree system with another similar, and maybe equally good/bad, system.


Please do start? I'm kind of curious what you'll say because it's not at all obvious to me that it would be a bad idea.


I'll try - a few thoughts:

Software engineering is part knowledge and part application of knowledge (problem solving) and part ability to acquire and apply new knowledge. We remain current in our jobs and roles by making sure we're constantly learning. Interviews should be part testing of knowledge, but a lot more discussions to evaluate problem solving and ability to acquire & apply knowledge.

Also, knowledge in many cases becomes stale if not used. If I learn a new language/framework but don't use it in a project immediately, I'll forget almost all of it. I may learn react via books/tutorials, get a great score in the test, never use it and/or forget it by the time I get a job based on the score. Am I a good hire?

Given that, it's not clear what problem such a test will solve, and moreover it's unclear what the long-term impact of such a test would be. Is this just for initial filtering, or will it become an actual hiring criterion?

If graduates from colleges are expected to take such a test to prove something, then what is the value of the work these grads did in their colleges? How is a score on this test weighed against the work students do in college?

Some people are better test takers than others, and a good interviewer is able to probe and work with the person. So such tests may end up discriminating, not on purpose of course, against bad test takers if they become the sole filtering mechanism. People may have to take tests again and again till they get a good score - an effort which in no way helps them in their job per se, but is simply an additional burden.

The only potential place where I can see some value for such a test is to evaluate the coding bootcamp folks, since different bootcamps have different quality. However, even there, it's unclear how this will pan out.


>I may learn react via books/tutorials, get a great score in the test, never use it and/or forget it by the time

That's one of the points of a programming test: there are tons of frameworks/languages out there and every company might use a different language/framework, so a standardized test streamlines the interview process and makes it easy for you to interview at many different companies.

>Given that it's not clear as to what problem such a test will solve

At the very least it shows that you care enough about the job that you are willing to take the time to prepare for it.

>Some people are better test takers than others

Sure, life is never fair, but for those who are worse test takers it just means you have to spend more time and effort practicing.


> Thats one of the point of programming test, there are tons of framework/language out there, every company might use different language/framework, a standardize test streamline the interview process, make it easy for you to interview in many different company.

As you said, they're very different. The thing is it might not be possible, as the technology develops and diverges, to have this one standardized test.

> It very least shows that you are care enough for the job that you willing to prepare time to prepare for it.

It applies to grinding leetcode as well, so not really related here.

> Sure, life is never fair, but for those who are worse test takers then it just mean you have spend more time and effort to practice it.

It can't be a case of "some people are just not good at being tested, so sorry, we don't want to work with you." Being good at tests and being good at the job are different things.


But the basics/fundamentals like algorithms and data structures are still the same.

Yes, and just like grinding Leetcode, it at least shows that you are willing to put in the effort.

If you are someone who's just not good at being tested, then you have to make up for it with other skills - for example, being such a good domain expert that the company seeks you out instead of the other way around, or having excellent networking (as in relationship-building) skills.


Why do you think it would fail? Curious about your reasoning


They trademarked the words "Core Exam"? That seems like a rather generic term.


Judging from the people I work with (FAANG), I would say that this screening is successful; if someone truly awful gets in, there's still the PIP. But those cases are relatively uncommon.


The next step is back channel reference checks


except there is no standardization at all.


Having hired dozens of devs, I can confidently say there is absolutely 0% chance to consistently successfully identify good developers in any reasonable amount of interviewing / assessment time period.

The best way is someone brings in an existing code portfolio and discusses it.

The second best way is someone completes multiple design and development exercises of varying complexity, constraints, and use cases.

The third best way is they complete a single exercise and provide commentary on alternative designs.

There is no fourth best way; all other approaches are essentially stochastic and select for interviewing traits, not development traits.

The actual best method I think is a 3 month probationary period which is more or less an extended interview. They're asked to contribute to existing codebases, participate in code review, go through some architecture design sessions, conduct stakeholder interviews - things that again are mostly impossible to accurately gauge in a typical candidate assessment window.

By the way a tremendous book for interviewers and hiring managers is How Judges Think by Richard Posner. A lot of it applies to hiring, and he's a great writer.


#1 is great, but impossible for a lot of people due to NDAs. The 3-month probationary period is indeed the only way to really evaluate devs, but I'm not so sure it's a good way to hire them. After 3 months, the mediocre devs have friends, etc., which makes it hard to get rid of them without creating a weird morale issue and massive team disruptions. And then you have to add in the limiting effect on the hiring pool (though I wonder if that really matters, I suspect good devs know who they are).

Number 3 is my favorite, but there's a trade-off: almost anything large enough to allow for multiple designs is probably too large to require all applicants to complete. But I also love having a candidate discuss their own code. I'm really curious what types of exercises you have for this kind of thing, and where it fits in the interview process?


> After 3 months, the mediocre devs have friends, etc., which makes it hard to get rid of them without creating a weird morale issue and massive team disruptions

I’ve seen this mistake happen so many times. Especially in early stage start ups. If you want morale issues and disruptions, you’ll achieve it 100% of the time by retaining underperforming staff. Even worse, this ultimately leads to your best team members leaving if it’s an issue you can’t solve. Fail fast applies to HR too, you need to learn to fire fast.


It's true that keeping on terrible employees is worse than firing them, but that's the answer to a different question. The question at hand is should you opt for a system you know is going to involve a lot of firing fast, or do your best to avoid bad hires even at the cost of a higher false negative rate?


Hiring and firing are expensive in a whole lot of different ways, but it takes time and effort to get right, and you’ll never get it right every time. So actually using the probationary period properly, and getting good at firing people are things you’ll have to do anyway.


The actual best method I think is a 3 month probationary period

My workplace has a six month probationary period; I brought in a mandatory three month review after watching one group screw this up badly. At the end of the six months, the employee came in to work all happy as usual thinking everything was great, and was promptly fired.

The three month review is the point at which the employee is told that they're on track, or that they're below standard. If on track, they're told to just keep going the same way, and that if they don't get taken aside for a specific chat in the following three months, they should assume that they're going to pass the probation period; we have definitely turned probation failures into probation successes via this mid-point review. It's also their opportunity to tell the company what the company is doing wrong; what the company is doing that will make them choose not to stay. This too has happened, and we have retained good employees by listening to them at the three month point and making changes.

If they're below standard, they're told what they need to improve and are offered help to improve, or they can just sack it now and walk (or, as happened once and once only so far, they're considered unrecoverable and we take a long hard look at how that person was hired).

The principle we subscribe to is that if the employee is surprised by the results of their probation period, that employee's team lead and by association "the company" has really screwed up. If an employee doesn't know how they're doing after six months on the job, something has gone very wrong.


Nobody does probationary periods because few candidates would pick a probationary period offer over a standard job offer. And it's presumed that top level talent will have multiple offers on the table, and if you don't then there's something wrong with you, so having a probationary period selects for lower quality candidates.


I would assume that a company without probationary periods is lower quality; either the work is so uniform that anyone can do it and employees are fungible, or they simply accept bad employees and live with them. Every company I've worked for has had some kind of probationary period.

I don't live or work in the US, though; we can dismiss someone during their probationary period with ease, but after that they have workplace protections. If we could fire anyone at any time for anything we like, that would effectively make the entire employment period a probationary period. Probation never ends if you can be fired at any time.


> The actual best method I think is a 3 month probationary period which is more or less an extended interview

Most states in the US have at-will employment terms. Probationary periods are common in other countries. And yet no one seems to want to do what you're suggesting.


The problem is, most companies in the US want to hire you on as a contractor and not a full-time employee for those three months. I can't speak for everyone (but most folks, I assume, are similar), but I need medical and dental. I can't go three months for the chance of maybe getting it. I can't do it personally and I can't do that sort of thing to my family. There are other options in other countries (depending on the country). I'm a centrist by nature, but this makes a "three month interview" untenable, IMO.


Sorry, what I meant was US companies could treat the first 3 months of any full-time/W-2 employee as probationary, because they can fire anyone for any reason other than protected characteristics. It doesn't even need to be an official company policy documented anywhere.


They could in theory, but nobody wants to. I used to see probationary periods, but I haven't seen a company use that for salaried staff for a long time. HR and Legal have put an end to that kind of thing.

So Jim is right, in practice the only way HR is going to let you have a probationary period is to use contractors, and that limits your hiring pool significantly.


Every decent sized contracting firm I've ever worked with offered benefits. It's more expensive, but you just price that into your rate. So I don't understand what the concern is. Is it the risk that after 3 months you won't be brought on full time?


Upheaval is the issue. First, most contracting firms have pretty crappy medical plans. Second, there's generally a wait period before full benefits kick in. Third, you wind up switching your medical plan multiple times in a short period.

That's fine if you're single, but if you've got kids that might mean switching doctors multiple times, which is a huge deal. Also, you have multiple periods of time where you don't really know your benefits, which is terrifying. And if the company does drop you after three months for whatever reason, now you're doing it all over again.

And keep in mind it's definitely a seller's market if you're a good dev. It's incredibly unlikely that the 3-month contract is the only offer on the table. So all things being equal, who would take it?


Your description sounds like this should only be a problem for families with serious medical issues, which would be the minority. I do have a kid. If your kid is relatively healthy then pediatricians are basically replaceable widgets. If it's important to you, just pay the cash rate for your favorite pediatrician during that 3 month period. If you're really hung up on it, you can just choose the COBRA option and keep your old insurance during your trial period. It's very expensive, but again, you just calculate that into your contracting rate.

>And keep in mind it's definitely a seller's market if you're a good dev. It's incredibly unlikely that the 3-month contract is the only offer on the table. So all things being equal, who would take it?

I don't feel any additional security whatsoever being the least-tenured employee vs. a contractor. Maybe there's some statistical benefit, but it seems pretty small. So for me, it's practically zero cost to do the contract-to-hire thing. Any financial costs are just built into my rate. I'd rather spend my hiring currency on other stuff like working remote full time and extra vacation days.

It probably makes a difference that I spent a big chunk of my career as a self employed consultant. I do remember feeling some anxiety when I first transitioned from a regular employee to consulting. But I quickly learned my anxiety was unfounded.


I do a lot of consulting as well and am comfortable with it. But most people with families want more stability. I don't know why you're pretending to be surprised by this.

Your personal preferences aren't really the topic at hand.


It's not about my personal preferences. If you're concerned about a period of instability in your healthcare, you can literally keep your old healthcare for up to 18 months with COBRA. Adding the contract-to-hire period doesn't really add instability, at least as far as healthcare is concerned.


I fear we've gone down a wrong turn. You may be 100% correct that fears about instability are irrational and not well-grounded, which is the point you've argued for several posts.

But nobody is arguing with you. Nobody is making the contrasting argument. That simply isn't the conversation anybody is having. It's an interesting conversation, it just doesn't happen to be this conversation.


I asked what the concern was and you responded primarily in the area of health care/insurance. So that's what I addressed. What else is the conversation about if not that?


How often are you hiring people whose professional work is open source? How often are you hiring people for roles where their professional work will be open source?


The OP isn't saying it's always possible, just that it's the best way if it is possible.


There is something I cannot understand, and it has already become a cliché:

> Technology industry is an extremely fast-moving one. Many technologies used today didn't even exist/were popular a decade ago; in 2009, mobile app development and blockchain were pretty much unheard of. Engineers constantly need to upgrade their skills to stay relevant to the demands of the job market. Engineering is a great career for passionate individuals who like to learn.

Why would anyone give this advice? Can we stop handing out this advice and encourage everyone to stand up for their rights instead? If you think about it, what this advice tells you by proxy is: as a programmer you will have no life outside of work, and you are supposed to be an idiot who spends his time working and studying outside office hours even when you could spend time with your family. So yeah, go for it and suck it up, you idiot.

At least that's how most employers handle this problem. And interestingly, by comparison, no one tells an MBA holder that it's a great career for passionate individuals who like to spend their whole lives studying.

And before you think that I am against studying, that's not the case at all. If my company pays for it and I can kick back on a sofa during working hours to study, I am fine with it. But I cannot see any value in studying something at my own expense, on my own time, which will be outdated in 3 years anyway and so by definition only benefits my employer.


Why would anyone give this advice?

It seems relevant to getting a tech job. If one is looking for a job, chances are that things have changed since the last time they looked for one, which on average for most people these days is every 2-5 years.

This is not limited to engineering or tech, and extends to most specialized jobs in most industries.

Can we stop handing out this advice and encourage everyone to stand up for their rights instead?

In the right context, absolutely. In general, it's possibly a bad idea. You could always balance it out and educate folks about the rights of an employer.

If you were on my team, I would not expect you to sacrifice any of your rights. But if you kept falling behind your peers, to the detriment of the team's performance, at some point you would be put on a performance plan. The unfortunate thing about performance plans is that by the time one is enforced, things are close to unrecoverable.

And if you did find yourself failing, I hope someone tells you that:

The technology industry is an extremely fast-moving one. Many technologies used today didn't even exist/were popular a decade ago; in 2009, mobile app development and blockchain were pretty much unheard of. Engineers constantly need to upgrade their skills to stay relevant to the demands of the job market.

Because, I will not.


I don't know what a performance plan is. Can you elaborate? I have never worked at a company where there was such a plan.

Is this public to employees, or do you just whip everyone until they work themselves to death without telling them the reasoning?

Also, I assume you are in the US. In most European countries you can do absolutely nothing about someone who works full time on your team unless they, let's say, cause you financial losses or punch you in the face; in any other case you can basically shove your plan up your bottom part.


Performance plans, or performance improvement plans, are extremely verbose formal documents that are used as a last step to tie up all legal requirements before terminating an employee. These are most commonly used in countries outside the US.

Most of the US follows At-will employment[0], so generally no performance plans are necessary.

This is not public. We've only used it twice: one person was suspected of misappropriation of funds (later proven), and the one currently in motion is an employee who misrepresented their technical knowledge, i.e. cannot write code and convinced other employees to do 90% of their contributions.

I get that there is this impression of the US whipping everyone until they work themselves to death. Anecdotally, I've never really experienced that. If anything, folks in the US like being busy, and workplaces (at least in tech) are overly happy, in something that reminds me of a cult.

I have lived and worked in 5 countries on 3 continents and worked with people living in many more. In my own observations, the US colleagues have usually exhibited the highest level of happiness/satisfaction. They also tend to find their next jobs the quickest. And they are paid the highest.

[0] https://en.wikipedia.org/wiki/At-will_employment


Thanks for the info. Interesting. But I assume this (i.e. the plan) is something the employee can choose not to sign if it was not in the original contract and the law doesn't require him to do so. At least I wouldn't sign anything like that. And if he doesn't sign it, then you can try firing him, which might or might not be legal (usually a court decides). At least that's how it works in the countries I've worked in.

It's also not rare; it's in the news all the time in Europe and Japan that a company fires someone whom it is later forced, by court ruling, to hire back.


I didn't read that as saying you have to spend hours outside of work to learn. Learning on the job is pretty much a given these days, no? At least in my experience. Even decades ago I was always given time to research/study new things and I really like that about our industry.


Where did you work if I may ask? In the companies where I worked there was never such a thing. I worked in 2 companies in Hungary, 2 companies in Germany, 3 companies in Japan and one in the US.

It was always basically assumed that you would study all the necessary things on your own time, and that when you sit in the office you make productive things, a.k.a. you deliver. You always had to code and show some progress on a specific development task each week. There was never such a thing as time to research/study.


Let’s explore that a bit with an example. If your boss instructed you to learn a completely new language and refactor an existing production service in it, you would be expected to learn that language on your own time, or purchase training with your own money, outside of the office?


Yes. I would have to take an online course at my own expense and/or read some books outside of office hours until I could use the language. Until then I would have to work on something else that doesn't cost the company money and that I can deliver right away.

But that's the norm. Every company works this way.


Strange — it’s been the norm at every US company I’ve been in since the dot-com boom to allocate time and money to training staff in new technologies if it’s a serious initiative.

That’s the thing about assumptions, I guess :-)


MBA holders are a bad example, since many of them happily work many hours of overtime, so I am pretty sure that they work more hours than software engineers even if you include the time it takes to study for interviews.

> It turns out that the median number of hours racked up by an MBA in his or her first year of employment is a whopping 54 hours a week

https://www.forbes.com/sites/poetsandquants/2018/03/06/the-6...


I am confused. Why do you think software engineers stop working as much as MBAs when they land a job? That's when they have to start studying even more besides working.

54 sounds about average for an engineer anyway. In Germany, for example, it's not rare to have 9-hour working days, with 30- to 60-minute lunch breaks in between. If so, then one week comes to about 45 hours. If you add studying 2 hours a day (reading books or the news, following trends, watching recordings of past conferences, etc., which are pretty standard things you are supposed to do for your job), it easily comes to 54 hours a week. So that's not high at all.


Or maybe you put that time into raising your family, or otherwise being a well-rounded individual. It’s easy to do, if one assumes that the employer pays for the time required to keep an employee’s skills current.


There are no employee rights in modern capitalism. In the 19th century, they worked 80 hours a week in mines, had child labor, no vacations, women were ignored, indentured servitude existed. All this and more until unions and labor laws.

Karl Marx's entire philosophy was based on the abuses of workers he observed.

Nowadays people mock progressive causes as 'socialism', mock unions and worker protections, demonize progressive politicians, and idolize oligarchs. Even though it's all against their own best interest.

Your idealistic notion of 'rights' doesn't exist and will never happen. You have no rights as long as someone controls your purse strings, which, with booming inequality, applies to more and more people nowadays.

I can't wait until most jobs are finally automated and we're done with this whole capitalist system entirely and have to figure out what to do next.


> I can't wait until most jobs are finally automated and we're done with this whole capitalist system entirely and have to figure out what to do next.

We will have to figure that out well before all jobs are automated.


Totally agree, but it's been my experience that human beings (as a whole) never do anything about abstract problems until they feel the pinch themselves.


I agree, and I think you hit on exactly why whiteboarding persists (the least of all interviewing evils, IMHO): if this industry is so fast-moving, the fundamentals are the most reliable thing we can measure that has lasting value for what we hope are long-term hiring decisions.


Teachers, doctors, lawyers, professional engineers, and many others get their licenses revoked by the government if they fail to go back to college and take classes every few years.


Some of these "questions for the interviewer" are very good. In particular I like some of the "tell me the negatives" ones:

> What is the most costly technical decision made early on that the company is living with now?

> What is something you wish were different about your job?

> What has been the worst technical blunder that has happened in the recent past?


These are good questions; I like how they are phrased so concretely instead of just asking “do you have a lot of tech debt?” They’re interesting for getting to know the team and getting a piece of insight you usually wouldn’t have until a few weeks into the job.

But I would be careful how you interpret these. In fact I would almost factor in these answers in the opposite way of what I think you intended. The company that admits to the worst technical issues is at least honest and self reflective. The company that doesn’t admit to any serious issues might be just as bad or worse, but their strategy is to tell employees to lie about it rather than be open to addressing it.


For all the negativity toward these types of tech interviews, they are, from what I've seen, one of the most merit-based systems out there. It is either this or we need to create some sort of national developer exam. The other alternative is the way you otherwise get jobs at good companies: they only look at what school you went to, whether you graduated with a CS degree, what companies you worked for, etc. All things which do not guarantee merit.


Merit at high-pressure performance, not at being a good developer.


It is not a perfect system, but nobody can really predict a good developer until someone is already on the job. A good developer is more than algo skills; he/she also has good communication, works well with others, etc. These are not things we can test yet. The problem with our profession is that since it is lucrative and has no real licensing, it is a perfect breeding ground for fakers.


While this is very good to study before a technical interview, over time I can see that this alone is going to make it 40x harder to differentiate, say, 100 candidates who are all perfect at interviews in general, to the point that we start asking ridiculous Oxbridge-style interview questions and expecting perfect scores to advance 'good' candidates.

Perhaps companies will start asking candidates to construct mathematical proofs of data structures, algorithms, formulas and common equations from university-level entrance examinations just to do a mobile app or a web dev job.

As soon as that happens, the 'ideal candidate' companies expect to interview would be a prodigy: a former math Olympiad champion, decorated with titles and with research papers to their name.

You guessed it: 𝔜𝔢 𝔬𝔩𝔡𝔢 𝔩𝔢𝔤𝔢𝔫𝔡 𝔬𝔣 𝔶𝔢 10𝔵 𝔡𝔢𝔳𝔢𝔩𝔬𝔭𝔢𝔯.


Yes, basically Goodhart’s law generalizes to standardized tech interviews.


> Perhaps companies will start asking candidates to construct mathematical proofs of data structures, algorithms

Perhaps we’d produce better software if people prioritised correctness like this in practice!


Perhaps so. Those sorts of questions would benefit a company working at the scale of FAANG or Microsoft and actually tackling or researching real computer science problems.

Now, would this make sense for a graduate entry-level role for a web / mobile app developer position? Interviewers looking for such candidates need to lower their expectations a bit for positions like that.


I once interviewed at a startup for a senior engineer position and was asked “if aliens came to Earth and asked you to go into their UFO with them, would you?”...

As much shit as we give white-boarding, I would have chosen it instead if it were an option.


At one time I was asked "Why do frogs croak?" I gave a series of answers, like: to attract the opposite sex, and gave some biological explanation of how they achieve that. But the stupid interviewer kept asking me "why?"

Needless to say, after the interview ended I ran away from that deal.


What's the point of hypothesizing on things significantly less likely than someone winning the lottery (edit: while being hit by lightning, for good measure)? In fact "what would you do if you won the lottery" might actually give more interesting insight.


> Get a proper email account with ideally your first name and last name, eg. "john.doe@gmail.com" instead of "angrybirds88@gmail.com"... Avoid emails like "me@christi.na" or "admin@[mycooldomain].com" -- because it is very prone to typo errors.

I don't think I've ever seen anyone explicitly recommend against using a custom domain for email.


That's what I was thinking as well. If I gave you the following emails: s@vdan.cc, s@dantuluri.cc, surya@suryad.com, dsuryav@gmail.com

Which would look most professional?


Well organized, but a lot of noise. This is honestly the best guide I've seen (I got a job at a FAANG company following it):

https://haseebq.com/how-to-break-into-tech-job-hunting-and-i...


When I talk to other Android developers I notice that they aren't really using many data structures and algorithms in their day-to-day programming, except 1-2 very basic ones like ArrayList, List, and HashMap.

1. If I am not using a lot of data structures and algorithms in my day-to-day work, how am I supposed to be good at them?

I am very good at finding solutions to problems but I am very bad at remembering a lot of things.

2. How does one even prepare for a subject as big as Android? The thing is vast. And asking trivial things about it won't be very useful.


In my experience, dynamic programming is overrepresented in interview questions (vs. other algorithm techniques). Everyone loves to ask dynamic programming questions; make sure you've done a lot of them and you'll look really impressive.
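For anyone who hasn't run into the genre, here's a toy sketch of the flavor I mean (my own illustrative Python, not any company's actual question): the classic "fewest coins to make an amount" problem, done top-down with memoization.

    from functools import lru_cache

    def min_coins(coins, amount):
        # Top-down DP: fewest coins from `coins` summing to `amount`, or -1 if impossible.
        @lru_cache(maxsize=None)
        def solve(remaining):
            if remaining == 0:
                return 0
            best = float('inf')
            for c in coins:
                if c <= remaining:
                    best = min(best, 1 + solve(remaining - c))
            return best

        result = solve(amount)
        return result if result != float('inf') else -1

    print(min_coins((1, 2, 5), 11))  # 3, i.e. 5 + 5 + 1

Once you've drilled enough of these, the recurrence-plus-memoization shape becomes almost mechanical, which is exactly why practicing them makes you look impressive.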


How is this much different than a paid smaller version of LeetCode?

When I was prepping I got the most value out of LeetCode for solo prep, followed by mock interviews with sites like Pramp.com, Gianlo.co, and PracticeCodingInterview.com.

I don’t know why tech companies don’t just admit that this is all pretty much standardized at this point. Just build a standardized test, or certification, and get it over with.


That’s a great idea. Tech interviews are a poorly implemented version of something like actuarial exams. Let’s formalise it and give people a certificate for it - Certified Software Engineer.

I’m only half joking. At least we’d only have to go through the process once.


So, as someone who’s been out of school for a million years, after my name and contact info, my education info is the second most important thing?

I don’t think so

Maybe this should be titled “Just Graduated? Some Useful Tips”


I stumbled across another amazing resource a while back - recorded mock interviews:

https://interviewing.io/recordings/Python-Google-6/

I'm not affiliated with this site, I just thought this was a great idea and well executed.

I used to lean toward the "studying algorithms, data structures, whiteboarding, etc. is useless since I'll never actually need them" ideology until later in my career when I realized that worst case (for me) I can take a break from building CRUD apps and refresh my CS fundamentals. I enjoy speeding up code and then asking myself, "can I do better?" each step of the way, trying to make further improvements.
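To give a toy example of what I mean by that loop (my own illustration in Python, nothing taken from the linked recordings): the same "is there a pair summing to a target?" check, first the obvious way, then after one "can I do better?" pass.

    # First pass: O(n^2) brute force over all pairs.
    def has_pair_with_sum_naive(nums, target):
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return True
        return False

    # "Can I do better?": O(n) using a set of values seen so far.
    def has_pair_with_sum(nums, target):
        seen = set()
        for x in nums:
            if target - x in seen:
                return True
            seen.add(x)
        return False

    assert has_pair_with_sum_naive([3, 8, 1, 5], 9) == has_pair_with_sum([3, 8, 1, 5], 9) == True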


What if people not being able to answer simple programming questions is a problem with the interviewers, and not the interviewees?


This is one of those ideas that sounds really sophisticated, but is just wrong.

The fact is that there are tons of people out there representing themselves as programmers who actually can’t perform basic tasks.

The HN population massively selects for competence, so people here have a hard time imagining what things are like from the interviewer side.


Notwithstanding that, interviewers could also be asking misguided questions. Which I find is often the case.


This thread has been a great discussion. We are a startup and did hiring ourselves. We gave 1) algorithms and system-design type questions, and 2) a mini project, like building a profile screen in React Native. With a sample size of 10, we found that people who finished the project quickly and reasonably well performed well at work, while candidates who passed the algorithms didn’t grow as fast.

The main thing that really makes a difference is problem-solving skill applied within a project.

Thus, we pivoted to building mini-project screening automation for companies to use in hiring. The site has backend-to-frontend tasks that come with a CLI for coding the project locally. The website is https://real.dev. We want to solve this interview problem for companies.


I'm going a different route with interviews and wonder if anyone has input... I'm a senior backend dev with 10+ years of experience. Breadth of projects, a master's in CS, open source work, team lead, top contributor, great references, etc. But I can't pass a coding interview, because I kinda freeze up during algorithm whiteboarding questions. I get requests for interviews daily and have been considering starting with a cover letter that basically says I'm not good at coding interviews, but love to talk tech (and see the other positives above). Could I do a homework assignment or anything else to show my worth?

Has anyone had any luck with an approach like this?


Put “I don’t do pop quiz interviews” on top of your documents and see what happens. I do this but haven’t been so blatant about it. Networking is the only other avenue.


> ... nobody needs to know:
> Anything less recent than 3-4 years

Really? I did the most interesting and challenging projects earliest in my career (late 80's to early 2000's) -- just due to the nature of the industry, I'd think that more recent project descriptions have undergone a certain homogenization that would make them less and less an indicator of talent and varied experience.

I can chat excitedly about early projects, but now people must all be saying "cloud blah JS blah containers blah blah bit-pipes and storage blah - oh yeah, and monads!"


Jezus, the number of hoops people are willing to jump through to work at a certain company. And it baffles me that I might want to jump through these hoops as well...



One of the goals of a technical interview (or any interview) is to find out your real character (are you lazy? do you break when you do not find a solution? are you creative?).

The quickest way to do that in 45 minutes is to put the candidate under pressure.

The best way to put a developer under pressure is to ask coding questions and expect some solution in a very short time.

Note that the goal is NOT to find out if you know the ins and outs of a specific algorithm. It is to discover how you think under pressure.


Just wanted to say thank you for this!


Curious as to why the geographical locations provided are only for the US and Singapore.


The author is from Singapore and is writing from the perspective of a graduate with FAANG jobs, addressing his juniors. Also, it looks like it showcases some software (Docusaurus) he made at Facebook.


Two outcomes for the technical industry:

1.) Everyone is studying these problems all of the time and they finally disappear.

2.) The other outcome is a dystopian field fueled by a race to the bottom, where everyone is practicing algorithm problems all of the time. If you read the Blind forums, some people are completing 500-1000 leetcode problems before heading into interviews.

I'm putting my money on number 2, which is where we already are. Can only imagine what this is doing to code quality...


I started leetcoding again this year because I want to jump ship and holy crap! I used to solve problems on leetcode 5 years ago (the last time I switched jobs) and it was pretty laid back. Nowadays, I'm seeing dynamic programming with 3D memoization arrays like it's something normal. It all started as a way to check whether someone knows how to write code or knows data structures and basic algorithms, but now it's at competitive-programming level.
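To give a sense of the jump: the old baseline was roughly "memoize on one or two indices", like the classic edit-distance sketch below (my own toy Python, not from any particular interview); what shows up now layers a third dimension of state onto this same pattern.

    from functools import lru_cache

    def edit_distance(a, b):
        # Memoized on two indices (i into a, j into b) -- the "2D" baseline.
        @lru_cache(maxsize=None)
        def solve(i, j):
            if i == len(a):
                return len(b) - j          # insert the rest of b
            if j == len(b):
                return len(a) - i          # delete the rest of a
            if a[i] == b[j]:
                return solve(i + 1, j + 1)
            return 1 + min(
                solve(i + 1, j),           # delete a[i]
                solve(i, j + 1),           # insert b[j]
                solve(i + 1, j + 1),       # substitute
            )

        return solve(0, 0)

    assert edit_distance("kitten", "sitting") == 3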


That's what happens whenever there is a competitive exam. Look at math olympiad papers from the 1970s and compare them to recent olympiad tests: the recent ones are tougher. It is the same with many entrance exams in China, India, etc.: older exams are easier than the recent ones.

People just master the foundations behind the old test material. Now that stuff has become today's trivia. So they need more advanced material to test the test-takers.


> some people are completing 500-1000 leetcode problems before heading into interviews.

Over the course of one year, I completed, classified, and commented on 200 leetcode problems. I also taught algorithms to third- and fourth-year university students not too long ago. I believe I write readable code, I know the language I'm using perfectly (at least for this purpose), I'm totally fine with complexity, and I know most methods involved in these algorithms, including more advanced ones such as KMP.

Yet... I failed my round of interviews at Google. After this preparation, I'm still not able to quickly solve an arbitrary leetcode problem in the context of an interview: on a whiteboard, with an interviewer at my back, in a stressful situation. I need to practice more if I want to get consistent results.

So I agree with your conclusion. We are competing with a lot of people who train using the same resources, including young graduates who have a lot of free time on their hands.

On the positive side, I'm thankful to Google for giving me a shot. Based on my resume (40+ with little experience in the software industry), I'm not sure I would have been interviewed at a more traditional company, let's say a bank.

To come back to my interviews: system design went very well. Algorithms went quite well too, but not well enough; I couldn't solve one problem, and I was a bit slow on another one.

The recruiter first told me I passed, and that they were going to find a team for me and make me an offer. But they finally asked me to re-take the algorithmic interviews a few months later, to "make a stronger point to the hiring committee".

Never heard from them since then, about 8 months ago. Recruiter doesn't answer emails. I think she moved to a different position. Not sure what to do now?


There is nothing you can do. Just as when you were new to the dating world, you are now new to the recruitment world: when the company wants you, they will send you emails, call you, etc. When you don't hear from them, don't even bother sending emails. That's why you should interview with multiple companies, in the hope that one of them will offer you a job.


> Can only imagine what this is doing to code quality...

I don't get why no one seems to consider the possibility that these sorts of interviews actually do get high quality engineers in the door.

I get downvoted for raising the question every time. But isn't it possible this interview style actually works, even though it doesn't resemble real coding and even though many of us hate it?

I have yet to see any compelling argument for why I should believe these interview practices don't work. And yet the fact that so many companies, with so many resources to change things up if they felt it was in their best interest, keep interviewing this way must at least suggest the possibility that maybe it works?


> I don't get why no one seems to consider the possibility that these sorts of interviews actually do get high quality engineers in the door.

That is debatable, to say the least. I'm both a hiring manager and on the market for a new job (so I am still solving leetcode problems in my spare time). After 3 years of hiring based on leetcode for technical competency, I can say that the quality of engineers is hit-or-miss. I've had people who wrote brilliant solutions to hard leetcode problems crash and burn when writing production code. Currently on my team, the most technical debt was written by someone who completely aced the leetcode stage and was pip-ed out a couple of months back. We even have an inside joke to look both ways before changing X's code. He easily landed a job at a unicorn and I'm really glad he's their problem now.

I really don't believe there is a strong correlation between competitive programming chops and being a competent engineer in a team environment.

We are currently changing our interview practices to ask questions which touch on more practical issues (like multi-threading, reviewing a piece of code, changing a piece of code, making a unit test pass, instrumenting code with metrics, etc.), because what I personally identified as a better signal was competency in specific types of leetcode problems such as LRU caches, O(1) data structures, iterators for common data structures, etc.
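For readers who haven't seen the genre, something like the following is what I mean by the LRU-cache style of problem (a minimal Python sketch of my own, built on OrderedDict; it's illustrative, not our actual interview question):

    from collections import OrderedDict

    class LRUCache:
        """Fixed-capacity cache: get/put are O(1); the least recently used entry is evicted."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()

        def get(self, key):
            if key not in self.items:
                return -1
            self.items.move_to_end(key)         # mark as most recently used
            return self.items[key]

        def put(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict least recently used

    cache = LRUCache(2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")        # touches "a", so "b" is now least recently used
    cache.put("c", 3)     # evicts "b"
    assert cache.get("b") == -1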


I guess the reason is that if it worked, we would already know about it and there would be a bullet-proof way to prove it.


Can you help me follow, because that seems like a leap of logic. Why would that imply there's a bullet-proof way to prove it?


Because if you can't prove it works, you cannot know it works. You can only assume then.


The companies using it have statistics showing that it works. They have no reason to publish such statistics, since that would help their competitors.


That's an interesting theory. But this problem is actually not confined to industry; it impacts academia too. Even just as a purely theoretical statistical problem, since there is enough data and easy access to that data, we should already have some research in this field which sheds light on the hiring issue.


You have everyone's interview scores, performance feedback and promotion histories in a database at a company with tens of thousands of employees. You also have the interview scores for everyone who failed the interview process. Put a statistician on that for a day and you will get a lot of significant data about your hiring pipeline.

It is not hard to do, the data just isn't public and such data will never become public. Therefore public researchers will always lag behind private ones, since the private ones have access to the interesting data.

Edit: Also it is not a theory, I have seen internal studies on this myself.


I didn't want to call it bullshit but you are not the first who wants to walk me down the bullshit lane.

Here is the thing: this discussion is not about scores or about candidate performance. It's about the question whether the same candidate would be better assessed with a technical interview over a non-technical interview. Since you are not even addressing the question at hand I call it bullshit.

Also, something for which there is no written proof and no consensus among at least a group of respectable people and/or institutions is just a theory.

And the third thing: you somehow assume that the public sector is an inferior player to the market and might not even have relevant data. That again is a theory and not necessarily correct; here are a few examples of publicly funded software engineering employers: CERN, NASA, and the US Army.


> It's about the question whether the same candidate would be better assessed with a technical interview over a non-technical interview.

I don't understand the problem here; all technical interviews are also non-technical interviews, since the candidate is still communicating with a human and not just doing problems on a computer. If we didn't care about the human interaction part we'd just put them in a room alone with a set of problems.

Also, I think you don't understand how much recruiters hate this process; they'd do anything to remove it, since they have no way to game technical interviews. So they work hard to change the interview process at Google to something softer, like we have in other fields, but the evidence points to soft interviews being worse.

And you might not believe me, but I definitely believe me and the people deciding how to hire people at these companies certainly believe in the studies they do, so there is no way they will change the process. These interviews are here to stay, and until we have some new methods nobody has tried yet it won't change. You can complain all you want, but the best companies will be using this process as they scale up since nothing else works at the moment. Some other things might work for small companies, but as soon as the founder can't interview everyone himself it breaks.


"Absence of evidence is not evidence of absence"

OP is just saying we can't prove it doesn't work.


Yes, and I am saying that we can't prove it works. So we cannot know that it works.


> Can only imagine what this is doing to code quality...

Why would studying algorithms and data structures affect code quality? They would similarly be able to learn to write quality code once they're inside the company, no?


This race to the bottom, as parent poster said, is from a generation of developers hyper-specializing in interview-style problems. These problems are tiny and self-contained and have a slick solution which can be regurgitated onto a whiteboard in about 30 minutes give or take.

While that is not a negative skill to have, it is also not a skill that I'd list anywhere in the Top-25 of most valuable skills for productive developers.


Just curious, could you list the top 25 (anecdotal?) valuable skills for a productive developer? Thanks.


Because these problems encourage you to write unreadable code. It makes sense when you write it, because you can fit it in your head, but you never have to revisit it after having passed the problem. It encourages one-letter variable names and other quick hacks in the name of speed.

They work against creating readable, understandable and debuggable code, which is much more important in general than being able to solve algorithmic problems you'll almost never see in real life.

I've seen this first-hand, where some people were brilliant at these problems but wrote the worst code imaginable. I would rather hire someone who can write clean and simple code and teach them how to solve these problems than the reverse.
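To make the contrast concrete, here's a toy Python example of my own (not code from any real interview): the same little function written "speed-run style" and then the way you'd want to find it in a codebase six months later.

    # Speed-run style: fine on a whiteboard, painful in a repo.
    def f(s):
        d = {}
        for w in s.split():
            d[w] = d.get(w, 0) + 1
        return max(d, key=d.get)

    # The version you'd want to maintain: same behaviour, readable names, stated intent.
    def most_frequent_word(text):
        """Return the word appearing most often in `text` (ties go to the word seen first)."""
        counts = {}
        for word in text.split():
            counts[word] = counts.get(word, 0) + 1
        return max(counts, key=counts.get)

    assert f("to be or not to be") == most_frequent_word("to be or not to be") == "to"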


> I would rather hire someone who can write clean and simple code and teach them how to solve these problems than the reverse.

Why? Teaching people to write readable code happens automatically during code reviews; teaching people how to think is a lot harder.


Because producing simple and good code is much more important.

It's quite condescending and narrow-minded to say that solving algorithmic problems is thinking, but writing clean and simple code isn't.

Writing readable code is much more than just formatting code and using decent variable names. It's about simplifying your design just enough. Code reviews are not enough to teach someone this.


> Because these problems encourages you to write unreadable code.

First off, no, writing good code in an interview wins you additional points. Second, why do you think candidates can (or will) continue doing that on the job? New employees aren't allowed free rein to check in code from day 1 at most places - trust has to be earned. And I don't think any decent company allows check-ins without review.


Code reviews are good at catching oversights and at giving design & implementation pointers to people acting in good faith. Senior talent doesn't have the time or energy to push back on all of a systematically incompetent person's code until it's good. See the bullshit asymmetry principle. And once you hire two of these people, they'll just review each other.


> giving design & implementation pointers to people acting in good faith

Why automatically ascribe bad faith to people who study for coding interviews? They went to all that trouble to get better at something, so they're obviously diligent and seek self-improvement.

> Senior talent doesn't have the time or energy to push back on all of a systematically incompetent person's code until it's good.

That sounds like a problem with the company's timelines or priorities. If senior talent is so strapped for time that they can't insist on decent designs upfront, then they likely can't hire the right people either. Because even that takes time and energy. Mentoring juniors is part of the job for senior engineers.


The time to review a diff is proportional to how much work it needs. Code reviews that are within the normal bounds of “needs mentorship” don’t cost that much. Productivity is noticeably down across the board during intern season, though.

It’s more than a full time job to push back on all of a bad hire’s output, and people are accountable for their own projects as well. Things inevitably get to “fuck it, good enough.”


Because you get the college applications problem, where you stop getting the people who have a real passion for programming and coincidentally have problem solving and algorithms skills, and start getting people who are really good at problem solving and algorithms and may or may not have a passion for programming.


> start getting people who are really good at problem solving and algorithms and may or may not have a passion for programming.

Why is "passion" so important? And what even is passion? For professionals in every other field, competence, ability to deliver results, and getting along with people, are what matter. Many pros are passionate, in the sense of loving their work, but passion isn't a prerequisite for being a pro.


Oy vey. I didn't mean passion in the "I'll work 80 hours a week for little pay, mister!" kind of way. I misspoke. I meant that before, if you based your interview process on data structures and algorithms, you'd get competent professionals who just happened to be good at algorithms. Whereas now you'll get a bunch of people who have specifically trained to be good at algorithms, which has no bearing on competence.


Why can't people who study to get good at algorithms also study to write better code? They've already demonstrated their aptitude for learning difficult things.


For one, interviewing requires extremely short-term knowledge. Almost everybody I know who plays the interviewing game learns just enough to pass the interviews, then immediately forgets it until the next time. So it's not really comparable to learning and refining a difficult topic over the course of a few years, as in the case of software development practices.

Also, it's two completely different skillsets! There's something very weird about interviewing for X but then demanding Y. I don't ask my primary care physician how good he is at surgery and then say, "he knows how to learn difficult tasks, so we're good".


> Almost everybody I know who plays the interviewing game learns just enough to pass the interviews, then immediately forgets it until the next time

Presumably the first time they learned computer science fundamentals it took them some time right? People usually go to college for several years to learn this stuff - or 6-month bootcamps for 40-60 hours/week. Subsequently of course it'll take less time to refresh their knowledge.

> Also, it's two completely different skillsets! There's something very weird about interviewing for X but then demanding Y

I'm not disagreeing here. I just think most companies that use this interviewing style figure that X (easy to measure in 45-minute interviews) is a reasonable proxy for Y (much harder to measure). And Y (writing good code) is easier to train new employees to do as long as current employees maintain standards in code reviews.


Maybe they can, but why would they? Algorithms interviews are the only thing with an actual bearing on their future earnings.


> Maybe they can, but why would they?

Professional pride? Wanting to get better?

> Algorithms interviews are the only thing with an actual bearing on their future earnings.

Do they not want promotions or pay increases at their current jobs before leaving? Usually you have to do good quality work to get promoted.

Please read my original comment: https://news.ycombinator.com/item?id=20727948. People who lack these skills can learn after getting hired. Any software company worth working at has a code review and design review process.


Most programming is not solving tricky puzzles, but grinding out code.


> most programming is not solving tricky puzzles, but grinding out code.

Sounds like most jobs need dispassionate guns-for-hire who'll do excellent work, no matter how tedious, get paid, and go home. In other words: pros.


Because these people are just memorizing coding problems. It's like a spelling bee.


Thanks. This is gold.



