
I wonder that as well. I don't see any significant connection between Leetcode or Leetcode-style questions and anything resembling what a company theoretically should be looking for in an engineer. The only thing I can come up with is that being able to memorize or otherwise internalize so many different problems is a proxy for intelligence (general mental ability). If so, I suspect it's a fairly weak proxy.

But, is that it? In the US, at least, not a lot of employers will give a straight-up test of general mental ability (an IQ test), likely because they see it as potentially attracting lawsuits. This is because IQ tests show a well-documented racial disparity in scores, which has not been shown to be correlated with anything genetic. Rather, the problem is traced back either to differences in family socioeconomic status growing up, or to cultural bias issues with the tests.

Everything in the previous paragraph can be verified in a few minutes of Googling, so, I'm not going to litter this comment with citations when the obvious keywords will produce the information mentioned here. But, everything else, including what's to follow this, should be considered pure speculation on my part.

Given all this context, I can only think of 2 obvious reasons companies resort to using LC or LC-style interviews:

1. As a proxy for a test of general mental ability, as previously mentioned, or

2. Cargo culting, as with the whole "How would you move Mt. Fuji?" phenomenon.

I don't personally have a great deal of insight into which it is, or whether there's a third explanation that I'm completely missing. I also don't know whether there's any research supporting or refuting the idea that LC puzzles provide a valid interview signal for software engineers. All I really know is that these types of problems exercise skills that are not what one uses on the job as a SWE, so it seems rather illogical to use them as a main component of your hiring process.




Due to a series of coincidences I've worked at 3 companies in the hiring space in my career, and I've personally performed over 400 tech interviews. I've also spent 2+ years teaching programming. I feel like I can answer that question - though I'm sure plenty of my colleagues would disagree with my answer.

Essentially I agree with you - most of the reason is cargo culting Google.

Assessing software engineers is hard. A couple of decades ago, Google (which at the time was a tech darling, and the #1 place to work) had a saying: "A players hire A players. B players hire C players". Essentially they were terrified of hiring bad people, because they figured the company would inevitably go downhill if they did. Their hiring process was essentially an expression of this idea - it was based on the philosophy of "we're all A players, but are you as clever as us?". Interviewing at Google at the time meant sitting through about 6 back-to-back whiteboard interviews with programmers. Each person would spend ~20 minutes asking you their favorite puzzles and things, and seeing how you did. Nobody can say this because it would be illegal, but it was in many ways a programming-themed IQ test. Good questions were the ones which filtered candidates out. And it's easy to recommend against hiring someone if they couldn't reverse a binary tree on a whiteboard in 20 minutes. (I mean, that's easy for me! They must be a C player.)

Other companies followed suit. I mean, hiring is hard. Why not just copy Google's approach? Microsoft did something similar. Facebook was full of ex-Googlers, etc.

The problem is that being able to reverse binary trees doesn't correlate with how well you can manage a database, style a form, fix a memory leak or talk to your team. And the people who only had those useful skills are unhireable. Oops!

In my opinion, the right way to interview programmers is to make a list of skills you want your programmers to have (coding, debugging, CS knowledge, communication skills, architecture, ...) and then find ways to assess each one. For example, to assess debugging you can give your candidate some pre-prepared code with failing test cases and see how many bugs they can fix within 30 minutes or so. But that requires preparation and test calibration. Most companies struggle to convince their engineers to interview someone for 20 minutes - let alone spend a few days putting together a problem like that.
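To make that concrete, a debugging exercise like that might look something like the following minimal sketch in Python (the function, the test, and the seeded bug are all hypothetical - this is just to show the format, not any exercise I've actually used):

    # A pre-seeded debugging exercise (hypothetical example). The candidate
    # gets this file, is told a test fails, and has to find and fix the bug.

    def median(values):
        """Return the median of a non-empty list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2 == 1:
            return ordered[mid]
        # Seeded bug: floor division drops the .5 on even-length lists.
        # The fix is (ordered[mid - 1] + ordered[mid]) / 2.
        return (ordered[mid - 1] + ordered[mid]) // 2

    def test_median():
        assert median([3, 1, 2]) == 2        # passes
        assert median([1, 2, 3, 4]) == 2.5   # fails until the bug is fixed

    if __name__ == "__main__":
        test_median()
        print("all tests pass")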

Knowledge of data structures and algorithms is useful, and it is a positive signal about a candidate. But (depending on the role) I'd weight it below communication skills, raw coding skill and debugging. Those are all much more valuable. We need to start treating them as such.


>Essentially they were terrified of hiring bad people, because they figured the company would inevitably go downhill if they did.

I heard that Google search is now performing badly in many key areas.


It is indeed, by my own subjective observation.

But that doesn't mean it's the fault of the engineers - and most likely it isn't.

Rather, the product people (who are also not dummies) basically realized that "dumber" results were more profitable, for various reasons -- most likely to do with "engagement" and prioritizing what 90 percent of the users want versus the needs of the other 10 percent.


Yet one should be very skeptical about such claims. We don't know what parameters they are optimizing for, so how can we even assess its performance relative to them?


I hate medium and hard LC questions in interviews - it's very hard for me not to get panicky - so I'm surprised I'm going to argue against what you're saying:

If you want to test raw coding ability, asking someone to implement some very basic graph or tree traversals is a pretty good way to see if they know the basics of conditionals, loops, recursions, maybe hashmaps.
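To give a concrete (made-up) example of the kind of basic question I mean - a breadth-first traversal in Python covers loops, conditionals, and hashmap/set usage in under twenty lines:

    # Sketch of a basic traversal question: return the nodes reachable
    # from `start` in breadth-first order, given an adjacency dict.
    from collections import deque

    def bfs_order(graph, start):
        visited = {start}
        order = []
        queue = deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for neighbor in graph.get(node, []):
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        return order

    # A small directed graph as a hashmap from node to neighbors.
    g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    assert bfs_order(g, "a") == ["a", "b", "c", "d"]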

If you want to see someone debug something, and you make them run their code, it will inevitably fail or not compile the first time... so they'll have to debug.


> If you want to see someone debug something, and you make them run their code

I hear what you’re saying, but that doesn’t assess what I want to assess. We all have experience debugging our own code, that we just wrote. But how well can you read someone else’s code? How well can you find and fix bugs in it? It’s a different skill! And it’s vital in a team setting. Or when you’re depending on complex 3rd party packages (which is most of the time).

I want coworkers who can read the code I write and debug it when it breaks. That’s a much more useful skill than what leetcode problems train.


Funny thing is that a company of A players can go downhill not because of its engineers, but because of its management. How many companies have we seen in tech go under because of bad engineering vs. bad management?


> For example, to assess debugging you can give your candidate some pre-prepared code with failing test cases and see how many bugs they can fix within 30 minutes or so. But that requires preparation and test calibration.

The tricky thing is that no matter what test you contrive, it's more likely to say something about the developer's recent experience than about their competency in general.

For example, I'd say I have pretty good intuition for when to just read code, or sprinkle printfs, or fire up valgrind/gdb/asan when debugging C. Which I guess is to be expected, given that I've been doing C almost exclusively for many, many years. I'd do pretty badly with Haskell; the last time I really used it was around 13 years ago. The next guy might be a bit lost with gcc's error messages, since the last time they used C in anger was 5+ years ago for a small project, but they'd do well if you hand them Python code that uses a well-known unit test framework or whatever. I guess that's fine if you're a run-of-the-mill CRUD company looking for a "senior <foo-language> developer", but not if you're after general competency.

You can try hard to make the debugging be more about the system than about the implementation but it's not easy to separate the two. You can make different tests for people with different backgrounds but that only makes calibration harder.

One trick I've seen a company do is deliberately pick a very obscure language that most people have never heard of. That can eliminate some variables but not all of them (I took the test and did well but I also spent a fair amount of time studying the language to figure out if it's suitable for a purely functional solution before handing in a very boring piece of imperative code). Ultimately it wasn't much more than a fizzbuzz.

And if there's puzzling involved, I'd say there's an element of luck involved. At least that's how I perceive the subconscious mind to work when you're thinking about a problem that isn't immediately obvious or known to you beforehand. Which path does your mind set you on? Are you stupid or incompetent if it happened to pick the wrong one today and you spent 10 minutes thinking about it too hard? Are you super smart if the first thing that came to mind just happened to be the right one and you didn't doubt yourself before blurting out an answer?

If you're lucky and know the problem beforehand, you can always fake brilliance: https://news.ycombinator.com/item?id=17106291

That is to say, test calibration is hard and there are so many variables involved. It follows that there's no obvious right way to conduct interviews. And I guess it follows that companies who need people (and aren't necessarily experts at interviewing) effectively outsource the problem by conducting the same type of interviews they've seen elsewhere. Maybe that's less cargo culting and more just doing whatever seems popular and good enough?


The best solution I’ve seen to this (if you have the time, and are interviewing for a variety of roles) is to have the same code (& same bugs) in a variety of languages. And let the candidate use their own computer and their own tools to work on the problem. If you’re hiring for a python role, get the candidate to debug python code!

I’ve done hundreds of interviews like this, and it’s fascinating watching what people do. Do they read the code first? Fire up a debugger? Add print statements and binary search by hand? I had one candidate once add more unit tests, to isolate a bug more explicitly.

After hundreds of interviews I still couldn’t tell you which approach is best. But if there’s one trend I noticed it’s that more senior people (& older people) seem to do better at this assessment. Which is fascinating. And that implies it’s not simply a test of what tools the person is familiar with most recently.

As for luck, I agree this is a problem. It’s mitigated somewhat by having a bunch of easy bugs to fix instead of one hard one. But even a debugging problem like this should be one of a set of assessments you perform. If you fail 5 small assessments in a row it’s probably not luck.


It's a weak proxy for general mental ability, but it's also a signal for high conscientiousness -- someone willing to spend a couple of months studying for the test is showing that they have the ability to work hard on something over some length of time.


Why not bring back the civil service exams of Imperial China, then? Surely one's willingness to undertake extended study of the classics of Chinese literature, and to master a stylized form of discourse about them is also a signal of conscientiousness, isn't it? As a "bonus," this is likely to take far longer than the few months people spend practicing Leetcode.

https://en.wikipedia.org/wiki/Imperial_examination

https://en.wikipedia.org/wiki/Eight-legged_essay


When I was in grad school for CS, I recall talking to the dean of the department who said that his preferred approach for recruiting CS students would be to have them write a five page essay on any topic of their choice. Anyone who could write a coherent essay would probably make a good researcher. Unfortunately, he was never able to put this into practice - the university administration only considered grades and GRE scores.


Does grinding for months even work? The point of this post (and others like it) is that you can spend years doing leetcode without getting better. In comparison, I score far below most of my peers on conscientiousness but I find leetcode style problems easy. I'm a terrible hire in lots of roles because most companies' problems are boring. I have a history of getting bored and quitting. But I sail through these interviews.

If conscientiousness is what companies are going for, I think they're still interviewing wrong.


I think it works to some extent. There's a limit to how much you can get out of yourself, and a lot of dumb luck involved in getting interviewers who are actually reasonable.

In my case, when I grind those dumb problems I quickly go from needing 45+ minutes to implement a working sort function with file I/O to banging it out in 5-10 minutes. I dunno if I'd ever get good enough at those problems to make it into Google unless I got lucky, but I've gotten offers from the other FAANGs over the years.
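(For reference, the drill I mean is roughly the following, sketched in Python with a hand-rolled merge sort; the file names and format are made up:)

    # Read numbers from a file, sort them without sorted(), write them out.
    def merge_sort(nums):
        if len(nums) <= 1:
            return nums
        mid = len(nums) // 2
        left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    with open("input.txt") as f:
        nums = [int(line) for line in f if line.strip()]
    with open("output.txt", "w") as f:
        f.write("\n".join(map(str, merge_sort(nums))))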

The whole thing is pointless, though. If a junior engineer asked me for advice, I'd tell him to get whatever job he can and work on saving money/building a startup in his spare time.

Grinding for months to get into Google is so soul-crushing, and working for any big company is just awful in the long run compared to being independent.


I think leetcode is just like anything else you practice - in order to actually improve you need to practice the right way (or in the OP's case, with the right state of mind! Being burnt out is a fundamental problem which needs to be fixed first!)

I started out thinking I was pretty smart but got blasted by a fairly simple question in my first screening interview. I started looking at random leetcode questions, but that didn't really work - I soon figured out that I needed to learn the concepts one at a time. I think that is the way to do it: get a list of general topics, learn about each one, then practice just those questions until it clicks.


It's also a weak proxy for high conscientiousness. More often than not, it's actually signalling that someone is young enough not to have other commitments, contributing to age discrimination without even meaning to. It also does so in that older engineers are less likely to have CS degrees, simply because CS programs were rarer and less well-known before the 90s.

That's not sour grapes, either. I have a CS degree and worked through math and logic puzzles for fun from the time I was 6, well before I ever knew what a computer was. Interviews like this are practically made for someone like me, but even then, the signal you're getting is that I'll gladly do something I like doing anyway, not that I'm going to show similar conscientiousness when it comes time to get into your company's daily grind, which actually has nothing to do with solving interesting algorithm puzzles.


I'd say it's a signal for a rather mediocre kind of conscientiousness -- that is, for someone willing to put in long hours to achieve a certain goal, for sure -- but for a goal they almost certainly believe to be fundamentally pointless.

And as such, is likely to cause them to develop cynical attitudes about the industry -- and about the companies requiring these tests, in particular.


I thought my ability to build a prototype in a day showed that. Maybe I should have learned every variation of FizzBuzz instead of learning how Node works.


Same as in any profession, the reason for the tests is to sort for compliant employees. Check out the book “Disciplined Minds” by Jeff Schmidt.

https://www.bmartin.cc/pubs/01BRrt.html


> This is because IQ tests show a well-documented racial disparity in scores, which has not been shown to be correlated with anything genetic.

That isn't true; the correlation is visible when you look at the scores of people with mixed ancestry.





