Google recruiters call me a lot. I think I'd do a good, if not stellar, job working there. I've passed multiple FAANG interviews and been very successful as a senior developer.
In my email I have an "interview prep packet" from them that essentially tells me to brush up on algorithms and read Cracking the Coding Interview to prepare for their interview process.
I'm fairly happy in my job. If they offered more money or a really interesting project I'd consider working for them. But I'm pretty lazy about redo-ing college algorithms class during my free time at home to go work there, so I probably won't.
There's an opportunity cost with interviews like this where an M.S. and long career of getting shit done counts for very little and memorization of undergrad level topics that you can look up in two minutes in Knuth if you have a problem that requires it can make or break an interview.
I've made a career fixing a ton of horribly shitty, inefficient code that's been produced exclusively by people who pass these interviews.
You wouldn't be a good fit for Google. With their algorithmic interviews that require college-grad level of studying, they filter for people that are ready to follow orders without complaining.
That's who they want to hire at the end of the day: some coders that don't get too critical about their job and do what they are asked to do, even if it is repetitive, stupid and doesn't really make sense (such as re-studying algorithms implementation details for two weeks before an interview when it can be looked up super-easily online).
I was a naval officer in a prior life, and my current manager loves that I get the job done, whatever it is, without complaint.
I'm pretty much the opposite of what you think, so if my desire to study for the algorithm interview is your litmus test for that, kinda proves my point.
Not everyone that would be good for Google has a burning desire to work for Google. Google might want to consider that.
To make it clear, I absolutely hate coding interviews that make candidates lose so much time restudying. I think some critical thinking is absolutely needed, and way too many engineers lack it (especially those swallowed up into FANGs).
What I described in my previous post is a credible explanation that my group of engineer friends came up with for why all the FANG companies pursue those heavy-memorization algorithmic interviews.
To my mind, the more likely explanation is that they would simply get too many false positives if they didn't use the algorithm stuff to filter potential hires. You lose a lot of potentially good hires that way, but the pool you're left with are all of a certain intelligence level. Whereas, if you don't use the algorithm stuff to filter, it's really hard to figure out who is even intelligent enough to do the job.
Not so much intelligence as applied diligence on top of intelligence, is my sense. If you want what they're offering and don't want to jump through their hoops for it, then they don't want you. For those that balk at the hoop-jumping as beneath them, there are probably additional characteristics that come with that attitude that they're trying to filter out.
Given that many job applicants apparently can't write a Fizz-Buzz implementation, I'd say that being able to implement an algorithm is probably a reasonable way to cull the herd by a hefty margin.
Print numbers from 1 to 100, except print Fizz for numbers evenly divisible by 3, print Buzz for numbers evenly divisible by 5, and print FizzBuzz for numbers divisible by both 3 and 5.
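A minimal sketch of that spec, here in Python (the thread doesn't specify a language, so this is just one illustration):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print("\n".join(fizzbuzz(100)))
```

The point of the question is that it requires no memorized algorithm at all, just basic control flow.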
There's a correlation between the belief that memorizing algorithms correlates with intelligence and the application of algorithm questions in interviews.
If you have two great engineers in a kitchen, discussing a relevant problem, you want there to be synergy. That synergy is broken when one of the engineers has to Google how to reverse a binary tree.
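For context, the question being alluded to (usually phrased as *inverting* a binary tree) amounts to swapping left and right children recursively. A sketch in Python, using a hypothetical minimal `Node` type:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def invert(root):
    """Mirror the tree in place: O(n) time, O(height) recursion depth."""
    if root is not None:
        root.left, root.right = invert(root.right), invert(root.left)
    return root
```

Whether being able to produce this without Googling actually predicts engineering synergy is, of course, the whole debate.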
It is also a safe place to work for the really exceptional engineers. They can talk freely about complex computer science, without getting blank stares or having to dumb it down. Otherwise it gets frustrating fast.
CS is way too big for any individual to know it all in depth. If it really is a “safe place” because the employees can talk about complex topics without ever losing anyone, that would imply that they’re all specialized in roughly the same topics and severely lack organizational breadth. I kind of doubt that’s really true.
You can have organizational breadth with smaller teams. You pair a great programmer with an exceptional programmer, and it helps if they are specialized in roughly the same topics.
The other commenter was describing it as a “safe place” where you can talk about complex CS topics without confusing people. Unless your teams never talk to other teams, that doesn’t fit.
IQ tests are a form of convergent preferential bias and they don’t test for abstraction or creativity which is more than half the definition of intelligence.
They don't test for creativity (though seeing a correlation between IQ and creativity would be interesting), but they almost certainly test the ability to deal with abstractions. That's pretty much what an IQ test is! What reason do you have for thinking otherwise?
They test with puzzles that appear as visual shape complexities, which require use of the visual cortex. That is something a person might refer to as visual abstraction. That isn't practical abstraction, such as requiring a person to form an answer in the absence of acceptable criteria (to abstract, or form something new, to solve a problem).
It's not readily apparent to me that those two abilities should be distinct. It seems plenty likely that they would be at least highly correlated. While your point about the visual cortex may be relevant, I don't think that's the whole story.
My understanding is that different IQ tests rely on different kinds of questions (i.e. not all involve visual shapes; some involve word/logic puzzles). The point is that the scores for all IQ tests correlate very highly (and thus suggest that there exists some common factor).
It means a bias of converging toward a tester's preferences. Do people excel on the few narrow performance measures you find most correlated with intelligence?
There are divergent aptitude tests that measure the quantity and diversity of answers to a given question, as opposed to the one desired answer. Divergent tests are rarely administered, but they are a stronger measure of creativity and problem solving, which is typically what people actually mean when they say intelligence. The reason convergent testing is preferred over divergent testing is that it is easier to measure, and those simplified measures are easier to compare.
Can you provide a link for divergent aptitude tests? I'm afraid my search results all have to do with some young-adult fiction novel (called Divergent).
How much do such tests correlate with things like (for example) creative achievement?
> Testing or measuring procedures cannot be determinative in employment decisions unless they have some connection to the job.
IQ tests are not directly related to the job, and so are illegal according to that ruling. Coding tests are directly related, which is why they get a pass.
You didn’t read the opinion and you added the word “directly” to the summary.
Do you scan source code and draw firm conclusions about what it does based on skim reading the first comment you see?
Perhaps my old contracts prof could have a second career as a google interviewer. (He was notorious for cold calling people that hadn’t briefed their cases and eating them alive.)
> my group of engineer friends came up with on why all the FANG companies pursue those heavy memorization algorithmic interviews
I think the most likely explanation is that the "elite" of CS grads usually do competitions like ACM ICPC and olympiads, which are full of problems like those watered-down interview questions. As there is no real authority on what makes a developer good, but there was a measurable outcome from those programming competitions, leading to high status and pride displayed by winners/participants, it was simply taken from there and dumbed down to fit into interviews. Some top colleges even have special prep courses for those competitions, and compared to them, FANG interviews are super trivial.
> With their algorithmic interviews that require college-grad level of studying, they filter for people that are ready to follow orders without complaining.
Spot on. Certain companies ask silly questions, have candidates perform tedious exercises, and then, in the debrief, investigate whether the candidate complained.
(disclaimer: I worked for Amazon and never saw this pattern in the company)
some coders that don't get too critical about their job
Couldn't be farther from the truth.
As someone who works there, I've noticed a general trend of animosity towards Google. It seems to be fueled by the fact that people feel a sort of inferiority complex when they don't clear an interview, or feel they wouldn't be able to crack it if they ever gave it a shot. This leads to an overcompensation of attitude in the other direction, a sort of "sour grapes" narrative where there is an effort to downplay the prospects of working at Google or to ridicule the people working there, like you did just now.
A lot of people are sour at FB/Google simply because of how the companies represent themselves to users. I can't look at Google/FB without bringing up Gmail, Maps, Search, etc. and their utmost desire to track every nanosecond of my life and understand me better than I do myself... just to earn money from me. I couldn't care less about the hiring processes there, since where I live they don't pay more than other businesses, and everybody knows how 'great' their hiring experience is.
With banks, for instance, they are at least honest about how they earn from customers like me, and I don't have any unreasonable moral expectations of them.
My comment wasn't trying to ridicule anyone, but to explain that big companies with an extremely large number of coders need a very specific heads-down type of personality. As can be seen in this whole thread, plenty of people simply don't have a desire to work at big FANG companies, and even less desire to study for multiple days/weeks for an interview that doesn't really help their day-to-day job.
>it seems to be fueled by the fact that people feel a sort of inferiority complex when they don't clear an interview or they feel they wouldn't be able to crack it if they ever gave it a shot
Some of the most talented people I've ever met have failed Google interviews. As in far, far more talented than the majority of folks I know who do work at google. No amount of bullshitting is going to convince me that Google's False Negative interviewing system is the correct one. It simply bypasses too many talented candidates even beyond the reasons generally mentioned above.
One guy in particular I know is now a CEO of a company which, ironically enough, employs several former googlers. Yes, I know Google doesn't give a shit they missed on this individual. But to mis-characterize the anti-Google-interview narrative as "sour grapes" sounds a bit like you're drinking their champagne.
I've met my share of Googlers who are lazy, don't know how to solve problems (as opposed to write code itself), or still need help with basic tasks after several years on the job.
You just can't test for some qualities like focused, persistent problem solving and the desire to find ways to speed up your work.
I think it's possible for both things to be true at the same time. Yes it's a selective meritocracy, and it's also true that the process unnecessarily alienates people. It comes down to recruiters being given the very difficult task of finding candidates for a process with such a high rejection rate, and the added stress of being given very specific rules about the things you can't be transparent about. The overwhelming majority of job seekers are mature adults who can handle rejection and constructive negative feedback. It's the sense of having wasted one's time that generates the "sour grapes" feeling.
There is also the general attitude of Googlers/Facebookers who brag a bit too much about working at Google/Facebook, as if it were the dream everyone was looking for.
There is nothing more annoying than discussing with a Googler who tries to convince you that you should apply and work for Google.
I have no idea whether I could study hard enough to work at Google. What I do know is that I don’t like large companies and I have no desire to move to the west coast.
> What I do know is that I don’t like large companies
I think this is a key thing to learn about oneself. Very large companies often have a lot of cachet and can provide opportunities that small companies cannot (scale is scale, after all).
But small companies (not even start-ups, just companies with < 100 employees) can provide a different kind of opportunity:
* Interaction with different parts of the business
* Opportunity to wear multiple hats
* Less likely to be in the Bay area
* Nowhere to hide incompetence
Of course this isn't every small company, but I have worked in a few that were like this.
It is all of that. And while, after 20+ years developing professionally (and developing as a hobbyist since middle school), I've never felt burnt out from continuously learning, I was burned out by big-company politics and bureaucracy after only three years when I worked at a large Fortune 10 (non-tech) company.
I think the original point of these interviews was to see how much the candidate actually learned or retained out of university.
What happened was that the interview questions inevitably got leaked and accumulated (a natural result of a heavily indexed and centralized internet), and it became a race to the bottom for candidates.
Imho the test may have worked in years 1-4 of Google, but it no longer flies, since college kids spend an eternity studying these questions at home. It's like the SATs all over again for these kids.
I still think these tests are generally good for gauging how good somebody is. If you somehow got through CS without even vaguely knowing how to 3-color a map (not talking a perfect answer here, just a vaguely correct intuitive explanation), even after 15 years of work, something isn't right. These sorts of questions will definitely weed out your local web-dev baddy, or even dev-bootcamp baddy, which Silicon Valley is starting to be flooded with now.
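For concreteness, a "vaguely correct" answer to the 3-coloring question is plain backtracking: try each of the three colors on a node, recurse, and undo on failure. A rough Python sketch (exponential in the worst case, which is expected, since graph 3-coloring is NP-complete):

```python
def three_color(graph):
    """Try to 3-color an undirected graph given as {node: set_of_neighbors}.

    Returns {node: color} with colors 0..2, or None if no 3-coloring exists.
    """
    nodes = list(graph)
    colors = {}

    def fits(node, c):
        # A color fits if no already-colored neighbor uses it.
        return all(colors.get(nb) != c for nb in graph[node])

    def solve(i):
        if i == len(nodes):
            return True
        node = nodes[i]
        for c in range(3):
            if fits(node, c):
                colors[node] = c
                if solve(i + 1):
                    return True
                del colors[node]  # backtrack
        return False

    return colors if solve(0) else None
```

A triangle is 3-colorable (each vertex gets its own color), while the complete graph on four vertices is not, so the function returns `None` for it.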
You just described practically any corporate job, e.g. in banking (where it's especially turned up to 11 due to massive bureaucracy, over-processing, and very tall pyramidal management structures).
I'm in your boat. I get hit up by Google 3-4 times a year. I was interested once and did the pre-screening. When I got their prep packet I looked it over and then promptly canceled my interview. I wasn't going to review stuff I've never used throughout my 10+ year career.
I'm a big fan of not wasting time, which is why I get stuff done at work and have been promoted twice at my current company in the past 3 years. As a sibling commenter suggests, if Google wants employees who blindly do what they're told, then I wouldn't be a good culture fit. I was taught critical thinking skills in school. Respectfully questioning my superiors' plans from time to time has been a valuable skill.
I have a technical phone screen with Google this week and I have the same attitude as you. So I'll probably fail the interview, but I'm doing it just in case they ask a real-world problem, which I'm pretty good at.
No, but they certainly have no use for SWEs thinking outside the box in ways beyond engineering work. Such as recognizing that interviews don't work well to predict ability and success. We engineers are supposed to shut up and conduct interviews and build interview tools, not wonder if the whole thing is a case of the emperor has no clothes.
That isn't really unprecedented throughout the history of work. There are, have been, and will be many fads where the emperor was eventually found to be wearing no clothes. This particular interviewing style is just one of many, and it's already on the way out as people learn they aren't getting very far by cargo-culting Google (e.g. see Stripe and Microsoft's new interview processes).
What I find weird is why does anybody want to work at Google? I use their tools all the time at work. If you go to console.cloud.google.com... I mean... Wow. Just yesterday I was trying to find the logging for our cloud endpoints on Stackdriver. They should use that as their interview question ;-)
And this is the big thing I've noticed about many Google apps (especially dev tools): they are all incredibly (for want of a better word) hacky. And the documentation... while I can very much appreciate that they are serious about it and work very hard, it's a huge mishmash of marketing speak ("It washes your socks and makes dinner for you!") with the crucial information you need hidden away in secret corners. I literally have to use Google the search engine to navigate any of the documentation.
I don't mean to be so negative. It's not that it's so terrible in reality, it's just that it's not anything particularly great. I literally can't think of a single offering they have that I would aspire towards. There are lots of smaller, hungrier companies making much better products. And by extension, I think if you work at those smaller, hungrier companies you have a better chance of learning more, becoming a better developer and even being happier with your job.
So the thing is, unless I'm alone in my assessment, I think the main things people are looking for from a job at Google are status and money. Additionally, I think a lot of young people believe that talented developers mostly work at famous companies. In my experience, this is the opposite. Back when Microsoft was the powerhouse, I knew a lot of MS developers. Some were amazing. Most were average (as you might expect when a company is hiring thousands upon thousands of developers). However, you had a much better chance of actually working with someone amazing if you got a job at a small shop. I still think this is true.
Personally, I don't complain about the "You have to be this clever to ride" interviews. Yes, they self select for people who are not like me (bulldogs that work away obsessively at problems until they are solved without using any particular magic insight). Yes, it means that I'm unlikely to get paid at the very, very top of the payscale (heck, if I wanted to get paid more, I would have been a lawyer -- I want to write code). It just means that it's that much easier for me to find companies that are a good fit for me. I don't really see a problems with that.
> What I find weird is why does anybody want to work at Google? I use their tools all the time at work. If you go to console.cloud.google.com... I mean... Wow. Just yesterday I was trying to find the logging for our cloud endpoints on Stackdriver. They should use that as their interview question ;-)
Working for Google is the only way to get bugs in Google products fixed, or your feedback even listened to. ;)
I'll be honest, I complained about my total lack of ability to find anything in the documentation and they emailed me back to ask for suggestions on what to fix. I never got back to them. I really should find time.
It was funny because in my frustration, I left a comment on the last page I looked at, and they responded with the completely reasonable question, "Why were you looking at that page? It doesn't contain any of the information you were looking for." It was such a great summation of what my problem was that I had difficulty finding an appropriate response :-). Possibly this is too unfair, but I felt like the question, "How should I have found the information I was looking for?" was something that had never occurred to them. It's pretty applicable to virtually everything I've used from Google. It has that feel of, "If you don't already know, then you don't deserve to know".
With all the Android privacy issues researchers keep uncovering, I feel like Google has really driven the fraction of the labor pool that cares about protecting user privacy directly away from itself.
> they are all incredibly (for want of a better word) hacky.
Yeah, I haven't seen much of anything to feel inspired by either (although I do think the Google Maps team has put out a really solid product). But I have to deal with the Android SDK and other Google libraries for Android, and "hacky" seems like the best single-word description for that stuff too, IMHO.
What makes me laugh about the Android documentation is that even though Google's mission is to organize the world's information, the best Android documentation and guidance I can find is organized by Stack Overflow.
I'm probably responding more to this thread than I should, but I'm in a chatty mood today :-) I think there are a lot of factors, and this is almost certainly one of them. Having worked in a couple of big, wealthy, and famous companies before, one of the biggest problems is that they are always making acquisitions. One day you're building X and then upper management thinks, "Woah... X + Y would be awesome. But we don't want to spend time to build the Y part. Let's acquire a company and glue it onto X." Usually that happens with great grumbling from the programmers, who complain that X and Y were never meant to work together and it would actually take more time to get them to work well than it would have taken to build Y in the first place. Upper management doesn't believe this and says, "Stop grumbling". And, well, the programmers do what they have to do. So it ends up being super hacky... because it is super hacky :-)
But I think it's more than that. You know how certain things are obviously built by committees? You can see it because everything is consistent, but it's often got a lot of compromises. On the other hand some things are obviously built by individual contributors. It's got some things that are really great, but other things that are horrible because the developer just has blinders on. Many Google apps give me the latter experience. Also, when they have a suite of tools, although they are tied in together, they have wildly different UI, naming conventions, placement and layout of information, etc. Usually there are some really cool bits, but these bits are often not the point of the project and look a bit out of place.
I think teams in Google often have people who are confident and smart and the tools reflect that. It's a kind of "This is the way to do X", shouted 500 different ways. One of the biggest things I find frustrating is the mountain of trivia that I have to commit to memory in order to use their tools fluently. I really do think it reflects the type of people that Google chooses when they hire people. While they may be good at the things that Google screens them for, they may not be the best people overall when it comes to building finished products.
Your experience closely reflects mine for GCP. Some of the tools, although amazing in and of themselves, often have some minor flaw that makes the user experience extremely inconvenient when you run into it many times. The simplicity of the inconvenience only amplifies the frustration.
I've interviewed with Google, and the interviews I got didn't really require having memorized algorithms. Maybe on the level of breadth-first search and binary search, which are pretty basic. But I don't think most interviews used any algorithms from a book at all. I think their interview prep packet overstates the amount of knowledge expected.
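"Pretty basic" here means something on the order of textbook binary search, which a working engineer can rederive on the spot rather than memorize. A quick Python sketch:

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent.

    O(log n): halve the search interval on each comparison.
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1
```

(Python's standard library also ships this as the `bisect` module, which is part of why "look it up in two minutes" is a recurring argument in this thread.)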
We ... had different experiences. It was a day of implementing algorithms on a whiteboard. I was supposed to be interviewing for a management position so I didn’t get the study packet either.
I also interviewed at Google. In the in-person round, all of the algos I got were very fair in my opinion, in that it took a little bit of thought to figure out what I needed to do, and after that most of the work was just turning that idea into an algorithm. It was all on the whiteboard, but I thought it was fair. They were fine with little errors and we talked through them.
On the phone interview I was asked how to do something I had no idea how to do. I have no idea how I passed the phone interview.
Seems like a mixed bag. The question, as far as I know, is completely up to the interviewer, which makes the interview very subjective when they don't use a standard question.
Interesting. My interview process was one of the most negative interviewing experiences of my life. Two of the people interviewing me were mostly silent except when they were combative. One algorithm I implemented was essentially the internals of a popular vi clone, and I was dismissively told the algorithm wouldn't work in practice. I also argued Big O with another interviewer, who said nothing except that my value was wrong (it wasn't).
My interviews with Google last year did have a BFS and a system question, but they also asked me a problem that required dynamic programming and one that required fast exponentiation. I'd say that Google ranks 8 of 10 or so in how algorithm-heavy their interviews are.
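Fast exponentiation (exponentiation by squaring) is a fair example of the level involved: rederivable in a few minutes once you notice that x^n = (x^2)^(n/2) for even n. A sketch in Python, with the modulus parameter optional as in typical interview variants:

```python
def fast_pow(base, exp, mod=None):
    """Compute base**exp (optionally modulo mod) in O(log exp) multiplications.

    Repeatedly square the base; multiply it into the result whenever the
    current low bit of the exponent is set.
    """
    result = 1
    b = base % mod if mod else base
    e = exp
    while e > 0:
        if e & 1:
            result = result * b % mod if mod else result * b
        b = b * b % mod if mod else b * b
        e >>= 1
    return result
```

Python's built-in three-argument `pow(base, exp, mod)` does the same thing, which again raises the thread's question of what exactly the exercise measures.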
"I've made a career fixing a ton of horribly shitty, inefficient code that's been produced exclusively by people who pass these interviews."
Well, the interviews are marginally effective at identifying new CS grads (who also are cheap and have few outside commitments - companies love this!) but not much else. You're exactly right to view it as an algorithms final exam or an algorithm puzzle contest, but Cracking the Coding Interview is an embarrassingly bad book, in spite of (?) its ostensible purpose of helping people study for algorithm puzzle interviews at Google, Facebook, etc..
I've realized I can still have a great career, make a very respectable wage, and work on interesting, novel problems all without working for a FAANG. Moreover, I don't have to directly contribute to the proliferation of corporate surveillance as a societal norm.
Give them the benefit of the doubt; you have no knowledge of the time/pressure constraints and organizational structure at the time they wrote the code.
I would say they probably made the decision we all make during development work: that fixing this N+1 query, that bubble sort, or that API package doesn't have time in the budget.
Getting to market is sometimes more important to managers than optimal `correct` code, when the market isn't willing to pay for the services of `correct`.
Similar profile; I would not consider working for them. In my first and only interview with FB a few years ago, I nailed the initial hoop-jumping about IPC and other *nix things, but overall I just got a really bad taste in my mouth. I also went for dinner, which was nice, but still, something is creepy about them. They are looking for some kind of desperation that I just don't have. Sounds like some fun problem solving though.
Haven’t had a chance to hire any ex-FAANGers yet, do they compare well to the general market?
Having recently gotten an offer after interviewing at Google, I can testify that those preparation packets are a sufficient but not necessary condition for doing well on an interview, and seem to be primarily targeted at making sure people who want to prepare don't do so badly. I spent about an hour on the plane reading through some example questions online, and did fine. I suspect that a lot of questions I "figured out" a clever solution to could actually have been solved with some obscure algorithm, but by treating them as puzzles rather than memorization tasks I was better able to demonstrate my skills as a candidate.
In short, I think the exact same question is interpreted as a cool algorithm challenge or a recall check, and the interviewer will be fine with you rederiving the answer on the spot if you are quick thinking enough to do so.
What an eloquent way to summarize the silly elitism of Google interviews.
They seem to have no lack of talent and it certainly doesn't seem to be negatively affecting them. I wonder if this will change eventually, and Google will become IBM v2.0 (that is, a respected and profitable company which is mostly boring/unexciting).
I don't work in California and it took me a while to figure out what FAANG stands for. Have you guys ever thought that you're just in a bubble that's not representative of the industry as a whole?
Aside from all the things mentioned in the article, this also seems like a fairly predictable application of Goodhart's Law: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."
Once upon a time, skill at doing these sorts of problems might have correlated (imperfectly) with general aptitude as a programmer or software engineer. But the very act of trying to leverage that correlation for hiring purposes probably also made it go away. Now you've got a whole lot of people practicing hard on these sorts of problems, spending huge chunks of their free time grinding away on Project Euler and Advent of Code and HackerRank. That muddies the quality of this stuff as a proxy for what it was originally trying to detect: natural aptitude. I'm guessing having time to level grind like that also correlates inversely with other traits that are desirable in a programmer.
I'm pretty sure it's age discrimination plain and simple. The only time in my life I would have been in great shape for the standard interview process without a good deal of study was a couple years in early graduate school, where nearly all of this "breadth" stuff was fresh in my mind.
The relentless scepticism about people's achievements is to some extent understandable (we've all run into the senior person who can't do fizzbuzz), but it ties neatly in with the idea that every new hire should be 25 at most.
Without a doubt it's age discrimination...you've hit the nail on the head. Also, God help you if you have followed a non-traditional career path where you decide at points in your life that you wanted a break from it all.
I, too, in 1988, fresh out of my BS in CP with no family or life experiences and a strong desire to code day and night would have sailed past the technical side of these often ridiculous interviews for jobs I could literally do in my sleep now.
But now, at 53, not only has Father Time fucked with my ability to sight memorize (something I took totally for granted in my younger years) without even trying, I find it almost impossible to hide my frustration at the ridiculousness of the very idea that I would be unable to do the work required of me for the job...
"Ok..we really need someone to fix X and Y on this website, and it would be just great if they could reconfigure Z on the server..."
"Well, sure...I first saw X and Y-like issues back in the mid-90s in a networked client-server environment and I did X(x) and Y(x) to fix it, and again saw it in the mid-00s under the LAMP-stack and again fixed it doing X'(x) and Y'(x)...and the server issue is something I've seen over and over again during my 30y+ career..I am 100% certain I can solve these relatively simple issues for you guys..."
"Oh...yeah ok...so can you whiteboard a bubblesort in Javascript for us? You have 15 minutes."
"???"
...and of course I don't get a callback. Do people think I'm lying about all my experience on my resume or what exactly?
It's at the level of life and death for me right now, to be honest. I've been shooting out resume after resume for the past month, and nothing good is happening.
All I know for certain is...if the real issue was truly about finding someone who can do the job and fix the problems, there is no way in hell I'd still be looking now.
They don't want to pay you what they think (the you in their heads think you think) you are worth. That sounds convoluted but I've watched it happen repeatedly.
Bill, budget IT guy: "What? We need to harden a server? Why do we need to pay this guy over a hundred thousand a year to do that? The internet is full of documentation. Let's hire someone with just enough technical know-how to implement it."
The reality on the other side is that hiring an experienced engineer has its risks. I've worked with engineers with 20+ years of experience who did it because it was a job and didn't deserve their salary based on their skill set. Companies let this happen; you need to cap positions and then give inflation-based raises.
> The reality on the other side is that hiring an experienced engineer has its risks.
Totally agree and have seen it from the interviewee side. I interviewed in March last year as an experienced engineer, and my network was the number one source of interviews and how I found my current job.
Don't neglect your network, folks. As you age, it will become ever more important for your next position.
It is baffling that someone with over three decades of experience would be expected to go through the same interview process as a recent graduate.
When I graduated (over two decades ago, and in Belgium), testing was a regular part of the interviewing process. However, it was understood that this was only done for people with no or very little experience. Later in my career, this fact helped me weed out the jobs that were below my experience level. After saying so and withdrawing my candidacy, in several cases I was even contacted to come in for a more senior position.
I'm about your age, and this is very relatable. My last job hunt took about six months. Among the other high points, I passed the on-site day at Google, passed the hiring committee, and then had an executive reach out to quash my hire.
I did end up finding something I'm enjoying.
Anyway, just wanted to send you some support. Keep your head up and keep going. You really only need one offer.
By accident or design, the "coding tests" seem to discriminate in favor of new grads (with CS 101 fresh in mind, and also willing to grind yet more artificial test-taking prep), and also have a component of hazing/negging, especially when administered to experienced people. (Asserting an imbalanced power dynamic from the start, and perhaps also exploiting a psych weakness, like "I'm jumping through hoops for this, so this must be worth jumping through hoops for." I'm sure most individual interviewers are of goodwill, and don't intend this, but that doesn't mean it's not present.)
Lately, I decline to interview with any company that requires new-grad coding tests for experienced people (especially if they require it even when the person has open source code and community participation that the company can look at). I usually do well, but even then, it leaves a bad taste.
Of course, I'm very happy to talk each other's ears off in energetic collegial discussions about engineering problems and technologies, including whiteboard brainstorming of approaches/algorithms, perhaps much like would be a part of everyday work. If anyone ever then interrupted, "Hold on, can you put in all the semicolons, so I can type it in, and make sure you know how to code," you might wonder how that's not already obvious to them, and where they're coming from.
This aversion to "coding tests" for experienced people seems to be more acceptable to small companies/startups (or small autonomous units in large orgs), than it is to less-flexible/agile large companies. Recently, after discussing my latest background with a nice FAANG recruiter, we had a good discussion about the company's practice of putting experienced people through what seemed like a new-grad vetting/hazing process, and why that's been a turn-off. They soon sent a followup email, including a quote from an engineer there saying "... I need to know whether you can code in a language," along with attachments on how to prep for their new-grad coding tests. :) For whatever reason the company insists on that process, it seemed like it probably wasn't on track to a professional relationship that I'd want.
I agree with you 100%, but I do have a bit of sympathy with the insistence that they want to verify that everyone can code and not just talk a good line.
I made the mistake of hiring someone quite senior through internal transfer once who was utterly and flagrantly unable to code despite the job saying this was a requirement - I probably could have caught this with a simple fizzbuzz, but felt like this would have been too insulting.
At at least some of the FAANGs it's also a pretty clear indication of the fact that you're going to get busted way down and work your way up. I was a Principal Engineer at Intel with a successful exit in a highly technical area, but I would be shocked if I wouldn't have to re-earn my stripes at most other companies. Most of the super-smart guys I know who went to Google, for example, got busted way down and quickly earned their way up.
So if we're not up to grinding and expect to go back in at a high level, maybe it's kinder to warn us off early. :-)
Except for people over 50 who have probably realized there is more to life, why would some demographics be willing to grind less than others?
Edit: By "over 50," I mean age demographics in general, which is the main thing that raises your family obligations. The only demographic division that I can think of that would reduce someone's willingness to abandon their personal life would be age.
I can see this sentence being an interesting Rorschach test. Either you think it’s quite sensible or utterly absurd. My own brain was even flip-flopping around for a bit before coming down on the utterly absurd side.
I think the maybe reasonable counter-argument against your sentence would be that while you personally do not have to value something like raising kids above your work, others very well might and do and there is nothing absurd or weird about that. Your framing betrays a certain kind of worldview where “work” is real and everything else is a mere hobby, a worldview that might be valid for certain people but is certainly not universal.
The gp actually just said what it said: in response to "They have kids. Developing a sentient being is a higher priority", it probably means "raising kids is just another hobby" - not a judgment of whether hobbies or a job should take priority.
I.e. saying "developing a sentient being has higher priority..." is about the same as someone else saying "my creative hat-making has higher priority...". Sure, that may be true for many people. But given two otherwise roughly equivalent candidates, one with work-over-life priorities and one with life-over-work, why would a company choose the latter?
If work is everything, shouldn't we set aside some time to raise the next generation of workers? Working all the time is so defeating it even defeats the amount of work being done 30 years from now.
> If work is everything, shouldn't we set aside some time to raise the next generation of workers?
I don't think work per se is everything. But working on world-changing things is.
At least concerning the situation in Germany, where there is compulsory school with a compulsory curriculum (vulgo: 18-19 years of brainwashing), I spend a lot of time in the evenings working on an alternative curriculum (currently focusing on computer science topics, since this is something I am hopefully knowledgeable about) that enables people to deprogram themselves from this kind of brainwashing and develop their intellectual potential, so that they can begin to work on world-changing things. A lot of highly gifted people have already asked me multiple times when they can finally read the first text of my planned series (they are really eager for it) - unfortunately there is still so much to do even on the first text.
So the first thing that we should solve is to prepare a curriculum that does not completely brainwash our children. Only when this problem is solved can we begin to think about how we can set aside some time for them.
> Rather: I work for you to finance the really important things in my life.
> You work to live, not the other way around.
At least if you work in (academic) research, working to live simply does not work [pun intended]. The things that you work on in this area are the really important things in life.
No, this is the wrong way to think about it and it has destroyed our generation by its proliferation. You should give birth to children and have enough balance between your work and your life to raise them and still grow in your career. You should never have to choose one or the other.
This false dichotomy cuts to the heart of a lot of gender and racial diversity issues that plague the workplace in the US.
"Sure, I'll consider abandoning all priorities (i.e. social life) and/or putting off others (kids) -- and maybe even take minor risks on my health -- if you're able to offer appropriate compensation."
Guess what, though? In the vast majority of cases -- even when we're talking about the bulge-bracket FAANG salaries occasionally gloated about in these and other parts -- the offers simply don't come anywhere close to providing that level of compensation.
I don't think you need to be over 50 to realize there's more to life. Some demographics are more likely to work to live than to live to work. Some may not have time to juggle interview grinding with other responsibilities (children, existing job(s), etc.)
This. I don't have kids and I'm not over 50, but I agree with those people (if they hold this belief); not just philosophically, but also because I want to see a more diverse workforce in software development.
It's a lot more socially acceptable for members of one sex than it is for members of the other to be too busy with career stuff to spare much time for their families.
On a somewhat related note, single parents simply aren't going to have that kind of time.
People who have other responsibilities have less time to grind - and those other responsibilities fall more heavily to under-represented folks. Consider things like taking care of children (including siblings), elder parents, and other household work.
Humans are social creatures; we can't live on code alone.
If you come from an underrepresented demographic or have interests or social preferences outside the mainstream of the profession, it will be harder to fulfill your need for social affiliation by participating in communities like HackerRank in your free time-- because it may be harder to find people who share your values, and that you can identify with on a social level.
In other words: the social component of "grinding" is more fulfilling for some demographics than others.
People from some demographics are less likely to have the option to spend all their time grinding. Having that option means you don't have to spend the time supporting yourself or others.
The tests have become a pure filter: they screen out those who don't study heavily before taking them. They are uncorrelated with a candidate's ability at everyday tasks.
Hmm. Y'know, that's a great point. I could build a whole conspiracy theory off of that idea:
If I'm a FAANG, I'm simply not using my normal interview process to hire for the really interesting jobs. I reserve those ones for people who got the job by virtue of their publication history in the academic literature, or because they built some well-known cool thing, or because they got promoted into the position. Those people get shunted over into the "you didn't come to us, we came to you" interview process.
The seats I'm looking to fill with the more public interview process are mostly seats for the grunt coders who work under those people. My ideal candidate for that position isn't some rock star creative genius; it's a workaholic who is resistant to boredom. And what's something a workaholic who's resistant to boredom would be really good at? Grinding away on programming interview questions, of course.
This isn't a conspiracy theory - it's literally exactly what business schools do...
Teach a bunch of people to think in a certain way, speak a certain language and respect authority. Someone who excels at the repetitive mundanity of business school will be a perfect junior marketing manager at BigCo.
It's basically taking the way the Army trains new recruits and applying it to white collar jobs.
I wonder if the added friction of changing your place of work caused by this practice is meant to somewhat counterbalance the heavy incentives engineers have to job-hop in the current climate. Kind of makes sense from the point of view of tech employers.
> I wonder if the added friction of changing your place of work caused by this practice is meant to somewhat counterbalance the heavy incentives engineers have to job-hop in the current climate. Kind of makes sense from the point of view of tech employers.
I am not sure what to think about this claim: it is the other company that prevents you from working for them by this interview process. The current employer has the incentive that you don't leave. The other (potential) employer rather has the incentive to poach you.
These same employers (famously) had a cartel that prohibited job hopping before - the Jobs/Schmidt email - so it’s not surprising that the system would trend toward the same equilibrium again.
> These same employers (famously) had a cartel that prohibited job hopping before - the Jobs/Schmidt email - so it’s not surprising that the system would trend toward the same equilibrium again.
This explains this phenomenon plausibly for the FAANG companies. But I think there also exist lots of startups that could easily act as a cartel breaker - to their advantage, since this way they can poach from other companies.
Time spent by a new employee learning the ropes is time wasted from the perspective of an individual employer-actor. On the surface, it sounds similar enough to the iterated prisoner's dilemma so I'm inclined to think that a greedy strategy would do poorly here.
But they are not supposed to be boring? At least to me they never are, it's just that they are rarely high on my list of important (or fun) things to do on my self-improvement time.
The companies have caught on to this and nowadays, even if you passed the technical bits with flying colors, they look to rule you out or rank you based on softer criteria, like how well will you fit with our team? IOW, how much like us are you?
I'm not sure if HackerRank has updated itself recently, but the last time I poked my head in there (years ago), all the answers were in the "Talk about this challenge" section. You'd just go in there and copy paste the code in, maybe change the variable names/order a bit. Ever since I learned that, it's never been that 'wowzers' for me.
"Ever since I learned that, it's never been that 'wowzers' for me."
I am not sure I follow. For people who want to, there is a section where they can find answers even if they have not solved the problems. What was 'wowzers' about HackerRank before you learned that?
Well, some folks would brag about their score (years ago, dunno about now) and use that ranking as an indicator of their coding prowess. When I learned that the whole thing was easy to 'cheat' (whatever that means in this context), that rank lost all prestige for me.
Bragging about your rating as a function of solved problems in practice (with solutions available) doesn't seem rational.
HackerRank also used to organize contests where they would have a certain time to tackle a number of new problems (5-10 problems in 1 hour to 1 week, depending on the contest) meaning you have no access to solutions and are competing against other people solving the same problems. They have a rating for performance on those as well.
Isn't the whole point of the challenge to figure out how to do the problem? What you say is literally just copy/pasting others' answers. Is that not looked down on?
Given the name of the site (I've never used it), I always assumed it was about proving your ability to others (like maybe future employers). Easily-found answers totally destroys the trust that's needed for such a thing to work.
Bingo. Years back, in what small circles I run in, the score was something to toot your horn about. When I discovered that you could 'cheat' on the scoring, it took away all the prestige from that ranking score. Honestly, other than practice, what use is it?
What’s frustrating to me is that we haven’t found a way to teach programming that doesn’t rely on natural aptitude.
It’s really a travesty that we can’t teach it the same way that we teach maths or natural languages.
The end result is we’re left trying to divine whether someone is the programming equivalent of being illiterate. As with illiteracy people find ways to fake it.
As someone who has studied both math and computer science (and is now a professional programmer), I have no idea what you are talking about. We are far better at teaching people to program than we are at teaching them to do math. It's just that most people self-select out of math [0], so, due to selection bias, it appears that we are great at teaching those that remain.
As an aside, as someone who also studied linguistics [1] (and a couple of foreign languages at a beginner level), I am very confident in saying that our approach to teaching natural languages relies almost entirely on natural aptitude. It is just that, absent a serious mental disorder, all humans have a very large natural aptitude for natural languages.
[0] Probably because we are absolutely terrible at teaching it.
[1] I assume not what you mean by teaching natural languages, but the topic of language acquisition (including in adults) does come up; plus it gives some perspective on how teaching language would look if we didn't rely on natural aptitude.
I took math as part of my engineering degree. And that was 30 years ago. The older I get the more I think the way it was taught was terrible. Same reason I think teachers are bitching about the US's fetish for academic testing. Testing pressures teachers to teach students to mechanically solve problems. But with shallow understanding.
I think of monads, which are real easy to explain in a programming context as soon as someone understands lists and map/reduce, but are total gibberish in a math context even if you've gotten through calculus.
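To make that concrete with a hypothetical illustration (mine, not the parent's): the list monad's `bind` is just "map, then flatten", which is easy to motivate once someone knows map/reduce:

```python
def bind(xs, f):
    # List-monad bind: apply f (which returns a list) to each
    # element, then flatten the resulting list of lists.
    return [y for x in xs for y in f(x)]

def unit(x):
    # "unit"/"return" wraps a plain value in the monad: a one-element list.
    return [x]

# Chaining binds: each step can fan out into zero or more results.
pairs = bind([1, 2, 3], lambda x: bind(["a", "b"], lambda y: unit((x, y))))
# pairs == [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]
```

Nothing here requires category-theory vocabulary; it's the same flatMap that shows up in everyday list processing.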
I don't believe for a second it requires "natural aptitude" to program. The problem is any programming curriculum starts with a text editor open.
If a developer-to-be doesn't understand the framing context of what they are doing they are being dropped in a lake with no sense of direction.
It's why all the "naturals" started as geeks who played with computers from a young age. You learned about the environment you would end up working in, and later on, when you hit the grindstone and actually started creating gears to stick on that machine, you had an idea what the result should look like and knew the tools in the shop when you set out to start building it. Even if you didn't know the steps involved in the process, you were familiar with the environment.
People who haven't spent time engrossed in computers drop out so fast - the myriad youths entering a CS 101 class thinking it's an easy career, when the most exposure to tech they've had is maybe updating their phone, using the Facebook and Twitter apps, and owning a video game console that's a total black box with no tinkerability. Their professors lead them to an anvil and tell them to forge a steel rod without any wink of an idea what a hammer is.
It's just not an answer anyone wants to hear, because the solution amounts to an entire degree's worth of learning preceding the actual study of programming. But you don't want your brain surgeon to go to medical school having never studied high school biology, or, even more generally, having never learned to read.
We convinced our parents to buy a computer because it would help our education when all we really wanted one for was playing computer games. Joke was on us though, because playing computer games at the time usually involved a lot of putzing around and figuring out how stuff worked, ultimately teaching us marketable skills.
I'm envious because my parents heavily restricted my computer use thinking it'd rot my brain or I'd get r*ped. They wanted me reading books, to become a lawyer or a doctor. I always had an affinity for technology. My folks meant well but I think not watering that seed has me in the middle of this lost life.
I'm envious of you. I had no restrictions on technology use, growing up. I dropped out of high school at 16 and spent the next 15 years doing very little of note (with video games consuming the bulk of that time). Now I'm 35, struggling through my undergrad, and surrounded by kids.
I have some ideas about where to go from here but it's not going to be easy. The search for real meaning and really meaningful relationships is ongoing.
I spent so many years yearning. It was so unfair that all of the other kids had Pentium computers at home, and all I had was an old 386. My parents refused to get me a game console or cable TV. I was SO BORED that, in my desperation, I tried making games in QuickBASIC.
I still think my parents should not have gone so Amish, but I also don't think I would have developed as a programmer without that.
Not having access to a powerful computer makes you appreciate the finer things in life, like hand-bumming instructions till your inner loop hits a hard deadline.
I had zero games for my VIC-20. It was strictly a BASIC machine. (Later I had a TI-99/4A which had a few.) My dad is a kind of super-polymath, and in the early 80s he bought a computer and learned how to use and program it so he could grind out solutions to mechanical-engineering equations. He saw me become hooked on BASIC and went down to Crazy Eddie's to pick up the VIC for me so I wouldn't bother him on his rather expensive machine.
The personal computer grew up alongside us xennials, and some of us were just drawn to it, even without the promise of video games.
In my case it was a natural progression: "videogames are great!" -> "I have an idea for even better videogame!" -> "how do I make one?" -> "can I tweak this one into being a bit more the way I like it?" -> tinkering around data files -> "I really want to make my own game" -> picking up a programming book at 13 -> a programming career.
Spend a few years putting in 100-hour weeks writing collision detection for this year's Dora the Explorer game -- a typical game industry position -- and you will be thankful for the crud Java app job.
Using the literacy model: we're testing people on writing haiku according to all the rules of style and form, when the job we're testing for is basically low-level clerical work, like copying documents (using the copy machine), sometimes taking notes, and filling in simple forms.
The less talked about absurdity is that successfully passing programming interviews is a skill itself. It's especially absurd because the time I spend developing that skill is less time spent developing skills and knowledge more directly relevant to my job. Yet, programming interview skill is more relevant to progressing my career.
edit: now if you'll excuse me, I need to do some dynamic programming problems.
I've worked with some folks who were decent engineers who originally didn't get offers from FANG companies because they blew the interview; they later studied their ass off on leetcode and got offers. This did not make them better engineers at all.
Ultimately, it's just studying for the test, very much like the ACT/SAT in high school. You can be great at taking tests but ultimately a terrible student, or vice versa.
I go back and forth about how I feel about doing heavy algorithm/data-structure stuff in interviews. On the one hand, I have never once written any kind of sort algorithm or LRU cache by hand for a production system, because why would I? Pretty much every language's standard library has a fairly-optimized sort and caching thing built in, and if they don't then there's still probably a million outside libraries to do it for me.
On the other hand, I genuinely do feel theory is really important. While not knowing the minutia of a tim-sort doesn't indicate that you'll be a bad engineer, not knowing the runtime efficiency of a sort can lead to some really awful code. Not knowing when to use a hash table instead of a nested-for loop can be a sign that you don't really know what you're doing, and not knowing some rough theory on concurrency indicates that I might be stuck debugging your race conditions or deadlock.
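The hash-table-vs-nested-loop point can be shown in a few lines (a hypothetical sketch, not a question from any actual interview): finding the items common to two lists with a nested scan is O(n*m), while building a set first makes it O(n+m):

```python
def common_nested(a, b):
    # O(n*m): for each element of a, scan all of b.
    return [x for x in a if any(x == y for y in b)]

def common_hashed(a, b):
    # O(n+m): one pass to build the set, one pass to probe it.
    seen = set(b)
    return [x for x in a if x in seen]

# Both return the same answer; only the runtime differs. On two
# lists of ~10,000 elements the hashed version finishes in
# milliseconds where the nested one crawls.
```

Recognizing when this substitution applies is exactly the kind of judgment the theory is supposed to buy you.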
I try not to be a complete jerk: I won't do stuff like give out an NP-complete problem (which an interviewer gave me once), nor will I ask for intimate details of how one would implement CSP. I do tend to focus on theory-heavy questions more than my peers, but I try to give a fairly generous amount of hints so that people don't get too stuck.
> On the other hand, I genuinely do feel theory is really important. While not knowing the minutia of a tim-sort doesn't indicate that you'll be a bad engineer, not knowing the runtime efficiency of a sort can lead to some really awful code. Not knowing when to use a hash table instead of a nested-for loop can be a sign that you don't really know what you're doing, and not knowing some rough theory on concurrency indicates that I might be stuck debugging your race conditions or deadlock.
Theory is important but what's more important is how someone applies the theory to actual problem solving.
Reversing a binary tree isn't testing your knowledge of theory so much as testing memorization of a very, very specific application. Knowing how to reverse a binary tree or how a hash map works is pointless if the person can't identify when to use them while solving an actual higher-level problem. No one is ever given a binary tree and told to reverse it in the real world; they are given a business problem that they identify as one that can be solved efficiently by modelling it as a binary tree and reversing it.
I'd bet most of the good people who fail the "reverse a binary tree" type of questions would succeed if you give them a realistic problem to solve without forcing a very specific solution onto them. Either they will come up to the realization that the solution is to think of the problem in terms of binary tree or they won't.
And neither outcome is a terribly bad answer. If they recognize you gave them a problem that can be represented as a binary tree and efficiently solved by reversing it, then there you go: not only did they know the "theory", they knew how to apply it as well. If they don't recognize it, you gain valuable insight into their line of thinking, and they might find novel ways to represent the problem and apply other theoretical concepts that solve it efficiently (maybe more so, maybe less).
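For reference, the "reverse (invert) a binary tree" question under discussion usually amounts to a few lines of recursive swapping - a minimal sketch in Python:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    # Swap the left and right subtrees at every node, recursively.
    if node is not None:
        node.left, node.right = invert(node.right), invert(node.left)
    return node
```

Knowing this by heart says little about whether someone would recognize, in a real system, that a problem reduces to it - which is the point being made above.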
I don't think you're wrong in general, but in one specific case I can give an example where my not knowing how a hash map worked was really bad. I was working with a very limited amount of memory on a small embedded thing when I was an intern, and was using hashmaps all over the place because they're magic, and kept getting out-of-memory exceptions.
I learned later that hash maps trade memory for speed: their internal arrays can get super huge if you're not careful, and if you constrain that memory, lookups degrade from O(1) toward O(log n) or O(n).
In this particular case, an ugly nested for-loop was the immediate solution, and eventually I was able to cheat a little and have hard-coded integer indexes and was able to use an array.
Anyway, your point is valid, I just figured I'd give an example where knowing the internals for a hash table would have saved me a lot of time.
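A rough illustration of that overhead (a sketch, not a benchmark; exact numbers vary by runtime): in CPython, the container for a dict of a thousand small ints is several times larger than a flat array holding the same values:

```python
import sys
from array import array

n = 1000
d = {i: i for i in range(n)}   # hash table: index array + entry table
a = array("i", range(n))       # flat array of 4-byte C ints

print(sys.getsizeof(d), sys.getsizeof(a))
# The dict container alone is several times larger than the array,
# before even counting the boxed int objects its entries point to.
```

On a memory-starved embedded target, that multiplier is exactly what bites you.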
> On the other hand, I genuinely do feel theory is really important. While not knowing the minutia of a tim-sort doesn't indicate that you'll be a bad engineer, not knowing the runtime efficiency of a sort can lead to some really awful code.
OK, but why don’t these interviewers ever ask “internals” questions? How does a CPU work, what are the different levels of the memory hierarchy, what is an interrupt request, what is pipelining, what is SIMD, what is a GPU? Or how about compiler internals: what are the different stages in a compiler, what are the different grammars and parsers, what is an AST, what is interpretation, compilation, bytecode, JIT, and how does it work? How about database internals: how is a database implemented, what is relational algebra, what is normalization, what is a data model, how do indices work? Similar questions can be asked about operating system internals, networking, floating point arithmetic, etc.
In my experience, no interviewer has ever asked me these questions. And as an interviewer, I have consistently been disappointed with candidates’ inability to answer basic and relevant “internals” questions. Ironically, the algorithms and data structures questions that are commonly asked in interviews are outdated ways of thinking which do not take the underlying hardware reality into account (memory hierarchy and parallelism).
It could be because a lot of these FANG(-like) “engineers” are themselves not knowledgeable enough to ask such questions. Or they are aware that new comp sci graduates are unlikely to really understand anything at a deep level, but all comp sci students are drilled on algorithms and data structures. If you are targeting a very specific age group (0-5 years of experience) and want a standardized test, then I guess asking CS201 exam questions in a job interview makes sense. Companies prefer the 0-5yrs experience segment because they’re cheap, don’t have children, are easy to exploit, and are easier to “mold” into your corporate culture.
> In my experience, no interviewer has ever asked me these questions
Because not everyone writing Java code has had a background that exposes them to compiler internals or grammars and parsers.
But everyone writing Java code should at least be a little bit arsed to figure out what common data structures do - or how to write two for loops that print out "1 2 fizz 4 buzz"...
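For anyone who hasn't seen it, the FizzBuzz screen referenced there is only a few lines - a sketch in Python (the thread's "1 2 fizz 4 buzz" is its output for the first five numbers):

```python
def fizzbuzz(n):
    # Classic screening exercise: multiples of 3 -> "fizz",
    # of 5 -> "buzz", of both -> "fizzbuzz", else the number itself.
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("fizzbuzz")
        elif i % 3 == 0:
            out.append("fizz")
        elif i % 5 == 0:
            out.append("buzz")
        else:
            out.append(str(i))
    return out

print(" ".join(fizzbuzz(5)))  # prints: 1 2 fizz 4 buzz
```

The bar it sets is deliberately low; its only job is to catch candidates who cannot code at all.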
I think the important thing here is the difference between understanding complexity and implementing a sorting algorithm.
If I were hiring a carpenter I would want to know they understand their tools and when to use them. I don't care if they don't know how to make them. But of course knowing how to make them suggests an intimate appreciation for the craft (looking your way Matthias Wandel)
The flip side would be asking a carpenter to do trigonometry problems or something... Anyway, what's most important is measure twice and cut once. That and following a blueprint and going the extra mile.
> On the other hand, I genuinely do feel theory is really important.
My beef with this in interviewing is that a huge chunk of modern programming work is developing UIs - and user interface theory and skills are treated like some softball thing. You ask UI-related questions to interviewers and half the time you get some shrug "oh we use whatever", etc.
This depends highly on the domain. I'm the only person on my team with any level of frontend work in the past ~2 years. And when I did, I was able to consult with UX experts and designers.
I can't say that theory is more universal than frontend, though anecdotally, for me, it is. But ui isn't universal.
> Not knowing when to use a hash table instead of a nested-for loop
On the flip side, the O(1) look-up nature of hash tables makes them the no-brainer data structure to use, at least for passing programming interviews. Perhaps more interesting test questions would be about when not to use hash tables.
Fully agreed. Most questions I got when I started out as a data scientist interviewing for jobs were pretty simple: "use a hash table to count stuff, join stuff, lookup stuff". However not once did I get asked, "why do databases not exclusively use hash tables if they're so good?" That's a much more interesting question, though perhaps out of scope for a data scientist. I'd add that I've never heard that question being asked of engineers, I'd love to hear of people out there getting it though.
I actually got that exact question once, or pretty close to it. I'm paraphrasing, but the question was "Hash tables are cool, but what's a data structure they might use in a DB besides that?". This was pretty early in my career, so it stumped me, and he pointed out that a binary tree is used semi-often because of ordering.
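A minimal sketch of why ordering matters (my own toy illustration, not how a real database engine is laid out): a hash index answers exact-key lookups in expected O(1), but a range query would force it to scan every key, while a sorted structure handles both kinds of query.

```python
import bisect

# Hypothetical toy "index": the same keys stored two ways.
keys = [3, 8, 15, 23, 42, 57]               # kept sorted, as a tree index would
by_key = {k: "row-%d" % k for k in keys}    # hash index: exact matches only

def lookup(k):
    # O(1) expected - the hash table's strong suit.
    return by_key.get(k)

def range_query(lo, hi):
    # O(log n) to find the window, then just emit it. A hash table
    # would have to examine every key to answer the same question.
    i = bisect.bisect_left(keys, lo)
    j = bisect.bisect_right(keys, hi)
    return keys[i:j]
```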
I'd be curious to see if there is any value in questions that focus on understanding of the basic ideas behind algorithms and data structures as a subject. Time complexity, sure, but what I would really want to know is, given a description of a data structure and some algorithms for manipulating it, can you identify the invariants that should be true of these rules?
That sort of thing might give a better sense of if someone has an instinct for how to verify whether the code is working. And, by extension, if they have a grasp of some concepts that go a long way toward helping a person come up with more reliable and maintainable designs.
I understand that FANG have themselves come to the conclusion that brain teasers are not necessarily very predictive for engineering performance.
But CS/programming questions for CS/programming roles? That seems sensible.
You can study lots of vocabulary to achieve better results on the verbal section of the GRE or similar tests. But afterwards, you will, in fact, have better vocabulary, I submit.
I am surprised that you categorically deny that they were better engineers after studying CS/programming questions.
>You can study lots of vocabulary to achieve better results on the verbal section of the GRE or similar tests. But afterwards, you will, in fact, have better vocabulary, I submit.
Yeah, but tests like the GRE claim to predict academic performance, not vocabulary. So unless you think augmenting one's vocabulary alone will make one a substantially better student, there's still a gap there.
Oh, you'll certainly be better at solving programming-puzzle-type problems, but those questions usually don't translate well into the real-life problems you'll encounter at work later. You can't architect a good solution IRL by trying to figure out what the test author wanted you to do, like with programming puzzles. And for implementing some algorithm, keeping every detail of it in your head definitely helps, but it's not a significant advantage over someone who just googled all those details 10 minutes ago. So both types of tests aren't testing real dev capabilities; they test how much the candidate prepared for the test.
IMO, only tasking the candidate with building some real piece of software, or better yet refactoring some real code - with enough time, full access to Google and Stack Overflow, and the ability to ask other people for suggestions and help - will show you a realistic picture of the candidate's future performance as part of the dev team. Put them in a real situation, give them the real type of problems, and you'll get real results. Everything else is like giving sudoku tests and hoping to pick the best mathematician.
As someone trying to break into the field, I'm very intimidated by coding challenges. The ones on leetcode take me an afternoon and some change to get through the easy ones. Some people seem to take no issue with googling the result then moving forward but I thought that wasn't in the spirit of doing these challenges.
> You can study lots of vocabulary to achieve better results on the verbal section of the GRE or similar tests. But afterwards, you will, in fact, have better vocabulary, I submit.
Maybe for a few days after the test anyways. And then...you push out those words you never use or read to make room for actual useful stuff.
It feels very much like that to me, which is why it's strange there isn't a Kaplan equivalent.
There’s interview cake and leetcode, but I think people would pay $2k for a class that focuses on the questions and in person whiteboard practice.
They could collect information about the interviews at the major companies and then use those to create the program. For payment could also help candidates negotiate and then take a cut of the signing bonus.
If this isn’t part of lambda school already I think it should be.
I already work at a competitive tech company, but I’d sign up for this in a second to help me stay competitive for interviews.
App Academy was working on something like this, last time I checked. I believe it's more focused on people who've already been through their program and are working on getting the next-step-up job from their first, so I don't know if it's generally advertised/available.
The only way I would consider paying for a course like this is if, upon successfully completing it, I would not have to do the technical portion of the interview at companies I applied to. They would accept this cert and just do the soft skills interview.
One could argue that CS undergraduate degrees are perfectly suited for this purpose. FAANG & Co don't seem to care for that. You've successfully passed a few dozen exams at MIT or Stanford? Well, let's make the reasonable assumption that you have no clue about algorithms and data structures and start with our third whiteboarding session, that should be a way better proxy for your knowledge.
They may very well be, but is the quality of signal truly worse than that which an interviewer can extract from a few hours of your writing code on the board?
I think that’ll never happen because the incentives of the cert granting institution and the company hiring will never be aligned (or aligned enough to matter).
The major ones seem to be Outco and Interview Kickstart. But agreed, there's far fewer whiteboarding/interview process-specific bootcamps compared to web/mobile bootcamps.
I've noticed several groups that offer SE interview specific training out there now that I'm stepping through this interview process after the landscape has changed since my last position.
It's funny that after doing some searching, many proponents of this widely adopted and poorly researched interviewing methodology appear to be running businesses selling training for these interview processes... surprise, surprise.
Yes! I felt I was noticeably better at selling myself in interviews, which was my primary goal for going through them. I didn't feel I had enough of a support network to properly prepare otherwise. There's a lot of technical stuff, that goes by very quickly, and also equal amount of non-technical, this-is-how-recruiting-works content.
If you're FANG you kind of want to hire that guy though -- the one who sees there's a task and a set of things he can do to accomplish that task. Same thing with the ACT/SAT, colleges want good test-takers because they're going to be giving a lot of tests.
What's more, it's funny reading that Google sends out prep packets saying to learn/relearn these algorithms. So they don't expect their recruits to know them, just be able to memorize them for the interview.
I've interviewed tons of engineers who ace the interview, so you have no choice but to pass them, and then you see them writing abysmal code later on. The problem is that design, quality, and maintainability are not valued. As a result, you end up with extremely mediocre "hacky" engineers at larger companies who can spit out tons of valid code but leave a tangled mess shortly thereafter.
Worse, there's this huge emphasis on "big O" with zero focus on clearly egregious bad practices (tons of copies, outrageous memory usage, casting between strings and numbers all the time). I've rarely seen clearly bad algorithms get deployed, but I have seen plenty of unperformant code go out.
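A toy illustration of the "tons of copies" failure mode (my example, not the parent's): both loops below are a single pass and look linear, but the first rebuilds its accumulator on every iteration, so it does O(n^2) copying while producing the exact same result.

```python
def collect_quadratic(xs):
    acc = []
    for x in xs:
        acc = acc + [x]   # allocates a brand-new list every time: O(n^2) copies
    return acc

def collect_linear(xs):
    acc = []
    for x in xs:
        acc.append(x)     # amortized O(1) per element
    return acc
```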
I've witnessed a similar thing myself, but with not-so-algorithmic people. I think it all comes down to the company culture and to the person being able to let go of their ego and improve their programming skills. It's similar to a senior developer who never learned to program properly and, because of their position, is incapable of seeing faults in their code. Once you've gotten too accustomed to shitty code, it's hard to change your style, which is understandable.
Probably the biggest influence on this is culture: if nobody tells you that your code is shit (in a kind way), you'll never learn better. But then again, some people are so hard-headed and full of themselves that they get defensive and never actually admit their faults. Giving and receiving feedback is not easy; it can give you an identity crisis once you realize you've been doing something wrong for years.
And then you have a bunch of Ivy League graduates who have spent years learning algorithms and are burning to use them, but there are no actual problems that really need them.
I had a coding test once where I was given someone else's code with an introduced bug, and I had to fix the bug. To do so, you were basically documenting the code along the way as you traced your way through possible trouble spots. It felt far more meaningful than the typical "implement some list traversal algorithm that you won't actually ever use at this job."
I've had a similar interview and although I did not solve the bug, I think the interview was much more effective than most algorithm based interviews. Not only did my interviewers get to see how I navigate a codebase and use my IDE/other tools, they also saw how I approached known unknowns and discovered information.
Note that one often-cited counterargument to its seeming "absurdity" is that it evens the playing field in a fairer, more meritocratic way.
No matter your background, what school you went to, or what randomized experience you got in previous jobs, every person has equal opportunity to study and practice the same algorithms on their own (as opposed to being lucky enough to be able to afford a top-tier $$$ CS education, or to being lucky enough to have the connections or chance to get certain previous jobs).
And thus, when applying to jobs, it becomes something more akin to a raw-ability IQ test, which you can argue is "fairer", especially when management realistically knows developers might be shuffled around all the time, and that the extensive SQL experience they were hired for will mean nothing when project requirements switch to a basic key-value store.
On the other hand, if you are interviewing for a highly specialized position that is fairly certain not to undergo change, then it makes sense that specialized experience could rightly count for far more than any kind of generalized intelligence or ability.
> every person has equal opportunity to study and practice the same algorithms on their own
Well, that's just not the case - there are many groups of people who lack the opportunity to study and practice. A couple of examples: people with kids, people working 12-hour shifts, people without access to teaching materials, people without a machine capable of running a dev environment, etc.
It's "fairer" if you're a recent college grad, probably single, definitely no kids or other family obligations, fresh out of school, and without much to lose if you spend all your free time studying for a "fair, meritocratic" test.
This is why this whole interview thing is so absurd. The amount of days lost by engineers to relearn obscure algorithms and training on leetcode while we could be coding for things that are actually useful.
Obscure algorithms are useful.
It isn't a perfect system, but it's better than what most people propose as alternatives, which is to just have an ad hoc conversation.
Testing whether someone is willing to prepare for a thing is a relevant work-skill test too.
Knowing the existence of, and understanding the performance characteristics of a broad range of algorithms is, to my mind, a much more useful set of working knowledge than the details of any individual algorithm. The latter can always be looked up. It is harder to find out the former and much better to say “yes, there is an algorithm such-and-such that may be applicable here, let me spend 5 minutes checking that.” In my experience, anyway.
Broadly speaking, that used to be how design patterns were treated in interviews: could you recognise, describe, code, and discuss the pros, cons, and applicability of pattern X in a given circumstance? Pretty easy stuff to cram, unfortunately, which may be why they've fallen out of favour in recent times as a proxy measure for brains. But depending on the role in question, that sort of material was often more likely to be useful on the job than a similarly broad knowledge of algorithms.
>testing whether someone is willing to prepare for a thing is a relevant work skill test too.
Is it, though? I could understand if algo questions had some relation to the work you're doing, but your comment on why they're good is independent of the actual material. If we replaced the algorithm interview with a test of obscure presidential facts, would it really be a useful test to give devs? I guess it is a relevant work-skill test in that you get people willing to put the work in and game the system, but it doesn't seem to be much of an improvement (if at all) over ad hoc conversations.
There’s a vast difference between, say, preparing for a school exam and preparing for a job interview. In the former, you know in advance what material will be tested. Job interviews are more like PhD comprehensive exams, or professional licensing exams. I know a lot of working professionals who are good at their jobs, yet admit they wouldn’t be able to pass the license exam today.
Go search for "Knights on a Keypad" (a formerly-common Google interview question). Trying to imagine any situation in which the solution would be useful is harder than the problem itself.
If you're trying to find a situation where that exact solution is going to be useful, you're not thinking about the problem from the right perspective.
Handled correctly, a problem like this answers several questions:
1) Can you correctly break down a problem like this into its component parts?
2) Can you recognize the overall class of problems that this falls into?
3) Can you transform this specific problem into the more general class so that you can solve it in a known fashion?
4) Can you think about and implement the movement?
5) Can you communicate while you're doing the above?
No one cares about solving that particular problem. But the answers to the above really are relevant. Being able to map novel problems onto known solutions is absolutely a skill that any competent software engineer needs to have. "Oh, you want me to do X? That looks a lot like Y, this thing we've already solved; maybe I can just implement it in the same fashion (or re-use our existing system!)"
I'm not saying that this particular problem is a wonderful example, or that I'd use it in my own interviews. But this overall class of problems really does have a place in interviewing when it's handled well by the interviewers, and arguments against it on the basis of the specific problem being irrelevant are really rather missing the point.
If it's true that this type of problem is relevant (I doubt it), then why not ask about a specific problem you've actually had to solve?
I don't do many interviews anymore, but I used to, and I had no short supply of problems that I actually had to solve in the course of my work that I could ask about. I don't think whiteboard interviewing is great in general, but if you're going to do it you can at least try to keep it relevant.
As a matter of fact, I did have to do memoization of graph traversals at least once for work (most programmers never have to do this), and I find that problem a lot more interesting (trait matching for Rust). I could easily give a talk about that problem. As for that interview question, though? I don't always do well with time pressure, and so I can't guarantee I'd be able to answer it to your satisfaction.
Setting up the same problem in terms of trait matching for Rust is quite a bit more difficult than this toy problem when you're in a limited-duration interview setting. All of the time that you spend laying out the problem is time the candidate doesn't spend solving it, and time you spend not getting a signal. The simpler the problem setup the better, in my experience.
I've done the "ask a real problem that I've solved before" thing, and I find that it usually gets hung up on details and context around the problem to the detriment of actually solving the problem at hand. That's not to say that such a conversation isn't itself very valuable, but that's a different interview conversation, at least on my team. We find it important to maintain some level of focus just to ensure that we're covering the various signals that we'd like to get from the candidate.
I am not at all confident that I would be able to answer Knights on a Keypad to your satisfaction, due to some combination of nervousness and time pressure. I probably could if I did many hours of interviewing practice, though. Absent that, I think there would be a good chance you would reject me on the grounds that I don't know how to do memoized graph traversals, despite the fact that I'm in a small minority of programmers that have shipped memoized graph traversals to production (and in fact a few teams at Google rely on my implementation of memoized graph traversals, ironically enough).
My view is that Google's interview process seems to work because Google gets so many applicants that they can afford to randomly reject most of them. It's not because it's a good process.
Using an actual problem you solved can also work against the candidate because most problems we encounter in our daily jobs require a lot more time than the standard 45 minutes reserved for an interview (actually only 35 minutes out of 45 because there's 5 minutes of intro/icebreaker and 5 minutes reserved at the end to answer any of the candidate's questions).
So it isn't fair to ask the candidate to solve a real bug or implement a real feature in only 35 minutes unless they've seen something similar before.
This is why big companies like Google are limited to whiteboarding interviews because they need to have an interview process efficient enough to properly vet and filter >1 million applicants Google receives each year.
Personally, I think a better interview process is a pair-programming or work audition for a day. But that is not even close to matching the scale of Google.
Let's say out of a million applications, maybe only 25% are qualified. That is still almost 1000 candidates to interview per day (number of U.S. business days in 2019 is 261; 250,000 candidates / 261 business days = ~957 candidates per business day). Pair-programming or full day work audition will not be able to accommodate 957 candidates every day.
I have actually implemented memoized graph traversals for work (unlike most programmers). To begin with, I see no point in asking this when I could ask how to implement trait checking in Rust (what I had to use memoized graph traversals for), which a practitioner will find much more intuitive.
Moreover, memoized graph traversals don't get you full credit on this question. There's a dynamic programming solution, and in fact the ideal solution is one using matrix math, which is ludicrously divorced from anything most programmers would ever see.
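For readers who haven't seen the question, a hedged sketch of the dynamic-programming version (counting distinct numbers of length n that a chess knight can dial on a standard phone pad; the move table below is my own transcription of the usual keypad layout):

```python
# Standard phone-pad knight moves (digit 5 has no legal moves).
MOVES = {
    0: (4, 6), 1: (6, 8), 2: (7, 9), 3: (4, 8),
    4: (0, 3, 9), 5: (), 6: (0, 1, 7),
    7: (2, 6), 8: (1, 3), 9: (2, 4),
}

def knight_numbers(n):
    # counts[d] = number of length-k dialings that end on digit d
    counts = [1] * 10                  # length 1: start on any digit
    for _ in range(n - 1):
        nxt = [0] * 10
        for d, ways in enumerate(counts):
            for m in MOVES[d]:
                nxt[m] += ways
        counts = nxt
    return sum(counts)
```

(The matrix-math version mentioned above is the same recurrence expressed as repeated multiplication by the 10x10 move matrix, which fast exponentiation brings down to O(log n) steps.)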
So you are saying that companies are hiring people that are willing to prove that they can put "a lot of work" into preparing for interviews that have no relevance to the actual work? I can see that as a valid point, as a way to filter lazy individuals. (In the same way that a college degree is more a way to prove that you can sustain X years of learning things without dropping out).
But it will also filter out everyone that is opinionated enough to not do that stupid preparation work, and you will end up with sheep coders that will always follow the rules.
Looking at Google, Facebook, etc., this might already be the case. I'll even go so far as to say that they prefer that type of obedient coder over the ones who ask too many questions and get too creative.
No relevance would be like when I took the GRE for grad school. I studied things that had zero relevance to the CS program I ended up attending.
Reviewing algorithms for interview prep at least has some relevance to programming. While a candidate may not use that exact algorithm in their day to day job, they are creating ad hoc algorithms all day long. With that said, I don't think time pressure, white boarding algorithms is a very good job performance predictor.
There seems to be an assumption in these discussions that everyone has to prepare for these things in equal amount. But that's not the case.
(The counter to that is that if people can relatively-easily (single-digit days) cram for your interview, you're still not going to be effectively screening for at-hand pre-existing familiarity/knowledge.)
It is not illegal to use IQ tests as part of a job screening (in the US). It is illegal to screen in a way that is both discriminatory and not proven to correlate with job performance, but that applies equally to both IQ tests and algorithms questions.
Obviously I do not think that is false; you can get sued for literally anything. Do I think you would lose? Depends on whether you've done the research showing that 135 is an important cut-off.
To turn this around, can you point at the law that makes IQ a protected class or whatever you're claiming it is? I'm not going to have much luck proving a negative - Russell's teapot and all that.
This resonates strongly with me. I know a few engineers who are bad to mediocre software developers, but excellent interviewees. That skill alone lands them offers at any place they want to work. They can whiteboard algorithms like there is no tomorrow, but they can't manage software complexity.
Managing complexity is a valuable skill that should also be screened for at interview. At most top companies you are there for at least 4-5 hours so there should be plenty of time to evaluate that skill.
I think whiteboard questions are good, I want to know that this person is capable of writing difficult code if we need them to.
I also think we probably ask too many of them.
I have been on the hiring side, and my experience so far has been that the feedback is almost always close to identical across multiple whiteboard questions. The questions are also so abstract that asking multiple to "prevent bias" seems ineffectual. What bias could there be? You either solve the problem or you don't. Bias is more likely to come in on the behavioral interviews. There should be multiple of those for sure.
Generally someone is either a good enough coder or not and they will display that consistently across all the interviews. You will see the same stuff throughout (good or bad variable naming, good or bad communication etc) thus asking > 1 coding question by default is a waste of everyone's time.
It's not a false dichotomy because I am not suggesting that both skills are mutually exclusive. I'm highlighting some personal experiences where I've seen one skill is vastly overvalued compared to another.
>Managing complexity is a valuable skill that should also be screened for at interview
I've never been part of an interview, on either side, where I've seen testing for managing software complexity. Good interfaces, function design, side effects, state management, etc, are all second class citizens to finding the appropriate algorithm to solve the interview question. I've seen it over and over. The only time I've been close to a complexity management question was on a systems design question, but even then, it was only a very high-level systems discussion. I don't think most places know how to screen for it.
This is kind of strange for a non-US resident to grasp. I've been employed as a programmer three times and no one tested my coding abilities whatsoever in the interviews. No samples, nothing. Never heard of any colleague going through that either.
>This is kind of strange for a non-US resident to grasp.
I've worked for two non-technical companies as a software developer and one highly technical company, and interviewed at a few Silicon Valley companies.
The difference between the interview processes is staggering; my current job's interview was two hours of conversation, no code tests, just a general assessment of "do you know what you're doing" by the hiring manager and a couple other members of the team. The highly technical company had a code assessment then the in-person interviews had zero coding.
The SV companies must have a good reason for this, but golly the amount of coding in those interviews is nuts. I'm a process over code speed kind of coder, and I've failed every SV-level test because of it; my code comes from talking to non-technical users like medical researchers and study operations managers and tossing something together in Python or a cloud service that makes their lives easier. Needless to say, I don't go over algorithm fundamentals on a regular basis, and I generally fall out after the first or second interview.
It's especially odd that interviews are so intensely focused on those couple hours since I personally don't see any dev or any resource for that matter contributing in any meaningful way in so fast a time, or even within 90 days. I'm not sure how this problem could be solved with the limited time companies can dedicate to interviews, though; maybe rely more on portfolios?
It's quite frustrating, I've submitted my portfolio of open source projects on GitHub for interviews. I specifically told the recruiters, HR personnel, hiring managers and some of the developers that the projects contain a large enough body of work to see examples of my code. These projects are quite comprehensive and not one person looked at them or mentioned them during the interviews.
Unfortunately, people in general rarely try to first understand what the candidate offers. It's more often ONLY about whether the candidate understands the exact way the company uses certain technology.
Knowing what good work looks like is a plus, but I wouldn't be convinced by cherry-picked successes. That doesn't tell me how much you struggled with them, how many others you failed, or how much of the work was actually yours.
I think most of your concerns could be answered fairly quickly in a conversation, though. I can look over a Github repo and check out its history and ask specific questions about changes and why they were made or why design decisions were made.
The cherry-picked successes thing is certainly a problem, though, and not just for coding. Maybe it's the candidates I've asked it but when asked "tell me about when you made a mistake in a project" they tend to answer with a strength and try to re-frame it as a weakness. "Oh, I worked too hard on this project and it made me tired" isn't a weakness. "I worked too hard on this project and that made me neglect business requirements since I was too myopic to notice" is a weakness.
Sorry if that's a tangent, it's been a pet peeve of mine since I started interviewing that not many people are humble enough or have thought enough about what their weaknesses actually are, and how that affects the success of their work.
This is also my experience interviewing for embedded systems and (hardware) test automation work at electronics manufacturers. I've never been asked to code anything. (Nor have I been asked to draw any schematics.) Just conversation.
Go start a company. Hire people without doing any sort of coding interviews. Report back in a year.
The reality is that these are extremely desirable positions with a staggering number of applicants who _do not know how to code_. Not as in “I can’t solve a dynamic programming problem without studying up on it”, but as in literally don’t know what a for loop is.
The process is far from perfect and the frustration is understandable, but it works well enough as a filter from these companies’ point of view.
I imagine technical screening is necessary, but in-person coding assessments are nonsensical. It's not like programming is a spectator sport, so why are we tested on it live? I suppose the process is self-selecting, because I personally have given up responding to SV recruiters or interviewing with those companies.
Well, it started off with Microsoft in the '90s, then Google in the 2000s, and then it just became common for every company in SV to conduct these programming interviews.
It's actually not as common as it sounds from reading HN. There is definitely a bias based on the fact that the big tech companies do it and startups tend to do it. Which many commenters here have experience with.
A lot of more traditional software development roles do not have much testing. Sometimes they will have a quick online timed test or a simple question on an initial phone screen. That isn't to say that no other companies test at all, but it's definitely the startup and big tech worlds that have the majority of it.
I've been in technical positions in the UK and Hong Kong, and in both cases the interviews were very technical. Hours and hours of technical assessment and questions.
The massive FAANG-types of companies receive so many millions of job applications, they championed these types of interviews to have a more objective way to filter through all the applicants. That practice has waned somewhat at the larger companies (not totally, but it's changing), but has trickled down to smaller companies.
I received a cold call from a Lyft recruiter recently for a senior machine learning position. I asked why they were reaching out to me and the recruiter mentioned that it’s an especially hard time to locate experienced machine learning candidates, they don’t have enough applicants.
I said I was interested in interviewing but that I would only agree to a process that evaluates me based on my previous work history, and not any onsite or takehome coding projects, system design questions or whiteboard coding questions.
The recruiter said she would run it by the manager, but thought it would not be possible, and a few days later I got a rejection email.
Not really though. If your price is working in a quiet, private office with a door that shuts, or having humane treatment when you’re being interviewed, then it seems nobody would pay it.
Let’s take their word for it that they are desperate for a machine learning engineer. Then it suggests they care more about mandating workspace conditions (the financial cost of providing even thousands of workers with private offices in dense urban areas is not a realistic excuse) or about interview trivia than about business needs.
This is such an important observation. Claims of limited talent pool don't align with reality. If demand were truly inelastic, companies would negotiate.
In some cases I do think there is a limited talent pool. Unwillingness to negotiate in those cases usually means the company would rather simply suffer along with worse business outcomes, or try to invest in a totally different area of work, than to accept they are at a negotiating disadvantage and that it may be the case that painful company culture changes have to be made (like giving some people private offices and not others).
“The market can stay irrational longer than you can stay solvent,” seems apt for this.
True that. And I wonder, based on the little info I have about employment in the US: is it because it's so expensive to fire someone once you realize they are not a good fit?
Even in my country (Argentina), which has extensive worker protections, the first 3 months are considered a trial period, so you can let someone go at no cost beyond the salary already paid.
Most workers in the US are under the "at will" regime: they can be let go at any time in theory. In practice, companies (especially big ones) usually try to "build a case" to justify the move.
Probing someone with very technical conversation about past technical projects is a much stronger filter to prevent unqualified candidates than passing CoderPad tests, whiteboard algorithms, etc.
The conditional probability that you are hopelessly lacking the software skills to do a job, given that you nonetheless passed a TripleByte exam or something, is quite high. Overfitting and memorization for the sake of the test are extremely common.
But it’s much, much harder to fake competence when needing to dynamically and verbally explain technical details in a conversational interview about past work experience.
I can, with quite a high level of competence, speak about projects I had no part in implementing or that don't yet exist.
It's far, far easier to fake expertise when you have more context than the person asking questions. Technical interview questions make sure that the question giver has more context than the recipient, ignoring pathological cases.
I’m sorry but you cannot do what you are claiming. It has nothing to do with whether the candidate has more context or not.
In fact, conversational interviewing like this has very little to do with any of the domain specifics of the project. The point is to recursively keep probing for deeper technical specifics, so they have to explain at finer and finer technical levels what were the tradeoffs, why exactly were certain decisions made or how were certain problems overcome.
It is precisely the situation when someone did not have to dig into the technical weeds of a project for themselves that they will not be able to fake or fast talk their way through this type of interview.
That is the number one, defining characteristic of this way of interviewing.
>I’m sorry but you cannot do what you are claiming.
But I have. Not in the specific context of a job interview, but I have absolutely convinced technical experts that my level of expertise in a field is above my actual level of expertise. Ironically, if this were a technical interview, I'd fail it, not because I lack the skills to convince technical experts of my non-existent abilities, but because you don't find me trustworthy.
>The point is to recursively keep probing for deeper technical specifics, so they have to explain at finer and finer technical levels what were the tradeoffs, why exactly were certain decisions made or how were certain problems overcome.
But without context, you can't effectively do that. I, the candidate, am in control. I can steer the conversation to avoid areas where I don't have expertise by answering all kinds of things: "investigating that was someone else's responsibility", "well we never tried anything else and our current implementation works well enough that we never needed to", etc. You can't know if those are lies or not. There are all kinds of completely valid non-technical reasons for decisions that may be completely outside of a candidate's control.
You're either forced to completely trust the candidate, or attempt to verify their authenticity during the interview, at which point you quickly venture into the land of bias and subjectivity.
> “But without context, you can't effectively do that.”
This is simply false. You don’t need context to understand if the breakdown of a technical problem into constituent trade-offs was appropriate or not — by definition that very breakdown into constituent technical details is the context.
> “investigating that was someone else's responsibility"
This just confirms to me that you are not correct in asserting people can just skate by these discussions. If someone tells me something like that, I’ll ask them what that other person found when they investigated. If you say anything like, “I don’t know; that was their job not mine,” then you’ve lost credibility, because you didn’t put that other person’s conclusions through strong skepticism until you were satisfied you knew the details well enough that you could own or support them if you had to. That is exactly the sort of thing that indicates bullshitting.
> “attempt to verify their authenticity during the interview, at which point you quickly venture into the land of bias and subjectivity.”
This is just wrong. You don’t ever “just trust” the candidate; that’s the opposite of this interview style. Further, you never merely “attempt” to verify authenticity: you just do verify it, since it does not require context or domain specialty to analyze the reductionist decomposition of any engineering work down into primitive constituent tasks or decisions that are universal.
This approach is far less biased or subjective than appraising “how a candidate thinks” while they solve tricky puzzles in a foreign environment with unrealistic time pressure.
>This is simply false. You don’t need context to understand if the breakdown of a technical problem into constituent trade-offs was appropriate or not — by definition that very breakdown into constituent technical details is the context.
I mean, yes you do. Context tells you which decisions are important. You're asking for the context you need to assess someone's technical competency from the person whose technical competency you're attempting to assess. They have every incentive to lie, mislead, or stretch the truth to make the context they give you highlight their skills more than the real context did. And no, you can't verify that.
>If someone tells me something like that, I’ll ask them what did that other person find when they investigated? If you say anything like, “I don’t know; that was their job not mine,” then you’ve lost credibility because you didn’t put that other person’s conclusions through strong skepticism until you were satisfied you knew the details well enough that you could own or support them if you had to. That is exactly the sort of thing that indicates bullshitting.
But now you're punishing someone for organizational things possibly beyond their control. If my job was to build a thing that made use of some blackbox algorithm, and John developed the algorithm, why should I put the algorithm under strong skepticism? Perhaps that's how things work in your workplace, but there's no clear reason mine works the same way. This is just a bias against people whose development practices aren't the same as yours.
>This approach is far less biased or subjective than appraising “how a candidate thinks” while they solve tricky puzzles in a foreign environment with unrealistic time pressure.
Except that, as I've literally just proven, you've failed to verify my authenticity because you've wrongly concluded that I'm not authentic. This comment thread demonstrates exactly how you can fail to identify a good candidate with this method: if you dislike what they're saying, you'll unconsciously convert that into them being less authentic.
So again, either you take the candidate at face value in a situation where they have every incentive to lie to you, or you attempt to verify their authenticity, at which point that analysis is subject to a multitude of biases that have nothing to do with the candidate's skill level.
> “I mean, yes you do. Context tells you which decisions are important.”
No, you definitely don’t need to come into it knowing this ahead of time, and not knowing it doesn’t imply “just trusting” the candidate or being overly subjective. You will ask the candidate to explain why various decisions were important, and not stop at the top-line answer but recursively probe into it, breaking it down into concepts and trade-offs that are universal in any kind of applied problem solving.
> “But now you're punishing someone for organizational things possibly beyond their control. If my job was to build a thing that made use of some blackbox algorithm, and John developed the algorithm, why should I put the algorithm under strong skepticism, perhaps that's how things work in your workplace, but there's no clear reason that mine work the same way.”
I’m sorry, but this also just isn’t true. If you are describing your contributions to projects, and all that keeps happening is that you hit walls in your explanation where someone else did the work and you did not review that work at a high level of depth, then you’re just being misleading about your contributions at work.
Your job as an engineer in a company is to solve problems for your stakeholders, whether that means building tooling for other engineers, assisting designers with prototypes, designing algorithms for core product functionality, sales engineering for client stakeholders, etc.
It doesn’t matter how your company is structured, it doesn’t matter how the work was divided up. Your job is to know about the stakeholder problem you are solving, at a deep level, and when you represent your work to other people and you fail to offer technical depth about the trade-offs needed to solve stakeholder problems, that’s a clear mark against you as a candidate.
It’s bewildering to me that anyone would think that the way their current employer organizes assignments should reduce their burden of knowing how to represent their projects in significant technical depth. That is an always-on, never mitigated, constant responsibility for all employees anywhere. You’re not holding anything against someone if they can’t provide that in an interview... no, you’re just uncovering what they’ve lied about or embellished on a resume.
> “Except that, as I've literally just proven, you've failed to verify the authenticity of me because you've wrongly concluded that I'm not authentic.”
I see no such proof at all, and the cheeky rhetoric just makes me feel more entrenched that you are bullshitting hugely in this thread.
> “So again, either you take the candidate at face value in a situation where they have every incentive to lie to you, or you attempt to verify their authenticity, at which point that analysis is subject to a multitude of biases that have nothing to do with the candidates skill level.”
You are doing nothing but gainsaying here. You’ve made no argument that would support any of these strong conclusions, especially not any reason why this interview method faces the false dilemma between either just trusting the candidate or else succumbing to biases.
You are just asserting things, but they do not seem to be connected to or bolstered by any of the other things you’ve written.
> You are just asserting things, but they do not seem to be connected to or bolstered by any of the other things you’ve written.
Let me lay it out clearly: I asserted that I can, and have, inflated my abilities to people who have technical know-how. This was in response to you stating that "I’m sorry but you cannot do what you are claiming."
So to be clear, at this point, one of two things is true:
1. You are wrong
2. I am a liar
To you, it is clear that point 2 is the true one. To most readers, this is not as obvious. Once you have decided that (2) is true and I am a liar, nothing I say can or will convince you otherwise. But you haven't decided that based on anything factual. In fact, (1) is true here. I am not lying. I can, and have, done the things I claim to have done in this case.
I'm using this to demonstrate that your ideas about such a conversational interview don't work, by pointing out that in the conversational interview we are having right now, you've decided, based on a preconception, that what I say cannot be true! I could be the world's most successful conman, but because your preconceptions lead you to believe that your preferred interview process is effective and less biased than your non-preferred one, you won't accept evidence to the contrary.
>I see no such proof at all, and the cheeky rhetoric just makes me feel more entrenched that you are bullshitting hugely in this thread.
Right, and my point is you're wrong and unwilling to accept that. And that is a demonstration of you not being able to effectively figure out whether or not someone is bullshitting from a conversation with them. You've decided that I'm bullshitting because the alternative would require you to do a lot of introspection about how and why you analyze candidates the way you do. So it's easier to just say "you're bullshitting" and then not put in the effort. And that's certainly your prerogative, but it's not at all a good look for your interviewing capabilities.
That you're so prone to cognitive biases that you're willing to completely write off someone's experience because it forces you to rethink something you hold dear is not a selling point of the process you espouse. It demonstrates, like I've said, that the process is prone to cognitive bias and is therefore decidedly not objective.
That is, there are two possibilities:
1. You are wrong, and your refusal to accept that is coloring your perceptions of our interactions in such a way that you are not able to be objective about my experiences and abilities, as I claim.
2. I'm completely making everything I've said up and haven't ever been able to inflate my abilities to anyone. Your person-analytical skills are infallible and you've caught me.
I subscribe to (1); you continue to wrongly believe (2). This is expected; it's why your process isn't as objective as you claim.
>It doesn’t matter how your company is structured, it doesn’t matter how the work was divided up. Your job is to know about the stakeholder problem you are solving, at a deep level, and when you represent your work to other people and you fail to offer technical depth about the trade-offs needed to solve stakeholder problems, that’s a clear mark against you as a candidate.
This is, again, your opinion of how engineering should be done. Not every engineer has had the opportunity to work in a workplace where that's how things work. Are you going to write off everyone whose experience has been in a PM-led environment because they haven't had the opportunity to develop using the process you prefer? If so, that's again your prerogative, but you're probably filtering out a bunch of good engineers.
Are you talking about TripleByte exams from experience, or are you speculating that they are similar to other software interviewing processes? The questions they asked me covered what I think of as a staggeringly wide range of knowledge, including sysadmin stuff, detailed knowledge of four or five programming languages, POSIX semantics, high-level scalable systems architecture, and so on. It seemed to me that it would be very difficult to "memorize for the sake of the test".
Maybe we just see it differently, but I’d consider everything you’ve listed as exactly the sort of stuff that just gets memorized. I don’t even think the breadth of what you listed is that bad actually. Especially for recent grad brogrammer types with no serious life obligation time commitments, memorizing an entire program of study like that is probably only an investment of ~1 month of time, and has practically no correlation to on the job effectiveness once hired.
I’m speaking from ~10 years of experience running my team’s recruiting in a quant finance firm, where many interview requirements / tests / etc., came down from executive managers, so I got to see a wide range of performance on tests of all sorts, riddles, hardcore algo trivia, etc.
The sum total of all that leads me to believe quite strongly that the best signal to noise comes from super careful and tedious resume selection followed by conversational and behavioral interviews that recursively probe into more specific technical details.
I agree that any one of the questions could have been memorized, but it seems to me like it would take a year or two of constant memorization, like more than 8 hours a day, to get all of it, even assuming there was a place that had the relevant information conveniently formatted for memorization — which, as far as I know, there isn't. But maybe I'm just bad at memorizing things?
I note that you didn't answer my question. Have you taken the TripleByte exam or not?
Interviewing is a skill regardless of what field you are in. I agree that algorithms are not optimal for testing engineering skills, but there's a whiff of entitlement when engineers get indignant about having to spend time preparing for an interview. Objective methods for evaluating candidates are hard to come by, and I don't see obviously better alternatives.
I'd suggest kattis and CodeForces. I use kattis mostly, but know a number of people who like codeforces better.
Here's a few for you to try. Some of these are pretty hard, but you should be able to find solution sketches online if you google the contests they are from.
Why do developers complain so much about hard interview questions that can be supposedly be gamed by studying to the test? Every high paying industry heavily engages in gatekeeping, because the number of people who want to make 400k/year is far larger than the number of 400k/year jobs available. The traditional forms of gatekeeping involve requiring that people have the right personal/familial connections, or have an elite school pedigree (finance, consulting, law), or that they spend several years and half a million dollars in post-college schooling (medicine).
The tech industry's preferred form of gatekeeping is asking people to do algorithmic puzzles, which is far tamer and less exclusionary than what other industries do. If you believe that the only thing standing between you and 400k/year is a few dozen hours of practicing leetcode, why are you whining about it instead of taking advantage of the situation to wildly enrich yourself with a fairly modest amount of effort?
1. At least for me, I didn't know the game going in, so I wasn't prepared for it. A lot of folks I interview don't either (though it's becoming less common... almost everyone is somewhat prepared for it now), and I feel bad for them.
2. I feel like I'm a competent developer, but I'm not very good at the game, even when I practice and study a lot for it. I can barely eke through the process. And at least for me, I don't have a wide, general set of skills... I'm kind of only good at writing code, so when these artificial barriers are erected, I see future job prospects disappearing, and I don't really know what to do about it.
3. Perhaps it goes hand in hand with engineering, but I criticize absurdities and inefficiencies in business all the time. There's something particularly hypocritical about an industry which prides itself on meritocracy developing a process which not only fails to recognize qualified developers but often actively works against them.
4. I think we actually need more engineers. A lot more. Gatekeeping is preventing this from happening, leading to a general population wholly unequipped to understand and maintain the software that's taking over their lives. Programming should be a lot more like reading and a lot less like specialized medicine.
I already make plenty of money in my current job and I'm fairly happy.
If I wanted to switch companies I'd be interviewed as if I were a recent college graduate.
Anyone experienced and good at this isn't going to have time for that because they're not going to jump through hoops to increase a 275k salary to a 300k one.
So what the interview selects for are people desperate enough to study hard enough to fool the interviewer.
And this is one reason the industry leans heavily toward young privileged male candidates and loves to reinvent the wheel every six months.
If you're experienced and good you can do way better than 300k at some companies. Also if you think spending a few hours practicing leetcode makes you desperate, wait till you hear the hoops people have to jump through to become a doctor or investment banker.
There are more interesting things to do: coding your side project, contributing to open-source projects, learning a new programming language, or even reading random GitHub project code and issue trackers. All of these activities improve your programming skills more than leetcode does. If I am making the max for my local area and I am not willing to relocate, why would I waste time on interview preparation?
I don't know why you are replying to the parent poster at all. If you don't want to interview for a job in a new location, or for more money, then the advice to study isn't meant for you. Sure, some people find studying fun for its own sake, but the purpose of studying is the interview, which you just said you don't even want to do.
What I am saying is that there are more interesting activities that would actually improve your skills. Leetcode and drilling interview questions are a waste of time even if you are looking for a job. There are places that hire based on a take-home interview, onsite pair programming, or your portfolio/recommendations.
>I'd be interviewed as if I were a recent college graduate.
>leans heavily toward young
So, it's a problem to interview everyone the same way? But it's also ageist because the interviews are accessible to people with no industry experience yet?
I think because some people can't train themselves up through the gate. This is not because they can't figure out the leetcode problems, but because they have more difficulty dealing with the kind of pressure you see in interviewing situations than others.
If you have a hard time working under pressure, there's sadly not many viable alternatives. Any process that has a very low % success rate will inherently be stressful. Also, being able to perform under stress could be seen as a desirable skill too.
It's not that I can't perform under stress. It's that I can't perform under the specific stress of being in a new location having 5-hour one-on-one interviews, because I have Asperger's.
> However, whether or not a candidate answers a question correctly is not the only source of signal during an interview. You can also evaluate their process by, for example, observing how long it takes them to finish, how clean their code is, and how much they struggle while finding a solution. Our analysis shows that this second source of signal (process) is almost as predictive as the first (correctness).
I seem to do OK when an interviewer:
- asks me a question that sounds like a real problem, not a contrived one (although on occasion I'll have fun with a contrived puzzle if the interviewer has a sense of humor & makes the process light hearted)
- doesn't push me down a path that requires me to implement a "simpler" solution I'd never consider (e.g. asking me a question that clearly wants an O(N) solution & then pushing me to try the O(2^n) solution first)
- talks like a person with a problem, and not as someone who clearly knows what they want & simply won't say it
- doesn't try to "see how I think", because I code as much in my head as I do on a screen, meaning most of the code I throw into a text editor is the latest thought in a stream of random ideas until I get to one that works
- doesn't constantly interrupt me
- states their actual expectations, such as "I don't expect you to finish, what I'm really looking for is X"
As a frequent interviewer I will say that you are not alone in that: any interviewer that isn't following the above guidelines is unlikely to get a good signal from anyone.
To your final point about expectations: one question that we try to ask ourselves about any particular interview question is "how many bits of information are we getting out of this?" Trivia questions are usually less than 1 - you'll find out if they know some trivia, but you don't honestly really even care. The best questions are those that get you several bits over the course of the interview period, rather than a single yes/no answer at the end. Questions where there is no final end point, or where we explicitly do not expect candidates to finish, are often the most effective; we can explore the pathways that are working, dive into side channels that seem interesting, and get data along the way rather than just a checkmark on some individually useless problem.
So much this^^! Like everyone here is saying, interviewing itself is a skill, and part of that skill is how the interviewer communicates and takes part in the problem (or lack thereof). Every point you mentioned is an issue I've seen as well.
regarding the simpler solution, i just had that situation today. working on elevatorsaga.com, the candidate clearly wanted to solve the full problem correctly (the way a real elevator works) from the get-go. however, doing so would have taken him several hours without him knowing if he got any closer to a working solution.
i had to push him to try something simpler first just so that he would get to somewhere meaningful within the hour available.
so which approach is better depends on the goals. in my case i wanted the candidate to solve the basic problem first: (make every person get to their destination, no matter how long the elevator needs) and then optimize to make it go faster.
I hear what you're saying, I do. But if the candidate knows what an optimal solution is, and that optimal solution takes several hours, maybe you're asking a question that is too big?
For example, asking someone to write a self balancing binary search tree in an interview might be too much. And asking a question that ultimately demands using a self balancing BST might be a bit much. It's like asking a question that really needs an associative array, but then imposing the rule that associative arrays aren't available. How about simply asking the candidate about binary trees, if they've used them, and what they've used them for?
Last week an interviewer asked me if I was familiar with the Egg Dropping problem (https://brilliant.org/wiki/egg-dropping/). I wasn't, but I remember saying something like, "I haven't done this one before, but I'm pretty sure it's going to take O(log n) guesses to find the max floor from which you can safely drop the egg." The interviewer asked me how I'd implement it, and then we got into the weeds when I started asking what the requirements were. He wouldn't share any information about inputs & outputs, so I just started writing the dumbest thing possible, predictably coded myself into a corner, and burned a potential interview win-win on learning how to answer that kind of question for that kind of interviewer.
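For readers who, like the commenter, haven't seen it: the egg dropping problem asks for the minimum number of drops that guarantees finding the highest safe floor with a limited number of eggs. It has a standard dynamic-programming solution; here's a minimal sketch in Python (function and variable names are mine, not from any particular interview):

```python
from functools import lru_cache

def min_drops(eggs: int, floors: int) -> int:
    """Minimum drops that guarantee finding the highest safe floor
    in the worst case, given `eggs` eggs and `floors` floors."""
    @lru_cache(maxsize=None)
    def solve(e, f):
        if f == 0:
            return 0   # no floors left to test
        if e == 1:
            return f   # one egg: must scan linearly from the bottom
        # Drop from floor x: if the egg breaks, search below with e-1 eggs;
        # if it survives, search the f-x floors above. The adversary
        # (worst case) picks the more expensive branch.
        return 1 + min(max(solve(e - 1, x - 1), solve(e, f - x))
                       for x in range(1, f + 1))
    return solve(eggs, floors)

print(min_drops(2, 100))  # -> 14
```

With unlimited eggs this collapses to binary search, i.e. O(log n) drops as the commenter guessed; with only two eggs the well-known answer for 100 floors is 14.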
A while ago we had a job applicant who had travelled very far and long to reach us. (literally from the other side of the world.)
So, as a courtesy, we figured: why not spend a few extra hours with this applicant on the programming test. We set up a laptop with a clean Ubuntu install and devised a programming test that was quite involved. Not algorithmically hard, just more complex than what can normally be done within a 20-minute whiteboard interview. We expected it to take at least 2-3 hours. Google/Stack Overflow/etc. access was allowed and encouraged. "Just act like you normally would when solving a problem."
We spent like 2x4 hours devising this problem, based on our codebase (cutting out something somewhat easily digestible and making it able to run standalone).
It took like one hour to get productive. Explaining the problem, setting up editors, compilers, etc.
We took turns, but most of the time someone in the interview team (of two) sat next to the guy. We did give him some alone time.
This is probably nothing new in terms of interviewing techniques, but to us it was such a revelation. We learned so much more about the applicant. Perhaps it worked well with this guy because he happened to be a bit more outgoing than our typical successful applicant. We'd never felt so confident about giving someone an offer before.
I'm really looking forward to testing out this approach with local candidates to see if we can replicate this "data gathering success".
This can be problematic: you waste a lot of time setting up the PC and explaining the problem.
Last time I was involved with this interview style, it always seemed to take an hour to get set up, which meant a long interview of 3-4 hours, particularly if a candidate went down the wrong path.
In the end we optimised for SOLID principles with a blackbox dll that had a function that slept for 2 seconds and a calling class that had mixed responsibilities (logging and calling the dll). We started folks off with a test or two and hoped they'd inject a mock to get rid of the delay and split logging off into a separate class.
I'm not saying it was a great test, but you could do something within an hour or so, then maybe spend half an hour talking through what techniques they'd use for more complicated scenarios.
If you can afford the time then more realistic testing is great and I do think you should try.
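As a rough sketch of the exercise described above (translated to Python rather than the original .NET dll, and with all class and method names invented), the idea is a slow blackbox dependency plus a mixed-responsibility caller, where the candidate is expected to inject a mock and split out the logging:

```python
import time

class SlowService:
    """Stands in for the blackbox dll: every call sleeps for 2 seconds."""
    def fetch(self, key):
        time.sleep(2)
        return f"value-for-{key}"

class Client:
    """The mixed-responsibility class from the exercise: it both logs
    and calls the service. Accepting the service and logger as
    constructor arguments is what makes the test fast."""
    def __init__(self, service=None, logger=print):
        self.service = service or SlowService()
        self.logger = logger

    def get(self, key):
        self.logger(f"fetching {key}")
        return self.service.fetch(key)

# In a test, inject a fake service so the 2-second delay disappears:
class FakeService:
    def fetch(self, key):
        return "fake"

logs = []
client = Client(service=FakeService(), logger=logs.append)
assert client.get("x") == "fake"
assert logs == ["fetching x"]
```

The point of the exercise, as described, is simply whether the candidate reaches for dependency injection at all rather than letting every test run eat the delay.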
There's a problem here. The only thing that Triplebyte can claim based on their data is that easier programming questions are more predictive of performance among candidates who received an offer. Since candidates who get offers are (in theory) different from candidates who don't get offers, we can't necessarily generalize from one population to the other.
There's also a question about how to mix question difficulty. Should you ask nothing but easy questions, or is it good to throw in a harder question or two to see how the candidate reacts to something they can't answer? I can see a good interviewer getting a lot of signal out of that, but in the hands of a bad interviewer it would not work well.
You touch on a very important part of this equation: there are good interviewers and bad interviewers.
A good interviewer, in my opinion and experience, will try to get to know the person first. They'll put the candidate at ease. They want to try as much as possible to be talking to the person they're possibly going to be working with, not the anxious candidate that just walked into the room.
i am interviewing this week. at the end of the interview i ask the candidates how they felt about the interview process. i was mostly interested in finding out how they felt about having to do live code challenges while i observe them. (i let them "play" elevatorsaga.com for an hour and then implement a small project of their choice from freecodecamp)
yesterday's candidate told me he felt very relaxed, much different from all the other interviews he was doing.
and yes, figuring out if i want to work with this person is my primary goal.
Yep. Lots of "data analysis" without even defining what would be a "false positive" (someone who did well in an interview and got an offer but turned out to be a bad hire? how is TB exposed to that critical data exactly?)
i wonder about the definition of "easy" vs. "hard" questions. it seems to me that this is problematic. i mean, is there a well accepted mapping of all computer science related questions to a numerical difficulty rank?
i mean what if we did that for math? some people would be good at geometry questions, others at algebra, others at topology, real analysis, etc.
what if some great geometer interviews with a fanatical algebraist and then fails to make the cut? this candidate is a great geometer! were the questions "too hard"?
I think that asking a candidate to perform a code review can be an effective method of evaluating quite a few desirable qualities. Can they understand someone else's code? Can they engage in constructive critical discussion? Are they able to effectively refactor something to make it better? Can they spot mistakes and do they have an opinion about how to avoid such mistakes?
I know you're joking but... If i had never heard of a binary tree, saw it for the first time in my interview, and didn't at least have an idea of how to attempt solving it, I'm probably a lousy hire. The quality of attempt is what these hard questions are really about.
Looks like I am gonna have to look at the solution, as I don't fully get how to achieve that structure. Recursion is good for a few standard problems, but those are just standard problems. We shouldn't try doing every possible thing using recursion, even if it is possible to do so. Also, debugging an iterative solution comes naturally to me compared to recursion. I guess I don't have the right mental model for tracking recursion/recursive calls.
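The specific problem isn't given here, but as a generic illustration of the recursion-vs-iteration point: any recursive tree traversal can be rewritten with an explicit stack, which some people find easier to step through in a debugger. A small Python sketch (names are my own):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder_recursive(node, out=None):
    # the "natural" recursive form
    if out is None:
        out = []
    if node:
        out.append(node.val)
        preorder_recursive(node.left, out)
        preorder_recursive(node.right, out)
    return out

def preorder_iterative(root):
    # same traversal with an explicit stack; each pop plays the role
    # of one recursive call frame
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        if node:
            out.append(node.val)
            stack.append(node.right)  # pushed first, visited last
            stack.append(node.left)
    return out

tree = Node(1, Node(2, Node(4)), Node(3))
assert preorder_recursive(tree) == preorder_iterative(tree) == [1, 2, 4, 3]
```

The explicit-stack version makes the "pending work" visible as data, which is one way to build the mental model for what the call stack is doing in the recursive version.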
There's a classical test where candidates are given a never-before-seen programming language (invented for the test) and asked to reason about some code. I've never seen that in industry, though.
That would be a rather interesting challenge for an interview; I'd love to be able to try it someday.
I recently did something similar, that was of my own devising.
At one point several years back I wrote down some "code" in my own "shorthand" form; it was meant to implement a library and some test code for a microcontroller project I was contemplating at the time. I basically wrote it in such a fashion so that I was quick to get my main ideas down without being too "wordy" (whether in code or otherwise).
Then I put it away, and didn't revisit it again - until recently.
A couple of weeks back I found that code again, and looked at it - worried that I wouldn't be able to recall my shorthand or what I was thinking; in short, worried that my ideas would be "lost".
I looked over the code, walked thru it in my mind - and after a few minutes it all came back to me, and I was able to understand again what I had originally created (and where I could make improvements as well). It both left me feeling optimistic about the process, as well as a bit excited that I was able to remember it and improve on it - that I didn't have to worry about it, and that my shorthand "pseudo-code" was legitimate enough that it could have been real code for all that it mattered.
i did something similar with junior candidates a few years ago. put them in front of a piece of code and ask them to fix the problem. not a code review, but debugging. that covers many similar qualities.
i didn't expect critical discussion because they were not native english speakers. (in fact they hadn't had an opportunity to even use english outside of talking to their teacher, so when they were able to engage in friendly arguments over how to solve a problem, that was quite an accomplishment, even more so for asian culture which is generally rather submissive (you don't argue with your boss))
Yeah that's what we do. As a developer you spend a large amount of time reading/reviewing existing code and making changes to it. I'm surprised there aren't more companies that test for that skill.
I see one driver of this trend of asking programming questions that require a lot of memorization: for the past ~10-15 years we have had people in the workforce (and thus acting as interviewers) who went through a public education system where heavy emphasis was placed on passing tests that required a lot of memorization.
I'd be interested to know how many of these interviewers actually think they're able to identify a solid candidate this way? Not to mention, are they even factoring in how many people don't test well but are otherwise superb software engineers?
Ultimately it seems like there is a soft element to interviewing that is being tossed out now, which is: do I think we can work with this guy/gal? Are they someone that can become part of our team on a personal level? Can they get good work done? Fizz Buzz can't tell you that. What can tell you that is experience. It's a hard-to-put-your-finger-on-it X-factor that I think companies think they can ignore.
> Ultimately it seems like there is a soft element to interviewing that is being tossed out now, which is: do I think we can work with this guy/gal?
A colleague offered this simple heuristic for the soft side of [the] interview:
"If they are going to be equal or junior teammates, I ask myself, 'Would I feel comfortable sitting in a conference room with this person and hashing out a design or troubleshooting a problem for two hours?'
If I'm evaluating someone who is going to be senior to me, or my direct boss, I ask myself, 'Would I feel comfortable following this person's technical instructions if they handed them to me in a document?'"
In my personal professional opinion, this measure is more useful to an engineering org. than any "Stump the Chump" style technical-trivia screening.
So much this. I've been programming for 30+ years. My brain only has so much cache space and it dumps frequently. Asking me questions that clearly are testing my ability to hold large amounts of data in my meat computer isn't testing my ability to design and write computer programs.
Sure, I'd love to have a "mind palace" like Sherlock. Alas, I do not. I often admit this as early in the interview process as possible to avoid wasting time.
If anything, it's testing your ability to misdesign computer programs.
I hate working on software that was written to be read by an audience with perfect recall, because, as a human, I just don't have that. Give me code that assumes I have the memory of a goldfish, and can't keep track of anything that isn't right in front of my face.
I'm pretty sure that's what half of Dijkstra's papers were trying to say, weren't they?
The black hole that gets me is a love of complicated things. It seems like, given a choice between two different things that are equally capable of solving the problem, 9 out of 10 hairless apes will pick the one that has more switches and knobs.
I haven't tried literate programming, but one of the things that entices me about it is that my instinct says that it's a vaccine against unnecessary complexity. If you can't express it comprehensibly in both English and code, there's likely an easier way to do it.
At a previous company we interviewed by giving the candidate a choice of a dozen or so problems. All of medium-ish difficulty. Then we together sat at a pairing station and paired on the problem for 2 hours. Candidate can use the internet and anything they want really, as close to real coding as we could make it. Two hours was enough to get into the problem a bit, and get a feel for what they would be like to work with and a feel for how they approach problems. Still not perfect, but by far the best interviewing method I've found.
We had pairing stations, so usually sat at 90 degrees. Each person gets their own desk, monitor, keyboard and mouse. Also this company was huge on pairing, so it was also an indicator of what working there would be like.
I usually ask what your strongest language is, then ask questions about that programming language.
e.g. if it is Python:
> how would you explain the with statement to a junior developer ?
then increasingly difficult questions that go into the language runtime/concepts.
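For instance, a sketch of the kind of answer that question is fishing for (a toy context manager, not any real library): `with` calls an object's `__enter__` on entry and guarantees `__exit__` runs on the way out, even if the body raises.

```python
# Toy context manager that records its lifecycle in a list, so we can
# see that cleanup happens even when the body throws.
log = []

class ManagedResource:
    def __enter__(self):
        log.append("acquire")
        return self

    def __exit__(self, exc_type, exc, tb):
        log.append("release")
        return False  # don't swallow the exception

try:
    with ManagedResource():
        log.append("use")
        raise ValueError("boom")
except ValueError:
    pass

# "release" ran despite the exception; that's the whole point of `with`.
```

A junior developer who can explain why `release` still appears in `log` has understood the feature; memorizing the protocol method names matters much less.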
one other favourite question of mine is:
> Imagine you've got a standard website that serves data from a database. When a customer types the URL into the browser bar, what needs to happen before the customer sees the website?
> Go as deep as you can in answering the question.
When you got an answer, you'll see frontend engineers explain more about the browser, while backend engineers talk more about the backend.
There was one very senior engineer who actually talked about the Ethernet layer; he talked for more than 15 minutes. Most mid-level engineers are done in 5 minutes. ;)
> Imagine you've got a standard website that serves data from a database. When a customer types the URL into the browser bar, what needs to happen before the customer sees the website?
That's a favorite of mine too. It sets up a broad range of discussion without needing to spend much time setting up hypotheticals and lets the interviewee really delve into topics for which they have expertise and/or interest. It's also easily adapted or sets up follow-up questions for different positions or levels, e.g. "with TLS," or "how would you debug a problem with symptom x?"
exactly; you can have very interesting conversations, in a setting the interviewee is most comfortable with, since the assumptions/details are determined by them.
it puts a bit of pressure on you as the interviewer, since you need to adapt your questions. but in general i found it quite easy when you have a rough plan of the topics to ask about.
I'm like the senior engineer you mentioned. I'll spend as much time as possible on them to see if I can wear out the attention of the interviewer. They usually give up by the time I get to how SSL handshakes work.
There is so much that can happen between the browser making a request and receiving a response, that someone could completely gloss over the OSI model and still talk for a very long time.
I've toyed around with starting my answer with what happens when the physical return key on the keyboard is pressed. Could spend quite a bit of time on what happens before the browser even knows the return key was pressed :D
> Go as deep as you can in answering the question.
I'm not a fan of this because it wastes time. It's better to give an overview of the steps, and then if more information is required the interviewer can ask for it. They can keep asking for more detail until they are satisfied the person knows enough, or until the person cannot answer any more.
There are two kinds of candidates:
1. One who can solve puzzles but is not willing to do the dirty work with the team: solving production issues, debugging, bug fixing, the usual stuff.
2. Another who is willing to learn and is ready to work with the team and do the dirty work.
I am on a hiring committee as a Tech Lead, and I always try to weed out type 1.
It works great; we hire people as interns and then assess them. Someone from Google whom we hired full-time was detrimental to the team's morale: grunting, complaining about code, complaining about the food, and what not.
Another experienced smartass was self-centered about his skills and wasn't willing to teach junior engineers anything or to admit he needed to update his skills. The moment he realized his skills had no value, he started attending pointless conferences. Now his LinkedIn profile has "aware of blockchain technology" and "attended machine learning seminars".
I said no to interviewing at Google and FB because I don't have the cycles to spend months on Leetcode. Did I err? Perhaps. But I am sure neither can provide me the same quality of work I do at my current mid-size company. I regret nothing :).
For me it's not about evil; it's pure business. If the other party has no need for my skillset, I ain't gonna put in my time to change my skillset to suit theirs.
The best interviews I've been a part of have been a short cultural-style interview followed by a (roughly) 2-hour paired-programming session on a relatively simple to-do application.
1. The problem is pretty well understood (but does offer room for interpretation).
2. Provides time to cover all key aspects (Frontend, Backend, Database, Networking, Debugging, Testing, Caching, etc) in at least some capacity. In particular, it shows you what areas the developers focus on.
3. Provides a more relaxed/realist environment. It's also more accommodating to developers switching stacks - familiar with good programming patterns but not the specifics of stack (e.g. "Here's how I'd do [some specific task] in [other stack]. How do I do it here").
4. It's clearly a throw away task so there's no concern about "interview labor". It can also be pre-prepped so you don't have to worry about jumping too far in.
5. You can cut short with bad candidates and expand the problem for more complex candidates.
I got bit in the ass by this one at Triplebyte itself. They asked me to make a tic tac toe game and gave me, IIRC, 30 minutes (less?) to do it. Except it wasn't "build a tic tac toe game"; first it was "draw a board to the console," then "take user input from the console," and so on: a bunch of instructions on a convoluted path that perhaps another engineer would take if they knew from the outset that the goal was a tic tac toe game in 30 minutes, but not me.
So we'd get to a portion where I'd be writing a quick test on user input, or extrapolating something to a function, and the interviewer would say "don't worry about that, just worry about {getting the grid to print to console or whatever}."
Later on I got my feedback and they said they were disappointed with my user input tests and repeated, extractable code in the tic tac toe portion.
Triplebyte is trying to do good things in the interview space but I think they're still learning. All in all my interview with them was about as positive an experience as a harried and bad interview could be, from my perspective.
Same experience with the Tic Tac Toe, then again with the rest of the interview.
There were a lot of Googlable boilerplate questions (e.g. "what does malloc return?", "what's a bloom filter?") that, as a product engineer, never come up.
Then there were the classic Big-O notation queries that for most use cases don't come up until much later stage. It felt like the founders were classically trained in CS and over-optimizing for things that aren't practically relevant for the large majority of early/mid-stage startups.
Am I familiar with these concepts—e.g. can I go back and refresh myself when they come up?—absolutely. But often times the skills you'd want in an engineer are:
1. Knowing when to optimize
2. Knowing how to profile and identify bottlenecks
3. Familiarity with the available solutions
4. Ability to dig in and evaluate which is the right tool for the job
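Skill 2 is the easiest of these to make concrete. A minimal sketch using the standard library's cProfile (the two functions are made up for illustration): measure which implementation actually dominates before touching either one.

```python
# "Profile before optimizing": run both candidate implementations
# under cProfile and let the numbers, not intuition, pick the target.
import cProfile
import io
import pstats

def slow_concat(n):
    s = ""
    for i in range(n):
        s += str(i)              # repeated string rebuilding
    return s

def fast_concat(n):
    return "".join(str(i) for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_concat(10_000)
fast_concat(10_000)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
# The report names the actual bottleneck; optimize that, not a guess.
```

The interview-relevant part isn't the profiler incantation (which is look-up-able) but knowing that this step comes before any optimization work.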
This is particularly pernicious, because it's a trick question, too. On Linux with default overcommit settings, malloc will essentially always "succeed": even if you ask for 4 petabytes of memory on a 128 MB (64-bit) system, malloc can hand you back a non-NULL pointer, and the failure only surfaces later, when the pages are actually touched and the OOM killer steps in.
If I were asked that, I would say something like "malloc attempts to allocate some memory on the heap and return a pointer to it." If pressed for more details, I would say it depends on the allocator and that we should look at the documentation for it to find out.
If that was an unacceptable answer, then I'd consider the interview a waste of time.
I took shortcuts in my Triplebyte coding portion of the interview, assuming we would iterate and expand on the solution. Instead we just moved on to algorithms, and I got heavily dinged in my feedback for the shortcuts I took (things like hard-coded strings in some places to get the UI working before wiring up real data models).
Telling people not to worry about parts of the problem is often a time management strategy. They want you to move on so you have a chance to show strength in other parts of the problem, or at least come away with the positive experience of finishing something, even though it may count against you.
I had the same tic-tac-toe q, I felt that the instructions were pretty clear: “Accomplish this one thing, as simple as possible, then generalize to actually work.” To me it felt like I was getting specs to write a class and that’s what I did.
I had a pretty great experience and would recommend TripleByte.
You failed your Triplebyte interview because you neglected an extremely important aspect of the job: communication. You made assumptions about the ask which turned out to be grossly out of tune with those of the interviewer. In the real world, engineers are often left holding the bag when other participants of the process leave out important details. It’s our job to ask questions and establish the boundaries of each problem before diving into a solution.
> engineers are often left holding the bag when other participants of the process leave out important details
That sounds like a process problem and not an engineering problem. If I've been given requirements, I'm going to trust that my project managers and stakeholders have done the due diligence to understand their request.
Also you've got to realize that different developers tackle issues in different ways. Not every engineer is going to be super talkative while they're in the mud trying to get something to work until they've hit a wall that they don't feel like they have enough information to overcome. I think expecting an engineer to sit there and talk through every aspect of their reasoning WHILE working is fundamentally counter to the way that most engineers perform their day-to-day jobs.
At my consultancy we recently streamlined our interview process:
1. Phone screen which takes 15 or 20 minutes.
2. The candidate fills out an essay, including showing us some code they're proud of.
3. If the essay ticks the boxes, we conduct a 1-hour on-site interview. We use the same set of questions for every candidate, so the investment is easy to manage and our team has a shared set of expectations on what is good or bad.
4. If the interview goes well, we give them a take-home assignment. It takes between 2 and 6 hours, depending on how experienced the candidate is. The problem is in C, Python, or both.
5. We wrap up with a 2 to 3 hour onsite interview. We walk through the assignment and have a deeper conversation about culture and fit.
The results have been positive for us: we've made some great hires and weeded out some candidates who weren't a good fit.
We've also been able to scale it down to the process we use for interns.
The 1 hour interview has some typical programming interview questions, but we wrap them into a real-world example. The goal isn't to prove they know how to program, but more about allowing them to show us how they think/work out a problem.
If you want to do it for junior developers, fine, but as a senior dev I'm not doing any take-home assignment. It's an immediate pass from me. There is very little you can glean from my code other than picking it apart for code-review bullshit. I'm not going to waste 2 to 6 hours on this; done it before, never again.
And this is why I built a site to allow people to search for jobs based on interview type. Although I haven't done any proper marketing yet, still a work in progress (https://softwarejobs.xyz)
I’m sure it works out well for you in that the candidates that pass can probably do the job. But, you basically have lost me by the time you’re asking for more than a 20 minute recruiter call, 60 minute phone screen, and a ~5 hour onsite.
My group likes starting with easy questions and ramping up the difficulty, not to eliminate people with wrong answers, but for two other reasons: first, to see whether people understand the questions and whether they try to make up answers, or ask questions, or say “I don’t know”. Second, to see what the limits & boundaries of their experience is. We know that people don’t know everything, and we measure more for potential than for knowledge, but it’s still useful to understand someone’s experience and exposure level.
More important than question difficulty to me is attitude, and I’d love to see whether attitude is measurable and how it compares to later performance, but curiosity and optimism and communication really do go further than right or wrong on math and engineer questions for me. That point might even be tired already, I know people say it all the time, but I’m going to keep saying it because we still have blog posts on question difficulty, when easy vs hard engineering questions are pretty low on my list of what matters when I’m hiring.
At the end of the day, the question itself is a tool, not the goal. When interviewing, I look at how the person approaches a problem and works through it. I don't really care if they get to the end or not.
People who complain about memorization and difficulty are kinda missing the point. Just like learning math at school isn't really about knowing how to do trig, but being able to think logically and do problem solving.
A few jobs back I was tasked with hiring new developers to bolster a thin front-end team. The job was very CSS/JavaScript heavy, so I asked questions that were pertinent to what the candidate would be doing if hired. Of the five candidates, only one answered all the questions perfectly, and he turned out to be the biggest bust for us.
The other candidates, after answering some of the harder questions incorrectly, seemed very upset with themselves. They knew they were cracking a bit under pressure, but actually showed that they knew the answers when we chatted further. I hired 3/4 of those people because of how well I felt they'd do given the opportunity. All three became leads within a year and a half.
I think personality has a lot to do with outcomes. If you are someone who shows they are hungry to learn and knows how to improve their skills, I will never dismiss you for screwing up a few coding questions.
Personality has a huge amount to do with outcomes. Especially in teams, where the best teams have a mix of complementary talents.
If you're hiring for Generic Developer Skills you're going to get generic developers - and much less development than a more flexible approach would give you.
One interviewer of mine asked something I didn't know, and I said there was no doubt I'd figure it out via Google. The interview pretty much ended there, and I'm glad it did! Any place or interviewer that says you shouldn't use Google or Stack Overflow to get your work done is no place I want to work.
But that interviewer is the type of developer I don't want to work with: a know-it-all who only acts like he knows it all and secretly uses Google. Who puts others down to make himself look good. AKA an insecure P word I want nothing to do with.
I got my yearly review recently and I got very good feedback. At the same time I have been doing Leetcode at home for fun, starting with easy problems, and I get my ass handed to me.
I find it hard to reconcile these two experiences. How can I thrive at a top tech company while failing to solve an 'easy' coding challenge? It makes me concerned about what would become of me if I had to look for a new job now.
Now the ads ask simpler things like floating-point precision and function variable scoping (https://www.facebook.com/triplebyte/ads/?ref=page_internal); legit problems, but I'm not sure they're an indicator of how good a developer someone would be in the real world working on a CRUD app.
You nailed it. For someone who needs to determine the endianness of a machine on the fly, the question is trivial to solve; for everyone else, it’s trivial in the sense of “unimportant.”
Their pre-screening quiz uses a bunch of questions on various technologies to ensure that some subset of the questions tests you on technologies you're familiar with. Which is good, since not everyone is a web engineer.
I don't think there would be too much correlation between CS degree and ability to answer that question. There are lots of CS degree programs that don't deliver such low level instruction, and there are plenty of people without CS degrees who can answer that question because they were curious about how computers work or have been exposed to low level programming.
The bigger problem is the inconsistency amongst interviewers when judging candidates. All these articles from TripleByte and Gayle (who've built business on the flaws of interviewing) focus on the questions instead. Doesn't matter how hard the question is if the interviewer knows what they're looking for, are experienced enough, show no nepotism and have good communication skills.
My worst interview ever was with Facebook, when a non-native-speaker new college grad gave me a Leetcode hard problem in half-broken English and went back to his work without even looking up or walking through the problem with me.
If they want something that mimics the common, everyday work:
Give the candidate a project with 300,000 loc, tell them to make the most local change possible that fixes the reported bug. Update the tests to reflect the new logic.
Bonus: discuss architectural changes that would have resolved the bug and/or improved performance.
> Hard questions do filter out bad engineers, but they also filter out good engineers (that is, they have a high false-negative rate). Easy questions, in contrast, produce fewer false negatives but more false positives.
The philosophy at Google is that it's better to filter out 3 good engineers than to let in a bad one. The consequence of this is that it's really hard to get kicked out of Google.
The other part (whether it's better to use longer, easier questions to see how the candidate works on a large code base) is an orthogonal question, and that part may be true, depending on what type of engineers somebody is looking for.
> The philosophy at Google is that it's better to filter out 3 good engineers than to let in a bad one.
Corollary: "If I follow (what I've heard to be) Google's hiring practices (despite not having their brand recognition or candidate pool or a comparable engineering environment to offer) -- then my company is on the way to becoming another Google!"
Sure, it's something not to be copied by a small company. But it's important for good developers to understand that they can be great and still fail the process (they just have to prepare again and reapply).
> The philosophy at Google is that it's better to filter out 3 good engineers than to let in a bad one.
Google's problem nowadays is that they have such strict standards only for grunt developers, not for management. In fact, it looks like it's the opposite there: better to let in/promote 3 bad managers/directors than 1 good one.
This is sadly true, and even more true for PMs; the average engineering skill required to be inside the organization has gone down. When Eric Schmidt was selected as CEO, having a strong engineering background was a requirement for him, and he was great at connecting with engineers inside the organization (probably not as great at connecting with sales; they had to accept that Google was an engineering-driven organization back then, whereas now it's a mix).
> The philosophy at Google is that it's better to filter out 3 good engineers than to let in a bad one.
Just fire the bad ones. You're going to get bad ones anyway. At larger companies you might never notice whether someone is good or bad.
From the way they structure their interviews, it seems like they'll still get plenty of bad ones - it's just they'll get bad ones that are great at algorithms, with unknown skill at everything else (like the actual work done).
Who and what decides what a bad software engineer is (note this is very different from deciding who is a good one)? Is it a manager? A manager is a single person, so they shouldn't be the sole deciding point. So you need to gather feedback, aggregate it, and decide what constitutes "bad." Then have that feedback reviewed by independent people to ensure no one is being unfair. That takes time and bureaucracy.
For me not being afraid to get kicked out of my workplace was a good thing. Also most of my colleagues were amazing, I miss talking to them, but at the same time I don't miss what the top management has become.
Another solution to this problem is contract to hire. I realize this is kicking the can down the road to the contracting firm but hear me out: that's the business the contracting firm is in. They can get really good at their hiring practice since that's their core business. That's not our core business. We've been doing this for the past two years and it's worked out great. Now you can see how well people do the actual job and if you're not satisfied, which happens from time to time, just get someone else.
Why would I leave my W2 job for your contract to hire position? All I can see here is a company that doesn’t want to fire people who aren’t working out, possibly to save on benefits and unemployment insurance premiums. And why do you have so many people not working out that you have to do this? None of this inspires any confidence.
Yeah, in my experience asking the basic question of why I would even entertain moving from my W2 job to a contract position where I assume 100% of the risk is met with dead air on the other end, followed by a hasty assurance that we don't really have to go the contract route.
It gives me the feeling that I will be working with people who either don't know their own value and so are willing to agree to this kind of deal, or are so poor performing they don't have the leverage to insist on a regular employment contract.
Based on some of the responses here, I just want to clarify some things in order to make this work.
- Yes, you have to have vetted your contracting companies. In our case that's something we'd already done (roughly 20%-30% of our staff are contractors)
- This is contract to hire, which is different than bringing on contractors for staff augmentation. The intent is to hire, and the applicant is made aware of that up front, to the extent that their employee salary is negotiated and the employee benefits package is discussed.
- W2 employees jump at the chance to be given an opportunity to prove themselves in a new domain. Younger and older employees are especially attracted to this option.
- You're still doing an interview, but the nature of the interview is different since you know the contracting firm has vetted their basic skills.
- So far we've only had to let one person go; they simply never could "gel" with the team. We've had two others we had to let go because we were scammed: the person who did the interview wasn't the person who showed up to work. Yes, it happens. What we don't have is a "revolving door" situation where people are continually rotating in and out.
How does this help? You still have to find people to contract, so you still have to screen the contractors. Every one that fails is a huge waste of your time. You've got to spend time telling them what you want them to do, explaining your systems and APIs, and training them in your way of doing things, and if they fail then all that time is in the toilet.
The only advantage I can see is you can get rid of them easily by not renewing their contract but that in itself is not a solution to finding good people in the first place. Plus it's likely to be hugely limiting. I'm not going to give up my current job and move to your city for a contract you might cancel in 3 months.
This is dangerous because it will just create a shadow workforce of contractors without any health benefits or rights as an employee. We already have this issue in tech.
Many companies are "contract to hire" but never hire and keep contractors on as long term employees.
I'm glad it works out for you. But I would not want to work at a company where a significant number of my coworkers only last a couple months, and I think I'm far from alone in that perspective.
The thing that bothers me about these questions is that there isn't a shortage of practical questions to ask that will test whether the candidate can truly contribute to the real problems you're trying to solve.
"Right now our roboticists use a hacked together QT based GUI to manage customer robot fleet data. It takes 1-10 minutes to load on a slow network and is hard to add more features to. I know you have far less information than you'd want but walk through your thought process for how you'd replace this system over the next 12 months. We can make assumptions."
And then the next 15 mins can be an organic conversation about the problem space. You can direct the conversation into corners most relevant to their potential role: "you mentioned using web tech because we discussed how all usage is across the internet. Can you talk about the merits of Http vs. websocket?" "How would you ensure that we don't accidentally take every single customer offline if we centralised our data store?" "What kinds of UI technology would lend itself to robot mapping? Can we just use Google Maps?"
If you really need to dig deeper into technical prowess, find something relevant in your conversation and dig deep into it. "We talked about saving changes to floor layout. Can you whiteboard/laptop how you might implement undo/redo for floor elements?"
Having recently gone through some interviewing (for machine learning research), a very cynical but overlooked aspect (in this thread) is the following:
After receiving an offer from a big tech company, the interviewing process has already completely turned me off from the idea of working there.
Now despite this being a dream position for many, and me having no alternative but to take it currently, the smug interviewers have already gotten me into the corporate mind-set: no matter the reputation and salary, treat it like any other job, and don't bother being loyal; they will not be.
So the terrible interview process has at least the advantage of reminding future employees what they are signing up for.
I wonder what kind of psychological filtering is at play. Do employees feel loyalty after an interview process that is best described as hazing? Are they projecting the humiliation they experienced when interviewing future employees? That's always been my impression.
Many interview processes paint a grimmer picture of the company than it actually is. So I would do additional research rather than turning down an offer just because of a bad interviewing experience. Of course, you are completely within your rights to do so, but it might not be the most optimal behavior.
Usually, interviewers are just bad at interviewing and aren't really aware of what proper behavior is. It actually isn't easy, it isn't a desirable activity, and you don't really get points for it in your performance review.
Oh I won't turn down the offer, as said, I have no choice and it would be an irrational career move to do so.
The ultra-negative interviewing experience just completely changed the mindset with which I will go into that job. This then got me thinking about the hidden cost of a negative interviewing process.
Sure, but in general all of the big-company interviewing processes convey similar amounts of dread these days. They are so worried about false positives and have so little concern about false negatives that they push many candidates away simply by not focusing enough on the "why should I work here" part of the interview.
Specific examples of what classifies as a “hard” or “easy” interview question would be very helpful to have reference points and assess one’s own interview process.
I wish this comment were higher in the thread, because it's essential. I'm very interested in this finding from Triplebyte, but we have no guidelines for what hard and easy questions are.
I know these questions are part of Triplebyte's product, so a full, repeatable study isn't in the cards. But if someone from Triplebyte could just post a few examples of each, I'd be able to get a lot more out of this result.
Easy - reverse a string, determine if a string is a palindrome, reverse the digits of an integer, determine if one string is an anagram of another.
Hard - implement a subset of regex match in optimal time+space, find the operations required to turn 1 word into another word given a list of transitory words, find the median of 2 sorted arrays in optimal time, find the next permuted value.
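For concreteness, the "easy" tier is only a few lines each. A Python sketch of the palindrome and anagram checks (my wording, not the exact expected answers):

```python
from collections import Counter

def is_palindrome(s: str) -> bool:
    """True if s reads the same forwards and backwards (case-insensitive)."""
    s = s.lower()
    return s == s[::-1]

def is_anagram(a: str, b: str) -> bool:
    """True if a and b contain the same letters with the same counts."""
    return Counter(a.lower()) == Counter(b.lower())
```

Which is arguably the point: either of these fits comfortably in a 5-minute warm-up.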
I think those "easy" examples are too easy to get any meaningful signal from. If they struggle, they have no idea what they're doing, and if they don't, you don't really learn anything about how they work because there's not much to them.
Other people have mentioned Triplebyte using console tic tac toe as a question; that seems like a better sort of "easy" question that still lets the interviewee have a chance to show off their problem solving and factoring skills.
Examples of hard (subjectively): recursive algorithms, anything that requires dynamic programming, generating permutations/subsets, and problems that require a "de facto" memorized algorithm, such as tree/graph traversals, coloring of subsets, etc.
And here's the problem. People don't agree what's hard/easy. If I ask you to find duplicate values in a nested data structure (arrays of arrays) ... that calls for recursion. I think that's easy.
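To make that disagreement concrete, here's a sketch of the nested-duplicates question in Python, assuming "easy" means a single recursive walk plus a seen-set:

```python
def find_duplicates(data):
    """Return values that appear more than once anywhere in a nested
    list structure (lists of lists, arbitrary depth)."""
    seen, dupes = set(), set()

    def walk(node):
        if isinstance(node, list):
            for child in node:
                walk(child)
        else:
            if node in seen:
                dupes.add(node)
            seen.add(node)

    walk(data)
    return dupes
```

Whether this counts as easy or hard depends entirely on how comfortable the candidate is with recursion, which is exactly the calibration problem.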
You realize that this approach is flawed the moment there are blog posts / books (e.g. Cracking the Coding Interview) on how to crack it. "How to crack X" becomes a field altogether (coaching / YouTube videos / blogs / books, etc.).
Also, platforms like HackerRank are adding fuel to the fire. I read somewhere that the CEO wished the following "were taught in schools":
1) Communicating complex ideas with clarity
2) Systems thinking
3) Grunt work tasks
4) Boundaryless thinking
5) Self-awareness / EQ"
Please note almost all of these are not evaluated on their platform (they profit from coding tests) or during interviews, and almost all are soft / intangible skills (skills which are not immediately obvious about the candidate during a typical programming interview). [Side note: some could say that coding tests are the problem they have chosen to solve; in which case, why are they worried about these skills? Are the companies seeing coders crack the tests on their platform while not performing well on the above skills post-hiring? We can only speculate.]
All good work is done by teams, and to be effective in a team requires a lot of intangibles which aren't even assessed in a typical interview.
Or :
A couple of weeks of work: a task is assigned, and the mentor or interviewer watches how the candidate approaches the problem and whether they can solve it within the time constraints (an easy task shouldn't take long, and a hard problem shouldn't be short-circuited to give a sub-optimal solution), along with other such observable traits.
My two cents!
EDIT: Poor wording above (i.e., couple of weeks). A task should be assigned and evaluated after a deadline (one the interviewer considers reasonable for completing it). No constant interaction with the candidate and no spending loads of time with them; that isn't scalable when the demand-supply equation is imbalanced already.
EDIT 2: The idea above is not about spending weeks on recruitment. It was about being practical about the kinds of questions / tasks given during interviews (example: code a feature or fix an issue we have, as another user has suggested in the comments). Took me a while to realize the point I tried to convey got lost in the logistics of how it should be done.
I upvoted you based on the first 90% of your reply, but that last full paragraph is just silly. Sure, you can evaluate someone more fully in 2 weeks of working with them than in an afternoon of interviews, but who's going to spend 2 weeks working with you, just for the possibility of continued employment? No one who isn't both desperate and unemployed, that's who.
Hey, I have had a couple of recruiters approach me with this idea. Basically, the recruiter says: okay, I'll give you a week or two for this task (the call is just 30 minutes or so, explaining the problem and getting to know the basics about me), and then after the allotted time, a sync-up to see how I solved it and to explain the design or solution. Only a couple of times have I had this experience. There is no additional overhead on the recruiter apart from reviewing it after the allotted time.
It felt better, because the interview was more like a typical work situation. At work, too, we are assigned tasks and update our progress.
Maybe it's a bit idealistic, but then we all like to dream about utopia, don't we :)
I still wouldn’t do it. Why should I, when I can do the entire interview process with another company with only a day’s worth of time expenditure on my end? If the task itself takes more than an hour or so, I can get to an onsite interview with far less effort than that by doing a recruiter chat and a phone screen.
Sounds fair enough :) Agreed. I have just proposed one of the alternatives I could think of off the top of my head. The problem isn't solved yet; or rather, not solved well yet. We could always brainstorm a better alternative. :)
Here's the problem: you're dealing with a multiple prisoners' dilemma situation here. Theoretically, the best solution for all (candidates and companies) is to have a thorough, standardized, and real-world based interview process. If every company cooperated, then you could have that. But, when there are companies out there that will bring candidates onsite for ~5 hours after a little more than an hour's time commitment, that means I could theoretically do 5 of those interviews in a week. No other company is going to start demanding multiple days of an interviewee's time given that every other company's process takes ~1 day of active participation by the candidate. So, you need to limit your "thorough, standardized, and real-world" process to taking no more than a day.
Here's a process that might work, that I actually would participate in:
1. ~20 minute recruiter chat to establish basic levels of fit/compatibility.
2. ~45-60 minute phone screen.
3. Onsite consisting of 1 hour behavioral/cultural interview, and 3-4 hour programming task, preferably based on a real problem or task that arose in the company's code base, suitably extracted, simplified, and scrubbed of proprietary information. Internet resources are allowed, as is asking any questions of the interviewer, without penalty.
You could bring me in at 11 AM, do the behavioral interview, take me to lunch, then have the afternoon to do the programming task, and have me out by 4.
Yeah, was thinking along these lines during this discussion. Sounds like a good idea, and something everyone would be fine with. The idea being that the interviewing pattern should shift from hard theoretical questions to practical problems that need to be solved, as you have suggested above:
"3-4 hour programming task, preferably based on a real problem or task that arose in the company's code base, suitably extracted, simplified, and scrubbed of proprietary information. Internet resources are allowed, as is asking any questions of the interviewer, without penalty."
I agree with the sentiment, but I noticed recently that TaxJar (I considered applying there at one point) mandates a trial period along these lines. From https://life.taxjar.com/distributed-team-hiring-process/, "... We hope a candidate is able to spend somewhere between 80 and 160 hours working with TaxJar during their trial. ..."
Presumably there are some people who are willing to do a trial, either concurrent with their existing employment, or on the chance that they will be hired after the trial.
This is good to know; typically it should be concurrent with existing employment. If we plan to switch companies, we commit time apart from existing office work for interview prep anyway. That interview prep time is what we would end up committing to their tasks instead. I think working on a project or task from the prospective employer would be much more fun than preparing for / memorizing interview questions. What do you think?
Most companies won’t allow such a thing, especially if the other company is a competitor. In any case, doing so would likely require me to take 1-4 weeks off from my regular job. Again, why would I want to blow all my vacation time working, just for the possibility of another job?
Okay, in my experience I didn't have to take leave from my regular work. I simply dedicated a couple of hours after office time to work on the task (the couple of weeks was granted owing to the time constraints of my regular job). What I meant to say is that after office hours I would have to commit that time to interview prep anyway if I had decided to switch.
In any case, this is not the only method or approach that is the better alternative :) Maybe it will work for some and not for others. But evaluating alternatives is a good exercise. Currently we are stuck with this method because we do not have a better solution to the hiring problem.
TaxJar sounds insane. So after I go through up to 4 weeks of poorly paid work (while working my existing full-time job and raising a family), I get to participate in their hazing rituals!
Try flipping it and looking at it from their vantage point: probably they are aware that not many would take this up, which would leave only the kind of people they want to work with showing up at their door? Maybe it's a clever hack to filter people out?
Thinking more about TaxJar...I wonder if they are at any risk of employees of an existing firm doing work for TaxJar on the current company's time and equipment, which would in many cases make the TaxJar code the intellectual property of the former company.
That's not true, not every state shares or accepts Bar results from other states, and Doctors have medical board exams that also need to be studied for.
If you are a lawyer or a doctor and are moving to a new state, you probably need to study.
If you're a practicing physician and move to another state, you, in fact, probably do not have to retake licensure exams. Almost all states either have reciprocity agreements with at least a few other states, or will grant a license by endorsement to a physician who's licensed in another state and has practiced for a certain number of years. See http://www.visalaw.com/wp-content/uploads/2014/10/physicianc...
Board exams are the same as MCATs in the sense that you take them once and forget about it.
For lawyers, the situation isn't so rosy (which is to be expected, because law varies a lot per state), but there is quite a lot of reciprocity out there, and some states will offer an "attorney's exam" in lieu of a full bar exam. And, for some areas of law that are strictly federal, admission to the federal bar can be sufficient to practice in any state. It is not as hard as you make it out to be.
I shared a pattern. The problem is the reversal of motives (are you taking the test to demonstrate ability, or gaming the system to move ahead, or virtue signalling?) and the inconsequential nature of the test once there are "cracks" available. In our country, we have very competitive examinations conducted every year, and people have cracked them with high national-level ranks without much understanding of the subject, thanks to the tutoring institutes that spawned as a sub-field to coach them.
I have also known a couple of people who aren't that smart ( when you talk to them in person ) but who wear badges of honor about their SAT scores etc.
I really agree. At companies I've worked at for the last 20 years (at least) the approach has been to ask pretty simple questions; we don't want people who struggle on those, while people who don't struggle, even when they make silly mistakes (due to interview pressure), are good.
Questions about the standard library of the programming language in question are good. Questions about the dusty corners of said library: bad.
And don't ask about floating point: most likely the candidate won't really know more than the usual things; anybody who really does understand them will probably give answers over the interviewer's head :-).
I like some coding in interviews. I try to make the tasks fairly real-world though. I am a UI engineer so my tasks typically fall into one of two buckets:
- A take home (2-3 hours) task for retrieving tabular data from an API and displaying it. Here I'm looking for general framework chops, readability, some design sense.
- An in-person (~45m) not-quite-pair programming task, with a real computer, tools, editor etc., for doing a typical UI operation, e.g. truncating text. Starts simple and gets more complex as time allows: make a function to truncate text to x chars; now add an ellipsis only if truncation occurred; now make sure not to truncate in the middle of a word, etc.
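The first couple of steps of that truncation ladder might look like this (a Python sketch; whether the ellipsis counts against the limit is exactly the sort of requirement I'd expect the candidate to ask about):

```python
def truncate(text: str, limit: int) -> str:
    """Truncate text to at most `limit` characters, adding an ellipsis
    only when truncation occurred, and never cutting mid-word."""
    if len(text) <= limit:
        return text
    cut = text[:limit]
    # back up to the last word boundary, if there is one
    if " " in cut:
        cut = cut.rsplit(" ", 1)[0]
    return cut.rstrip() + "…"
```

Each added requirement (ellipsis, word boundary) is a small, observable change, which makes the candidate's process easy to watch.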
I've never been given a practical programming test. Nearly all are useless algorithm puzzles. Create a function that takes an integer greater than 0. Build an array equal in size to the argument. Fill the array with numbers that, when summed together, equal zero.
It was worded far worse than that. What exactly is that telling you about the engineer?
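Not much. For what it's worth, the whole puzzle collapses to pairing each k with -k (a Python sketch), which rather supports the point that it tells you little:

```python
def zero_sum_array(n: int) -> list[int]:
    """Return n numbers that sum to zero: pair each k with -k,
    plus a lone 0 when n is odd."""
    if n <= 0:
        raise ValueError("n must be positive")
    half = n // 2
    result = [k for k in range(1, half + 1)] + [-k for k in range(1, half + 1)]
    if n % 2:
        result.append(0)
    return result
```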
On the flip side I'm asked to code full fledged applications but not to spend too much time on them... okay...
Another time I was asked to code the Luhn algorithm. Oh, and do it while a room of people watches you on a giant screen, because that's what your day-to-day job will look like... I failed miserably and still got the job. What?!
FWIW, I regularly pass interview candidates who "fail" at certain questions. The point of those questions isn't to complete it perfectly (and sometimes we get less information from those, honestly), it's to see how you go about getting there, how you fail, and how you deal with that failure.
For instance, one of the problems I frequently ask has a structure that really encourages people to try inventing heuristics to solve the problem, even though ultimately all of those heuristics fail. Seeing how people react to "but what if your input looks like this?" questions is often very enlightening- can they rethink their approach? Do they just keep glomming on more special cases? Can they deal with someone pointing out that sort of flaw?
Most programming interviews are a waste of time and energy for everyone involved and the results are, for the most part, a complete facade. Asking puzzle questions gives interviewers the feeling that they're really testing hard for top talent, when in reality they're just demonstrating their lack of interviewing skills.
I recently interviewed managers at two different technology companies, both nationally known, for a report I am working on. Most managers admitted their technical interviews were flawed, but didn't know any other way to assess skills. They also admitted that a significant number of people they recruit refuse to even take the technical challenge and end up working elsewhere.
In interviewing a couple dozen engineers, I found most just don't want to waste their evenings and weekends on a technical puzzle for a job, especially when there are a lot of companies out there who don't bother with them, so they end up searching for companies that don't waste their time with technical challenges.
Another funny thing I discovered during my research is that just under half of the employees at both companies I've interviewed so far were not able to successfully complete their own technical challenges.
Another problem with technical challenges is that often the interviewer knows less about the topic than the interviewee. I recently went through the interview process with a local technology company that uses Elixir and Go (both of which I know). During the onsite interview, the interviewer kept saying things like, "Don't forget to..." or "You forgot..." I kept explaining that I didn't need to do what he was suggesting. In the end, my code worked, my tests worked, and I passed the interview. In spite of this, I was rejected because the interviewer "wasn't feeling it."
I still have a lot of research to do, but I haven't found anything, so far, that suggests that technical interviews predictably result in top-talent getting hired. It seems to be the same crapshoot interviewing people without using technical challenges is, because in the end, most people decide within the first couple of minutes if they like someone and hire based on that, regardless of the rest of the interview process.
I just finished reading Bill Kilday's Never Lost Again. It was amazing that four engineers produced the ground-breaking Google Maps in 16 months. Algorithm/math questions used to be really effective at finding engineers like those four. There was a reason Microsoft resorted to algorithm puzzles in its heyday as well.
It's just unfortunate that there are so many prep materials online nowadays that programming puzzles have become ineffective. It gets worse, as many interviewers are not good enough to ask follow-up questions. For instance, big-integer addition is a pretty easy interview question, right? But if a candidate can go as deep as this article: https://bearssl.org/bigint.html, I can be pretty confident that the candidate is really, really good.
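The shallow version of that question is just a schoolbook carry loop, sketched in Python below; the depth in the linked article (constant-time, fixed-width representations) is what separates the memorized answer from real understanding:

```python
def big_add(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal strings,
    simulating the carry by hand rather than using built-in bignums."""
    result, carry = [], 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        da = int(a[i]) if i >= 0 else 0
        db = int(b[j]) if j >= 0 else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(str(digit))
        i, j = i - 1, j - 1
    return "".join(reversed(result))
```

A good follow-up question probes what changes when digit width is fixed or when timing must not depend on the operands.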
That said, I personally don't find it necessary to join the rat race. Instead, I'd suggest engineers just take time to thoroughly study one book on algorithm design. In fact, an introductory book, such as Kleinberg's Algorithm Design or Udi Manber's Introduction to Algorithms, will be good enough. It may not get you into Google, but it will likely get you into another damn good company. The best part of this approach is that passing interviews is really just a byproduct of trying to become a better engineer.
Another clarification: this data only applies to general programming exercises and not domain-specific knowledge. For example, if your role requires bio-informatics knowledge (or ML or AI etc.), then by all means ask about bio-informatics—even if it means asking harder questions. (We do, however, recommend keeping the content of general programming exercises as vanilla as possible, so that you don't filter out someone because they lack mastery of a subject that you don't actually intend to measure.)
If you are asking interview questions that have one beautiful, precise answer, you are doing it wrong. A good interview question should start with something so simple that even a beginner can think it through and answer, then gradually add complexity and constraints little by little.
Example:
1. Write a function that multiplies two integers.
2. What if these numbers were real numbers, but the computer can only operate on integers? How do we use the same number of bytes as an int to hold a real number?
3. What if I wanted infinite precision? What would be the run time and storage complexity of your algorithm? (Don't insist that the candidate hit the known optimal.)
4. Can I have complex numbers as well?
5. Imagine complex numbers have not only "i" but also "j" and "k". How do we handle this?
It is astonishing how many candidates won't be able to move past #2.
The key is to look at how the candidate approaches handling complexity, creates representations, and uses them to craft clean solutions. Whether they eventually arrive at the known optimal/great answers is unimportant.
The above example is not a test of your programming skills. Maybe you are a great NodeJS ninja who doesn't need to care about any of this. I think there are two types of hiring that often happen: (1) for a specific project or task, (2) a long-term member of the company's mission.
For #2, I prefer to hire people who are generalist problem solvers. They need very strong coding skills, but more than that they need to be able to operate in new domains easily and adapt. If I were running a startup, I would need them to work on a MySQL database one day and react-native stuff the next, and perhaps also pick up some deep learning practitioner skills two months down the line. The above example question doesn't expect the candidate to be familiar with floating-point representation, but it is interesting to see what they might have thought if they were the ones designing it. Since everyone works with numbers all the time, people are hopefully familiar with the basic issues of precision, rounding, etc.
There's not one kind of programmer. You may not be good at algorithms, theory of computation or numeracy. There are huge swaths of important work that don't include these attributes. But there are also many that do. And for those, you may be unqualified as this is not a particularly challenging or deep question for this branch of our field.
I would ask them to propose one that they think is satisfactory. Interviews shouldn't be a game of hunting for secret answers. There shouldn't be any adversarial component in interviews. I often imagine the candidate as a 1-person startup who discovered this problem, perhaps not in a very polished, well-defined form. As the interviewer, I'm just thin air in their room, observing the process of polishing, thinking, and solving :).
I agree that "read my mind" is a terrible form of interview. One issue with the "let the candidate guide the conversation" approach as you are outlining here is that it is very, very time consuming to train new people to administer this kind of evaluation in a repeatable, fair way. Giving the full definition helps the evaluation move along a bit faster, too; on the other hand, it removes the opportunity to evaluate "how do they explore the problem?"
>Harder questions ultimately filter out too many qualified candidates to be optimal.
This was the key sentence if anyone missed it. "Optimal" for whom, exactly? Not for FAANG certainly. They don't need to worry about filtering too many candidates out, because they have a nearly infinite pool of applicants, and infinite money to conduct a search.
They can ask as difficult questions as they want, because they can pass hundreds and thousands of qualified candidates, and still have plenty more where that came from.
Edit: If you consider "optimal" to be the expected cost of a hire compared to the expected profit, it is fully plausible that if your margins are big enough, asking hard questions is the most effective way to ensure few false positives. But as everyone knows, comments on articles about interviews are never about the economics; they're only about the human ego of feeling rejected.
Is it a surprise to anybody? My favourite example of interviews going bad is a tech company in SF asking to find cross currency arbitrage loops with all known currencies over the phone in the first round hiring data engineers for the ETL team. I am glad I did not pass that round.
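(That question is "really" negative-cycle detection: take -log of each exchange rate so a profitable loop becomes a cycle with negative total weight, then run Bellman-Ford. A Python sketch, which is a lot to expect over the phone in round one:)

```python
import math

def has_arbitrage(rates: dict[tuple[str, str], float]) -> bool:
    """Detect a currency arbitrage loop: a cycle whose rate product
    exceeds 1, i.e. a negative cycle under edge weight -log(rate)."""
    nodes = {c for pair in rates for c in pair}
    dist = {c: 0.0 for c in nodes}  # start all at 0 to catch any cycle
    edges = [(u, v, -math.log(r)) for (u, v), r in rates.items()]

    # standard Bellman-Ford relaxation rounds
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one extra round: any further improvement means a negative cycle
    return any(dist[u] + w < dist[v] - 1e-12 for u, v, w in edges)
```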
I've seen this borne out in practice after administering dozens of phone screens and in-person interviews over the years. I started off with questions that were a little too hard and couldn't get a good read of the candidate unless they happened to have already done the problem on some interview prep site.
Switching to more practical, simpler problems allowed me to really observe how they work and solve a coding problem. As the article said, I was also able to add requirements or features to the problem which let me see how the candidate adapts to changing requirements, or refactors their own solution to handle a new edge case. Simpler is generally better if you are timeboxed to 45 minutes.
I changed careers from another technical, non-programming career to web development. I got a handful of interviews, but I never felt like I was showing them what I could do... learn.
Until one finally gave me a fairly straightforward homework-style project. It was probably more apt for a noob, and I threw myself into it, submitted my work, and explained it over the phone. I was in the office the next day, we talked about it, and I had an offer.
The homework style interviews are understandably controversial, but at least I got to show my work, me doing my work, my thought process, outside of a few moments at a keyboard.
I always thought that the primary quality measured by this popular interviewing style is "being-like-your-interviewer", which leads naturally to an avalanche effect: the practice becomes ever more popular within the company and simply common sense after a while. The interesting part, to me, is how similar the methodologies should really be for companies of different sizes and domains. Even if you assume that some flavour of technical interview worked for Microfaceboogle, it might as well kill a copycat company (or be completely irrelevant to its success or failure).
I wonder if hard questions, especially hard algorithm questions, have a high false-positive rate as well. Being good at solving those questions simply requires you to do tons of practice on LeetCode, and those who are willing to spend tons of time are probably those who find it difficult to get jobs (or new grads). I don't think the skills for solving hard algorithm questions correlate well with actual performance at work. Especially if someone is a new grad, they could be good at those questions but not have a clue about how real software systems are built.
At one of my jobs I nailed it as the top candidate, by far, of the 79 people they interviewed in person. The interviewers were looking for competence, freedom from frameworks, experience, and so forth. I have more than 20 years in this line of work and do it as a hobby, so I nailed the interview.
At the job, though, I worked with a bunch of fresh juniors who only know how to write code the one way they learned in school. According to them, I am a shitty developer because I didn't write code in the one way they understand.
This article makes a lot of good points. After going through the "implement a red-black tree on this whiteboard" experience as a more junior dev, I always promised myself I would never use these kinds of stupid questions to hire.
Now, 13 years later, I mostly rely on "homework" type exercises. I think they address most of the issues. They are more "real world", no time pressure, etc. However, even those now are being heavily criticized. What's left to be used?
Homework is a bit of a catch-22. I agree that they can be made more real world than typical interview problems (although not all are). OTOH, why should I do your “4 hour” project, in which I’m competing with people who spend 8+ hours on it, just to get to the same onsite interview I could get with another company after a recruiter chat and a 45-60 minute technical phone screen? It’s not a good use of my time.
Just to clarify first: the exercise should not take more than 1 hour. And I mean it. It is not difficult, or tricky, or anything. It is used 1) to make sure the candidate can do very elementary things, 2) as support for follow-up during the on-site interview.
Regarding "getting the onsite": if you don't do the exercise beforehand, you get the "whiteboard" BS instead. And then you may be competing with people who spent 2 weeks reading Cracking the Coding Interview. How is that different?
If your exercise truly takes 1 hour, and you're evaluating it on the basis of 1 hours' worth of effort, then fair enough. I would say that roughly equates to the technical phone screen in this case, except without all the annoying "let's talk about every little thing we're doing while we're doing it, so it takes twice as long" BS.
And there's nothing wrong with whiteboarding, if done appropriately. Whiteboarding a system design question is totally appropriate. Asking a candidate to write down code for finding the longest palindromic substring of an input string in linear time is not. I've never come to a situation where writing anything more than the barest pseudocode outline for what to do on a whiteboard was the best way to get a real world task accomplished.
There is a problem with homework: the candidate spends a couple of hours and you spend a couple of minutes to assess them. It's not fair. Google doesn't do that; Netflix doesn't do that. Why would anyone believe a random startup and spend their weekend, unless they're desperately looking for any job?
It is both true and false. Yes, the candidate may spend 1 hour (I don't give difficult homework exercises) while it takes me 15 to 20 minutes to review the submission, but I have to do it for maybe 5 candidates.
I totally understand the main criticism for homework, it takes time, the company may never call you back after you poured 2 hours into their stupid exercise. But it is an attempt at fixing all other alternatives:
* the onsite whiteboarding is BS
* using open source contribution is totally unfair to candidates who don't participate
* the "contract for 1 month and then maybe we'll hire you" is also total BS in my book. Who would leave a FT job with benefits for a contract that may end?
I'm not sure I can think of any alternative that has 0 drawbacks.
And at the end, it turns out to be 5-10 hours. Companies say "you should be able to finish it in 1-2 hours", but it's almost never true.
I think the alternative is to have a 1-hour coding session on a good problem. Most problems are complicated, but there could be something else. I was once asked by Uber to implement a timer in JavaScript that updates the DOM, plus APIs to start/stop/pause it. This kind of challenge doesn't involve any algorithms, but it shows your ability to code.
It should be enough to bring you onsite.
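The actual Uber question asked for JavaScript updating the DOM; as a rough illustration of its shape only, here is a hypothetical sketch in Python (all names are mine, not the question's spec), with an injectable clock so the start/pause/stop logic is testable without real delays:

```python
import time


class PausableTimer:
    """A start/stop/pause elapsed-time timer.

    The clock is injectable so the bookkeeping can be tested
    deterministically, with no sleeps or wall-clock dependence.
    """

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._elapsed = 0.0      # accumulated time from completed runs
        self._started_at = None  # clock reading while running, else None

    def start(self):
        if self._started_at is None:
            self._started_at = self._clock()

    def pause(self):
        if self._started_at is not None:
            self._elapsed += self._clock() - self._started_at
            self._started_at = None

    def stop(self):
        """Pause and reset the accumulated time to zero."""
        self.pause()
        self._elapsed = 0.0

    def elapsed(self):
        running = self._clock() - self._started_at if self._started_at is not None else 0.0
        return self._elapsed + running
```

With a fake clock, the pause semantics are easy to check: start at t=0, pause at t=5, resume at t=9, and at t=12 the elapsed time is 8, since the 4 paused seconds don't count.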
But I think open source should work. If you contributed to the Linux kernel and it was accepted: onsite, no questions asked. If you have a GitHub repo with 1000+ stars: onsite, no questions asked. If you have already passed a Google tech screen and were invited onsite: onsite, no questions asked.
Given the 5th & 6th paragraphs about "correctness signal" and "process signal" it seems like the obvious solution would be to do both, i.e. ask questions that are easy alongside those that are hard. The easy ones are "process questions" and the hard ones are "correctness questions." Or maybe you have a range of difficulty from easy to hard, and each question has a number attached to it from 0 to 1 that reflects its intended use.
One issue I've noticed is that the elite companies are trend setters, and their interview methods are being copied by non elite players.
Perhaps Facebook needs and wants engineers that can bust out A* on the spot, but I doubt Nordstrom or Starbucks needs that level of talent.
This has changed the field: now, to get even an average job at an average company, you need to study to a new normal whose interview style was designed to find the top 1% of the field.
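For reference, A* is best-first search guided by an admissible heuristic. A minimal sketch on a 4-connected grid with the Manhattan-distance heuristic, offered only as an illustration of what "bust out A* on the spot" actually demands:

```python
import heapq


def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid (0 = free, 1 = wall).

    Returns the path length in steps, or None if the goal is unreachable.
    Uses Manhattan distance, which never overestimates on a 4-connected
    grid, so the first time we pop the goal its distance is optimal.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a better path was already found
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

Producing this cold, with the stale-entry handling and heuristic admissibility argument intact, is a reasonable ask for a pathfinding team and an unreasonable bar for a typical CRUD role.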
I totally agree with this. You shouldn't ask a question where some amount of luck in figuring out the trick is necessary. Instead, ask something practical and relevant to your domain. Also, leave room for exploration if they sail through it, and for recovery if they go down a bad path.
I'd agree, but on the condition that interviewers actually know what they are doing.
As I gained more experience (300 interviews and counting, baby) I realized that I pick up more on candidate skills that are not directly related to their performance on the particular question. At this point the question itself is just a conversation starter, so it's better if it's simpler and broader, because that leaves many more avenues for the conversation to go down.
Another use of these kinds of interviews is age discrimination: filtering out older candidates who don't have the time or motivation to prepare for this BS.
This is just another fad. In the late 90s there was a fad for MSFT-style puzzle interviews; then it went out of fashion. This will go out of fashion too, just wait a couple more years.
"Easier interview questions are also less stressful, which is an important advantage. Stress causes candidates to under-perform. But, on the other hand, when candidates are more comfortable, they perform their true best, which actually makes interviews more predictive."
I cannot believe that any interview situation is comfortable.
Interviews are like taking the SAT or ACT. For better or worse they're highly standardized and you just need to prepare. There are a bunch of different prep services like InterviewCake.com, Pramp.com, and PracticeCodingInterview.com that are good if you feel like you need more than just leetcode grinding.
Isn't this just saying that easier questions are more correlated with overall interview performance? That might be a good thing (a sign of consistency), but it could just as easily be a bad thing (i.e. it doesn't provide unique signal).
I took a technical interview today over the phone, and you know what, they were pretty fair with their questions. I don't know what kind of competition I'm up against for the position, but very fair questions.
I've worked for several companies who made up tests that nobody on staff could pass, including the authors. But the test did imply that the existing staff was superhuman, since no applicant could pass.
Would Triplebyte mind sharing the data? I'd love if the numbers could speak for themselves, rather than having to rely on an interpretation of the numbers.
It seems like this article would be stronger with some examples. What are some examples of good and overly-difficult questions, according to TripleByte?
That's because professional outcomes for programmers aren't related to aptitude. Nobody ever got promoted for writing good code. (But the reverse is very common.)
Don't forget that they introduce market friction to reduce developer turnover and reduce salaries.
I suspect one of the reasons Google is so open about their process and the need to study is so that everyone follows suit. Thereby forcing people to take days off and do homework for even the most mediocre of positions, causing the switching costs of interviewing anywhere to become higher.
Interesting idea. I got an email from Facebook with an invitation to interview. I replied: "I can do it, but I'm letting you know I will not spend a minute preparing for it. So, we can do it today."
I never heard from this person again, which raises a question: is the probability of passing without putting in personal time so low that there's no point in even trying?
That's a nonsensical line of reasoning. Hard interviews don't reduce salaries, they increase them. Why do you think FAANG companies pay developers 300k+/year when there are so many lower-status companies who pay <100k/year and still manage to find hires? Because the latter are willing to hire candidates who have worse resumes and less impressive interview skills that the FAANG companies consider beneath them.
You might want to re-read your own post, because whether 300k/year is a lot of money or not is a non sequitur. You claimed that companies ask hard interview questions to depress salaries, which is a trivially falsifiable claim when you note that companies that don't ask hard interview questions can get away with paying less than companies that do, because they are less picky about who they hire.
What do you mean by "than you need to"? I'm guessing that for many companies (especially the ones with these kinds of interviews), the limit is the number of hires they can make, not the number of candidates who apply. So by definition, you need to reject all but n (the number of open jobs). Why wouldn't you reject them based on performance in interviews, as opposed to by their CV or luck or something?
This might be true at Amazon, Google, or Netflix. But many companies have openings that sit vacant for months, even though perfectly great candidates applied for the job.
Months? Years :). I know a couple of teams at my current place (a very well known company) that couldn't hire for a year... and lost the position, as the business argued: if you can't hire for a whole year, you don't really need it.
In many cases companies don't interview all the candidates before making a decision, it's more on a rolling basis. You consider each candidate separately - you interview a candidate and then decide whether to hire him/her or not.
In plenty of cases companies reject candidates who later perform successfully in similar roles, and that is the point of the parent comment. However, this is kind of a desired effect, because not hiring the right candidate has a lower cost than hiring the wrong candidate:
- if you skip a good candidate because you're not sure whether they'll perform well, you just wait for more candidates to apply; it merely slows down the process
- if you hire the wrong candidate, they join the team, underperform, and are eventually let go; but during the time that person works for you, you aren't looking for the right candidate for that role, which costs you more money and time
I did seven interviews for a top US bank. I didn't pass. They needed to fill dozens of positions. Two years later they opened the gates and hired a notorious complainer ex-coworker. That's the current state of affairs.
I think it's more a by-product than by design. At the risk of making a blanket statement, I think it's fair to say most software engineers have average-to-poor people skills, leading them to the fallacy that "logic tests" are a good way to evaluate people. That in turn leads to the bias you describe: "did you see how badly the candidate failed the test!?"
I don't give questions like this primarily because:
- Workers are going to be around for 15 months or less, they already have domain expertise in one stack, and I don't need to screen for how they would hypothetically function across all stacks
- A worker's process and resource-finding skills are more indicative of the time they will spend on a task
- A worker's process includes collaborative use of version control and code reviews; if they pass the screening but can't really integrate on these things, then that's what will get them booted from the team
It isn't always more expensive to have a not-great developer. Look at your own organization and see if what I experience is true for you, and you'll save everyone a lot of time.