I always find this "Google gotcha" argument weak. I agree 100% that tricky questions around dynamic programming, linked lists, etc. are a terrible choice. However, I think a lot of these questions do a good job of measuring how well the candidate understands basic data structures. It amazes me how many "seasoned engineers" do not understand when to use things like hash sets/maps (just look at the monthly blog posts about how you shouldn't use `in` on Python lists because it is O(n)!! It blows my mind that this is blog-post worthy).
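
To make the hash set/map point concrete, here is a minimal sketch (in Rust rather than Python, and entirely my own example): a membership test on a Vec scans elements one by one, while a hash set answers in roughly constant time.

    use std::collections::HashSet;

    fn main() {
        let items: Vec<u32> = (0..1_000_000).collect();
        let lookup: HashSet<u32> = items.iter().copied().collect();

        // O(n): may scan the entire Vec before answering.
        assert!(items.contains(&999_999));

        // O(1) on average: a single hash lookup.
        assert!(lookup.contains(&999_999));
    }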

Also, I would far rather have a company not hire me because I failed a programming assessment than have them hire me and then fire me a week later because they were finally able to assess my skills.


The issue, fundamentally, is that most people do not use this kind of knowledge on a daily basis - even those of us doing low-level work - and when it is needed, it is a mere web search away, or if necessary I can reach for Knuth.

I think you misunderstand my notion of a "google gotcha" - I'm not proposing that interviews not go into rigorous technical detail. I'm proposing that interviews that test knowledge of easily-looked-up trivia, rather than an understanding of engineering pragmatism and trade-offs, are fundamentally flawed and yield markedly worse results than those that do not, _regardless_ of company.


I wish they had also tested running (unless I missed it). Is there a benefit (with respect to thinking/creativity) to walking over running? I think that would be quite interesting...


Yes - and that is the point the OP you responded to is making. This is not a generic library function. It is used in a specific setting where preconditions exist and presumably are checked _prior_ to calling this code.

Sure, you could make the argument that we don't _know_ they are being checked, but it's a pointless discussion. Who cares? _If_ the preconditions are met, this code is safe; if they aren't, it's not. Since we don't know one way or the other, there's no point in discussing further. The kernel developers know their stuff...


Imo every piece of code (for reasonable definitions of "piece") is supposed to check its own preconditions and not rely on the caller to check them.


It's a bad idea, especially in kernel/low-level/performance-critical code. It's OK to check some values at specific points (like when a value is passed from user space), but in general it's bad practice to check them in every single function. You trust your program flow.

Imagine if at every function down a complex stack you go with:

    if (!ptr1 || !*ptr1 || *ptr1 > 5 || param1 < 0 || param2)
        /* etc.... */
    {
        return NULL;
    }
(used arbitrary names and values).


This is nice in theory but a bad idea in practice (as a blanket statement; I am all for checking preconditions in general). An easy example is binary search, which I think is a reasonable "piece" of code by your definition. One should never check its precondition (that the list of elements being searched is partitioned by the search predicate) _inside_ the binary search function. Checking the precondition is O(n) while the algorithm is O(lg n).
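
As a rough sketch of that asymmetry (my own example, in Rust): the search is O(lg n), while verifying the sortedness precondition is a full O(n) pass.

    /// O(lg n) search. Precondition (not checked here): `xs` is sorted.
    fn bsearch(xs: &[i32], target: i32) -> Option<usize> {
        let (mut lo, mut hi) = (0, xs.len());
        while lo < hi {
            let mid = lo + (hi - lo) / 2;
            match xs[mid].cmp(&target) {
                std::cmp::Ordering::Less => lo = mid + 1,
                std::cmp::Ordering::Greater => hi = mid,
                std::cmp::Ordering::Equal => return Some(mid),
            }
        }
        None
    }

    /// Checking the precondition is O(n) and would dominate the search.
    fn is_sorted(xs: &[i32]) -> bool {
        xs.windows(2).all(|w| w[0] <= w[1])
    }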


You're right, but checking for null is not terribly expensive compared to iterating over a linked list.


The thing is that you aren’t really advocating for just that... Since this is pretty general support library code, the result of this philosophy is defensive checks in all support functions and that means checks every spinlock, mutex/semaphore, and god knows how many other places... everything really. We also would want these checks at multiple levels since we shouldn’t trust that other people checked things. This would have a significant effect on performance and many would prefer their system not to run however-many percent slower. All to avoid a possible issue that can’t be very large or systems would be kernel panic’ing all over the place.

From a black box, zero-knowledge, perspective it’s maybe worth remembering that code built this way successfully runs mission critical systems all over the world every day. Thoughtful people would have switched to other (slower, more pedantic) systems if doing things differently was a real overall life improvement. People have had the choice, there are plenty of OS’s out there... some far more formal/pedantic. Linux wasn’t the safe choice back in the day, it was just better by several important metrics consistently over time.

Perhaps these insanely experienced kernel devs know what they are doing.


This isn't possible in many cases - consider the simple C library function strlen, one of whose preconditions is that it must only be called on a zero-terminated string. There is no way to write code in strlen to check that this is true.
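
A sketch of why (mine, in Rust rather than C): all the function receives is a bare pointer, so the only way to "check" for the terminator is the very walk that is undefined if the terminator isn't there.

    /// A strlen-alike: count bytes from `s` up to the first zero byte.
    ///
    /// Safety: `s` must point to a NUL-terminated buffer. There is no way to
    /// verify this from inside the function - any check would have to read
    /// the same possibly-invalid memory it is trying to guard against.
    unsafe fn my_strlen(s: *const u8) -> usize {
        let mut n = 0;
        // SAFETY: relies entirely on the caller's promise of a terminator.
        unsafe {
            while *s.add(n) != 0 {
                n += 1;
            }
        }
        n
    }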


Which is one of the reasons why coding standards like MISRA forbid using strlen (at least in C++, I guess even MISRA doesn't want to force you to write safer C with sane string types).


In my experience, which is graduate-level mathematics, spaced repetition is essential. Once you get beyond basic mathematics and algebra, recalling and understanding concepts and definitions becomes critical. This is true at the calculus level, where you need to remember things like the chain rule, integration by parts, various theorems, etc. But it is even more true at the higher levels of mathematics, where there are many more theorems, lemmas, and definitions to remember. I find that at that level, just having a huge depth of recall for definitions alone is incredibly useful. If you remember enough definitions you can start to piece things together. If you can recall theorems, lemmas, and some key proofs you can achieve quite a bit.


Can you clarify why you don't think Rust has parametric polymorphism?

  struct Person<T> {
    age: T,
  }
This is parametric polymorphism and valid Rust.


Just because some bounds are parametric does not mean that all of them are; specialization violates parametricity, for example. While that's not in Rust proper yet, it did require making sure that dropck could handle it, and the intention is to ship it.

There's also more minor things like https://github.com/rust-lang/rfcs/pull/1210#issuecomment-179...


I don't doubt you know more about Rust than I do, but this seems pedantic to me. Kind of like correcting someone for pronouncing forte as "for-TAY" instead of "fort" or telling someone "well technically you don't actually touch _anything_ because of the electrostatic force."

If you ask all the developers out there to describe parametric and ad-hoc polymorphism I think a vast majority would give the example of a type parameter (e.g., Java generics or C++ templates) for parametric polymorphism and Java interfaces or Haskell's classes for ad-hoc polymorphism. I can even quote directly from Pierce (Types and Programming Languages):

> Parametric polymorphism, the topic of this chapter, allows a single piece of code to be typed "generically," using variables in place of actual types, and then instantiated with particular types as needed. Parametric definitions are uniform: all of their instances behave the same.

I think Rust and the aforementioned languages fit this definition. Outside of a specific compiler issue, claiming otherwise seems to only confuse the issue, especially for those just casually reading and not familiar with programming language theory.


> If you ask all the developers out there to describe parametric and ad-hoc polymorphism I think a vast majority would give the example of a type parameter (e.g., Java generics or C++ templates) for parametric polymorphism and Java interfaces or Haskell's classes for ad-hoc polymorphism. I can even quote directly from Pierce (Types and Programming Languages):

In practice it is often useful to drop our demands for rigour by a bit. But any reasonable definition of parametric polymorphism _has_ to exclude C++ templates.

C++ templates are much closer to the definition of ad-hoc polymorphism.

My practical less-than-rigorous rule-of-thumb for parametric polymorphism amounts to something like: whenever Wadler's Theorems for Free paper applies. https://ttic.uchicago.edu/~dreyer/course/papers/wadler.pdf

Theorems for Free means that eg the type of the identity function (forall a . a -> a) guarantees that even if you supply it a bool, it can't magically turn into an implementation for `not`, or multiply all integers by 2.
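
In Rust syntax the same idea looks roughly like this (my sketch; it only holds up to the escape hatches like `size_of` discussed elsewhere in the thread):

    // The signature alone tells you (parametrically) that the function can
    // do essentially nothing with `x` except hand it back: it knows nothing
    // about T, so it cannot negate a bool or double an integer.
    fn id<T>(x: T) -> T {
        x
    }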


This isn't Rust specific; it's just the definition of parametric polymorphism. Yes, many programmers may give you a slightly incorrect definition, but especially in an article about Haskell, I'd expect a bit more precision.

Which doesn't mean it's terrible to get it wrong; I just want to be clear about what Rust does and does not have. It is important because these kinds of definitions either hold or they don't; "sorta kinda mostly parametric" isn't the way folks tend to think about this. Which makes sense, because they're interested in proofs and formal definitions.

Yes, Pierce is great! But the issue is:

> all of their instances behave the same.

This is not true in Rust, as shown in my comment and the other replies. We have accepted APIs that break this property, and we have other language features on the way that break this property.


IIRC, Java interfaces are subtype polymorphism. Ad hoc polymorphism would be, for example, overloading.


In a sense even things like `TypeId::of` or `mem::size_of` violate parametricity.
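
For instance (my own sketch), `TypeId` lets an otherwise generic function branch on the concrete type it was instantiated with:

    use std::any::TypeId;

    // "Generic", yet its result depends on which T you pick.
    fn describe<T: 'static>() -> &'static str {
        if TypeId::of::<T>() == TypeId::of::<String>() {
            "a String"
        } else {
            "something else"
        }
    }

    // describe::<String>() == "a String"
    // describe::<u32>()    == "something else"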


You can do the following, which violates strict parametricity:

    fn foo<T>() -> usize {
        std::mem::size_of::<T>()
    }

    foo::<u8>(); // 1
    foo::<u16>();  // 2


You can do the same in Java and C++. This may violate a strict definition of parametricity (I've read the definition from a few different sources and am still mulling it over), but I'm not sure how this relates to parametric polymorphism.

The _behavior_ of this function is the same for all types; only the _output_ is different. That is, for all types, the function body is the same. Maybe there is a more abstract definition of parametric polymorphism you are using, but as I said above, this seems pedantic.


The internal behavior can trivially be made different just by operating on the value:

    fn foo<T>() -> usize {
        let x = std::mem::size_of::<T>();
        if x % 2 == 0 {
            panic!();
        }
        x
    }

    foo::<u8>(); // 1
    foo::<u16>(); // panic
That the body is the same isn't necessarily the issue at hand (though of course it's still a useful property in its own right); what matters is that reasoning about what this function will do requires knowing which specific types it is used with.

> this seems pedantic

The first code example is merely the simplest demonstration; in the wild I would expect lots of `size_of` in generic contexts to result in type-dependent behavior somehow.

I'm not saying this is necessarily a very bad thing, nor do I have strong opinions on the usefulness of strict parametricity (which AFAIK Haskell doesn't have either). But in discussions relevant to parametricity, it's useful to know the ways a given language can subvert it (and Rust will further encourage it to be subverted, once the specialization feature is developed).


> I'm not saying this is necessarily a very bad thing, nor do I have strong opinions on the usefulness of strict parametricity (which AFAIK Haskell doesn't have either). But in discussions relevant to parametricity, it's useful to know the ways a given language can subvert it (and Rust will further encourage it to be subverted, once the specialization feature is developed).

In practice Haskell seems to have pretty strong views on enforcing parametric polymorphism, doesn't it?

Haskell gives you ad-hoc polymorphism via typeclasses and there are also existential types and GADTs etc, if you need those. But once you declare something to abide by parametric polymorphism, you are expected to keep your end of the bargain.

(Yes, you could violate the pact via some unsafePerformIO trickery, but that's considered bad form.)


The whole point of parametric polymorphism (as opposed to eg ad-hoc polymorphism) is that just from reading the type of a function you get a guarantee about the limits of its behaviour.

If your functions routinely violate those limits as a matter of course, those guarantees are useless.

I'm all for abusing notation and terminology a bit when it makes sense in practice, but loosening our definitions too much risks making them useless, too.

In practice in Haskell, I often only need a helper function for eg integers, but when the implementation allows, I will give the function the most parametric-polymorphic type that fits, because that makes the reader's job easier:

Just like an invocation of `filter` is easier to read than a for-loop, because `filter` is strictly less powerful than the loop.

(In addition, the more general type serves to give the compiler a hint, so it can yell at me in case I accidentally do an operation on the integer that I didn't mean to.)
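
A tiny illustration of the `filter` comparison (my own sketch, in Rust rather than Haskell): the `filter` version can only select elements, while the loop's shape guarantees nothing.

    fn evens_filter(xs: &[i32]) -> Vec<i32> {
        // `filter` can only keep or drop elements; it cannot invent new ones.
        xs.iter().copied().filter(|&x| x % 2 == 0).collect()
    }

    fn evens_loop(xs: &[i32]) -> Vec<i32> {
        // Happens to do the same thing, but nothing about a loop guarantees
        // it - it could just as easily mutate, reorder, or append extras.
        let mut out = Vec::new();
        for &x in xs {
            if x % 2 == 0 {
                out.push(x);
            }
        }
        out
    }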


Of all the people I know at FAANG companies and that I interacted with while at Google, only one of them has a degree from one of those schools (a MS from Stanford done while they were already at Google). The idea that they mostly recruit from top-tier programs is false. Certainly being at those programs is a huge help, but by no means even remotely close to a requirement.

I have been recruited by a few FAANG companies (recruited in the sense that a recruiter reached out for an interview) and have only gone to public schools, nothing impressive.


Recruiters cast a very broad net. They reach out to everyone. At one point or another, I’ve gotten emails for software dev positions from Google, Microsoft, and Facebook. I graduated from a no-name school decades ago, and my light LinkedIn profile showed a list of no-name companies where I was just your average SaaS CRUD C# developer.

I wouldn’t get past a phone screen without prep for a software dev position.

I only recently ended up at a FAANG as a technical consultant. Even then, the recruiter reached out to me about a software engineering position and I asked about openings in the cloud division.


I think you definitely need to be at the absolute top of your field to even consider working somewhere like Google, though. That's probably bifurcated between people coming from top schools and independently talented people like, I guess, yourself.


There are lots of programmers globally; the top few percent is probably on the order of a couple of football stadiums' worth of people. You are probably closer to that than you suspect.


You really don't need to be the absolute top. Grind leetcode for a month.


I used to think that Microsoft/Google/etc only hired amazing super-geniuses and I could never walk among their hallowed halls. It turns out, though, whiteboard coding is a skill you can develop just like most skills. The evil logic puzzles they used to ask at Microsoft are a skill you can improve as well, but hopefully you won't run into those.


This probably feels great to say... but in reality 99.99% of developers will never be anywhere near the standard for Google even if they waste years grinding at Leetcode.


I also disagree. If you are able to pass a leetcode-style test, you have a very good chance of getting into a FAANG. It seems to be all they really care about.


Disagree.


There is plenty of work running evolutionary algorithms on GPUs, but there are a number of limiting factors that make it less than ideal. Evolutionary algorithms in general can be extremely effective; they just solve very different problems from "is this a picture of a cat?" and aren't as sexy as neural networks.

In fact, much of "old school" CS is still very much alive and applying modern techniques and hardware to its problems. It's not that neural networks are some field of CS that is much more advanced than the rest; rather, they happen to be the current hot topic.


It seems you are arguing something different, although I am having a hard time understanding what you have written. I think you are saying that algorithms and data structures aren't hard, but distributed systems are. In my experience, choosing the correct data structures and algorithms in your services/programs/whatever can dramatically simplify the design of your systems overall.


> It seems you are arguing something different,

I'm making an argument that the stark reality of what's hard in software development is not the simplistic "Rules of programming", which have limited utility.

The reason the "rules" aren't self-evident (or followed) is that we live in the reality of disparate functionality paired with an ever-changing technical landscape. You can't just make a DB KISS abstraction and expect it to hold across all the different repository types, like (rolls dice) Athena after using (rolls dice) CockroachDB. There are concerns beyond pure algorithms and data structures that are far more influential and important to understand. Even knowing these details and cases becomes less useful as time goes on and new technologies emerge.

> I am having a hard time understanding what you have written

If you're not interacting with new environments, tooling, and problems regularly (every year or two), you don't encounter the real pain, which is far more important to your career and your ability to produce functional software. Reading almost every postmortem, the number of lines attributed to "we changed the data structure to O" is dwarfed by "we learned that technology X does Y, so we had to do Z".

This is only incidentally related to distributed systems, which is indicative of a disconnect with the problem described. Of course when you sit around in the same environment for a long time, you can observe and optimize on structure and algorithm, but that's not getting you to market (you're already there) and that's the nature of maintenance...not just being a fire extinguisher who is on call.


> we live in the reality of disparate functionality paired with an ever-changing technical landscape

It is the responsibility of tech leaders to minimize this (accurate) stereotype. Choose boring technology, and only build your own or choose something exotic when it gives you competitive advantage - because the reality I also see is that 99% of devs aren't working on anything new or unseen in the field. Even at the FAMANG companies, most people I know are working on boring problems.

So when your CTO or architect or whomever buys into the hype for X technology, make a good argument against it by proposing a better solution.


I have tried Linux on a laptop and the battery life was abysmal (it was an older ThinkPad X1). With macOS (which I don't particularly like) I get a solid dev experience and reliable battery life. Plus, their hardware is untouchable IMO.

I have a Linux desktop that works well, but I can't always work on it, unfortunately.


I have taught at the undergrad level, which gave me the perspective of observing which undergrads got job offers and which didn't. Almost without exception, those students that did well in class and made an effort had no problem getting job offers. It was rare for them to get a job at a FAANG company, but I don't think I knew anyone who wasn't able to secure an offer. One thing common to all of these students is that they practiced technical interview questions. Some had internships, some didn't. Some of the students were very bright from a theory perspective but most had a relatively poor grasp on theory. However, one common trait was that they made an effort to grow as programmers.

There were students who really struggled getting jobs and it was not at all surprising. They struggled in their courses and were not proficient programmers. I never got the impression that they spent much time outside of class trying to work on these skills. No matter how hard I tried to motivate them it fell on deaf ears. I do not think an apprenticeship would have much value to these students other than possibly delaying the conclusion that this may not be the field for them.

I see the idea of apprenticeships discussed as a better alternative to the technical interview. My observation is that those that are proposing this have not worked in a "skilled profession" (not entirely sure how one defines that) that has a so-called apprenticeship. A common example I see on HN is that of doctors, where their residency serves as a sort of apprenticeship. The number and difficulty of tests doctors have to go through to practice as a doctor (at least in the US) is pretty incredible. Given the choice between going through that or studying months for the most difficult battery of technical interview questions, I would choose the technical interview route every time. This completely ignores the fact that doctors attend 4 years of post-graduate education (medical school) before even starting the residency. Imagine if companies required a PhD in CS before you could become an apprentice! And their boards are, without question, orders of magnitude more difficult than a PhD defense (I have a PhD in CS and my wife is a spine surgeon, so I am speaking from experience).

The counter argument I suppose is "well maybe not doctors, but what about accountants and actuaries?" I was a fully credentialed actuary in a prior life and can say those exams were way more stressful and difficult than preparing for technical interviews. Once you are through them it is very easy to move around and there are no technical interviews, but it also takes on average 7 years of intense studying and heartbreaking failures to get there.

I am not saying technical interviews are perfect. However, it seems the theme is that the grass is greener in other professions, and I don't think that is actually the case. In what other profession can you study your butt off for 6 months and land a job paying over $250k out of college?! Yes, preparing for technical interviews is difficult, but boy is it worth it (at least in my opinion). I personally think they are a great opportunity to grow as a developer as well. Anyways, I suppose I have gone on enough.

