I had to give a wrong answer to get the job (2017) (dewitters.com)
351 points by Starz0r on July 19, 2021 | 387 comments



I bombed an interview at a game company because I gave a right answer that I couldn't get them to understand.

I don't remember the exact problem they wanted me to solve, but the answer involved a dynamic collection and they wanted it to grow with constant time complexity. They were probably looking for a linked list. But I said I'd use a dynamic array because those have constant time when averaged over a series of appends.

I don't know if I remembered the term "amortized complexity" or not, but it was clear that they had never heard of doing amortized analysis across a series of operations and they absolutely did not get my answer. They got hung up on the idea that some appends force the array to grow and get copied. I tried to explain that that was only true for a predictable fraction of them, but they were stuck on the idea that this meant dynamic arrays had O(n) worst case performance. They clearly thought I didn't know what the hell I was talking about.

I'm pretty sure that was the point where they decided to pass on me.

But now I'm a senior software engineer at Google and I just finished writing a textbook that explains dynamic arrays including their amortized analysis, so joke's on them.


Interestingly, you can use scheduling to make a non-amortized dynamic array. You probably know this, but for other commenters who do not—

Keep two arrays, of size n and 2n. Initially the first holds n/2 elements and the second is empty. Reads go to the first array. When you append, append one element to the first array, and copy two elements to the second array. By the time the first array is full, it has been entirely copied to the second array—so discard the first array, move the second array to be first, and allocate a new "next" array.
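Here's a minimal sketch of that scheme in Python, starting from a one-element array so the bookkeeping is explicit (the class and field names are mine, not from any particular library):

```python
class DeamortizedArray:
    """Append-only dynamic array with worst-case O(1) append.

    Invariant: by the time `first` fills up, every element has already
    been copied into `second`, so no append ever stops to copy the
    whole array at once.
    """

    def __init__(self):
        self.first = [None] * 1    # current array, size m
        self.second = [None] * 2   # next array, size 2m
        self.count = 0             # number of stored elements
        self.copied = 0            # elements already mirrored into `second`

    def append(self, value):
        if self.count == len(self.first):
            # `second` already holds every element; promote it.
            self.first = self.second
            self.second = [None] * (2 * len(self.first))
            self.copied = 0
        self.first[self.count] = value
        self.count += 1
        # Copy at most two elements forward per append.
        for _ in range(2):
            if self.copied < self.count:
                self.second[self.copied] = self.first[self.copied]
                self.copied += 1

    def get(self, i):
        assert 0 <= i < self.count
        return self.first[i]
```

Each append writes one element and copies at most two more, so every single operation is a small constant—no append ever triggers an O(n) copy.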

The same trick can be applied to many amortized data structures (but it gets very complicated when you have many operations).


This is dangerous, but if well documented and understood it might be okay. Some data might contain unique things (for argument's sake, say a std::unique_ptr). It can get tricky, since you need to know the implementation details of everything that gets inserted and its ownership behavior, because elements can be kept in two places. (A copy in array n and one in 2n.)

Then there is the fact that you basically make every insertion 3x as costly. You better have a good reason to need this given the additional complexity and caveats.

As for the original interview question, there are systems where an occasional longer pause is not OK. Personally, I think that sticking to your guns may have come across as stubborn and unyielding, and perhaps unwilling to admit errors. As an interviewer, having worked with people who always think they're right and won't bend, I would have seen it as a red flag.


> Then there is the fact that you basically make every insertion 3x as costly. You better have a good reason to need this given the additional complexity and caveats.

> As for the original interview question, there are systems where an occasional longer pause is not OK.

As you say, it's a tradeoff between worst-case operation cost and average operation cost: if you accept an arbitrarily bad worst case for individual operations, you can get a very efficient average. If you insist on an efficient worst case for every operation, you can't quite match the average cost you could otherwise have achieved.

I wouldn't be surprised if this tradeoff is inherent to many de-amortization problems. But since the difference between the amortized and de-amortized solution is usually only a constant factor (as it is in this case), you have to be very specific about the model of computation if you want to prove anything mathematically.


To be clear, there is no tradeoff in the number of operations. You are doing the exact same number on average (or total or amortized), while improving the worst-case.

You do lose the ability to do a no-copy realloc() though, and increase the average memory use (not peak).


> Then there is the fact that you basically make every insertion 3x as costly. You better have a good reason to need this given the additional complexity and caveats.

This is the exact same average cost per element, just spread out evenly. Consider that each element still gets copied to the array of twice the size once whether you use this technique or the classic amortized version. Same average performance without the hitches (although copying a larger block of memory at once would likely be a bit faster).


> This is dangerous, but if well documented and understood it might be okay.

I agree in principle. I think even something like std::unique_ptr could work; the concern would be self-referential types. Notably Rust doesn't allow those anyway, so I think any Rust type would be fine, not just Copy types.

> Then there is the fact that you basically make every insertion 3x as costly. You better have a good reason to need this given the additional complexity and caveats.

Still might be a lot better than a linked list for small types (due to less RAM spent on pointers and better cache locality because it's contiguous). With the exception of intrusive linked lists of stuff that is separately allocated anyway, I really hate linked lists.


One thing I’ve always been curious about, can you guarantee worst case linear memory use with this setup while supporting pop() at the same time?


It depends on how strictly the structure of each array is defined. The answer can be yes, but I don't know what effect this has on modern superscalar CPUs.

The source array might be considered to have even offsets (0, 2, 4, etc, index left shift 1 bit) relative to the double-sized array.

Addition operations past the halfway point of the smaller array relate to values in the second by (index << 1) % n + 1; that is, they pair with values from the start of the smaller array.

When a removal operation, like pop(), selects a victim in the source array, the corresponding operations must also apply in sync to the values in the larger array.


Easily! You can just make pop do the opposite of push. For example, if push copies two elements to the bigger array, make pop copy two elements to the smaller array.


I challenge you to try to implement it this way and test that (1) an arbitrary sequence of push/pop is valid and (2) doesn’t use more than linear space.


It's pretty easy to make sure an arbitrary sequence is valid if you make push and pop be almost exact opposites of each other.

Let me walk through a simple version based on the description above, ignoring that it's rather inefficient:

Start with an array of size 64, with 32 elements.

Our first action is either a push or a pop. I'll split those up.

* * * If the first action is a push:

From here, call the existing array Small and make a new 128 element array called Big.

For the first push, store it in Small[32]. Then copy Small[0] and Small[1] to Big[0] and Big[1].

For the second push, store it in Small[33]. Then copy Small[2] and Small[3] to Big[2] and Big[3].

Then for a pop, just remove the element in Small[33].

Then pushing again, store it in Small[33]. Then copy Small[2] and Small[3] to Big[2] and Big[3]. Notice how this is exactly the same as the previous push. And if you undid two pushes, then did two more pushes, both of them would be the same as they were before.

So, analyzing this, once we push once, any arbitrary sequence of pushes and pops is going to do one of the following:

* Pushes is always more than pops, but the difference is always under 32. This bounces around forever, undoing and redoing pushes. This is valid, and always uses the same amount of space for 33-63 elements, so that's clearly linear too.

* Eventually pops catches up to pushes. This means we're back to 32 elements, right where we started. Throw out the 'Big' array too. Going back where we started is valid and uses linear space. If the next action is a push, go to the start of the push instructions. If it's a pop, go to the start of the pop instructions.

* Eventually pushes minus pops reaches 32. So now our Small array is completely full, and our Big array is half full, and they contain exactly the same data. Throw out the Small array. Now we're back where we started, except with twice as many elements in an array twice as big. Use all the same logic as before, but with 2x numbers. This is valid and uses linear space.

* * * If the first action is a pop:

From here, call the existing array Big and make a new 32 element array called Small.

For the first pop, copy Big[0] to Small[0]. Then delete and return Big[31].

For the second pop, copy Big[1] to Small[1]. Then delete and return Big[30].

If we get a push, put it in Big[30].

If there's another pop, copy Big[1] to Small[1]. Then delete and return Big[30]. Notice how this is exactly the same as the previous pop. And if you undid two pops, then did two more pops, both of them would be the same as they were before.

So, analyzing this, once we pop once, any arbitrary sequence of pops and pushes is going to do one of the following:

* Pops are always more than pushes, but the difference is always under 16. This bounces around forever, undoing and redoing pops. This is valid, and always uses the same amount of space for 17-31 elements, so that's clearly linear too.

* Eventually pushes catch up to pops. This means we're back to 32 elements, right where we started. Throw out the 'Small' array too. Going back where we started is valid and uses linear space. If the next action is a push, go to the start of the push instructions. If it's a pop, go to the start of the pop instructions.

* Eventually pops minus pushes reaches 16. So now our Small array is half full, and our Big array is one quarter full, and they contain exactly the same data. Throw out the Big array. Now we're back where we started, except with half as many elements in an array half as big. Use all the same logic as before, but with numbers cut by 2x. This is valid and uses linear space.

There.

Now, there are easy optimizations that could be done on top of that to cut the memory use in half, or combine pushing and popping into the same logic, or all sorts of other improvements.

And you want a special case to stop shrinking once the size gets too small.

But that should be a perfectly good basic explanation of an algorithm that's very straightforward and has no wiggle room for anything to go wrong.


Ah, I was in a rust/c++ frame of mind here—it seems you deeply rely on a copy operation being available, but unless you use indirection and weak pointers it might not be.

I think with copy available this certainly works


I used copy mostly because the earlier post used it, but we can easily use moves instead. The access operator will just have to use some arithmetic to calculate which array an element is in. Moves also make it easier to have less memory overhead.

If moves aren't available then a resizable array was doomed from the start.


Hrm, is it really that easy? If you only have a placeholder in one of your arrays that points to the moved object in the other, then you need to do the actual move back if you have a sequence of pops/pushes that forces you to dealloc the moved-to array (say you have a grow sequence which gets you almost ready to make the 64 array the small one, and then a series of pops back).

Though maybe in principle there's yet another "amortization process" which can be layered on top of what you already described that tries to keep the proportion of placeholders in both small and big arrays equal.


You don't need placeholders. Can you explain why you're thinking about placeholders?

For using move, the most straightforward way is: pushes and pops go directly to the big array. For every push you also move one element from small to big. For every pop you also move one element from big to small. This is slightly different from the algorithm above, but basically equivalent.

Let's say you start with 32 elements in one array. If you grow into a bigger array, then by the time you add 32 more all your elements will be in the big array. If you shrink into a smaller array, then by the time you remove 16 all your elements will be in the small array.

> (say you have a grow sequence which gets you almost ready to make the 64 array the small one and then a series of pops back)

So let's say after the pushes we have 60/64 elements in the 64 array, almost full. There are also 2/32 in the smaller array.

Then we do 10 pops. Now there are 40/64 in the bigger array, and 12/32 in the smaller array.

Seems problem-free to me.

If we keep doing pops we end up with 0/64 in the bigger array and 32/32 in the smaller array. At this point we could push again, or we could promote the 32 array to 'big' if we need to pop.

When the arrays are size 32 and 64, the logic to access an element looks like this:

n = total number of elements

x = the key of element being accessed

if ((x % 32) < (n - 32)) return big[x] else return small[x]
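A short Python sketch of this move-based variant, fixed to the 32/64 window described above (promotion at the window edges is omitted to keep it small, and Python "moves" are really copies, so the move-back on pop is shown only for fidelity to the algorithm; all names are illustrative):

```python
SMALL, BIG = 32, 64

class TwoArrayStack:
    """Pushes and pops go directly to `big`; each push also moves one
    prefix element from `small` to `big`, each pop moves one back."""

    def __init__(self, values):
        assert len(values) == SMALL
        self.small = list(values)   # size 32
        self.big = [None] * BIG     # size 64
        self.n = SMALL              # total element count, 32 <= n <= 64

    def push(self, value):
        assert self.n < BIG         # window-edge promotion not handled here
        self.big[self.n] = value
        k = self.n - SMALL          # prefix elements mirrored so far
        self.big[k] = self.small[k]
        self.n += 1

    def pop(self):
        assert self.n > SMALL       # window-edge promotion not handled here
        self.n -= 1
        value = self.big[self.n]
        k = self.n - SMALL          # un-mirror one prefix element
        self.small[k] = self.big[k]
        return value

    def get(self, x):
        assert 0 <= x < self.n
        # The access arithmetic from the comment above.
        if (x % SMALL) < (self.n - SMALL):
            return self.big[x]
        return self.small[x]
```

Every push and pop touches a constant number of slots, and the get() arithmetic always finds the live copy of each element.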


IIUC, this is assuming that memory allocation is O(1)?


Yes, it's rare not to assume that.


The coefficient can and will bite your head off, however.


The primary thing I like about this approach vs. stopping to copy the entire array on expanding its capacity is cache hotness.


Evidence?


I bet they weren’t as dumb as you think and you were passed on for being stubborn and drunk on ego.

Of the many scenarios where amortized complexity is not okay, code in a tight loop where predictable performance is key, e.g. code running game logic, jumps immediately to the top of the list. The fact that you were unable to incorporate this into the conversation makes me suspect you were more interested in putting on a show than a practical solution to the task at hand. The worst case performance for a dynamic array is O(n), but the amortized performance is constant. They aren't the same thing.

You may be a different, more mature, engineer nowadays. But, when I encounter the type of persona your story describes I tend to respond with strong no’s irrespective of whether you have since written a textbook.


Unfortunately, you are wrong.

Arrays used as backing for lists are faster than linked lists in almost all cases, assuming they are implemented correctly (as they are in Java, which I bring up as an example).

Linked lists have a lot of huge downsides that are not easily captured in their naive big-O characterization. Big-O alone doesn't tell you how efficient things are: two algorithms can have the same big-O complexity yet hugely different costs.

For example, you can easily parallelize operations on array lists, but you can't do that on a linked list, where to get to the next node you need to dereference a pointer.

Even without this, you get a bonus from prefetching data into cache when searching through an array list or when moving data around. Speculatively dereferencing pointers is nowhere near as effective as regular prefetching.

Array lists are also denser in memory than linked lists (which need extra pointers per node). This means a lot of things just work more efficiently -- cache, memory transfers, prefetching, fewer TLB lookups, etc.

Inserting at the end of an array list is as fast as into a linked list (after accounting for amortized cost), but what not many people appreciate is that finding an insertion place and inserting within an array list is also the same cost or even better than in a linked list.

That is because to find the insertion/deletion place you must first run a linear search on the linked list, and that costs a lot. Actually, it costs more than copying the same amount of array-list data.
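A toy sketch of the two insert-into-sorted-order paths being compared (names are illustrative; both are O(n), but on real hardware the array's scan and bulk shift stream through contiguous memory, while every linked-list step is a dependent load):

```python
class Node:
    """Minimal singly linked list node."""
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def linked_insert_sorted(head, value):
    """Insert into a sorted singly linked list: O(n) search, O(1) splice."""
    if head is None or value < head.value:
        return Node(value, head)
    cur = head
    while cur.next is not None and cur.next.value <= value:
        cur = cur.next          # one dependent pointer dereference per step
    cur.next = Node(value, cur.next)
    return head

def array_insert_sorted(arr, value):
    """Insert into a sorted Python list: O(n) scan plus O(n) tail shift."""
    i = 0
    while i < len(arr) and arr[i] <= value:
        i += 1
    arr.insert(i, value)        # one bulk, memmove-like shift of the tail
```

The search dominates both; the difference is that the array's work is cache-friendly and the linked list's is not.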


I’m not sure what I am wrong about; I’m simply pointing out that worse average-case performance with a better worst case is sometimes desirable over better average-case performance with a worse worst case. Context is everything.

Also some things to note: a) game logic tends not to parallelize very well for various reasons, but even so it depends on the domain of the problem whether or not you can run a parallel algorithm on a linked list, b) if you already have a reference to the insertion point you can avoid the walk, c) cache locality is only important if you are actually iterating over the list, etc.


I have never ever seen a linked list in a game, anywhere, serving any purpose.


Look at all the instances of "next" in this file: https://github.com/id-Software/Quake-III-Arena/blob/dbe4ddb1...


He is still probably technically right.

"I have never seen (...)"


Nobody responding asserted anything to the contrary. Rather, examples were provided rhetorically and for anybody interested in exploring more.



GP basically created a hypothetical situation wherein they were right, and then here you are saying they're wrong. The only case in which you would be right in saying the GP is unequivocally wrong is if dynamic arrays were Pareto-optimal compared to linked lists.

Alice: Can you think of any situation where apples are better than oranges?

Bob: Well, if someone had a craving for apples, then they would be better.

Charlie: Unfortunately, you are wrong. Even in that situation, oranges are juicier, have more tang, and contain more vitamin C.


Sorry but your analogy makes no sense because Kiwis are better.


Worth the DVs (;


> That is because to find insertion/deletion place you must first run linear search on linked list and that costs a lot.

That isn't always true. If you already have a pointer to the position in the list, and the list has an API that supports it (which a good implementation would), then with a linked list you wouldn't need to traverse the list again.

That said, in most cases that probably isn't worth the downsides of a linked list, especially if the size of the list is small.
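For anyone unfamiliar, this is what holding a node reference buys you (a minimal sketch with illustrative names; C++'s std::list, for instance, exposes exactly this kind of iterator-based insert):

```python
class Node:
    """Minimal singly linked list node."""
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    """O(1): splice a new node in after `node`; no traversal needed."""
    node.next = Node(value, node.next)
    return node.next
```

With a saved handle to the node, the splice is constant time no matter how long the list is, whereas a dynamic array still has to shift everything after the insertion index.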


> Arrays used as backing for lists are faster than linked list in almost all cases, assuming they are implemented correctly (as is the case in Java, which I bring as an example).

I think you're overcorrecting here.

There used to be a conventional wisdom that linked lists were always faster than dynamic arrays because you don't have to copy or shift items. But then CPUs got faster while memory didn't and caching effects became so prevalent that that conventional wisdom is no longer true.

Today, in many cases, arrays are faster. And that awareness of caching effects is becoming greater. But I think there's a tendency to over-apply new knowledge that seems counter-intuitive.

The reality is somewhere in the middle. Arrays can be surprisingly fast, even when you need to shift stuff around to insert or copy to grow. But there are still plenty of cases where linked lists are better if you are doing a lot of inserting or rearranging. I don't think the guidance is so much "linked lists bad!" as it is "arrays maybe not bad".


> There used to be a conventional wisdom that linked lists were always faster than dynamic arrays because you don't have to copy or shift items. But then CPUs got faster while memory didn't and caching effects became so prevalent that that conventional wisdom is no longer true.

Nothing to do with caches or conventional wisdom no longer being true.

Think for a second, if you want to insert somewhere inside your shiny linked list, how do you find where to insert?

Unless you have some kind of index to the list (in which case it no longer is a linked list, it is some other data structure), you have three options:

a) at the beginning of the list

b) at the end of the list

c) in the middle of the list, in which case you need to run linear search to find the place.

If you want to do this at the end of the list, then array list has the same amortized cost as linked list.

If you need to do this at the beginning of the list, then array list can be reversed. Unless you have the very special case of having to add at both beginning and end of the list.

So most likely you need to insert somewhere within the list to see the improvement over an array list.

But then you need to do the linear search, and the search is so much slower with a linked list that it completely offsets the cost of copying all that data.


> Think for a second, if you want to insert somewhere inside your shiny linked list, how do you find where to insert?

You usually have a pointer to the node where you want to insert.

I don't think many people would suggest using linked lists in cases where you actually do need to seek for the insertion point.


They were talking about big-O analysis (or whatever you want to call it), and then you jumped in talking about speed and performance. Those aren't the same thing and, as you say, may not even be closely related. I think you're right, but again, it's not applicable to the point of contention the above comments were discussing.

Also, in an interview, it should be fine to talk about all this. It could lead to some good technical discussion where you can show your knowledge. If such a conversation does arise though, remember you have to make these people like you and ramming your point down their throats and forcing them to acknowledge that you're right and they're wrong probably doesn't do that. Also, don't forget you may, indeed, be wrong.


What is not applicable?

If one person says a linked list is faster than an array list but the opposite is true, then why do you claim it is not applicable?

Isn't it exactly the point of the article, that sometime you need to say what the interviewer wants to hear rather than what is the actual truth?

It is an unfortunate truth that a lot of interviewers will ask about linked lists vs array lists and at the same time will not understand that linked lists will be slower for insertions or deletions in most cases.

That is because you first need to find the insertion/deletion place, and any benefit of faster insertion/deletion for linked list will be offset by much slower search.


None of the comments above yours in the thread mention any form of the word "fast" or "speed". They mention "performance" in reference to big-O complexity. Big-O is not always about speed.


> None of the comments above yours in the thread mention any form of the word "fast" or "speed". They mention "performance" in reference to big-O complexity. Big-O is not always about speed.

I am sorry, do you mean to say "performance" and "big-O" have nothing to do with trying to make the program go faster?

I think you have lost your way and need to backtrack a little bit.

The whole point of big-O analysis is to be able to reason about how fast a program will be given input size.


If I ask for something to be done in O(1) I'm not asking for it to be fast, I'm asking for it to take the exact same amount of time every time no matter what. That might end up being slower, but so what, maybe that's what I need.

If I ask for an O(1) algorithm and you build something that is as fast as possible, faster in every case, but sometimes it's really fast and sometimes it's a little less fast but still fast -- well, then it's not O(1) and not what I asked for. It may be fast, but if it's sometimes faster than at other times it's not O(1).

Thus, I do not consider big-O to be synonymous with speed. They are different, because the best possible big-O, O(1), does not necessarily mean the fastest.


> Thus, I do not consider big-O to be synonymous with speed. They are different, because the best possible big-O, O(1), does not necessarily mean the fastest.

Maybe not synonymous in the sense of technically exactly equal, but certainly a pretty good approximation -- and absolutely synonymous in the sense that the whole purpose of big-O notation is to be able to reason about speed in more general terms, comparing algorithms in pseudo code instead of having to actually write the program so you can run it through a benchmark.

So, no: If "the best possible big-O, O(1), does not necessarily mean the fastest," then it isn't actually the best.


A hash table has E[O(1)] inserts and lookups. Most people ignore the expected part and just say those operations are O(1).

Asymptotic complexity is of course only loosely correlated with speed. In real systems with real workload sizes, constant factors matter a lot.


I'm confused - that isn't what I understand O(1) to mean. To me, O(1) means only that there exists some constant that bounds the runtime, while individual invocations can absolutely be sometimes faster.


Well, not exactly.

What it really means is that the execution time does not depend on the size of input (n).

What it does not say is how much time it takes to execute, whether it is exactly the same amount of time every time, or whether there exists some kind of upper bound on execution time.

For example, an algorithm that takes 1/(randFloat()) time to insert an element to the data structure where randFloat() returns any possible floating point number with equal distribution, is still O(1) algorithm, even though there is no upper limit on how long it can take to execute.


> an algorithm that takes 1/(randFloat()) time to insert an element to the data structure where randFloat() returns any possible floating point number with equal distribution, is still O(1) algorithm, even though there is no upper limit on how long it can take to execute.

According to the definition, if 1/(randFloat()) = O(1) then there must be a constant M that satisfies 1/(randFloat()) <= M * 1. But according to your own words there's no upper limit to 1/(randFloat()), therefore there's no such constant M, therefore it's not O(1).

(In practice on most systems there would be an upper limit since a float can't be infinitely close to zero, but let's act as if that wasn't the case.)


More a case of "f(n) = 1/randFloat() does not have a well-defined limit as n goes to infinity", so it would be hard to say that it fits in ANY complexity class.

What is, however, clear is that its run-time does not depend on the size of the input. And technically, that means we can find a constant (infinity) that always... But, that is pretty unsatisfying.



> The whole point of big-O analysis is to be able to reason about how fast a program will be given input size.

It's an important technicality that matters when you are doing performance tuning of code on modern CPUs. Big-O is asymptotic but omits the constant multiplier, so if you're comparing, for example, deletion from a naive binary search tree vs. an unsorted array, it's not necessarily obvious that a tree search on a naive pointer based tree (O(log n)) is faster than a find, swap, and delete (O(n)) until n is sufficiently large.

Another example is iterating through the pixels of a very large bitmap on the order of 10m+ pixels. While iterating through all pixels should be linear to the number of pixels (O(width * height)), assuming it's a row-oriented bitmap, which most are, scanning row by row can be substantially faster than scanning column by column because of caching behaviors.
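The two traversal orders look like this (a Python sketch for illustration; nested Python lists won't show the effect the way C on a flat buffer does, but the access patterns are the same):

```python
# Two ways to visit every pixel of a row-oriented bitmap, here a list
# of rows. Both are O(width * height); in a low-level language the
# row-by-row order is typically much faster because consecutive pixels
# in a row share cache lines, while column-by-column jumps a whole
# row's stride between accesses.

def sum_row_major(bitmap):
    total = 0
    for row in bitmap:              # walk each row's contiguous pixels
        for pixel in row:
            total += pixel
    return total

def sum_column_major(bitmap):
    total = 0
    height, width = len(bitmap), len(bitmap[0])
    for x in range(width):          # stride of one whole row per access
        for y in range(height):
            total += bitmap[y][x]
    return total
```

Both compute the same answer with the same asymptotic complexity; only the memory access pattern, and therefore the constant factor, differs.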

Point being, big-O and actual performance are not always the same thing because the constant factor can sometimes dominate depending on what you're trying to do.


"Performance" depends on a number of factors beyond the code itself.

To take a real world example, immutable operations in JavaScript are often more performant than mutable operations--thanks to the fact that Chrome's JavaScript engine "cheats" with its hot-path functionality.

However, in Big O, constantly generating new data structures, even within loops, would clearly trash the algorithm's space complexity. On compilers that don't optimize for immutability, the Big O algorithm is more performant; on the ones that do, the immutable approach is more performant.

It's because of example like these that you want to disconnect the concept of "performance" from Big O. Because context matters for the former, while the latter relates only to the algorithm itself.


Replying here because I can't to the relevant comment:

> No it’s not, it’s about trying to quantify algorithmic complexity.

And what would you say is the goal of quantifying algorithmic complexity?


In cryptography the goal might be to ensure the algorithm always takes the exact same amount of time, or the exact same number of CPU instructions, even if it is slower than alternatives. This is a case where we are interested in complexity without regard to speed.

Thus, the answer to why we care about complexity is "it depends". But it is not always about speed.


BTW I've found that in cases where the reply and/or edit button(s) disappear, refreshing the page usually causes them to appear.


There's a delay before the reply button appears to encourage more thought as discussions get deeper. A HN thing.


In those cases, if you click on the "X minutes ago" timestamp you'll get a single-comment view with a reply box. Yours didn't have a reply link in-thread, so that's what I did here.


Wow I thought this was some rendering bug for the longest time. Is there any (un)official list of HN quirks, out of curiosity?


No it’s not, it’s about trying to quantify algorithmic complexity.


Linked lists are better for immutable/persistent structures


If your data structure is immutable, then an array is always faster than a linked list.


Immutable doesn't necessarily mean unchanging in the context of data structures.

An immutable data structure can support adding, removing, and/or changing data, but in such a way that once data is created it isn't modified until it is completely unused and unreferenced.

So an immutable data structure has the benefit that data stays the same and stays the same in memory so it can be shared where duplication exists. You can copy said data-structure but it won't actually modify the underlying data and will just allocate a new tag/head that holds some reference to the underlying data. Now this new copy can be modified and it will instead just allocate a new block of memory for the changed portion and stitch it in to the head/tag structure as if it was just some diff/delta.

Here immutable means that the underlying data is unchanging and can be reused/shared by multiple instances, not that it is unchanging at all from an external point of view.

Now in the case of your comment, an array is extraordinarily bad in most cases for an immutable data structure as it is extremely difficult to sub-divide and let go of unused/old parts without breaking that immutability for the in-use data. Linked lists have their own issues as well but can be a valid trade-off for certain types of immutable data structures. Generally though trees will be the primary underlying structure for these immutable data structures as they are easy to use for this purpose, have reasonably good lookup times, have reasonable worst time complexity, and most importantly decent average/amortised time complexity.
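A tiny sketch of the structural sharing described above, using a persistent list built from immutable cons cells (Python here; the same idea underlies persistent lists in Lisp, Haskell, and Clojure):

```python
# "Updating" never touches existing cells; new versions share the
# unchanged tail, which is why linked structures (and trees) suit
# immutable/persistent data while flat arrays generally do not.

def cons(head, tail):
    return (head, tail)          # an immutable cell

def to_list(cell):
    """Flatten a chain of cons cells into a plain Python list."""
    out = []
    while cell is not None:
        head, cell = cell
        out.append(head)
    return out

v1 = cons(1, cons(2, cons(3, None)))   # version 1: [1, 2, 3]
v2 = cons(0, v1)                       # version 2: [0, 1, 2, 3]

# Both versions stay valid, and v2's tail *is* v1 (shared, not copied).
```

Prepending built v2 in O(1) without invalidating v1; doing the same with arrays would require copying, or mutating v1 in place and losing the old version.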


The term you are looking for is pointer stability. It can be a useful property, but you can also get it by storing pointers in a vector, which would often be faster in practice than a linked list.


Isn't that a persistent DS? I thought OP mistook it for an immutable-at-the-memory-level data structure... like an array you create once but read many times from.


Well yes but I've always seen immutable DS as synonymous for persistent DS. What OP refers to I've found to be referred to as fixed, const, const static, or compile-time static data structures.

Of course naming in CS and Math is terrible and we love reusing terminology so that everything is confusing but that's just where we are.

I wasn't trying to be negative towards OP or anything. I just saw that since their comment was downvoted so heavily (which it really shouldn't have been), it might be beneficial to reply with an explanation to clear up any misconceptions and smooth out the conversation.


> for being stubborn and drunk on ego.

I think you would have a hard time finding someone who knows me describe me that way. Maybe the way I related the anecdote here doesn't present me well. I am pretty meek and easily flustered. I don't think I was any more confident back then than I am now which is, alas, not very.

> The fact that you were unable to incorporate this into the conversation

In this case, it was an interview with three other engineers simultaneously (a truly cursed interview format), so it was hard to incorporate much of anything into the conversation. It felt much like I imagine a thesis examination where the three of them were in charge of pacing and questions. (Also, it was ten years ago, so I'm sure my memory has faded.)

What I recall was them asking me how I'd do some sort of growable collection. I said I'd probably do a dynamic array. I mean, that is the way 99% of growable collections are done today—look at Java's ArrayList and C#'s List. This is a company that does strictly PC games and I was interviewing for a tools position. Dynamic arrays are the right solution most of the time in that context.

They asked what the complexity was. I said something like "Constant time, across multiple appends." They didn't seem to get that and asked what the worst case was. I said some appends are O(n), but amortized across a series of them, it's constant. When I tried to clarify, they said they wanted to move on. I think in their minds I was hopelessly lost and they wanted to get to the next question so that I didn't embarrass myself further.

I would have been happy to have a more productive discussion about which problems amortized analysis was the right fit for. My impression was that they had never heard of amortized analysis at all, and thought I was confusing average case and worst case analysis (which I was not). From their perspective, I can see how I looked lost or wrong.

Overall, they had a superior tone that I found off-putting. (For comparison, I didn't get that impression from any of the interviews I had at Google the very next day. My Google interviewers were all kind, engaging, and really fun to talk to.)


You may be entirely correct in your analysis. There are definitely a fair share of toxic interview setups and interviewers out there. What I found off putting about the way you presented the story was the bit at the end where you gloated about your current employment status and achievements as if they missed out on some great mind. It gives the impression that in your view, your intellectual prowess was more important than being a good candidate and potential teammate. While that of course may just be the way it came off, that was my perception. So it drove me to present the alternate, less charitable, interpretation where the interviewers did in fact understand amortized complexity and simply didn't find it suitable.

> I would have been happy to have a more productive discussion about which problems amortized analysis was the right fit for.

This is the type of conversation I tend to enjoy during interview problem solving because IMO it most accurately reflects the type of conversation you'd actually have with teammates when building a system. I also might try to bait this topic by specifying O(1) worst case insertion up front and watch for how the candidate interprets the requirement, what type of questions are asked, etc. If you started implementing an array backed thing I might stop you early and prompt the insertion analysis specifically. However, in my experience people tend to discuss an overview of the solution before writing any code which is a great point to iron out any clear non-starters or missing requirements.

> Overall, they had a superior tone that I found off-putting.

Fair. It's a two way street after all. Even if they were entirely fine and it was all a comms misunderstanding, sometimes personalities just don't gel. Glad you found success the next day!


> What I found off putting about the way you presented the story was the bit at the end where you gloated about your current employment status and achievements as if they missed out on some great mind.

Yeah, that's fair. I'm not one to gloat usually but I never felt like I got closure from what was a pretty unpleasant interview experience so I couldn't resist the urge to get a dig in.

> This is the type of conversation I tend to enjoy during interview problem solving because IMO it most accurately reflects the type of conversation you'd actually have with teammates when building a system.

My Google interviews were so much better in this respect. The interviewers were really candid and fluid. They did a great job of letting me bring up points that I thought were relevant but not get too ratholed. Overall just a miles better experience.

> sometimes personalities just don't gel.

One way to look at it was that the interview was a success because it established that I likely wouldn't be happy working with those devs and that kind of culture.


> I bet they weren’t as dumb as you think and you were passed on for being stubborn and drunk on ego.

This flippant remark followed by the egotistical follow up is the kind of art I read hacker news for, thank you.


If the question was about constant time complexity but the questioners had a secret additional requirement that they refused to share, that's a flaw in the questioning. If the questioners had that additional secret requirement and understood amortized analysis, then they should have at least given a hint at their secret requirement by saying something like "your proposal does indeed deliver amortized constant time complexity, but what could the potential downsides be of this approach?" If the commenter is giving an accurate description of what happened in the interview, then the questioners were absolutely the ones being stubborn or ignorant. (And, incidentally, I'm pretty sure that dynamic arrays are going to come out ahead in practice even with the secret "tight loop" requirement.)


If they were not dumb, they would've explained to him why amortized complexity is not a suitable measure for their application.

If they were not dumb, they would definitely understand what he was saying and not sit clueless.


Games work with time budgets of many milliseconds per frame, relatively long timescales from a cpu pov. It is rare to prefer predictable per iteration latency in loops over higher throughput unless the deferred batch of work is quite big. But of course this can compound in some cases, eg you have thousands of these arrays being extended in lockstep and they all trigger the extra work at the same time...


If the arrays are big enough for growing them to mess up your time budget, then it's quite likely that a linked list would already be failing.


Yep, and the associated dynamic memory allocation for LL nodes will still have latency spikes from the malloc implementation so its worst case latency is still bad.


I'll add the really correct but stubborn answer. It's a game, why are we allocating memory?

Working in for example Unity I need to not ever allocate anything per frame if I can help it (and then knowing other gotchas such as that foreach and/or getting the .Count on a list will also cause memory to be allocated).


Interviews are about shows, not boring answers.


This is so true. It’s about putting as many good vibes and good feelings into the interview as possible while also seeming competent. “Do I want to work with this guy?”


In some cases amortized isn't good enough though, and games could certainly be one of them!

(Anywhere you're servicing some kind of interactive request might qualify, depending on the size of your data structures.)

This is actually something I like to work through in interviews:

OK, you've got amortized-constant-time append to your array-backed collection, but now what if we need it to be really constant time? How could we achieve this? (Same question & trick applies to hashtables.)

(The answer can be found, amongst other places, in the Go or redis source code.)
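For concreteness, here's a toy version of that trick (my own sketch; Go's map growth and redis's incremental rehash use the same spirit of migrating a couple of entries per operation). Preallocated Python lists stand in for raw fixed-size arrays:

```python
class SteadyArray:
    """Append-only collection with worst-case (not just amortized) O(1)
    appends: keep two arrays and mirror two elements on every append, so
    no single append ever pays a full O(n) copy. A sketch, not production
    code; all names here are made up."""

    def __init__(self):
        self.cap = 2
        self.first = [None] * self.cap         # reads are served from here
        self.second = [None] * (2 * self.cap)  # next array, filled gradually
        self.size = 0                          # elements stored
        self.moved = 0                         # elements mirrored into second

    def append(self, x):
        self.first[self.size] = x
        self.size += 1
        # Mirror up to two elements per append; by the time `first` fills,
        # `second` already holds a complete copy.
        for _ in range(min(2, self.size - self.moved)):
            self.second[self.moved] = self.first[self.moved]
            self.moved += 1
        if self.size == self.cap:
            # Promote second to first; allocate the next background array.
            self.first, self.cap = self.second, 2 * self.cap
            self.second = [None] * (2 * self.cap)
            self.moved = 0

    def __getitem__(self, i):
        return self.first[i]

a = SteadyArray()
for i in range(100):
    a.append(i)
assert all(a[i] == i for i in range(100))
```

Each append does one write plus at most two copies, so the worst case is O(1) (ignoring the allocator, which is its own can of worms).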

edit: or in pavpanchekha's comment, they beat me to it :-)

also, just realized you're the author of crafting interpreters. absolutely love that book!


>> In some cases amortized isn't good enough though, and games could certainly be one of them!

Real time systems including games have hard limits on how long something can take in the worst case, not the average. In games this manifests as: All the stuff has to be done in 16ms to render a frame on time, if we're late the user will perceive a glitch. The consequences can be worse in control systems in any number of real-world systems where a glitch can lead to stability problems.

So getting back to TFA, this is part of knowing what the "right" answer is for the context.


So it would be fine if they had said, "okay, for this problem let's add the requirement that every add must be fast, because it's within a frame rendering loop", but (from OP's description) they didn't even seem to understand that's a separate desideratum, or that it doesn't mean the amortized time is bad.


It's probably implicit since it's a gaming company; however, it might be fair to say the interviewers didn't appreciate the possibilities outside of gaming and unconsciously assumed this context was understood.


> worst case, not the average

Yup, I can't remember which book but I'm pretty sure I remember reading some stuff from Michael Abrash about how some of the algorithms in quake were changed for worse average performance but better worst case performance. It's fairly intuitive when you think about the context... and that is the critical point - abstract problems are interesting, but it's important to evaluate them in their full context.


One thing I’d add is that in games the workload is known in a way that other general applications don’t have. In this case you can think of the work as “preloading”, where the allocations / disk reading can happen several seconds before they are needed for rendering or physics/collision. This is a pretty cool special case as it means you can hit your allocation targets before you need them, giving the best of both worlds: the cache coherence of arrays, and perfect framerate because you have far fewer memory allocations, focused on known game events (cinematic transitions, enemy changes, or map streaming boundaries, etc.)


Just a thought -- gaming is latency sensitive. Maybe their issue with it wasn't about average performance, but that the once-in-a-while perf hit would be enough to cause a bad experience for the person playing the game? I know I'd be frustrated if there was a predictable lag spike while playing a game.


Any game engine design worth its salt would:

1. Probably not use linked lists (contiguous layout means better cache efficiency)

2. Would try to understand their data requirements and allocate memory up front as much as possible - doing a similar amortized analysis the OP is suggesting rather than a generic "always have O(1) insertion" at the cost of using an inferior data structure (a linked list)


Any game engine architect worth her salt would know to not speak so absolutely about cache coherency, and that if you're dealing with a use-case where iteration is massively infrequent but random insertions and removals are likely, you could be better off with the linked list :)


Certainly if you design a scenario where linked lists are superior to use, you should use linked lists. Fortunately, these are few and far between in real production software.


No, latency didn't come up. (I agree that could be a reason not to use a dynamic array.) They were hung up on the idea that a dynamic array must be O(n) because at least some of the appends copy.


Assuming you can average over all requests implies that latency does not matter, doesn't it?

If you're the interviewee, it's up to you to get that clarification


The question they asked was about its complexity in big-O terms, not so much about its real world performance, at least as I recall.


You explicitly said they were hung up on the "worst case O(n)" situation, so I suspect they were concerned about the latency. Insertion into a dynamic array has a worst case of O(n) and an average case of O(1), no?


It's not great to have to double your memory usage while you reallocate your array. On more limited devices (see games consoles or mobile devices) you'll end up fragmenting your memory pretty quickly if you do that too often and the next time you try to increase your array you may not have a contiguous enough block to allocate the larger array.

There's also the cost of copying objects, especially if you don't know whether the objects you're copying from your original array to the resized array have an overloaded copy constructor. Why copy these objects and incur that cost if you can choose a data structure that meets the requirements without this behaviour?

If you're holding pointers to these elements elsewhere, re-allocating invalidates those. Yes, you probably shouldn't do that, but games are generally trying to get the most performance from fixed hardware, so shortcuts like this will be taken; it's at least something to talk about in the interview.

I can see why they were confused by your answer, as it's really not suited to the constraints of games and the systems they run on.


> It's not great to have to double your memory usage while you reallocate your array.

You don't have to use a growth factor of 2. Any constant multiple of the current size will give you amortized constant complexity.
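A quick back-of-envelope sketch (mine, not from the thread) that makes the amortized claim concrete by counting element copies across a run of appends:

```python
def total_copies(n, growth=2):
    """Count element copies over n appends to a dynamic array that
    multiplies its capacity by `growth` (an integer >= 2) when full."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size        # the occasional O(n) reallocation copy
            capacity *= growth
        size += 1
    return copies

# Total copies grow linearly in n for any constant factor, so the
# per-append cost is O(1) amortized.
print(total_copies(1000, 2))  # 1023
print(total_copies(1000, 3))  # 1093
```

With factor 2 the copies form the geometric series 1 + 2 + 4 + ... < n, i.e. roughly one extra copy per element ever appended; larger factors trade more wasted capacity for even fewer reallocations.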

> On more limited devices (see games consoles or mobile devices) you'll end up fragmenting your memory pretty quickly if you do that too often and the next time you try to increase your array you may not have a contiguous enough block to allocate the larger array.

If fragmentation is a concern, you can pre-allocate a fixed capacity. Or you can use a small-block allocator that avoids arbitrary fragmentation at some relatively minor cost in wasted space.

> Why copy these objects and incur that cost if you can choose a datastructure that meets their requirements without this behaviour.

We have an intuition that moving stuff around in memory is slow, but copying a big contiguous block of memory is quite fast on most CPUs. The cost of doing that is likely to be lower than the cost of cache misses from using some non-contiguous collection like a linked list.

> as its really not suited to the constraints of games and the systems they run on.

For what it's worth, I was a senior software engineer at EA and shipped games on the DS, NGC, PS2, Xbox, X360, and PC.


When you reallocate your array, you will have both your old array and your new larger array in memory while you move your data over. At the very least you're using 2x your current memory, plus the extra for the expansion.

For your other points, if you'd mentioned them in the interview you'd probably have been better received. Copying is really only that fast for POD objects (your objects' copy constructors may need to do reallocation themselves, or worse), so if you're suggesting a general solution you should be aware of that (or at least mention move constructors, if they were available at the time).

I would be surprised if any of the games you worked on actually shipped with an amortised resize of dynamic arrays (at least not for anything that mattered in the first place), so I don't know why you'd suggest it as a general solution in a game dev context.


A linked list needs a pointer for every piece of data. If the data is small, this will also be a 2-fold memory impact. Plus impacts on cache coherency.


A typical C implementation would be using realloc():

The realloc() function tries to change the size of the allocation pointed to by ptr to size, and returns ptr. If there is not enough room to enlarge the memory allocation pointed to by ptr, realloc() creates a new allocation, copies as much of the old data pointed to by ptr as will fit to the new allocation, frees the old allocation, and returns a pointer to the allocated memory.

Worst case definitely 2x memory. But not necessarily always.


> It's not great to have to double your memory usage while you reallocate your array. On more limited devices (see games consoles or mobile devices) you'll end up fragmenting your memory pretty quickly if you do that too often and the next time you try to increase your array you may not have a contiguous enough block to allocate the larger array.

That doesn't smell right to me, assuming you're talking about userspace applications on newer hardware. aarch64 supports at least 39-bit virtual addresses [1] and x86-64 supports at least 48-bit virtual addresses [2]. Have you actually had allocations fail on these systems due to virtual address space fragmentation?

Certainly this is something to consider when dealing with low-RAM devices with no MMU or on 32-bit, but the former hasn't applied to the device categories you mentioned in probably 20 years, and in 2021 the latter is at least the exception rather than the rule.

[1] https://www.kernel.org/doc/html/v5.8/arm64/memory.html

[2] https://en.wikipedia.org/wiki/X86-64#Virtual_address_space_d...


There was a meme going around where a doctor took a high school biology quiz.

Q: What are mitochondria?

A: [Long complex scientific answer about ATP synthesis]

Grader: Wrong. Mitochondria are the powerhouse of the cell.


In a "Business and Communication Systems" exam at school I got the question "Explain how email works". Being the nerdy teen I was, I wrote about SMTP, MX records and that sort of thing.

That was not the right answer.


In game dev, that one copy might cause a frame drop. Even if amortized over time the frame rate would be slightly higher, the occasional dip might make it unplayable.

Of course, arrays have better cache locality, but they didn't ask about that.

They were probably looking for a linked list with a small array in each node, which gives more constant write performance and less cache misses on read.


They didn't ask about latency either. I would have been happy to have the discussion of real-world performance, but we were stuck on the basic "what is the algorithmic complexity of this?" question.


If they had a clue what they were asking about, wouldn't they have corrected GP? "No dude, we can't have that worst case here in game dev!!" But they asked about algo complexity and couldn't recognize that GP was talking about amortized complexity; something sounds wrong.


Why didn't you just say "...or, you could use a linked list" when you notice them doubting your first answer?


By that point, they said we should just move on to the next question.


I had a less dramatic one where someone argued with me that the lookup time on a binary tree was O(H), the height of the tree, not O(log2n). I was so baffled by the argument that I didn't realize until after the interview that I should have pointed out that the height of a binary tree is log2n.


They were technically correct. The lookup time on a binary search tree is O(H), which is equal to O(log2n) if the tree is balanced. Tree data structures invest a lot of complexity into keeping the tree balanced.


Doesn't this only affect inserts and deletes though? I mean I get your point, but on a read you can assume that a binary tree is balanced (by definition). Or am I missing something?


No, not all binary trees are balanced binary trees.
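A quick sketch (mine, not from the thread) of what goes wrong: insert sorted keys into a plain unbalanced BST and its height is n, not log n, so lookups degrade to O(n).

```python
def insert(root, key):
    """Insert into an unbalanced BST represented as nested dicts.
    No rebalancing -- this is the point of the demonstration."""
    if root is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < root["key"] else "right"
    root[side] = insert(root[side], key)
    return root

def height(root):
    if root is None:
        return 0
    return 1 + max(height(root["left"]), height(root["right"]))

root = None
for k in range(100):          # sorted insertions: the worst case
    root = insert(root, k)
print(height(root))           # 100 -- the "tree" is really a linked list
```

Self-balancing variants (AVL, red-black, etc.) exist precisely to keep H at O(log n) regardless of insertion order.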


Ah got it, thanks.


I see a lot of responses defending the interviewers, or saying what you should have done.

Maybe so, but I'm just gonna go ahead and say - those interviewers were shit. Most interviewers are, so it's not really a judgement on them, but the problem in that room wasn't you.


It sounds like you bombed the interview because your answer was not the correct answer for the domain.

For games, it absolutely does matter that only some appends trigger latency, because that causes stuttering in the game play. A linked list may be slower in most use cases...but the performance cost is fixed and can be easily designed around.


I feel like I've replied to this same thing about five times now, but, yes, I completely understand the latency concern with growing a dynamic array. At the time, we weren't talking about it. Their question was, "What is the complexity of this?" And I said, "It's constant time over a series of appends." We didn't get past that.


The guy worked at EA for 8 years.. He must know what he is talking about


he also wrote the Crafting Interpreters book : )


And Game Programming Patterns, which has a chapter on the performance effects of contiguous data:

https://gameprogrammingpatterns.com/data-locality.html

:)


That means nothing. Time != experience. You have to have actually learned during that time for it to count for anything.


Link to said textbook content on dynamic arrays: http://www.craftinginterpreters.com/chunks-of-bytecode.html#...

Edit: Ah, I just saw munificent linked to this in a reply elsewhere in the thread.


Can you link the textbook? I'd be interested in reading it.


I talk about dynamic arrays in Crafting Interpreters:

http://craftinginterpreters.com/chunks-of-bytecode.html#a-dy...


Thank you for giving away your book for free. I signed up for the mailing list and hope to purchase a copy when you're releasing printed versions.


You're welcome! I'm getting really close to having the print and ebook editions ready for sale.


i really dig the art style of your diagrams! playful and clear


You should have just said "Look guys, trust me, I am going to write Game Programming Patterns one day"


I was already writing it when I did this interview! (Though, I hadn't gotten to the chapter on data locality yet.)

One of the main reasons I started writing it was to help my job search. Which, ironically, ended up not being necessary because I left the game industry. What's crazy to think about is that if I hadn't failed this interview, I probably wouldn't have gone to Google.

So this one weird failure to explain the big-O of dynamic arrays may have dramatically changed the course of my career. Or, who knows, maybe they failed me for other reasons.


Well, all's well that ends well. Failing that interview just might have made you a couple million dollars richer. With all due lip service to all the other advantages of working on hard problems at Google, of course :)

Funnily enough, I am trying to go the other way over the next decade or so, ...so looking forward to reading your book, heh


So, amortized complexity is different from actual complexity. Doubling the size of the array whenever you overflow leads to log(n) performance, which isn't constant. What they were looking for was an array of arrays such that each consecutive nested array is double the size of the previous. This way, you get the same number of expected allocations, but you don't ever have to copy data (and therefore inserts are actually constant time). It also has the benefit of being able to consolidate the subarrays into a single one at places where there is time to do so. So, you didn't actually give the question asker what they were asking for.
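If I'm reading the proposal right, it's a segmented array: block k holds 2^k slots, so no element is ever moved once written, at the cost of a little index arithmetic. A rough sketch (my own; all names are made up):

```python
class SegmentedArray:
    """Growable array as a list of doubling blocks: sizes 1, 2, 4, 8, ...
    Appends never copy existing data; indexing uses bit tricks to find
    which block element i lives in."""

    def __init__(self):
        self.blocks = []   # block k has 2**k slots
        self.size = 0

    def append(self, x):
        # Capacity of k blocks is 2**k - 1; allocate the next block if full.
        if self.size + 1 > (1 << len(self.blocks)) - 1:
            self.blocks.append([None] * (1 << len(self.blocks)))
        b = (self.size + 1).bit_length() - 1      # which block
        self.blocks[b][self.size + 1 - (1 << b)] = x
        self.size += 1

    def __getitem__(self, i):
        b = (i + 1).bit_length() - 1
        return self.blocks[b][i + 1 - (1 << b)]

s = SegmentedArray()
for i in range(100):
    s.append(i)
assert all(s[i] == i for i in range(100))
```

The trade-off versus a flat dynamic array is an extra indirection (and slightly worse cache behavior) on every access, in exchange for never paying the reallocation copy.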


This answer is the epitome of all of the answers here pointing probably in the wrong direction.

The OP is basically saying any discussion of 'amortization' or anything past something very simple, was completely beyond them.

And your response, like many others here, is gong way off into the weeds, suggesting 'what they were really expecting' etc..

I definitely understand HNers willingness to go into the weeds for no apparent reason, but the lack of social comprehension here is really odd.

The OP has reiterated over and over the social context and yet everyone seems to be happy to dismiss it whilst providing their bit of extraneous extra input.

It goes on like a comedy.

"But they must have been expecting this niche/novel thing way over here in the corner, and well if you didn't get that ..."

It's as though the respondents are validating the poor ability of us techies to establish context.

Any specific requirements in the interview, it would seem, could have been discussed by the interviewer and frankly without providing very specific contextual details, there's no such thing as a 'right answer' because it always 'depends'.

And finally, why anyone would expect specific correct answers instead of a discussion about the constraints is also odd to begin with.


I'm fairly certain they were looking for a linked list, but I could be wrong.


> fairly certain they were looking for a linked list, but I could be wrong

They were probably looking for an answer which went along the lines of a linked list and then what to fix - the ratio of pointer sizes to data (16-item nodes), sorted insert optimizations, the ability to traverse to the 500th element faster etc (skips with pow-10 or a regular skip list etc).

I bombed a similar soft-ball interview question in the past, because I was asked to build out a work item queue using a database in SQL & I got stuck failing to explain why it was a bad idea to use an ACID backend for a queue.

Some constructive feedback from the person who referred me came back as "dude, you just wouldn't let go of the topic and the interviewer gave up on you".

That was definitely true for me, but I remember that feedback more than the actual technical correctness of my opinion.

That was a clear failure of fit in multiple ways, but it didn't matter whether I was right or not, I had failed to persuade someone who would be my team lead on a topic which I had deep knowledge on.

Even if I was hired, it wasn't going to work out.


There is a tradeoff between the cost of introducing another component, and the advantages of a perfect component.

If you already have an ACID database at hand, and your queue requirements are not that big, using the database and NOT having to have someone around who knows exactly how Rabbit MQ was set up is better than to require everyone around you to know the piece of specialized software that you happen to be a domain expert on.

Conversely if your needs are demanding enough, then it is best not to let your team discover the hard way why people build dedicated queueing systems.

If you are unable to accept that this tradeoff even exists, then I wouldn't want to be your team lead.


> it was a bad idea to use an ACID backend for a queue

Why do you think so? It completely depends on the utilization of their ACID backend, and how well it handles independent data.


Couldn't you show them an example of how it'd work, e.g. on a whiteboard?

It'd be a huge pro if you could explain concepts to people who don't know them in 5 minutes.


I was on a whiteboard, but the social cues I was getting from them made it pretty clear that even my attempts to explain were just digging myself deeper into a hole in their eyes.


Next you'll try to convince us that "munificent" is a real word and not just a misspelling of "magnificent". Sheesh.


Here is the question that was asked, and here is the question that was answered (quoting):

Asked: "constant time complexity"

Answered: "constant time when averaged over a series of appends"

In an interview, the onus is really on the candidate to answer the question as it was asked. Candidate did not answer the question, and there was a breakdown in communication. Candidate's interpretation is that the interviewers didn't know what candidate was talking about. Interviewers could just as easily have been frustrated that candidate did not see the significance of the difference.

In my experience, interview technique requires establishing a common ground. As an interviewee, I find that the first step of answering a question is to first go through a number of possibilities starting with the most obvious first. They were most certainly looking for a linked list. This is a simple question with an obvious answer. It would have been good practice to start with the answer they obviously wanted, and then try to broaden their minds. Instead, candidate willfully derailed the interview.

To me this looks like a classic breakdown of communication with a root cause of hubris (quite possibly on both sides).

While both sides failed in this account, only one has the benefit of hindsight in writing about it here today. Time has offered an opportunity for reflection. What has candidate learned? "[look how clever I am] so joke's on them"


Typically interviewers are looking for the generally known, common way to do things. I heard people failed because they wrote code that used bitwise AND instead of MODULO to check if a positive integer is even. That's just the way it works, even if I don't like it.


I, too, lost a question on my Data Structures final exam by using amortized analysis.


I really enjoyed your Crafting Interpreters book! Jokes on them for missing out.


Game Programming Patterns is one of my favorite books btw


You wrote a book about std::vector?


am I the only one concerned that this would even be a point of contention? This seems too trivial for anyone doing hiring or being hired to be hung up on.


> am I the only one concerned that this would even be a point of contention? This seems too trivial for anyone doing hiring or being hired to be hung up on.

You'd be surprised! I've done probably 200+ interviews across a couple companies. Over time if anything my questions have gotten simpler.

I look for someone who understands the fundamentals of the stuff on their resume, asks good requirements questions, thinks a bit before leaping into the weeds, can explain their thought process, ideally does some of their own double-checking, and (if it comes up) can debug a problem I spot without my spelling out their error.

And last but not least: I don't want to cause a panic attack, which would be mean and tell me nothing. Sometimes my candidates have been too nervous for me to know if I should hire them, and it's not a good experience for anyone.

So I make the problem as simple as I can and make sure we get through the rest of that in as relaxed a fashion as I can manage. I'll ask a simple (often first-year CS) coding problem or a design problem, rarely both in the same interview slot. I never ask for tons of code on a whiteboard in a 45-minute slot.


I mean, I'm guessing this is the main reason that they passed on me, but I'll never know for sure. Maybe they didn't like my personality. Maybe it was some other question I thought I did well on but didn't even realize I got wrong. Maybe a combination of things.

For what it's worth, I didn't get a very good vibe from them. The impression I got was that their culture was more aggressive and competitive than I like. (Maybe this was to be expected for a company that made a competitive eSports game.) I'm not really what you'd call a brogrammer.

So it was probably for the best that they said no. I ended up at Google, which has been better for me in every possible way.


>I'm not really what you'd call a brogrammer.

I'd say you dodged a bullet, but it's more like you dodged a ball.

https://www.youtube.com/watch?v=W-XbDZUnUmw&ab_channel=Movie...


Can't really comment on whether that was the correct way to handle the situation, given that passing the interview was the goal. However, I would have a hard time giving an incorrect answer on purpose. I also consider an interview as an opportunity to learn about the company I'm going to work for, and especially my future colleagues.

I would probably try to give a full answer, such as "there's a widely-held concept X, but my understanding is that this is not completely correct, and in fact a better model is Y." A technical lead that can't work with an answer like that is one I probably don't want to work with.


The goal of the interview shouldn't always be to get the job. If you have any length of career, you are interviewing the company as much (or more) than they are interviewing you.

My approach would have been to go with the new idea and see what discussion can come out of it. If they don't have time to get into it, that's one thing but if they are simply not open-minded to new ideas, I don't want to work there.


In the real world, people have to apply to dozens if not hundreds of jobs just to get an offer. The company has all of the power, so you tell them what they want to hear or you starve.


The job application process varies immensely by position and profession. For a junior developer there are hundreds of new listings each day but hundreds of qualified people applying, so the company has all the power and your comment would be true in that context. For a very qualified senior dev, there are dozens of positions posted each day where the company is truly stuck until the role is filled, with only a handful of qualified people applying, so the senior dev has a lot of power.

In another profession like say a specialized electrical engineer, there may be only a few positions posted a month and a handful of qualified people so both the engineer and the company would be desperate that they could together be a fit.

This story was about an anecdotal experience where the power dynamic was fairly equal.


That's true of unskilled jobs where there are usually more applicants than positions. That's not what's being discussed here.


Well - not true for all industries. In some, there’s a lot of competition for candidates.


We don't even need to look at other industries. It's demonstrably not the case for professional software developers at the moment. The companies are simply doing whatever is in their best interest which is retaining expensive and hard to hire for talent, ergo they'll treat you nicer. The second it's no longer expensive or hard to hire, things will take a turn for the worse (for the candidates).


I really love this kind of thinking. This isn't the first time I have happened upon it, but I really like the idea that the work I do for a company is an exchange in which both the company and I get something out of it.

Certainly when I got my first job I felt like I was there to take a job, not seeing if the company was a good fit.


Yeah, and obviously that's an ideal, not necessarily reality -- fresh out of college, or just looking more urgently for a job, I think the stakes are a little different and you gotta be a little more willing to just "take the job."

But as someone with experience, looking for higher-level positions, ideally with some time on your hands / savings in the bank to take your time interviewing around? Yeah, it should totally be a two-way street.

"Experienced software developer" is a comically in-demand job title. It's good to know that fact, and to be proportionately selective.


I don't think I could make a decision about whether I'd want to work for a company on the basis of a single employee's attitude. That person might not even be part of my day to day work group.


And yet the company chose to let themselves be represented by this person. It might also go the other way around, you are interviewed by someone that's awesome and you end up with another team that truly sucks.

In that case bad luck but if they send a dick to interview you it is more likely that the rest of them are also gonna be dicks. I like companies where you actually interview with some of the people you will be working with at least as the last interview round.


I mean, this is all the surmise of the person writing the article. I'd like to draw your attention to the fact that the interviewer never actually did anything wrong in the actual event described.

In terms of your idea that a single interviewer should turn you off from working for a company, possibly. Companies can be pretty big. I am skeptical that there are many medium or large companies that can prevent all negative interviewing experiences. Especially if the negative experience is, "I saw my interviewer make a facial expression that I over-analyzed and made a huge narrative about."

That said, if you want to use that as a signal that's your prerogative. My guess is that you'd mostly get false negatives from this signal but if you have many options for employment that's hardly a major problem.


It's true that there will be some false negatives. I basically have to go with my gut: if I get a strong enough vibe, I have to interpret it some way. In one case, one of the top senior devs was interviewing me and didn't like the terms I was using or how I was pronouncing them, and seemed somewhat hung up on it. I considered that trivial and was trying to get to the meat of the discussion. I took that as a cue for the engineering culture; how would you interpret it?


Fair points though I was really only replying to your comment here, not the article any longer. I do agree that the article seems to "overanalyze" the situation, potentially there was literally nothing and he gave the 'wrong answer' for no good reason at all. Makes a nice article though I guess. FWIW I have experience with giving the "hard truth" in the "wrong situation" and I've come out unscathed, because I was very objectively correct and there was no way around it. Nothing pedantically debatable in my case, just "this is how the protocol works, not the way you say, no interpretation possible, look it up in the RFC". Not the actual example but like someone saying it's "SYN, ACK, ACK ACK" instead of "SYN, SYN ACK, ACK".

Companies can be large and in that case it's likely you will be encountering a fair number of people you won't like while there anyway. The question is whether you interview with "random interviewer of the day" (still bad if that's a bad apple but if the company is large you might take your chances while if the company is small, it's likely this is 'their best' or you will work closely with this person) or if you're actually interviewing with the team members you'll work with. If the company is large but you know you are interviewing with the team members you'll be working with and the feeling just isn't there or worse, then I think I'd take the possible false positive if I wasn't in a bind for a job.

At my current place for example I interviewed with a bunch of people in various rounds, from the typical HR stuff at the front, through various layers from CTO, my boss, our architect and then a sample of people from my immediate team. All of them very pleasant, had the best interview experience ever with the architect. We had a great discussion of pros and cons of various choices I made for the code I had to write, alternative approaches etc. Really awesome. Felt basically like a regular working session, no leetcode quizzing BS.

Previously I have interviewed with other companies where it just became clear after some time that the company and I were on very different terms with regards to how we think software development should work. A different interviewer might have been able to 'hide' some of it and I might have joined that company and been miserable.


The more time I've spent in tech, the more I realize that there are very few "correct answers" - there are things that work in a particular context. Software patterns in particular are just what you make of them.

I would have no problem saying "many people use MVC to mean 3 tier and other people use it to mean XYZ".

It is very difficult to be completely technically correct. It is also rare that one needs to be completely technically correct in conversation.


Also, each person has a unique model in their head. Even if you read the same material, you could interpret it differently, and people learn in different ways.


Do you want to have that discussion with your team-lead almost fully cold, in front of his superior? Because in the article, that is the key point. The applicant was totally willing to go into a fair discussion about the pattern.

However, doing that in front of the tech lead's boss is likely to be different from a fair discussion. Disagreement about technical architecture is fine; doing so in front of higher management is likely a different ball game.


Unless you absolutely need the job, I would consider this a great opportunity to figure out how his boss would react.

Getting into a lively technical discussion where you can see how well people control their egos to find the best course of action is very valuable. If I had a chance during an interview to find out how people behave in those circumstances I would definitely go for it. Who knows, maybe the "boss" would be present in future technical discussions as well, and I wouldn't want his presence to change the discourse negatively.

And as I've personally been in positions like that (being the tech lead with the boss on the call), I'm usually thrilled to learn new information and discuss technical things, raising my opinion of the interviewee greatly.

Maybe that's because I come from a very straightforward culture where political subterfuge is frowned upon, but it wouldn't actually look very good to me if a member of my team subtly altered a technical matter for the worse because of political pressure. Would I be able to trust that person? I would always have this doubt that they are "playing a deeper game", regardless of whether it was for my benefit or not.


The tech lead is not the one being interviewed here. Presumably, they are already respected in their workplace, and admitting that they learned something from the candidate's full answer should be seen as a positive by their manager. The way the candidate frames the answer is important here, so no-one seems stupid for holding a common misconception.

And if the tech lead can't admit in front of their manager to not knowing everything? That's a sign of a toxic workplace.

I'm not ashamed to admit that I have previously found myself in a situation similar to the tech lead: the interviewee gave a solution that I didn't know about - one that could be considered better - and my manager was in the room interviewing with me. I told the interviewee that this is a new direction I haven't thought about until now. It's important, otherwise we can't continue an honest discussion about the solution.


seems like having a discussion like that in front/with management would be a great way to suss out the way management will respond to future differences of opinion. A very valuable thing to know in deciding whether to accept a position


If you're already deciding you need to BS to keep the managers sweet in the interview, that seems like a pretty toxic work environment.


And giving a response like this is still a good indication of people skills -- Do you know how to present a controversial opinion in a way that encourages people to accept it, or do you ram it down people's throats?


Yeah I think this is way overblown- The way to approach this is "Well typically MVC is presented as such and people think its represented in typical architectures as blah blah blah, but from an actual theory perspective what actually happens is that blah blah blah..."

You cover both bases: give the blogspam answer that most view as the "right" one, but also show that you have a much deeper understanding.

My favorite question when interviewing node.js candidates was around this type of ambiguity: "Is node.js single threaded or multithreaded?" Most would immediately shout back "single threaded!" but the reality is a bit more complicated than that: it's a single-threaded event loop backed by a thread pool. I wouldn't ever hold a "single threaded!" answer against them, but would start asking leading questions about what happens when, say, 5 requests all come in at more or less the same time, to see if they really understood how things work. And the point is not to gotcha! them, it's just to start a discussion; though if they reply "well, it's complicated...", then I know they are probably already there.

In the context of MVC though, I feel this is really overly pedantic. MVC is an idea that is never actually implemented in purity, and understanding the basic idea is all that is really needed, outside of maybe very specific roles implementing web frameworks.


Or "MVC is often thought of as mapping to the three tiers although that's a bit of a simplification that doesn't fully capture the mapping."

And there are essentially always nuances. So that's not really a controversial statement. At which point, the tech lead can accept the answer as pretty much what they were looking for or can probe further if it's actually the nuances they're interested in.


> "there's a widely-held concept X, but my understanding is that this is not completely correct, and in fact a better model is Y."

Why not say "there are many similarities and some differences, they're similar in these ways..."


There's not a lot of context given about the behaviors exhibited in this interview, but as many others here have said, red flags abound in this anecdote.

> But the problem was, that this guys manager was sitting next to him. If he didn’t know, I would totally humiliate him in front of his boss. So either he would stick to his guns and refuse the correctness of my answer, to save face. Or he needed to agree that he was wrong, and lose face.

I would love to ask the author if their team lead ever confirmed these suspicions. I would doubt it, but happy to amend my position if wrong. Regardless, to me this sounds like the exact type of culture I wouldn't want a new hire bringing in. I don't need to be managed or protected, especially from my own manager. I need honest answers, not what I want to hear. I wasn't in the room, I can't read the vibes like the author did, but I'll consider the omissions here telling-- there's no description of an anxious or frazzled team lead, or an imperious manager peering down. It seems like the only thing the author saved the team lead from were problems in their own head.


So I worked there for a few months, and the team and team lead were awesome. But the manager and whole management team not so much. It was a very political environment where the team lead and anyone at that or higher level needed to cover their asses with emails etc.

So it was actually good that I didn't raise this issue.

One thing I remember at that place was that the team lead was on holiday, and the manager took over. But a release went wrong because of a mistake the manager made. He was able to spin the story so that the team lead had messed up right before his holiday, and he, the manager, had to step in to fix the situation. Oh boy.


> So I decided to start by giving the correct answer, and see how he responded. I explained “The model-view-controller is a software pattern, and so resides inside the written code. Since in most cases, this code only runs on the application tier, …”. But then I saw him frowning, and so knew this was not the answer he was expecting. So I continued:

This is a great cold reading technique that works in magic tricks too.

You have some trick where someone needs to pick from 10 cards. And your patter goes something like "Picture the card in your mind. Ace of hearts. Ace of hearts" If they give a big reaction then you've found their card and performed a miracle. If not then you just continue "Of course, that's just an example..." and continue the patter throwing out other hints. Of course, it's tough to make this your only trick but it can really elevate a good trick to amazing.

Great example at 1.50 here: https://www.youtube.com/watch?v=QI5-NDiY7IM


I used to do a very stupid magic trick where I would have someone pick a card and then shuffle it back into the deck. I would shuffle a few more times while asking them to focus on their card. Then I would ask them to name their card. They did. I would put down the deck and say "not only have I found your Ten of Clubs..." and then I would look at the top card and finish with "...but I have changed it into the Ace of Spades".

Every once in a while, purely by luck, the correct card would be at the top. In which case I would just change the second line to "...but I have brought it to the top of the deck for you". My sister begged me to tell her how I did it for years.


Dai Vernon had something akin to that. I -assume- it was the serendipitous case you mention, but it is, after all, anecdote, so it might have been something more. But, pulling from Wikipedia - 'He also had an encounter with another up-and-coming young magician from his town, Cliff Green, who asked Vernon, "What kind of magic do you do?" Vernon responded by asking the boy to name a card. Upon pulling a pack of cards from his pocket, Vernon turned over the top card of the deck to reveal the named card and replied to Green "That's the kind of magic I do. What kind of magic do you do?" '


Bah, that's an easy one: To begin with, make sure what you're wearing has a total of at least fifty-two pockets...


All great until you come across someone with a permanent poker face. During my last interview, the interviewer was frowning the whole time.


I once got two raises in an interview because I was so (pleasantly) surprised by the size of the first offer that I wasn't sure what to say. There was a bit more to this situation - I liked the job I was in and the employer really needed someone, but it was still a revelation.

Keeping your mouth shut, saying no, and not lying or BSing are three of the most useful skills you can develop. Many people feel pressure to respond quickly, especially in a performance situation like an interview, just as the writer of this article was highly tuned to what his interviewer wanted to hear. Controlling the tempo and direction of your own responses is a way to gain the strategic initiative without being disagreeable.


We are trained to answer quickly, people find silence uncomfortable (except with close friends).

Just pausing and not answering quickly when given an offer is a power move. I know it consciously but I still find it hard/impossible to do in real life.


As someone with a default poker face I am so sorry, I swear I smile, just on the inside mostly.


It's in the eyes. My resting bitch face is broken up with moments of smizing.


If there’s two sides to something present both and note positives and negatives of each.


Read about the concept of multiple outs. It has many practical use cases in everyday life.


Interviews aren't only just about skill. Sometimes you're just having a bad day, or the interviewer is. Either will prevent you from success.


If you had paid more attention to that frown you might have passed the interview.

(It's Monday. Smile a little.)


> "Picture the card in your mind. Ace of hearts. Ace of hearts" If they give a big reaction then you've found their card and performed a miracle. If not then you just continue "Of course, that's just an example..." and continue the patter throwing out other hints

Wouldn't you have to do this on average five times to get the right one? Wouldn't it be a bit suspicious giving five example cards before arriving at the right one?


The hard part is writing your patter so that it doesn't sound like you're listing cards. Without giving away too many magical secrets, I invite you to consider the fact that there are many ways to reference cards more subtly. Color, suit, number, high and low. There are also a few different possible reactions.
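Back-of-the-envelope, and assuming (idealistically) that each hint cleanly halves the remaining pool, attribute hints beat naming cards one by one:

```javascript
// Naming 10 cards one at a time needs (1 + 2 + ... + 10) / 10 = 5.5
// guesses on average. A hint about an attribute (red/black, suit,
// high/low) that splits the remaining pool in half narrows 10 cards
// to one in at most ceil(log2(10)) = 4 hints.
const cards = 10;
const linearAverage = [...Array(cards).keys()]
  .map(i => i + 1)
  .reduce((a, b) => a + b, 0) / cards; // expected guesses, linear
const halvingHints = Math.ceil(Math.log2(cards)); // worst case, halving
console.log(linearAverage, halvingHints); // 5.5 4
```

The real craft, as the comment says, is disguising those splits as patter rather than as a checklist.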


So basically running a Decision Tree algorithm.


That’s not how it works. Usually you try to force a card with a high success rate, but if that fails you fall back to another way of revealing the card.


Yeah, it's all about card control. I used to work with someone who could shuffle and unshuffle a deck, and place any card into the deck and pull it back out (after one or more shuffles.) Takes years of practice.


Heh. Happens all the time with tests and questionnaires, the choice is always frustrating:

* "Does the author really mean what they are asking? Are the mistakes in the phrasing or corner cases intentional, meant to catch me, test my deep knowledge?", or

* "Is the author just not very good with logic / not thinking this through?"

I go with the latter in "soft social" contexts. Never regretted it yet.

This saved my hide recently in June, when I had to undergo a mandatory psychological examination for my gun license permit. Serious official stuff, with my permit on the line… And the "serious" psy-test was exactly as "robust" you'd expect from the field of psychology.

I answered the way I figured the test author construed the questions (= what they likely meant to ask), not what they actually asked. Easy pass.


Same with some driving exam questions I have had. For example, one was worded something like this: "does driving faster in some sections of the journey affect your planned time of arrival?"

I don't remember if the wording was exactly like this, but it was something where, to me, the logically obvious answer is "yes", yet the correct answer, the one they expect, is "no", to show that you are a reasonable driver who won't speed unnecessarily.

And I think throughout the driving exam there were several questions like that. This type of thing really annoys me.

EDIT: I found the actual question (although I still had to translate it), and I think I was giving it the benefit of the doubt from memory with the term "planned" as I had written here. The actual way the question was set up was the following:

What affects the duration of a planned journey?

a) Length of the journey.

b) Maximum speed in a few isolated sections of the journey.

It uses checkboxes, so you can check both.

They expect you to check only a), but b) is in my view also a logically correct answer: if on any section, even one shorter than a metre, your maximum speed nears 0, the duration of your journey approaches infinity. But it is worded in such a way that most people will get the hint that b) should not be checked.
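The near-zero-speed argument is easy to make concrete: total journey time is just the sum of each segment's length divided by its speed, so one tiny slow segment can dominate. A small sketch with hypothetical numbers:

```javascript
// Total journey time is the sum over segments of length / speed,
// so a very short section crawled at near-zero speed dominates.
function journeyHours(segments) { // segments: [{ km, kmh }, ...]
  return segments.reduce((t, s) => t + s.km / s.kmh, 0);
}

const normal = journeyHours([{ km: 100, kmh: 100 }]); // 1.0 hours
const withCrawl = journeyHours([
  { km: 99.999, kmh: 100 },
  { km: 0.001, kmh: 0.01 }, // one metre at a near-standstill: 6 minutes
]);
console.log(normal.toFixed(2), withCrawl.toFixed(2)); // 1.00 1.10
```

So b) is logically defensible, even if the exam author clearly wasn't thinking about limits.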


If I remember correctly, that question likely refers to some argument in the driving manual about speeding.

Something along the lines of "speeding in most urban areas doesn't actually help get someone to their destination faster. There are always multiple traffic lights and other cars to contend with, so it's not helpful to go over the speed limit and it increases the risk and severity of an accident." It's perfectly solid advice: who hasn't been passed aggressively by an ah-ole with tinted windows only to catch up to them at every traffic light?

The question obviously wants to screen for that tidbit of knowledge, but it's not phrased rigorously enough for the HN-crowd, I guess.


Isn't there some confirmation bias (not sure if that's necessarily the correct term here, but you'll get the gist regardless) here in that you're more likely to remember the situations in which you catch up with the person who passed you (i.e. "look at this idiot getting nowhere"), and less likely to remember the situations in which that person makes a light that you don't and you never see them again? Massively depends on the specifics of the roads you're on too of course (and also how the lights are timed).


Years ago they did an experiment in Germany (I know, a one-time test with 2 cars/drivers is more or less anecdata).

The task was to drive from Duesseldorf to Munich. Two identical cars, two very experienced drivers. One was told to go as fast as possible without breaking speed limits on the way (we don't have a general speed limit on the German Autobahn), the other to drive at a relaxed 120 km/h where possible and also honor speed limits.

Both were equipped with EEG (heart rate and stuff).

The driving distance is slightly above 600km.

The first driver (as fast as possible) arrived first and waited for the relaxed driver to come in second. That happened 20 minutes later. So on a trip of > 5 hours the gain was 20 minutes.

But at what cost? The EEG told a story of pure stress, with massive heart-rate spikes even for an experienced driver like the one behind the wheel, while the other one came in not only at a relaxed speed but with a far more relaxed body and mind.

Medical doctors concluded that the EEG of number two was way more healthy.

Btw: One reason the first car was only 20 minutes quicker was that the driver had to stop to refuel, which cost him minutes, while the second driver arrived with gas to spare. So even economically it made sense to drive in a relaxed style, not to speak of the ecological aspect.

So to wrap up: the fast driver often comes in first, but not by as much as they feel they do, and at a high price.
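Sanity-checking the arithmetic (assuming the relaxed driver really averaged 120 km/h over the whole 600 km, which the anecdote only loosely supports): the "as fast as possible" driver's average speed over the full trip works out to barely 9 km/h more.

```javascript
// 600 km at a relaxed 120 km/h is 5 hours; arriving 20 minutes
// earlier implies the fast driver averaged only ~129 km/h overall.
const km = 600;
const relaxedHours = km / 120;            // 5 hours
const fastHours = relaxedHours - 20 / 60; // 20 minutes saved
const fastAverage = km / fastHours;       // ≈ 128.6 km/h
console.log(relaxedHours, fastAverage.toFixed(1)); // 5 128.6
```

Which matches the punchline: the fast driver's felt speed far exceeds the effective average.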


Counter anecdata. I monitor my heartrate (along with other relevant statistics like speed, distance, elevation, and barometric pressure) while engaged in activities like mountaineering. Even when I'm not physically straining myself but just carefully traversing an exposed face, I don't consider a raised heartrate there 'less healthy' in and of itself because it's a side-effect of the excitement I'm feeling, and that feeling (sometimes not necessarily in the moment, but always afterwards) gives me an overall sense of improved wellbeing. Do I get stressed out sometimes? Sure, it's a dangerous activity. But overcoming that and accomplishing my goal rewards my mental health in a different way. Only half tongue-in-cheek: Maybe the faster driver was simply having more fun?

In this case you're probably right that the faster driver was just more stressed for no real benefit, but an EEG is not always a good proxy for how "healthy" something is (even ignoring obvious cases like physical exertion).

If you have a link to the study I'd love to read more.

EDIT:

One other thing I missed on the first read of your comment was the fact that the driver was instructed to "drive as fast as possible" and then given access to roads with no speed limits. I feel like that would have the potential to exacerbate the 'negative' side of things and that a more reasonable middle-ground could be found both in terms of driver stress and also fuel economy.


To clarify:

The spikes and reactions were stress reactions measured by medical doctors from the monitoring of multiple signals (heart rate being one).

So they came to the experimental conclusion that (at least in this experiment) driving at the limit of what the car could do and traffic would allow was a stress factor for the driver.

They also quantified the added amount of fuel necessary to drive the distance.

But they did not say one was better, one was worse. They just let the viewer decide on which variant they preferred.

And as I said: If the externalities were priced into taxes and cost of fuel - why not let people and the market decide if it was worth to them to arrive 20 minutes quicker on this distance.


Along those lines there's a notable pop-sci book by Robert Sapolsky, "Why Zebras Don't Get Ulcers" (https://www.amazon.com/Why-Zebras-Dont-Ulcers-Third/dp/08050...).

Basically this guy made his scientific career by doing epic experiments where he observed communities of baboons during various social interactions, blow-gunned individual baboons with tranquilizers, then very quickly, took samples of their blood to analyze glucocorticoids (these are stress-response hormones and have a half-life measured in minutes).

Anyway, crudely stated, the major finding is that animals have intense episodic stress throughout their lives but never suffer health consequences from that stress because it's occasional. Humans, on the other hand, can get the same levels of "fight-or-flight" stress but at long-lived, daily intervals. Excessive glucocorticoids, over a long term, can interfere with the normal functioning of the body and precipitate a wide variety of health problems including heart-disease (and ulcers, as the title suggests).

In the case of driving, an aggressive lane-changing drive in a fast car might be exhilarating under certain conditions, but it's a different story for a daily commute. It's no accident that the advice given to people that experience aggressive drivers on the road is often along the lines of "Let him go, don't become a part of his bad day." Aggressive driving is a self-reinforcing bad habit that becomes part of people's identity in many cases. Personally, I don't care about the health of aggressive drivers, but I do care about their propensity to cause accidents and hurt innocent people.


> but I do care about their propensity to cause accidents and hurt innocent people

Absolutely agree. I live in a 30km/h zone (regular speed in a city is 50km/h in Germany). And we have speeders every day. While kids are playing and elderly people trying to cross the streets.

What I wanted to say:

I care - and we petitioned the city to at least have marked parking spaces implemented with big flower pots on every side to at least reduce the speed people can go here.


There's that, but there's also the possibility that it's conflating two things: If you're speeding, you'll also have to go around cars that aren't, which reminds me of this one about weaving between lanes vs sticking to one lane that the Mythbusters tested: https://www.youtube.com/watch?v=ZefgUVg3qx0 (4 minute long cut of segments from the episode).

They found that weaving between lanes is faster, but not by very much depending on which lane you're comparing it to. The final result (shown at 3:54 in the link) was:

* Weaving between lanes: 1h 16m

* Lane 1: 1h 19m

* Lane 2: 1h 20m

* Lane 3: 1h 29m

* Lane 4: 1h 33m

Lanes 1 and 2 are close enough I can imagine it being chance and more tests resulting no noticeable difference. And for an over-an-hour drive, it's already no reasonable difference anyway.
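Expressed as relative savings (using the times from the comment above), weaving beats the best single lane by only a few percent:

```javascript
// Mythbusters lane-weaving times, in minutes, as relative savings.
const minutes = { weaving: 76, lane1: 79, lane2: 80, lane3: 89, lane4: 93 };
const savingVs = lane =>
  ((minutes[lane] - minutes.weaving) / minutes[lane] * 100).toFixed(1);
// Weaving vs the best lane is marginal; vs the worst lane it's sizable.
console.log(savingVs('lane1'), savingVs('lane4')); // 3.8 18.3
```

So the interesting result isn't really "weaving wins" but "lane choice matters far more than weaving does".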


This basically just says you have to know which lane will usually be faster and why.

If you know the area and traffic patterns you can do that and basically stick to your lane for long periods and just switch a few times. I have something like that on my pre-covid commute for example, where you have to be in the rightmost lane until between a certain off ramp and the corresponding on ramp. There's always lots of traffic and you go between 10 and maaaybe 40 km/h. There you switch to the leftmost lane so that you don't get bogged down by the regular mergers from the on ramp in the right lane and the 'middle lane drivers' who merge into the middle lane right about there too. Then until the next on ramp you stay in the left lane but have to switch over to the right lane again, because no trucks are allowed on it and most people try to pass on the left lane while everyone else is stuck with middle lane drivers and trucks.

If you know the area in the above example, just choose lane 1 for sanity. If you don't know the area and happen to be stuck in lane 4, I'd rather try weaving, to be honest.


On the other hand, if you are driving on the interstate highways across the USA, driving 85 mph instead of the 65 mph limit will save you roughly 3 hours from an 800-mile journey.
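Checking that claim with the stated numbers (and ignoring stops, traffic, and the fact that few people hold 85 mph for 800 miles):

```javascript
// 800 miles at 65 mph vs 85 mph, steady-state.
const miles = 800;
const hoursAt65 = miles / 65;        // ≈ 12.3 hours
const hoursAt85 = miles / 85;        // ≈ 9.4 hours
const saved = hoursAt65 - hoursAt85; // ≈ 2.9 hours, i.e. "roughly 3"
console.log(saved.toFixed(1)); // 2.9
```

The arithmetic holds; long uninterrupted rural interstates are the one regime where speeding's payoff isn't marginal.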


It isn't true though. Suppose there are n lanes and more than n cars, which is almost always the case in the city. If you pass someone, even just to end up stopped at the same light, you are still ahead of them in line. Moving ahead by one slot is a very marginal gain, but it does add up if you pass many cars on your journey. In addition to being ahead by however many slots, eventually you will encounter step-wise jumps in travel time where you get through a yellow but the person right behind you has to stop.

Over enough journeys, driving faster will always decrease your average travel time (unless you crash or get pulled over, I guess). The way to discourage people from speeding is not to tell them something obviously wrong, but to discuss the legal/safety risks and the increased gas and brake-pad consumption. I understand that I can get places faster by speeding in the city, but I don't do it because, to me at least, the gains are marginal compared to the risk, cost, and stress.


> [...] eventually you will encounter step-wise jumps in travel time where you get through a yellow but the person right behind you has to stop.

Keep in mind that driving manuals are deliberately written at a 6th to 9th grade reading level depending on the state.

Yes, you're right that the gains from speeding are marginal compared to the risk, cost, and stress. But it's hard to communicate that kind of nuance. If the manual just says lead-foots aren't going to get somewhere faster by speeding to each intersection that's "good enough" and very close to reality for the purpose of the manual. National merit semifinalists can get the queue-theory version from their high-school math/driving instructor during driver's ed.


>the way to discourage people from speeding is not to tell them something obviously wrong, but to discuss the legal/safety risks and the increased gas and brake pad consumption.

Is it though? Almost all the knowledge we have is taught using flawed and simplified models that we can nitpick and find "obviously wrong" when deeply examined. https://en.wikipedia.org/wiki/Wittgenstein%27s_ladder

And IME, when I'm explaining something, pedantry and specifics almost always get my audience confused more than help to get the point across.


I was taking a firearms safety course and there was a true/false section on a test. I can't recall what the question was specifically, but it was about which laws something fell under, and the main issue was the word 'or': when used as a logical OR the answer was true; when used in the everyday sense of 'or' the answer was probably false.

I put false without thinking. I had everything else on the test right and part of the course was the instructor reviewing your test and going over your incorrect answers. He wasn’t a programmer (I think former military tbh) and didn’t really understand why I got it wrong even after I explained it…


What other meaning of the word "or" is there?


In common usage, "or" is usually taken to mean exclusive or (XOR). Logical OR is non-exclusive. In most cases you can infer which is meant from context or it doesn't actually matter (much).

On a test for a gun license, it could get a little squishy as there are likely questions about the law (which only sometimes aligns with common sense) translated into lay English (which lacks the degree of precision you would likely find in the actual legal phrasing).

From Reddit ELI5:

"Cream or Sugar, in coffee is OR it means one the other or both

Fish or Chicken, for dinner is XOR it means one or the other not both."

https://www.reddit.com/r/explainlikeimfive/comments/4v6dcd/c...
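In boolean terms, the two readings only disagree in one case: when both operands are true. A minimal sketch (the cream/sugar and fish/chicken names just echo the examples above):

```python
# Inclusive or (logical OR) vs exclusive or (XOR):
# they differ only when both inputs are true.
for cream, sugar in [(False, False), (False, True), (True, False), (True, True)]:
    inclusive = cream or sugar   # "cream or sugar": one, the other, or both
    exclusive = cream != sugar   # "fish or chicken": one or the other, not both
    print(cream, sugar, "->", inclusive, exclusive)
```

For all input pairs except (True, True) the two columns agree, which is why context usually resolves the ambiguity.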


Let's just imagine (for the sake of an argument) that you're just about to be given a roadside alcohol breath test by a police officer, but in this universe having just eaten fish or chicken can cause a false positive. The officer asks you if you had eaten fish or chicken recently. In fact, you had both fish and chicken in your last meal an hour ago (don't ask). You should answer "yes".

Most cases where "or" is taken to mean xor are actually asking for a choice, expecting one of the options as an answer. If the "or" is part of a question expecting a yes/no answer, then it is much harder to justify it being an xor.


I imagine they could ask questions like, "Should you point the barrel at the ground or check the chamber for a bullet?" and you might say "or?" No, stop, you need to do both; while the expected answer is "yes, you idiot, you need to do both". Sometimes you just can't win.


distinction between inclusive and exclusive or


XOR.


Actually, as worded, I think "no" is technically the correct answer to that. The current projected/expected arrival time would be altered at the time that you begin driving more quickly, but the planned arrival time is ossified before you start the car.


Yeah, I couldn't find the specific wording right now and whether there was some sort of plausible play with the word "planned". This question is also translated.


Ah okay, that makes sense.


Actually I managed to find out the exact wording of the question, it was fairly different from what I remembered:

What affects the duration of a planned journey?

a) Length of the journey.

b) Maximum speed in few lone sections of the journey.

It uses checkboxes, so you can check both.

They expect you to check only a), but b) is in my view also a logically correct answer: if at any section, even one shorter than 1 metre, your max speed nears 0, the duration of your journey approaches infinity. But it is worded in such a way that most people will get the hint that b) should not be checked.


While we are looking at it pedantically (keeping also in mind that you said it's translated), I think their expectation of the answer is fair, and they give more than a hint on it as well. Like you'd be stupid not to know what answer they want you to check.

"few" and "lone sections". As in statistically irrelevant. The assumption is that people 'get' for example (and its being taught over and over in driving school too) that if you speed to get through one traffic light in the city you'll not make it through the next one anyway and thus not really be faster. On the highway if you only speed once or twice for a short lone section you won't really notice in the end.

You don't even have to come at it with a nerd mathematician's mind and play tricks like max speed near zero (not a realistic scenario, so not something a driving exam would ask, but definitely a fun scenario in a math lecture at university maybe). I can think of multiple sections in my city where speeding through the right traffic light when it's "cherry green" will net you a significant advantage because of the awful street and intersection layout and traffic light phases in that area. If you do the same just one traffic light later though, it won't help you at all, because the next light will catch you unless you really want to cause an accident.

And if you are driving a 500km+ route and you can "speed" through it at 160+ km/h for a significant portion of it vs. going 120, that will save you a noticeable amount of time on the journey. This assumes something like Germany, with your route being in parts with few speed limits and not too much traffic, so that you can comfortably cruise at 160 to 180 instead of arriving as a nervous wreck close to a heart attack because of all the near crashes from slowing down for the grandma going 100 who switches to the left lane a few car lengths in front of you.


Ah okay, yeah, clearly would need to be checked then if they wanted a literal and honest answer.


> I don't remember if the wording was specifically like this [..]


You’ve just reminded me of a question I had on a driver’s test many years ago.

The free-response question was something like: “What is the single most important thing you can do to improve your safety as a driver?”

I answered: “Always drive sober.”

The “correct” answer was: “Fasten your seatbelt.”

I can only imagine the grader’s face / thoughts when they had to mark my answer as incorrect.


Certainly over longer distances and highway driving, driving faster will get you somewhere faster. In city driving though, where most people drive, it is correct that speeding up for certain portions of the journey will not get you there faster since the increased speed over short spurts can't make up for congestion, stop lights, etc.


Pretty much. Every EVIP (emergency vehicle incident prevention) course preaches this. The advantage of lights and sirens and opticom (the traffic light changing technology) is minimizing stop and go, and optimizing the flow of traffic in your direction. They demonstrate that an extra 20mph on your average response, which is likely less than 3 miles (if not quite a bit less), and not much more than 5 until you start to get more rural, is likely only to save you seconds; and there are very, very few things where an ambulance arriving literally seconds faster makes a difference in patient outcomes.

But it does slow your reaction time, and it does massively increase your accident risk. And that's couched in, "And now it's going to take many many more minutes for another unit to get to your patient than the few seconds you'd have saved".


Wow those sound like such badly designed questions. What country is this?


I don't know about that person, but a friend of mine from India was telling me about a driving test question there - [translated] "How far should you stay behind another car - A) 2 meters B) 2 seconds". Apparently the correct answer is 2 seconds. In my opinion, the question is weirdly ambiguous.


What's ambiguous about it? Two meters behind a car at highway speeds can get you killed.

If it were a question only about cars at rest it would be phrased something like "how far behind a car should you stop when queued" or something of that nature.


There are few places in India where you can drive at "highway speeds".


"Two seconds" is what I learned too. The book here says that you should be at the spot where the previous car was two seconds ago, which is good advice, as it's independent of speed.

You can think of it as "it'll take you one second to react and one to brake to a stop, so you should be two seconds away".
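A rough sketch of why the time-based rule works at any speed while a fixed 2-metre gap does not (the speeds are illustrative, not from any driving manual):

```python
# Following distance implied by the "two seconds" rule at various speeds.
# The time-based rule scales with speed; a fixed 2 m gap does not.
for speed_kmh in (5, 50, 100):
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    gap = 2 * speed_ms                  # distance covered in two seconds
    print(f"{speed_kmh:>3} km/h -> two-second gap = {gap:.1f} m (fixed rule: 2 m)")
```

At 100 km/h the two-second gap is about 55.6 m; a 2 m gap is only comparable at walking pace.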


I guess two meters is sufficient when you drive at walking pace, but any faster than that and two meters will be too little. You say that this was in India, and I'm guessing two meters is way too much in a traffic jam :-)


This makes sense to me. What matters is the reaction time and braking distance.


I’m curious, why do you think that’s ambiguous?


I recall taking "ethics training" at a former job where a typical quiz question was something like this:

A vendor offers you season tickets to the Cubs games. Are you allowed to accept?

1) Yes, because neither of you are in the baseball business.

2) No, not under any circumstances.

3) Yes, if you give the tickets to someone else, such as your brother-in-law.

The "right answer" was quite obvious, making it more of an intelligence test than an ethics test.


The tax forms in my country are filled with this second-guessing. For example if it asks how much income you earned from interest, you're supposed to exclude income that you've already paid tax on and exclude some kinds of investment that are asked about separately in another question but they don't explain this so you have to keep tweaking the numbers until the automatic calculation looks right. There's also a question about having "returned to the country" which really confused me when I was travelling back and forth. I had to talk to the helpdesk to discover that "returned" isn't the same as "arrived after having previously left". I might have got some of the wording wrong but that's the gist.


I don't know about gun-permit tests, but people DO fail these kinds of screens. To a large extent failing a test like that would demonstrate how far out-of-touch the test-taker is.

I suppose in this case it might instead just be a formality? And no one has actually ever failed that test?


Tests are actually one of the sturdier aspects of Psychology, epistemically speaking. It's one of the areas where reliability is actually testable.

That said improvised psychology is as bad as anything improvised, only people are generally more comfortable improvising it (and indeed commenting on it) without knowledge of their lack of knowledge, as opposed to fields that look more like science such as software engineering.


This isn't a great example to me. I doubt the interviewers would disagree that the actual code in MVC runs at the "application tier".

I think they were just trying to elicit the idea that the model defines interaction with the database and that the view defines interaction with the browser client. That there is some relation there between MVC and 3-tier architecture. The Wikipedia snippet that disputes any relationship seems overly pedantic to me.

Though all moot to me since there's very little real world pure MVC or pure "3-tier" anyway. For good reason.


The article really rubbed me the wrong way. It feels like the writer was insanely judgmental of the interviewer, immediately thinking that he was so much smarter than the guy that he was afraid to embarrass his future boss. All this confirmed by a brief moment of body language? Give me a break.


I do not think I am smarter than anyone else. It's just that some things you know better and other things you don't. I worked with the team lead for several months, and really enjoyed it. He was smart and a very good team lead.

Sorry if my article gave you a wrong vibe.


I think you're fine. The GP is just threatened by your everyday social intelligence. It's not that you're superior to him, it's that he feels inferior or lacking compared to you.


Agreed. I think the better and less risky way to handle this situation is something like:

"There are two views on this. Some people say x, but I would argue that the better view is y."

Giving the wrong answer intentionally is both dishonest - never a quality I would want to have, especially in an interview - and liable to backfire.


As far as I can remember now, I was also not 100% sure about my answer, because I did need to look it up later to confirm that I was right. So maybe I just took the easy road and answered what I knew they wanted to hear.


Exactly. The author could have gone into the differences between the two, how they are the same but not really, by choosing his words wisely. He could have made it a non-attack by interjecting "from my experience" or "from what I know" or "the way I usually refer to this" etc. There are no correct answers, since these patterns are never applied in practice in a pure form, pretty much just like the classic design patterns from the GoF. This is a weak example from OP.


MVC is older than 3 tier applications and saying "model=database" makes me cringe and squirm, but at least I know they do a lot of CRUDs :)


Seconded. What they're calling the right answer is the pedantic answer.


Well, no… the correct answer is that MVC and 3-tier are actually orthogonal concepts. And probably the better answer even in his given interview context.


Disagree. The difference is fundamental and if you don't know it you are going to be stuck with a false understanding of the framework you're working with - and you will likely end up trying to shove bad abstractions into places where they don't belong, resulting in an unmaintainable mess of code. I have seen this happen in a lot of codebases where people fundamentally misunderstand the abstractions they're building on top of.


> “How does this architecture relate to the model-view-controller pattern?”

It's an ambiguous question.

One answer is that all the mvc code runs on the application layer. This is correct in a nearly tautological way (the code is where the code is).

The other answer is how does it relate. The model relates to the database in their three tier example.

Indeed, I think this question definitely implies something other than "where does the mvc code exist?"

I really don't think his answer is even a pedantic one. It's bending over backwards to read a question in a certain way that his preferred clever answer could be considered correct.


I disagree with this. If you believe that one thing "3 tier" means is that the client doesn't interact directly with a database, as the old fat-client -> database model did, then "MVC" was one pattern to accomplish that. So, to me, there is some relationship. Enough of one to have an interview discussion about it.

I do agree that they aren't the same thing, but they are pretty clearly related to me.

The interview question was “How does this architecture relate to the model-view-controller pattern?”


The model's state is not the same thing as the model and even though it's 99% the case, the model in MVC should not be an anemic model.


The data layer in a 3 tier architecture isn't necessarily anemic either.


I find it to be pedantic as well. Sure, all of the code exists in the application layer. But that doesn't really tell you anything about its relationship to the system as a whole, does it?

How about we look at vectors of change? Or dependency relationships? You see, in many systems changes[0] to the UI affect changes to the View and changes[1] to the database affect changes to the Model.

[0][1] I don't mean to say the UI changes and then the View is updated to match. I mean that stakeholders will have decided that the UI needs to display different information, so the View is updated to accommodate such a change (which can then be passed to the UI). The same story above for the Model/Database relationship.


MVC always seemed totally arbitrary and impractical to me.


To be honest, one of the best predictors I have for job performance is learning something new in a job interview. Sometimes I get the feeling there are far too few people like this in the industry, but if I want to hire great colleagues, that just works the best.

Then again, I'd never ask a vanilla architecture question like the company in OP's post, as I've found that the best predictors for job performance are also the very questions that are the hardest to grade. Deep, open questions on a vast field — where there isn't anything close to a correct answer but rather thousands.

I still often go with the good old "what happens when you enter xyz.com in your browser and hit enter?", just because I can take it anywhere I want.

One day, I hit my bonus question "so what happens after hitting enter and before the first request leaving your computer?"

And the guy was like "well, the microswitch pulls the line of the keyboard matrix from high to low, generating a scancode that's then turned into a keycode in the controller..."

5 Minutes later we were all over his self-built keyboard and he had a job offer in hand 30 minutes later. I don't even wait for HR for the good ones.

Unfortunately he got an offer in hardware, and actually I was glad for him.


That's my favourite answer to my favourite question.


I was once asked to write a function that, given the time, draw an analog clock. Given the nature of the position I was applying for, this wasn't an unreasonable question.

I wrote something on a whiteboard. What followed was the most surreal discussion I've had in an interview. My function took into account the seconds, minutes and hour for the hour hand, and so on. Just like a normal clock would. The interviewer insisted this was wrong. I tried to tease out of him if we were talking about what to do if the specs made no sense, or if he wanted me to draw an unusual clock face. Nope. He just insisted I had no idea how clocks work. I spent the interview trying to understand what he was really asking of me as politely as I could while he spent the interview insisting I didn't know how something as basic as a clock works.

To this day I don't know what he really expected out of me. I wasn't surprised when I learned they didn't want to hire me. I doubt I would have wanted to work there after that interview.


Ugh. This reminds me of when I interviewed with Google some years ago. It's a somewhat long and boring story, but basically, it was a distributed map-reduce problem. I clearly stated my assumptions and asked for clarification/confirmation so we could disambiguate. The interviewer and I were very clear what he was asking and what I would be answering.

So I proceed to answer the question, describing what I was doing and writing code, and at one point he stops me to say, "That won't work because X may not be the case." But X was one of the points of clarification up-front, so I called him out on it politely but firmly. I wasn't sure if it was [a] a language thing (he's a non-native English speaker with a fairly heavy accent), [b] an issue with experience (he was being reverse-shadowed in the interview), [c] him trying to test soft skills in some really bizarre way, or [d] some bullshit that won't fly.

I have no real problem with a, b, or c, though the more experienced engineer doing a reverse shadow should step in when things go amiss, so I fault that engineer for not jumping in. But regardless, I was thoroughly nonplussed by the way Google runs interviews, and I've essentially sworn them off. It wasn't until Facebook that I found a company that I want to work for even less.


But everyone loves FAANG

Where would you want to work?


Haha, amazingly there's a chance I interviewed you, or you interviewed with my company when we were first rolling this out, or another company was also using the same question. I remember the idea was to make the analog clock like one of those that don't move continuously but tick at every sixty-second interval. So it never needed to be in between minute ticks.

Lots of us didn’t fully understand the question when we were giving it initially and it caused a lot of problems. I’m guessing you had the minute hand drawn as if it moved continuously since you mention seconds.

Let me apologize for either myself or my company :)


For the record, most analog clocks have smooth motion for the minute and hour hands (to the precision of the gearing). It's the second hand that ticks.


I don't know much about clocks, but the Swiss railway clocks are by some considered a canonical example of an analog clock, and they behave the way GP describes. But all other analog clocks I've ever seen behave the way you describe.

https://en.wikipedia.org/wiki/Swiss_railway_clock


And conversely there are clocks with smooth movement of the second hand, something I prefer because then there's no annoying ticking noise.


Interesting, the one I have moves the minute hand about 5 times per minute or so, rather than being continuous.


I once interviewed with Boeing research and gave them the spiel about my PhD work on network protocols. They started asking me about Petri nets. I know nothing about Petri nets other than that they're another approach to distributed systems.

I'm fairly sure I left them convinced I didn't understand my own research.


"Damn, that guy knows nothing! He hadn't the faintest idea of how fishing nets work!"


A job interview is also you judging the company and people you’re applying to join. If you feel the team lead can’t handle you answering questions honestly because it might reveal one of his or her blind spots, that’s not a great sign.


At a certain point in your career, you'll start finding that the current TL you'd be joining often doesn't have the same context as you do. It can be threatening for junior leadership to onboard someone who has extra capabilities beyond what they do, and a big part of your job in these situations is to rock the boat...slowly.

Bear in mind, many TLs are in their current role by virtue of having a good idea, supportive management, trust of their team and good/lucky execution. This combination of activities can easily mean that you have a great up and coming TL with ~3 years of experience. Management may be bringing in more senior talent as the upscaling product requires it, but that doesn't mean you start with the same trust, management support, ideas, or in-house knowledge as the TL.

In such a situation, setting out to prove that you can outsmart the junior TL as your first contribution seems suspect...

Bear in mind that successful companies often have many inexperienced leaders due to growth.


Ironic, as I almost always interview for the opposite.

I don't really care what you currently know because software development paradigms can be learned with relative ease for a motivated and intelligent individual who already knows something.

So instead I:

1) Poke around their CV/Resume. Just ensuring that they have the stuff they say to a basic level (some people just spam keywords)

2) Get them to a point where they admit they don't know something. Usually this means a deep dive on something common to the role I'm hiring for and that they have experience in.

If you're unwilling to say you don't know something I usually reject the candidate. I know interviews are stressful and you want to impress your interviewer, but if you're willing to try to bullshit me at an interview then you're willing to bullshit me in a post-mortem, and I can't have that.

Honesty - Intelligence - Experience

in that order, always.


Exactly this. When I interview people, I also consider it a HUGE red flag if I can't get the interviewee to say "I don't know" at some point.


This may be a little naive, but what if your expertise runs out before theirs? It’s not unthinkable that a candidate has more depth in a field than you do.

I remember asking a friend once if he could answer any Tolkien question, and he answered that he couldn’t answer everything, but the stuff he couldn’t answer, I didn’t know the right questions for.


> what if your expertise runs out before theirs?

I'm always hopeful that the person I'm interviewing is more capable than I, and they very often are. But there's an interesting thing: the more expert a person is, the more willing they tend to be to say "I don't know" (often rapidly followed by informed speculation and a comment about how they would find out). Combining that with the fact that true experts usually prefer to operate at the limits of their knowledge means it's not hard to get such a candidate to say "I don't know".

It's certainly not necessary to operate at the same level as a person to get them to operate at the limits of their knowledge and ability. In fact, relatively inexperienced people have a knack for asking those "naive" questions that lead into very deep pools.

Edit: I should also mention that my interview style is not in the form of a quiz. It's a conversation. I think that makes a difference as well.


This has happened. Then you ask them to explain it more.

If you don’t understand then they likely don’t understand it enough to explain it.


I was also thinking something along the line of whether you'd even want to work somewhere like that. But I guess not everyone is in a position to be so picky about their employment or maybe it would be a gateway to other opportunities.


One thing this reminds me of is the use of "Compute the nth Fibonacci number" as an interview question. The interviewer may expect you to show off your knowledge of dynamic programming by memoizing the function. But you take a risk if you implement the matrix exponentiation algorithm [1], which is actually optimal in that it only uses O(log(n)) arithmetic operations. I had an interviewer once who seemed a bit skeptical when I mentioned there was a sublinear algorithm for Fibonacci.

[1] : https://math.stackexchange.com/a/867404/165144
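For reference, a minimal sketch of that approach: the nth power of the matrix [[1,1],[1,0]] contains fib(n), and repeated squaring computes it in O(log n) matrix multiplications (this is my own illustration, not the interviewer's expected solution).

```python
def fib(n: int) -> int:
    """nth Fibonacci number via fast matrix exponentiation.

    Uses the identity [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]],
    computed with O(log n) 2x2 matrix multiplications.
    """
    def mat_mul(a, b):
        # Multiply two 2x2 matrices represented as nested tuples.
        return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
                (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

    result = ((1, 0), (0, 1))   # identity matrix
    base = ((1, 1), (1, 0))
    while n:
        if n & 1:               # include this power of the base if the bit is set
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]         # F(n)

print(fib(10))  # 55
```

(As the replies below note, "O(log n)" here counts arithmetic operations; for huge n the numbers themselves grow, so each multiplication is no longer constant time.)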


Is it really sublinear though? The data that the algorithm accepts is a number n, the size of the data is thus m=log(n), so the algorithm runs in O(m) time -> linear in terms of data size. I vaguely recall there was some nuance related to this (pseudopolynomial algorithms ring a bell), but I haven't interviewed in a while so I may be misremembering it.


If you're computing the time complexity this way, then the simple linear dynamic programming is actually quadratic, and the matrix exponentiation algorithm is still faster.

Generally I would assume that multiplication and addition are constant time for big-O analysis, until given a reason otherwise. That might be less appropriate for Fibonacci than most other problems, though.


> Generally I would assume that multiplication and addition are constant time for big-O analysis, until given a reason otherwise.

I think this assumption can still be taken for the exponentiation version - I'm more nitpicking about the fact that there is a simple way to rephrase the problem (see below) to arrive at a different meaning of "linear" vs "sublinear" and it's better to be very explicit in cases where things may be misunderstood.

Rephrasing: fib(x) takes an x which is an array of bits of length m, which are the binary representation of number n. Return nth Fibonacci number.


It is sublinear: here n is the index of the desired Fibonacci number, i.e. the length of the series up to the desired value. The standard (non-naive) algorithm is linear with respect to n. The matrix version makes use of fast exponentiation, which is O(log(n)), with the same n as before.


log(n) is ok. Sublinear with respect to n is also ok. "Sublinear" on its own is not, at least if my understanding is correct (sublinear relative to what?), although I can see that it can be a common shortcut to make.


All the necessary information to answer your question is in the original comment. They're talking about computing the nth Fibonacci number and say that the matrix exponentiation version is O(log(n)). Unless you think they're using n to represent two (potentially) different things, there is little room for confusion. "sublinear" refers back to that O(log(n)) algorithm.


They are representing n to mean two different things. In this case, n is meant to represent a value passed in to the algorithm. The problem is that complexity is usually expressed relative to the size of the input, which in the case of this algorithm is log(n) = m bits. This makes the exponentiation version actually linear in terms of the bits needed to represent the input.

It's like saying that an algorithm that accepts n x n matrix and takes O(n) steps to compute something is "linear" - it's not linear, because the size of the data is m = n^2, which makes it O(sqrt(m)).


What two things do they use it to mean? There's literally only one thing it refers to in the original comment: The nth Fibonacci number. Show me the second.

All additional uses of n in that comment are references to that same thing, the nth Fibonacci number.


In our case n = the value of the input, and log(n) = the size of the input. Complexity is expressed relative to the size of the input. The size of the input is also usually expressed as n, which is shadowed by "value of the input" in the problem statement, so "sublinear with respect to n" has a different meaning than "sublinear with respect to the size of the input". Saying "sublinear" when talking about complexity implicitly translates into "sublinear with respect to the size of the input", which is incorrect without any additional statements: the exponentiation algorithm is "linear with respect to the size of the input".


Asymptotic analysis is about finding some quantifiable property (or properties) of an algorithm (in this case it can be seen as the index into the sequence of Fibonacci numbers) and determining how fast the algorithm "grows" (in this case it's about time, not space, though can be used for space as well) with respect to that quantifiable property.

The original commenter uses n to indicate which value in the sequence is being computed. They then say that there is a O(log(n)) algorithm (that is, it grows with the logarithm of the index) that can find the nth Fibonacci number. The n in O(log(n)) is still referring to that same index in the sequence, it has not changed its meaning. I do not know how else to explain this to you. At this point I can only presume that you are confused about the fundamentals of algorithm analysis or you're a troll.


I really cannot make it clearer that I'm nitpicking on the statement that it is "sublinear". I'm not disagreeing that it's O(log(n)). I'm also not disagreeing that it's "sublinear with respect to n". I'm disagreeing with it being "sublinear", because there are at least two meanings that come to mind:

1. "it's sublinear with respect to n" - this is true

2. "it's sublinear with respect to the size of the input" - this is false

I do not know how else to explain this to you that it is the size of the input that matters when talking about complexity. Example quora answer that recognizes the distinction (discussions about the technicality in the naive isPrime impl that checks numbers from 2 to sqrt(n)): https://qr.ae/pGuORe.


Technically, you can calculate the nth Fibonacci number in O(1) with the golden ratio (Binet's formula).


It's still O(n) or O(log(n)) as you have to compute the value using exponentiation to the nth power. Which offers either a linear algorithm or a faster logarithmic algorithm if you use fast exponentiation. It isn't actually O(1).
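For concreteness, here is a sketch of the logarithmic approach mentioned above, using the fast-doubling identities (equivalent to fast matrix exponentiation). The function name `fib` is a hypothetical helper, and the sketch assumes results fit in 64 bits (roughly n ≤ 93):

```c
#include <stdint.h>

/* Fast-doubling identities:
 *   F(2k)   = F(k) * (2*F(k+1) - F(k))
 *   F(2k+1) = F(k)^2 + F(k+1)^2
 * Scanning the bits of n from high to low gives O(log n)
 * multiplications - logarithmic in the index n, but (as discussed
 * above) linear in the *size* of the input, which is ~log n bits. */
uint64_t fib(uint64_t n) {
    uint64_t a = 0, b = 1; /* invariant: a = F(k), b = F(k+1) */
    for (int i = 63; i >= 0; i--) {
        uint64_t c = a * (2 * b - a); /* F(2k)   */
        uint64_t d = a * a + b * b;   /* F(2k+1) */
        if ((n >> i) & 1) { a = d; b = c + d; } /* k <- 2k+1 */
        else              { a = c; b = d; }     /* k <- 2k   */
    }
    return a;
}
```

The loop always runs 64 iterations here for simplicity; starting at the highest set bit of n would match the O(log n) bound exactly.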


> Anyway, there was only 1 proper solution to this: I had to answer what they thought was correct.

I think it was a poor solution. There are ways to respectfully disagree.

"I really think I'm right on this one. Since it's a question of facts, not opinion, we could easily verify it later."

Did the author know they were hired because they didn't rock the boat, or despite it?

If I were hiring someone, I would want to know I'm hiring someone who can argue politely for what they think is right, and not see the argument in the context of "winning or losing", but in trying to find the best way forward.


> Since it's a question of facts

Is it?

I think it's an opinion.

How would you empirically test if the answer was right or wrong? What's on Wikipedia is also someone's opinion, isn't it? What's in a book that Wikipedia references is again someone's opinion.

If you can't write a program or other set of instructions to test it... it's an opinion.

'How does Dr Foo say that MVC maps to database-server-client?' would be a question of fact though.


Well, it's a matter of definitions. Which means it's a completely useless and dumb question, but also that the majority opinion in the relevant context is a fact.

That said, I don't think I've seen MVC used with the same meaning twice... which makes the question even worse.


MVC is like OOP - it means different things to different people in different contexts, and they often contradict each other.

Apple's definition of MVC specifically contradicts the Wikipedia definition for MVC that OP gives.

https://developer.apple.com/library/archive/documentation/Ge...

The current Wikipedia content on multitier vs MVC is quite vague.

https://en.wikipedia.org/wiki/Multitier_architecture

I think "correct" is a reach here.

Stock CRUD multitier should not be confused with MVC, nor vice versa.

A company that doesn't understand the differences is a Red Flag.


That’s entirely reasonable, but people aren’t reasonable. When ego is involved, even a polite argument may appear as a personal attack.


That's entirely reasonable, and that's exactly why you hire engineers that are aware of this, and don't base their choices on ego.

If your hiring involves ego, you're going to end up with a Challenger disaster some time down the road.


I really, really don't want to work for a manager who involves his ego in technical questions.


Knowing how many people can't take disagreement well, I don't think this was a poor decision. Practically, you can't just trust everyone to have a good head on their shoulders, and when you just want a job to survive or to get your foot in the door, you can't be picky about who you work for.


I think the underlying assumption here, is that it is better for your job prospects to agree with something you know is wrong, in order to avoid confrontation.

I don't think this should be the default recommended advice. It could just as well cost you the job, if you are interviewed by two people, with the other also knowing what his or her colleague said was wrong, and being disappointed you didn't find a way of politely disagreeing.

In job interviews, I would count it as a positive if I could see that the candidate could handle disagreement. That is in my view more important than getting a rather generic technical question "correct".


Behavioral interview questions give you more insight into this, e.g. "Tell me about a time when...". In a technical interview you'd be wasting precious time determining how they present arguments. You'd also get a false signal from interviewees who are uncomfortable debating their interviewer or who know that they need to progress through the question.


In my experience, being "right" is no bar to getting the job.

In one case where I was going for a C++ expert job, I got asked a question (that I can't remember now), gave the correct answer, which the interviewer disputed. I asked if they had a copy of TC++PL on hand, which they did, and I pointed out the relevant section. I got the job.

In another case, I was asked something complicated about "const" in C++ (which has a few gotchas), gave the right answer, but was still disputed. I got the job, and on my first day the guy that asked the question came up and apologised to me - he'd read up on it after.

IMHO, telling the truth about technical matters is always best; I might fib a bit about other things.


Your experience is not universal.

I've been passed over for a job because I did not recite the right buzzwords (framework names, etc.). Your job in an interview is to please the person across the table from you. Some interviewers are looking for a sharp technical mind unafraid to challenge them. Others view any challenge to what they think is right as a threat.


> Some interviewers are looking for a sharp technical mind unafraid to challenge them.

And people like that are those you want to work with.

> Others view any challenge to what they think is right as a threat.

And those you don't.


It gets tricky with ambiguous subjects. If there is a concrete correct answer then yes, give that and stick to your guns. If there isn't, and the issue comes down to opinion and interpretation ... things get dicey ...


Well, I disagree - people are presumably paying me for my experience and successes, which will affect my opinion and interpretation. If they don't like those, then presumably they don't value my experience, and we would probably not be happy working together.


I think there's a slight confusion here. An opinion should be stated as an opinion and not treated as or claimed to be a fact. If the point in question is stated as an opinion then fine and I'd agree with you. On the other hand, saying that an opinion is the factually one true way is a big red flag on either side.

Otherwise, I'd agree with your stance completely.


This behavior selects for good, or at least compatible, coworkers. Of course if you are desperate for a job then you should read the room and say whatever they want to hear, but if the job market for your skill set is strong, you should show your personality and be open about how you approach technical problems (unless you're really an over-the-top asshole, and nobody thinks that about themselves, so it's pointless to give advice for that situation.)

Once in a technical interview I was asked what data structure I would use for certain functionality "if performance was really critical." I said it would depend on the size and structure of the data that we needed to support, and when the interviewer said "unbounded," I said that the answer to that would go beyond an in-memory data structure, and if performance is critical you need to be able to project the sizes of data you need to support in the near future.

I could tell the interviewer thought my answer was ignorant and sloppy. He started giving me a few "hints," which showed that what he wanted was the data structure had the best big-O performance. So I told what the best big-O performance was for the problem and what common data structure would provide it.

Then he said, "So you would use that?" wanting to put the question to rest and move on to the next one, and I could have said yes. But instead I said "maybe," and I told him I remembered Bjarne Stroustrup talking about how algorithms classes in computer science education give students the wrong idea about how software engineers choose data structures and algorithms in practice. The university version, he said, is that if performance isn't critical, you just pick a container with the right functionality, and then if it turns out that performance matters, you pick something with the best possible big-O characteristics to get ideal performance.

In reality (according to Stroustrup), when performance isn't critical, you should pick something with good big-O performance, and if it turns out that performance matters, you measure on realistic hardware with the data sizes and characteristics you need to support, and in many practical performance-critical cases you will end up choosing something with theoretically suboptimal big-O performance.

I told the interviewer I liked Stroustrup's approach, and I always used data structures with known good performance by default, but I would measure if it mattered. I didn't get the job, and that was probably for the best at that stage in my career. I've nevertheless ended up working in situations like that, where people living by ideas I knew well thought I was an idiot for not understanding them, when I really just didn't completely agree with them, and those situations did not end well.


When we first designed Django we decided to describe it as a MTV - Model Template View - framework because we thought that the classic "controller" concept from GUI applications didn't really apply to server-side web applications.

Rails took a different path: they called their Ruby application code the "controller" and their template files the "view".

With the benefit of hindsight, I'm not at all confident we made the right choice.

I still think we were right from a pedantic point of view, but having to spend over a decade constantly explaining that "no, in Django the view layer is a different thing from the template layer" doesn't feel to me like it added much value for all of the extra effort!


I was once in this position, but decided to continue through with the "right" answer anyway. Even though the interviewer was adamant that he was right (as, of course, was I), we briefly stopped the interview and made some Google searches to get to the truth.

I could ostensibly argue that I got the job not by telling the interviewer what he wanted to hear, but by doubling down on the right answer and showing that I had the capacity to prove it.


I’ve seen people go through the pipeline with “no hires” from interviewers because of small differences of opinion. In your case, I’d wager that the disagreeing interviewers feedback may have just been discarded.


They didn't fully clarify what kind of feedback the interviewer likely ended up giving in the end, though. I'd be curious to know if after they Googled it the interviewer acknowledged he was wrong and the interviewee was right. (I think they're kind of implying that, but it's unclear.)


Late but, for the record, I did get positive feedback from that interview, and the interviewer admitted his mental model was wrong.


I don't feel this particular example is very strong since MVC is a bit of a loose/spongy term, especially nowadays. But I've been in interviews - on both sides of the table - where the interviewer is just outright incorrect. On several occasions I've needed to defend the candidate as my colleague was protesting something the candidate came up with based on incorrect knowledge. In my experience, this is very common. I'm sure I've been incorrect as an interviewer at times too. I really dislike software job interviews, on either side of the table, this just being one reason.


I was in a job interview several years ago and I was given the following prompt: "You have a database containing locations with their corresponding latitudes and longitudes. We want to be able to input an arbitrary latitude and longitude and have the program return all locations within a radius from that point from the database." My initial reaction was to say "I would use a GIS library/API", but the interviewer wanted me to come up with the algorithm on my own. I had done some GIS work at a previous job, so I started explaining how I'd approach it. I was a bit rusty, but I started explaining what I could remember about the Haversine formula and rhumb lines. The interviewer told me that I was overcomplicating it and explained that I can treat the latitude and longitude as standard Cartesian coordinates. I explained how that wouldn't work as they're spherical coordinates, not planar, and lines of longitude aren't parallel. I believe this is where the problem went from being an issue with technical approach to an issue with expectations. What they really wanted was an algorithm that could find all points a given radius from an arbitrary X,Y coordinate. However, by trying to turn it into a real-world problem with latitudes and longitudes they neglected to consider that they had fundamentally changed the challenge. I'm quite stubborn, and I tried to point out that the question was much more difficult than they intended, but they were equally stubborn and insisted that spherical coordinates could be directly converted to planar coordinates with no issue. Long story short, I didn't get a call back and I'm still frustrated about that interview ~3 years later, but I'm also glad that I stood my ground rather than give an incorrect answer.


There's a sizable portion of people who believe that spherical coordinates can be directly converted to planar coordinates :-).


Mathematically speaking, isn't it true you can compute the coordinate transformation (with your choice of map projection, perhaps ignoring the poles)? The real problem is that your metric / notion of distance has changed, so you can't simply compute distance as sqrt((x2-x1)^2 + (y2-y1)^2) and expect it to map neatly to a circle on the globe.


I would have been just as stubborn about the spherical-to-planar issue. IMO it's easy to illustrate by pointing out an extreme example: two longitude lines can be feet apart near the poles (ignoring the intersection aspect for simplicity) and miles apart at the equator. If someone doesn't understand that... I don't know what to say.


Sometimes "good enough for jazz" applies. I solved a similar problem: find all geo locations within 2 hours' driving time of x,y.

I basically used the great circle distance (using I think a CPAN module) as an approximation - this was for geotargeting AdWords.


Maybe they were expecting converting to 3 dimensional Cartesian coordinates (X/Y/Z), then check that one point is inside the sphere of given radius from the other point?


This article is pointing out that interviewing/recruiting is as much a dominance play for the interviewers as it is a testing of candidates. Doing interviews can be considered recognition of rank inside the company. Some probably consider this sanctioned "lording" an implicit reward, and thus will make sure they get what they want. Some consider it a hazing ritual.

Noting how many comments here are of the form "this is what /I/ would accept" or "what /I/ usually want out of /them/".

And the point was specifically how interviewers insert their personal issues into the questions and make others dance.


This answer is insightful and matches my understanding.

The article demonstrates an interviewee skillfully maneuvering around their interviewers. It undermines the dominance display which, judging by the comment section, more than a few folks enjoy but also don't want to admit to enjoying.

I'll go a step further and say the desire for dominance in this scenario stems less from avarice and more from insecurity. A new face who does their job too well, and knows things that the interviewer does not, is someone who can replace the interviewer.


Hah, I'm way too stubborn to give a wrong answer. Once had an interview where the interviewer asked me how to identify open connections on a linux host. Told him my go-to was `lsof -i` because fuck yeah, `lsof`. He told me no, the answer was `netstat`, which I took as a fun opportunity to explain why I prefer `lsof -i` over netstat. They still thought I was wrong and I got the impression that they took it as a challenge against their authority.

I did not get a callback.


This reminds me of a post that made the rounds here a few years ago: Google's Director of Engineering Hiring Test[1]. "Recruiter: that's not the answer I have on my sheet of paper"

1: http://www.gwan.com/blog/20160405.html


MVC isn't an architecture at all. It's just meant to be a tiny pattern for UI widgets from the '70s that people started to use as an architecture. The actual goal is just to keep a tiny piece of business logic from having a dependency on a specific input or on the view. It applied to a single button or a single input field back then.

Architecture is more about deciding what direction your dependencies go and where your hard boundaries go

https://youtu.be/o_TH-Y78tt4 (approx 27 mins in)


Model View Controller doesn't really mean anything in 2021. It might have meant something when the term was first coined, but everyone has such different ideas on what it means it's not a useful term anymore.


It meant something before Rails bastardized it.

Until then (and thanks to Smalltalk) it was a composite UI design pattern combining observable and mediator patterns. It was also a recursive pattern, in which the "editor" could itself be a model-view-controller.

Rails designers completely misunderstood it (or deliberately ignored it) and simply reused the terms for something that was only marginally similar, not recursive and was not inherently "active".

Other frameworks adopted Rails' terminology and now we are left with the original pattern having been completely forgotten.


Give them what they want and leave yourself an out:

"As an analogy, perhaps one could view..."

Two months later when that person runs across the wiki article:

"Ah, right that's why I said as an analogy perhaps one could view it that way. I thought you knew?"


I think a lot of people have had the experience of giving the right answer and having the interviewer believe it's wrong, whether the interviewer was wrong or the interviewee didn't explain it well enough. I've done both. In one case, I made a flippant comment about how you don't really know where the bottlenecks in your software are unless you measure. I was making a statement which I believed the other person would simply agree with as a baseline. However, he took it as a challenge and said, "I know where our bottlenecks are". Oops.


I don't like this kind of interview question because it's all about subjective terminology and superficial knowledge and not about logic or reasoning.

The author is right, the MVC pattern can be considered from many different angles. It's possible to have MVC just on the client side (e.g. with React and Redux, the store is the model, components are views, and the router is essentially a controller). React (the library) is itself also a controller, since it handles the DOM diffing and the reactive state-update mechanism and thus acts as the glue logic between all the views.

I wouldn't want to work for a company with such a rigid view of software development.

It's a sign of seniority when you notice inconsistencies with terminology.

For example, I know some senior developers who had totally different ideas about what is 'unit testing' versus 'integration testing'. Both are valid views because the terminology is still currently ambiguous.

Does unit testing have to only test 1 class in complete isolation (stub out all function calls to dependencies)? Or is it OK to test a class along with its dependencies (no stubbing)? Some developers say that if you include dependencies, then you're testing more than 1 class, so it should be called an 'integration test'; other developers will claim that it's still a unit test with respect to that class, and that integration tests must interact with the system from outside via the API (not method calls on a class).

Either way, I think that stubbing out dependencies is a bad idea in most cases (aside from mocking I/O calls to external systems like a database), so if I were to accept the definition of a unit test as being without dependencies, then I would very rarely use unit tests.

Anyway, this shows that even a simple term which is widely known can be the subject of conflicting opinions, and it's wrong to criticize people for choosing a definition which doesn't match your own.

Software development doesn't have much global consensus nowadays and part of the problem is that companies are using bad hiring techniques to interview candidates; companies end up forming tribes of like-minded individuals and completely miss all these nuances and these debates.


It's not even superficial knowledge, it's overloaded terminology. MVC as originally formulated came out of Smalltalk and looked more like what we would call MVVM today, with smaller controllers. Apple took MVC on iOS and moved it to the opposite extreme with their god-controller objects that knew everything and did everything. Meanwhile, what web developers think of as MVC is really the JSP Model-2 architecture, where the controller is the entry point to the system, responsible for coordinating with the models and views to generate a response, whereas in traditional thick-client GUI MVC the view layer is the entry point into the system and the controller sits between the views and the model.

Point being, "What is MVC?" is a very expansive question and I'd be very wary of working for any developer who thought there was one right answer to it.


This is a good lesson in why LC-style interviews are so popular. There are generally only one or two correct/optimal answers that an individual could code in 20 minutes. There is limited ambiguity as to the task to be solved, for both the interviewer and the interviewee. There is easy calibration across multiple interviewers as a set of boolean progression points - did they solve the problem? did they need hints? did they present an optimal/correct solution? The edge cases/traps are likewise known to the interviewers for quick calibration.

The alternatives I've seen boil down to these trivia-style interviews, which come down to simply memorizing answers the interviewer deemed important/correct. The most ridiculous cases include obscura such as "what would you do if [X debugging tool] stalled [Y application process]?" or "what would you do if you saw a server with high IO time?"

There are many versions of correct answer to these, but odds are your interviewer has a specific one in mind. In a real discussion of these events there would likely be back and forth on root cause/severity/solution, but you simply can't have back and forth in an interview situation. The starting impression will always devolve towards "this person doesn't know what they are talking about".


Who would want to work at a company where you knowingly do things the wrong way due to the fragile egos of your superiors. Debating technical issues is fun and leads to shared knowledge and better outcomes. Part of interviewing is showing off what kind of person you would be to work with. Challenging assumptions and making a good argument should help you in an interview if the place has a good culture.


During an interview for a job with a Wall Street outfit, the interviewer asked me “In C++, how would you determine the concrete type of a pointer to a base class if you did not have runtime type information?”

I couldn’t think of an answer, and then he explains “It’s quite simple - you would use dynamic_cast.”

I needed the job so I just smiled and said “Oh, that’s really cool - I didn’t know about that.”


That's the one thing I hate about web (backend) development in languages like Java, C#, and so on.

There's GIAAAAANT focus on patterns, architecture, patterns once again

and even more talk/discussions about it, yet even people with years of experience get stuff wrong.

I feel like systems programming is simpler in that respect.


MVC is such an overloaded term ... I don't think I've seen behavioral equivalence between any two implementations/frameworks in any language I've dealt with. As always, the devil is in the details. So going much further than "the model deals with data (either connected or unconnected), the controller usually deals with behavior, and the view usually deals with presentation", it's really hard to talk about MVC in general without some tighter constraints. For instance, are your models POJOs? Are they tied to the DB directly (i.e. something like ActiveRecord)? Do they process/send events (i.e. something like a model in Swing)? Etc ...

It gets complicated fast if you want to discuss more than very abstract generalities.


This story uses a technical question (MVC vs n-tier) as an example, but conforming your answer to interviewer expectations also applies to behavioral and soft-skill questions (perhaps more so).

There are some traditional corporate HR-screening questions that have flummoxed tech people since forever. Those who are savvy about people skills in the workplace know instinctively how to answer these questions, but some techies have a very hard time with them, because they're either being radically honest or awkwardly trying to second-guess what the interviewer wants to hear.

The best thing one can do is practice. Interviewing is a skill (for both sides); it doesn't come naturally to most folks.


My first programming job interview (like 100 years ago) was a group interview. One guy asked me to write string compare in C.

Not sure I did it right, but I basically walked the two char* pointers looking for a '\0' or a mismatch. The guy asking the question said, "No, first you should compare the string lengths -- since if they have a different length they will be different."

I was nervous, but I thought he was wrong. I said, "Well, to get the length you need to walk both strings, so that isn't faster."

He got annoyed and said, "They have optimized functions for that!"

I didn't argue. Needless to say, I didn't get the job :)
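A sketch of the single-pass comparison described above (hypothetical function name; real strcmp implementations add word-at-a-time tricks, but a length pre-check can never help, since computing a length is itself a full walk of the string):

```c
/* Walk both strings together; stop at the first mismatch or at the
 * terminating '\0'. One pass, no separate length computation. */
int my_strcmp(const char *a, const char *b) {
    while (*a != '\0' && *a == *b) {
        a++;
        b++;
    }
    /* Compare as unsigned char, as the standard strcmp specifies. */
    return (unsigned char)*a - (unsigned char)*b;
}
```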


Modern strcmp and strlen both contain optimizations based on modern hardware, and I believe strlen can be faster than strcmp, but that's overoptimizing for performance.


The headline feels wrong, as the author started by laying out their assumptions, and revised after more or less getting their question answered.

One could just as easily form this as a question.

“All three are usually at the application layer in my experience. Would you like me to discuss this, or give you a more general mapping?”

In either case, asking questions before answering is something any good interviewer should be looking for as a positive signal.

A general knowledge question like this, though, seems designed for a quick answer, so I’d only ask the question if I was really confused rather than splitting hairs.


If you have to lie in an interview to get a job, that's a pretty strong sign that the job is not worth having.


Or that it's just a single bad interviewer in an otherwise great organization, or a normally good interviewer having a bad day in a great organization, or any of an infinity of possibilities.

Interviews are probably mostly useless to both sides. In the end, it's sink or swim.


My comment wasn't really based on any qualities of the interviewer or company at all. It's based on the perception of the applicant. If an applicant feels it necessary to be deceptive in an interview, that's a big sign that the job is a poor match for them.

It doesn't matter if the applicant's sense is justified or not. Either the applicant doesn't feel they are truly qualified for the job, and are being deceptive to avoid detection, or the applicant doesn't trust the company or people they're interviewing with. Either way, that means it's a poor fit.

After all, it's not exactly rare that a person is a poor fit in a company even if the company, and the applicant, are both excellent.


> If he didn’t know, I would totally humiliate him in front of his boss.

I only know this from Indian culture. There, someone can "lose face" if you correct them in front of others.

A friend of mine attended a conference talk by a colleague. She noticed that they had made fundamental mistakes in their statistical analysis when she asked whether they had performed some prior tests. The result was her manager telling her that she shouldn't ask questions in public anymore.


I guess the real question is what Koen's goal was. Was it to be hired for this particular job? Or was it to get a technically satisfying job?

Personally I'd find it very difficult to knowingly give a wrong answer to a client or potential employer, even if it's clearly the one they want. After all, they're hiring me to find the right answer for them, not to agree with them. That's just me though.


> But then I saw him frowning, and so knew this was not the answer he was expecting.

This is why, as a blind person, I'm fundamentally screwed when looking for a job. I suspect situations like this contribute greatly to the 70% unemployment rate in the blind community.


Good post.

Humans are important (and also illogical). If we are to work in a team, then we definitely need to factor humans into the equation.

That also means, that when we evaluate the employer, we should try to find out as much as possible about the human culture there. Even if they are all tech whizzes, if the team is broken, the job will be a nightmare.

I was talking to a guy a couple of days ago, about a job he quit after two days.

He noticed that every time the manager walked onto the floor, everyone put their heads down, and avoided eye contact. It was only a matter of time before the manager cut a victim out of the herd, and humiliated them in front of the others.

During the interview, this same manager was a font of friendliness. But on the floor, he was a tyrant.


I agree that the author deserves some recognition for their interview social skill flex. I imagine most engineers would not perform real-time perspective taking in this situation. But should they? My takeaway is less pragmatic -- shouldn't those levying judgement during an interview be open to self-reflection? Can't an engineer be celebrated for being capable of learning something new in front of their manager? Shouldn't a manager be capable of understanding their employees are human? Shouldn't those involved in an interview process understand that architecture is far more complex than the cookie cutter representations some use to simplify it?


Yes, depending if you're just fishing around or not, you can either test the limits with your statements or just tell them what you think they want to hear.

I have found it's practical to go on a fishing trip while employed and test the limits, less pressure. Of course, this can backfire and did for me, but overall it was beneficial.

For example, if it's something like a high street bank, don't mess around, these are highly political institutions, similar can be said for large, established companies.

Sometimes, a wizard is required, sometimes a yes person is required.

It's good if you know some hr people personally as friends, listen to their stories.


It's a great skill to not always have to be right (even when you are). This helps you a lot when you actually want to accomplish or change something - which is usually what you want to do when you have a lot of knowledge.


In the Benelux, try to get a job after answering what you really think about Scrum. If the local Scrum Master is present in the interview team you will be "persona non grata" for the rest of your career...

"One Hacker Way Rational alternative of Agile - Erik Meijer"

https://youtu.be/2u0sNRO-QKQ

Nowadays a Developer has to act a little bit like in this scene from Good Will Hunting.

Make yourself stupid...you will be hired.

https://youtu.be/UpL3ncoK99U


> try to get a job after answering what you really think about Scrum.

That's providing an opinion, a very perilous thing.

Scrum can work, and it can also fail spectacularly. Often it just presses forward as well as any other process. By saying, "Scrum is awful" you're positioning yourself against the current trend in many businesses (similarly saying that about DevOps in many places, or DevSecOps if you go to the US gov't or DoD contractor). And since the value of Scrum is so dependent on the people doing the work and how they actually run things there's no universal statement that can be made about it anyways.

So don't. Qualify it instead: I've seen Scrum at a past employer where they insisted on only code-based stories in each Sprint, which led to massive technical debt, and eventually the project came to an incredible slowdown as they paid no attention to refactoring or other cleanup tasks until after they were bitten by it. If the stories are a mix of maintenance and development stories then it seems to be more successful.

Such a statement is hard to dispute (it's about personal experience) and is non-confrontational (you're not entirely dismissing Scrum but you're not endorsing it either, only offering insight into what seems to be a problem feature and a potential solution). You aren't making yourself stupid, but you're opening up a potential discussion. If the Scrum Master is present you can get into a discussion about their specific process as implemented in their organization. That can give you more useful insight than saying something negative as if it's a universal truth, when it's not.


I have had two engineers in the last few weeks tell me that my (correct) interview answers were wrong. It's extremely frustrating feeling to be already stressed from an interview and then deal with this on top.

In one case I argued for a while until we moved on. In the other, I was able to show the interviewer why I was right and they eventually saw my side. I'm not sure if this was poor communication on my part (definitely possible!) but I felt helpless. Still, I don't think I'd be able to intentionally say an incorrect answer just to get the job.


I've found that these situations are actually quite useful from my side as a prospective employee. If the interviewers are treating this as a "gotcha" interview and it turns into a test instead of a discussion, then that's a pretty good sign that this job is not going to be a good fit for me. The best jobs I've had are where the interviewers treated me as a potential future colleague, and the technical questions were more of a starting point for a detailed discussion of different scenarios rather than a straight quiz.


IMHO, having done a fair whack of sales and negotiation in my work, the best way to suss these things out is to hit them back with questions.

"Before I answer, can you clarify if we're talking about MVC as the term was used for Smalltalk applications or as it's used now to refer to web frameworks?"

If they look at you blankly about Smalltalk, you know what kind of answer you're supposed to give. If they smile and chuckle, you get to nerd the fuck out. :-)


IMO the better way to do this is:

"That term/phrase is actually used in a couple of different ways. Originally it referred to X. But it's sometimes also used to mean Y."


Just about the same happens every time I am asked why we need microservices or TDD.

Because if I went to say that most projects need neither I would probably never get any job.


I strongly suspect that Google didn't extend me an offer because one of my interviewers was insistent that a race condition could only be a bug and could not be used strategically, but with a electrical engineering background I could not let that slide. Unfortunately it didn't occur to me during the interview to pull up Wikipedia. I'd prefer not to work somewhere which requires dishonest agreement, so it's been for the best.


So they took the job instead of noping the fuck outta there?


Once had an interviewer for a startup ask their final question after they found out I performed stand-up comedy as a hobby:

If SNL were to call you and offer you a job while you were working for us, would you take it?

I know what they wanted me to say but I thought it was a pretty dumb question. So I said:

Of course I’d take the job with SNL, but I didn’t move to this city for stand-up comedy, that’s just a side hobby.

I did not get the job and I guess they were looking for someone more dishonest.


Not exactly this, but I ran into a similar situation in an Uber SWE interview where I presented 2 solutions for a problem, one after another with the latter being better. However, the Senior SWE interviewer didn't know better and was fixated on the 1st solution and didn't see a problem with it. Always a bit frustrating when this happens and I chose to stick with what was the "right" answer without a fuss.


The questions in this story seem like basic filtering questions, since the amount of signal one could derive from their answers is essentially perfunctory. Why is basic filtering being done in a multi-party in-person interview? That seems like a waste of everybody's time. Especially that of the company doing the hiring.

A whole separate matter is the use of what amounts to a technical glossary being used as an evaluation criteria.


My first interviewer for a Service Desk job:

Interviewer was super nervous, visibly shaking! I poured myself, and him, a glass of water and took a sip. He took a sip, and visibly calmed down.

The answer to every "Do you know XYZ?" IT product was a meek "No". But I still got the job.

Their reasoning? You can teach a nice person technical things with proven interest. It is hard to teach a technically knowledgeable person to be nice.


I guess. I don't think I would want to work with a technical lead who can't admit when they are wrong. How will they learn? How will it be communicating with them using logic? That seems really toxic and I think I would have just been honest and see how well they "learn". When I am interviewing, I am also interviewing the company, not just them interviewing me.


Reading this, the partial first answer the author started with was, in fact, a bad answer. Where the MVC pattern is used was not really relevant to the question.

Not to mention, the second answer is correct. Three-tier architecture and MVC don't have to be exactly the same to be related and have major elements that correspond to each other.

Going deeper, if anyone is interested... it's pretty common in MVC for the view to point at elements of the model. To make it more concrete, imagine a Circle type composed of x, y, and r and a CircleView that displays a Circle. The CircleView might point to an instance of Circle that is also in the model. But as long as the controller put it in the view, and the view doesn't depend on it being from the model, that's a perfectly valid MVC (IMO anyway. People will argue about this.)

Interestingly, a similar situation arises in a three-tier architecture where storage is able to directly serve resources over standard protocols. E.g., imagine an application to store and share images where the images are stored in a KV store that also has an HTTP interface (like S3). When displaying images, the app layer could render HTML to the browser client with <img> elements where the href links directly to the storage. The client would then bypass the app layer to access the storage itself. This is perfectly valid for the same reason as the MVC case -- the client doesn't know or depend on the fact that the resource comes directly from the storage, and the middle tier is what controls what the client is going to access.
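To make the Circle/CircleView idea concrete, here's a minimal Python sketch (all names hypothetical, just illustrating the shape of the pattern): the view holds a direct reference to a model object, but only the controller knows that reference came from the model.

```python
class Circle:
    """Model: plain data, knows nothing about views or controllers."""
    def __init__(self, x, y, r):
        self.x, self.y, self.r = x, y, r

class CircleView:
    """View: renders whatever Circle it is handed.

    It neither knows nor cares whether the Circle also lives in the model.
    """
    def __init__(self, circle):
        self.circle = circle

    def render(self):
        c = self.circle
        return f"circle at ({c.x}, {c.y}) radius {c.r}"

class Controller:
    """Controller: the only layer aware that the view's Circle is a model object."""
    def __init__(self):
        self.model = [Circle(0, 0, 5)]  # the model layer

    def make_view(self):
        # Hand the view a live model instance directly.
        return CircleView(self.model[0])

print(Controller().make_view().render())  # prints "circle at (0, 0) radius 5"
```

The sharing is safe precisely because the dependency arrow only points one way: the view depends on Circle's interface, not on the model layer itself.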


I hope it was not Boeing ;)


If a team lead can’t handle being wrong in front of their manager they are not a team lead you want to work for. Their ego is fragile, they’re not told when they’re wrong, and they’re unlikely to learn at the pace a lead must.

Being a lead does not make you infallible or even least wrong. You must always be learning and that means often being wrong.

Source: me, a lead


Have to say that the interview was handled masterfully. However, regarding the technical premise of the discussion: I agree that the model & view are in the middle layer, but isn't the view in the view layer? Its whole purpose is to package it up for display, which includes having the graphical elements rendered.


I'd rather give the correct answer, if they are going to fight the answer despite being wrong then it's a good sign I don't want to work with these people. Interviews being a two way process and all, this seems like it was an excellent opportunity to test their character.


I agree with your logic. Anecdotally I had an experience that proves your point. The ASVAB [1] test had about a dozen incorrect questions. I brought this to their attention and was told to pick the least wrong answer. This was exactly the mind-set I dealt with my entire time in the military. By incorrect questions I mean that, based on the question, either all the answers were correct or all of them were incorrect.

[1] - https://www.todaysmilitary.com/joining-eligibility/asvab-tes...


I believe I missed out on a google job because I used uint32_t from stdint.h (rather than unsigned long). The interviewer in this case worked on compilers and didn't like a recent-college-grad schooling him on the dangers of assuming integer type sizes.

I don't regret it. =)


This is a bad answer even in the contrived context in which it is given.

The better answer is along the lines of: Architecture patterns are abstract concepts whose implementation may vary. You can express an MVC Architecture without a database or without a GUI, however commonly…


Model View Controller question is always a tricky one. My understanding of that was different from Wikipedia explanation. With this "not one perfect answer" mindset, I wonder, whether software engineering is close to art or science?


The only way to win the MVC debate is to not play.

No other topic in our design pattern study group would incite such heated debates.

Any more, any time someone refers to MVC, non ironically, I just mentally check out. Bozo bit style.


Similar thing happened to me in early years of dev. I decided not to take the offer if I couldn't express myself. Nowadays I'd love to disagree with someone, especially on interview.


At one of the Big Four advisory firms we were only allowed one-on-one interviews. I was never able to get a satisfactory answer for this rule.

Now I wonder if it was designed to prevent this specific scenario.


They may have a different answer, but the explanation I've gotten for one-on-one interviews is that it prevents the "grilled by a panel" kind of feeling. (I wouldn't know since I've never been in a panel interview.)


Do you really want to work under a tech lead who misunderstands the fundamentals? From my experience, this always ends up with a frustrating work environment.


2007 was a popular time for people trying to cram "MVC" vocabulary into the web world, where it made no sense. The Ruby community was a contributor.


Wow, my article is on hacker news! :) Enjoy


Hard to judge when you don't have the context. But from this it seems like the interviewers were plain wrong.


But now you're working for someone who knows less than you do, and is willing to punish you for that. Yay.


This is a shit interview question anyway.


Tangential question - can anyone recommend a good resource that goes over common architecture patterns?


Very important skill, much like not overthinking a test question.


Them: Please randomize a list in O(n) time.

Me: so you just want me to generate O(n lg n) bits of entropy in O(n) time? That's not possible.


I think you can do this with a Fisher-Yates shuffle (https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle), right?
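Right -- for reference, a minimal sketch of the Fisher-Yates shuffle in Python (n−1 swaps, one random index per swap):

```python
import random

def fisher_yates(items):
    """Return a uniformly shuffled copy of items using O(n) swaps."""
    a = list(items)
    # Walk backwards; swap each slot with a uniformly chosen
    # index at or below it (possibly itself).
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)
        a[i], a[j] = a[j], a[i]
    return a

shuffled = fisher_yates(range(10))
assert sorted(shuffled) == list(range(10))
```

Note this doesn't really contradict the parent's objection, which is about entropy rather than swaps: each of the n draws consumes roughly log2(n) random bits, so the shuffle still consumes O(n log n) bits of randomness even though the swap count is O(n).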


The "right" answer in this context is the one that gets you the job, not the most technically and factually correct.


I'd rather not work under a moron.


Quick summary of the story - TLDR; sometimes the right answer depends on your interviewing circumstances.

> When I arrived at the interview, there were 2 other guys present. One would be my direct team lead, which was also the technical lead, and the other was his manager.

> Then they moved on to a follow up question: “How does this architecture relate to the model-view-controller pattern?”. I knew this question was really tricky, because I know a lot of people make the mistake of directly linking the tiers to each of the model-view-controller.

> Normally, I would have given the correct answer, and have a nice discussion if they considered it wrong. But the problem was, that this guys manager was sitting next to him. If he didn’t know, I would totally humiliate him in front of his boss. So either he would stick to his guns and refuse the correctness of my answer, to save face. Or he needed to agree that he was wrong, and lose face. Anyway, there was only 1 proper solution to this: I had to answer what they thought was correct.

> The moral of the story? Job interviews are not all about your technical skills, it’s about people skills too. And this is good, because you need both in your job.


The author comes across as an insufferable know-it-all.


He handled that gracefully, which shows very good people ("soft") skills.

One "consultant" technique I use in such situations is to say the correct thing, but while giving "credit" to the person who was actually wrong: "I think $JOE is right about $TOPIC, and what he's saying is that <proceed to making the _correct_ case>". I find that many will be persuaded and are more willing to accept the correction in this form, as they're not being told that they're wrong. If they still don't agree, then I can ratchet up to more pointed criticism: "oh then I misunderstood you...but isn't there a problem then because...."; but, if you can keep bringing them in by validating at least some of what they thought, it helps.


Why would you want to work for a supervisor who is so prideful they can't stand the fact that there are other people who know more than them? Aside from needing an income of course.


Yeah well, that's a pretty big thing to put aside just like that, isn't it?


I've hit this as well. There are a number of deep technical falsehoods believed by the industry that are very widespread. These falsehoods are so ingrained that pushing against them, even rationally or logically, could land you in hot water. The one that affected me personally has to do with quaternions.

At the company where I work we convert all of our data to quaternions when transmitting over the wire or for data storage. In the gaming and robotics industries there is a misguided assumption that quaternions are always better, and at my company we force this assumption onto all engineers by using typed protobufs. We can never send Euler angles over the wire; we must always send quaternions.

This is actually fundamentally bad. Like it's not even a design question. It is by logic worse to store things as quaternions. Quaternions are only good for certain transformation calculations. They are not as good for data transmission or storage. So I made a proposal to offer alternatives but I was shot down even by the CEO (who took the time to personally make his own viewpoint known on the entire slack thread out of nowhere) because all of these people buy into the misguided notion that quaternions are always better.

The person I was talking to about this was so hell bent on believing that quaternions are better that if I pressed the point further I could start an all out conflict that could get me fired so I had to stop and pretend (aka lie) to agree.

The fact of the matter is, Quaternions are a higher entropy form of storage for rotation and orientation. You lose information when converting something to a quaternion and this is good for calculation but definitively bad when you choose to use quaternions for data storage or transmission. If you transmit or store things as Euler angles you CAN always convert it to a quaternion. The conversion is trivial and mostly a non-issue.

The problem is that once you have a quaternion you can't go back to Euler Angles without additional assumptions. The back conversion algorithm is not One to One. So by forcing this format as storage you are limiting the future productivity of this data by keeping it in a higher entropy form.

Each quaternion is realized by TWO Euler angles within a 360 range of motion across 3 axes. When you convert something to a quaternion you cannot go backwards. You cannot find the original Euler angle the quaternion came from because you HAVE two options to choose from.
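The many-to-one claim is easy to check numerically. A minimal pure-Python sketch (using an intrinsic Z-Y-X yaw-pitch-roll convention, chosen here purely for illustration) shows two distinct Euler triples producing the same rotation matrix -- and therefore the same rotation/quaternion:

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx(yaw, pitch, roll):
    # Z-Y-X (yaw, pitch, roll) composition.
    return matmul(rz(yaw), matmul(ry(pitch), rx(roll)))

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(3) for j in range(3))

# Two distinct Euler triples, one and the same rotation:
m1 = euler_zyx(0.0, 0.0, 0.0)
m2 = euler_zyx(math.pi, math.pi, math.pi)
assert close(m1, m2)
```

Here (0, 0, 0) and (π, π, π) both yield the identity rotation, so any converter going back from the rotation (or quaternion) to Euler angles has to pick one of them by convention.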

For gaming this problem is not so apparent because you're in a virtual world and having everything exist in quaternions is ok because rotational orientations don't have to be realized by actual movement or rotations. The computer simply draws the object at the required orientation.

But real world rotations HAVE to be realized by euler angles. You cannot Orient something in reality without actually turning it about an axis. Gimbal lock cannot be erased in the real world and even the Apollo module suffered from this phenomenon despite the fact that the engineers knew about quaternions. People at my company seem to think the issue disappears once you switch everything to quaternions.

Thus for something as simple as having one robot gimbal imitate another... if the communication protocol between them both was exclusively quaternions (with no additional assumptions), the imitating robot can choose an alternative Euler angle to project its motion onto, and the two robots WILL not be in sync. Total information loss.

So all in all this proposal never went through. I was shut down by the stubbornness and overconfidence of "robotics experts" who've been brainwashed by false dogma. The people I was proposing this to told me that I should trust the extensive experience of their backgrounds building self-driving cars at Uber and building robots at CMU. Yeah, I respect that, but can you not see the literal logic of the issue here? I don't respect people who aren't able to see logic.

The company culture is just part of the story; these falsehoods are likely held industry-wide and you'd get these issues everywhere. False dogma is powerful. Try telling a Christian that walking on water is ludicrous when looking at it logically. It's the same issue here. People's brains will fight logic if it goes against their beliefs.

Very likely I might even get replies to this post who have so much confidence in quaternions that they'll come up with a retort that doesn't fully understand the problem I illustrated here.



