Interestingly, you managed to illustrate the O(n) debate above pretty well.

O(n) is shorthand for "here is what happens when n gets large, but I'm telling you nothing about when n is small". For example, some sorts are O(n log(n)) and some are O(n^2). Naturally for big sorts O(n log(n)) is preferable - but when n is, say, 5, the O(n^2) sort will often be faster. People use O(n) precisely because it carries all this unsaid nuance.
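This is why real sort implementations often use both. A minimal sketch (the cutoff of 16 is illustrative, not tuned): a quicksort that hands small subarrays to an O(n^2) insertion sort, because the "worse" big-O wins when n is small.

  #include <stddef.h>

  /* Insertion sort: O(n^2) worst case, but a tiny constant factor,
   * so it tends to beat O(n log n) sorts for very small n. */
  static void insertion_sort(int *a, size_t n) {
      for (size_t i = 1; i < n; i++) {
          int key = a[i];
          size_t j = i;
          while (j > 0 && a[j - 1] > key) {
              a[j] = a[j - 1];
              j--;
          }
          a[j] = key;
      }
  }

  /* Hybrid quicksort: recurse while n is "big", fall back to
   * insertion sort once the subarray drops below the cutoff. */
  void hybrid_sort(int *a, size_t n) {
      if (n < 16) {
          insertion_sort(a, n);
          return;
      }
      int pivot = a[n / 2];
      ptrdiff_t i = 0, j = (ptrdiff_t)n - 1;
      while (i <= j) {                    /* Hoare-style partition */
          while (a[i] < pivot) i++;
          while (a[j] > pivot) j--;
          if (i <= j) {
              int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
              i++; j--;
          }
      }
      hybrid_sort(a, (size_t)(j + 1));    /* left part:  [0 .. j]     */
      hybrid_sort(a + i, n - (size_t)i);  /* right part: [i .. n - 1] */
  }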

In your example you probably have upwards of 30 instructions, possibly hundreds, which sits comfortably on the big side of things. If there were a way to reduce that to an O(1) solution (which would mean you could add many new instructions without adding lines of code) and you didn't take it, then you've coded it badly - but we all know that's not possible for a microcontroller emulator.
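To make that concrete, here is a sketch of a decode loop for a hypothetical 8-bit machine (not any real ISA; all names are made up). You can make the dispatch itself O(1) with a function-pointer table, but each opcode still needs its own handler somewhere, so total code size remains O(n) in the instruction count:

  #include <stdint.h>

  typedef struct { uint8_t a; uint8_t mem[256]; uint16_t pc; } cpu_t;

  static void op_nop(cpu_t *c, uint8_t arg) { (void)c; (void)arg; }
  static void op_lda(cpu_t *c, uint8_t arg) { c->a = c->mem[arg]; }
  static void op_sta(cpu_t *c, uint8_t arg) { c->mem[arg] = c->a; }
  static void op_add(cpu_t *c, uint8_t arg) { c->a += c->mem[arg]; }
  /* ...one handler per instruction: adding an opcode adds code... */

  typedef void (*op_fn)(cpu_t *, uint8_t);
  static const op_fn dispatch[4] = { op_nop, op_lda, op_sta, op_add };

  void step(cpu_t *c) {
      uint8_t opcode = c->mem[c->pc++];
      uint8_t arg    = c->mem[c->pc++];
      /* The lookup is O(1), but the table and the handlers behind it
       * still grow linearly with the instruction set. */
      dispatch[opcode & 3](c, arg);
  }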

In his example he used O(n), but implied an O(1) solution is possible. The same logic applies: his use of Big-O means that for large n it should be possible to add many new cases without adding code. If that is true, his solution is far better than the one the interviewer wanted. However, n was likely small because this code had to be written for a job interview. It's entirely possible he found enough commonality in that small number of cases to reduce them to 6 lines, while the real-world problem is indeed O(n). In that case the correct solution depends on the instructions he was given - whether he was asked to solve this particular problem in an optimal fashion, or to demonstrate how he would solve the problem in general, using those n cases as an illustration.
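The "commonality" case is worth spelling out. Suppose (hypothetically - we don't know his actual encoding) the opcode bits are regular: bits 2-3 pick an addressing mode and bits 0-1 pick an ALU operation. Then 16 opcodes collapse into 4 + 4 small pieces instead of 16 separate cases. Whether the real instruction set is that regular decides whether the short solution actually exists:

  #include <stdint.h>

  typedef struct { uint8_t a; uint8_t mem[256]; } machine_t;

  static uint8_t fetch_operand(machine_t *m, uint8_t opcode, uint8_t arg) {
      switch ((opcode >> 2) & 3) {          /* addressing mode bits */
      case 0:  return arg;                  /* immediate            */
      case 1:  return m->mem[arg];          /* direct               */
      case 2:  return m->mem[m->mem[arg]];  /* indirect             */
      default: return m->a;                 /* accumulator          */
      }
  }

  void execute(machine_t *m, uint8_t opcode, uint8_t arg) {
      uint8_t b = fetch_operand(m, opcode, arg);
      switch (opcode & 3) {                 /* ALU operation bits   */
      case 0:  m->a += b; break;
      case 1:  m->a -= b; break;
      case 2:  m->a &= b; break;
      default: m->a |= b; break;
      }
  }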

He doesn't say, so we don't know. His use of O(n) does hint at the answer - but the grey beards among us would like to see proof that the problem really can be solved in O(1) and that the interviewer demanded an O(n) solution despite that. It does seem like a really weird thing to demand.
