
In some cases amortized isn't good enough though, and games could certainly be one of them!

(Anywhere you're servicing some kind of interactive request might qualify, depending on the size of your data structures.)

This is actually something I like to work through in interviews:

OK, you've got amortized-constant-time append to your array-backed collection, but now what if we need it to be truly constant time? How could we achieve that? (The same question & trick applies to hashtables.)

(The answer can be found, amongst other places, in the Go or Redis source code.)
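To make the trick concrete, here's a minimal sketch (not the actual Go or Redis code, which do this for maps/dicts rather than plain arrays): when the backing array fills, allocate a new one twice the size, but copy only a bounded number of old elements on each subsequent append, so every single append does O(1) work in the worst case.

```python
class ConstantTimeAppendList:
    """Array-backed list with worst-case O(1) append via incremental copying.

    A sketch of the idea behind Go's map growth and Redis's incremental
    rehashing, applied to a plain array: growth cost is spread across
    later appends instead of paid all at once.
    """

    def __init__(self):
        self.cap = 4
        self.buf = [None] * self.cap  # current backing array
        self.old = None               # previous array, still being drained
        self.migrated = 0             # how many old slots have been copied
        self.n = 0                    # logical length

    def append(self, x):
        if self.n == self.cap:
            # Start a migration: double capacity, but copy lazily.
            self.old = self.buf
            self.cap *= 2
            self.buf = [None] * self.cap
            self.migrated = 0
        self.buf[self.n] = x
        self.n += 1
        # Copy at most 2 old elements per append. Since the old array holds
        # cap/2 elements and cap/2 appends fit before the next growth, the
        # old array is always fully drained in time.
        if self.old is not None:
            for _ in range(2):
                if self.migrated < len(self.old):
                    self.buf[self.migrated] = self.old[self.migrated]
                    self.migrated += 1
            if self.migrated == len(self.old):
                self.old = None

    def __getitem__(self, i):
        if not 0 <= i < self.n:
            raise IndexError(i)
        # During a migration, not-yet-copied low indices still live in `old`.
        if self.old is not None and self.migrated <= i < len(self.old):
            return self.old[i]
        return self.buf[i]

    def __len__(self):
        return self.n
```

The cost is an extra indirection on reads while a migration is in flight; that's the trade you make for a flat worst case.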

edit: or in pavpanchekha's comment, they beat me to it :-)

also, just realized you're the author of crafting interpreters. absolutely love that book!




>> In some cases amortized isn't good enough though, and games could certainly be one of them!

Real-time systems, games included, have hard limits on how long something can take in the worst case, not on average. In games this manifests as: all the work has to be done in ~16 ms to render a frame on time, and if we're late the user will perceive a glitch. The consequences can be worse in control systems for real-world machinery, where a glitch can lead to stability problems.

So getting back to TFA, this is part of knowing what the "right" answer is for the context.


So it would be fine if they had said, "okay, for this problem let's add the requirement that every add must be fast, because it's within a frame rendering loop", but (from OP's description) they didn't even seem to understand that's a separate desideratum, or that it doesn't mean the amortized time is bad.


It's probably implicit, since it's a gaming company; still, it might be fair to say the interviewers didn't appreciate the possibilities outside of gaming and unconsciously assumed this context was understood.


> worst case, not the average

Yup, I can't remember which book, but I'm pretty sure I remember reading Michael Abrash describing how some of the algorithms in Quake were changed for worse average performance but better worst-case performance. It's fairly intuitive when you think about the context... and that is the critical point: abstract problems are interesting, but it's important to evaluate them in their full context.


One thing I'd add is that in games the workload is known in a way it isn't for general applications. You can think of the work as "preloading": allocations and disk reads can happen several seconds before the data is needed for rendering or physics/collision. This is a pretty cool special case, because it means you can hit your allocation targets before you need them, giving the best of both worlds: the cache coherence of arrays, and a perfect framerate, because the few remaining memory allocations are concentrated around known game events (cinematic transitions, enemy changes, map streaming boundaries, etc.).
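The preloading idea usually shows up as a pre-sized object pool: allocate once at a known-safe moment (level load, streaming boundary), then only reuse slots inside the frame loop. A hypothetical sketch (the `ParticlePool` name and sizing are made up for illustration):

```python
class ParticlePool:
    """Fixed-capacity pool: one allocation up front, zero in the hot loop."""

    def __init__(self, capacity):
        # Allocate everything at load time, sized for the known workload.
        self.particles = [[0.0, 0.0] for _ in range(capacity)]
        self.live = 0

    def spawn(self, x, y):
        # Reuse a preallocated slot; no allocation during the frame.
        if self.live == len(self.particles):
            return None  # pool exhausted; capacity was budgeted up front
        p = self.particles[self.live]
        p[0], p[1] = x, y
        self.live += 1
        return p

    def clear(self):
        # "Free" everything in O(1) by resetting the live count.
        self.live = 0
```

The point isn't the pool itself, it's that knowing the workload in advance lets you move all the unpredictable-latency work out of the frame loop entirely.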





