
It is not obvious that a two-level model is a cleaner way to think about today's memory access, which has 4-5 levels of caches before you even hit RAM (possibly NUMA), then an SSD, then an HDD, and then maybe big datasets that can only be accessed over the network.

But then, I am a physicist, not an engineer, so to me starting from empirical observations is actually a very good way to construct a model.




Well, you can apply it to any pair of (adjacent) levels of the memory hierarchy. But the main problem with the square-root model is that it only models the cost of a random access, not when those accesses are incurred or when the data is already in cache. (Also, there are only 2-3 levels of caches: no architecture I'm aware of has more than 3, maybe 4 if you count the CPU registers, but register allocation is usually fixed at compile time.)
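
If you want to see where those cache boundaries fall on your own machine, here is a minimal pointer-chasing sketch in C (my own illustration, not from the linked article; the working-set sizes and step count are arbitrary, and it assumes a POSIX clock_gettime). Each load depends on the previous one, so it measures true access latency rather than bandwidth, and the jumps in ns/access mark the cache-level transitions that a single square-root curve smooths over:

  #include <stdio.h>
  #include <stdlib.h>
  #include <stdint.h>
  #include <time.h>

  static volatile size_t sink;  /* keeps the chase from being optimized away */

  /* xorshift64: avoids rand(), whose RAND_MAX may be too small for large n */
  static uint64_t rng_state = 88172645463325252ULL;
  static uint64_t xorshift(void) {
      rng_state ^= rng_state << 13;
      rng_state ^= rng_state >> 7;
      rng_state ^= rng_state << 17;
      return rng_state;
  }

  static double ns_per_access(size_t n, size_t steps) {
      size_t *next = malloc(n * sizeof *next);
      if (!next) { perror("malloc"); exit(1); }
      for (size_t i = 0; i < n; i++) next[i] = i;
      /* Sattolo's algorithm: a random permutation with a single cycle,
         so the chase really walks the whole working set */
      for (size_t i = n - 1; i > 0; i--) {
          size_t j = (size_t)(xorshift() % i);
          size_t t = next[i]; next[i] = next[j]; next[j] = t;
      }
      struct timespec t0, t1;
      size_t p = 0;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (size_t s = 0; s < steps; s++) p = next[p];  /* dependent loads */
      clock_gettime(CLOCK_MONOTONIC, &t1);
      sink = p;
      free(next);
      double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
      return ns / (double)steps;
  }

  int main(void) {
      /* working sets from 8 KiB up to 512 MiB (8-byte elements) */
      for (size_t n = (size_t)1 << 10; n <= (size_t)1 << 26; n <<= 2)
          printf("%10zu elements: %6.1f ns/access\n",
                 n, ns_per_access(n, (size_t)1 << 24));
      return 0;
  }

Compile with something like cc -O2 and the output should step up roughly at your L1, L2, and L3 sizes, then plateau at DRAM latency, which is exactly the staircase the two-level (and square-root) abstractions flatten.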



