Hacker News

Would we be better off if CPU clock speeds were set such that the memory could keep up again?

Absolutely not, because of the locality principle. As Terje Mathisen used to say, "All programming is an exercise in caching."

Locality isn't a property of a specific coding style or methodology; it's just the way programs work. No matter what kind of architecture we end up using 50 years from now, it will have a fast cache of some kind, backed by slower memory of some kind. We'll have a different set of problems to confront in day-to-day development work, but hobbling the CPU won't be the answer to any of them.
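A minimal sketch of the locality principle (my own illustration, not anything from the thread): the same N*N additions, but one traversal walks each row sequentially while the other strides across rows on every access. In compiled code the sequential order is dramatically faster because consecutive elements share cache lines; in CPython the gap is muted by interpreter overhead but usually still visible.

```python
import time

N = 1000
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    # Touches each inner list front to back: sequential, cache-friendly.
    return sum(x for row in m for x in row)

def sum_col_major(m):
    # Jumps to a different inner list on every access: strided, cache-hostile.
    return sum(m[i][j] for j in range(len(m[0])) for i in range(len(m)))

t0 = time.perf_counter(); r = sum_row_major(matrix); t1 = time.perf_counter()
c = sum_col_major(matrix); t2 = time.perf_counter()
assert r == c == N * N
print(f"row-major: {t1 - t0:.3f}s  col-major: {t2 - t1:.3f}s")
```

Both traversals do identical arithmetic; only the memory access pattern differs, which is exactly the property a cache hierarchy exploits.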




Sure, caching is important. But today, we have multiple layers of cache: registers, L1, L2, sometimes L3, and RAM, all of which are caches for nonvolatile (increasingly flash) storage. All of that layering surely has a cost. So what would we get if a processor with no caches between registers and RAM were manufactured using a current process (say, 14 nm), clocked such that DRAM could keep up (so, 100 MHz if another comment on this thread is accurate), and placed on a board with enough RAM for a general-purpose OS as opposed to an RTOS for a single embedded application? Would the net result be any more power efficient than the processors that smartphones use now?


L1/L2 cache levels are transparent optimizations layered between registers and RAM, so eliminating them in a RAM-bound application would save you transistors (and power) without losing performance. But while a few RAM-bound applications might perform equivalently, you've destroyed every other class of application in the process.
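To see why only the RAM-bound case survives, here is a toy direct-mapped cache model (sizes and parameters are my own assumptions, not from the thread). A small working set that is reused hits almost every time, while a pure streaming pass touches each line once and gets no benefit, so removing the cache only leaves the streaming case unharmed:

```python
LINE = 64     # bytes per cache line (assumed)
LINES = 512   # 512 lines = a 32 KiB direct-mapped cache (assumed)

def hit_rate(addresses):
    # Direct-mapped lookup: each line tag maps to exactly one slot.
    cache = [None] * LINES
    hits = 0
    for addr in addresses:
        tag = addr // LINE
        slot = tag % LINES
        if cache[slot] == tag:
            hits += 1
        else:
            cache[slot] = tag
    return hits / len(addresses)

# Reuse: loop over a 4 KiB buffer 100 times, one access per line.
reuse = [a for _ in range(100) for a in range(0, 4096, LINE)]
# Streaming: touch a 4 MiB buffer once, one access per line.
stream = list(range(0, 4 * 1024 * 1024, LINE))

print(hit_rate(reuse))   # near 1.0 after the first warm-up pass
print(hit_rate(stream))  # 0.0: every line is seen exactly once
```

The reuse pattern misses only on the first pass (64 misses out of 6400 accesses); the stream never revisits a line, so the cache is pure overhead for it.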

Power efficiency is more complex: it's often better to burst briefly and get back to sleep sooner than to drag the work out at 100 MHz, but a specific answer would depend on many factors.
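The "race to idle" tradeoff is just energy = power x time over the whole interval. With some purely illustrative numbers (all assumed, not measurements of any real chip), a fast burst followed by deep sleep can use less total energy than stretching the same work across the interval at low frequency:

```python
ACTIVE_FAST = 2.0   # W while active at high frequency (assumed)
ACTIVE_SLOW = 0.3   # W while active at 100 MHz (assumed)
IDLE = 0.01         # W in a sleep state (assumed)
WINDOW = 1.0        # s: the period in which the job must finish

def energy(active_w, runtime_s, window_s=WINDOW, idle_w=IDLE):
    # Total energy over one window: an active burst, then idle for the rest.
    return active_w * runtime_s + idle_w * (window_s - runtime_s)

fast = energy(ACTIVE_FAST, 0.05)  # job takes 50 ms at full speed
slow = energy(ACTIVE_SLOW, 1.0)   # same job fills the whole second at 100 MHz
print(fast, slow)  # 0.1095 vs 0.3 joules under these assumptions
```

Whether bursting actually wins depends on the real active/idle power ratio, wake-up latency and transition energy, and leakage, which is why there's no one-line answer.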



