
Throw a ridiculous amount of RAM at it is a more correct assessment. NVMe reads are still an “I/O” and that is slow. And for at least 10 years, buying enough RAM to have all of the interesting parts of an OLTP Postgres database either in shared_buffers or in the OS-level buffer cache has been completely feasible.
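
To make that concrete, a minimal postgresql.conf sketch for a hypothetical server with ~1 TB of RAM might look like this (the figures are illustrative assumptions, not tuning advice):

    # illustrative fragment for a ~1 TB RAM box; numbers are assumed, not a recommendation
    shared_buffers = 256GB        # keep the hot OLTP working set in Postgres' own cache
    effective_cache_size = 768GB  # hint to the planner that the OS page cache holds most of the rest

Whatever doesn't fit in shared_buffers is still served from the OS-level buffer cache, so steady-state reads rarely have to touch the drives at all.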



> NVMe reads are still an “I/O” and that is slow

It's orders of magnitude faster than SAS/SATA SSDs, and you can throw 10 of them into a 1U server. It's nowhere near "slow", and it's easy enough to end up CPU-bottlenecked before you get I/O-bottlenecked.
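
As a back-of-the-envelope sketch (every figure here is a ballpark assumption):

    # rough throughput comparison; all numbers are assumptions, not benchmarks
    SATA_SSD_GBPS = 0.55    # ~550 MB/s, capped by the 6 Gb/s SATA link
    NVME_GEN4_GBPS = 7.0    # ~7 GB/s sequential read for a fast PCIe 4.0 x4 drive
    DRIVES_PER_1U = 10      # the "10 of them" in a 1U chassis

    aggregate = NVME_GEN4_GBPS * DRIVES_PER_1U
    print(f"one NVMe drive is ~{NVME_GEN4_GBPS / SATA_SSD_GBPS:.0f}x a SATA SSD")
    print(f"ten of them give ~{aggregate:.0f} GB/s of raw sequential read")

With that kind of raw read bandwidth on tap, most query workloads will saturate the CPUs long before the drives.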

But yes, a pair of 1TB-RAM servers ought to cost you less than half a year's worth of a developer's salary.


An array of modern SSDs can get close to RAM-like bandwidth, albeit still with significantly worse latency. It's not that hard to push the bottleneck elsewhere for a lot of workloads; high-performance fileservers, for example, need pretty beefy CPUs to keep up.
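
A rough sense of the latency gap, with assumed round numbers:

    # ballpark latency comparison; both figures are assumptions, not measurements
    DRAM_LATENCY_NS = 100        # random access to main memory
    NVME_LATENCY_NS = 80_000     # ~80 us for a random 4K read on a good NVMe drive

    ratio = NVME_LATENCY_NS / DRAM_LATENCY_NS
    print(f"an NVMe random read is roughly {ratio:.0f}x slower than a DRAM access,")
    print("even when an array of drives matches DRAM-class sequential bandwidth")

That's why streaming-heavy workloads can push the bottleneck to the CPU while latency-sensitive random access still wants the data in RAM.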



