Just lots, wherever you look. io_uring and eBPF are the new hotness, but in the past few decades we've had operating systems scale up from a few CPUs to hundreds or thousands, we've had filesystems like ZFS and Btrfs with checksums and snapshots and versions, there are namespaces and containers, hypervisors built into the OS and paravirtualization, multiqueue devices with command queue programming models, all sorts of interesting cool things.
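
To make the "command queue programming model" concrete, here is a minimal io_uring sketch using liburing (an assumption on my part: it assumes liburing is installed and that /etc/hostname is a readable file; error handling is omitted for brevity). Instead of one syscall per operation, you queue submission entries into a ring, submit the whole batch with a single syscall, then reap completions from a second ring:

    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);          /* shared SQ/CQ rings, 8 entries */

        int fd = open("/etc/hostname", O_RDONLY);  /* any readable file works */
        char buf[256];

        /* Queue a read into the submission ring; nothing executes yet. */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);

        io_uring_submit(&ring);                    /* one syscall submits the batch */

        /* Reap the result from the completion ring. */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }

Compile with gcc demo.c -luring. The same two rings handle batches of mixed operations (reads, writes, accepts, timeouts), which is what makes this a command queue rather than a per-call syscall interface.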

Before someone jumps in and says we've had all these things before, including io_uring and eBPF (if you squint), scaling to hundreds of CPUs, versioning filesystems, and so on: that is true. IRIX scaled on a select few workloads on multimillion-dollar hardware, and even that fell in a heap when you looked at it wrong. The early log-structured filesystems that could do versioning and snapshots had horrific performance problems. IBM invented the hypervisor in the 1960s, but it was hardly useful outside multimillion-dollar machines for a long time. Asynchronous syscall batching was tried (in Linux, even) about 20 years ago, as were in-kernel virtual machines, etc.

So my point is not that there isn't a very long pipeline of research and ideas in computer science. It is that said long pipeline is still yielding great results as things get polished, stars align the right way, people keep standing on the shoulders of others, and the final missing idea falls into place or the right person puts in the effort to turn an idea into code. And those things have been continually going into practical operating systems that everybody uses.
