
Companies managing fleets of many thousands of computers care about efficiency gains in the sub-percent range - so even a 5% performance penalty is a complete non-starter. I doubt the kernel is nearly as featureful as Linux, either.

Now, could one of these academic kernels have sheer engineering effort put into optimizing that 5-15% regression away? Probably.




Didn't the recent CPU bug mitigations add 10% perf penalties to a lot of big server farms?

We constantly pay this cost for other security-related issues, but when it comes to a systemic, one-time penalty that almost entirely eliminates a class of bug, we freeze up.

Obviously it's a lot of engineering effort to transition, but we don't think twice about other security issues.


Sure, the mitigations added around that much of a performance penalty. But given the nature of the exploits, you only need to build your kernel with the mitigations enabled if you run untrusted code on the machine... I'm sure many companies have weighed the pros and cons of that decision and said "by the time we've got foreign code running on this machine, we're already hosed, so the performance cost isn't worth it."

In general, these types of decisions aren't uncommon.


Or just don't use GC.


Whether reference counting or GC is more efficient depends on your workload. There are common workloads where reference counting is better, and others where GC is better. In general, neither always wins.

So saying "just don't use GC" as an answer to improving throughput shows ignorance.

So what are the real tradeoffs?

GC is less work, is easier to get right, and lets you handle complex self-referential data structures more easily.

Reference counting lets you handle real-time constraints more easily, is simpler to understand, and results in faster freeing of potentially scarce external resources.
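To make that last point concrete, here's a minimal Rust sketch (the Handle type is hypothetical, standing in for any scarce resource like a file handle or database connection) showing how a refcounted resource is released at a deterministic point - the instant the last reference goes away - rather than whenever a collector next runs:

    use std::rc::Rc;

    // Hypothetical wrapper around a scarce external resource.
    struct Handle(&'static str);

    impl Drop for Handle {
        fn drop(&mut self) {
            // With reference counting this runs the moment the last
            // Rc clone is dropped - not at some later GC cycle.
            println!("releasing {}", self.0);
        }
    }

    fn main() {
        let a = Rc::new(Handle("db connection"));
        let b = Rc::clone(&a); // refcount = 2
        drop(a);               // refcount = 1, resource still alive
        println!("still held");
        drop(b);               // refcount = 0 -> Drop runs right here
    }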


Having built reference-counted systems, they're anything but easy to understand unless you greatly restrict the graph of objects and the references they can have to each other. And once you've restricted who can point to what, you've lost a lot of the flexibility that an RC system affords.
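The canonical restriction, sketched below in Rust (one illustration, not necessarily the parent's system): back-references have to be weak, because two objects holding strong references to each other keep their refcounts above zero forever and never get freed:

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        // Strong pointers go one way (parent -> child); the back
        // pointer is Weak. If it were Rc too, parent and child would
        // form a cycle and neither refcount could ever reach zero.
        parent: RefCell<Weak<Node>>,
        children: RefCell<Vec<Rc<Node>>>,
    }

    fn main() {
        let parent = Rc::new(Node {
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(Vec::new()),
        });
        let child = Rc::new(Node {
            parent: RefCell::new(Rc::downgrade(&parent)),
            children: RefCell::new(Vec::new()),
        });
        parent.children.borrow_mut().push(Rc::clone(&child));

        // The weak back pointer doesn't keep the parent alive:
        assert_eq!(Rc::strong_count(&parent), 1);
        assert_eq!(Rc::strong_count(&child), 2);
    }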


The system itself can be hard to understand.

The basis on which the system works is easy to understand.


"So saying "just don't use GC" as a answer to improving throughput shows ignorance."

It could also mean they use Rust, memory pools, some kind of checker that catches those errors, or separation logic - probably in that order of popularity.
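To illustrate the memory-pool option (a toy sketch in Rust; production arenas such as the typed-arena or bumpalo crates are far more sophisticated): objects are carved out of one pool and all freed together when the pool dies, so there's no per-object refcount and no tracing pass:

    // A toy pool for one type. Handles are indices; they stay valid
    // for the life of the pool, and everything is freed at once when
    // the pool goes out of scope.
    struct Pool<T> {
        items: Vec<T>,
    }

    impl<T> Pool<T> {
        fn new() -> Self {
            Pool { items: Vec::new() }
        }

        // Allocation is just a push onto the backing vector.
        fn alloc(&mut self, value: T) -> usize {
            self.items.push(value);
            self.items.len() - 1
        }

        fn get(&self, handle: usize) -> &T {
            &self.items[handle]
        }
    }

    fn main() {
        let mut pool = Pool::new();
        let a = pool.alloc("request state");
        let b = pool.alloc("more request state");
        println!("{} / {}", pool.get(a), pool.get(b));
    } // the whole pool is freed here, in one shot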

As a counterpoint to "don't use GC," there are also low-latency and real-time implementations of RC/GC to consider. A lot of developers using RC/GC languages don't know they exist. Maybe we need a killer app built on one to spread awareness far and wide - and more open implementations of the concept, too.


> there's also low-latency or real-time implementations of RC/GC to consider as well

These exist, but they tend to have poor throughput, frequently cap the amount of memory you're allowed to use, and typically compare poorly to commonly used garbage collectors.



