
> The oom killer only kicks in sometimes, e.g. when programs make truly egregious allocation requests.

This is a myth. Allocation failures happen at least as much on small allocations as on big ones. In fact, I see OOMs every day, and the vast majority of the time the trigger is a small allocation--for example, the kernel trying and failing to allocate a socket buffer.
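
Here's a minimal C sketch of the underlying overcommit behavior (assuming the default vm.overcommit_memory=0 heuristic; the 64 GiB figure is arbitrary). The point is that the truly egregious request succeeds, and the failure surfaces later, at some unrelated page fault:

    /* Sketch: illustrates Linux overcommit (assumes default
       vm.overcommit_memory=0; 64 GiB chosen only for illustration).
       The enormous malloc() succeeds; the process dies later,
       while faulting pages in. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t sz = (size_t)64 << 30;   /* 64 GiB, likely > RAM + swap */
        char *p = malloc(sz);
        if (p == NULL) {                /* rarely taken under overcommit */
            perror("malloc");
            return 1;
        }
        puts("malloc of 64 GiB succeeded; touching pages...");
        memset(p, 1, sz);               /* the OOM killer fires somewhere
                                           in here, not at the malloc() */
        free(p);
        return 0;
    }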

And that's really the root of the issue. You have a giant application with a 200GB committed working set doing important, critical work, and it gets shot down because some other process tried to initiate an HTTP request. It's a ludicrous situation. And people defending Linux here by saying the same problem exists everywhere else are wishful apologists--the situation is absolutely not the same everywhere else.
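
The usual band-aid (a sketch, assuming root or CAP_SYS_RESOURCE, since lowering the score needs that) is to exempt the critical process via /proc/self/oom_score_adj so the kernel has to pick another victim:

    /* Sketch: exempt this process from the OOM killer.
       Requires root or CAP_SYS_RESOURCE to lower the score. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/oom_score_adj", "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fputs("-1000", f);  /* OOM_SCORE_ADJ_MIN: never select us */
        fclose(f);
        /* ... the critical 200GB workload runs here ... */
        return 0;
    }

Of course this just moves the bullet to someone else's process, which is rather the point.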

Even setting aside the issue of strict memory accounting--which, BTW, both Windows and Solaris are perfectly capable of doing, and do by default--Linux could still do dramatically better. Clearly there's some level of unreliability people are willing to put up with for the benefits of efficiency, but Linux blew past that equilibrium long ago.
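
For what it's worth, Linux can be made to do strict accounting too; it's just not the default. A sketch (run as root; equivalent to `sysctl vm.overcommit_memory=2`):

    /* Sketch, run as root: enable strict commit accounting.
       With vm.overcommit_memory=2, the commit limit is
       swap + overcommit_ratio% of RAM, and user allocations beyond
       it fail with ENOMEM at malloc()/mmap() time instead of
       summoning the OOM killer later. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fputs("2", f);      /* 0=heuristic, 1=always, 2=strict */
        fclose(f);
        return 0;
    }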




"E.g." == "for example"; other cases are permitted. What I'm saying is that the OOM killer only kicks in under some circumstances, not necessarily when one wants it to.



