
I think there's some confusion here between the copy-on-write mechanism that makes fork() so fast under Unix and Linux's memory overcommitting, which lets malloc() return success for memory that will in fact only be allocated as it is touched. (e.g. you can malloc() 32GB, but as long as you never write to it, the kernel doesn't actually allocate that much physical memory.)
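To illustrate (a minimal sketch, assuming a 64-bit Linux box with overcommit at its default heuristic setting): the malloc() below returns immediately, and resident memory only grows as pages are actually written.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t size = 32UL << 30;      /* 32 GB of virtual address space */
        char *p = malloc(size);        /* typically succeeds under overcommit */
        if (p == NULL) { perror("malloc"); return 1; }
        memset(p, 1, 1UL << 20);       /* touch just 1 MB: RSS grows by ~1 MB */
        getchar();                     /* pause here: compare VSZ vs RSS in ps/top */
        free(p);
        return 0;
    }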

The latter causes much trouble because people tend to assume that, say, a database can allocate a certain amount of memory up front and then be sure it will never run out as long as it does not explicitly allocate more. This is not true on Linux at all: if the system runs out of memory, a process can be terminated at any time (with an uncatchable SIGKILL from the OOM killer, plus an OOM-killer message in the system log).
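One common mitigation (my own sketch, not something from the article): prefault the whole allocation at startup so a shortage surfaces during initialization rather than at some random write later. Note the mmap() man page warns the call can still succeed even if population falls short, so this narrows the window rather than closing it.

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t size = 1UL << 30;  /* 1 GB */
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        /* pages are (normally) resident now; later writes reuse them */
        munmap(p, size);
        return 0;
    }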

I suspect the diagnosis in the article is wrong and something else is causing the problem. My polite guess would be something in the JVM (you are all having problems with the JVM... hmmm...). Merely forking a process is a tiny operation under Linux, and I don't believe memory overcommitting is relevant here at all. In any case, you can simply turn off overcommit with a sysctl to verify the theory:
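Something along these lines (the knobs are vm.overcommit_memory and vm.overcommit_ratio; the ratio value here is just illustrative):

    # 0 = heuristic (default), 1 = always overcommit, 2 = never overcommit
    sysctl vm.overcommit_memory

    # disallow overcommit: commit limit = swap + overcommit_ratio% of RAM
    sudo sysctl -w vm.overcommit_memory=2
    sudo sysctl -w vm.overcommit_ratio=80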



