
Yup, though there are risks with that approach too. You have to make sure the ulimit for core size is large enough (it almost never is by default). Also, a core dump can take a very long time if your program's address space is large: it may involve writing tens of gigabytes to disk. So not only do you need the file system space, you also need to be prepared to wait tens of minutes while your program dumps core.
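
A minimal sketch of how a process could raise its own core-size limit at startup on Linux (the error handling and placement are illustrative; an unprivileged process can only raise the soft limit up to the existing hard limit):

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
      struct rlimit rl;

      /* Raise the soft core-size limit to whatever the hard limit allows. */
      if (getrlimit(RLIMIT_CORE, &rl) != 0) {
          perror("getrlimit(RLIMIT_CORE)");
          return 1;
      }
      rl.rlim_cur = rl.rlim_max;
      if (setrlimit(RLIMIT_CORE, &rl) != 0) {
          perror("setrlimit(RLIMIT_CORE)");
          return 1;
      }

      /* ... rest of the program; a crash from here on can dump a core
         as large as the hard limit permits ... */
      return 0;
  }

The same thing can be done from the shell before launching the program with `ulimit -c unlimited`.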



While what you say is true, this is not really a problem in practice; most of the time the process doesn't have enough memory allocated for it to be a serious issue, and the coredump file is written in a smart way [1]: the address space usually has many 'gaps', and instead of writing these all out as \0 bytes, sparser storage techniques are used. The end result is that you can have a coredump file whose reported size is 1GB while only 100MB is actually written to disk (see the sketch below).

[1] https://en.wikipedia.org/wiki/Core_dump#Format
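
A small sketch illustrating that sparse-file effect: compare a core file's apparent size (st_size) with the space actually allocated on disk (st_blocks, counted in 512-byte units). The path "core" is just a placeholder for wherever your system writes dumps:

  #include <stdio.h>
  #include <sys/stat.h>

  int main(int argc, char *argv[])
  {
      const char *path = argc > 1 ? argv[1] : "core";  /* placeholder path */
      struct stat st;

      if (stat(path, &st) != 0) {
          perror(path);
          return 1;
      }

      /* st_size is the apparent (logical) size; st_blocks counts 512-byte
         blocks actually allocated, which is much smaller for sparse files. */
      printf("apparent size: %lld bytes\n", (long long)st.st_size);
      printf("on disk:       %lld bytes\n", (long long)st.st_blocks * 512LL);
      return 0;
  }

From the shell, comparing `ls -l` (apparent size) with `du -h` (blocks allocated) on the core file shows the same difference.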




