
One big problem with pools is that you have to deallocate the pool at some point (or else your program's working set will grow without bound), and when you do, all the pointers into that pool become dangling.
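To make the hazard concrete, here is a minimal Go sketch of a bump-allocating pool (the `Pool` type and its methods are hypothetical, not a real library). Go's GC keeps the memory itself alive, but once the pool is reset, old pointers silently alias slots that the next allocation will overwrite, which is the moral equivalent of a dangling pointer:

```go
package main

import "fmt"

// Node is a pooled object; all Nodes live in one backing slice.
type Node struct{ Value int }

// Pool is a hypothetical bump allocator over a fixed backing slice.
type Pool struct {
	buf  []Node
	next int
}

func NewPool(capacity int) *Pool { return &Pool{buf: make([]Node, capacity)} }

// Alloc hands out the next slot; returned pointers alias the backing array.
func (p *Pool) Alloc(v int) *Node {
	n := &p.buf[p.next]
	p.next++
	n.Value = v
	return n
}

// Reset "deallocates" the whole pool at once. Outstanding pointers now
// alias memory that subsequent Allocs will reuse and overwrite.
func (p *Pool) Reset() { p.next = 0 }

func main() {
	p := NewPool(8)
	a := p.Alloc(1)
	p.Reset()
	b := p.Alloc(2)              // reuses a's slot
	fmt.Println(a.Value, a == b) // a was clobbered behind our back
}
```

In a language without a GC the situation is strictly worse: the stale pointer refers to freed memory rather than merely recycled memory.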

Another problem with pools is that you can't deallocate individual objects inside a pool. This is bad for long-lived, highly mutable data stores (think constantly mutating DOMs, in which objects appear and disappear all the time, or what have you).
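The usual mitigation for this is to bolt a free list onto the pool so individual slots can be recycled; a rough Go sketch, with hypothetical names, of what that extra machinery looks like (note it buys per-object reuse at the cost of extra bookkeeping, and stale indices into freed slots are still a hazard):

```go
package main

import "fmt"

// FreeListPool is a hypothetical pool that supports freeing individual
// slots by keeping an explicit list of reusable indices.
type FreeListPool struct {
	buf  []int // payloads; a real pool would hold structs
	free []int // indices of slots available for reuse
	next int
}

func New(capacity int) *FreeListPool {
	return &FreeListPool{buf: make([]int, capacity)}
}

// Alloc prefers a previously freed slot before bumping the high-water mark.
func (p *FreeListPool) Alloc(v int) int {
	var i int
	if n := len(p.free); n > 0 {
		i = p.free[n-1] // reuse a freed slot first
		p.free = p.free[:n-1]
	} else {
		i = p.next
		p.next++
	}
	p.buf[i] = v
	return i
}

// Free returns one slot to the pool without disturbing its neighbors.
func (p *FreeListPool) Free(i int) { p.free = append(p.free, i) }

func main() {
	p := New(4)
	a := p.Alloc(10)
	p.Free(a)
	b := p.Alloc(20) // reuses a's slot
	fmt.Println(a == b)
}
```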




Another big downside to pools is that many GC implementations will scan only 'live' objects. Large object pools artificially increase the number of objects that need to be scanned during GC, negating a (sometimes) useful GC efficiency trick.


Precisely. This is an especially important point in response to the oft-repeated "just use pools if you want to avoid the GC in a GC'd language" fallacy.


If your GC is triggered after the allocated memory increases by X% (which is fairly common), then this technique is effective, since it lowers allocation rate.
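In Go specifically, the collector is paced by GOGC: by default a cycle starts roughly when the live heap doubles. A standard way to lower the allocation rate is `sync.Pool` from the standard library; a small sketch (the `render` helper is made up for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool recycles buffers so steady-state work allocates almost nothing,
// which slows heap growth and therefore GOGC-paced collections.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// render is a hypothetical hot-path function that would otherwise
// allocate a fresh buffer on every call.
func render(s string) string {
	b := bufPool.Get().(*bytes.Buffer)
	defer func() {
		b.Reset()
		bufPool.Put(b)
	}()
	b.WriteString("<p>")
	b.WriteString(s)
	b.WriteString("</p>")
	return b.String()
}

func main() {
	fmt.Println(render("hi"))
}
```

Note that `sync.Pool` is a cache, not an arena: the GC is free to drop pooled objects between cycles, so it only helps allocation rate, not worst-case footprint.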

Also, Go doesn't scan arrays that are marked as containing no pointers, so representing an index as a massive array of values has proven quite effective for me.
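A sketch of that layout, with hypothetical types: the trick is to keep every field a value type (fixed-size byte arrays instead of strings, indices instead of pointers), so the backing array contains no pointers and the GC marks it without walking a million elements:

```go
package main

import "fmt"

// Record holds only value types: a fixed-size key instead of a string
// (a string header contains a pointer) and an index instead of a *Record.
// The GC marks the whole backing array without scanning each element.
type Record struct {
	Key [16]byte
	Val uint64
}

func main() {
	idx := make([]Record, 1_000_000) // one pointer-free allocation

	var k [16]byte
	copy(k[:], "user:42")
	idx[7] = Record{Key: k, Val: 99}

	// A toy linear scan stands in for whatever lookup the index uses.
	for i := range idx[:16] {
		if idx[i].Key == k {
			fmt.Println("found at", i, "val", idx[i].Val)
			break
		}
	}
}
```

The same million records as individually allocated structs full of strings would each be a separately scanned object, which is exactly the pathology described upthread.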


I was talking about pools as a standalone memory management technique, not a supplement to GC.


Fair enough. I suppose there are a significant number of applications where there isn't an obvious way to perform coarse-grained partitioning of object lifetimes. If you are a language designer looking to force a memory management scheme on all of your users, pools would be a bad choice.





