
> Similarly, when performance matters, reference counting is essentially deterministic, much easier to understand and model.

Is it? What happens if you remove that one last reference to a long chain of objects? You might unexpectedly be doing a ton of freeing and have a long pause. And free itself can be expensive.
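That cascade is easy to observe in CPython, whose object lifetimes are refcounted. A minimal sketch (the `Node` class and counter are made up for illustration): dropping the one reference to the head of a chain frees every node synchronously, at that exact `del`.

```python
import time

class Node:
    freed = 0  # counts how many nodes have been finalized

    def __init__(self, next_node=None):
        self.next = next_node

    def __del__(self):
        Node.freed += 1

# Build a chain of 1000 nodes; only `head` keeps it alive.
head = None
for _ in range(1000):
    head = Node(head)

t0 = time.perf_counter()
del head  # last reference gone: the whole chain is freed right here
pause_s = time.perf_counter() - t0
```

The entire teardown cost lands inside that `del` (`pause_s`), not at some later GC cycle, which is exactly the pause being described.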





Technically it's not a pause like the ones introduced by a typical stop-the-world (STW) tracing GC: it does not stop the other threads, so the app can continue to work during that cleanup.

And it pops up in the profiler immediately, with a nice stack trace showing where it was rooted. Then you fix it by, e.g., moving the cleanup to a background thread to unblock this one, not cleaning up at all (e.g. if the process is going to die soon anyway), or remodeling the data structure so it doesn't contain so many tiny objects, etc.
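The "move cleanup to a background thread" fix can be sketched like this in CPython (the `graveyard` queue and `reaper` thread are hypothetical names, not a standard API): the hot thread hands its reference to another thread, so the last reference dies, and the expensive teardown runs, off the hot path.

```python
import queue
import threading

done = threading.Event()

class Expensive:
    freed_on = None  # records which thread ran the finalizer

    def __del__(self):
        Expensive.freed_on = threading.current_thread().name
        done.set()

graveyard = queue.Queue()

def reaper():
    while True:
        obj = graveyard.get()
        if obj is None:  # shutdown sentinel
            return
        del obj  # last reference dies on this thread: the free happens here

threading.Thread(target=reaper, name="reaper", daemon=True).start()

obj = Expensive()
graveyard.put(obj)  # hand our reference to the background thread...
del obj             # ...so this del no longer triggers the teardown

done.wait(timeout=2)
```

The hot thread's `del` is now cheap; the cascade runs on the reaper thread instead.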

Essentially this is exactly "way more deterministic and easier to understand and model". No one said it is free of performance traps.

> And free itself can be expensive.

The total amortized cost of malloc/free is usually much lower than the total cost of tracing, unless you give the tracing GC a horrendous amount of additional memory (>10x the resident live set).

malloc/free are especially efficient when used for managing bigger objects. But even with tiny allocations, e.g. 8 bytes in size (which are rarely kept on the heap anyway), I found that modern allocators like mimalloc or jemalloc easily outperform modern Java GCs (in terms of CPU cycles spent, not wall clock).


> Is it?

Yes.

> What happens if you remove that one last reference to a long chain of objects?

A mass free sometime vaguely in the future, based on the GC's whims and knobs and tuning, when doing non-refcounting garbage collection.

A mass free there and then, when refcounting. Which might still cause problems, but at least deterministic ones: problems that will show up in ≈any profiler exactly where the last reference was lost, which you can then choose to ameliorate (at least when you have source access) by picking a more appropriate allocator. Or by deferring cleanup over several frames, if that's what you're into. Or by eating the pause in exchange for less cache thrashing and higher throughput. Or by mixing strategies depending on application context (a game's (un)loading screen probably prioritizes throughput, streaming mid-gameplay probably prioritizes framerate...)

> You might unexpectedly be doing a ton of freeing and have a long pause. And free itself can be expensive.

Much more rarely than GC pauses cause problems, in my experience.



