
I don't disagree with you, but it might help to be clear here. When people (including me) say "GC" they usually mean tracing GC. Technically, reference counting and other techniques are also GC. Except for short-running scripts (where memory consumption literally can't grow enough to matter) or the very hardest of hard real-time (where any kind of dynamic allocation is verboten), every program needs some way to find and recycle garbage. Games are not in either of those excepted categories.

I'm not just saying this to be pedantic. It's important to note that every GC method involves some overhead and has the potential to make execution time less deterministic. This is certainly true of reference counting, generally true of object pooling, etc. There are even cases where tracing GC improves locality of reference vs. other methods, and thus improves performance and predictability, though those are rare AFAIK. So when you say "extract every last CPU cycle", it's worth noting that such a goal does not necessarily favor some other approach over tracing GC.

Many would also say that an aversion to tracing GC does not justify having nothing beyond plain malloc/free. No borrow checker, no reference counting (which can be done with very low overhead), no arena or custom-allocator support? Not in 2023. Leaving programmers entirely on their own for memory correctness is an eminently questionable choice, especially when other language features make gratuitous or hidden allocations easy. As a C programmer for 30 years I'm not going to say it's an illegal choice, but it certainly needs more than "but games" to justify it.
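As one data point for the "very low overhead" claim: a non-atomic, intrusive refcount in C compiles down to roughly an increment and a branch per retain/release. A minimal sketch (Buf, buf_new, etc. are made-up names, not from any particular codebase):

    #include <stdlib.h>

    typedef struct {
        int    refs;   /* non-atomic: cheap, fine for single-threaded ownership */
        size_t len;
        char   data[]; /* payload */
    } Buf;

    Buf *buf_new(size_t len) {
        Buf *b = malloc(sizeof(Buf) + len);
        if (b) { b->refs = 1; b->len = len; }
        return b;
    }

    static inline void buf_retain(Buf *b)  { b->refs++; }

    static inline void buf_release(Buf *b) {
        if (--b->refs == 0)   /* last owner reclaims the allocation */
            free(b);
    }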



I have never seen a game developer argue for "no GC" in games, where "no GC" is taken to include no arena allocators. Anyone who advocates for this is misguided. It's much better to use arena/pool allocators than to malloc/free dynamically in games.


In my own domain of storage servers and such, I've had success using arenas - per request, usually, but also per session, etc. For things with longer lives, object pooling has worked well. Very rarely have I needed much beyond those two, and I suspect the same might be true in games, though I've never worked in that space. When I've had to work on codebases that did a lot more "random" allocations, I've usually ended up having to spend time migrating toward one or (more often) both of those approaches, just to keep the bug load down and myself sane.
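For what it's worth, the object-pool half of that is also tiny: a free-list sketch in C, assuming fixed-size objects whose unused slots can carry a next pointer (Conn and ConnPool are hypothetical names, not from any real codebase):

    #include <stdlib.h>

    typedef struct Conn {          /* hypothetical pooled object, e.g. a session */
        int fd;
        struct Conn *next_free;    /* link used only while the object sits in the pool */
    } Conn;

    typedef struct {
        Conn *free_list;           /* start with ConnPool pool = {0}; */
    } ConnPool;

    Conn *pool_get(ConnPool *p) {
        if (p->free_list) {                /* reuse a previously released object */
            Conn *c = p->free_list;
            p->free_list = c->next_free;
            return c;
        }
        return malloc(sizeof(Conn));       /* pool empty: fall back to the heap */
    }

    void pool_put(ConnPool *p, Conn *c) {  /* recycle instead of free() */
        c->next_free = p->free_list;
        p->free_list = c;
    }

Long-lived objects never churn the general-purpose allocator, and the explicit "put it back" call keeps lifetimes visible in the code.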

The really cool thing is that supporting custom allocators in a language and its libraries (e.g. Zig) makes both approaches easier, with less boilerplate. I see what's going on with the borrow checker in Rust and - as a long-time fan of static analysis - I understand that it might theoretically be better, but I find it hard to get very excited when I see that the implementation difficulty and cognitive load are so much higher than those of a simpler approach I've already seen work well.
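To make the "custom allocator support" point concrete: the Zig-style pattern amounts to libraries taking an allocator parameter instead of calling malloc themselves, which you can approximate even in plain C. A rough sketch (Allocator, dup_string, and malloc_alloc are made-up names for illustration):

    #include <stdlib.h>
    #include <string.h>

    typedef struct Allocator {
        void *(*alloc)(void *ctx, size_t n);   /* could be backed by malloc, an arena, a pool... */
        void  *ctx;
    } Allocator;

    /* A library routine that never hard-codes malloc: the caller decides the lifetime. */
    char *dup_string(Allocator *a, const char *s) {
        size_t n = strlen(s) + 1;
        char *out = a->alloc(a->ctx, n);
        if (out) memcpy(out, s, n);
        return out;   /* reclaimed whenever the caller's allocator is reset or destroyed */
    }

    /* Trivial malloc-backed adapter; an arena-backed one would look the same. */
    static void *malloc_alloc(void *ctx, size_t n) { (void)ctx; return malloc(n); }
    /* Allocator heap = { malloc_alloc, NULL }; */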


this is exactly how I feel! in school, the idea of pool/arena allocators wasn't touched upon at all—the options, as they were presented to me, were either "manual memory management" (malloc/free/new/delete), "RAII" (smart pointers), or "garbage collection". I had not heard of the concept of "lifetimes", outside of scopes when using RAII, until later, and even then probably only after Rust made that, like, its whole thing.

nobody ever told me, check it out, it's not hard to make like the dumbest possible "bump allocator", where you allocate a hunk of memory and dole it out bit by bit as the application requests it, and then later you can just free the whole hunk at once… or even just move the bump pointer back to the start, and reuse it next frame/request/etc.! that's like 95+% of the reason why people reach for things like garbage collection to begin with, and it seems so obvious in hindsight!
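for anyone who hasn't seen it spelled out, here's roughly how small that "dumbest possible bump allocator" can be in C (a sketch, not production code; the names and the 16-byte alignment are just illustrative choices):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint8_t *base;   /* the big hunk of memory */
        size_t   cap;    /* total size of the hunk */
        size_t   used;   /* the bump pointer, as an offset from base */
    } Arena;

    int arena_init(Arena *a, size_t cap) {
        a->base = malloc(cap);
        a->cap  = cap;
        a->used = 0;
        return a->base ? 0 : -1;
    }

    /* dole the hunk out bit by bit; returns NULL when it runs dry */
    void *arena_alloc(Arena *a, size_t n) {
        size_t offset = (a->used + 15) & ~(size_t)15;   /* keep 16-byte alignment */
        if (offset > a->cap || n > a->cap - offset) return NULL;
        a->used = offset + n;
        return a->base + offset;
    }

    /* "free" everything at once: just move the bump pointer back to the start */
    void arena_reset(Arena *a) { a->used = 0; }

    void arena_destroy(Arena *a) { free(a->base); a->base = NULL; }

call arena_reset at the end of each frame/request and every allocation made during it is reclaimed in one constant-time step.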

actually, it was this top-shelf HN comment that really spelled it all out for me, and again, it all seems so obvious in hindsight: https://news.ycombinator.com/item?id=26443768&p=2#26451692


Pretty awesome to see Andy Kelley, Steve Klabnik, and Joe Blow all writing at length about this stuff in one thread, isn't it? In a way, that was my original point. I didn't come here to say it's wrong for Cakelisp to forego tracing GC in favor of nothing at all, but just that the rationale - and maybe compatible solutions to common issues with that approach - should be explained better. I hope the author takes that to heart, because (as plenty of other comments here indicate) without explanation it's a blocker for many people.


Yes, to be clear, I am talking about general-purpose GC outside the developer's control. Certainly not arguing for the most low-level memory management imaginable. All the techniques you mentioned are valuable.


Oh, it's worse than that -- I've seen people pedantically argue that manual malloc and free count as "GC", which you can indeed make a historical argument for, but it's not what most people mean.



