Can you disable the GC? In my last role we had a large C++ application with embedded Lua. I didn't touch it much, but I would have thought that while most of what it did was calling out to our C++ API, the Lua objects, tables, etc. would still be created and need to be garbage collected as normal.
For such a dynamic language Lua is quite good about avoiding hidden dynamic allocation. Creating a new closure (or plain Lua function), coroutine, string, or table will, of course, allocate a new object. But all of those are either explicit or fairly obvious. Lua's iterator construct is careful to avoid dynamic allocation--I believe it's one reason why iterators take two explicit state arguments. And Lua has a special type for C functions (as opposed to Lua functions), allowing you to push, pass, and call C functions without dynamic allocation.
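A minimal sketch of that two-argument iterator protocol; the names iter and stateless_ipairs are illustrative, not anything from the standard library:

    -- The generic 'for' keeps three values on the stack: the iterator
    -- function, an invariant state value, and a control variable. A
    -- stateless iterator uses only those, so no closure or other object
    -- is allocated per loop or per step.
    local function iter(t, i)      -- t: invariant state, i: control variable
      i = i + 1
      local v = t[i]
      if v ~= nil then return i, v end
    end

    local function stateless_ipairs(t)
      return iter, t, 0            -- iterator, state, initial control value
    end

    for i, v in stateless_ipairs({ "a", "b", "c" }) do
      print(i, v)
    end

The stock pairs follows the same pattern, returning next, t, nil (absent a __pairs metamethod), which is why plain table iteration doesn't allocate.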
Likewise for lightuserdata (just a void pointer), and of course numbers (floats and integers) and booleans--no dynamic allocation.
Nested function calls could result in a (Lua VM) stack reallocation. But Lua guarantees tail call optimization. And the default stack size is a compile-time constant.
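For example, a proper tail call (return f(...)) reuses the caller's frame, so the following runs in constant stack space; countdown is just an illustrative name:

    -- 'return countdown(n - 1)' is a proper tail call, so the current
    -- frame is reused rather than pushing a new one per recursion.
    local function countdown(n)
      if n == 0 then return "done" end
      return countdown(n - 1)
    end

    print(countdown(1000000))  -- no stack overflow, thanks to the TCO guarantee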
Finally, Lua is very rigorous about catching allocation failure while maintaining consistent state. Well-written applications can execute data-processing code from behind a protected call or a coroutine resume (which is implicitly protected) and still keep chugging along in the event of allocation failure anywhere in the VM. The core of the application, such as the event loop and critical event handlers, can be written to avoid dynamic allocation after initialization.
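A sketch of that pattern from the Lua side; process_event and the event table are hypothetical stand-ins for application code:

    -- Hypothetical stand-in for a data-processing step:
    local function process_event(e) return e.kind end
    local event = { kind = "tick" }

    -- pcall traps any error raised inside process_event--including a
    -- memory allocation failure--and returns false plus the error value
    -- instead of unwinding into the host application.
    local ok, err = pcall(process_event, event)
    if not ok then
      print("recovering from:", err)
    end

    -- coroutine.resume is implicitly protected in the same way: errors
    -- inside the coroutine come back as (false, error) to the resumer.
    local co = coroutine.create(process_event)
    local ok2, res = coroutine.resume(co, event)
    print(ok2, res)  --> true  tick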
For a usage like that, no: you'll probably rack up lots of allocations and need to GC eventually. But if this is your goal from the outset, there are ways to do it, as the other post mentions. If you never create more than a fixed number of Lua objects, closures, or unique string values, you can certainly postpone the GC indefinitely (see the sketch below).
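Concretely, the collector can be stopped outright from Lua (or via lua_gc with LUA_GCSTOP from the C side):

    -- Stop the collector; it stays stopped until explicitly restarted.
    collectgarbage("stop")

    -- ... run allocation-bounded code here ...

    -- Later, at a safe point, you can take back control in several ways:
    collectgarbage("step")      -- run a single small collection step
    collectgarbage("collect")   -- or force one full collection cycle
    collectgarbage("restart")   -- or hand control back to the automatic GC

With the collector stopped nothing is ever freed, so this only stays safe as long as the set of live objects is bounded, per the caveat above.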
Can you entirely turn off Lua's GC?