If the second one can't allocate, then how does it handle the case where there isn't enough capacity to insert the new (k, v) pair?
I can see that the difference between the two is the self.growIfNeeded() call, https://github.com/ziglang/zig/blob/0dffab7356685c7643aa6e3c..., which the non-allocating variant simply omits. Does it assume that the predefined capacity will not be reached?
> Does it assume that the predefined capacity will not be reached?
Yes, and the function name says as much, though admittedly what that assumption actually means, and what the consequences are if it turns out to be false, is probably fairly opaque to a novice user.
> how does it handle the case where you don't have enough capacity to insert the new (k,v) pair?
Breaking the invariant results in safety-checked undefined behavior; that's what the "asserts" in the doc comment signifies[0]. Basically, if something goes awry at runtime you'll get a crash along with a nice error trace in Debug/ReleaseSafe mode, or wherever runtime safety is enabled via @setRuntimeSafety.
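For a concrete picture, here's a minimal sketch of the reserve-then-insert pattern being discussed, using std.AutoHashMap with ensureTotalCapacity and putAssumeCapacity (names from the managed HashMap API in recent std versions; details may differ from the linked revision):

    const std = @import("std");

    pub fn main() !void {
        var map = std.AutoHashMap(u32, u32).init(std.heap.page_allocator);
        defer map.deinit();

        // The only call that can fail: reserve room for 8 entries up front.
        try map.ensureTotalCapacity(8);

        // These inserts never allocate. Inserting more entries than were
        // reserved would trip the internal assert: a panic plus error trace
        // in Debug/ReleaseSafe, undefined behavior in the unsafe modes.
        var i: u32 = 0;
        while (i < 8) : (i += 1) {
            map.putAssumeCapacity(i, i * 2);
        }

        std.debug.print("count = {}\n", .{map.count()});
    }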
If instead you'd like errors that you can handle, you could use your real allocator where you actually expect allocations to happen and pass a failing allocator[1] everywhere else (but that's sort of abusing the API IMO; I don't know that I'd actually recommend it).
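A sketch of that failing-allocator idea, assuming std.testing.failing_allocator (the always-failing allocator shipped for tests; its exact name/configuration has shifted a bit across std versions):

    const std = @import("std");

    test "failing allocator makes allocation attempts visible" {
        // failing_allocator refuses every allocation, so any code path that
        // would silently grow the map surfaces error.OutOfMemory instead.
        var map = std.AutoHashMap(u32, u32).init(std.testing.failing_allocator);
        defer map.deinit();

        // put() needs to allocate backing storage and the allocator says no,
        // so we get an error we can actually handle.
        try std.testing.expectError(error.OutOfMemory, map.put(1, 1));
    }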
Right, that's what I thought is happening under the hood. Thanks for confirming.
This also means that, despite what one of the parent comments claimed, there is nothing novel about this approach that cannot be achieved in other programming languages.
This is simply a hashmap with a pre-allocated pool of memory.