This means that array accesses which fail produce index-out-of-bounds fatal errors instead of the segmentation faults you get from dereferencing invalid pointers. Detecting this kind of bug reliably is a very good thing, but preventing such errors (and optimizing away the bounds checks where possible) would be better.
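Concretely, in Rust the failed access is a deterministic panic rather than undefined behavior; a minimal sketch:

```rust
fn main() {
    let v = vec![10, 20, 30];
    let i = 7;
    // The bounds check turns this into a deterministic panic:
    // "index out of bounds: the len is 3 but the index is 7"
    println!("{}", v[i]);
}
```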
Invalid pointer dereferencing isn't guaranteed to cause a segfault; you might just get garbage memory, or nasal demons from LLVM exploiting the undefined behavior.
That's the key advantage of using integer indices (or a GC'd language): it's memory safe.
But if you use integer indices, delete an array element, and reuse its slot for something else while a stale "use-after-free" index is still floating around somewhere, you also get garbage memory.
Or worse, actual data belonging to someone else. If the integer is a user id and you delete a user, then reuse the id for another user, the former user might see the new user's data. That is a big security issue.
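A minimal sketch of how that plays out, assuming a hypothetical `Users` store that hands out slot indices as ids and reuses freed slots:

```rust
// Hypothetical user store where an id is just a slot index into a Vec.
struct Users {
    slots: Vec<Option<String>>, // None marks a freed, reusable slot
}

impl Users {
    fn add(&mut self, name: &str) -> usize {
        // Reuse the first free slot if there is one.
        if let Some(i) = self.slots.iter().position(|s| s.is_none()) {
            self.slots[i] = Some(name.to_string());
            i
        } else {
            self.slots.push(Some(name.to_string()));
            self.slots.len() - 1
        }
    }

    fn delete(&mut self, id: usize) {
        self.slots[id] = None;
    }

    fn get(&self, id: usize) -> Option<&str> {
        self.slots.get(id).and_then(|s| s.as_deref())
    }
}

fn main() {
    let mut users = Users { slots: Vec::new() };
    let alice_id = users.add("alice");
    users.delete(alice_id);
    users.add("bob"); // reuses alice's freed slot

    // Stale id: still in bounds, no panic, silently serves bob's data.
    assert_eq!(users.get(alice_id), Some("bob"));
}
```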
If dereferencing an invalid pointer reliably caused a segfault, this would be true. But it's undefined behaviour, so it can segfault, corrupt data, leak data, etc. Given that the possible behaviour is a superset of what can go wrong using an integer index, I would say it's worse.
I would agree that using naive integer indices, and running the risk of accessing the wrong data, is also completely unacceptable, though.
I'm agreeing with you. I'm saying that a bounds checker is better than a segfault because it always works.
(You're right that it doesn't actually always work: sometimes your stale index will still be in bounds but refer to a different thing. But at least it can't be abused to write into other arrays.)
The alternative in Rust would be using `.get()` in those situations, which returns an `Option` instead of panicking. That still doesn't account for valid-but-outdated array indices, though.
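A quick sketch of the difference, including that stale-index caveat:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // v[7] would panic; .get(7) surfaces the bounds check as a value instead.
    match v.get(7) {
        Some(x) => println!("got {x}"),
        None => println!("out of bounds, handled without panicking"),
    }

    // But a stale-yet-in-bounds index still "succeeds" with the wrong data:
    assert_eq!(v.get(1), Some(&20));
}
```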