I would also prefer unsigned indexes, but precisely because the compiler may assume that signed arithmetic never overflows, signed index access can be slightly faster and therefore preferable.
On most machines these days there is no real performance difference between arithmetic on signed and unsigned integers. The only case where it matters is when you want the compiler to optimize based on the fact that signed overflow is UB, so that "impossible" cases like the one in this example get optimized out. The author here clearly does not want that.
Is there a reason why the language doesn't provide both UB-on-overflow and wrapping-on-overflow variants for both unsigned and signed types?
It always feels dirty to deliberately use a signed type for something you know can never be negative, just because the signed type has other properties you want.