Signed types are the sane ones and give you error checking possibilities (negative size doesn't make sense; huge positive size may be right or an error). Unsigned types break trivial math like x < y => x-1 < y-1.
On the contrary! Unsigned types are the only ones in C where everything is sanely defined: they are the integers mod 2^n. Signed types have undefined behaviour on overflow in C. In-band signalling of error states is error-prone. It is unfortunately commonly done in C because you can't return multiple values and passing in pointers is uglier.
It's not so bad. It means you can be really consistent about returning error codes from every function in the exact same way. This is one of the very few things I actually like about the Win32 API. If only they used the same type of error codes in every section of the API.
Do signed types not also "break trivial math" like that, just at a different boundary? Genuine question. (The 0 boundary is obviously going to be more commonly hit than the 2^32 boundary, but nonetheless.)
"If an array is so large (greater than PTRDIFF_MAX elements, but less than SIZE_MAX bytes), that the difference between two pointers may not be representable as std::ptrdiff_t, the result of subtracting two such pointers is undefined."
Signed types have a worse problem. They typically wrap around, but the behavior is undefined. That means the compiler can assume it never happens and optimize your code accordingly, which can lead to all sorts of entertaining misbehavior.