You're succumbing to selection bias[1]. You never notice all of the 16-bit counts in all of the software you use that don't overflow. For all we know, there could be a thousand of them for every case where 16 bits is too few.
Assuming you managed to accumulate enough items to fill a 64-bit counter: even if you got one core of your super-fast PC to increment a 64-bit value for each one, it would take over 100 years to count them all. (Assuming you don't just decide to pop -1 in the counter; presumably you'd want to make sure you haven't miscounted during the accumulation phase.)
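Back-of-the-envelope sketch of that math, assuming a single core doing one increment per cycle at roughly 4 GHz (both figures are assumptions, not measurements):

```c
#include <stdio.h>

int main(void) {
    /* Assumed: one increment per cycle on a single ~4 GHz core. */
    const double increments_per_second = 4e9;
    const double counter_max = 18446744073709551615.0;   /* 2^64 - 1 */
    const double seconds_per_year = 365.25 * 24.0 * 3600.0;

    double years = counter_max / increments_per_second / seconds_per_year;
    printf("roughly %.0f years to fill a 64-bit counter\n", years);  /* ~146 */
    return 0;
}
```

At that rate it comes out to roughly 146 years, so "over 100 years" is comfortably safe.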
Well, for counts, 16 bits is enough, unless you're counting something really small, like every individual byte of something.
And for counts, 32 bits should be enough, since that's around 4 billion.
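For concreteness, a small sketch (the 100 KB input size is just an illustrative assumption) of where those limits sit; the 16-bit wraparound is exactly the "counting every individual byte" case:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Counting the bytes of a 100 000-byte input with a 16-bit counter
       silently wraps at 65 536. */
    uint16_t count16 = 0;
    for (uint32_t i = 0; i < 100000u; i++)
        count16++;
    printf("16-bit count of 100000 bytes: %u\n", (unsigned)count16);  /* 34464 */

    /* The limits themselves. */
    printf("16-bit max: %u\n", (unsigned)UINT16_MAX);                 /* 65535 */
    printf("32-bit max: %lu\n", (unsigned long)UINT32_MAX);           /* 4294967295 */
    return 0;
}
```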
Sure -- but that's an exceptional amount of memory to spend on tiny objects like those. If you're working with four billion objects at a time, they're probably more substantial than eight bytes.
Or you've spent a lot of effort getting them down to 8 bytes or even smaller, to fit as many of them as possible in memory. See use cases like time-series/analytics databases, point clouds, simulations with many elements...
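For scale, a rough sketch (the per-object sizes are purely illustrative) of what four billion in-memory objects cost at different sizes, which is why squeezing them down to 8 bytes matters:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative only: RAM needed to hold 4 billion objects at a few
       assumed per-object sizes. */
    const double count = 4e9;
    const double gib = 1024.0 * 1024.0 * 1024.0;
    const double sizes_bytes[] = {8.0, 16.0, 64.0, 256.0};

    for (int i = 0; i < 4; i++)
        printf("%3.0f bytes/object -> %7.1f GiB\n",
               sizes_bytes[i], count * sizes_bytes[i] / gib);
    /* 8 B -> ~29.8 GiB, 16 B -> ~59.6 GiB, 64 B -> ~238.4 GiB, 256 B -> ~953.7 GiB */
    return 0;
}
```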