Why even put them in network byte order? Every modern system is little endian; if you standardized on that, only exotic systems would ever have to swap bytes when deserializing.
If you force the most common systems to translate byte order, then you have some confidence that your code is performing the translation correctly. If instead you rely on everyone having added the correct no-op translation calls everywhere, you'll find your code doesn't work as soon as you port it to another CPU.
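A minimal sketch in C of what that buys you (the field name and buffer handling here are just illustrative; htonl/ntohl are the standard calls):

    #include <arpa/inet.h>  /* htonl, ntohl */
    #include <stdint.h>
    #include <string.h>

    /* Put a 32-bit length field on the wire in network (big-endian) order. */
    void write_len(uint8_t *buf, uint32_t len) {
        uint32_t wire = htonl(len);      /* a real swap on little-endian hosts */
        memcpy(buf, &wire, sizeof wire);
    }

    uint32_t read_len(const uint8_t *buf) {
        uint32_t wire;
        memcpy(&wire, buf, sizeof wire);
        return ntohl(wire);              /* forget this and it breaks on x86 right away */
    }

Because the swap is a real operation on the common hosts, a missing call shows up in the very first test rather than years later on a big-endian port.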
This is a nice side effect of network byte order being the opposite of the dominant CPU order, though obviously it was never intended.
Because when someone builds a hugely popular exotic system in the future, say because it's one (1) cent cheaper, you'd end up with code that has to check whether it's running on such a system.
This doesn't make any sense for multiple reasons, but especially because you wouldn't be checking anything in the first place. A big endian system would reorder the bytes, and a little endian system would just use the data directly from memory without an extra copy or any reordering.
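A sketch of that in C, with a hypothetical read_le32 helper: decoding a little-endian field with plain shifts never consults the host byte order at all.

    #include <stdint.h>

    /* Assemble a little-endian 32-bit value from bytes; no endianness
       check anywhere. On little-endian CPUs compilers typically turn
       this into a single load; on big-endian CPUs the same source does
       the reordering. */
    uint32_t read_le32(const uint8_t *p) {
        return (uint32_t)p[0]
             | (uint32_t)p[1] << 8
             | (uint32_t)p[2] << 16
             | (uint32_t)p[3] << 24;
    }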
There's no library pattern for host to little endian, or little endian to host, like we have with hton and ntoh, which makes it more likely to be messed up.
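For illustration, the kind of helper you end up hand-rolling yourself; the name write_le32 is made up, not a standard API:

    #include <stdint.h>

    /* Hypothetical helper playing the role htonl plays today, but for a
       little-endian wire format; neither standard C nor POSIX provides
       this for you. */
    void write_le32(uint8_t *p, uint32_t v) {
        p[0] = (uint8_t)(v & 0xff);
        p[1] = (uint8_t)((v >> 8) & 0xff);
        p[2] = (uint8_t)((v >> 16) & 0xff);
        p[3] = (uint8_t)((v >> 24) & 0xff);
    }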