Binary blobs are just the biggest example and were only mentioned in relation to the "lingua franca" argument; many other common things are also larger in JSON. Only if you have many larger, unescaped UTF-8 strings does this overhead amortize. E.g. UUIDs are not uncommonly sent around, and one is 18 bytes in msgpack as a bin value versus 38 bytes in JSON (not including `:` and `,`). That's about 211% of the storage cost. Multiply that by something that keeps producing endless amounts of events (e.g. some unmount/mount loop) and that difference can matter.
Though yes, for most use cases this will never matter.
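For concreteness, here's a minimal sketch of where those numbers come from, assuming the Python `msgpack` and standard-library `json`/`uuid` modules (the snippet itself is illustrative, not from the original comment):

```python
import json
import uuid

import msgpack  # pip install msgpack

u = uuid.uuid4()

# msgpack bin 8: 0xc4 marker + 1-byte length + 16 raw bytes = 18 bytes
packed = msgpack.packb(u.bytes)

# JSON: 36 hex/hyphen characters + 2 quotes = 38 bytes
dumped = json.dumps(str(u)).encode()

print(len(packed))                          # 18
print(len(dumped))                          # 38
print(f"{len(dumped) / len(packed):.0%}")   # ~211%
```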
I get your point, but you have to understand that for every second you’ve spent writing that comment, globally hundreds of millions of HTTP responses have been processed that contain UUIDs of some kind.
Yes, there are more compact formats than hex-encoding UUID values. However, it simply does not matter for any use case this targets.
18 bytes vs 38 bytes is completely meaningless in the context of a local process sending a request to a local daemon. It’s meaningless when making an HTTP request as well, unfortunately.
I’d have loved Arrow to be the format chosen, but that’s not lowering the barrier to entry much.