All you need to care about for cases like these, when you're talking about the size of something, is that both malloc() and new[] handle allocation size using size_t.
That, to me, says pretty clearly that "the proper type to express the size, in bytes, of something you're going to store in memory is size_t".
It can't be too small, since that would break the core allocation interfaces, which really doesn't seem likely.
You don't need to know how many bits are in size_t all that often, and certainly not for the quoted code.
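To put it in code (a trivial sketch; `Buffer` and `make_buffer` are made-up names): any wrapper you write around those allocation interfaces ends up expressing its size as size_t anyway, because that's what the allocators take.

```cpp
#include <cstddef>

// Both allocation interfaces express "size in bytes" as size_t:
//   void* malloc(size_t size);
//   void* operator new[](std::size_t count);
// So a hypothetical buffer wrapper naturally uses size_t throughout.
struct Buffer {
    std::size_t size;  // size in bytes -- same type the allocator takes
    char*       data;
};

Buffer make_buffer(std::size_t n) {
    return Buffer{n, new char[n]};  // new[] takes a std::size_t
}

int main() {
    Buffer b = make_buffer(1024);
    delete[] b.data;
}
```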
For cross-platform interoperability, an API built on exact-size types helps remove any ambiguity. Using size_t might be fine for intra-process usage, but as soon as we are dealing with data across platforms, exact-size type definitions are a must.
I see it the other way around. How many bits you need to address something in memory depends on the platform. Thus `size_t` is the only cross-platform type you can use. A fixed-size integral type is going to work on some platforms, but not all.
That is correct. For file formats, packets, etc. you must use exact sizes.
However, for cross-platform support, using size_t in an API (as in what is exposed via a .dll or .so) is a must. It's exactly the correct way to write cross-platform code.
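To make the distinction concrete (a rough sketch; `PacketHeader` and `process_buffer` are invented names): exact-width types for anything that crosses a machine or process boundary, size_t for sizes that only ever live inside one process.

```cpp
#include <cstddef>
#include <cstdint>

// On-disk / on-the-wire layout: every field has an exact,
// platform-independent width.
struct PacketHeader {
    std::uint32_t payload_length;  // always 4 bytes on every platform
    std::uint16_t type;            // always 2 bytes
    std::uint16_t version;         // always 2 bytes
};

// In-process API exposed from a .so/.dll: the buffer size is whatever
// the platform's allocator deals in, i.e. size_t.
extern "C" int process_buffer(const void* data, std::size_t length) {
    return (data != nullptr && length > 0) ? 0 : -1;
}
```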
Sounds like you are mixing up your data's in-memory representation with its storage/transmission representation. This is risky business.
If you have no requirement that says otherwise, you should have explicit marshalling and demarshalling steps that transform your live data objects into opaque BLOBs. It would be highly desirable for your BLOBs to have a header containing metadata used exclusively for marshalling purposes; at the very least, the size of the payload, an object type id, and a format version id will save you lots of trouble.
Now, what happens if you need high performance and are willing to trade off code complexity for faster execution? You can just copy your native object's bytes into the BLOB payload, as long as you correctly record the source platform's relevant characteristics in the header. Then, when the target host does the demarshalling step, it can decide whether the native format is compatible with its own platform and simply copy the payload into a zeroed buffer of the correct size. If that is not the case, it will have to perform an extra deferred marshalling step to put the payload into "canonical" format prior to demarshalling proper.
You can even make the behavior configurable, so that customers running a heterogeneous environment do not suffer a performance hit for the sake of customers in homogeneous environments.
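For what it's worth, the header doesn't need to be much more than something like this (a rough sketch with invented field names): enough metadata to identify the payload and to let the receiver decide whether the fast path is safe.

```cpp
#include <cstdint>

// Marshalling header carried in front of every BLOB payload
// (all fields in fixed widths and a fixed byte order).
struct BlobHeader {
    std::uint32_t payload_size;    // size of the payload in bytes
    std::uint32_t object_type_id;  // what kind of object follows
    std::uint16_t format_version;  // lets old readers reject new formats
    std::uint8_t  endianness;      // 0 = little, 1 = big
    std::uint8_t  pointer_width;   // e.g. 4 or 8 -- relevant source traits
};

// Receiving side: take the fast path only if the source platform's
// characteristics match ours, otherwise fall back to full demarshalling.
bool can_use_fast_path(const BlobHeader& h) {
    const std::uint16_t one = 1;
    const std::uint8_t my_endianness =
        (*reinterpret_cast<const std::uint8_t*>(&one) == 1) ? 0 : 1;
    return h.endianness == my_endianness &&
           h.pointer_width == sizeof(void*);
}
```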
Of course the data in storage or over the wire needs to be marshalled and unmarshalled (whether by explicitly standardizing on a particular wire format or with header-based hacks or whatnot). That's not the point.
The point is that a lot of the time, the two machines on either end of the wire need to agree on the sizes of the various fields you're sending (say, in protocol headers). And then you want to work with that data internally in the code on either side. You'd better be absolutely sure how many bits you have in each type you're allocating for these purposes.
And going even beyond that very common use case: a lot of code reads cleaner and is easier to debug when you know the exact sizes of the types you're using. It's not something reserved for network programming alone.
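Something like this is what I mean (a sketch; the header layout is made up): both ends agree on exact field widths and byte order, and the serialization code spells that out explicitly instead of guessing what `int` or `size_t` happens to mean on either machine.

```cpp
#include <cstdint>

// Both ends of the wire agree on this layout down to the bit:
// a 4-byte length and a 2-byte message type, in big-endian byte order.
struct MsgHeader {
    std::uint32_t length;
    std::uint16_t type;
};

// Serialize with explicit widths and byte order -- the exact sizes are
// visible right in the code.
void write_header(std::uint8_t* out, const MsgHeader& h) {
    out[0] = static_cast<std::uint8_t>(h.length >> 24);
    out[1] = static_cast<std::uint8_t>(h.length >> 16);
    out[2] = static_cast<std::uint8_t>(h.length >> 8);
    out[3] = static_cast<std::uint8_t>(h.length);
    out[4] = static_cast<std::uint8_t>(h.type >> 8);
    out[5] = static_cast<std::uint8_t>(h.type);
}
```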
Sorry, I fail to see the point in your second paragraph. Of course at the business-logic level you need to allocate variables that can hold every possible value in the valid range, but as long as that is the case, why does it matter whether you use types that have the same byte size on every possible platform?
In your third paragraph, I agree on the debuggability front (if you are actually reading memory dumps; otherwise, why should it matter?). As for the code reading more clearly, I guess that is more a matter of taste.
It matters because of code readability, debuggability, and all sorts of code-hygiene reasons. If I'm using size_t for a field in my protocol on a 32-bit platform on one end and a 64-bit platform on the other, which size wins over the wire? Can that question be answered while you're in a debugging flow, trying to track down a memory-stomping error?
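A sketch of the problem (`BadWireHeader` is a made-up name): the same struct has a different size and layout depending on which end compiled it, so there is no single answer to "which size wins".

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// A "protocol" struct that sneaks size_t into the wire format.
struct BadWireHeader {
    std::size_t   payload_size;  // 4 bytes on a 32-bit build, 8 on a 64-bit one
    std::uint32_t type;
};

int main() {
    // Prints different values on the two ends of the wire, which is exactly
    // the ambiguity you end up chasing in a debugger.
    std::printf("sizeof(size_t) = %zu, sizeof(BadWireHeader) = %zu\n",
                sizeof(std::size_t), sizeof(BadWireHeader));
}
```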