When ‘int’ is the new ‘short’ (googleprojectzero.blogspot.com)
128 points by noondip on July 8, 2015 | 99 comments



I've never really liked the short/int/long definitions. They're just out there to confuse you, especially if you've written your share of assembly.

For as long as I can remember I've just used size_t, uintptr_t, uint32_t/int32_t (or any of the 16/32/64 variants), exactly because I want to be explicit about the machine word sizes I'll be dealing with. Before that, I always used similar (u32/i32, LONG/ULONG, ...) platform-specific typedefs on proprietary systems too.

For all practical purposes, int/unsigned int has been at least 32 bits since the early '90s (well, on modern platforms), but why use those when you can explicitly declare how many bits you actually need?

(I've bumped into a few archaic platforms where stdint headers weren't present but it's easy to just add a few build-specific typedefs somewhere in that case.)
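
For what it's worth, the fallback can be tiny. A minimal sketch (the HAVE_STDINT_H guard and the widths below are assumptions that have to match the target's ABI, not something universal):

    /* Hypothetical fallback for a toolchain without <stdint.h>; check the
       chosen widths against the platform's documentation before relying on it. */
    #if defined(HAVE_STDINT_H)
    #include <stdint.h>
    #else
    typedef signed char        int8_t;
    typedef unsigned char      uint8_t;
    typedef short              int16_t;
    typedef unsigned short     uint16_t;
    typedef int                int32_t;
    typedef unsigned int       uint32_t;
    typedef long long          int64_t;
    typedef unsigned long long uint64_t;
    #endif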


Well, I know historically it was because there were potential performance issues with using explicit sizes - int was basically the guarantee that you would be working at the CPU's native word size, and hence at its most efficient: no need to shift and mask bits to get the value you wanted. Obviously this leads straight to a bunch of relatively subtle bugs, but I guess for some applications the speed-vs-safety tradeoff was worth it.


On 8-bit machines it's the reverse: an int, being 16 bits minimum, requires more operations to handle than an 8-bit number. Pass an 'int' and you push two things onto the stack. Etc. It causes your code size to balloon noticeably.


Many 8-bit processors such as the AVR have instructions that work with 16-bit numbers (stored in register pairs). So that's not the case.


That only applies to adds, subtracts, and register moves. 16-bit Booleans, shifts/rotates, multiplies, and load/store still need to be done with multiple instructions.


I did a little mucking around in some AVR code of mine. Sometimes going from a uint8_t to a uint16_t saves a couple of bytes; sometimes it adds a dozen.

In one case, changing an index in a for loop to an int took the code from 34024 bytes to 34018 (saving six bytes). But changing uint8_t i, j, k; to uint16_t i, j, k; made the code compile to 34068 bytes, a gain of 44 bytes.


C++11 has improved the situation somewhat by making it possible to explicitly state what you want from your numeric types: http://en.cppreference.com/w/cpp/types/integer


<stdint.h> was actually introduced by C99 and included in C++11 with other C99 changes.


Let's also mention how utterly moronic it is to type three or four entire English words for one type definition. Do you really want to type `unsigned long long int x = 4` all the time? No, you should never type that. It should always be `uint64_t x = 4`. (That's also ignoring the mental indirection, which experienced programmers take for granted, required for every line of code. Try explaining to a new programmer why 'long' changes storage size across platforms, or why 'long long' and 'long' are usually, but not always, the same size. Be explicit, not English.)

Basically, if you are still typing "unsigned long long" or even "long" in 2015 in a modern environment, please stop. But, you may say, we want long to be 32 bits on 32-bit platforms and 64 bits on 64-bit platforms! No, you don't. That makes your system difficult to reason about. Plus, you'll probably start casting your inputs to printf() instead of using the proper printf type macros, which breaks things even further. Adopt proper types, use proper types, stop programming like it's 1975. Good luck.


The biggest frustration with those types is printf:

    "Num: %" PRIuFAST32 " found\n"
Is quite annoying to type instead of:

    "Num: %d found\n"


I always found it odd that there's no printf extension to just automatically print the correct integer size. GCC is already smart enough to tell me if %d is wrong for a long; why can't I just do %N and have it print out the correct length for the argument?


It's not even necessary for the compiler to do it: writing a type-safe wrapper is pretty easy in C++11 and a decent approximation is doable in pure C99 (ask me if you want a sample). Yet using the ugly PRI macros seems common even in C++ projects. It's understandable why one wouldn't want to use ostream type formatting, but at least write a better printf...

(Type-safe at runtime, that is, via automatically passing a type info argument along with each format argument; C++ is braindead enough that doing the check at compile time is only somewhat possible with C++14 - as far as I can tell, it only works if you define the format outside of a function - and probably fairly slow, since that involves a template instantiation per character in the format. Oh well.)


> and a decent approximation is doable in pure C99 (ask me if you want a sample).

I'd like to see that.

I can see some pretty straightforward ways to do it with C11 and _Generic, but I don't see how C99 helps.
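
Presumably the C11 route looks something like this sketch; FMT_OF and print_num are made-up names for illustration, not a standard facility:

    #include <stdio.h>
    
    /* Select a printf format at compile time based on the argument's type. */
    #define FMT_OF(x) _Generic((x),            \
        int:                "%d\n",            \
        unsigned int:       "%u\n",            \
        long:               "%ld\n",           \
        unsigned long:      "%lu\n",           \
        long long:          "%lld\n",          \
        unsigned long long: "%llu\n")
    
    #define print_num(x) printf(FMT_OF(x), (x))
    
    int main(void) {
        long big = 1234567890L;
        print_num(big);   /* expands to printf("%ld\n", big) */
        print_num(42u);   /* expands to printf("%u\n", 42u)  */
        return 0;
    }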


With C++14 you don't need to define the format string outside functions, but it does get a little hairy. Here's a half-assed example: http://coliru.stacked-crooked.com/a/3f34563e9a85af51


Because printf is just a function, and doesn't have access to the type information at the call-site.


Just a hypothetical idea: what if the standard allowed a compiler to modify the format string during compilation, replacing e.g. %N with the appropriate conversion specification, subject to proving that a given format string is a plain old literal that is not touched anywhere else?


Although compilers will warn about it, it's still possible to generate a format string dynamically at runtime. This might be done if you want different formats for the same fixed argument types (though there are probably better/safer ways).


This is exactly what I was suggesting. The compiler definitely has that information.


It's a good idea, but then you are tied to one compiler.

So, what if Clang implements it, but not GCC? Or what if Clang and GCC implement it, but not the Sun or Intel compilers? Or what about all the GCC copies every board maker forks when they create something custom?

It's tricky when behavior like that lives in the compiler but isn't standard (though I guess that's what compiler flags are for).


Can't a macro handle it?


Maybe it's there for those who don't care about that sort of "detail". If all you care about is passing around some small numbers, int does the job :)!


Conversion to and from C's `int` is one place where I think Rust is failing too :(

Rust allows silent truncation of values in numeric conversions and considers it "safe", because other features for memory safety will catch buffer overflows — but it doesn't care about cases where the program will do a memory-safe logically-invalid thing (e.g. write to a wrong location within a buffer).

That's because Rust has no integer size promotion at all, which means `len as usize` and `len as c_int` are required all over the place when interfacing with C (and the `as` operator has no overflow checking by design).


I think integer size promotion would only make it worse.

I do agree that "as" should have range checking. Some of it can be done statically (e.g. when converting to a strictly larger type, or when the range of possible values is known) and the rest can be done with minimal overhead if they emit the code right.


It's not minimal overhead in many cases. Think about vectorization, for example.


Rather than "type casts", implicit or explicit, I would like to see something with unambiguous, explicit semantics. Something like LLVM IR's operations, e.g. zero_extend, sign_extend, bitcast, floattoint, pointer_cast, etc.

This may lead to some unnecessarily verbose code in some places but it leaves no surprises. It also gets rid of the need to do dirty tricks like union casts or pointer tricks when you want a bitcast from float to int or vice versa which is common when you write SIMD code, for example, using bitwise ops to extract the sign bit of a float or something like that.
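
As a concrete C rendering of that last example (a sketch that assumes float is 32-bit IEEE-754), memcpy is the well-defined way to spell the bitcast without a union or pointer pun:

    #include <stdint.h>
    #include <string.h>
    
    /* Extract the sign bit of a float via a bit-for-bit copy; no aliasing
       tricks, and a decent compiler turns the memcpy into a register move. */
    static uint32_t float_sign_bit(float f) {
        uint32_t bits;                   /* assumes sizeof(float) == 4 */
        memcpy(&bits, &f, sizeof bits);
        return bits >> 31;
    }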


How does the lack of integer size promotion make it easier to overflow? I don't follow.


I want to convert between types without precision loss, but Rust offers only the options of no conversion at all, or a bug-prone conversion that allows precision loss.

The problem is that `x as usize` will compile without warnings to something I didn't intend and causes bugs if usize is smaller than the type of `x`.

AFAIK I can't avoid casts to `usize` any other way than using `usize` for almost every integer type in Rust. To make things worse my main use case for Rust is interfacing with C code, which means I have to deal with other types as often as `usize` and end up with these risky casts in almost every expression! It's awful.

What I'd prefer is something (it could be another operator, but I'd prefer promotion to keep syntactic noise low) that would either compile if typeof(x) <= usize, or not compile at all (i.e. if I write code that is accidentally 64-bit only, I want it to fail to compile on a 32-bit machine, instead of merrily compiling to something that is buggy and will corrupt the data or even be exploitable through FFI, which requires these casts that become unsafe).


Couldn't you write that operator yourself?

    #[cfg(target_word_size="32")]
    struct ThisCastIsUnsafeFixIt;
    #[cfg(target_word_size="32")]
    const u64_to_usize: ThisCastIsUnsafeFixIt = ThisCastIsUnsafeFixIt;
    #[cfg(target_word_size="64")]
    fn u64_to_usize(x: u64) -> usize { x as usize }
    
    ...
    
    u64_to_usize(123) // won't compile on 32-bit
If "u64_to_usize" is too long of a name for you, then you should be able to do the same thing with a trait instead.


> Couldn't you write that operator yourself?

I guess I could (thanks for the tip - I didn't know about this cfg(), I've been trying with traits and sizeof::<>).

But I'd prefer it to be in the language:

• lossless integer type conversion seems like a very basic problem to me, that shouldn't need programmers to fix it themselves in a custom way in every crate.

• even if I fix it in my code, I'm still worried about other people's code, because I assume that they also develop on x64 and unintentionally write casts that are subtly broken on smaller architectures.


It's kind of bizarre that chrome goes to exceptional lengths to sandbox things into lots of mutually untrusting processes, then parses network input outside of that protection.


Many, many C/C++ programmers, even many of those employed by Google, have a natural tendency to use old C types like short/int/long etc. for integral types without thinking through cross-platform issues or API interactions with other code.

Also, size_t is a frustrating beast. Its meaning depends on the platform. The Single Unix spec only calls for size_t to be an unsigned integer type. Now imagine you're writing code to compile across multiple mobile platforms as well as on x86_64 on the server side. Can you tell me what is the largest number you can address with that type -- without getting into a long google/stackoverflow session or hitting the compiler manuals for each of those platforms? If you absolutely want to make sure that your type can handle the values you expect it to handle, better to use the well-defined types provided by stdint.h (uint32_t is sooo much better than just int or unsigned int or even size_t for this purpose).

Now granted, you'd need to interact with external libraries (including libc/libc++) that'll want to use size_t etc. Not much you can do here but be very careful when passing data back and forth between your code and the library code. But that's been the lot of C coders since time began.
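
To illustrate the point, a hypothetical sketch (not from any real codebase): keep the protocol field exact-width, and convert to the library's size_t once, deliberately, at the boundary:

    #include <stdint.h>
    #include <stdlib.h>
    
    /* The field that crosses platform boundaries has a pinned-down width. */
    struct record {
        uint32_t payload_len;   /* bytes of payload */
    };
    
    static void *alloc_payload(const struct record *r) {
        /* size_t is only guaranteed to be an unsigned integer type, but on
           the 32- and 64-bit targets assumed here this cannot truncate. */
        return malloc((size_t)r->payload_len);
    }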


I disagree.

All you need to care about for cases like these, when you're talking about the size of something, is that both malloc() and new[] handle allocation size using size_t.

That, to me, says pretty clearly that "the proper type to express the size, in bytes, of something you're going to store in memory is size_t".

It can't be too small, since that would break the core allocation interfaces, which really doesn't seem likely.

You don't need to know how many bits are in size_t all that often, and certainly not for the quoted code.


For cross-platform interoperability, an API with exact-size types helps remove any ambiguity. Using size_t might be fine for intra-process usage, but as soon as we are dealing with data across platforms, exact-size type definitions are a must.


I see it the other way around. How many bits you need to address something in memory depends on the platform. Thus `size_t` is the only cross-platform type you can use. A fixed-size integral type is going to work on some platforms, but not all.


> Using size_t might be fine for intra-process usage, but as soon as we are dealing with data across platforms, exact size type definition is a must.

I don't know why you are downvoted, but this is very important.

Never send anything "on the wire" (or to a file) unless you know its exact size and endianness.


That is correct. For file formats and packets etc you must use exact sizes.

However, for cross-platform support, using size_t in an API (as in what is exposed via a .dll or .so) is a must. It's exactly the correct way to write cross-platform code.


A big part of the data I work with needs to be serialized and cross-platform; having explicitly sized types is the only way to keep my sanity.


Sounds like you are mixing up your data's in-memory representation with its storage/transmission representation. This is risky business.

If you have no requirement that says otherwise, you should have explicit marshalling and demarshalling steps that transform your live data objects into opaque BLOBs. It would be highly desirable for your BLOBs to have a header containing metadata used exclusively for marshalling purposes; at the very least, the size of the payload, an object type id and a format version id will save you lots of trouble.

Now, what happens if you need high performance and are willing to trade code complexity for faster execution? You can just copy your native object's bytes into the BLOB payload, as long as you correctly identify the source platform's relevant characteristics in the header. Then, when the target host does the demarshalling step, it can decide whether the native format is compatible with its own platform and just copy the payload into a zeroed buffer of the correct size. If that is not the case, it will have to perform an extra deferred marshalling step to put the payload in "canonical" format prior to demarshalling proper.

You can even make the behavior configurable, so that customers running a heterogeneous environment do not suffer a performance hit for the sake of customers in homogeneous environments.
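
A bare-bones sketch of the kind of header being described, with hypothetical field names; everything in it is metadata for the (de)marshalling layer only:

    #include <stdint.h>
    
    struct blob_header {
        uint32_t payload_size;    /* bytes of payload that follow */
        uint16_t object_type;     /* what kind of object the payload encodes */
        uint16_t format_version;  /* lets readers reject layouts they can't parse */
        uint8_t  endianness;      /* source-platform traits, so the receiver can */
        uint8_t  word_size;       /*   tell whether the fast native-copy path applies */
        uint8_t  reserved[2];     /* explicit padding */
    };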


Of course the data in storage or over the wire needs to be marshalled and unmarshalled (whether explicitly standardizing on a particular wire format or with header based hacks or whatnot). That's not the point.

The point is that a lot of the times, the two machines on either end of the wire need to agree on sizes of various fields you're sending (say in protocol headers). And then you want to work with that data internally in the code on either side. You better be absolutely sure how many bits you have in each type that you're allocating for these purposes.

And beyond that very common use case: a lot of code reads more cleanly and lends itself to debuggability when you know the exact sizes of the types you're using. It's not something reserved for network programming alone.


Sorry, I fail to see the point of your second paragraph. Of course at the business-logic level you need to allocate variables that can hold every possible value in the valid range, but as long as that is the case, why does it matter whether you use types that have the same byte size on every possible platform?

On your third paragraph, I agree on the debuggability front (if you are actually reading memory dumps; otherwise, why should it matter?). As for the code reading more clearly, I guess this is more a matter of taste.


It matters because of code readability, debuggability and all sorts of code hygiene reasons. If I'm using size_t for a field in my protocol on a 32 bit platform on one end and 64 bit platform on the other, which size wins over the wire? Can that question be answered while in debugging flow trying to track down a memory stomping error?


> Can you tell me what is the largest number you can address with that type -- without getting into a long google/stackoverflow session or hitting the compiler manuals for each of those platforms?

Why does this matter? size_t is intended to be used as an index into a dense array, i.e. for every index i you may want to store in a size_t, you also store i elements X of some data type. Since that number is limited both by the software architecture and by the hardware available at runtime, why would you want to know exactly how many X you can store?


"hardware available at runtime"

Because you want to communicate with other systems. Not just the system your code is running on at that moment.


size_t is guaranteed to be able to contain the size of any valid object.

You can never ever have a buffer bigger than what size_t allows. If you do then you're no longer talking about the C programming language as it explicitly breaks the C specification.


Actually the Google C++ style guide forbids short, long, and long long and recommends int only under the assumption that it is 32-bits wide.


meta: This is one of those times when downvotes here on HN completely baffle (and I must admit, somewhat amuse) me. Who am I offending by sharing an opinion about integer types in C?


Your rant against size_t and portability is misplaced as size_t increases portability when used appropriately.


That is called disagreement and shouldn't be downvoted. Moderation isn't to express agreement.


Expressing disagreement is expressly an acceptable basis for downvoting here. And given the more general purpose of voting (expressing whether or not a contribution is valuable), the fact that something is inaccurate or wrong can certainly be a reason to conclude it is not a valuable contribution to a discussion.


People downvote incorrect information all the time.


While you may disagree with my opinion on what is the better type to use, I believe everything I said in my comment was factually correct.


size_t's raison d'etre is portability, you should be using it for things that are indexes/sizes into memory. FWIW, I don't have the karma to downvote.


Not the place to discuss, but 9/10 of my comments drop to 0 or -1 before going positive, even the ones that end up very positive. I don't worry about downvotes until the comment has been up for at least 30 minutes.


I think the most interesting bit here is this:

> Now, the more astute reader will point out that I just sent over 4 gigabytes of data over the internet; and that this can’t really be all that interesting - but that argument is readily countered with gzip encoding, reducing the required data to a 4 megabyte payload.

This was pretty much my first thought on seeing the IOBuffer signature - "That exploit payload is going to be huge". But things are not always as they seem and using gzip to generate a large string on the client is something I had not previously considered.


You can blow up all sorts of things with gzip: https://en.wikipedia.org/wiki/Zip_bomb


I don't entirely understand why 'int' is the new 'short' here; int hasn't been a particularly good way to store sizes since C99.

Good spot though; I kind of doubt that this was a conscious design decision and suspect it was probably just a slip-up.


Agreed on both points. Plain int hasn't been very good for a long time. C99 <stdint.h> is really the way to go, but since this is C++ we're talking about, both the C++ committee and Microsoft Visual Studio deserve most of the blame for why people weren't using them since neither recognized/supported it for the longest time. (Visual Studio just finally got stdint and stdbool, about 15 years late.)

And agreed, good catch.


Visual Studio has had stdint.h since its 2010 edition. Before that there were readily-available substitutes (like https://code.google.com/p/msinttypes/), or you could do it yourself by typedef'ing [unsigned] __intNN as [u]intNN_t.


But not stdbool, nor a lot of other C99-ish features, until Visual Studio 2013 Update 4, which shipped this past November. Whether it's 10 or 15 years makes little difference: way too long overdue.


Somebody elsewhere pointed out to me that these will give types that are not aliases of the common ones. I.e. __int8 isn't the same as either unsigned char or char. Probably won't make a difference most places, but what does?


That is also the case in usual implementations of stdint.h, where int8_t is defined to be `signed char`. In C and C++, `char`, `signed char`, and `unsigned char` are different types, and `char` is not guaranteed to be signed or unsigned---that's up to the implementation.

EDIT: looking at the documentation, it appears that __int8 is supposed to always be an alias for `char`, even as far back as 2003: https://msdn.microsoft.com/en-us/library/29dh1w7z(v=vs.71).a.... However, the workaround found in msinttypes suggests that Visual Studio 6 does have this problem. I weep for those still using it.


Microsoft only cares about C++, and C++11 was the revision that updated the C headers to C99.

In any case, it isn't as if the language doesn't allow for type aliases.


It hasn't been a problem (for a long time now) to find appropriate headers that typedef fixed-size types (int32_t etc.) for almost any platform you would care about. For any reasonably complicated cross-platform codebase, it's not much effort to include such a header for the sake of improving the readability and debuggability of your code by a lot.


A lot of people know to do this, but I have met a shocking number of C++ developers who know nothing of <stdint.h>.

And here we have a prime example: Chrome, a major cross-platform project with high visibility, is not using these types and didn't define its own in this case. These types were intended to help reduce mistakes. But compiler fragmentation basically resulted in organizations avoiding them, leading to exactly the kinds of mistakes that could have been avoided in the first place.


I work with a MSVC guy who not only didn't know about stdint, but also thought/thinks that 'word' means 16 bits, and insists that the WORD typedef is more portable than uint16_t.

Still, at least nowadays there is no excuse -- everything from TI to VC++ supports stdint variations.


Google's C++ style guide recommends using int unless a fixed size is needed, such as binary compatibility in network code or file formats.


https://google-styleguide.googlecode.com/svn/trunk/cppguide....

Maybe I'm reading this wrong, but to me it seems like this is saying go ahead and use the fixed size variants whenever, but it is still OK to use int when you need <=32 bits.


Yeah, it's a lukewarm endorsement of int, quite possibly there only to accommodate legacy code.

"<stdint.h> defines types like int16_t, uint32_t, int64_t, etc. You should always use those in preference to short, unsigned long long and the like, when you need a guarantee on the size of an integer. Of the C integer types, only int should be used."


size_t has been the right type for buffer/array sizes since ANSI C.


  Now; on x86_64, with gcc and clang an int is still a 32-bit integer type;
Minor nit. The size of int is typically defined by the platform; the compiler follows along. All the major/popular ones happen to define int as 32-bit, so that's what you are seeing with gcc/clang. Maybe on Solaris you might see it as 64-bit.


Can anyone shed some light on the design rationale for using integer types (most commonly int it seems, as here) followed by a check if the number is not negative, whereas one could just use an unsigned type right away?


I disagree with it, but the Google C++ Style Guide offers the rationale you're looking for (under "On Unsigned Integers"): https://google-styleguide.googlecode.com/svn/trunk/cppguide....


Hmm, that seems like a rather weak argument indeed? Purely anecdotally, the number of bugs I've seen that stem from using int and then failing to check whether it is negative (followed by using it as an index or converting to unsigned) far outweighs the number of times I've even seen the type of loop they mention as a con.
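
(For readers who haven't seen it, the con the guide has in mind is presumably the classic reverse loop, sketched here, which never terminates with an unsigned index:)

    #include <stddef.h>
    
    void walk_backwards(const int *a, size_t n) {
        /* BUG: i is unsigned, so i >= 0 is always true; when i reaches 0,
           the decrement wraps around to SIZE_MAX instead of ending the loop. */
        for (size_t i = n - 1; i >= 0; --i) {
            (void)a[i];
        }
    }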


Yeah, this doesn't convince me.

The bug example there, sure, needs a signed type, so you can't blame the type for its wrong usage.

I've had more bugs come from using signed types; I won't be bothered by writing 'unsigned' ever again.


I disagree that a signed type is the correct solution to that, rather you need a while loop:

  std::size_t i = foo.size();
  while (i != 0) {
    --i;
    ...
  }


The point is that C int types are an unsafe mess, so it's better to have one simple rule than memorize all the corner cases and address them all the time.


I am addressing the specific example given by the Google style guide. The code for counting down using a signed integer as they do has more corner cases than the while loop I have shown. Using their way, you have to remember to subtract 1 from the size at the start and then use >= in the loop conditional. My way is just the inverse of what you do while counting up.

It's also worth pointing out the style of loop they give can't be used at all if you are counting down iterators or pointers instead of numbers.


Different semantics: int overflow is undefined but can handle negative values, unsigned wraps around. So you would use int if your expression can yield a negative value.

(You can't thus check for int overflow by checking for wraparound, see http://c-faq.com/misc/intovf.html)
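
Roughly, the FAQ's point is that the test has to happen before the operation rather than after it; a sketch:

    #include <limits.h>
    #include <stdbool.h>
    
    /* Store a + b in *out only if the sum fits in an int; the range check
       happens before the addition, so no signed overflow ever occurs. */
    static bool checked_add(int a, int b, int *out) {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b))
            return false;
        *out = a + b;
        return true;
    }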


> So you would use int if your expression can yield a negative value.

Of course, but as you say yourself: if. It seems a bit too general to abandon unsigned completely because there are cases where it is not appropriate. By that logic there wouldn't be many types one could use at all.


Scott Meyers offers his rationale here: http://www.aristeia.com/Papers/C++ReportColumns/sep95.pdf


For return values of a function it allows overloading, with negative numbers indicating errors.

That's useful if your language lacks the ability to return multiple values from a function. That typically was (probably still is on quite a few architectures) the case for languages designed for speed.

Also, in C, int was implicit (see https://github.com/mortdeus/legacy-cc for example source code). So, using int made your programs shorter. That's important if your multi-user system doesn't have much memory (the first PDP-11 that ran Unix had 24 kilobytes of memory), and if you like concise programs, as Ritchie apparently did.



I've recently tried to fix JOE (a portable UNIX program) so that it compiles without warnings and with minimal casting under -Wconversion. This is what I've found:

Chars: I hate it that they are signed because I like the convention of promoting them to int, and then using -1 as an error. It's easy to forget to convert to unsigned first, and the compiler will not complain. In the past I've used 'unsigned char' everywhere, but it's a mess because strings are chars and all library functions expect chars. My new strategy is to use 256 as the error code instead of -1. The only problem is that getchar() uses -1, so it's weird. IMHO, it's a C-standard mistake that char is signed.
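
(The classic shape of that trap, sketched below: storing getchar()'s int return value in a char breaks the end-of-input check one way or the other.)

    #include <stdio.h>
    
    void copy_stream(void) {
        char c;                          /* BUG: should be int */
        while ((c = getchar()) != EOF) {
            /* If char is unsigned, c never compares equal to EOF and the loop
               never ends; if char is signed, a legitimate 0xFF byte looks
               exactly like EOF and the loop ends early. */
            putchar(c);
        }
    }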

I used to use int for indexes and long for file offsets. But these days, int is too short on 64-bit systems and long is not large enough on 32-bit systems.

ptrdiff_t is the new int. I've switched to ptrdiff_t in place of int and off_t in place of long. Ptrdiff_t is correct on every system except maybe 16-bit MS-DOS (where it's 32-bits, but I think it should be 16-bits). Off_t is a long long if you have '#define _FILE_OFFSET_BITS 64'. Ptrdiff_t is ugly and is defined in an odd include file: stddef.h. It's not used much by the C library.

The C library likes to use size_t and ssize_t. The definition of ssize_t is just crazy (it should just be the signed version of size_t, but it isn't).

I understand why size_t is unsigned, but I kind of wish it was signed. It's rare that you have items larger than 2^(word size - 1), so signed is OK. You are guaranteed to have -Wconversion warnings if you use size_t, because ptrdiff_t is signed (even if you don't use ptrdiff_t, you still get a signed result to pointer differences so you will have warnings). Anyway, to limit the damage I make versions of malloc, strlen and sizeof which return or take ptrdiff_t. They complain if the result is ever negative. Yes this is weird, but I think it's better than having many explicit casts to fix warnings. Casts are always dangerous.
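
A sketch of the kind of wrapper being described (hypothetical name, not JOE's actual code): return a signed length and complain if the value would not fit.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    
    static ptrdiff_t checked_strlen(const char *s) {
        size_t n = strlen(s);
        if (n > PTRDIFF_MAX) {           /* would turn negative if converted */
            fprintf(stderr, "checked_strlen: object too large\n");
            abort();
        }
        return (ptrdiff_t)n;
    }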


> IMHO, it's a C-standard mistake that char is signed.

The standard doesn't specify whether char is signed or unsigned, it's left to the implementation.


chars are only signed on some platforms (x86, for one). On others they're unsigned (ARM, for one).

One knockon effect of this is that strcmp() will return different values on the two different platforms for UTF-8 strings (because 0xff > 32, but -1 < 32)...

Incidentally, I don't know if you know about intptr_t; it's an int large enough to put a pointer in losslessly. It's dead handy. (My current project involves a system with 16-bit ints, 32-bit long longs, and 20-bit pointers...)
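
A tiny sketch of the round trip, assuming the implementation provides the (optional, but ubiquitous) intptr_t:

    #include <assert.h>
    #include <stdint.h>
    
    void demo(void) {
        int value = 42;
        int *p = &value;
        intptr_t bits = (intptr_t)p;   /* pointer -> integer, losslessly */
        int *back = (int *)bits;       /* integer -> pointer */
        assert(back == p);             /* the round trip compares equal */
    }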


I did not know that chars were unsigned on ARM. Interesting.

I try to be conservative with the definitions I use, so I'm worried that intptr_t might be too new.


In my experience we can't rely on manual handling of these integer overflow issues, especially with changing compiler behavior over time.

I've noted some compile time and run time checking options at:

http://www.pixelbeat.org/programming/gcc/integer_overflow.ht...


A signed type for a buffer size in Chrome. Sigh, C/C++ just wasn't made for humans.


Signed types are the sane ones and give you error checking possibilities (negative size doesn't make sense; huge positive size may be right or an error). Unsigned types break trivial math like x < y => x-1 < y-1.
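
Concretely, the breakage sits at zero; a quick sketch:

    #include <stdio.h>
    
    int main(void) {
        unsigned x = 0, y = 1;
        /* x < y holds, but x - 1 wraps to UINT_MAX, so x - 1 < y - 1 is false. */
        printf("%d %d\n", x < y, x - 1 < y - 1);   /* prints: 1 0 */
        return 0;
    }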


On the contrary! Unsigned types are the only ones in C where everything is sanely defined: they are the integers mod 2^n. Signed types have undefined behaviour on overflow in C. In-band signalling of error states is error-prone. It is unfortunately commonly done in C because you can't return multiple values and passing in pointers is uglier.


> and passing in pointers is uglier

It's not so bad. It means you can be really consistent about returning error codes from every function in the exact same way. This is one of the very few things I actually like about the Win32 API. If only they used the same type of error codes in every section of the API.


Do signed types not also "break trivial math" like that, just at a different boundary? Genuine question. (The 0 boundary is obviously going to be more commonly hit than the 2^32 boundary, but nonetheless.)


Yes, except that you're far more often operating at the 0 boundary instead of at the INT_MIN / INT_MAX boundaries.

Also note that C only half-heartedly supports objects larger than SIZE_MAX/2: relevant quote from http://en.cppreference.com/w/cpp/types/ptrdiff_t

"If an array is so large (greater than PTRDIFF_MAX elements, but less than SIZE_MAX bytes), that the difference between two pointers may not be representable as std::ptrdiff_t, the result of subtracting two such pointers is undefined. "


Signed types have the same "wraparound" problem; I think the OP meant that they don't have this problem at the zero boundary.


Signed types have a worse problem. They typically wrap around, but the behavior is undefined. That means the compiler can assume it never happens and optimize your code accordingly, which can lead to all sorts of entertaining misbehavior.


Sorry, I meant two's complement.


How can they hide the bug in an open-source project after fixing it? Isn't the commit public?


How exactly are you going to serve a 4GB certificate? Who in the world is going to wait for that to load?


The article mentions that it can be compressed to 4 MB...



