It's not the assignment. It's the multiplication x * 0x1ff.
The compiler has done range analysis and knows that at this point x is non-negative. It also assumes the programmer has diligently ensured that the values are such that the multiplication can't overflow, so its result is also non-negative. That means the later check for i being non-negative is trivially true.
If it's wrong, it should break at compile time, not at run time.
The problem is the compiler going "implementation-defined" on the multiplication/assignment, then going all language-lawyer on the following line and blaming the user.
> In general, addition like 'a + b' also isn't safe in C.
I'm not sure what you mean by 'keeping the sanitizing check'?
A C program is basically a bunch of bytes, and the C standard tells you what those bytes are supposed to mean. A compiler's job is to translate the bytes into whatever target language you fancy while preserving the prescribed behaviour. And that's exactly what the compiler did.
All optimizing compilers do stuff like this. You yourself ask the compiler to do it when you pass it '-O2'. Its default behavior, without optimization flags, is in fact not to optimize based on the assumption that UB won't happen.
The fun thing there is that Rust feeds into the same optimisation pipeline as C and C++, so there's a definite risk of it inheriting some of their semantics via errors in the compiler implementation.
There have been several cases where Rust's use of "restrict" pointers exposed bugs in LLVM, and the Rust compiler had to disable some optimizations as a workaround. But I haven't heard of anything like that happening with signed overflow. (Probably any bugs with basic integer behavior would get noticed quickly?)
Another thing to watch out for here (especially if anyone's trying to transpile Rust to C) might be C's strict aliasing rules accidentally getting applied to Rust raw pointers.
The promise of Rust is that you never run into undefined behavior if you only use safe code. There are some caveats (using dependencies with badly written unsafe code, the noalias bugs others mentioned) but in the general case, if you're writing code without 'unsafe' blocks, you're not going to trigger UB.
You're certainly not going to run into LLVM optimizing your bounds check out of existence because it occurs after an overflowing operation.
Compiler bugs are always a potential source of fuckups.
Hell, Rust has had codegen issues because it extensively leverages features which are almost non-existent in C, and thus were little exercised and poorly tested.
Rust is a good replacement for most use cases, but I think specifically in cases where you're looking for more predictable and less risky implicit behavior, the replacement should be more stable and predictable than Rust is at the moment.
Question: where do you get the idea that Rust is not stable or predictable? I understand you were asking for something _more_ stable/predictable than Rust, but Rust is already very stable and very predictable (in fact I can't think of any ways that Rust is "unpredictable").
C has a specification and weird stuff doesn't happen as long as you follow it. Doing so is very hard at times, but if a project cares about unpredictable optimizations, then it probably also cares about other kinds of unpredictable behavior. Which unfortunately eliminates a lot of languages that make no guarantees about their semantics.