> to allow compilers to optimize based upon the assumption that a program won't do certain things.
Emphasis mine. This is always wrong. Correct (and thus legitimate-to-optimize-based-on) knowledge of program behavior is derived by looking at what the program actually does, e.g. "p can never be NULL because if it was, a previous jz/bz/cmovz pc would have taken us somewhere else"[0]. Optimising "based on" undefined behaviour is only legitimate to the extent that it consists of choosing the most convenient option from the space of concrete realizations of the particular undefined behaviour that are consistent with the environment (especially the hardware).
0: Note that I don't say "a previous if-else statement", because when we say "p can never be NULL", we're already in the process of looking for reasons to remove if-else statements.
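To make the jz/bz point concrete, here's a minimal sketch in C (the function and the deliberately redundant second test are my own illustration, not something from the thread). The compiler may delete the second check because the first branch already took every NULL path out of the function; that inference comes from the control flow that is actually there, not from "dereferencing NULL is undefined".

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

void print_len(const char *p) {
    if (p == NULL)      /* compiles to a jz/bz-style branch              */
        return;         /* ...so on every path past here, p is non-NULL  */
    if (p != NULL)      /* redundant: dominated by the branch above, so  */
        printf("%zu\n", strlen(p));   /* the compiler can drop this test */
}
```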
There are many cases where accommodating weird corner cases would be expensive, and would only be useful for some kinds of programs. Requiring that all implementations intended for all kinds of tasks handle corner cases that won't be relevant for most kinds of tasks would needlessly degrade efficiency. The problem is that there's no way for programs to specify which corner cases they do or don't need.
> Requiring that all implementations intended for all kinds of tasks handle corner cases that won't be relevant for most kinds of tasks would needlessly degrade efficiency.
Yes, that's what undefined behaviour is for. E.g. requiring that implementations handle integer overflow needlessly degrades the efficiency of the overwhelming majority of tasks where integers do not in fact overflow.
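A small illustration of that efficiency argument (my example, plain standard C): because signed overflow is left undefined, the compiler is allowed to fold the comparison below to 1 and emit no arithmetic at all, which is exactly the fast path for programs whose integers never in fact overflow. Build with `-fwrapv` and the add and compare must survive, because `x == INT_MAX` makes the result 0.

```c
/* With signed overflow undefined, "x + 1 > x" can be folded to 1.   */
/* With wrapping mandated (-fwrapv), the add and compare must stay,  */
/* because x == INT_MAX would make the result 0.                     */
int always_true(int x) {
    return x + 1 > x;
}
```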
> The problem is that there's no way for programs to specify which corner cases they do or don't need.
Wait, are you just asking for (the situationally appropriate equivalent of) `(int32_t)((uint32_t)x+(uint32_t)y)` and/or `#pragma unsafe assert(p!=NULL)`? Because while it's a shame the standard doesn't provide standardized ways to specify these things (as I admitted upthread), programs are perfectly capable of using the former, and implementations are perfectly capable of supporting the latter; I'm just arguing that the defaults should be sensible.
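Spelled out as a helper, the first idiom looks like this (the function name is mine; a sketch, not any library's API). Unsigned arithmetic is fully defined, so the optimizer can't assume the sum stays in range; the cast back to `int32_t` is implementation-defined when the value doesn't fit, but on mainstream implementations it simply reinterprets the bits.

```c
#include <stdint.h>

/* Two's-complement wrapping add, expressed through defined unsigned math. */
int32_t wrapping_add_i32(int32_t x, int32_t y) {
    return (int32_t)((uint32_t)x + (uint32_t)y);
}
```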
In many cases, the semantics programmers would require are much looser than anything provided for by the Standard. For example, if a programmer requires an expression that computes (x \* y / z) when there is no overflow, and computes an arbitrary value with no side effects when there is an overflow, the programmer could write the expression with unsigned and signed casting operators, but that would force a compiler to generate machine code that actually performs the multiplication and division even in cases where it knows that y will always be twice z. Under "yield any value with no side effects" semantics, a compiler could replace the expression with (x \* 2), which would be much faster to compute.
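A sketch of that trade-off in code (mine; the operand types are assumed to be 32-bit, which the comment doesn't specify). The casted form pins down an exact wrapped result, so even when the compiler can prove `y == 2*z` it must keep the multiply and divide; under "yield any value with no side effects" semantics it would be free to emit `x * 2` instead.

```c
#include <stdint.h>

/* Forces full unsigned evaluation: the wrapped quotient is a specific,   */
/* required value, so the mul and div cannot be optimized away.           */
/* For nonnegative operands with no overflow this equals plain x * y / z. */
int32_t scaled(int32_t x, int32_t y, int32_t z) {
    return (int32_t)((uint32_t)x * (uint32_t)y / (uint32_t)z);
}
```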