> Why, most likely because almost all of them are ultimately implemented in C
That isn't right: languages created before and after C exhibit the same behaviour. Some languages do explicitly check the overflow flag every time.
The reason for not always checking (or for never checking) is that in a tight loop the extra instruction can significantly affect performance, especially back when CPUs were a fraction as fast as they are today. The reason for not exposing the flag to higher-level constructs is similar: you have to check it every time it might change and update an appropriate structure, which is expensive; since an overflow should be a rare occurrence, checking every time and saving the result is wasteful.
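To make the cost concrete, here is a rough C++ sketch (my own illustration, not from the article) contrasting an unchecked summation loop with one that checks before every addition; the checked version carries an extra compare-and-branch per element, which is exactly what hurts in a tight loop.

```cpp
#include <climits>
#include <stdexcept>

// Unchecked: the loop body is essentially one add per element; on overflow
// the value silently wraps on typical hardware (and is undefined behaviour
// in C++), which is the problem under discussion.
int sum_unchecked(const int* v, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i) s += v[i];
    return s;
}

// Checked "every time": each addition gains an extra test and branch.
int sum_checked(const int* v, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i) {
        if ((v[i] > 0 && s > INT_MAX - v[i]) ||
            (v[i] < 0 && s < INT_MIN - v[i]))
            throw std::overflow_error("sum overflowed");
        s += v[i];
    }
    return s;
}
```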
The linked article specifically mentions the performance effect of checking overflow flags in software. I believe what it is calling for is some form of interrupt that fires when the flag is switched on in a context where a handler for it is enabled: a fairly expensive event happens upon overflow, but when all is well there is no difference in performance (no extra instructions are run). Of course there would be complications here: how does the CPU keep track of what to call (if anything) in the current context? Task/thread handling code in the OS would presumably need to be involved in maintaining this information across context switches.
The performance penalty you mention ("in a tight loop the extra instruction can significantly affect performance") doesn't happen if you add a new type to the language (that is, what in VC++ is a "SafeInt"). In a really tight loop you wouldn't use that type. The type matters exactly for cases like the Microsoft example I gave: calculating how much to allocate, where an overflow means you allocated far less than intended and never notice. So no, you don't have to "check every time."
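A minimal sketch of that allocation scenario, assuming GCC/Clang's __builtin_mul_overflow (with VC++ you would use the actual SafeInt<size_t> type instead):

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Naive: if count is attacker-controlled, count * elem_size can wrap around,
// malloc happily returns a tiny buffer, and later writes run off its end.
void* alloc_naive(std::size_t count, std::size_t elem_size) {
    return std::malloc(count * elem_size);   // silent wraparound possible
}

// Checked: the multiplication itself is verified; on overflow we fail loudly
// instead of handing back an undersized buffer.
void* alloc_checked(std::size_t count, std::size_t elem_size) {
    std::size_t bytes;
    if (__builtin_mul_overflow(count, elem_size, &bytes))  // GCC/Clang builtin
        throw std::bad_alloc();
    return std::malloc(bytes);
}
```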
The reason it's not in standard C is portability to some odd old architectures that don't have an overflow flag at all. A modern language can clearly be designed to depend on the overflow flag. The cost would be paid only when the programmer actually accesses it (in a modern language: by using such a type), and the cost would be minimal, as there is direct hardware support.
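For instance (my sketch, using a GCC/Clang intrinsic rather than a real language feature): a checked add typically compiles on x86-64 to the ordinary add followed by a single jo, i.e. one test of the overflow flag the hardware already sets for free.

```cpp
#include <stdexcept>

// A checked add; compilers typically emit the usual `add` followed by one
// `jo` (jump if the overflow flag is set) on x86-64.
int add_or_throw(int a, int b) {
    int r;
    if (__builtin_add_overflow(a, b, &r))   // compiles to a flag test on x86
        throw std::overflow_error("signed add overflowed");
    return r;
}
// Plain additions elsewhere in the program are untouched; only calls to
// add_or_throw pay for the extra branch.
```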
> I believe what it is calling for is some form of interrupt that fires when the flag is switched on in a context where a handler for it is enabled
And that is misguided, as it doesn't allow for fine-grained control -- it's all or nothing: either all instructions generate "an interrupt" or none do. If you want to change the behavior from variable to variable, switching processor modes would cost too much. If you instead add new trapping variants of every instruction that can overflow, you'd add a lot of new instructions, which is also bad. The simplest approach is to use what's already there. The flag is there in the CPU; it's not used by the languages the OP mentions, but once a language supports it for a "safe" integer type, it will be checked only when it's really needed: for that type and nowhere else.
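As a rough illustration of such a type (a minimal sketch only; the real VC++ SafeInt also handles mixed signedness, conversions, comparisons and so on, and I again assume the GCC/Clang checked-arithmetic builtins here):

```cpp
#include <stdexcept>

// Only arithmetic done through Checked<T> is verified; plain ints elsewhere
// in the program stay exactly as fast as before.
template <class T>
struct Checked {
    T value;

    Checked operator+(Checked rhs) const {
        T r;
        if (__builtin_add_overflow(value, rhs.value, &r))
            throw std::overflow_error("overflow in Checked addition");
        return Checked{r};
    }
    Checked operator*(Checked rhs) const {
        T r;
        if (__builtin_mul_overflow(value, rhs.value, &r))
            throw std::overflow_error("overflow in Checked multiplication");
        return Checked{r};
    }
};

// Usage: overflow throws instead of silently wrapping.
// Checked<unsigned> n{count}, sz{element_size};
// auto total = n * sz;
```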
Finally, maintaining the exception-handling machinery (what you describe as "how does the CPU keep track of what to call") is something that modern compilers and even assembly writers already take care of, and it is very well understood among them: for example, the Windows x64 ABI expects every non-leaf function to maintain proper stack-unwinding information, so even when I write assembly code I effectively have to support exceptions passing through my code for every non-trivial function I write. So this part is well known, and most of it is handled outside of the OS: the OS merely has some expectations, and the compiler writers (in the broad sense, including the native code generator and the linker) must fulfill them.