Sounds like you're suggesting a causal relationship the other way, though. As per this explanation, putting effort into debugging edge cases will statistically cause the comments to swear more.
Rorschach test for programmers: give your confident gut feeling explanation for this phenomenon.
I'll do mine: there's likely a correlation between needing to maintain a professional conduct which includes forgoing foul language (you're programming at work) and writing code under time pressure where getting a product ready for release is more important than strict adherence to clean programming practice (you're programming at work).
Take almost any two things like this and you're virtually guaranteed to draw out some weak, but quite likely statistically significant, correlation.
What lies behind that correlation is probably an entropic mishmash of so many factors that it defies human explanation, and also defies any attempt to "harness" the forces that seem to appear. It could be that all the siblings to the comment are right all at once.
I'll cop to just glancing at the graphs, but they don't look out of line for this effect to me intuitively.
Also backing this is that more-or-less the same article/thesis could easily have been written for the opposite correlation.
My gut feeling: when you start to put swear words in your code, it indicates that you "breathe" the code and know it inside and out.
The other extreme: if you have no idea what you are doing, you might try to mimic "corp speak" in your code to hide the fact that you actually have no clue.
In other words: it takes some confidence in your ability to assess some aspect of the code to use swear words.
This seems unlikely to be true in this case because the study was looking at GitHub projects, and it seems unlikely the sample had enough code from "uptight" workplaces to have an effect one way or another.
The developer who knows what they're doing is also more likely to be 1) overworked because they do much of the useful stuff and 2) cognizant of bureaucracy which gets in the way of them doing useful stuff.
I believe "go bananas" is a bit of metonymy from "go ape", possibly through "go apeshit", with the idea that bananas are something of a precursor to primate excrement.
I started a project to implement a LZ77 decoder in Game Boy assembly to see if it could compress the sprites in Pokémon Red and Blue versions better than the algorithms actually used in the game. Results are inconclusive so far, but it's been an enlightening experience.
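For anyone curious what the decoder boils down to, here's a rough C model of generic LZ77 decoding over literals and (distance, length) back-references. The token format here is made up purely for illustration; it has nothing to do with the Game Boy assembly or the compression formats the games actually use:

    #include <stddef.h>
    #include <stdint.h>

    /* Decode a toy LZ77 stream: a 0x00 flag byte is followed by one literal
       byte; any other flag byte introduces a back-reference, read as one
       distance byte and one length byte.  Returns the number of bytes written. */
    static size_t lz77_decode(const uint8_t *src, size_t src_len,
                              uint8_t *dst, size_t dst_cap)
    {
        size_t in = 0, out = 0;
        while (in < src_len && out < dst_cap) {
            uint8_t flag = src[in++];
            if (flag == 0x00) {                 /* literal */
                if (in < src_len)
                    dst[out++] = src[in++];
            } else {                            /* back-reference */
                if (in + 1 >= src_len)
                    break;
                size_t dist = src[in++];
                size_t len  = src[in++];
                if (dist == 0 || dist > out)
                    break;                      /* malformed stream */
                while (len-- && out < dst_cap) {
                    dst[out] = dst[out - dist]; /* copy from the sliding window */
                    out++;
                }
            }
        }
        return out;
    }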
Just a security patch, but I like seeing NetHack featured.
Looking forward to 3.7 where they finally address a glaring omission where up until now deities have been weirdly indifferent about people vomiting on altars.
The expression ((INT32_MAX / 0x1ff) <= x) is well-defined and will not cause overflow for any value of x of type int32_t. There is nothing to "optimize away" because there are no inputs that would invoke undefined behavior.
The original code was like this:
    if (x < 0)
        return 0;
    int32_t i = x * 0x1ff / 0xffff;
    if (i >= 0 && i < sizeof(tab)) {
(x * 0x1ff / 0xffff) can only be negative if (x < 0), which can be ruled out because the function would have returned already, or if a signed overflow has occurred, which is undefined behavior so anything can happen. The compiler can remove the (i >= 0) check because the only way it's false is if undefined behavior has already been invoked.
If you add the overflow check you quoted before the assignment, the compiler can still optimize away the (i >= 0) check like before with the same reasoning. Only this time the function will return before an overflow would occur.
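For concreteness, here's a sketch of what that could look like. The function shape and the tab definition are stand-ins, not the article's actual code:

    #include <stdint.h>

    static const uint8_t tab[256] = {0};   /* stand-in for the real table */

    int32_t lookup(int32_t x)              /* hypothetical wrapper */
    {
        if (x < 0)
            return 0;
        /* sledgehammer: branch away before x * 0x1ff can overflow;
           INT32_MAX / 0x1ff is a compile-time constant, so this test
           itself is always well-defined */
        if ((INT32_MAX / 0x1ff) <= x)
            return 0;
        int32_t i = x * 0x1ff / 0xffff;
        /* the compiler may still delete the i >= 0 test, but that is now
           harmless: no input reaches this point after an overflow */
        if (i >= 0 && i < (int32_t)sizeof(tab))
            return tab[i];
        return 0;
    }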
The point of the "sledgehammer principle" described in the article is that UB checks must occur before the UB might be invoked, branching away when they fail. You obviously can't do this either:
    int i = 2 / x;
    if (i != undefined) {
        return i;
    }
    return 0;
Instead, you'll have to do something like:
    if (x != 0) {
        int i = 2 / x;
        return i;
    }
    return 0;
AltGr+8 is reasonable enough that I don't feel like learning a new layout.
Ctrl+AltGr+8 is involved enough that I might as well press Esc.
A lot of software developers are unaware of the AltGr[1] key or even assume US ANSI layout altogether so as a user I have been trained not to take keybindings involving that layer for granted.
As a real-world example of similar issues, there's a piece of software (I think it was telnet or mosh, but I apologize if I misremember) where Ctrl+^ is used as an escape sequence. This doesn't work for me, possibly because caret is a dead key[2] on my keyboard. For some reason, perhaps related to using scancodes instead of key codes, Ctrl+6 happens to work in that application.
Even though parts of the original source survive in the binary and are passed as pointers to a print function, the source code itself doesn't get read at compile or runtime (aside from being read once into the compiler).
If it's OK to read parts of the loaded binary to use as strings, I don't see why it wouldn't be OK to read the whole loaded binary, as long as you don't touch the source code file. I'd simply accept that the platform allows for some fairly trivial quines.
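For the sake of illustration, here's a Linux-only sketch of that kind of "trivial quine", i.e., a program whose output is its own binary image (the /proc/self/exe path is a Linux-specific assumption):

    #include <stdio.h>

    int main(void)
    {
        /* Emit our own executable image without ever touching the source file. */
        FILE *self = fopen("/proc/self/exe", "rb");
        if (!self)
            return 1;
        char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, self)) > 0)
            fwrite(buf, 1, n, stdout);
        fclose(self);
        return 0;
    }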
Anyone know any good articles, tutorials, or books about ABI design? How did we end up with what we have now, and what lessons are there to learn? How would you go about designing a new one from scratch? Would this fall more under the operating systems or the compilers discipline?