My favorite example of a "this should never happen" error was when I got a call from a customer, who started the conversation by asking, "Who is Brian?".
I was caught a bit off guard, but I assumed the customer must know someone at the company, since Brian was the name of the previous electrical engineer/firmware programmer. So, I told them that Brian didn't work here any more, but was there anything that I could help them with? The customer said, "Well, the device says that I should call Brian". I was confused by this, and asked a lot of questions until I determined that the device was actually displaying "CALL BRIAN" on the LCD display.
This was quite unusual, and at first I didn't believe the customer, until he sent a picture of the device showing the message.
So, I dug into the code, and quickly found the "Call Brian" error condition. It was definitely one of those "this should never happen" cases. I presume that Brian had put that in during firmware development to catch an error case he was afraid might happen due to overwriting valid memory locations.
I got the device back, and found out that the device had a processor problem (I don't remember exactly what) that would write corrupted data to memory. So, really, it should never happen.
That particular device has now been in production for 10 years, and that is the only time that error has ever appeared.
"Hi alttab, i tried to use your program but it closed and displayed a message saying 'segregation fault' or something...i'm not a racist, i love all people, please give me a call back"
I saw one like this once. Back in the early 90s I was working at a computer lab at my university. We had just gotten in a 300MHz DEC Alpha, and that thing was a screamer! It was so fast that X-windows didn't feel slow on it! (And this was in the day of 25-50MHz 386s and 486s.)
I was compiling some tiny test program on it, and it spit out an error message that said something to the effect of "This shouldn't happen. Email Dave and tell him what you did - david<something>@digital.com." I ended up forwarding it to our IT department, who I assume sent it on to DEC. I don't know if Dave ever saw it or not, though.
I remember getting this message myself back in those days, on my brand-spanking-new DEC Alpha, which shipped with a 'pre-beta' compiler to those of us who were avid recipients of DEC's first batch of Alpha workstations in anticipation of a strong porting effort to get away from the "MIPS situation" at the time... heady days indeed!
Yeah, honestly, as a one-off sort of thing, this sounds awesome haha. You could search the code for it, find the relevant piece immediately, and the user was prompted to call you guys quickly to get it resolved!
How do you do that? I get that on bootup you could run a quick self-test, but how would you know if random bits are getting flipped after that? Seems tricky for an embedded device...
Not quite for memory _corruption_ but back when I was writing API code in C, I would place 'sentinels' at each end of my structs.
struct somestruct {
    int s1;
    int data;
    char *moreData;
    int s2;
};
When the caller of the API needed to call my code, it had to first call a function to get an instance of the struct. This constructor-like code would allocate the memory for the struct, and then set s1 and s2 to 0xDEADBEEF.
The user would then fill out the rest of the struct and pass it back in as an argument to another call.
If either s1 or s2 wasn't 0xDEADBEEF, I would throw an error to the caller.
It helped me catch a lot of cases where the caller of the API had overrun some string while filling out the inputs.
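Roughly, the shape was something like this (a from-memory sketch, not the actual API; the names somestruct_create, somestruct_validate, and SENTINEL are made up for illustration):

#include <stdlib.h>

#define SENTINEL 0xDEADBEEFu

struct somestruct {
    unsigned int s1;   /* sentinel */
    int data;
    char *moreData;
    unsigned int s2;   /* sentinel */
};

/* The "constructor": the only sanctioned way to get an instance. */
struct somestruct *somestruct_create(void)
{
    struct somestruct *p = calloc(1, sizeof *p);
    if (p != NULL) {
        p->s1 = SENTINEL;
        p->s2 = SENTINEL;
    }
    return p;
}

/* Called at the top of every API entry point that takes the struct back. */
int somestruct_validate(const struct somestruct *p)
{
    /* If the caller overran a string or array while filling the struct in,
       odds are one of the sentinels got stomped. */
    return p != NULL && p->s1 == SENTINEL && p->s2 == SENTINEL;
}

Each real API call would then start with something like if (!somestruct_validate(arg)) return SOME_ERROR; (SOME_ERROR being whatever error convention the API used), which is the "throw an error to the caller" part.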
This reminds me of something a friend of mine did once.
He had a structure that was getting overwritten with garbage due to an overrun somewhere else in the code. Rather than debugging and trying to find out what was doing it, he just put "char temp[1000];" at the top of the struct to "absorb the damage".
I believe it's still running like that in production to this day.
The code above got written that way because at my first job, I inherited a godawful business charting API written by the lead developer.
The input to the API was a struct with 70-80 members that the caller had to fill in and there were no defaults for anything! Naturally there were not just scalars, but lots of arrays and strings in the struct, which could easily be overrun or often left null.
The users, quite understandably, didn't fill out everything, which led to frequent crashes in _my_ code because that's where the pointers would get dereferenced.
When they would see that the crash was not in their code, the users of the API would punt the error to me even though it was their bad input that caused the problem. This would happen 10-12 times a day.
I rewrote the entire thing in a paranoid style, employing the trick above and others to try to ensure that if there was bad input, it would always crash on their side of the fence.
After I was done, I got one legitimate bug report for the code, even though it was in use worldwide in our medium-sized company.
However this might not have caught the error condition described upthread. That condition might have overwritten data or moreData without touching s1 or s2.
"This should never happen" is a design pattern of defensive programming. This is the same pattern for assert.
The usual use is to catch errors caused by misuse of a method. There is some invariant that the method assumes but is not enforced by the type signature of the interface. So if something goes wrong in outside code, or someone tries to use the method incorrectly, the invariant is not satisfied. When you catch such a problem, the current code context is FUBAR. The question is how aggressively to bail out: spew errors to a log and proceed with some GIGO calculation? Throw an exception? Exit the program?
It's a sub-pattern of "ain't got time for dat". The developer knows that the condition should never happen, but is not inclined to prove it (as represented by coding type checking or other handling) yet realizes it shouldn't be ignored outright (if only to document the unproven condition in code, or to shut the compiler up about incompleteness warnings).
It's also for cases when you know it can't happen. For example, Java's String class has a method
public byte[] getBytes(String charsetName) throws UnsupportedEncodingException
Since it can throw a checked exception you have to catch it, which generally is fine, but consider this case:
someString.getBytes("UTF-8");
This call can never fail (support for UTF-8 encoding is required in Java) but in my case I have to do something with the exception in the catch statement or our static code analysis tool will start complaining (and rightfully so). So that's where I'll log a 'can't happen error'. It truly cannot happen.
Also in Java: switch statements or if-else chains on an `enum`. It's still good practice to include a final `else` or `default` case, even though it should really never happen. Actually, the compiler will force you to include the `default`/`else` case if omitting it would leave a code path that doesn't return a value. [0]
enum Whatever { FOO, BAR }

if (whatever == Whatever.FOO) {
} else if (whatever == Whatever.BAR) {
} else {
    // Should never happen!
}
FWIW getBytes(Charset) doesn't throw, and there's a base set of charsets in StandardCharsets (1.7+):
someString.getBytes(StandardCharsets.UTF_8);
(Charset.forName doesn't throw a checked exception either; StandardCharsets avoids stringly-typed code but isn't available on 1.6, so if you're still stuck there Charset.forName works)
Swift has the force-unwrap operator ! and its exception-handling variant try! for that. Sometimes you know that the exception case in the API will never arise, and the appropriate thing to do is to crash and let the programmer know that one of their assumptions is wrong. For example, you might be parsing JSON data that was generated within the program itself; normally JSON deserialization can fail for malformed JSON, but if you just constructed that JSON string within the same function and passed it directly, you know it's not gonna fail. It's pretty handy to ignore the error and turn it into an assertion in these cases, although this power should be used judiciously.
getBytes is poorly designed. In a safety-oriented language like Haskell or Rust, the set of encodings would be represented as an ADT (which forms a closed set) or a typeclass (open set). All possible type-correct encoding arguments would be safe.
An ExceptT or Maybe monad for handling encoding errors feels a lot like throwing exceptions, although they are less disruptive than exceptions. I'd probably represent a decoder as a function with type ByteString -> ErrorT ParseError m Text, which is neither an ADT nor a typeclass. It's a 3rd solution. Either that or an Attoparsec parser, which is probably equivalent. An encoder seems like it shouldn't fail at all, but if it eventually forks out to one of the C locale functions I can see it throwing errors too.
Meanwhile, in the real world, Data.Text.Encoding uses a 4th solution implementing decodeUtf8With and encodeUtf8 that ultimately represents the UTF-8 encoding as a pair of FFI functions with these signatures:
text-icu also ultimately represents an encoding as an opaque pointer returned by the ICU library, and works in the IO monad. So it too could fail in similar ways. Errors throw an exception of type ICUError, which the caller can catch using the 'catch' function from Control.Exception.
The encoding library does use typeclasses like you suggest, but I'm not sure anybody uses it. Sometimes people drop in #haskell and complain about that library, and the response is usually "don't use that; use the one in Data.Text.Encoding instead".
I don't use Rust, but if the language is at all practical, I imagine they shuttle their equivalent of pointers and bytestrings around and depend on foreign C libraries and locales just the same. Probably they don't want to change the core library every time the Unicode Consortium publishes a new encoding scheme, so I can't imagine them exposing only a closed type.
So, looking purely at the signature and comparing it to examples from a language you suggested, it doesn't appear to be poorly designed at all. It's exactly what I would expect and want in any language, and the library consensus seems to agree. I think you're just imagining the grass being greener on the other side.
In Rust, our main two string types are String and &str, which are both UTF-8 encoded. For interoperability with other things, we have additional types that you can convert to/from. http://andrewbrinker.github.io/blog/2016/03/27/string-types-... is a recent overview in a blog post.
You still run into the problem with other functions, though. For example, in Haskell, 'tail' is a partial function - it's undefined if the list is nil. If, in your code, you write:
xs = if p x then concat [[x, "bar"], foos] else "baz" : foos
ys = tail xs
Then you know that the call to tail is not going to fail in your code, because the input has a guaranteed minimum length of either 1 or 2. You can't make that same guarantee about tail in isolation, though.
Sometimes you also need a user-provided encoding (think of editors). In that case, the exception makes sense. Haskell or Rust would need to provide an extra API for this case. But generally you are right, stronger type checking would be preferable. Anyway, I dislike APIs which take a String but only support a strongly limited subset of strings. In that case, a dedicated type is a much better fit.
This one bothers me every time. Other parts in the library provide a checked and an unchecked variant to achieve the same. If you put in a hardcoded string, you know it never fails.
BTW I made an (almost religious) habit out of ensuring that everything I touch is encoded UTF-8 or can be converted to that, as I have been bitten hard several times by unexpected encoding stuff. Therefore, the above problem catches me pretty often.
I often use such error checking. Usually the value of such an error message is in making the code easier to reason about, to further clarify some obscure use case (which can't happen). And if it does happen anyway - well, at least we get that alert. ;)
Except that you never know how the code you wrote in a module will be used by other developers in the future. Such defensive programming will at least help others prevent mistakes when using your code.
Note that gcc and clang's __builtin_unreachable() are optimization pragmas, not assertions. If control actually reaches a __builtin_unreachable(), your program doesn't necessarily abort. Terrible things can happen, such as switch statements jumping into random addresses or functions running off the end without returning.
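A rough sketch of the distinction (my own illustration, not from the comment above; assumes GCC or Clang for __builtin_unreachable()):

#include <assert.h>

enum color { RED, GREEN, BLUE };

/* Defensive version: a bogus value dies loudly, at least in debug builds. */
const char *color_name(enum color c)
{
    switch (c) {
    case RED:   return "red";
    case GREEN: return "green";
    case BLUE:  return "blue";
    }
    assert(!"this should never happen");
    return "unknown";
}

/* Optimization-hint version: the compiler is told the default case is
   impossible, so actually reaching it is undefined behavior, not an abort. */
int color_is_red(enum color c)
{
    switch (c) {
    case RED:
        return 1;
    case GREEN:
    case BLUE:
        return 0;
    default:
        __builtin_unreachable();
    }
}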
Sure, these aren't for defensive programming—they're for places where you know a location is unreachable, but your compiler can't prove it for you. The example given in the rust docs, for example, is a match clause with complete guard arms (i.e. if n < 0 and if n >= 0).
Disagree on this. It has nothing to do with efficiency in context of unlikely events. As others have noted here, it is effectively an assertion of expected language/system/operating-environment properties. Think axioms.
No, I'm primarily thinking of "ain't got time for dat", as in "there's a very real deadline, I have a lot of other things to get done, and this case isn't ever going to happen and I don't have time to prove it to the compiler."
Well, it's used in many, many ways and places, and that's one of them. Often, I'm using it after a two or more branch decision tree, where each branch returns. If someone refactors the code at some point and removes a return, or changes the condition to allow a case to fall through, it catches that.
You could be inclined to see it as "is not inclined to prove it", but I prefer to think it happens mostly because someone didn't think they changed something that could affect that (i.e. "I was sure that simple change to the boolean expression was equivalent when I made it...")
> is not inclined to prove it (as represented by coding type checking or other handling)
This is known as a reachability problem that in the general case cannot be proven. So, "not inclined" may actually mean "can't (in any reasonable amount of time)".
It's not "in any reasonable amount of time". Proving in the general case whether a variable is unused in code or not is equivalent to solving the halting problem. This follows from Rice's theorem:
Yes, but it's more nuanced than that. Even if you can prove that a computation always terminates you can't necessarily prove that it yields the wanted result in any reasonable amount of time. This is the bounded halting problem, and it applies even to languages that only allow terminating computations. In those languages, the halting problem is nominally gone, but the bounded halting problem is just as bad as for Turing-complete languages. Just how bad is it? It is a time complexity class that includes all time complexity classes, i.e. it is harder (in general) than any problem that is computable and known to complete within f(n) steps, where n is the size of the input and f is any computable function.
I came to say much the same thing, but even take it a step further and point out that proofs are subject to human error as well. More powerful programming constructs are a great tool, but at some point it's turtles all the way down. The Knuth quote comes to mind: "Beware of bugs in the above code; I have only proved it correct, not tried it."
And this pattern is exactly why I prefer compile-time type safety in my languages. The pattern is still sometimes necessary, but there is a whole class of errors it gets used for that you can often eliminate.
What's interesting is that (as described in the present top comment on this article, about "CALL BRIAN"), if the abstraction of "type safety" is leaky (as it is, e.g. in the presence of memory or hardware errors), this kind of paranoia can actually have real-world benefits even when static analysis can prove the offending code path is impossible.
Sometimes the important artifact is the executable in the larger context of the deployed system, rather than the code you generate it from.
There is nothing preventing modeling the contextual environment within static analysis. Static analysis/type systems help the programmer draw the line between the known and the unknown. Some conditions are just not practical or efficient to check for. However for the context you use it within you can make certain guarantees about the code. This is still incredibly useful even though it doesn't guarantee an error can never take place.
You say it "is" leaky in the presence of memory corruption. However that is not necessarily true. One could model software memory verification within a type system. Meaning, you could guarantee at compile time that each time a variable is read it is verified via checksum against its last written value. This would not be particularly efficient, but the point stands that type systems can be used (and should be used) to model hardware failures.
This is no different than network link failures, etc...
That is exactly the point, to mitigate it. Just because you can't mitigate everything doesn't mean it is less valuable. It is still extremely valuable to mitigate the things you do (or choose to) have control over.
In typed languages this is pretty often circumvented by choosing the wrong type. Java (even the standard library) is literally littered with APIs which take a String where the parameter does not have the semantics of a string (a bunch of characters with no meaning). The worst offenders take a String, support only a very limited subset, and offer no explanation of which values are valid.
I agree that it does demonstrate defensive programming.
But can I also just add that the error message remains unhelpful and outright "bad." You should absolutely have checks for "impossible" situations, but when those checks fail you need some way of determining which check failed (and you cannot always assume you'll have a stack backtrace, in particular if an end user is telling you the error message).
For example you could do this: "Impossible Error in GetName(): {Exception}"
Using github to search like this reminds me of how a CS professor of mine would show the "best commit messages of the year" (homework was submitted via git) by looking for various patterns like all caps, all symbols, etc.
"The Strange Log" is a Twitter account that tweets bug fix comments from games' release notes. Without context, the comments can be rather mysterious or surreal:
Spouses less likely to run away into the dark abyss.
Bald inmate digging grows hair.
Player can die from lava while praying successfully.
Colonists will no longer stare each other to death.
Trash monsters will now have a chance to drop the intended cat ear colors.
Suicide animation speeds up search for apples, berries.
Fix trees not going to their burnt state when they go to sleep while on fire.
Potions are tasting much better now, especially the harmful ones.
The Pinking Shears stir from their slumber, awakened by what may seem, to those innocent in the ways of The Shears, a triviality, a nothing-of-consequence. But there are consequences indeed for recklessly trailing your whitespace. Naturally, they are dire!
One, two! One, two! And through and through
The Pinking Shears went snicker-snack!
They plucked your tail and with your space
They went sniksnuking back.
Let me tell you, that can be uncomfortable, so always pre-sniksnuk your trailing whites. May The Shears be with you.
I definitely aim to remove things like that, but I tend to sneak it in with something else (partly because the fix likely came as soon as I used an editor properly configured to strip it).
I'm not sure if that's a 'bad' practice or just not the purest. But I don't think removing trailing whitespace alongside another change is bad in itself. As above, it probably didn't take any time at all; done automatically by a well-configured editor.
What really struck me about this is that for me, personally, it would be very difficult to use both of the skill sets demonstrated here at the same time: coding and writing prose.
In case anyone looks at that and thinks it's cute or clever or fun... it's not. People will end up hating you for doing that. Okay, maybe hate is too strong, but it will certainly engender some strong negative feelings in anyone who has to try to figure out what you did (and when and why you did it).
A common objection from new developers is that "commit messages are hard and they slow me down", so if that's how you feel (for example, you're starting a fresh project): go crazy with your commits and don't let them slow you down. But then rebase, squash, and edit the commits before sharing them.
If a developer consistently pushes commits like this, they should be guided by their team lead to understand why commit messages matter. But if over time they refused to improve them, in many cases that would eventually be fireable. Commits are the technical paper trail, whether used when diagnosing issues, merging, compiling release notes, or whatever. Making the messages clear is extremely important.
This is pretty simple to do outside of die-hard "continuous integration" shops -- most of the time I can just do
git rebase -i HEAD~4
(where '4' is the number of little dumb commits I made) and squash them all into the first commit.
But if I made a few commits, then pulled in someone's changes, then made a few more... well... http://xkcd.com/1597/
Hahaha, I went through the masters of comp sci program there. Those commits mentioning Borja cracked me up, and I swear "i love the smell of segfaults in the morning" was written on the wall in the big lecture room in the physical sciences building. Gives me flashbacks - that program was brutal (though really good).
Just looking at this one course and the projects... This is night and day harder than anything I had to do in my CS program. We just dicked around in Java for most of it, no C, and definitely no low level socket programming or reproducing an RFC.
I had (1992, man.ac.uk) VDM, Pascal, SML, Prolog, midi-port communication in 68000 assembly, Pascal-with-embedded-Oracle, Tarski's World, etc., and not a single damned thing from that course has been useful since.
Yeah I came in thinking it would be easy after crushing the programming pre-req tests. Was in for a complete surprise - they don't mess around at UChicago.
In the masters program you can somewhat build your own degree after taking their core classes, so you can at least study what you're really interested in (or avoid the hard stuff like a lot of people did). Most of the profs had real industry experience too. A C++ class for example was taught by a guy who sold his company, now is a research fellow at a huge lab, and sits on the standards board. We got to bounce questions off Bjarne Stroustrup. Was pretty cool.
And youR* reply is about as useful as the whole thread, the reporting of other people's laziness about committing proper descriptions of their own code. What a surprise, people are lazy and it bites you in the ass in the long run wowiwow, what a surprise...
Perhaps if this type of post didn't come by every X time, it wouldn't annoy me so much.
My favorite part is the Java project that has an exception class called ThisShouldNeverHappenException [1]. Only in Java would someone create an exception class for a condition that should never happen :)
Not just Java. I've seen similar classes in C++ and C# to indicate things which should never occur/are clearly bad programmer mistakes/... Think InternalErrorException/DevFailedError etc. Sometimes it's just a sane thing to do, and using such names means you don't need to write the dreaded 'should never happen' comment manually anymore.
To elaborate a bit, in C++ the relevant exception class is called `std::logic_error` (contrast with `std::runtime_error`). I like it. It is a bit more descriptive of the actual situation than "this should never happen".
Assertions can be disabled at runtime. If your goal is to call out the fact that something has gone fundamentally wrong with your program's state, an exception is the way to go.
I don't think it is a programmer error. Many times when you program against some less reliable API or do some network stuff, the variety of exceptional situations can be really overwhelming. Such exceptions can be a sort of sink, to evaluate later whether you still need to improve the code to handle some very specific corner cases.
Exceptions in Java exist to be raised. In Go there is a panic() call, which semantically sounds to me very much like ThisShouldNeverHappenException. It doesn't make Go worse and Java better, though.
GitHub's search is pretty interesting: every time I refresh the search page it shows a different number of results: 18,401,830; 17,751,631; 15,995,799.
Which is anyway quite a lot of results, but then this search finds ThisShouldNeverHappenException, the string "this should never happen", and stuff like
// *This* gets run before every test.
if (b > d) {
    fail("XX *should never happen*");
}
Usually, you'd never actually _count_ the expected results for such stuff. Instead, you'd return the estimated number of records (think SQL EXPLAIN). I gather the number seems very dynamic because of the constant stream of commits.
All shards compute their own estimate, and send the result in a nondeterministic order. My explanation is that the merge part of HLL depends on the order.
Line 95 of this method [1] from this implementation [2] shows a max(mask(x), mask(y)), which is not associative but is a close proxy to max() (which is associative).
Sometimes you have to satisfy the compiler because it has less information about a situation than you do. Ideally this would be captured in the type system, but limitations (language, project, politics, time, etc) may prevent this. As another comment mentions, it's a kind of invariant check.
For example, based on external information you might know[1] that a condition inside of a loop will always be hit exactly one time. Your compiler or tools might not be able to determine that same thing. It may try to force you to do something like assign a value or whatever you did in that condition, that it can't guarantee has happened. In such a scenario, it might[2] make sense to have something like "this should never happen" after the loop, with a brief comment explaining why you've done this (a small sketch follows the footnotes below).
[1] I think this is the crux of the issue. We programmers often think we "know" something, but it might be an incorrect assumption. IMO part of being a good programmer is examining your assumptions at every step. The chasm is vast, between "the framework strongly guarantees X" and "the function that gets called before this one has done X already". The former is OK if you want to get work done, while the latter is much more brittle and possibly dangerous, depending on the level of coupling you're willing to accept.
[2] Nine times out of ten, a reorganization of the logic makes more sense. However, I do think there are scenarios where this pattern is the best choice given the options.
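To make the loop case concrete, here's a contrived C sketch (my own illustration; the settings table and the lookup function are invented for the example):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct setting { const char *name; int value; };

static const struct setting table[] = {
    { "timeout", 30 },
    { "retries", 5 },
    { "default", 0 },
};

/* Caller contract (known from "external information", not enforced by the
   signature): name is always one of the entries in table[]. */
int lookup(const char *name)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].name, name) == 0)
            return table[i].value;   /* per the contract, hit exactly once */
    }
    /* This should never happen: the caller guarantees name is in the table.
       The compiler can't know that, though, and this path still has to
       return (or bail out) somehow. */
    fprintf(stderr, "lookup: this should never happen (%s)\n", name);
    abort();
}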
In my experience, "this should never happen" cases often are a sign of very brittle design that branches into many separate but nearly-identical paths, and could be simplified to remove them. The other thing it points to is bad error handling paths (assuming that an error could "never happen".)
Also funny to see Java being the most verbose as usual, with its ThisShouldNeverHappenException.java
Actually they at least need to know about a RuntimeException then.
So that would've worked, too:
throw new RuntimeException("This should never happen!");
There would've been a difference between an Exception and a RuntimeException.
Also, sometimes this kind of RuntimeException happens when you convert one type into another and want to explicitly handle all cases, something like this:
interface A {}
class AA implements A {}
class AB implements A {}

if (x instanceof AA) {}
else if (x instanceof AB) {}
else { throw new IllegalStateException(); }
It's sometimes better to explicitly handle all cases instead of using the last else for the AB branch, since this will sometimes be extended later, or the compiler would have thrown an error anyway because you have a return inside the if or else if.
Btw, Kotlin and Scala don't have this problem, thanks to pattern matching.
I found myself doing something similar to this in Java more than any other language.
Usually because some insane abstraction was built over a concept, where some of the implementations could throw something like IOException and some could not, but of course the API declares IOException as a checked exception.
Totally THIS. Many C coding standards require that every return value be checked, every "switch" statement have a default case, etc. Programmers comply, but sometimes the conditions they're checking truly are impossible, so you end up with messages like these. By contrast, coding standards for languages with exceptions typically do not require such constant checking in the code. An uncaught exception is the moral equivalent of "this should never happen" but doesn't show up in a search like this. Without even getting into the issue of whether C programmers are really more diligent than Java or Python programmers (for example) there's a measurement problem to contend with here.
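For instance, a standard-mandated version might look roughly like this (an invented illustration, not from any particular coding standard):

#include <stdio.h>

enum state { IDLE, RUNNING, DONE };

static const char *describe(enum state s)
{
    switch (s) {
    case IDLE:    return "idle";
    case RUNNING: return "running";
    case DONE:    return "done";
    default:      return "this should never happen"; /* default required by the standard */
    }
}

int main(void)
{
    char buf[64];

    /* snprintf into a buffer this size can't realistically fail here,
       but the return value gets checked anyway. */
    int n = snprintf(buf, sizeof buf, "state=%s", describe(RUNNING));
    if (n < 0 || (size_t)n >= sizeof buf) {
        fputs("this should never happen: snprintf failed\n", stderr);
        return 1;
    }

    puts(buf);
    return 0;
}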
I didn't even notice that. Good observation. I wonder if that's simply because C is a lot older and may just have a lot more dead code or this type of code or something else entirely.
It is because C has no exception handling, and C coders still want to be sure.
For example, once I had a game where I ended up putting a couple of "should never happen" checks in my code related to some OS stuff, and... the "should never happen" happened once. After figuring out how to reproduce it, it turned out to be a driver bug (or something like that; it happened years ago, I don't remember the details anymore, only that it went away after I switched from ALSA to OSS4 on my Linux box).
EDIT: should never happen is good against compiler bugs too, I've seen my fair share of them.
Maybe it's because C coders want to know why something happens more than coders in other popular languages; say popular since the frequency of an occurrence is proportional to the volume of source code in a given language.
Not trolling here. I'm curious why this is getting so much attention. Isn't saying "this should never happen" just making an assertion about the behavior of your code? As far as the apparent inconsistencies in the code seen in the results go, is it really fair to judge a random piece of code from a random person (and completely out of context)? Don't we all have scratch/experimental/incomplete code hosted on our github accounts?
But what they mean is, 'This should never happen unless there's a serious problem upstream', and so these sorts of Asserts and Throws are actually very useful. It's all about 'Fail Fast'.
I usually annotate should-never-happen asserts with the most plausible explanation of how it _could_ happen, for the sake of some poor guy unlucky enough to have to debug my code.
E.g. "Should not happen — probably a bug in Apache Commons Math?"
Or "Shouldn't really happen, barring compiler bugs or cosmic rays"
As they told us, there is no such thing as a probability of zero.
call error("in omatch: can't happen");
- Line from omatch routine on pg. 146 [1].
Snips from discussion in [1]: "We can expect problems, therefore, and should prepare for them. ... Garbage is bad enough, but garbage which is expected to contain a count to tell you how long it is can be much worse. ... The first time we ran this code it said 'can't happen' We got that message perhaps a hundred times in the process of adding the rest of the code... This experience speaks for itself; if you're going to walk a high-wire, use a net."
'Can't happen' is as much a pattern as 'Hello, World'... and it has the same genesis.
[1] Brian Kernighan, P.J. Plauger, 'Software Tools', Addison-Wesley 1976
One of the most common causes of failures is cases that the programmer never considered. One of my favorite test coverage tools shows you missed branches, and I find that invaluable.
My initial reaction was "oh no" but as I thought about it, this explicitly indicates that the programmer actually thought about a case.
And is it any different than what most of us do in our unit tests? If we're expecting an exception that isn't thrown or the wrong exception is thrown, we force a test failure.
One of the goals our team is working towards is more robust and complete metric and log collection. We specifically want to capture exceptions that make it to the application server for analysis, but this assumes that the developer has a) considered all cases and b) caught intermediate exceptions and continued processing (or abort).
^^ The above links to a "Why do this always happen" search on GitHub, likely to express that if something that should never happen occurs, then you need to understand why it happened, which is often hard, since it never happens. Often the solution is to be able to reproduce a bug by being able to play back what happened.
EDIT: Ha, turns out the link is just an attempt to prove that bugs that should never happen occur ten times more in C... Which is questionable.
One product I worked on a number of years ago had a CantHappen() function with a simple implementation. It displayed this message box:
You are not here.
Another message was in the Mac installer when there wasn't room to install:
Your hard disk is too small.
That's the complete text of both messages, and yes, it displayed them to the customer. <sigh>
After seeing these, I started going through the code and found a bunch of other rude, confusing, or jargony messages. It actually turned into a fun little project cleaning these up!
There are almost twice as many "should not get here" results[1], which I think would mostly be used in the same situation, in case someone is looking for more.
I remember one place I worked had an error thrown that showed as hex code "48 45 4C 4C 4E 4F". As far as I know it only occurred once in test when someone did something epically stupid as a patch to the code preceding it. We removed the patch and never saw the code again. Have no real clue who put it in there, as code control was a later addition.
Cray Research had a linker named SEGLDR that was written in Fortran, whose STOP statement allows an optional string message, and so it was used for run-time assertion checking like
IF(.NOT.CHECK()) STOP 'xxx'
Anyway, somebody (not me) got into trouble when Very Serious Customers were offended by seeing the occasional STOP DAMN message at link time.
Never ever EVER put bad language in any unexpected error cases or logs - even as you're developing it... it WILL somehow magically make its way to production, and it WILL appear!
Not only will it manifest itself in production, it will do so at the time and with the user most likely to cause great embarrassment.
[NB There is a related phenomenon to do with a demo of a system that includes "adult content" (i.e. porn) - the likelihood of a user randomly stumbling upon this content is practically 100%, even if this content is a tiny part of the overall demo].
I remind myself frequently not to swear in code as I am quite juvenile on a normal day. If I'm struggling I'll just make up a word, so that if someone else ever sees it I can say it is an acronym whose meaning I have conveniently forgotten.
I had a hearty chuckle when learning about Android's Log class: the highest level of importance for a message is WTF, documented as "What a Terrible Failure".
Wow, deja-vu, this is just what happened to me one time when I was working on a new booking system for a hotel.
I was working on site over the winter season when the hotel was closed down, a lovely old hotel high up in the Rockies. Every day I sat down and wanted to write some brilliant code but all I ever ended up writing was the same comment
All objects and no functions makes Jack a dull boy.
over and over again.
/**
 * This should never happen exception. Use in situation that really shouldn't happen...NEVER
 */
public class NeverHappenException extends RuntimeException {
Could it be that some of these "This should never happen" checks can be deduced by optimizing compilers and never exist in the assembly?
Not talking about the interpreted stuff.