Avoid exception throwing in performance-sensitive code (lemire.me)
166 points by kristianp on Dec 22, 2022 | 240 comments



Different languages have different exception handling optimizations. A Java version of the example can run very slowly or very fast, depending on how clever you are.

When a new RuntimeException is thrown half the time, the example runs about 650 times slower when compared to a function which adds up the integers without using exceptions. If I define an exception subclass which doesn't fill in the stack trace, then it runs about 10 times slower. If I throw a singleton exception instance, then the performance is identical.

The performance is identical because HotSpot inlined the code and converted the immediate throw-catch into a simple goto. It wasn't smart enough to see that the stack trace wasn't needed, nor that allocating new instances wasn't needed. I had to make those transformations manually.
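
Roughly the shape of those manual transformations (a minimal sketch with made-up names, not the exact benchmark code):

    class FastException extends RuntimeException {
        // Pre-allocated singleton: throwing it allocates nothing on the hot path.
        static final FastException INSTANCE = new FastException("negative value");

        FastException(String message) {
            // cause = null, enableSuppression = false, writableStackTrace = false:
            // skips the expensive stack-trace capture entirely
            super(message, null, false, false);
        }

        // Hot loop: throw the singleton instead of new RuntimeException(...)
        static long sum(int[] values) {
            long total = 0;
            for (int v : values) {
                try {
                    if (v < 0) throw INSTANCE;
                    total += v;
                } catch (FastException e) {
                    // skip negative values; after inlining, HotSpot can turn an
                    // immediate throw/catch like this into a plain jump
                }
            }
            return total;
        }
    }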


Moreover, different languages implement exception handling differently.

The article focuses on C++, which has a notion of object destructors (most, but not all, other programming languages don't).

Implications for exception handling are manifold: upon entry into a «try» block, a C++ compiler has to account for all objects created at method (or function) scope up to this point and register their corresponding destructors in the exception-unwinding table. Then, since C++ allows objects to be created on the stack (via RAII or an explicit object declaration), the call frame has to be correctly accounted for as well. Both are computationally expensive things to do.

When an exception is thrown, the «throw» statement results in a reverse walk of the registered destructors first (apart from the objects created on the heap), followed by adjusting the frame pointer and placing an exception object on the stack before returning from the method's (or function's) exception handler.

All of that takes many CPU cycles and wreaks havoc on instruction scheduling, pipelines, the TLB and stuff, therefore making the exception handling very expensive in C++ with little room left for optimisations. Exception handling performance in earlier revisions of C++ was abysmal. It is also all C++ specific and does not apply to other programming languages.

Java, for instance, doesn't do that, and leaves the heap clean-up (where all Java objects are created anyway) to the garbage collector, so the exception handling is less taxing in Java – at the exception raising point.

P.S. The above is a gross oversimplification of how exception handling works in C++, but it should give a rough idea of why the author observed a slowdown of orders of magnitude.


It also depends on the implementation; for example VC++ can piggyback on Win32 structured exception handling, which other OSes don't have.


> converted the immediate throw-catch into a simple goto.

This is very interesting. Neither GCC nor Clang does that: they represent exceptions as abnormal edges out of a basic block and don't optimize further. I guess in Java exceptions are common enough that it is worth the additional effort of transforming some of these cases into normal jumps when the destination is seen, while in C++ it is more of a vicious circle of exceptions not being optimized because they are uncommon and being used sparingly because they are not optimized.

It is of course possible that Java exceptions semantics are such that they might be easier to optimize (lots of observable side effects in C++ unfortunately).


Thanks for this short and fine example of Java optimization. I still have a lingering loathing for the language based on the 2000s era marketing, but technically speaking there's rather a lot to like about the java ecosystem.


Just because it started off that way does not mean it remains so. As a matter of fact, it has come along very nicely in the past few years with many cool features including virtual threads, sealed types, pattern matching, records, and more.


Yeah to be fair I have to give this same kind of spiel for JavaScript too.


I must admit I use exceptions heavily for validation, e.g. checking input at API bounds. It makes the code fairly clean. I would imagine this is considered bad practice, but if there was no overhead this seems preferable over wrapping every call in some wrapper object. Any good links to further info on this?


Food for thought: what are the expected consequences of the exception?

If the error will stop the program flow and show a warning dialog to the user, it's useless to think too much about performance. More or less the same if it's going to log some message and abort the operation.

What is usually frowned upon is using the exception as a kind of goto for normal flow of the program. Exceptions should be... the exception.

Otherwise all this performance brouhaha is a waste of time.


An exception is a way to return from a function abnormally. So yes, they are for exceptional conditions. IMO it's perfectly fine to use them for input validation. The Java standard library itself does it all the time.
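
For instance, in the same style the JDK itself uses (a trivial sketch, made-up method name):

    import java.util.Objects;

    class PortValidation {
        // Objects.requireNonNull throws NullPointerException and
        // Integer.parseInt throws NumberFormatException on bad input;
        // the range check follows the same convention.
        static int parsePort(String raw) {
            Objects.requireNonNull(raw, "port must not be null");
            int port = Integer.parseInt(raw);
            if (port < 1 || port > 65535) {
                throw new IllegalArgumentException("port out of range: " + port);
            }
            return port;
        }
    }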


Java is the... uhm... exception here.

That's not how idiomatic C++ is. Nor Rust. Nor Go.

One of Java's (incl standard library) main design flaws is overuse of exceptions.

It should not be emulated. It's too late to fix Java, but that doesn't make it a good idea.


Rust uses Result. Checked exceptions are Java's analogue of Result.


I disagree. Exceptions are a fundamentally different control flow. Exceptions are exceptions. Return values are not.

I would not call std::optional in C++ any form of checked exception, and the difference isn't that std::optional doesn't carry value-missing metadata.


So what do you do in C++ if the input to your function is not what you expect?


I think a language's standard library sets a good pattern for how to write idiomatic code in that language.

C++ throws on memory allocation error, but that's about it. Memory allocation errors are special in all languages. Because of lazy allocation and overcommit, unless you specifically set your environment to work otherwise, your program will probably just crash when it gets its first page fault that can't be honored.

Open a file? fstream sets .is_open() (or its operator bool, so just "if (!f)")

Write fails? Sets .fail()

POSIX stuff usually returns an error code and sets errno.

Modern C++ has std::optional.

But there's of course another answer to this, and that is "C++ has zero cost abstractions", meaning for example if you don't check for nullptr, then neither will the language. There's no NullPointerException because C++ just says that this is Undefined Behavior.

Oh, here's one: if you use dynamic_cast to try to downcast a reference to the wrong type, that'll throw std::bad_cast (the pointer form just returns nullptr). But first of all: don't downcast, and second of all: this is not an "unexpected input to a function". This is a complete programming error and it's probably best to terminate. I.e. this is something Go style would panic about, not return an error.

Do you have more specific examples about unexpected input to a function that you would want to return an error for?

Ugh. Actually std::stoi() violates this pattern. If std::optional had existed in C++11 it would probably have been used here.


> Memory allocation errors are special in all languages.

Actually no, not in Java! You can catch java.lang.OutOfMemoryError just like any other exception. If the memory pressure is high though, it's possible that another OOM would be thrown from the code that handles it.


You can catch std::bad_alloc in C++, too. That's not my point. Especially because destructors free immediately (not wait for GC) I would expect C++ to handle this as a language much better than Java.

But when memory pressure is high you can get killed at any time. E.g. on Linux the OOM killer might decide that it's best for the system that your process dies, even if you've not done memory allocations or needed to page fault for hours.

IIRC OpenBSD doesn't overcommit memory, but in my experience its system stability is much worse when memory is low.


Return an error or fatally exit the program, depending.


If you return an error, it needs to be checked for in every place where this function is called. Yes, I know C libraries and OS APIs do this often, but that's C, it's the only thing it can do. This just invites human error. Besides, it's often desirable to handle multiple different error conditions (arising from different steps as you process the input data) in one place, which is complicated with this approach. Java-like exception handling makes it easy to handle all errors in a single place and be sure none go unnoticed. This is especially useful when you're accepting external input (a file, a network packet/stream, an HTTP request, etc).

If you exit the program on error, this does work in some cases like command-line utilities, but your users would not be happy if your GUI app crashes when you open a malformed file.


Entire books, I'm sure, have been written about the pros and cons of exceptions for error handling.

Yes, I called it "idiomatic C++" before, but reasonable people disagree about the best option.

Smarter people than me have written good things in the E section of the C++ Core Guidelines. The people involved have overlap with the C++ standards committee:

http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#...

At least if you use exceptions for errors C++ doesn't need "finally", because it has working RAII, unlike Java.


In modern Java, and by modern I mean 7 and newer, you no longer need "finally"; there's "try with resources" instead, which closes everything upon leaving the try block:

    try (FileInputStream in = new FileInputStream(file)) {
        // do something with the file data
    } catch (IOException x) {
        x.printStackTrace();
    }
It's been a very long time since I last wrote "something.close()".


I'm not arguing for or against this approach, merely responding to a question with a factual answer.

But since you brought it up ;) I bounce back and forth between which approach I like. Sometimes exceptions seem bulky and unwieldy and honestly a bit lazy. Returning an error feels verbose and annoying and bulky as well. But it also feels like returning an error forces you to think about what should happen, where exceptions let you kick the can down the road.

Neither is great, but I find that code that uses exceptions ends up poorer in design and function, while errors-on-return tends to be harder to read.


This is fine, this is exception handling which you are describing. What exceptions should not be for is handling normal control flow. That is what branches are for. That is what the article is about and what people replying are complaining about.


"Exceptions should be... the exception."

:)


The problem is exceptions have entirely opaque flow control. They're the opposite of a goto statement: a comes-from statement if you will. Flow control could originate literally anywhere down the stack and that makes reasoning about what's happening very difficult. Depending on the language it can also have a super broad and ever-changing surface area.


This is not correct: a “comes-from” statement would still have the same behavior as a goto, except it's defined at the label and there is no “goto” at the departure line.

Exceptions are just “goto whatever catch is in the call stack”.

It’s still completely obvious when you see a throw statement that it can throw, and static analysis can tell you exactly what can be thrown by each function.


And where it goes is still opaque to the programmer when they see it in code. The fact that you can use tools to find where it might land (yeah, no shit, you can do the same for goto...) is just a mitigation of the problem.


You’re missing the point of exceptions. When you throw an exception, you’re not supposed to care where it lands. That’s the caller’s responsibility.

“Who will catch this” is supposed to be opaque. It’s like wondering who will call this function.

If you want to goto some specific point in the code, just call it.


It unwinds the call stack until a catch.

It is the same "problem" as not knowing where a return will return to.


No, it is not the same problem. You need to trace every function, and every function calling those functions, and every function calling those functions, all the way to catch.

You can't "just" search for function name, you have to rely on code analysis tools.

Making code more opaque because you can get through the mess of it via tooling is a terrible direction.


Return only ever moves up the stack one level. That makes it really easy to reason about. This can move up the stack an unlimited amount, and the handler has to be prepared to properly deal with the current state, no matter where it comes from.


This is no different than returning an error in Go from far down the stack. You don't know how far up it will be passed or handled.


You do. Errors have to be handled or passed up explicitly at each level. Errors are one level at a time, not 'n'.


There's nothing opaque about having an implicit potential return after every line as well as an implicit additional error result value. Think Go, but without the ceremony.


Unchecked exceptions need extra care. I've seen process level exceptions being logged when the underlying cause was a failure to parse an integer. The exception was being caught in an exception handler half a dozen scopes away.


If you check for actual exceptional cases, you have next to no overhead. The overhead comes from creating the exception environment and going up the stack in unusual ways. For validation where most data is correct, this should have next to no impact.


I hate to say it but you are using goto.

Goto is used often in C parsers for a reason: it is easier to reason that way when you're validating something than with deeply nested branches. People shit on goto because of Dijkstra's paper, but few people have even read that paper and fewer know that it is the origin of the "X considered harmful" meme.

Goto is a boon. Anytime people use exceptions like you are, they're just admitting they're itching for a goto and their language doesn't provide it. And thus you shouldn't clutch your pearls when others use it in C.


It does sound abusive since validations are expected to fail and so should not require an alternative way to return their outcome.


> I use exceptions heavily for validation

This fits within my understanding of a reasonable use for exceptions - usually these are in the form of assertions, which themselves throw exceptions but can be turned off at runtime if you don't expect to ever see this condition in real production circumstances.


so long as you’re not catching the exception and figuring out how to continue execution of the same function i think you have a good use case here. it’s no different than `assert`ing invariants at the top of a function, for example. if invariant is violated, the call should be aborted.


I think that's probably the best use case actually - API bounds are often "just check a bunch of stuff, from AA through data validity to whether the connection finished cleanly", so just putting a catch over the whole request handling is easy to reason about.


If your API takes external input, no one cares about the validation that actually succeeds - and it succeeds in virtually all cases. You should care only about the fast path - the rest is quite irrelevant (outside DoS attempts).


validating with exceptions is good because it makes your code much cleaner to read. the benefits in readability and easy maintainability are extremely high

the example shows an extra ~500 nanoseconds for throwing compared to not throwing. this is 0.0005 milliseconds. if your api takes more than 1ms then changing it to not throw will not be noticeable. even if the exception time cost was 10x higher and it took 0.005ms longer it would not be noticeable


Why not just use a branch, friend? None of the uses you listed is faster; at best it's "equivalent", so why avoid branches at all?

I know the article is about performance, but from a sheer programming perspective branches are there for a reason.


Great details! Why only half the time though? What is the behavior the rest of the time?


I wanted to make sure all code paths were executed, and so I filled the array of ints with 50% negative values.


One note: did you run the code w/ warmup and all, or just OSR (on-stack replacement)?


Exceptions are starting to feel like a legacy programming paradigm to me. Rust & Go have, at least in a practical programming context, shown that errors-as-values has far fewer footguns and encourages better error-handling practices than exceptions, which often are treated as an afterthought or end up being abused like in this post.

Whenever I'm writing Python or Java I can't help but feel anxious about calling a function and having absolutely no idea if or what exceptions it might throw, and then having to resort to digging through documentation or class hierarchies to figure it out.


At least for Go (haven't used Rust), I entirely disagree. After ~3 years of professional programming with the language, `if err != nil` is still constantly annoying me when writing code, but especially and most importantly, when reviewing code.

Not to mention, Go has proven conclusively to me that exceptions are exactly the right pattern for error propagation - the 99.9% pattern is "function returns error with message; callers add context; top-level caller aborts and logs + returns message to user, or sometimes retries". This is exactly how exceptions work out of the box, without the need to pollute all code between the error source and the top-level caller.

If anything, I'd like to see a language seeking to add smarter information to exception stack traces - not just the function + code line, but also some information about local variables might be doable, and would close the one occasional gap between Go-style hand-built context and Java/Python auto-generated exceptions.


Absolutely. Even after "compressing" my code as much as possible, it looks as if 2/3 of a Go program is pure error handling and no business logic. I just don't understand this militant objection toward exceptions now. Even back in the days of slower virtual machines and crappier hardware, we were fine, and now it's code red! Get rid of exceptions! Performance!

This is in an era of tech where everyone is building BAD distributed systems, with non-optimized databases, a fur ball of slow HTTP queries, and giant payloads. Exceptions are the least of your concerns.


>it looks as if 2/3 of a Go program is pure error handling and no business logic.

This is exactly why it's good. It forces developers to think about things outside the happy path business logic. And it does this at the point where they know best what such an error could mean.

Proponents of exceptions let their users catch the exceptions they overlooked.

If you ask me, Go is not forcing this enough; the ML-style Result types in Rust are a much better abstraction here.


> This is exactly why it's good.

How did we manage to get this far before people ~~went insane~~ decided that this is somehow favourable?


After working in an SRE-type role on a codebase full of unchecked exceptions, or exceptions that are caught at the wrong levels because programmers just want their code to work - forcing the explicit handling of errors is favourable because it requires developers to think about what they really want to happen when something goes wrong at this point in the code. Whereas without it, often they will simply catch the exception and continue on silently, or re-raise it without proper context.

Conversely, sometimes there are really simple cases that are totally ignored by programmers because they forget that an exception might be thrown. The worst offender in my experience is in Python where people assume the existence of dictionary keys and get KeyError exceptions, or NoneType exceptions. Without type hints, there's no mechanism to warn you as you write the code that these things could occur. So programmers don't plan for it. If there are explicit errors, the programmer is forced to consider the case that the key they want doesn't exist, or that this variable isn't actually the type they expect (static typing helps here too).

In general, I think this prevents footguns when you are working on large programs. It's worth the verbosity.


See, you perfectly explain why this is the completely wrong approach.

You have a headache and take painkillers, believing they're going to cure it. No, they don't cure the headache, they just mask it.

The cure is teaching people how to actually program the machines. That's all it takes. Decades of abstraction and piling up more and more nonsense eventually led to the situation we are in now, where people praise bullshit which wouldn't be necessary if people had actually learned how to program the machine and how to do it properly.

Instead people need more and more help to do even the simplest things because they've never learned how to do things the right way. So now idiots require tools preventing them from making mistakes they wouldn't be making if they were properly educated!


This is also how errors-as-values work in Rust. Functions that may fail return Result<T, E>; putting a ? at the end of a fallible function call unwraps the T on success, otherwise it propagates the E. It reduces

    result, err := call()
    if err != nil {
        return err
    }
to

    let result = call()?;
Completely unobtrusive, but makes failure awareness an obligation.


I know about ? in Rust - the one thing that isn't very clear to me is how often it is enough. That is, with Go, it's quite typical to do something like:

  result, err := call()
  if err != nil {
    return fmt.Errorf("Error while trying to call: %v", err)
  }
Essentially manually building a stack trace. If just doing "return err", you end up with calls to a REST service failing with messages like `couldn't parse "" as int` even in internal logs, which isn't helpful. With exceptions (in any language except C++), the stack trace is often decent enough.

Does ? add any kind of context implicitly, or do you have to actually pattern match manually to add context?


There is not (normally) context added automatically. One can use `map_err(|e| ...)` to add context like this:

    let result = call().map_err(|e| SomeError::MoreData(e))?;
Or if using one of the common error libraries to have non-specific error kinds:

    let result = call().context("some data")?;
or

    let result = call().with_context(|| format!("something happened: {}", other_thing))?;
For something that more directly matches the behavior of the go code, this (using the `anyhow` crate) is a possible match:

    let result = call().map_err(|e| anyhow::Error::msg(format!("Error while trying to call: {:?}", e)))?;
Though one would normally avoid this when using anyhow (or in rust in general) as it means we're flattening the error instead of generating a list of causes.


if you only limit yourself to the standard library, you would need to unwrap and rewrap the error - although admittedly since Result<T, E> is a type like any other, you could add an extension function (through a trait) so that you can add context more easily. If you use one of the error handling libraries though, you're in luck - adding context is usually a single function call away:

    //without context
    some_function()?;
    //with context, using anyhow
    some_function().context("function returned error")?;
    //with wrap_err, using eyre
    some_function().wrap_err("function returned error eyre")?;


Absolutely agree with you on that front. Go's tedious error checking and manual rewrapping is pretty annoying and I'm sure there's a smarter way to do things waiting to catch on. That being said, I prefer that annoyance over the horrors of working in a codebase where exceptions are used for control flow, not checked at all, or just treated as an afterthought. In my experience, those programs tend to be fragile, often crash with little context, or just silently continue after a failure. At least Go forces you to deal with it.


I think what Java got wrong was allowing catching unchecked exceptions. If Java had only allowed recovering from checked exceptions, it would have been a very similar experience to Rust's Result and panic.

Instead, the trend was to avoid using checked exceptions at all, which perpetuated the crazy situation we're in where library code can suddenly abort and the author shrugs and says "shoulda read the docs".


> I think what Java got wrong was allowing catching unchecked exceptions. If Java had only allowed recovering from checked exceptions, it would have been a very similar experience to Rust's Result and panic.

> Instead, the trend was to avoid using checked exceptions at all

Java's type system was far too weak for that to be at all practical. Early Java not only didn't have first-class functions, it didn't even have generics. Even today checked exceptions are horribly cumbersome - for example, it's simply not possible to write a wrapper function that accepts a function and calls it, and throws the same set of checked exceptions that the inner function does.

If Java had had proper ML-style types from day 1, like Rust does, then it could have done "errors are values" in a practical way. But forcing people to use checked exceptions would have resulted in either C-style errno codes, or the whole language collapsing under its own weight.


That's fair. Lack of inference is also a huge problem with Java's checked exceptions.

My point is less that Java could have done this differently with the other type choices that they made and more that it's a shame that Java's implementation of checked exceptions has poisoned the concept for so many. I think a language with strong type inference and a checked exception mechanism would be better than Rust's Result type because you get the stack trace.


> it's simply not possible to write a wrapper function that accepts a function and calls it, and throws the same set of checked exceptions that the inner function does

It is possible, and straightforward, to do that if the set of checked exceptions is of a statically known size, and the wrapper function can propagate the exceptions rather than catch and rethrow them. In my experience, that is a very common situation (although not universal).

If you want to catch and rethrow, that can also be done, but it requires an evil hack, and the wrapper will need to take a reified type for the exception.
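
For the propagate-only case with one statically known checked exception type, something like this sketch (made-up names) compiles fine; a second checked type just needs a second type parameter:

    // The single checked exception type E declared by the callee simply
    // propagates through the wrapper's own throws clause.
    @FunctionalInterface
    interface ThrowingSupplier<T, E extends Exception> {
        T get() throws E;
    }

    class Wrappers {
        static <T, E extends Exception> T timed(ThrowingSupplier<T, E> body) throws E {
            long start = System.nanoTime();
            try {
                return body.get();   // any E thrown here is propagated, not rethrown
            } finally {
                System.out.println("took " + (System.nanoTime() - start) + " ns");
            }
        }
    }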


Yes, checked exceptions with a proper type system and abstraction facilities would be completely equivalent to Result<T,E>, except built into the language.


> where library code can suddenly abort and the author shrugs and says "shoulda read the docs".

I see no difference, except for the fact that you don't have the stack trace (in Go) and unwind the stack manually. In any case, there'll be a generic catch (whatever you call it) at the top level of your daemon to recover from the failed request and handle the next.


The difference is that the error states are explicit rather than implicit. The function's signature tells you what the possible errors you'll need to handle are, so you don't get caught off guard at runtime.

As for stack traces: checked exceptions are semantically almost identical to error-as-value but they do give you a stack trace. They give you all the static guarantees of error-as-value plus the benefit of being able to know where the error came from.


> checked exceptions are semantically almost identical to error-as-value but they do give you a stack trace

I completely agree with you on that one. The only thing I'd want to add is that aside from example code, usually you don't care a lot about what specific errors might pop up, unless you're at the top level of your program. This might be the request handler, or some UI loop. On the other hand, where you care about errors, you usually don't care about results or their types.

I've been in the industry long enough that I vividly remember the catch-and-wrap orgies in Javaland of 2003. A lot of it was caused by the inferior type system of Java, which hinders composability (such as when a layer is moved to another node and suddenly you have to deal with remoting exceptions). But it also goes to show that knowing a specific class of error is often overrated in application code, and should be, well, the exception.


In Go the programmer is forced to make a decision on what to do with the error. Common patterns include (A) return an annotated error, (B) log it and continue, (C) retry, (D) aggregate the errors in some way. If you believe the only way to handle an error is (A), then Go's design makes no sense.


> In Go the programmer is forced to make a decision on what to do with the error.

Actually Go runs perfectly fine if you just do nothing with the error, and then you are flying past errors like it is nobody's business. Of course you can use static analysis to tell you if you forgot to handle an error, and most people do, but that is not really a language feature.

In Java a programmer is also forced to make a decision to add explicit exception handling or not. But if they choose to not add it, the program won't just pretend like all is fine and dandy, it will stop the execution of the current function and pass the exception up the stack.


You seem to believe options B to D are not available to programmers in languages with exceptions. The real value comes from making option E much more unlikely: ignoring both the result and error value altogether, because you relied on the side effect of the function you called.


You might like how Rust does error handling. Rather than a function returning a tuple of (val, error) where the error can be ignored, functions return Result<T, E>. If you want to get the T, you must write code that handles both possibilities. If your function instead wants the T and to propagate the E upwards if it exists, you can do that with one character - “?”


I indeed like Rust's approach a lot more than Go's. What I still like less is that it gives the impression that it's even possible to define functions that cannot fail. This is not true. One just has to look at how runtimes deal with stack overflow errors to see how the good old Java RuntimeException creeps in in various forms (e.g. panics), because checked exceptions and their recent incarnation as error values are a leaky abstraction.


Rust makes a distinction between recoverable and unrecoverable errors. Recoverable errors are the E in Result<T, E>. You can take action and recover, depending on what kind of E it is.

Unrecoverable errors are things like stack overflows or out of bounds array access. There is no reasonable way to soldier on after this, so the program should just end. Trying to continue the program in such situations only leads to pain. Like array accesses out of bounds that allow you to read unrelated memory.

But it’s still an evolving area. For example, failure to allocate memory - is that recoverable or unrecoverable? Initially it was thought that it was unrecoverable, and programs would panic if memory failed to allocate. This seemed reasonable, until folks tried to use Rust within the Linux kernel. Within the kernel, failure to allocate memory is recoverable. Rust is evolving the semantics here.

All this to say, yes, Rust does allow you to define functions that either fail in a recoverable way, in which case the calling function should handle it. Or they fail in an unrecoverable way in which case there’s nothing the calling function can do to recover. Thankfully, panics in third party code are relatively rare so this doesn’t happen in practice.


> Unrecoverable errors are things like stack overflows or out of bounds array access. There is no reasonable way to soldier on after this, so the program should just end

No, I wholeheartedly disagree with this. It's the equivalent of exit(1) some way down the stack. What's recoverable or not depends on the use case and is a decision to be made by the caller of a function, not the implementor.


GP might have been referring to undefined/invalid behaviour (whether in the language or in some OS syscall or whatever). After the demons came out of your nose you can never fix the problem, so there is no point trying to handle the error.

Otherwise I agree with you that library code should not fail/crash/exit(1) just because of some judgement about recoverability, and ought to clean up after itself before passing control back to the caller. If the user wants to fix some ENOSPC deep in my library by shelling out to "rm -rf /" and then trying again, that's fine by me, and this should be reflected in the API.


GP might have meant undefined behavior, but specifically mentioned stack overflows and out-of-bounds array access as unrecoverable errors. These sound brutal, but are in fact anything but undefined. Proper handling is expected in the large class of applications which run as servers.


> it gives the impression that it's even possible to define functions that cannot fail

Do you mean that e.g. an out-of-bounds error will panic? If that's the case, you can always access arrays/slices with some checked access, that will return a Result/Option and cannot panic. But it would be a PITA if you couldn't skip that.


I mean stack overflows or out of memory errors. It might fail one request, but no reason to fail all others.


That's a very specific case, that could be handled non-trivially.

Usually your HTTP framework will already have this implemented, i.e. a panic in a request handler will be "caught", converted to some 500 response, and should not affect other requests.


My point is that this is in no way different from any other class of errors, _except_ in those cases where it is. It's practical to assume all errors are handled like this, because this catch all needs to exist anyway. And unless you have _very specific needs_, this can be automated.


I think conflating these two into one paradigm is worse. The catch-all (exceptions) style is nice only for a few very specific cases, like the request handler example. Everywhere else I want to either bubble up (like exceptions, but Result<> and ? sugar is as good or better), OR I want to handle the error. For the latter case, exceptions are not good at all.


IMHO, quite often when you're tempted to handle an error, you're either wrong to do so, or in some kind of infrastructure glue code. A request handler, a task executor, a strategy chain, a retry loop, you name it. And this code needs to deal with both classes of errors anyway to be bugfree.


From those examples, I think only request and task should deal with panics. Things that start threads or processes.

Other points of "catch all errors" don't need that. And then there are a lot of places where you do handle errors, if they are conceptually a Result. I know you can just catch SpecificError, but the ergonomics are just horrible in terms of control flow.


I don't think threading is relevant to this.

> there are a lot of places where you do handle errors, if they are conceptually a Result

Yet, what's conceptually a result lies in the eye of the beholder, and should not be dictated by an API designer IMHO. Rust's ? is a step in the right direction, but I'd argue since you care about specific errors in maybe 0.1% of invocations tops (in production code), and that's a stretch, the ? should actually work the other way round. And if the mechanism does not provide a way to select specific errors (such as a proper catch clause) the errors it exposes as a result should include runtime errors as well.


I'm talking about unchecked exceptions. It's not about what is possible it's about what patterns a language encourages. It feels like we've lost the thread of discussion.

  - You say there is no difference between unchecked exceptions and Go's errors
  - I say yes there is since Go forces users to handle errors explicitly
  - You say that's not technically true in all cases.
OK. Yes. I should have said "nudges users" instead of force. It's a shortcoming of the language. It is still really hard for me to see unchecked exceptions and value-based error handling as the same thing. One of them encourages doing nothing and hoping that bubbling up is the right answer. Very often, especially in a multi-threaded context, it is not.


> You say there is no difference between unchecked exceptions and Go's errors

Where do I say that? I say that Go programmers, in 99.9% of cases, do manually what exceptions do automatically. In terms of cumbersome, error values are the equivalent of checked exceptions. The equivalent of runtime exceptions are panics.


Maybe I misinterpreted you. I don't disagree that checked exceptions and errors are ~the same thing.

> Go programmers, in 99.9% of cases, do manually what exceptions do automatically

Our experience working with Go must be very different.

Grepping for "err := " and looking at the first 10 results in my team's codebase.

  * 4 cases where the error is just returned
  * 1 case where the error is returned only if it matches a certain type (otherwise logged)
  * 1 case where the error is logged as a warning.
  * 1 case where the error is logged, some metric is incremented, and then execution continues as usual. (a fail-open authentication check)
  * 1 case where the error is returned as different error type.
  * 1 case where the error is returned, but only after accessing the result (this is a strange design / antipattern), and annotating it with a human-readable explanation.
  * 1 case where the error is treated as a boolean condition, and is not returned. (the error condition is "does not exist").
So in this sample it matches the "automatic behavior" in only about half the cases. In other cases, substituting the existing behavior with exception's automatic behavior would cause severe bugs.


This actually argues my point, because it seems (ignoring the strange design) only the last case would really exist in business code in another language.

(Obviously, without source code access the following is guesswork) The other cases might either not be required (since you'll get a stacktrace anyway and don't need to leave breadcrumbs) or be part of some general infrastructure, say interceptor, that logs interesting things on boundaries. At least that's my day to day experience comparing Go and Java.


There is a lot of confirmation bias in this post. I will leave it at that.


Nevertheless, thanks for trying to back it up with data. I'll use your method and look at our code when time allows it, to see if I need to adjust my priors.


What Java got wrong is the concept of checked exceptions. They're not needed. Python, C++ or JavaScript exceptions are totally fine, and checked exceptions bring nothing but issues.

The only thing that I'd add to unchecked exceptions is noexcept with compile-time checking. Something along these lines:

1. You can declare a method as `nothrows`. The compiler will ensure that no exceptions are thrown (`java.lang.Error` can still be thrown, but you're not supposed to deal with it in any way in most code).

2. You can declare a method as `throws Exception1, Exception2`. The compiler will ensure that only subclasses of those exceptions are thrown from this method.

3. You can omit any declaration. In this case the compiler will compute the list of possible exceptions implicitly (it's the union of all exceptions thrown by all called methods).

So basically it would allow you to document the list of thrown exceptions, and it would allow statically checking that other exceptions are not thrown, so this documentation is compiler-checked.

Of course this idea needs battle testing, but I think that I'd like it. You can either opt-in and write code documenting all thrown exceptions (which is good for libraries) or you can opt-out and write simple code without bothering with exceptions (which is good for applications).

Also there should be proper support for generic exceptions, so I can write Function<T, R, E> and E would be a type containing the union of all thrown exceptions. That's required, for example, for propagating exception signatures out of lambdas in functional collections.


I disagree that standard exceptions are fine for recoverable errors.

Recoverable error states are part of your function's interface. Languages with exceptions make this implicit: in order to know how to interface with a function's error states you have to go digging through the docs, which hopefully document the exceptions it throws. If they don't, then you either have to trace the entire call tree or just wait for the library to throw something so you know what you're supposed to be catching.

On the other hand, if a language requires that recoverable errors be documented in the function signature (either as a checked exception or error-as-value), you know that you're either handling the exceptions or intentionally letting them propagate. There are no surprises at runtime because the set of all possible error states is known statically.


The fact is that checked exceptions are avoided in modern Java. They are part of the function's interface, and the compiler forces you to handle or propagate them. Yet people didn't like it and actively sabotaged this design, so it didn't work. You can shove it down people's throats, or you can design features that help rather than irritate.


Java always gets the blame, yet the concept of checked exceptions was introduced by CLU, adopted by Modula-3 and C++, before it came to Java.

And even though C++ dropped exception specifications, it still kept the difference between "might throw anything" and "doesn't throw at all", and there is also a paper proposing to reintroduce them Swift-style.


Those are not industry languages, and Java was known to borrow only well-developed and widely adopted features. For example it took like 20 years to add lambdas, which were in Lisp 60 years ago. Checked exceptions are a sore exception to this rule.


C++ isn't an industry language?!?


There are no checked exceptions in C++. noexcept does not do anything related to the type system; it just terminates the application if an exception is thrown out of a noexcept function.


CFront until C++11, and Herb's paper for value type exceptions...


Catching unchecked exceptions at module boundaries is perfectly fine and as far as I know (which admittedly isn't much) both Go and Rust in fact allow doing exactly that.


I've long wished there was C# IntelliSense that showed what kinds of exceptions I may receive when calling a particular function. At least for what it can infer from code:

- .NET library has documentation comments, with <exception> tags

- It could look at my code and see what exceptions get thrown.

- It could be clever enough to know that new might throw OutOfMemoryException

- Clever enough to know checked arithmetic might throw OverflowException

- Might understand where/what exceptions get caught and do not bubble up.

- Smart enough to understand if a value can possibly be null and properties are accessed on it, which may result in a NullReferenceException.

- As for 3rd party code, I don't know whether a .pdb provides enough info to know if a particular function call may throw a particular exception. But it's all IL, so that should be enough to infer from 3rd party code what kind of exceptions I might encounter:

   // throw new ArgumentNullException("serviceProvider");
   IL_0013: ldstr "serviceProvider"
   IL_0018: newobj instance void [mscorlib]System.ArgumentNullException::.ctor(string)
   IL_001d: throw

Eh, what a dream.


What you are asking for is checked exceptions where the caller gets notified by the compiler that the function you call might throw one or more exceptions.

However, it is worth taking it a step further and focus on "why the error occurred" instead of "an error has occurred".

An example of this is Code Contracts. Like Intellisense, live code analysis with contracts would tell you "the method you call will throw <exception-type> with the input you give".

Not only will that give the user information about which error conditions can arise, it will also give you the reason why.


Well, code contracts are unfortunately dead, not available in .NET Core.

Anders Hejlsberg, the C# designer, has a piece on checked exceptions, and I agree with him that we should somehow address this more effectively: https://www.artima.com/articles/the-trouble-with-checked-exc...

Looks like what I'd like in the meantime is soft checked exceptions, just for documentation purposes, and I'll keep the code to myself. I just want to take various failure modes into consideration; right now I can either do generic error handling or waste too much time. Yes, as you say, live code analysis would be good enough, without screwing up the C# language itself with bad design decisions.


> Exceptions are starting to feel like a legacy programming paradigm to me. Rust & Go have, at least in a practical programming context, shown that errors-as-values has far less footguns

They don't really. Because they force you to handle errors even in code that couldn't care less about errors because that's the responsibility of a higher level up the stack. And worse, they assume that an exception is a valid reason to just stop program execution altogether (when it rarely is).

See Erlang for how you should handle exceptions (you have a supervision tree with various restart strategies).


Just remembered that Rust realised that manual handling of errors everywhere doesn't work, and introduced the `try!` macro, and then the `?` operator.

These are more practical (and good thing Rust isn't rigid at the expense of pragmatism).

And of course, since panics are not the answer to everything exceptional, there's now `catch_unwind`, too.


Has it shown this? Are there solid metrics or is it simply because they're popular and it's not the idiomatic way to do things? My understanding is they still have panics anyhow.


I should have caveated that it's purely subjective. In my experience, developers write much more reliable and understandable code when they handle errors as values rather than as exceptions.


On the other hand, Rust syntax has gradually moved closer to exceptions, now with the "?" operator, which automatically unpacks the result or propagates the error value. This feels awfully close to exceptions, even if in theory it is "just a value".


I think the main difference is that it forces the acknowledgement of the error at the point of the function call, which is not the case with exceptions.


Not so much legacy as specifically of the time period where Java was created.

C++ doesn't do this (in the standard library, setting a good tone for idiomatic code).

Older languages don't abuse the exception idea. Nor do newer ones.

(Go is a bad example though, since it took the convenience of Java style exceptions away, but replaced them with nothing)


The Rust Result type combined with mandatory error handling leads to very robust programs. It is also easier to reason about, as there is no hidden (non-local) control flow. Exceptions are unergonomical and slow.


> feel anxious about calling a function and having absolutely no idea if or what exceptions it might throw

You can try doing what most of my coworkers do and just not care.


Exceptions work well for 2 kinds of error handling: crash and log, and general retry and log.

The second one is more cumbersome with return values if the possible errors originate deep in the stack.
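
For illustration, a rough Java sketch (names made up) of the retry-and-log case, where failures from anywhere deep in the call all land in one place:

    import java.util.concurrent.Callable;

    class Retry {
        // Generic retry-and-log: errors originating deep in the stack of
        // body.call() all surface here, without threading return values
        // through every intermediate layer.
        static <T> T withRetries(int attempts, Callable<T> body) throws Exception {
            Exception last = null;
            for (int i = 1; i <= attempts; i++) {
                try {
                    return body.call();
                } catch (Exception e) {
                    last = e;
                    System.err.println("attempt " + i + " failed: " + e);
                }
            }
            throw last;   // out of attempts: rethrow the last failure
        }
    }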

Beyond a simple retry, exceptions are not specific enough for handling errors.


Exceptions are typically custom types - you can build an arbitrarily detailed exception taxonomy (typically branching off the language-standard one), and in most (all?) languages supporting exceptions, you can also give them arbitrary state.

You really can't get more specific than that. If exceptions still feel "not specific enough for handling errors", perhaps it's because you're only thinking of most trivial examples, as given by 101 tutorials and people arguing against exception handling?
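
A sketch of what I mean by a taxonomy carrying state (domain names made up):

    // A domain-specific branch off RuntimeException with arbitrary state
    // attached to each failure.
    class PaymentException extends RuntimeException {
        final String orderId;

        PaymentException(String orderId, String message, Throwable cause) {
            super(message, cause);
            this.orderId = orderId;
        }
    }

    class CardDeclinedException extends PaymentException {
        final String declineCode;

        CardDeclinedException(String orderId, String declineCode) {
            super(orderId, "card declined: " + declineCode, null);
            this.declineCode = declineCode;
        }
    }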


You can build a separate exception for every possible raise site. But that is overly burdensome. Moreover, there is no guarantee that any (standard) library calls have an appropriately detailed taxonomy. You could, of course, wrap these library calls and catch exceptions so you can re-raise them yourself. But at that point you are losing the main advantage of exceptions over return values: the ability to separate the happy path from the error-handling path.

To me, once I am building my own expansive taxonomy of exceptions, I am much happier using Optional, or Result/Either type return values.


> You can build a separate exception for every possible raise site.

Why would you want to do that? You don't do that with Result/Either.

> To me, once I am building my own expansive taxonomy of exceptions, I am much happier using Optional, or Result/Either type return values.

I may be missing something, because to me, this doesn't follow. Optional type is "result or nothing"; and with Result/Either type, you either use something generic (e.g. symbol, string), or go very specific (even if it's just one of the dozen different "newtype" names for symbol/string). To me, this choice with Result/Either is exactly equivalent to choosing an exception taxonomy. You're doing the same work either way.


With result/either you are forced to handle the error very close to where it happend. With exceptions (in my understanding) the goal is to be able to handle errors much further up the stack. But in such cases you aren't sure where the exception originated, so you need to encode that in the exception to understand the error.

If you do very close exception handling, you lose what I consider to be 'the point of exceptions'. Hence I don't think that should be done.

The core of my argument is then that error handling beyond 'just retry' needs a lot of detail about the error. And I believe this detail is easier to encode if the error handling is close to the error. Since exception handling is meant to be further away from the error, it isn't as suited to this kind of error handling.


Rust does still have exceptions, they're just called "results". You can't really have values as exceptions unless you plug a lot of ad-hoc constructs into your language so that they eventually become similar to exceptions [1]:

> Add syntactic sugar for working with the Result type which models common exception handling constructs.

The whole error handling story with Rust can be summarized as "we have results but want to have exceptions".

[1] https://rust-lang.github.io/rfcs/0243-trait-based-exception-...


Errors in rust are not exceptions, they're data contained within an `Either` monad (Result)

Additionally, on top of the linked RFC being nearly 9 years old it doesn't at all indicate "we want to have exceptions".

The ? operator allows propagating the errors if they can't be immediately handled (similar to monadic `do` notation, or early returns)


Just because Rust people are in denial about it doesn't make them not exceptions. The Either/Result monad is isomorphic to checked exceptions, with any differences being pretty much just syntax.


It makes generics and higher-order functions much more elegant. Any function of `(a->t)->t` (or many other similar signatures) will automatically be able to return the correct error type if t is a Result<>.

Compare this to checked exceptions, where (even in an ideal world) you'd need a separate type parameter for the error type, plus a bunch of extra language features and syntax to make it work. And then what happens if you want to use the same function with _no_ error type?

For a concrete example, try using Java's `stream().map()` with checked exceptions.
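
A sketch of that friction (Java 11+, made-up names; the commented-out line is the part that doesn't compile):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;

    class ReadAll {
        static List<String> readAll(List<Path> paths) throws IOException {
            // Does not compile: Function.apply() declares no checked exceptions,
            // so the lambda is not allowed to throw IOException.
            // return paths.stream().map(Files::readString).collect(Collectors.toList());

            // The usual workaround: smuggle the checked exception out as an
            // unchecked one and unwrap it afterwards.
            try {
                return paths.stream()
                            .map(p -> {
                                try {
                                    return Files.readString(p);
                                } catch (IOException e) {
                                    throw new UncheckedIOException(e);
                                }
                            })
                            .collect(Collectors.toList());
            } catch (UncheckedIOException e) {
                throw e.getCause();
            }
        }
    }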

Of course there are implementations of checked exceptions which are much closer to Result than Java's implementation, and in those cases I would agree with you.


No! Exceptions have a very well defined meaning. Exceptions in Rust are non-aborting panics.


When people talk about "exceptions", they mean resumable exceptions that are intended to be caught. But "catching" an unwinding panic in Rust is so (deliberately!) limited, cumbersome, and unidiomatic that it doesn't qualify as an implementation of resumable exceptions. The only reason the ability to "catch" an unwinding panic even exists in Rust is to prevent unwinding across FFI boundaries, which would be UB; it's a correctness mechanism, not an error-handling mechanism.


This is one of my basic interview questions to candidates and it is amazing how many people have no idea how exceptions work or how much they cost.

Back in 2005, I was involved in replatforming a legacy COBOL and TRANSACT HP3000 mainframe codebase to a modern system. The code was transpiled into an [unholy mess of] C# and ASP.NET. The Transact code was extremely procedural and had mainframe forms interspersed in it. Each block of code that ran between the form-display-and-wait-for-input blocks turned into a function in C#, and all of them were chained in the main function with gotos. To get out of a function, the transpiled code would throw new Exception("with some custom message indicating the step to go to next"); it would be caught up the stack, the message parsed, and then goto'ed to that step. So yeah, control flow with exceptions.

I am shuddering just thinking about it now; it was impossible to read. Under medium load - and I was in charge of load testing - it threw an astronomical number of exceptions per second, and it was just so ugly.

Try as I might, I couldn't persuade them to change the way they returned from the functions. It all went to production. The customer just threw really big machines at it, and lots of them, and they just exceptioned all day long.


That seems exceptionally bad!


They did try though.


This post is poorly named. What it's saying is to avoid throwing exceptions for normal flow control, and just use them for exceptional cases (file not found, etc).

Performance-sensitive code or not, exceptions for exceptional situations are not going to hinder the app's performance until the exceptional situation becomes common (unless your language does lots of exception setup work on the happy path - which most have stopped doing by now).


This! I'd like to think exceptions are reasonable as a terminal state for a particular user workflow, which sometimes even ends with the program terminating. Examples include unexpected I/O, corrupt files, etc.

Exceptions should never be used if the control flow has means to recovery. For example, if-else statements with more different, downstream instructions.


What about, say, a webhook server that must call User-provided webhooks, and retry them on failures, or handle maximum request duration?

I’d argue throwing well-nested exceptions, catching them and retrying is the most elegant solution to this problem there is.


It also depends on the language. For example, Python very much operates under the "ask for forgiveness, not permission" mantra, and I even see this used for dict lookups. The number of times I encounter code like this:

  try:
    my_dict[key]
  except KeyError:
    pass
rather than

  if key in my_dict:
    my_dict[key]
is astounding. I don't know what the performance difference is in Python though.


What about

  thing = my_dict.get(key)
  if thing is None:
      ...


yes, my_dict.get() is my preferred solution too, if I can come up with a sensible default value or action. Otherwise, I do my_dict[key] without try/except. But I'm not sure if that's just a style preference or if there's a large performance difference in Python.


The latter should require two lookups unless it is optimized away.

Is there an easy way to get bytecode for Python snippets?


Seems like a language design flaw that this is slow. Imo Rust got this right by making "exceptions" nothing special, just a data type holding an error which can be processed just as fast as anything else.


When exceptions are used for actual exceptional cases, errors-as-values is both slower on the happy path (as you have to add a ton of comparisons for the error case, vs. the zero-cost exceptions that are now commonly deployed) and more verbose than exceptions (though Rust has been working to mitigate this over the years, they still failed to realize that the people who made this kind of error object popular also designed an abstraction for monads and a syntax sugar that made using the result/either object "look like" exceptions).


Does OCaml use that exception like setup?


Yes and no: OCaml supports exceptions, uses them extensively in the standard library, and most guides encourage people to use them for exceptional cases (i.e. where an error is not an expected use case and the caller probably will just propagate the error rather than try to "handle" it... which I would assert is true for almost all error conditions: you should essentially never "handle" errors or "catch" exceptions), so it doesn't really need such syntax... and yet (apparently? I am not an OCaml developer, having only used it enough to demonstrate it for a class I taught a while back) it was finally introduced in a recent version ;P.

https://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny-...


> Seems like a language design flaw that this is slow

I think stack traces take up much of the time, including converting them [1]. Without them it could probably be a lot faster. Also see hashmash's post.

But other than that, there is the logic issue: not using exceptions for normal flow control makes sense to me too, independent of any performance questions.

[1] For a Java example: https://ionutbalosin.com/2018/06/getting-the-stack-trace-ver...


IIRC, that's why Go errors don't come with stack traces by default, performance.

I'll admit that this has enraged me on a few occasions when all I have to work with is a logged error message of

   strconv.ParseInt: parsing "": invalid syntax
With no clues as to where the hell the error actually happened, so I have to start grepping the app's code, then the code of its dependencies, then the code of the dependencies' dependencies.


A stack trace would be helpful, but so would adding context to the error (when appropriate), instead of just bubbling it up indiscriminately.

Of course, you can’t enforce this in dependencies, but at least tracking this through the first-layer dependencies of your code should be fairly easy, right?

TBH, Go style errors seem more flexible if some care is put into using them, however they are extremely unhelpful if they are used improperly (even if it’s not your own code).


I agree very much. This was ultimately an ORM trying to convert a NULL to an int. An error message like "I was trying to load EntityX and got this error" would've been ideal.

It's a reasonably common issue with running Go apps, you're relying on the coder to make good decisions to give you useful insights.

Other ecosystems I've used err more in favour of the person running the thing - compare the more widespread Go logging libs to, say, Java, where the sysop has very fine-grained control if they need it.


This is another reason why Common Lisp style condition system (where exception handlers can execute without unwinding the stack) just seems to work better. If you need the stack, get the stack, if you don't then just use handler-case. I really don't see the downside for any language that has anonymous function literals.


The CL condition system is based on error handlers, a third paradigm alongside returning error values and throwing exceptions. Unfortunately, it didn't really permeate the Unix-based languages (C, C++, Java), because Unix shipped a completely kneecapped implementation of error handlers at the OS level (no user-defined signals or signal hierarchy). Error handlers are so underrated that even books like Code Complete do not mention them.

Each of the three paradigms has pros and cons in terms of code simplicity and runtime cost (in normal/error path), I don't think there is a clear winner.


Java has them now but it didn't at the time (and even now they're kind of bodged in IIRC).


I don't think Java has support for CL condition system-style restarts. You're maybe talking about exceptions that simply don't populate the stack trace?

To clarify a bit, in CL, when throwing an exception, you can optionally register one or more "restarts", which are essentially lambdas of 0 or more parameters. When the exception is thrown, the stack is walked to find the appropriate handler, but it is not unwound. Whoever catches the exception will also receive these restart lambdas. If they chose to call one of the lambdas (passing it the proper parameters), stack unwinding will not happen at all, and execution will continue from the place the exception was thrown. Only if none of the restarts are invoked is the stack unwound, and execution continued from the catch block.

For a somewhat trivial example, the CL runtime throws an exception whenever a variable that was not defined is being read. That exception includes a restart that allows you to define a value for the variable - if this is called, execution will continue where the variable was being read, using this value for the undefined variable. Of course, this would be crazy to do automatically, but it is very nifty when debugging, as this option is presented to you in the REPL.


Pretty sure GP was talking about first-class functions not condition-system &c.


Exceptions are not the Error/Result type in Rust, they are panics. Which can be very expensive. But thankfully rust does them right and only uses them for unrecoverable errors.


You're right: what Rust gets right is standardized errors that aren't exceptions. In other languages you pretty much just return {success: false, message: ""} or one of the million variations of that idea people have sprinkled through the code.

Having a single type handle this with a bunch of utility functions associated is great.


Not sure I'd agree. In my limited experience it's a hassle, with tons of different error types that need converting. You end up converting LibAError to LibBError, etc. That's tedious, so many people use `thiserror` or `anyhow`, where `anyhow` just hides the error types behind a `&dyn Anyhow::error` or whatnot. Since stable Rust lacks compile-time or runtime introspection, it's difficult to figure out what's wrapped up in the dyn Trait without knowing ahead of time.

Inside a single library it's not too bad; you probably get an enum type with some error variants. But adding stack traces or anything like that is a pain, if it's possible at all.


Which is not to suggest you should be using exceptions for flow control.


Python's StopIteration has entered the chat.


I remember reading a long time ago that cpython had done some work to make exceptions lightweight enough for it to be reasonable to use them for control flow... No idea what that was though.


Well, if that's the only "escape hatch" you have, because otherwise the language authors decided to limit what you can do with control flow...


I remember programming in Ada in my undergrad years, and using exceptions for control flow was a common idiom. It was only later and in other languages that exceptions were treated purely as error conditions. Using exceptions allows you to separate corner cases from the normal logic. This was very clean - not ugly as the author implies. His example is ugly, but it is a straw man.


It depends on how this is converted into machine-level code. In quite a few cases it breaks down into rather ugly hacks, which is what creates the notion that exceptions are "expensive" (and in those cases they are) and the conclusion that they should only be used sparingly, especially in code that is performance-sensitive. With a sane ABI that treats exceptions as part of daily life, so to speak, it is also possible to shift normal control flow onto them without any (larger) performance impact.


Definitely depends on the language. OCaml exceptions are quite performant, for example, and get used in a lot of ways that you probably wouldn't want to use them in other languages.


Yes, raising an exn in OCaml is close to jumping to an address stored in a register. https://stackoverflow.com/a/8567429

As a result, raising and catching exns in OCaml is cheap. List.fold_until in Base, for example, is implemented with exceptions: https://github.com/janestreet/base/blob/ae169dc8097b3da8e99d... (With_return is internally implemented by raising an exception in a try-with.)

Implementations of functional programming languages often have powerful, yet fast!, non-local control-flow primitives. GHC recently sprouted delimited continuations (https://ghc.gitlab.haskell.org/ghc/doc/users_guide/9.6.1-not...), for example, and Lisps have had that for much longer. It's easy to program in an imperative language for a long time and think that straight-line control flow is the end-all, be-all for performance—but it doesn't have to be the case.


Exceptions are great for exceptional conditions in performance-sensitive code. They provide a mechanism to move your error-handling code far out of the hot path. If you are expecting an occasional exception, they are terrible, and the C way of returning an error code is the way to go.

In most cases, when you don't control what you expect, exceptions are not great. In a constrained embedded system or a trading system, they can be amazing.
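
A rough sketch of the contrast, with hypothetical helper names, assuming the failure really is rare:

  #include <stdexcept>
  #include <vector>

  // Callee signals failure by throwing: the caller's hot loop contains no
  // error checks at all, and the handler sits far away from the hot path.
  int checked(int x) {
      if (x < 0) throw std::runtime_error("negative input");  // rare
      return x;
  }

  long sum_with_exceptions(const std::vector<int>& a) {
      long sum = 0;
      try {
          for (int x : a) sum += checked(x);  // no per-call check here
      } catch (const std::runtime_error&) {
          return -1;                          // cold path: abandon the batch
      }
      return sum;
  }

  // C style: the callee reports failure through its return value, and the
  // caller checks after every call; a failure costs almost nothing.
  bool try_checked(int x, int& out) {
      if (x < 0) return false;
      out = x;
      return true;
  }

  long sum_with_error_codes(const std::vector<int>& a) {
      long sum = 0;
      for (int x : a) {
          int v;
          if (!try_checked(x, v)) return -1;  // check on every iteration
          sum += v;
      }
      return sum;
  }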


Why would error-handling code impact performance?

Besides, exceptions in C++ are known to have negative impacts on overall performance even if you don't use them. (see: https://preshing.com/20110807/the-cost-of-enabling-exception...)


This blog post is specifically talking about 32-bit x86 C++ ABI, which was notoriously not zero-cost wrt exceptions even on success-only code paths. Lessons learned from that went into the Itanium C++ exceptions ABI based on static unwind maps, which has been adopted by basically every other architecture except for Windows, which has its own take (that is still zero-cost).


Huh? It’s for Skylake, which would imply x64, no? I doubt Daniel Lemire is running 32-bit code given his background of focusing on applying AVX everywhere, which requires x64. x64 uses Itanium exception handling.


OP was ambiguous, he means that the article "The Cost of Enabling Exception Handling" that was linked only applies to Windows x86, not to Windows AMD64 or the Itanium ABI.

The Itanium ABI and 64-bit Windows have near-zero cost when exceptions aren't thrown.


Code paging. If your hot code spans multiple pages, you run the risk of the CPU needing to perform page fetching to continue executing your code. If your exception handling code causes the hot path to exceed the page size, then you run the risk of cache misses which can cripple performance.

Rust has the #[cold] attribute for this exact reason - to mark functions or branches as 'cold', placing them into a separate section to reduce the hot path's code size.
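
C++ has rough equivalents: GCC and Clang accept a cold attribute on functions, and C++20 added [[likely]]/[[unlikely]] hints on branches. A sketch of what that can look like (hypothetical function names):

  #include <cstdio>
  #include <cstdlib>

  // GCC/Clang: mark the failure path cold so it gets moved out of line,
  // keeping the hot loop's machine code small and contiguous.
  [[gnu::cold]] [[noreturn]] void fail(const char* msg) {
      std::fprintf(stderr, "fatal: %s\n", msg);
      std::abort();
  }

  long sum_positive(const int* data, int n) {
      long sum = 0;
      for (int i = 0; i < n; ++i) {
          if (data[i] < 0) [[unlikely]]  // C++20 branch hint
              fail("negative value");
          sum += data[i];
      }
      return sum;
  }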


If you use error codes, you always pay the (admittedly cheap) cost of generating and checking them. If the function isn’t inlineable, this might increase stack usage, prevent inlining in the first place, etc.

With exceptions the error paths don’t even have to exist in your hot code.

The cost of exceptions when they aren’t thrown has come a long way since 2011 (the time of that post) in clang and gcc and I’m 99% sure is almost always zero now.


>Unfortunately, I often see solutions abusing exceptions:

    int sum = 0;
    for (int x : a) {
        try {
            sum += get_positive_value(x);
        } catch (...) {
            sum += -x;
        }
    }
you need to fire whoever does that...


This is a real problem with C++ exceptions - so much effort has gone into making them "free if you don't use them" that any time you do use them the perf is horrific.

This means even reasonable "control flow" cases like network errors for example can be terrible.

The more general case - and the real thing this article is complaining about - is the use of exceptions for normal control flow, which is something I agree is awful (aeons ago exceptions under .net's debugger were orders of magnitude slower than outside the debugger, but it meant that if you tried to debug logic involving an ANTLR generated parser your life was misery as - at least then - ANTLR used exceptions for parser control flow \o/)


Here's another link about the problems with C++ exceptions that I find very insightful: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p25...

The major issue that stood out to me is that in a multi-threaded environment, exception unwinding is effectively single-threaded. This means that if you have C++ code that throws a lot of exceptions, you are going to see a lot of threads getting blocked by lock contention


To be clear, the single-threaded behaviour of some implementations is, well, implementation-specific. There is nothing in the C++ standard that requires it. In particular, for GCC under Linux, the issue is that the unwinder needs to parse the unwind tables and needs to protect itself from a concurrent dlclose yanking them out from under it. There is some work currently on GCC to move this cost from unwinding to dlclose and allow concurrent unwinding without breaking the ABI (but it is hard).


Sounds like a job for hazard pointers.


Possibly! Alternatively a biased RW lock might work.


For JS devs, I’ve found it’s actually best to isolate your exception handling from exceptional logic. As in, this will perform worse called in a hot loop:

  function fallible(a) {
    try {
      return anything(a)
    } catch {
      const b = somethingElse(a)

      return anotherThing(b)
    }
  }
… than if your somethingElse case handles anotherThing, or if you do more work in the try block. In some cases exception throwing outperforms if conditions as long as both the try and catch blocks only do one thing each.


excepts were the only way in CLU to handle control flow. something like

    while read_character { 
        handle_character 
    } except_when end_of_file {
        return string
    }
and it did this primarily because it was typesafe: read_character always returned a character, and the "except" cases could return what they were declared to return. "except" was far from exceptional.

There's nothing inefficient about that; it's neatly linked up at compile/load time, and exceptions that go up the stack simply unwind the stack just as returning values up the stack does.

What gets users confused is conflating this with hardware interrupts and system signals: external, exceptional events that can occur at any time, interrupting the flow of control and requiring registers (CPU state) to be saved so the process can be restored and continue when the interrupt is handled (and which CLU handled through the same system). Such interrupts and signals bear only a superficial resemblance to excepts that are unexceptional, and to not-so-unusual, recoverable errors like out-of-disk-space on a file write, which is code you're only going to write for an important high-availability or headless system, or for a nice fat rich word processor that people will sit in all day long.

I'm just explaining this all because when I read discussions like this I'm constantly thinking "but...but... did you think of...?"


Returning an option sum type is also typesafe.

Exceptions have to walk up the stack until a suitable handler is found; that handler can't know ahead of time where the value is coming from or whether it will ever arrive, just what type it will have if it does. Code emitting exceptions likewise has no knowledge of who (if anyone) is going to handle its output. It is a nonlocal goto in reverse.

Compared to regular functional returns there are so many unknowns. I'd prefer returning an option value any day of the week.


> Code emitting exceptions also have no knowledge who (if anyone) is going to handle their output.

This is no different from code returning a regular value. When writing `return -EINVAL;` or `return 0, fmt.Errorf("")` or `return Err(something)`, you have no idea who (if anyone) is going to handle your output.

Also, one reasonable way of implementing exceptions could be exactly to translate all functions that can throw exceptions to functions returning a Result type, and translating function calls to such functions to the equivalent of pattern matching on that return. Of course, this mostly forces the language to only support checked exceptions (otherwise this overhead would be added to all function calls, even those that can't actually fail).

Most languages that implement exceptions have chosen a different trade-off though: make exceptions costly to throw, but make sure they have 0 cost on the happy path. Happy-path code becomes more efficient than it is possible to be in a language which uses return values for errors (since there is no need to check the result before using it), but the unhappy path becomes significantly worse.


A normal function return is handled directly by the caller, even if the caller decides to completely ignore it. In the case of exceptions it continues to implicitly pop the stack to attempt to find a handler. The explicit vs implicit nature is quite different.

I'd also point out that gcc and clang support nodiscard and warn_unused_result, giving a function some ability to force callers to handle returns. Go, Rust and even Java (thanks to errorprone) have similar guardrails.

I think you also need to balance the marginal efficiency wins of not checking for errors on return against the overall robustness of your program. The likelihood that an error condition is properly handled is heavily predicated on your ability to know that it might occur in the first place. Languages where exceptions are commonplace make this very challenging because they rely heavily on unchecked exceptions. Languages that value error checking have tended to shun exception style in favor of returning option types.
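
For example, in C++ a [[nodiscard]] result type makes the compiler warn whenever a caller drops an error on the floor (a minimal sketch, with a made-up Status type):

  #include <cstdio>

  // Any function returning this type by value is effectively nodiscard:
  // ignoring its result triggers a compiler warning (-Werror makes it fatal).
  struct [[nodiscard]] Status {
      int code = 0;
      bool ok() const { return code == 0; }
  };

  Status write_record(int /*record*/) {
      // pretend this can fail
      return Status{0};
  }

  int main() {
      write_record(42);             // warning: ignoring return value
      Status s = write_record(43);  // fine: caller inspected the result
      if (!s.ok()) std::puts("write failed");
  }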


> one reasonable way of implementing exceptions could be exactly to translate all functions that can throw exceptions to functions returning a Result type, and translating function calls to such functions to the equivalent of pattern matching on that return

That's exactly what the lightweight exception proposal for C++ argued for. It additionally used some ABI tricks like storing the discriminant in a flag register when returning from an exception throwing function allowing for very cheap and compact pattern matching.


That's one way to compile exceptions. There are other methods with different tradeoffs for different cases.


In python, exceptions for control flow is how the language works. Think about that.


Thank you! Was starting to think I got all my codebase wrong and misunderstood what is « pythonic »

So doing this in python is ok (fastest way to check if a key is in a dict is catching KeyError for example, if I remember correctly)


Yes, do not let the HN anti-exception crusaders gaslight you. I still remember when I "discovered" exceptions in programming - "this is so much cleaner!"

All this theoretical mumbo jumbo is just noise. Very few of us are dealing with the type of programming every day where exceptions can actually become a noticeable bottleneck.


I'm a little disappointed that the author doesn't explain why throwing the exception is much slower.


For Java see The Exceptional Performance of Lil' Exception from Aleksey Shipilëv, https://shipilev.net/blog/2014/exceptional-performance. As always, Shipilëv does a fantastic job at explaining inner details of the JVM and observed performance profile.

A few years ago, I got hit by the high cost of an hidden exception (used for flow control by the JDK) while using LocalDate#format to parse a valid date. It was fun to troubleshoot and fix OpenJDK https://unportant.info/using-exceptions-for-flow-control-is-...

I would be interested in reading similar articles for other languages.


Stack traces. The information required to build a stack trace is deliberately kept off the critical path so it doesn't impact performance during normal operation, but that means that building a stack trace requires going out and fetching the debug symbols and correlating them.

Without stack traces, exceptions are just a type of goto.


While you are absolutely right that collecting stack traces is an extremely costly operation, it's not the only problem. For example, in C++, which doesn't collect any kind of stack trace, throwing an exception is still ~1 order of magnitude slower than returning a value through all the layers. Note that this cost only happens when the exception is thrown; exception-based code is otherwise slightly faster than `if ret < 0` style C code, as the check is entirely omitted.

There is some explanation as to why this happens in this SO response [0]. The gist is that the dynamic nature of exception handling means that the compiler needs to consult runtime type information to decide where to jump when the exception is thrown, which means trawling through some relatively lengthy data structures. Adding to the problem, these data structures are not normally used a lot, so they are very likely not to be cached - though this may change for a program that actually throws exceptions in a hot loop, and the difference may not be as stark.

[0] https://stackoverflow.com/questions/13835817/are-exceptions-...
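
A rough microbenchmark sketch if you want to see the gap yourself; the absolute numbers will vary a lot by compiler, standard library and CPU:

  #include <chrono>
  #include <cstdio>
  #include <stdexcept>

  // Same logical operation, two ways of signalling failure.
  int via_return(int x) { return x < 0 ? -1 : x; }
  int via_throw(int x) {
      if (x < 0) throw std::runtime_error("negative");
      return x;
  }

  template <typename F>
  long long time_ns(F f) {
      auto t0 = std::chrono::steady_clock::now();
      f();
      auto t1 = std::chrono::steady_clock::now();
      return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
  }

  int main() {
      const int N = 1'000'000;
      long sink = 0;

      // Worst case for both: every single call is an error.
      long long t_ret = time_ns([&] {
          for (int i = 0; i < N; ++i) {
              int r = via_return(-1 - i);
              sink += (r < 0) ? 1 : r;
          }
      });

      long long t_exc = time_ns([&] {
          for (int i = 0; i < N; ++i) {
              try { sink += via_throw(-1 - i); }
              catch (const std::runtime_error&) { sink += 1; }
          }
      });

      std::printf("return: %lld ns, throw: %lld ns (sink=%ld)\n", t_ret, t_exc, sink);
  }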


Ah, fair, I don't have a lot of experience with C++. My answer was based on Java, which from the benchmarks I've seen doesn't suffer from any performance hit when you use a static exception object with no stacktrace.


What about dynamic languages? Are they always collecting the stack and keeping it somewhere the exception object can grab the data from at any time? And if that's the case, wouldn't it always be slow regardless of whether anything is raised or not?


A performant dynamic language won't bundle the debug symbols either, so my assumption would be that the performance of an exception would still be bad.


I think what the article is addressing is likely not what is being discussed in this thread. The article ("Unfortunately, I often see solutions abusing exceptions", followed by using exceptions to replace if statements) is very likely discussing poor understanding of the topic in student assignments rather than actual code in industry. Such mistakes (the abuse of exceptions in the article) tend to go away once students become professionals, because they are more easily caught during code review than other kinds of common mistakes.


Maybe "don't use exceptions for conditional logic" would be a better way of phrasing this. If exceptions become a performance bottleneck, there's nothing exceptional about them anymore.


This should be: avoid exception catching in performance-sensitive code. Assuming one isn't using exceptions instead of normal flow control like if/else, the dominant cost is the branch (or branches) needed to determine whether there is an error to communicate, not the fact that an exception is possible. Now, the catching part, after the exception is thrown - sure, that is expensive, at least in C++: the full stack needs to be unwound, along with potential RTTI lookups. So this becomes a "do you really need that branch in the hot loop" kind of problem.


The author mentions Go, but Go has "exceptions" too in the form of panics. I tested the speed of throwing panics, and if the error type evaluation is involved, the panics don't lag behind returning errors one bit. I even created an experimental library to support this in a playground project: https://github.com/apitalist/lang


I’m curious if there are ecosystems where get_positive_value would be a reasonable name for the method described. I’d expect something like assert_positive, or check_is_positive, or throw_if_negative. I admit these would suggest returning void rather than int, but “get” just isn’t right either in the ecosystems I’m familiar with (e.g. Rust, Python, JavaScript).


Not having exceptions is about not having code that throws when you didn't expect it to. That is very hard to guarantee in languages that throw naturally, like Java or C/C++. If you have errors as part of the contract, however, you can be sure (or at least surer) that the function you are calling does not fail behind your back.


I think most serious C++ shops are probably compiling with -fno-exceptions.


That is entirely false. You can barely even use half the language without exceptions.


I don't really know how you'd falsify the claim (it's not particularly falsifiable), but I'm open to being wrong... That being said, only using 50% of a language like C++ might be considered a feature and not a bug. ;)


More anecdata: my shop didn't allow exceptions in production code until very recently. I've written C++ with no exceptions for a quarter century. Now that I'm allowed to add them, I don't want to; I don't like the invisible control flow. I like how Rust does it. Mostly my shop still ignores exceptions unless a dependency uses them.

What half of the language am I missing?


I don't understand why this blogpost is getting so many upvotes.

Anyhow! I use exceptions and page faults to optimize my loops. There's no need to check a loop's condition every iteration when you can abuse an intentional error to do it for you. Also works great using page faults!


Impossible to avoid in python.. exceptions are even how a simple for loop is implemented under the covers!


Python was never designed to be performance sensitive. Most of python’s high performance libraries were written in C/C++.


Is that still true in recent Python versions? Sounds like low-hanging fruit for perf optimisation.


Yes, the for loop still calls next() on the iterator until it throws a StopIteration exception.


Using exceptions in python isn't any more expensive than not using exceptions because the interpreter pays the same cost no matter what. It's a core design decision. Changing it would probably break some things.


Python 3.11 made the "try" part of exceptions zero-cost. The "except" part only has overhead if the exception is triggered.


So the answer to the question is yes, because for loops signal their finishing by raising an exception.


I'm completely ignorant of the subject, but most surprising to me in his example was that the compiler didn't optimize away the inefficiency. Is there something about exception handling that makes it not get taken into account during optimization?


C++ exceptions, and by extension Rust panics and C# exceptions (with caveats), involve stack unwinding as well as gathering the corresponding details and producing an exception object (in C#). It is very expensive, and using it as a normal condition is in general not the best idea. By definition it cannot be optimized in a way that gets both good performance and keeps all existing side effects.

That's why Rust's Result<T, E> is vastly superior: out of the box it pushes users toward a very efficient and idiomatic error handling mechanism, based on its enum types, which enjoy a lot of compiler optimizations.

You can also achieve fairly good results with struct-based custom Result<T, E> implementations in C#, but until the language gets support for proper discriminated unions, they will always be inferior.

Thankfully, in C# and, I assume, in other exception-based languages, the compiler is smart enough to recognize exceptions as cold paths and reorder the emitted code to minimize their impact when you do not throw them. But traversing try-catch blocks still tends to pessimize the codegen significantly, hence C#'s heavy reliance on various throw helpers to keep hot paths clean.


In the C# stdlib there is at least currently the Task and ValueTask pair, where the latter is a struct optimized for 'failure is rare and success is synchronous' that degrades smoothly to the former, which can represent asynchronous computation and lets you recover failure exceptions w/stack without having to ever actually throw. I do wish the async part was decoupled from it, but it's nice that all the error information is preserved if used correctly.


While correct in terms of convenience, it still wraps an exception with all associated costs. But yet again, you can hand-roll custom `Result<T, E>` even today and implement something like `.Wrap(Func<T> func)` that would produce `Result<T, Exception>` to interoperate.

Anyway, if the callee throws an exception, it will be just as costly as any other even if you can examine the `Value/Task<T>` for `.IsFaulted` and `.Exception`.

Possibly counterintuitively, if thrown, it will also dwarf the overhead of allocating state machine object and executing `MoveNext()` decoration. Hence even in async there is value to be had in not using exceptions in performance-sensitive scenarios.


Nick Chapsas did a demo of just how much overhead regularly thrown exceptions cause: https://www.youtube.com/watch?v=2f2elFRmeLE

Spoiler, each throw/catch costs 20 microseconds


I don't know how it is now, but back in the day C++ exceptions were so messy and non-performant that Google forbade their use in company C++ code. And they employed at least one member of the standards committee!


  "On their face, the benefits of using exceptions outweigh the costs, especially 
  in new projects. However, for existing code, the introduction of exceptions has 
  implications on all dependent code. If exceptions can be propagated beyond a
  new project, it also becomes problematic to integrate the new project into
  existing exception-free code. Because most existing C++ code at Google is not 
  prepared to deal with exceptions, it is comparatively difficult to adopt new 
  code that generates exceptions." (https://google.github.io/styleguide/cppguide.html#Exceptions)
So, basically, Google forbids exceptions for historical reasons, not because of performance.

But, sadly, countless companies parroted this section of Google's style guide for all the wrong reasons (mostly just cargo culting Google) leading to unfortunate fragmentation of error handling in the C++ library ecosystem.


Looks to still be the case:

https://google.github.io/styleguide/cppguide.html#Exceptions

One could argue this is something of a technical debt issue, as the rationale notes:

> "Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project. The conversion process would be slow and error-prone. We don't believe that the available alternatives to exceptions, such as error codes and assertions, introduce a significant burden."

> "Our advice against using exceptions is not predicated on philosophical or moral grounds, but practical ones. Because we'd like to use our open-source projects at Google and it's difficult to do so if those projects use exceptions, we need to advise against exceptions in Google open-source projects as well. Things would probably be different if we had to do it all over again from scratch."


Exceptions in C++ have a ton of slow machinery, so available optimization is limited, and they are assumed to be... exceptional... so they are not generally speed-optimized. If you are expecting an occasional exception in a C++ program, std::optional with error checking will make you a lot happier.
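
Something along these lines, for an operation that's expected to fail now and then (a sketch; std::expected in C++23 additionally lets you carry an error value):

  #include <charconv>
  #include <optional>
  #include <string_view>
  #include <vector>

  // Report the expected failure in the return type instead of throwing
  // (std::stoi, by contrast, throws on bad input).
  std::optional<int> parse_int(std::string_view s) {
      int value = 0;
      auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
      if (ec != std::errc() || ptr != s.data() + s.size())
          return std::nullopt;
      return value;
  }

  long sum_valid(const std::vector<std::string_view>& fields) {
      long sum = 0;
      for (auto f : fields)
          if (auto v = parse_int(f))  // cheap check, no unwinding
              sum += *v;
      return sum;
  }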


On top of everything else, the behavior of exceptions when mixed with setjmp and longjmp is undefined, which can be a real problem if you're using any libraries that you haven't exhaustively analyzed to confirm that they don't use that feature. If memory serves, this includes things like the Lua interpreter (ETA: this used to be true. ca 2008, the Lua interpreter was changed so that it can conditionally compile to implement its try/catch via setjmp / longjmp or via C++ exceptions).

I generally consider exceptions in C++ harmful and avoid them if possible. It is not, unfortunately, always possible, as exceptions are the only way the language defines for constructors to fail.


Exceptions in all modern C++ compilers are implemented in a way that prioritizes the performance of cases where an exception is NOT thrown at the cost of performance when an exception is thrown. Note that this means C++ code should be slightly faster than equivalent C/Go/Rust-style code that has to do an if/else after every fallible function call to check whether it failed.

Once this decision was made, investing time in optimizing programs that use exceptions for regular control flow became very very low in the priority list of all C++ compilers. This would include recognizing cases where an exception could be replaced with a single if/else, and optimizations for the unhappy path in general.


Branch prediction.


I'm no C++ expert, but I think it must be hard for the compiler to figure out - the noexcept keyword exists in C++ to denote a method that doesn't throw exceptions (so the compiler knows it doesn't need to add default exception handling code). Given that this is a manual thing left up to the programmer, it must not be straightforward for the compiler to infer.


noexcept actually forces it: if the compiler doesn't know the body can't throw, it must essentially wrap the function body in a try { …body… } catch(…) { terminate(); }. So the compiler is doing the same amount of work inside the function. Outside, it can assume that no exception will escape that function, yes. If it already knows nothing can throw, it doesn't need the wrapper.
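
A tiny illustration (hypothetical functions):

  int may_throw();  // imagine an out-of-line function that might throw

  // If may_throw() actually throws, std::terminate() is called instead of
  // unwinding past this frame - morally the same as wrapping the body in
  // try { ... } catch (...) { std::terminate(); }.
  int wrapper() noexcept {
      return may_throw();
  }

  // Callers of wrapper() need no unwind bookkeeping for this call, because
  // no exception can escape it.
  int caller() {
      return wrapper() + 1;
  }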


Without profile-guided optimization you normally assume the exception path is the slow path. For one, you handle it far away rather than inline nearby, since inline handling would hurt cache efficiency (space wasted within a cache line, or extra wasted prefetching) when no exceptions are actually being thrown.


C++ exceptions have a lot of machinery behind the scenes, and I don’t think anybody has ever really put the effort in to optimize pathological edge cases like this.


A lot of time and effort went into optimizing the most common case (no exceptions - ideally this should be no-cost) at the expense of everything else. It's very much by design.


Yes, 100%! This is exactly the machinery that makes it hard for a compiler to elide the exception.

I don’t think there’s anything in the standard that prevents the compiler from optimizing away said exception completely, but neither gcc nor clang does so, even in extremely trivial cases where the exception is identical to an if statement:

https://godbolt.org/z/8Y81bzo5s
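
Something of this shape (not the exact snippet from the godbolt link, but the same idea):

  // Thrown and caught within the same function: semantically just an if/else,
  // yet gcc and clang still emit a call into the C++ runtime to throw.
  int f(int x) {
      try {
          if (x < 0) throw 1;
          return x;
      } catch (int) {
          return -x;
      }
  }

  // What it is equivalent to:
  int g(int x) { return x < 0 ? -x : x; }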


TIL that it's even possible to use exceptions instead of bog standard if statements.

Would love to know why people would do this, though. Surely everyone masters if-else statements well before they even learn what a try-catch statement is!?


Using exceptions as control flow is a pattern I saw a lot at AWS in Java code.

It boiled down to it being simpler to abuse exceptions. You could instead return objects, but it simply ended up being more code to write.

At some point I tried to write things the 'proper' way by returning a Result object with the possible states, but it ended up being more complex than just throwing exceptions.


It’s because Java supports nonlocal returns and pattern matching, but only when you use exceptions. Returning POJOs to report problems makes all the intervening code harder to read because the happy path is no longer separated.


If/else statements don't bubble and exceptions do. The code samples in this example are clearly simple representations of the pattern, but the pattern is to call into a more complex function than `get_positive_value`.


If .. Else .. Finally?


This was a fairly common position when I was programming C++ over a decade ago.

Performance critical code like game engines, etc used to avoid exceptions like the plague. Not entirely surprised to find that’s still the case.


99.99% (maybe even more) of a game engine is not performance-critical code, though. I guess the reason was a different one: for exceptions to work you need a sane memory model. It's very hard to write exception-safe code in C++. A garbage collector is very helpful for accomplishing that, and _this_ is the thing that's problematic, because it interferes with the 0.01% of performance-critical inner loops.


> It's very hard to write exception safe code in C++.

How? As long as RAII is used it's basically for free.


But there is a bunch of code that has yet to be verified as exception-safe. And in the gaming industry it's often third-party code that you cannot even inspect.


The right statement is then that it is very hard to retrofit exceptions into a non-exception-safe code base. Exception safety itself is not easy, as it requires different code patterns and idioms than usual.

I believe that those idioms are useful and great even for non-exceptional code, but that's another story.


I agree, and would replace my original statement of "It's very hard to write exception safe code in C++" with this.


With the 32-bit x86 ABI, exceptions hurt any time your code had the possibility of throwing one. The 64-bit ABI fixed that, and is REALLY slow when an exception shows up, but does not cost anything in the happy path.


More advanced languages like Koka or Eff use exception-like functionality for _every_ side effect. Every IO, every log statement, practically everything and they perform fine.


> I often see solutions abusing exceptions

Wow, I thought I had seen bad code, but even I've never seen this. My condolences, OP.


i wonder how feasible it would be to implement a syntax coloring mode that provides some indication of instruction count from static analysis. so not quite dynamic profiling, but rather something that uncovers (sometimes) surprising implementation details like this and creates a better sense for what the compiler is doing under the hood.


"Avoid using exceptions for control flow" is unhelpful advice because it is a paradox: exceptions ARE control flow.


Personally, I'd say "Avoid exceptions" and leave it at that.


umm, the example is contrived and the title is overly broad. performance-sensitive code has exceptional situations that can either be handled or should cause the program to abort. performance-sensitive code uses libraries that throw exceptions too - or is the recommendation here to use only non-exception-throwing libraries in performance-sensitive code? that significantly reduces what libraries i can use in my code.

my interpretation of the essay is: where possible, prefer if/else to try/catch. branching on if/else is easy and inexpensive. try/catch can be expensive if (1) a new exception instance is created every time, or (2) a stack/backtrace is generated. you really want to avoid (2) when all you need from the exception is the information that something abnormal happened. if you can use a singleton exception, you may be able to use try/catch. then you can make the beauty (of the code) judgement later.



