I'm sure I'm in a very tiny minority, but I kinda like having errors as values - I prefer having errors be things that are part of the normal control flow of my code, and that can be handled explicitly within an if block (which may start simple but can and does evolve to wrap details, log and skip, fallback to an alternative, etc.)
Yes, it means errors show up everywhere in your code - but the same kinds of errors are something all languages have to deal with. Just accept our faulty reality for what it is and have the discipline to account for it.
Most Go criticism I've seen on error handling has been “if you are going to use errors-as-values, sum types and pattern matching were the well established approach to that when Go was created and multivalued returns when only one value is usually going to be meaningful at a time is both logically and ergonomically worse” not “don't use errors as values”.
(Ignore the obvious SQL injection issues in that query, thx :)
It takes a lot to make me mad, but this inane behavior manages it. This behavior, singularly, makes it nearly impossible to achieve higher levels of ergonomic safety on the types of errors functions can return. You basically always have to just return the `error` type, and require callers to do runtime reflection.
And this can rear its ugly head all the time. You've got deep functions that are super specific and know exactly what kinds of errors they can return. You've got higher-layered functions that call lots of things that all return different error types, so those may just return `error`. Turns out, they all just have to return `error`, because if you make them more specific, at the freakin TYPE DEFINITION, a segment of the syntax that other sane languages would say is "compiled away" and "not relevant to runtime behavior", you lose any ability to guarantee nil comparisons will work the way higher-level callers expect.
It's a problem in any situation where a function broadens the type of a struct pointer it's returning to an interface. It's just most commonly encountered with errors, because you can't just return `*DatabaseError` there: the nil pointer becomes a non-nil interface value, and everyone checks errors with `if err != nil`.
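To make it concrete, a minimal sketch of the trap (QueryDatabase and DatabaseError are stand-in names for the kind of code being discussed):

package main

import "fmt"

type DatabaseError struct{ msg string }

func (e *DatabaseError) Error() string { return e.msg }

// The "more specific" signature that looks better at the type level:
func QueryDatabase(q string) (string, *DatabaseError) {
    return "row", nil // success: a nil *DatabaseError
}

func main() {
    var err error
    _, err = QueryDatabase("select 1") // the nil pointer is wrapped into a non-nil interface value
    fmt.Println(err == nil)            // prints false, even though nothing went wrong
}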
The internal Go style guide of a billion-dollar tech company every single person reading this has heard of reads: never return pointers to structs designed to be used as errors. It's a significant, real problem. It's antithetical to any reasonable understanding of how this code should work. It's antithetical to even unreasonable understandings. Blog posts which explain it start with "this makes sense when you understand how reflection works on interface-fulfilling pointer values" then launch into a twenty-paragraph graduate thesis, as if Go wasn't explicitly designed to help fresh-out-of-college engineers write productive and performant code for Google.
I like Go, but their recent statement that Go will never break backward compatibility genuinely scares me, because I'm not sure they can fix this without breaking existing code - which means it may never get fixed. I'm doubtful the designers even consider it a bug. Just... ugh.
Pointer to struct is not equivalent to an interface. You're not supposed to return your custom error-like struct pointers from functions - you're supposed to return an "error" interface. Just change the "QueryDatabase" to return "(string, error)" and your whole problem will disappear: https://go.dev/play/p/fb0e_4loDBf
This is a common mistake when coming from exception-based-error-handling languages where exceptions are differentiated by type. In Go, if you want a more granular distinction between different kinds of errors, you don't use types, you use values: https://go.dev/play/p/ddzhAqRgK_1
If you insist on using the typesystem for granular error checking, then you can define an "Is(target error) bool" method on your custom type to differentiate in the same way: https://go.dev/play/p/gZmYgOq6wSo
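A rough, combined sketch of both approaches (ErrNotFound, TimeoutError and friends are made-up names for illustration):

package main

import (
    "errors"
    "fmt"
)

// Approach 1: sentinel values, compared with errors.Is.
var ErrNotFound = errors.New("not found")

func lookup(id int) error {
    // Wrapping with %w keeps the sentinel reachable for errors.Is.
    return fmt.Errorf("lookup %d: %w", id, ErrNotFound)
}

// Approach 2: a custom type with its own Is method.
type TimeoutError struct{ Op string }

func (e *TimeoutError) Error() string { return e.Op + ": timeout" }

func (e *TimeoutError) Is(target error) bool {
    _, ok := target.(*TimeoutError)
    return ok
}

func main() {
    err := lookup(42)
    fmt.Println(errors.Is(err, ErrNotFound)) // true

    var terr error = &TimeoutError{Op: "dial"}
    fmt.Println(errors.Is(terr, &TimeoutError{})) // true, via the custom Is method
}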
Go interfaces generally belong in the package that uses values of the interface type, not the package that implements those values. The implementing package should return concrete (usually pointer or struct) types.
There clearly is this one exception ONLY for the `error` interface. This advice plus the weird runtime check conventions like `Is` / `As` are nowhere to be seen for other interfaces in the ecosystem.
Go makes a lot of effort so that errors are special. There is an exceptional tuple-like return form that exists mainly for returning errors, and the error-return-interface convention breaks the usual interface-return conventions. If they implemented sum types, then we would have the typical `Result<T, Error>`, and then Go programs would have many more invariants and be much more composable. But presumably compilation and/or run time would be worse.
Another alternative would have been adding syntax sugar like Rust's `?` for `if err != nil { return ..., err }`, and `match err { case CustomError: ... case error: ... case nil: ... }`. And some helpers for composing/piping functions.
There would be two major problems with sum types (especially for results) in Go 1.x:
- There are genuine use cases where returning both a value and an error makes complete sense. My favourite example is doing I/O: some bytes were copied, but there was a problem with the rest.
- It has significant overlap in scope & functionality with interface types. You could have "type Color enum { Named(string); RGB(int, int, int) }", or you could have "type Color interface { RGB() (int, int, int) }; type NamedColor string;" etc and it's not very clear which style you'd be supposed to use, and in what case.
The first problem could be somewhat alleviated by having very special result types that can have both a return value and an error, but these would have to be different from regular "variant" results, so you'd end up either with two ways to do a very similar thing, or yet another layer of abstraction.
The second problem is fundamental to the design of the language, and I would absolutely *hate* to see Go go through the same unholy mess as Python: "old-style" vs "new-style" classes, dataclasses (with third-party "attrs" as a stepping stone), protocols/generic containers, "match"/destructuring, all of these were clumsily grafted on and code that mixes all of these styles can be found all over the place. You do need to make ADTs and pattern matching a first-class citizen in 1.0, otherwise the place you end up in is not pretty at all.
Go? Yeah, we have a lot of SortIntegers([]int) and SortStrings([]string) and SortByName([]Person) in older code, but that's about its biggest sin. As of 1.21, slices.Sort & slices.SortFunc are in std and fixing this older code is pretty much a mechanical task. It's not perfect, but at least it's not worse by trying too hard to be perfect.
> There are genuine use cases where returning both a value and an error makes complete sense.
I don't feel anyone is saying that the addition of sum types would have to replace the existing (Result, error) pattern. It's just a tool in the toolbox. In fact, I think having both is really interesting from a function signature communication perspective; if a function returns (Result, error) in a post-sum-type world, that hints to me that the Result might still be useful even if an error is returned.
> I would absolutely hate to see Go go through the same unholy mess as Python: "old-style" vs "new-style" classes,
Anytime someone brings up how horrible the Py2 to Py3 transition was, I will remind them that Python is the most popular programming language in the world. Clearly, the transition wasn't actually bad enough to negatively impact its popularity, yet it's the token example of "don't break existing programs, you don't want to be like Python".
That take is just counter-intuitively wrong. It's like the SDLC paradigm of "releasing often reduces bugs": it feels wrong, but it's actually right. I'm sure saying "we'll never break existing programs" imparts a nice warm feeling in your heart, "we're mature, unlike those other dumb languages". But there is extremely little evidence that backward-incompatible language changes hurt the adoption of programming languages. Actually, there's substantially more evidence that languages which can't adapt and evolve will eventually die.
I'm not asserting that will happen to Go; I think they're good about bringing enhancements to the language in ways that don't break existing programs (like generics). But I also strongly believe they need to rethink their "we won't break existing programs" rule. I am begging the Go team to break my programs. It's not a big deal. I just can't imagine still writing Go programs in 2050 the way we do today; it leaves so much value on the table, and we can do better.
> My favourite example is doing I/O: some bytes were copied, but there was a problem with the rest.
How do you reconcile that with the general rule or convention that `if err != nil` the value is unsafe to use?
Apart from this, I don't think sum-types should be added this late either. I should have written "if they had implemented". "old vs new style" does more damage than good. Today, it is what it is.
What I do think is feasible is adding some more syntax sugar around errors.
Conventions are not laws, and should not be followed blindly.
> What I do think is feasible is adding some more syntax sugar around errors.
The most common example I hear is something like Rust's `?` operator for propagating errors. In Go, errors should almost always be expanded with additional context, and programmers should think hard about errors and how to handle them. Syntax sugar would just make it easier for people to ignore errors. And you can't ignore errors forever - they always come up, most often in the place you least expect them.
Actually, stack traces are a very poor way to provide context.
I can't count how many times I've worked with some library in ${LANGUAGE_WITH_EXCEPTIONS} and had an exception print out a giant stack trace, only for me to realize that the stack trace is useless because the place where the exception is raised isn't the place where the error actually happened, but is instead some wrapper/worker collector/other kind of indirection mechanism. In other words, stack traces are directly bound to call stacks, and call stacks don't necessarily contain all the relevant context of an error - they only do so in the case of simple, single-threaded programs.
Golang-style error-as-values actually provide real, human-curated context that is relevant to the operation at hand. They can be passed between goroutines, and are completely independent of any call stacks. That, in my opinion, makes them vastly superior to stack traces.
It's true that stack traces don't work out so well in many languages. But ironically, in Go they do seem to work fine in my experience. Adding a stack trace to custom Go errors does show me correct and useful traces across goroutines. I guess it's the fact that async is built into the language as a primitive, or something.
Regarding wrapping errors, if you don't provide a stacktrace or at least file:line, how do you actually map the error to source code? Do you just grep and pray that the message is unique enough?
> Do you just grep and pray that the message is unique enough?
If you're new to the codebase, yes, grepping can get you a long way.
In my experience, just following the error messages from the top (usually errors are printed only in the main function) is enough. Hypothetical example: if you have a file sync program and you get an error "file sync failed: device Foo unreachable: connect error: quic://123.45.67.89/ i/o timeout", you can already mentally map where exactly the error happened. The exact file and line should be easy to locate.
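A rough sketch of how a chain like that gets built up, assuming each layer wraps with fmt.Errorf and %w (all the names are made up for the example):

package main

import (
    "errors"
    "fmt"
)

var errTimeout = errors.New("quic://123.45.67.89/ i/o timeout")

func connect() error {
    return fmt.Errorf("connect error: %w", errTimeout)
}

func reachDevice(name string) error {
    if err := connect(); err != nil {
        return fmt.Errorf("device %s unreachable: %w", name, err)
    }
    return nil
}

func syncFiles() error {
    if err := reachDevice("Foo"); err != nil {
        return fmt.Errorf("file sync failed: %w", err)
    }
    return nil
}

func main() {
    // Prints: file sync failed: device Foo unreachable: connect error: quic://123.45.67.89/ i/o timeout
    fmt.Println(syncFiles())
}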
Of course, if a codebase doesn't write descriptive enough errors, or just propagates them without context, error diagnosis is going to be difficult. But that's why Go encourages people to think about errors and not ignore them.
It's a contrived problem, at least. In reality, your code is going to look more like:
func GetUserByID(id string) (string, error) {
    s, err := QueryDatabase(fmt.Sprintf("select * from users where id = %v", id))
    if err != nil {
        // Do something with the error, returning a new error if necessary.
        return "", someNewErr
    }
    return s, nil
}
The callers of GetUserByID have absolutely no concern for the implementation details of QueryDatabase. When requirements change and you replace QueryDatabase with QueryWebService, callers expect to still get the same errors back, not HTTP errors all of a sudden when the code was previously getting MySQL errors. That would be plain horrible API design. So, you wouldn't actually ever encounter this particular issue in practice (assuming nobody hates you).
Sure, the higher-level callers should do that. I love wrapping errors.
But the issue is that the source of the problem is in lower-level code which made the theoretically correct decision to be specific in what it returns and generic in what it accepts. That lower-level code cannot guarantee that its higher-level callers will wrap the errors it returns. So, the lower-level code has to remain generic.
Obviously my example is contrived, but the problem is not. It is allowed, but essentially never safe, to communicate to the compiler the error types your function returns. It's OK to use custom error types; you just can't tell the compiler about them. This is a problem that is so deep in the language it has influenced how error types and functions are designed in the standard library. I'm aware of one production outage related to this problem (yeah yeah, blame bad reviews and bad testing, I get it, it still happened). I've caught it a dozen times in code reviews (especially after that happened).
It's a real problem. People don't write perfect code.
> the theoretically correct decision to be specific in what it returns and generic in what it accepts
Yet another case of that old pitfall. Untyped nil uses the same keyword as typed nil, when they are not equal. It's an unfortunate contradiction within the language.
And it means that the entrenched practice of
err != /* untyped */ nil
is an overly specific check (nil error is harmless whether it contains the type information or not).
Someone, somewhere has thrown at us a principle ("the theoretically correct decision") and someone else has thrown at us a conflicting principle ("err != nil is a universal idiom").
The problem: these are not principles! Both of these are decisions with tradeoffs. Who is making these decisions? Some blog authors or conference speakers? No: the team that owns the code. So, either:
- have a linter that catches *ErrX return declarations automatically,
- or, have a linter that prevents err != nil (generally: interface != nil)
Since the latter is impractical today, one needs to decide the former. Solved, isn't it?
> but essentially never safe to communicate to the compiler the error types your function returns.
It is safe as long as you don't write too much code for no reason. The problem in the example is that DatabaseError needlessly defined an Error method. It serves no purpose other than to allow the issue to arise. Remove said method, which has no reason to exist, and the program will fail to compile.
> It's a real problem. People don't write perfect code.
Okay, sure. It is possible that a programmer may, for whatever reason, leave out entire blocks of logic that the program needs. Imagine not adding billing logic to your storefront software – I'm sure it has happened to someone before! Making interfaces more intuitive does not solve that problem, though.
Why are you complaining that this person wrote a clear and small example that demonstrates what they are talking about? It is painfully rude; a description and a couple lines of code that runs is the gold standard for starting a programming conversation.
Hardly a contrived problem, since I know of at least one occurrence where I had issues in production because of it. It was not error handling that was the problem, but the same behavior that is exposed here.
The problem was that I had a variable of an interface type, and the code was producing different objects implementing that interface. A nil value was also allowed if none of the cases matched, and the variable was then stored in a sync.Map as a value.
The problem was that when retrieving the value, the nil check never matched. Why? Well, its type is defined as the interface even though it doesn't point to any implementation.
I get that the designers might not have considered a different option: it is allowed to invoke a method on a nil object, making what some languages would call static methods. Still, I agree with the grandparent: I think this is Go's biggest flaw, simply because it is so surprising.
Returning *DatabaseError instead of error is a pretty clear programmer mistake, which should be caught and fixed by any reasonable kind of code review.
Short explanation: a nil pointer to a struct that implements an interface is a valid non-nil value of that interface.
---
Long explanation:
In Go, "error" is an interface - i.e. a type that has an "Error() string" method defined on it.
The custom defined struct, "DatabaseError" is not an interface - it's a struct, and it has an "Error() string" defined on it. Therefore, any value of "DatabaseError" (or "*DatabaseError") type fulfills the "error" interface, and can be cast to a non-nil "error". Even the nil pointer to "DatabaseError" - you can call methods on a nil pointer, therefore it's a valid non-nil interface.
The problem in the code is the implicit cast of the "*DatabaseError" value into the "error" interface on line 21, which assumes that a nil pointer is the same as a nil interface. It isn't. The solution is either to 1) return "error" instead of "*DatabaseError" in "QueryDatabase" (https://go.dev/play/p/fb0e_4loDBf), or 2) to explicitly check the return value of "QueryDatabase" for a nil pointer before the cast (https://go.dev/play/p/TgikAk1mSn0).
I prefer the former approach, because even though you're free to write your own "error" interface implementation, you're still supposed to use it through the "error" interface, not the struct pointer directly. An interface is more than just a struct pointer.
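A compact sketch of both fixes side by side, with the same stand-in names (the playground links above have fuller versions):

package main

import "fmt"

type DatabaseError struct{ msg string }

func (e *DatabaseError) Error() string { return e.msg }

// Fix 1: declare the interface in the signature, so a nil pointer is never
// silently converted on the way out.
func QueryDatabase(q string) (string, error) {
    return "row", nil
}

// Fix 2: keep the concrete type, but check the pointer before it ever
// becomes an interface value.
func QueryDatabaseConcrete(q string) (string, *DatabaseError) {
    return "row", nil
}

func GetUser(q string) (string, error) {
    s, dbErr := QueryDatabaseConcrete(q)
    if dbErr != nil { // comparing the pointer, not the interface
        return "", dbErr
    }
    return s, nil
}

func main() {
    if _, err := QueryDatabase("select 1"); err != nil {
        fmt.Println("unexpected:", err)
    }
    if _, err := GetUser("select 1"); err != nil {
        fmt.Println("unexpected:", err)
    }
    fmt.Println("both nil checks behave as expected")
}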
P.S.
As I explained in the other comment, the intention of the original comment is to differentiate between kinds of errors through the type system. That's not the Go way - Go avoids type hierarchies as much as possible. The proper way to differentiate errors in Go is through values: https://go.dev/play/p/ddzhAqRgK_1
Or, if the user insists on having an error type hierarchy, define a custom "Is(target error) bool" method on each custom error type: https://go.dev/play/p/gZmYgOq6wSo
- a typed nil: "I know the type, but there's no value"
- an untyped nil: "I don't know the type and there's no value"
But when you compare these two, they are not equal. Sigh.
(Correcting my wording: "untyped" is not precise enough, because it's "I don't know the type exactly, but I know that it's one of the types that fit X interface". So maybe more like "nonconcrete nil".)
There's no such thing as an "untyped nil" in Go - nil is simply a zero value for "reference types" (i.e. pointers, maps, slices, channels, functions, interfaces). It must be always typed. E.g., the following code:
package main

func main() {
    foo := nil
    _ = foo
}
Raises a compilation error:
./main.go:4:9: use of untyped nil in assignment
It's just that zero value (nil) of interface is not the same as a zero value (nil) of a pointer.
Casting a nil pointer to an interface does not create a nil interface, in the same way that casting a zero integer or an empty string to an interface{} (any) doesn't create a nil interface{} (any).
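You can see the same thing without errors involved at all; a tiny sketch:

package main

import "fmt"

func main() {
    var p *int            // a nil pointer
    var i interface{} = p // the interface now holds (type *int, value nil)
    fmt.Println(p == nil) // true
    fmt.Println(i == nil) // false: the interface itself isn't nil, it carries type info

    var s string // zero value ""
    var j interface{} = s
    fmt.Println(j == nil) // false, for the same reason
}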
I agree that a nil pointer not representing a nil interface value is a problem (that I've been bitten by) but once you know what interfaces are at the language level, it makes sense, at least for me. So I design my APIs with this lack of ergonomics in mind.
My frustration, though it's possibly just how I'm coding, is wrapping every instruction in an error check, so I end up with twenty "if not an error, do the next thing" blocks.
I code solo, so I'm not sure how others do it.
The alternative is to have twenty methods?
I build a data structure, modify its state in a dozen ways and then return it to the caller.
I like Go's error handling. Yes, I know that puts me in a special category of coder.
Reasons:
- It's simple and easy to understand. There's no hidden complexity, no magic, no gotchas. Nothing is going to surprise me about it.
- It's a very readable cadence. Do the thing, check the error, do the thing, check the error, do the thing, check the error. Once you get used to the error checks it's extremely readable.
- Despite all the extra characters the actual mental load of the error checks is tiny. Yes it's more typing, but the hard bit about code is thinking not typing, and it doesn't make more thinking.
- If I have to do something special with the error, it's easy. There's no incentive to handle this error just like the rest, or ignore it and let the exception handler catch it. If this error needs (for example) extra logging then I can just put it in there for this error check, no hassle (see the sketch after this list).
- It's pretty much standard across all Go code. If I have to deal with someone else's code, I can expect the same cadence, the same simplicity, the same readable pattern of error checks. One of the great things about Go is that it is opinionated about stuff like this.
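A small sketch of that "do something special" point, with one check getting extra logging and another just propagating (the function names are stand-ins):

package main

import (
    "fmt"
    "log"
    "os"
)

func validate(data []byte) error {
    if len(data) == 0 {
        return fmt.Errorf("empty input")
    }
    return nil
}

func process(path string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        // This one check gets special treatment: extra logging plus wrapped context.
        log.Printf("reading %s failed: %v", path, err)
        return fmt.Errorf("process %s: %w", path, err)
    }
    // This one just propagates, no ceremony needed.
    if err := validate(data); err != nil {
        return err
    }
    return nil
}

func main() {
    if err := process("missing.conf"); err != nil {
        fmt.Println(err)
    }
}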
The problem is that it's so incredibly inflexible and dated.
Many effectful actions e.g. reading from a file system can have a range of different errors each of which you want to handle differently e.g. out of disk space versus lack of permissions.
It's great that you're treating errors as values. But you need pattern matching and other techniques as they make your code more: (a) readable, (b) safer, (c) simpler and (d) less verbose.
The hilarious thing is that eventually Go is going to get these because there is nothing but upside. And then at that point you're going to wonder how you ever survived without it.
> But you need pattern matching and other techniques...
The go way to do this would be:
switch {
case errors.Is(err, outOfDiskSpaceError):
    // handle out of disk space error
case errors.Is(err, lackOfPermissionsError):
    // handle lack of permissions
...
default:
    // do the equivalent of the `_` case in a scala match statement or `t` in a lisp cond
}
Which is roughly as readable as Scala/Rust's match statements. More so if someone tried to get cute in Scala and bind both the object as a whole and parts of it at the same time (something like `case x @ Type(_, _, y, _)`) or when people get real cute with the unapply method.
I mean, I like Scala. It's fun. But I would never say it's more readable than Go. I've been left to support the things that happen when a team of average-intelligence devs gets a hold of it.
You're also really comparing an MIT and New Jersey style solution here, and judging them both on MIT merits. And I don't think that's exactly a fair argument.
If you're thinking of C/C++ switch statements and everyone that blindly copied them (looking at you Java, JS and PHP), you're right, but Go's switch is much more flexible (https://gobyexample.com/switch).
PHP tried to correct the mistake of not giving some more thought to the switch statement at the beginning by including a new "match" expression in PHP 8 - fun times for everyone who used classes called "Match"...
A switch or case statement is a restricted form of pattern matching for simple values. It fits those use cases, which are frequent.
I use a language that has both pattern matching and case constructs. I use both.
If you're doing complex pattern matching all over the place (especially matching the same sets of cases repeatedly in multiple locations in the code), maybe your design sucks. It's not making effective use of OOP or some other applicable organizational principle.
enum BinaryTree {
    Leaf(int),
    Branch(BinaryTree, BinaryTree)
}

function sum_leaves(tree: BinaryTree) -> int {
    match tree {
        Leaf(leaf) => leaf,
        Branch(left, right) => sum_leaves(left) + sum_leaves(right)
    }
}
(If you're thinking "the first example should be using inheritance + polymorphism", imagine that "BinaryTree" is in a different library than "sum_leaves". If you're now thinking "visitor pattern", sure, go write your hundreds of lines of boilerplate code if you like.)
The first example is less safe because it's filled with invariants: left_child is nil iff right_child is nil, and leaf_value should only be accessed when they're nil. The second example has zero invariants. (You might think there would be an invariant that the children aren't nil, but languages with pattern matching tend to use Optional instead of nil, so that invariant isn't necessary.) If you make mistakes about when you access various fields in the first example, you'll be accessing leaf_value when it's uninitialized, or get a null dereference from one of the pointers.
As for readability, that's in the eye of the beholder, but I find the second example a lot more readable for the same reason: it's clear in both the data definition and the use site which fields exist.
All sorts of details vary across languages, even with a small example like this, but that's the basic differences.
Thanks for the clarification. I get what you're saying, but I wouldn't write it like this - I'd write more code with more checks ;)
In terms of errors, though, it's generally "the result is either a value and no error, or no value and one of these errors". I get how sum types would help with this, and I'm not arguing against that; they would be useful. But the pattern matching basically still has to deal with that outcome, and have a pattern for each error type. It doesn't strike me as being inherently safer, more readable, etc.
> But the pattern matching basically still has to deal with that outcome, and have a pattern for each error type.
Not so! In Rust:
enum FileError {
    FileNotFound,
    FileExplodedWhileOpening,
}

impl Display for FileError { ... } // say how to print the errors

// Result is defined in the standard library.
// It's used to store results that may be successful, or may be an error:
enum Result<Success, Error> {
    Ok(Success),
    Err(Error),
}

fn read_file(path: Path) -> Result<File, FileError> {
    ...
}

fn main() {
    match read_file("kittens.png") {
        Ok(file) => // show kittens on screen
        Err(err) => println!("Could not show kittens :-( \n{}", err),
    }
}
In your first example you don’t need the else clause, and wouldn’t you clear up a lot of the invariants by checking the leaf value (with ‘not a leaf’ appropriately represented) rather than whether there is a left child? Or even representing them with a function call isLeaf.
I agree that sum types are lovely and that pattern matching makes them nice to work with but I don’t think you really make the case well here that it’d be superior rather than just personal preference.
> In your first example you don’t need the else clause, and wouldn’t you clear up a lot of the invariants by checking the leaf value (with ‘not a leaf’ appropriately represented) rather than whether there is a left child?
I'm not sure what you mean by "appropriately represented". What's an appropriate representation of "not present" if a leaf is allowed to be any int?
> Or even representing them with a function call isLeaf.
Let's see how it looks with an isLeaf() function:
struct BinaryTree {
    leaf_value: int,
    left_child: BinaryTree,
    right_child: BinaryTree
}

function sum_leaves(tree: BinaryTree) -> int {
    if tree.is_leaf() {
        return tree.leaf_value;
    } else {
        return sum_leaves(tree.left_child) + sum_leaves(tree.right_child);
    }
}
It still has an "else" clause, and it's still full of invariants. Not sure how this is supposed to be much better?
> I don’t think you really make the case well here that it’d be superior rather than just personal preference.
Increased type safety is not personal preference! Here's the list of errors that are easy to make in the first example, and literally impossible in the second:
- accessing `leaf_value` when it's not set
- accessing `left_child` when it's nil
- accessing `right_child` when it's nil
- setting exactly one of `left_child`, `right_child` to nil
- setting `leaf_value` when `left_child` or `right_child` is nil
(You might imagine that the "setting" mistakes are possible in the second example too. If Go merely gained pattern matching, this would be the case. Most languages that were born with pattern matching, though, don't have default/uninitialized values for everything, and so don't let you make those mistakes. I.e., you cannot construct a BinaryTree in Rust without choosing a value for the leaf or for the two branches when you do.)
> Increased type safety is not personal preference!
Sure it is! People literally make this choice all the time. If the leaf can be any int then I’d suggest a value that determines whether the node is a branch or a leaf. Which is (drum roll) how tagged unions work anyway.
You still don’t need the else clause with the early return.
Just to be clear the sum type/pattern matching is nice! But it’s perfectly acceptable to live without it and the world won’t end.
Pattern matching in Erlang is a brilliant feature. You get concise branching and binding, and if a pattern match fails the native error handling (dramatically less verbose than Go) takes care of things for you (mostly).
Much like other FP features, shoehorning pattern matching into a language doesn't give you nearly the same advantages as building a language around it, so I don't know that it would make Go significantly better.
I find it really, really hard to work out which pattern is matching when debugging Erlang. It might be more concise but it's massively less readable (and less amenable to reasoning out what might be going wrong). Especially in older code bases that have had a few people work on them.
> - Despite all the extra characters the actual mental load of the error checks is tiny. Yes it's more typing, but the hard bit about code is thinking not typing, and it doesn't make more thinking.
The hard part is understanding the existing code. The more cluttered and verbose the code is, the harder that is. Go's boilerplate `if err != nil { return nil, err }` becomes something that your eyes just skim over - which is fine right up until you have some code that's doing something similar but not the same, and you don't even notice.
I find that the differences leap out at me. Even just an `err == nil` instead of `err != nil` is noticeable.
I do have to spend a second reading the action if it's not just `return result, fmt.Errorf("failed to do the thing: %w", err)` but that's good, I think.
And all of this is way easier than trying to trace up through the stack to the nearest exception handler and work out what it will do with the error
edit: also, verbosity doesn't make code harder to understand, imho. If anything the other way around. Packing 5 statements into a single line is massively harder to read than separating those same 5 statements into 20 lines with error handlers.
> And all of this is way easier than trying to trace up through the stack to the nearest exception handler and work out what it will do with the error
You still have to do that part though? Like, this function returns err, so the caller returns err, so the caller of that returns err, ... - you've still got to walk up the stack to the point where the error is actually dealt with.
> edit: also, verbosity doesn't make code harder to understand, imho. If anything the other way around. Packing 5 statements into a single line is massively harder to read than separating those same 5 statements into 20 lines with error handlers.
Very much not my experience. There's a huge understandability hit when a function doesn't fit on a single screen and you have to scroll, so vertical space is really precious.
> you've still got to walk up the stack to the point where the error is actually dealt with.
This is why we wrap errors. The error message gives a pretty good indication of what the stack was doing when it went wrong.
I had a junior dev work with me on some JS. I was writing it in functional style because it made sense at the time. He was really struggling, so I refactored it to old-school imperative and he understood it and was able to work with it. It might have been an issue with the way he was taught JS, but I think it's more that tightly-packed concise code is actually harder to parse. Not least because you have to understand the whole thing to work out wtf it's doing. Whereas with one-statement-per-line you can scan down to the lines you're interested in and focus on those.
Go's boilerplate `if err != nil { return nil, err }` becomes something that your eyes just skim over
OTOH, if you make sure you do something like
if err != nil {
    return fmt.Errorf("what I was doing when the error happened: %v", err)
}
then you get very precise targeted errors appearing in logs that are much easier to track down and fix than either just returning the error or the usual generic catch block found in other languages.
IME properly handled Go errors make for much more maintainable code.
The problem (as alluded to above) is that stack traces give you no context about what went wrong, only where it happened and the (possibly cryptic) underlying error.
IMO the real value in proper error handling (ie what makes fixing errors easy) is that context, not the precise line # of the error which, though useful, is supplementary data. Rust errors with anyhow::Contexts are far more workable than those without.
Edit: I actually wrote a simple Result handling package for Go that includes error context and stacktraces: https://github.com/kitd/chock
Except no-one does this in practice. There's a half-assed exception handler at the top level that catches everything and that's all. At least in most of the code bases I've seen. Go encourages you to handle the error at the point of calling, which is good.
But in practice people don't do that in Go either, they just add a bunch of if error != nil lines, so you end up with no more information about what went wrong (in fact less information than you'd get with an exception, since you don't even get a stack trace) and just have more clutter in your code.
It's not the `if err != nil` that's the problem, it's the bare `return nil, err` that follows it.
As I mention above, return instead the error with some identifying information and context, and you get highly targeted details needed to locate and fix the problem. And the way Go handles this makes it simpler than doing it via exception handling.
Strong agree. Stack traces are great and all (seriously) but a chain of contextual error messages in developer friendly language is great for understanding what probably went wrong from the error alone.
- Simple, but with lots of duplicated code. Golang took DRY overuse and turned it on its head into duplicate-everything-everywhere
- It's duplicated everywhere, which means I need to mentally figure out whether I need to follow this path or ignore it. The mental load of typing is small; the load of reading is not
Golang has a lot of good things about it. This is not one of them; it's a wart on the language that is tolerated because the genesis of the language was to be an entirely inverted approach to verbosity compared to Java. It's not something to be praised.
The mental load of reading a lot of simple code is less than reading a small amount of complex code, I find.
It's easy to look at a 50-line function with one statement every 5 lines and find the bit I'm interested in, because it's easier to screen out the bits I'm not interested in. Rather than unentangling a 5-line function that has 10 statements in it, because I have to work out what all of it does in order to understand it and I can't focus in on the bit I'm interested in.
The problem with lots of simple code is now there’s 5,000 slightly different ways of doing the same thing. You end up trying to wrangle it with a mess of code generation and linters.
Yeah, this is something I think feels like a bigger problem than it actually is. If I change the way I'm handling something, it is a lot of typing. But it's just typing. No big deal.
But the same is true with the concise code. Only with that you have to unentangle each chain to work out if it needs to be refactored or not, and that's hard.
You’re talking about code that’s been reduced from 10 lines to like a one-liner; whether it becomes more understandable or not (and whether to keep it as such) is a matter of taste.
But not copying and pasting code by merging common logic is a matter of reducing the number of times you can fuck up, and reducing the number of “accidentally/unnecessarily special-cased” scenarios. You’re trying to reduce the amount of information needed to understand the codebase. The former is preserving the information, it’s just writing it more densely.
> It's simple and easy to understand. There's no hidden complexity, no magic, no gotchas. Nothing is going to surprise me about it
There absolutely are gotchas, exactly because they are not sum types. There are functions where both “slots” carry meaningful return values even when an error occurred.
> It's a very readable cadence. Do the thing, check the error, do the thing
Arguably, you can’t reasonably handle most errors in place; you just don’t have enough context for that. Also, you want to get your business logic right - all those verbose, often incorrect/naive error handlers will just make it harder to read your own logic. Also, it’s very easy to accidentally swallow an error - exceptions/sum types are much better in this regard, since you can’t not care about them.
The alternative is to... not have errors I suppose. It does sound like the function you're describing is your core business logic, e.g. load from db, transform data, send somewhere else, etc, each of which CAN go wrong.
One thing you can do is go to the Go slack space at https://invite.slack.golangbridge.org/ and ask for a review in the #reviews channel; 20 error checks is a lot.
I love Rust but it is accidentally in the position of trying to serve two disparate camps of people. Developers who just want a modern ML-ish language with good tooling and some actual lessons learned from PL theory, and developers who need near-total control over the hardware but are tired of working with C & C++ and manually solving decades-old problems with memory safety. The former are a very large audience, but have to deal with requirements imposed by the latter which are irrelevant for their use case.
This a million times. I love working with Rust because it's - from my point of view - an ML language disguised in C-syntax (with a package manager and unit testing etc. etc). Most of the time I'd be served well by a GC version of Rust.
Same. I wished for this many times. I don't need incredibly detailed on-the-metal precision and most of the time, I do not want to think about stack, heap or lifetime parameters. I enjoy the language for its syntax.
Aside from the other reply which points out that common idioms have useful shorthands like ?, Rust's type system is also error-aware. Rust errors are actual sum types (either Ok(T) or Err(E)), while Go just uses multiple return/tuples to return (value, nil) or (nil, error). This means that in Rust, you are forced to handle errors (well, unless you don't care about the return value, but then the compiler still gives a warning). This means it is harder to make a mistake with error checking.
It also prevents weird anti-patterns like some functions in Go that return a significant value in the first return value while also returning a non-nil error, with a snarky comment in the docs about why this actually makes sense in this case.
I don't consider this an anti-pattern, but part of the flexibility gained by Go's lightweight approach to error handling. I believe Go was even designed with this use case in mind (the standard library uses it in many places, and I don't consider their documentation about it "snarky").
Sometimes you want to return a partial result along with an error. Go's idiom of returning multiple values, with the final value being an error, allows this situation to be easily supported.
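io.Copy is a good example of this: the byte count it returns is still meaningful when an error comes back. A sketch:

package main

import (
    "fmt"
    "io"
    "os"
    "strings"
)

func main() {
    src := strings.NewReader("some payload")
    n, err := io.Copy(os.Stdout, src)
    if err != nil {
        // n still reports how much was written before the failure,
        // which matters if you want to resume or report progress.
        fmt.Fprintf(os.Stderr, "\ncopied %d bytes, then failed: %v\n", n, err)
        return
    }
    fmt.Fprintf(os.Stderr, "\ncopied all %d bytes\n", n)
}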
> Sometimes you want to return a partial result along with an error.
Sometimes you do, and there are types that represents that. Having a single type that is almost always used a certain way but occasionally used subtly differently is a trap waiting to bite you.
I've been using Go for almost a decade, and I don't recall this ever biting me. It doesn't feel like a trap.
Go has a convention of documenting each function with a comment (which is adhered to by the standard library, my own code, and any other code I'd consider worthy of depending on).
So when I think this subtlety matters, I check the documentation. Usually I don't care either way: When I get a non-nil error, I typically don't care about the other result (whether partial or the zero value).
The distinction rarely matters in practice (and when it does, the documentation is there). I think this simplicity is the right tradeoff (vs. being burdened with more types to think about).
How is a type more of a burden than documentation? You're saying that consistent documentation is a good thing, but types enforce more consistency and integrate better with tools.
Documentation has immense value beyond just this specific error situation. It's a burden worth taking on regardless of how errors are handled.
So it's not that a type is more of a burden than documentation, but rather that a type is an additional burden (since we're taking on the documentation burden either way).
I like that Go's error handling is simple enough that I can keep all its rules in my head. And I like that the other parts of Go are simple like that too. It allows me to easily know exactly what is going on at the language level (while my attention is focused on higher levels).
The more you can shift into types the less you need to put in ad-hoc documentation, so it's not an extra burden. Conversely, if you "keep the language level rules simple" by shifting more stuff from the types into the documentation, that's not making it easier to think about IME - the more you can shift into the lower levels the better, so the documentation can focus on the higher-level things you weren't able to encode in the types.
I’ve been bitten by it in production. Whenever I see this partial result pattern, it’s always a function that USUALLY returns nil plus an error. In one special case, it returns a partial result plus an error. This is a nil pointer panic waiting to happen.
If I get a result that came with a non-nil error, and I still plan to do something with the result, then it seems natural to think about the possibility that the result might be nil. I would consult the function's documentation, and maybe its source, and then maybe put in a nil check.
This is too rare a problem to be worth adding complexity to the language. There needs to be a big payoff for adding complexity. A language gets hard to use if it adds micro-complexities all over the place for the sake of preventing rare programming mistakes.
Except in the case I'm talking about, there was a resource to be reclaimed, the GC didn't cover it.
And if you wanted to do that in rust, that's a valid thing too, either as using a tuple instead of a sum type, or a tuple in the Err side of a result. You can describe what you're trying to do simply with the function signature.
Rust is actually largely error-unaware. It does have some syntactic sugar (mostly `?`, and even then that’s not restricted to errors), but for the most part Result is just an enum with a `must_use` annotation; everything flows down from that.
The question-mark shorthand (?) basically does what the blog post is describing, and makes a lot of this stuff straightforward.
There _is_ some futzing that sometimes has to happen due to various operations' Result types using incompatible Error types. That's just a thing that has to be dealt with.
Being able to define your own app-specific Error type which everything gets converted to is incredibly useful and powerful. Especially for web apps where you can return the correct response codes depending on the error.
Yeah, but that futzing typically happens separately near the "outer" error type's declaration - and when it doesn't, a concise .map_err() often does the job.
It's also often taken care of by a short thiserror macro invocation per "inner" type. There are obviously more complex error setups, but this covers the vast majority IME
I get why it's like this but I have often found that most of the time code ends up bailing by using a conversion into `String` quite quickly, leading to everything being stringified anyways.
I get why this is, but it does make me miss the Python Exception model of "there's ~15 base exception types. One of them is probably good enough for you". One could point out that the arguments are usually "just" strings there too, but at least there's some conventions.
I understand Rust's philosophy, I just find it annoying.
I haven't experienced that at all. Any of the codebases I've worked in - both proprietary and open-source - have converted errors to strings when it becomes necessary: When presenting it to a user, whether that means logging or placing it in an HTTP response body, etc.
I find Rust's question mark shorthand and the resulting futzing you're talking about adds a lot more cognitive load for me than Go's verbose if err conventions. I think more than anything this kind of syntactical discussion is just very subjective.
It's definitely subjective! I think the `?` stuff in itself is pretty straightforward for a DSL-like behavior. The futzing from when you're mixing actions from multiple codebases is unfun.
Rust is definitely a near-contemporary of Go that made the sum-type-and-pattern-matching choice, yes, though outside of that difference the focus of the languages is very different, so it might not always be the right choice where Go would be, error handling ergonomics aside.
I like having errors as values, as is the case in Rust, and Haskell, and idiomatic Scala.
What I don't like is when approximately 3/4 of the significant lines of code are endless repetitions of:
if err != nil {
    return nil, err
}
Rust has `?`, Haskell has do notation, Scala has for. All of them have higher-order functions for operating on the result. But Go is not only more verbose, it is more error-prone, since it is easy to forget to check an error, or to use the other returned value before checking the error.
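For instance, something like this compiles without complaint even though the value is used before the error is checked (a sketch; bufio just happens to tolerate the nil reader until you actually read from it):

package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    f, err := os.Open("does-not-exist.txt")
    r := bufio.NewReader(f) // compiles fine, but f is nil if Open failed
    if err != nil {
        fmt.Println("oops, we used f before checking:", err)
        return
    }
    defer f.Close()
    _ = r
}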
> approximately 3/4 of the significant lines of code are endless repetitions of:
if err != nil { return nil, err }
3/4? How are you managing to have so many cases of blindly passing an error up the stack without introducing problematic coupling?
I would suggest that if you are able to realistically do this more than a couple of times total in an application, you have introduced way too much pointless indirection and should take a closer look at your overall design. Something is amiss.
And that goes for any language – not something exclusive to Go. Blindly propagating an error up the stack using exception handlers, for example, is prone to the same problematic coupling and indicates the same design problems if seen in more than rare situations.
For a gRPC/HTTP request that calls another microservice and writes to a database, you've got:
* Handler maps the wire format message to an internal entity. Return a bad-request type error if it fails.
* Handler calls controller. Perhaps some specific error conditions get dedicated status codes, the rest get 5XX.
* Controller calls gateway. Except in rare cases where the external call is optional, you probably just bubble this to the handler.
* Gateway maps internal entity to wire protocol format request. Sometimes this transformation can have errors; bubble these up.
* Gateway calls wire protocol client. Wire protocol client may have built-in retries, or gateway may have application-level retries. In any case, if retries are exhausted, bubble this up to the controller.
* Gateway maps wire protocol response to internal entity. Depending on the schema, this can often fail to be well-formed, even if the request is "successful." These errors also need to be reported to the controller.
* If everything is successful up to this point, controller calls repository. Bubble errors to handler.
* Repository maps internal entity to storage model. Sometimes this transformation is also fallible.
* Repository calls storage client. These errors might be retryable but after retries, need bubbling up.
* Finally, storage client returns successful value to repository returns to controller returns to handler returns to wire protocol server.
This is just a hello-world level microservice. We have thousands of them, with easily a dozen endpoints each and probably 5+ interactions per endpoint on average. And oh yeah, every single one of these error return sites needs a unit test case.
In my experience there are usually only a few places in an application that actually care about errors. You typically have something near the top level that reports errors to the user or client, whether that is printing to stderr, showing a dialog, or returning an error response over the network. You might have something that logs errors. That might be in the same place you report to the user, or it might happen where errors originate. At some boundaries, especially between a library and the calling application it might wrap the original error in a different error. If you have retries, that is probably handled by a framework that in turn returns errors if the retries fail. And maybe in some rare cases you can gracefully degrade or try an alternative in the case of an error. Everywhere else, you probably just want to propagate it up the top level error reporting facility.
I don't think you should blindly pass up all errors, but IME propagating upwards is usually the right thing to do in your middle layers.
Mate, the code in question also simply propagates errors up the stack. I have seen this all the time in pretty much every program. There are only a few places where exceptions are actually handled; most of the time, they are truly exceptional. What is anyone going to do when printf/malloc/read/open/close fails?
If I am using a library, I pretty much expect each and every one of its APIs to return a Result<T> or Option<T>. Those that don't either do something funny to hide the bad states, or simply crash and die, which means I have to do some double-checking on my end.
> What is anyone going to do when printf/malloc/read/open/close fails?
As a sysadmin, I would love for the code to log/print the "#€%"#€%#€ filename when open() or read() fails, and not just bubble up some generic "something went wrong, fix something" and have me dive into strace/truss/ktrace just to know that /home/foo/badperms.txt could not be opened.
For some reason, all these wrapper libraries and frameworks and stuff are super good at hiding things for which we used to get decent errors, like "could not open tcp port 443" or "file: ./badperms.txt open() failed", but as the layers stacked on top of each other more and more, the code calling "set-up-totaly-secure-sending-of-file-to-remote-http-endpoint-and-renew-LE-cert-if-needed()" has so many moving parts that the program can only say "worked perfectly" or "dang, no one in the world knows what went wrong, try again tomorrow, worked on my laptop once before deploy".
So while it is not "fun" to handle all these particular errors one by one, when we stop fussing about details we make someone's life miserable when the filesystem goes full/over quota or when networks/firewalls hinder traffic, if we can't even tell the user which of those two occurred because it would be "tedious" to pay attention to so much detail when all I wanted was my program to be short and sweet.
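What I want, in Go terms, is for each layer to wrap rather than swallow - os.Open already puts the filename in its error, so a sketch like this keeps it all the way up:

package main

import (
    "fmt"
    "os"
)

func loadConfig(path string) error {
    f, err := os.Open(path)
    if err != nil {
        // Keeps the underlying "open /home/foo/badperms.txt: permission denied"
        // instead of flattening it into "something went wrong".
        return fmt.Errorf("load config: %w", err)
    }
    defer f.Close()
    return nil
}

func main() {
    if err := loadConfig("/home/foo/badperms.txt"); err != nil {
        fmt.Println(err)
    }
}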
Most codebases would not justify the use of 'return nil, err' at all. The language certainly isn't going to stop you, but you are going to make life miserable for future developers if you do so.
There are rare circumstances where it is the right thing, but if you are seeing more than one or two instances in a substantial codebase, something isn't right. If it is a common occurrence, and you are not purposefully trying to demonstrate your hatred of future developers, you've no doubt introduced way too much unnecessary complexity – which too is going to make life miserable for future developers.
I would love to see some of the examples you cite. This pattern is prevalent in all the Go code I've seen, whether in libraries or at work. And I've only been working in the language full-time for three months.
I would love to have the original question answered. Perhaps you (or the people who have contributed the code of which you speak) have stumbled upon some solution to the coupling problem, in which case I would then understand the choice.
Frankly, the entire Go community is no doubt keen to hear it. The 'try' proposal fell apart because nobody could figure out a good solution to that problem at the time, and could not find justification for a whole new feature for rare occasions. If `if err != nil { return nil, err }` were to actually become tenable in most cases then said proposal could be revived based on your information. It was otherwise well received.
It would be wonderful to see your non-coupled design in which middle layers handle errors without knowing about the top-level context and without bubbling them up. It already sounds like a failure.
I must be misunderstanding, but your request does, indeed, sound like a failure – and not at all related to the discussion taking place. Perhaps you could explain how "middle layers" and "top context" fit?
"Handling" an error, without propagating it to the caller, can only mean deciding that the thing attempted didn't really need to happen.
This situation is extremely rare: if it's appropriate to swallow an error, you may as well not even attempt the thing. We don't normally write "opportunistic" functionality into our software. But when we do, it's the jurisdiction of the caller to say that it doesn't matter if the called function fails. It's almost never appropriate for the callee to decide to eat its own failures.
I mean, yeah that can happen if the functions you call can error, but this would also happen in any other language. It's just that it's often hidden in the form of unchecked exceptions, or all handled at once in catch-all blocks. Errors don't go away if you use a different language.
I'd be surprised if you're in a minority -- maybe here that's true but most Go developers I know and work with are pretty content with the state of affairs regarding error handling.
> Panics : Weird because golang already had errors.
It is true that panic allows any value to propagate, so technically you can use it to carry errors, along with anything else you can imagine (names, email addresses, audio, whatever).
But the intent is for it to be used for communicating exceptions. You will notice panic's behaviour mirrors exception handling systems found in some other popular languages. But errors, along with names and email addresses for that matter, are decidedly not exceptional.
Using panics is discouraged though, or reserved for when you intend to have the whole app crash immediately, which is rarely the case. They're there, but you shouldn't use them in practice.
I wear two hats when I develop in go. One is for robust long-lived systems and for those I don't mind verbose error handling. The other is small quick scripts and for those I really wish I had a non-verbose option.
You have the option to explicitly handle every error, not handle errors for certain methods, or bubble up errors to a single error handler or any combination thereof.
NullPointerExceptions don't really make sense as checked exceptions because almost every method has some exposure to null values. But the idea is that encapsulating methods can check for those and translate them into other exception types, or just let them be handled by a global exception handler.
Java is not the best for error handling by any stretch. But it's easily better than Go.
Unless you just always return it up the chain without thinking about it like the bulk of the code I've encountered. No more thoughtful than "catch and reraise" or "just throw."
And like in Java, an NPE ain't getting caught by the error return of a function in Go either.
Even if you ignore this and assume nobody ever created an editor macro to smash out the boilerplate, Go has several glaring holes which make it very possible (and reasonably easy) to miss errors entirely. And unlike Java, those errors are just gone; they won't be breaking noisily eventually.
A few years ago there was an effort to improve error handling, but after several months of back and forth, various concepts, etc., the consensus was that the existing error handling wasn't too bad, and/or the proposed alternatives weren't significantly better.
Due to the nature of my work I'm often in a position to write, deploy and then maintain systems that are running my code and generally have to play the part of top tier support for the entire stack.
The more that this has become my reality, the more I care about actually producing worthwhile errors. I'm not at all bothered that 70% of my code is error case management with some liberal sprinklings of context into those values.
At this point, I see straight through them in my code, to the point where when I'm actually confronted with an error, bug or emergent system case, I'm exceptionally pleased that I put so much effort into managing these errors correctly and not just punting them up through a common and often cryptic error handler, with most of the useful context about the detailed error environment now missing.
This is exactly how I feel about Go error handling (which I prefer to other languages).
Error checks become a syntactic formalism that are easy and quick to type, easy and quick for the eye to scan, but have just enough presence to make sure you think through your error situation whenever and wherever you need to.
Go gives me a confidence in my code's error handling that is harder to get in other languages I've used, where the error situation is typically muddier.
You are right. I think what I don't like is that using this operator stops you being able to use the error as a value in the given context. I would rather always have the if block there, screaming at me, ready to be addressed. But certainly YMMV.
Seems like the old and new syntax would still be there and the rewrite from one to the other is easy.
Actually, maybe the IDE could (optionally) display it this way, so it’s not even a rewrite. The downside would be possible confusion when using another tool (like doing a diff).
It's still not as good as having Either/Try monads available to avoid the "if err != nil" madness, but having gone back to Java lately and the issues we have with exceptions there, having errors as values instead of a magical control flow feature makes the code easier to understand.
Now that I can make sure I include stack traces on them, so it's easy to find where they originated, I have the best of exceptions in place as well.
Being able to wrap the error with context is the key. If done right, you can identify exactly what caused the error without any additional infrastructure. (For people who say stack traces do this, they don't. They don't tell you which iteration of the for loop you're on. They don't tell you what the code on the other side of the channel you read the error from was doing.)
My rule for errors is to add only information that the caller isn't aware of. So don't do:
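Something like this (a made-up sketch; db.QueryFoo and Foo are hypothetical):

func GetFooByID(id int) (*Foo, error) {
    foo, err := db.QueryFoo(id)
    if err != nil {
        // Don't: the caller already knows the id and the name of this function.
        return nil, fmt.Errorf("GetFooByID(%d): could not get foo: %w", id, err)
    }
    return foo, nil
}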
The caller already knows the id it's looking up, and that the name of the function is GetFooByID. It might include those in its wrapping of that error, but GetFooByID shouldn't.
Now your logs look something like "server failed to startup error=apply migration: upgrade foos: UpgradeFoo(42): i/o timeout" instead of "server failed to startup error=i/o timeout". This saves you from "hmm, maybe it's too slow to get all the foos in a batch like that, we should change that". But nope, it's actually UpgradeFoo(42) that's broken.
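Roughly how that chain gets built, as a sketch (applyMigrations, upgradeFoos and UpgradeFoo are hypothetical names chosen to match the log line; UpgradeFoo is assumed to do the actual I/O):

func applyMigrations(ids []int) error {
    if err := upgradeFoos(ids); err != nil {
        return fmt.Errorf("apply migration: %w", err)
    }
    return nil
}

func upgradeFoos(ids []int) error {
    for _, id := range ids {
        if err := UpgradeFoo(id); err != nil {
            // The caller adds the callee's name and argument, which it knows.
            return fmt.Errorf("upgrade foos: UpgradeFoo(%d): %w", id, err)
        }
    }
    return nil
}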
Wrapping applied well is my favorite feature of Go. Most people don't do it. The standard library doesn't follow my rules. But if you do it this way, every error you see is so easy to debug and resolve. Less downtime, more reliability.
(As an aside, that loop where you accumulate errors can easily accumulate into a multi-error as of recent versions of Go. I always prefer to try everything possible and return all the errors, then you can fix multiple problems with your input on one go. It's also good for cases where you are doing a main operation and an ancillary operation, like closing something. People often ignore errors on "Close" and "Sync", but by joining those into a multierror, then you no longer have questions like "there were no errors, but this file isn't on disk". https://pkg.go.dev/go.uber.org/multierr#hdr-Deferred_Functio... is a really nice approach for the common case of "defer fh.Close()". I have an `errors.Close` wrapper I use: https://github.com/pachyderm/pachyderm/blob/master/src/inter.... Can't live without it!)
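A minimal sketch of the deferred-Close half of that, using the standard library's errors.Join (Go 1.20+); writeAll is a made-up example and needs the errors and os imports:

func writeAll(path string, data []byte) (retErr error) {
    fh, err := os.Create(path)
    if err != nil {
        return err
    }
    defer func() {
        // Join the Close error with whatever happened before it, so
        // "there were no errors, but this file isn't on disk" can't happen silently.
        retErr = errors.Join(retErr, fh.Close())
    }()
    _, err = fh.Write(data)
    return err
}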
I think solving the context problem is key. I've seen a few proposals for go error handling that address the verbosity while also allowing you to provide proper local context in a terse and readable way. I'm hoping something like that makes its way into go eventually.
I've seen proposals that were very similar. I think the problem with this approach is that it doesn't really make error handling that much easier. Typing "catch err" is not that different from typing "if err != nil".
In Go, if a function returns only an error, you can do
if err := foo(); err != nil {
// handle error
}
However, this only works in that particular case, if foo() also returns a value you can no longer use this because the value (and err) are scoped to the if block.
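For example, assuming a hypothetical bar() that returns a value and an error:

if v, err := bar(); err != nil {
    return err
}
fmt.Println(v) // compile error: undefined: v - the value didn't survive the if statement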
this is the sort of silliness that is common in the Go world. as noted elsewhere, errors are values in lots of languages - Rust, C, Scala, Haskell, etc etc, but Go explicitly has no way to handle them nicely, no specific syntax and no fancy type system stuff like sum types.
it is my very strong belief that this will eventually be fixed in Go and when it does, almost all the people currently saying "I like errors being values [and it's fine that Go makes it very annoying]" will quickly prefer having some actual language help for these values.
I don't think the Go community disagrees that it would be nice to make error handling a bit easier, because that was the top issue raised in the recent developer survey. The problem is that, judging by the feedback they received, the proposals made so far did not actually make error handling that much better.
The author has reinvented the ?-operator from Rust and Ruby, but for Go.
Here's the same function in Rust:
fn decomp(filename: &Path) -> Result<Vec<u8>, io::Error> {
    let fd = File::open(filename)?; // File is automatically closed by its destructor.
    let zd = GzDecoder::new(fd); // flate2::read::GzDecoder::new does not return an error.
    let mut data = Vec::new(); // Rust makes the caller allocate the buffer for reads.
    zd.read_to_end(&mut data)?;
    Ok(data)
}
I think this is great. From a reader's perspective, it can dramatically improve readability in a lot of functions. From a writer's perspective, these kind of solutions make it easier to compose expressions without interleaving if-statements after every other line.
Yes, sometimes you want to add context to your errors instead of using this syntax. But you can always use verbose syntax when it's needed, and terse syntax when it's not.
> Rust makes the caller allocate the buffer for reads.
I don't think this is true in this context. The buffer initially will have a capacity of 0, and will grow to fit the available data, so that as read_to_end is inserting data, the buffer will be resized until all the data fits.
However, if we had preallocated the buffer, or were reusing an existing buffer, then the buffer would only be grown if the data being read was too large for it. In addition, there are other functions that will never resize the buffer, and read only until the buffer is filled.
Perhaps a better way of phrasing this is that Rust lets the caller control where the data will be written to.
The only argument I can think of against this would be that it would maybe slightly discourage adding the verbose details and would generally decrease the quality of error messages slightly. That said, even so it's probably worth the trade.
Errors are composable so this isn't such a problem in practice. Most of the prod code I wrote would do something like this
use thiserror::Error;

#[derive(Error, Debug)]
enum Error {
    #[error("Could not open given decomp file: {0}")]
    FileOpen(#[from] std::io::Error),
    #[error("Compressed read error: {0}")]
    CompressedRead(#[from] gz::Error),
}

fn decomp(filename: &Path) -> Result<Vec<u8>, Error> {
    let fd = File::open(filename)?; // File is automatically closed by its destructor.
    let zd = GzDecoder::new(fd); // flate2::read::GzDecoder::new does not return an error.
    let mut data = Vec::new(); // Rust makes the caller allocate the buffer for reads.
    zd.read_to_end(&mut data)?;
    Ok(data)
}
Huh, can you explain that a bit more for a rust noob like myself?
1. How does it know how to create your Error enum? I guess it's from the #[from]?
2. What happens if your method tries to return something that's not an io::Error or a gz::Error? I guess the compiler catches that?
3. How would you handle doing this for multiple methods in the same file? Would you rename your enum to DecompError or something to avoid conflicts?
> How does it know how to create your Error enum? I guess it's from the #[from]?
#[from] is just a convenience library feature, in reality it’s because of the From conversion trait which ? invokes on the way out. Essentially it calls ReturnType::from(ValueType) to bridge the two.
> What happens if your method tries to return something that's not an io::Error or a gz::Error? I guess the compiler catches that?
If there is no available conversion to the return error type, compilation fails.
> How would you handle doing this for multiple methods in the same file? Would you rename your enum to DecompError or something to avoid conflicts?
That is an option, although the slightly sad truth is libraries usually have a single big error type and every function returns that.
Convenient fine grained errors in rust remains unsolved, as far as I know. You can do it but it’s a lot of manual work.
People are overly dramatic when it comes to typing err != nil. I’m not going to say it’s a holy grail in error checking. But I have yet to be convinced on the benefit of exceptions vs. the hell hole of complexity they bring.
They stuck to their guns, took what C did, and made it 1000x better. And in the same sweep made a language that is dead simple to read and code review.
It's not that it's hard to type. It's that in Go, if you make a mistake with your error checking, your program has a bug. In better languages like Rust and Haskell, if you make the same kind of mistake with error checking, your program doesn't compile.
And Go isn't even close to 1000x better than C. It makes almost all of the same mistakes that C did (especially the billion dollar mistake), despite being new enough that it should have learned from them.
I like both Haskell and Rust (in theory; haven't written anything nontrivial in either). But they both seem to encourage you to spend a lot of time and energy on the meta aspects of programming. You can build safe, beautiful systems, but you can also happily spend all day (all week? all month?) playing various forms of code golf without actually producing anything.
Go, like C, is a very get-it-done language. It doesn't let you have any fun at all with abstractions, so you end up just doing your work instead. In spite of its warts, I think Go is remarkable and unique for this quality.
It's about how hard it is to make a mistake in well-intentioned code. The number of times my fingers typed "if err != nil" when what I actually meant was "if err == nil" is too damn high, and the consequences often dire. I usually prefer Go to Rust (and even defend Go's error handling most of the time), but that's the one aspect that Rust handles 1000x better: the common case must be easier, so that you don't get as easily tripped up by the uncommon.
Rust's type system mandates it. If you disagree, then post a Rust program that uses the result of a call to std::fs::read_to_string without specifying how to handle an error.
Sometimes you wrap to add additional context. Sometimes you wrap to get a generic error type into a package-specific error type. Sometimes you wrap to get, idk, some kind of project-wide HTTP-oriented error type. Error wrapping is the pattern in Go, which is very different from exception-oriented languages.
I like this article's syntax for a straightforward "throw error" situation. I wouldn't support its addition, but I wouldn't oppose it either. However, I struggle to imagine a more concise Go-ish syntax I like which supports a "wrap and throw" type situation. Maybe something like:
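(A rough sketch of the hypothetical syntax, spelled out in the next paragraph; Thing and the wrapping message are placeholders:)

v := Thing()!                               // bubble the non-nil error up, unwrapped
v := Thing()! fmt.Errorf("I died: %w", err) // err is implicitly bound; the new error bubbles up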
Phrased in English: the bang operator can follow any statement that resolves to a multi-value function return whose last value is an error type, in a function body whose own last return value is an error type. If that statement yields a non-nil error as its last value, then: if nothing follows the bang, that non-nil error is bubbled up with no wrapping; if a statement follows the bang, that statement gains an implicit `err` value containing the error the LHS statement resolved to, and it can produce a new error value which then gets bubbled up.
But, again, I don't love or even really like this; it's just the best I can come up with. It's not that much shorter than writing it out. It's not obvious how it should behave in the presence of an outer-scoped variable named `err`. It's not obvious how the bubble-up should handle the other non-error return values (zero values, I guess?).
One thing I rather like about Go is: if you're catching and re-throwing wrapped errors, which is a pattern I like, it's actually far more concise than exception-oriented languages. The same thing in JS?
let v;
try {
  v = Thing()
} catch (err) {
  throw new Error(`I died: ${err}`);
}
So you rarely do that. But in Go, it's barely harder to wrap and re-throw than it is to just directly throw, so people do it more often.
To be fair, you wouldn't do that in JS because exceptions have a stack trace, so you don't need to add the context on every return.
The downside is that since you don't "need" to catch your exceptions, you don't think about them. That's one thing I really like with Go, errors are in my face all the time, so I have to think about them. Even if I write the infamous
if err != nil {
return err
}
at least I do it consciously. And when I or my colleagues read it 2 months later, we know from reading that this can return an error, and we see clearly how it was handled, so we can think about it and visually see if it was handled correctly.
Reviewing and reading code with exceptions is a nightmare, because you have no idea if errors were handled correctly. On every call you have to guess if it throws or not.
I propose a simple split. When you are writing a daemon or background job whose goal is to never ever stop, then exceptions are indeed a minefield of complexity. When you are writing a collection of HTTP/RPC handlers, exceptions semantics are almost always correct: whatever goes wrong just needs to bubble up the stack and become a 5XX for the caller. Being able to “handle” without propagating needs to be possible but is rare enough that it should be opt in.
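You can get something close to those semantics in today's Go with a small adapter that turns returned errors into a 5XX in one place. A sketch, not a prescription; handlerFunc and adapt are made-up names, using net/http and log:

type handlerFunc func(http.ResponseWriter, *http.Request) error

func adapt(h handlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        if err := h(w, r); err != nil {
            // One centralized place decides what the caller sees.
            log.Printf("%s %s: %v", r.Method, r.URL.Path, err)
            http.Error(w, "internal server error", http.StatusInternalServerError)
        }
    }
}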
This! These fancy operators add a lot of complexity in practice. Trivial things in Go, like error wrapping (to add local context), become much more complicated with `?` operators.
Go made something super simple. I write and review a lot of Go code daily, and I don't quite get how these error branches are such a big issue. The code is always very simple to follow through.
I've worked heavily with Go for several years now but recently I have been transitioning to Elixir. In my opinion, Elixir handles this kind of error return problem better with two separate patterns:
1) Somewhat similar to what the author of this post achieved, there is a pattern commonly used in Elixir libraries where there are often variants of library functions which end with ! which raise an exception when a problem is encountered.
I like a lot about Elixir, including what you point out. Pattern matching is awesome. Its type hinting via spec is not enough (I hear some developments are on their way here).
I don't like that I often had to jump up several callers to understand the arguments coming into my function. Go wins here. And I also require performance. While it may have been query abuse by Ecto, the Elixir code base I was in was only able to handle like 3k rps across 5 nodes. I expect nearly 2x that from a single similarly sized Go node making Go 10x more performant in naive implementations. Easier to read and more performant? I chose Go. It also plays nicer with K8s; we had trouble getting nodes linked up in elixir to allow multi node BEAM features.
I like to joke that the most important question in Irish architecture is: where does the water go?
Similarly I love how in go error plumbing is front and center, at least as important as data plumbing, usually more important.
For some problems, one wants to gloss over things that might go wrong. Don’t use go for those. Go is for when how something fails is more important than how it works.
I love weird little experiments like this! I was in favor of the check-handle proposal[1] that landed at the same time as the generics proposal and a little sad that it seems to be abandoned.
Bikeshed: How about _two_ bangs?
data := !!io.ReadAll(zd)
The double-bang pattern is unused in Go because it doesn't coerce non-booleans to booleans, so you can appropriate it without stepping on anyone's toes :)
The main disadvantage of a Rust-?-like syntax for Go is you lose the ability to add context to the error (e.g. wrapping with `fmt.Errorf("context goes here: %w", err)`). Some people prefer to shove a stack trace inside the error to work around this. I'm not sure what the perfect solution would be. The check-handle proposal adds a special handle keyword to deal with it.
BTW, I am aware of the 'errors are values' concept -- I was the guy interpreting for Rob Pike and the nice Japanese fellow who asked him about it at a conference afterparty and inspired the blog post. I think this pattern is great but it's quite hard to distill it into generic advice ("delay reporting errors until the last possible moment"?). Designing ergonomic APIs for Go can be challenging, and purely anecdotally a lot of the proprietary Go code I've seen at various companies does not do a great job at it.
My #1 wish for Go is that one day Go will figure out sum types and pattern matching. I know that the way interfaces work make this challenging, but I have hope. Using sum types to handle errors makes it much more obvious what the 'right thing' to do is.
> The main disadvantage of a Rust-?-like syntax for Go is you lose the ability to add context to the error
You don't lose the ability to just by that syntax existing. You could still do so by writing it out the long way like you have to do anyway today in Go.
What do you mean by "at the cost of type erasure"? Isn't type erasure an attribute of a programming language and not something that a library has control over?
In this case, it means your actual error will be packed and boxed someplace behind a smart pointer; its type will be lost (so you wouldn't be able to match on its specific variants downstream, should you want to), but it will still be able to format itself to a string and will comply with the std::error::Error trait.
Indeed. What I meant is that compared to the pre-existing error handling, there are use cases it won’t fit. I am pro-? syntax. I think it would be useful even without being able to add context, but I’m also interested to see if there are solutions that could handle this for you.
Oh yeah, I’m not seriously suggesting !! as a new language feature, just recommending it as an alternative to the article’s (!fn() and ^fn()). The reason for “abusing” these pre-existing syntaxes is so that you can take advantage of the official parser and AST package. Makes it easier to play around with. If Go were to ever actually add this feature, they definitely should use a new keyword or symbol.
Agree, wrapping errors continues to build context until you have a clear trail of blame for the issue.
Most languages just expect you to read the stack trace and figure it out. Proper error handling tells you explicitly how it failed at each level so you can decide what to do at each level of the codebase.
Having global try/catch (or bang as the author suggests) is easier at first, but makes actually handling error paths worse.
Here is a fictitious example of an error that was wrapped from deep in the internals of the net package all the way up to the HTTP handler serving the user. It was wrapped five times and clearly outlines what went wrong at each stage.
> failed loading config: unable to reach host example.com: tcp: dns lookup: timeout
https://pkg.go.dev/errors#Is and friends can be used to figure out what type/class of error exists in this chain so there is no loss of information either. It's more than just string concatenation. It's an actual tree of errors that also presents well for the logger.
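A runnable sketch of that idea; os.ErrDeadlineExceeded stands in here for whatever sentinel the net package actually surfaced, and the chain mirrors the quoted log line:

package main

import (
    "errors"
    "fmt"
    "os"
)

func main() {
    // Pretend this sentinel came from deep inside the net package.
    root := os.ErrDeadlineExceeded

    // Each layer wraps with %w, building a chain like the one quoted above.
    err := fmt.Errorf("failed loading config: %w",
        fmt.Errorf("unable to reach host example.com: %w",
            fmt.Errorf("tcp: dns lookup: %w", root)))

    fmt.Println(err)                                    // the readable chain
    fmt.Println(errors.Is(err, os.ErrDeadlineExceeded)) // true: no information was lost
}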
And the stack trace comes with its own issue; they are expensive. Which is fine, if the exception is actually true to the name - an exceptional situation.
But in practice / production, exceptions / errors are not exceptional. Take a HTTP server. Due to a myriad of reasons, the connection between the server and a client can be severed, triggering an error because the connection was broken. Is that exceptional? No, the internet is unstable and clients are unreliable. Is it therefore valuable to generate a stack trace every time that happens?
One exception is fine, but if you go web scale like Google, the "this connection was interrupted" case happens millions of times a day. Millions of times the cost of generating an exception + stack trace becomes really expensive. An error is cheap in comparison.
I think this is fine. 90% of the code that starts with "if err != nil" is immediately followed by "return nil, err". Of course this obscures the origin of the error, but that's probably a fundamental problem with not having automatic stack unwinding. (Maybe unary "Foo()?" could "return nil, err", but infix "Foo()?NewErr" would do "return nil, errors.Join(err, NewErr)"... but that starts to look like line noise...)
Rust struggled with that for about three rounds of verbose error handling, until finally settling on "Result<useful, Error>" and "?". That seems to be about right. C++ exceptions are too much. Writing it all out as in C and Go is too little. The Rust solution is a good midpoint.
try! was added in 0.10 (2014), and ? was stabilized in 1.13 (2016), according to my quick check on GitHub releases. Note that the underlying traits to implement ? on custom types are still unstable.
But that’s part of the value proposition of Go: that it’s a small(-ish) language. I like that it lacks magic found in many other languages. Though I concede that the “simplicity” sometimes comes at the expense of developer ergonomics
Yet here we have an example of bad developer ergonomics that so frustrates one developer that they are willing to use whatever bonkers syntax they can get the parser to choke down, and build an AST transformer for Go, to try to get something remotely approximating sane for a single common case.
Which is saying nothing of Go libraries heavily relying on comment-based programming (err, sorry, we call them "struct tags") to generate boilerplate. The language is successful, but "expense of developer ergonomics" is an understatement.
For me personally, any kind of codegen solutions are off the table unless the library/ecosystem demands it, like k8s :(. You lose so much productivity when dealing with codegen: linting, LSPs, testing, hell even just understanding WTF this does.
It also prevents adding relevant context to the errors, meaning you get all the joy of hunting through call stacks to figure out what went wrong in a sufficiently complex program.
When I write tooling for myself in Go, I usually wrap all of my errors with stacktraces since I hate having to grep for a string that may be interpolated or duplicated throughout the codebase.
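One common way to do that, as a sketch using github.com/pkg/errors; load is a made-up example:

import (
    "os"

    "github.com/pkg/errors"
)

func load(path string) ([]byte, error) {
    b, err := os.ReadFile(path)
    if err != nil {
        // WithStack records the call stack at this point;
        // printing the error with %+v later shows it.
        return nil, errors.WithStack(err)
    }
    return b, nil
}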
Addendum: It is still a good thing to be able to add context to the error message though, so I think you've got a point.
But part of me also sees the Go team's point on this, which is that not all functions always need their error checked - as an obvious example, fmt.Println.
Sure, the compiler could add explicit exceptions for those cases, but that's a very unclean solution and it doesn't handle third party libraries.
The "Go way" is to use the errcheck tool to do that.
And on balance I think that having that defined in a separate tool - which can be configured by the user to exclude modules of their choice, and comes with good defaults - is the correct choice.
>not all functions always need their error checked - as an obvious example, fmt.Println.
I agree, but a lot of other languages handle this much better. In Rust (predictably!) you receive a Result type, and if you don't want to check the error you either .unwrap() it or apply the ? operator to pass it up.
I feel like making "let's not check this error" explicit rather than implicit would be an improvement. Currently in Go it's impossible to tell whether someone forgot to check an error, or if they omitted the check intentionally.
As an aside: I feel like if you are ever in a situation in which fmt.Println returns an error, then whatever situation you are in is already far, far beyond saving. Maybe fmt.Println should just panic on an error, that seems better than output silently being dropped!
> As an aside: I feel like if you are ever in a situation in which fmt.Println returns an error, then whatever situation you are in is already far, far beyond saving.
Not necessarily, it could just be that the user has closed the stream for one reason or another. Possibly because they're running it as a service without having set up stdout, if the program has useful side effects.
> it's impossible to tell whether someone forgot to check an error
How, exactly, does one find themselves in a situation where they forget to add entire blocks of logic to their application and not notice? We're not exactly talking about subtle bugs here. This is completely missing functionality – something that becomes immediately obvious as soon as testing begins.
Which, I guess, means that the previous developer did no testing at all. In which case, where do you even begin to figure out what else they have forgotten? Such a codebase, no matter the language, may not even be salvageable at that point.
It automates checking but not handling the errors. It's the Go equivalent of an empty catch{} block. It allows a programmer to not care about errors, which works out to the same thing as ignoring them.
> if you did not need an external linter to remind you
That's true if the function doesn't return a value (other than the error) or if it does and the caller also ignores the return value.
That is, given
func F() error {
...
}
It's legal to call F() without checking the return. However in the case that the function returns a value and an error, like so
func G() (int, error) {
...
}
then it's fine to just call G() and ignore anything returned, but
i := G()
will cause a compile error. It's possible to assign one or both values to the Blank Identifier, the underscore _, so ignoring the error, while possible, requires the code to reflect the intent, like so
i, _ := G()
but
i, err := G()
will fail to compile if either i or err is not used following the call.
> That's true if the function doesn't return a value (other than the error) or if it does and the caller also ignores the return value.
“Yes” would have done just fine, especially as this is not uncommon when it comes to IO. But then again who’d do IO in Go right?
> will fail to compile if either i or err are not used following the call.
a, err := G()
if err != nil {
    panic(err)
}
i, err := G() // err reassigned here but never checked again - still compiles
i, err = G()  // same again
err = F()     // error silently dropped, yet err already counts as "used" above
fmt.Println("Got", a, i)
> Maybe the compiler should require the error return to be assigned for F(), that's a bit of a quirk that's under discussion.
It's not "a bit of a quirk". The language only checks for unused variables, which is woefully insufficient and unfit for purpose, and has been since the language was first released.
It's an intentional design decision of go. In C#/Java, exceptions are often (ab)used to create invisible control flows through the program that are extremely difficult to follow since exceptions can float up arbitrarily large stacks before being intercepted and handled. In go since you are forced to handle every error on the current level of the stack you are on, there are no invisible control flows. Either you pass the error one level up the stack, or you don't.
>In C#/Java, exceptions are often (ab)used to create invisible control flows through the program that are extremely difficult to follow
I hear this often from Go advocates, but as someone who cut my teeth in Java in 2004, I've only ever seen this done in one codebase: an old streaming parser that I haven't seen since, and where the alternatives were, at the time, generally _more_ confusing than tossing an "unexpected end of input" exception that contained context.
In the 19 years since (14 of which have been my professional career, mostly with Java as part of the job _somewhere_), I've never seen it since.
In the meantime, though, I've run into better, more-explicit error-handling strategies (monadic errors, union types with explicit unwrapping) that manage to make the usually-bad option ("I'm swallowing this error") explicit and intentional, which is not something Go manages to achieve. (Interestingly, these better approaches all pre-date Go, so it's not like there wasn't a better state of the art to learn from.)
But Go also doesn't make error-propagation easy either.
Somehow, Go manages to optimize for accidentally swallowing errors, which is sort of impressively bad. It reminds me of something like INTERCAL: engineered to be as much a footgun as possible. INTERCAL is a parody, though, so it has that going for it.
Because if you forget to write the code to account for an error, your program still compiles (unlike languages that use sum types for errors) and then said error will be accidentally swallowed when it occurs (unlike languages that use exceptions for errors). The only other languages I can think of that are as bad as Go about this are C and assembly.
The much-maligned checked exceptions, obviously, _require_ you to have a "catch" block for the exceptions in question, or else you get a compiler error.
Option types, Result types, and Either types (which are just generalized Result types) _require_ you to unwrap them explicitly, or else you'll get a compiler error because a Result<T> is not a T.
In Haskell, you've got monadic error-handling inside of do-notation, which is implicit, but at least does the right thing by default of propagating the error back to you, rather than defaulting to swallowing it and moving on.
Meanwhile, in golang, you write this form around 6-7 times in any function of more than a few lines:
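(roughly this, with made-up names:)

data, err := doSomething()
if err != nil {
    return nil, err
}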
...sure, that's so much noise it's hard to miss.....the first time. But since that's literally the only way errors can be handled, you wind up with something more like this:
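(a made-up example; every function and type name here is hypothetical:)

func handleSubmission(req Request) (*Receipt, error) {
    user, err := lookupUser(req.UserID)
    if err != nil {
        return nil, err
    }
    form, err := parseForm(req.Body)
    if err != nil {
        return nil, err
    }
    err = validateForm(form)
    receipt, err := submitForm(user, form)
    if err != nil {
        return nil, err
    }
    if err := notifyDownstream(receipt); err != nil {
        return nil, err
    }
    return receipt, nil
}

(The validateForm error is the one that silently vanishes: err is reassigned on the next line before anyone checks it, and the whole thing still compiles.)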
...how quickly can you spot the error that was swallowed? How quickly could you spot it at 2am when another, downstream service is broken because its submissions are failing validation but the validation error isn't propagated?
By taking away the typesafety of requiring some sort of type wrapper for multiple return that must be unwrapped, _and_ by taking away the enforcement that you have to check for and either propagate or explicitly swallow the error (by way of a result type or even checked exceptions), Golang takes away your guardrails, leaving you on the mountainside and liable to fall off easily.
By making you do repetitive boilerplate "if err != nil { return nil, err }" every other line or so (rather than providing automatic error-propagation machinery like Haskell's do-notation or Rust's `?` operator), Golang lulls you into "highway hypnosis"[1], setting you up to be much more likely to accidentally drive over the cliff. It makes the Right Thing™ tedious, easily omitted, and only enforced by your own constant vigilance (or complex external tooling that has to guess at your intent), and makes the Wrong Thing™ the default.
> In C#/Java, exceptions are often (ab)used to create invisible control flows through the program that are extremely difficult to follow since exceptions can float up arbitrarily large stacks before being intercepted and handled.
I've actually never ever seen that done in production code in the last 20 years I've been programming. I've only seen exceptions used for errors in which case the described behaviour is exactly what I want.
The fact that Go advocates have to exaggerate the issues with exceptions to make the design of errors in Go seem reasonable makes me extremely suspicious.
How are gotos invisible? They are pretty explicit.
Also, just because your (not talking to you, just a pet peeve of mine) CS101 professor said that gotos are bad doesn't mean it is true in 100% of cases.
I've been pondering doing a short experiment where I abandon exceptions in C# and instead always return 1 or 2 values from my methods: maybe some output and an error object. I'm not sure how well this will work in practice. One issue that occurs to me immediately is that I'd be wrapping a lot of library functions in try/catch blocks just to push my error objects back up the call stack. I suppose the lazy option is to just ignore the libraries and only worry about my own code.
1. it's very very tedious to have two unrelated error systems in one language (yours and the one everyone else in C# uses)
2. as Go demonstrates, this type of error handling is extremely tedious even by itself
I’ve done this at times in Python and C#, I really really miss Go or - better - Rust style errors there.
But, exactly like you say, you end up fighting libraries - particularly the standard libraries, and it ain’t a fight you will win.
In typescript it actually kind of works - better at least. JS APIs often are errors-as-values already because of the old continuation async APIs, and typescript of course lends itself incredibly well to rich return types.
One of the things that intrigues me about Go is that it is not meant to be used in isolation. So, the experiment you list here is actually a whole different experiment if done in isolation (on like say a hobby project maintained only by yourself) versus something actively maintained by a whole group of people. The experiment really should be done with some peers - if you can find willing victims (I mean, volunteers)!
To me part of a language's utility is measured by how well it works in the group setting. That's what typically drives the industry, and more to the point for me it is the situation I will encounter most of the time.
C# doesn't really offer language syntax for dealing with DUs (discriminated unions), so you will probably suffer. Perhaps it can change if the language supports DUs in the future.
it is my second or third favorite part of Go. What sucks more than just about anything is exceptions as control flow. It is a spooky GOTO at a distance.
To everyone who complains about the "if err != nil { return err }", honestly, you are doing it wrong or you have a toy application. I have written several, large, high scale, highly available systems processing multiple billions of requests a day. When analyzing those code bases, empty error returns like that accounted for 4% or less of our code. We always were doing _something_ with the error. Metrics, logging, retries, sending off to a different workstream, etc.
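For example, something like this; a sketch where Item, Output, process and the metrics helper are all hypothetical:

func processBatch(batch []Item) []Output {
    var results []Output
    for _, item := range batch {
        out, err := process(item)
        if err != nil {
            metrics.Inc("process_failures")
            log.Printf("process %s: %v; skipping", item.ID, err)
            continue // count, log, and move on rather than fail the whole batch
        }
        results = append(results, out)
    }
    return results
}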
It's not just the empty return. It's that every fallible function call starts with `err = foo()` or `value, err = foo()`. You're forced to read `err` at the beginning of every function call even if you don't care. How will sugar for an empty return or appending an error handling scope harm anything?
And I understand the advantages over exceptions, but we've made some nice progress since the mid 2000s. Sum types and error unions aren't rocket science, and even the assembly-line programming model that Go seems built to enable could adopt them in a day.
You could even have `catch` capture only the last value in the result tuple and it would still be big improvement.
sum types and pattern matching are great. For me, I'm anti-exception.
"Point on the doll where the exceptions hurt you." I worked on a project with exceptions as control flow (cough, twisted python, cough), and the error handling was caught several classes and mixins up and over in the file directory. In some far away file, your exception triggered a callback or an errback, and if that excepted, similar magic happened. It got to the point that several engineering choices had to be made around "well, this really would be nice to have an exception and some standard handling, but the framework will take it and do strange things."
I _love_ handling my errors where they are created.
There are no practical problems found in error handling, period. They aren't anything special. It is no different than handling a person's age, and who has problems with handling that? You have failed to grasp programming at even the most basic level if you are struggling with handling values.
The discussion is always just about the pleasantness of the development experience, and exception handlers are simply not pleasant to use (when carrying errors) - to the point that, when using those languages, developers go out of their way to avoid having to deal with errors so they don't have to write out the monstrosity that is catching a thrown error.
Indeed, that's life. You have to deal with the hand you are dealt. If that means changing how the application functions to not make your developer life a complete living hell due to quirks of a programming language, so be it. But idealistically, that does not make for a good language design. There is a reason why they are called exception handlers, not stack traversers, or whatever.
What I mean is that exceptions and try/catch makes a better development experience imo, both for "errors" and control flow and we've been using that for ages with no debate similar to what the golang error handling generates.
They only give the illusion of providing a better development experience if you forgo dealing with errors.
Indeed, there is a class of programming, known as scripting, where errors actually don't matter. If something bad happens you simply fail, deal with the problem through human intervention, and try again later. It is not unreasonable to carry errors up the stack using exception handlers in this type of problem domain. Arguably communicating an error state this way is the best solution we know of to that problem.
However, there is another class of programming, known as systems, where error handling is the most important code you will write. This is the domain where exception handlers are just plain not suited to the problem. The idiomatic Go solution is not pleasant, but a huge improvement over dealing with catching exceptions everywhere.
As before, because catching exceptions is so painful, many applications that probably should be systems are pushed into being scripts. Which may very well be a good engineering solution to the problem of dealing with a quirky language, but idealistically a programming language will not shape your application requirements like that.
But, yes, Go is not at all designed to be a scripting language. It has been quite explicit about that since day one.
I don't think the scripting/systems distinction matters. After all, Go dragged in a lot of Python developers that create more or less the same kind of applications and backends. They just enjoy added benefits such as easy deployment, faster execution, lower memory requirements, saner dependency management, and more. But I'm pretty sure error handling is not what they like about Go. I sure don't, I just cope with it.
With respect to Python, what is the practical difference between:
try:
    something()
except FooError as err:
    raise BazError("something bad happened", err)
except BarError:
    return 1
and
err := something()
switch {
case errors.Is(err, FooError):
    return 0, NewBazError("something bad happened", err)
case errors.Is(err, BarError):
    return 1, nil
}
? I see no meaningful difference at all.
The only thing that stands out to me is that the Go version gets you thinking about errors more (although not as much as some other languages), since they aren't sourced from something bolted onto the side. In Python, you are more likely to lazily call something() blissfully unaware that an error may occur. Is the problem actually not with Go per se, but that people don't want to be reminded that errors are a thing, presumably because they don't want to put in the hard work to properly deal with them?
It is quite true that properly dealing with errors is hard work. Really hard work. But it is also the most important code in your application, so I posit that it is work worth doing.
> In Python, you are more likely to lazily call something() blissfully unaware that an error may occur.
It's not about being lazy or unaware (frankly, that's condescending). No matter the language, IDEs are pretty clear about what errors a something() could trigger. It's not about terseness of the code either. Again, IDEs will just fold all the "if err != nil" blocks so I don't care. It's about separation of concerns.
Example: If I develop an app which relies on external resources such as an API or database, there may not be any reason to manage all of their possible failures upfront, right in the middle of the so-called "happy path" of the current business logic. Just let the connection error bubble up and do the right thing (i.e. roll back whatever you had, log the error and display a nice message to the user, or whatever is needed) elsewhere, where this error handling is centralized along with all the HTTP transport shenanigans.
That's kinda like the Java checked exception thing: annoying with no real benefit.
These days I'm porting an app to Go. And I really like some of the things it brings to the table (mostly unrelated to the language semantics though). But it also feels incredibly archaic at times, error handling not being the only culprit here.
> there may not be any reason to manage all of their possible failures upfront
You still need to handle the errors you receive to ensure that implementation details do not leak to the next caller. That, at the very least, means returning (or raising/throwing) a new error that is relevant to the function returning it - not what was relevant to any functions it happened to call.
To continue with your example: If your database error bubbles up, callers will come to rely on those error values. Then, when changing requirements sees you need to switch your implementation to a web service API – with a whole different set of errors – now everything they wrote is broken.
Which is especially problematic in languages like Go and Python where possible error values are not well communicated to the caller via language constructs. Your code will compile just fine, and seemingly run fine, until that SQL error you were expecting becomes a 500 HTTP error that you didn't know to handle, and now you're in a difficult place. Maybe you can get away with that practice if you have checked exceptions, as at least the compiler will blow up when the possible set of errors change, but it's still a pretty awful API design to leave the caller breaking like that just because you changed an insignificant (to the caller) implementation detail underneath the abstraction they are using.
> (frankly, that's condescending)
Flawed logic. There is no emotional component to a discussion.
I'm not asking for exceptions. I'm asking for sugar for an empty return, and most importantly, a way to capture the error value after the function call.
Regular go style is to return (value, error) tuples from most functions which can have runtime errors. This raises a bunch of questions which, thanks to some of the comments in this thread, I hadn’t really thought about until now.
I think it is implied that one should not return both a value and an error. Is this true? Is there much code in go that returns both and lets you decide if you want to take the value and carry on, or take the error and stop?
Is there much idiomatic code where a function returns a tuple of actual values? For example a function in some shipping code that returns the largest dimension of a package to be shipped where it can only be of a maximum size or a maximum weight? Does this kind of idiomatic code also rely on the contract that only one value will be set?
Does Go have the typing facility to have a union type for both a value and an error, and does anyone use this in preference to returning tuples? If types in Go can be nil then this still allows for ambiguity, and it would surely be unpopular to produce code that breaks the value/error tuple pattern, but I assume some left-field hackers are breaking the rules somewhere.
Designing better error handling in Go is ongoing work. Proposals are being considered. Personally I don't see a big problem in real life Go code bases.
Note that there is kind of a philosophical cul-de-sac here. If you understand and expect your "errors" (= exceptional situations that prevent the program from doing what we thought the user wanted), you can handle them, and they become part of your program's normal logic.
If you don't understand or anticipate your errors, or choose to neglect situations that are not really rare or are really dangerous, no amount of special language constructs will save your users.
> = exceptional situations that prevent the program to do what we thought the user wanted
Just to latch onto this remark, are they actually exceptional? Say you have a REST API, does something with a database. 1 in 1000 requests fails, so it's exceptional I suppose. But then you go web scale, and your application gets called a million times per minute. Suddenly you're dealing with 1000 errors per minute; not so exceptional anymore.
Besides, errors can also be things like 'SQL row not found', which are expected in the normal run of things.
Some of these proposals don't work especially well for Go. Go's type system is so plain - in fact it's too plain - that allowing "special compiler keywords" can do more harm than good. Take "?": what if there are 4 return values? In Go, where order is pretty important, this alone increases compile time.
Also a good example of programmer masochism, there's absolutely no good reason for
zd, err := gzip.NewReader(fd)
to return an error rather than waiting for an actual read. The stdlib is usually pretty good about this but third-party libraries are rife with this kind of misdesign.
You should try reading the source code and you will get your answer. NewReader[0] calls Reset[1], which calls readHeader[2], whose name should tell you what it's doing.
Is the author just telling us about a tool they privately made but are not sharing?
And any code you write with this "bango" operator is incompatible with anyone else's Go environment unless they have the author's same tool configured to preprocess their code?
I like having errors and handling them explicitly, though it's a very common pattern to do something, and if there's an error, bubble the error up.
I think instead of a bang it would be better to have a keyword like this.. similar to defer to defer a function. Maybe `try` to give a function a try and if any error is returned then bubble up.
defer cleanup()
bar := try somethingelse() // equal to: bar, err := somethingelse(); if err != nil { return err }
Sum types are the right thing for a statically typed language.
Returning an error and a value as a product type with an implicit promise to ignore the value part if the error part has a specific value is not the right thing.
There's a credible chance sum types are the right thing in a dynamically typed language too, but I haven't settled on exactly what that should look like yet. It's not a thing in the dynamic languages I know of.
I will promote my shorterr package, that I just released a few days ago, and that tries to implement Rust's ? operator as much as possible: https://pkg.go.dev/github.com/ansiwen/shorterr
If you browse random Go code from https://pkg.go.dev/ you will have a hard time finding that hypothetical case where most of your code is error handling chores. The problem is artificial.
Yes, I agree. If you just ignore errors, they will go away. #yolo
P.S. That said, any program written in Go is an absolute shitshow of crappy UX because of the (inevitably) inconsistent and often incorrect error handling.
Once a Go program goes off the happy path, the only way to figure out what happened is to read the source code. (No, you cannot have a stack trace. Stack traces make junior programmers feel uncomfortable, and so Google made them verboten.)
And even then, because Go uses insane and outdated OOP practices like casting all pointers to a generic base class, even reading the source code is an exercise in frustration and rage.
Stacktraces are not useful for end users of the program and should never be shown unless there is a bug in the program.
It is silly how many Java or Python programs display stacktraces on trivially preventable problems that are not bugs (e.g. file not found) instead of giving short human readable messages.
There’s error wrapping for adding context. Where one adds a human readable context to errors. Stack traces as I know them from the JVM world, with their line numbers and method names, seem a lot more coupled to the source code, so I’d say it’s the other way around really.
Library code, especially stdlib, is usually the bottom of the abstraction chain and rarely performs fallible operations like IO. It’s the one originating the errors.
The issue is that application code usually has several layers between the end-user and the fundamental operations like Read()/Write(). Bookkeeping errors through all these layers is a chore. You can skip the layers but then you get spaghetti.
> New Code is a lot cleaner: matcher2 := match.If(match.Binary(match.Ident("err"), token.NEQ,match.Ident("nil")),match.Block(match.Multi(match.Return(match.Multi(match.Ident("nil"), match.Ident("err"))))))
Go is designed to be easy to read and easy to write. err != nil is not hard to read. If you don't like the repetition, it simply isn't the language for you. I'm perfectly happy with how Go does it. I do not want the language morphing into another Rust.
I love these kind of experiments. Before Java had lambda expressions, there were a few creative libraries that introduced them. It wasn't perfect, but it helped drive towards good use cases for that language feature before it made it into the language officially.
The only people who constantly complain about errors in Go are those who have little or no experience with long-running programs - mostly scripts (PHP, Python, ...).
He doesn't want to ignore the error, he wants exceptions-like behavior where the exceptions just propagate up by themselves if you don't catch them inside the function. Like Java or Python exceptions.
Yet another language-ergonomics post that confuses essential complexity (the function is doing complex business logic that happens to be necessary) with accidental complexity (in this case: having to type more).
With this distinction in mind, the entire article boils down to two points:
1. The author is basically complaining about having to type more (fair, but not interesting)
2. The author proposes that appending a "!" to a statement is somehow clearer than explicitly returning an error value (absurd and eyebrow-raising)
The whole article then becomes "I like Rust's syntax better".
Having a common idiom for common operations like error propagation is not "absurd and eyebrow-raising".
What sounds absurd to me is obscuring the actually interesting control flow significantly in favor of either showing the trivial control flow or alleviating the need to learn the one (1) single operator that represents said trivial control flow.
You don't have an idiom, you write it out longhand every time, which ends up obscuring the important control flow and business logic under a pile of ceremonial repetition.