I think the idea of "useful zero values" in Go is a mistake from bias grown out of being at Google. Protocol Buffers implement default zero values (https://developers.google.com/protocol-buffers/docs/proto3#d...) and there's no way of discerning whether something is "false" or "unset", the empty-string or unset, etc. In that context, it makes perfect sense for Go to have similar behaviors for zero values.
In some respects, you can get around this by simply adding another field. You can have `bool over18; bool over18IsSet` and I'm guessing that Google's internal usage of protobufs does this.
In a certain way, even getting rid of null/default values doesn't fix all problems when it comes to things like updating data. Think about updating a record where a field could be set or unset - let's say a person's age could be a number or empty. If I want to send a request updating their name, maybe I send `{"name": "Brian"}` because I don't want to update the other fields. How do I unset Brian's age? `{"age": null}` makes some sense, but a Java (and many other language) deserializer will have null for age with `{"name": "Brian"}` too. I mean, the age field has to be set to something in the Java object. You could manually read the JSON, but that's janky and brittle - and hard in terms of interoperability with libraries and languages.
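To make that concrete, here's a minimal sketch using Go's encoding/json (the Person struct is hypothetical): an omitted "age" and an explicit `"age": null` come out identical after unmarshalling.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Person is a hypothetical record; Age is a pointer so it can be "absent".
    type Person struct {
        Name string `json:"name"`
        Age  *int   `json:"age"`
    }

    func main() {
        var a, b Person
        json.Unmarshal([]byte(`{"name": "Brian"}`), &a)
        json.Unmarshal([]byte(`{"name": "Brian", "age": null}`), &b)
        // Both decode with Age == nil: "field omitted" and "explicitly null"
        // look identical after deserialization.
        fmt.Println(a.Age == nil, b.Age == nil) // true true
    }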
Maybe Google's protobuf designers would argue, "you really need to have explicitness around your values and forcing defaults means forcing engineers to deal with things explicitly."
I don't think I agree with that. I don't like Go's nulls and default values. I think most languages are moving away from that kind of behavior with other new languages like Kotlin and Rust going against it and older languages like C#, Java, and Dart trying to bolt on some null-safety (Java via the `Optional` object and C# and Dart via the opt-in `TypeName?` similar to Kotlin). It's possible that this is a wrong direction chosen by many languages. We've seen bad programming language fads before. In this case, I think we're on the right track and Go's on the less-great side.
Go has a lot to like, but this is one of those odd decisions. I understand why they did it. Go comes from Google where Protocol Buffers have similar default-value behavior. I think Go would be better if it had made some different decisions in this area.
I don't think it has anything to do with protocol buffers, but the behavior derives from the same intrinsic motivation.
If you don't have a zero value, a programmer has to pick one. What are they going to pick? Probably what the language picks for you, "int n = 0;", 'string foo = "";', etc. For a language, it doesn't really matter which side you pick (force programmers to select a value, or auto-assign one).

For network protocols, defining empty is an important optimization -- if the client and the server are guaranteed to agree, you don't have to send the value. This is especially important where the client and the server aren't released at exactly the same time; the server may have a new field in the Request type that the client doesn't fill in. With a predefined zero value, it doesn't matter.

(You can always add fields to your message to get the same effect, if you actually care. I've never seen anyone do this in any API, including ones that use serialization that doesn't have the concept of zero values. It's why Javascript has the ?. operator!)
Finally, Go came out in the proto2 era, which did have the concept of set and unset fields (and let the proto file declare arbitrary default values). Honestly, I wrote a ton of Go involving protos at Google, and never saw proto3 until after I left Google.
> If you don't have a zero value, a programmer has to pick one.
Most languages without null have an optional type which is used exactly for that. In these languages, this means the None value exists when you have optional things, but that, then, the compiler forces you to check whether your value is set when you want to use it. Serialization libraries then get the choice to handle these optional values as they wish, which can be not to send a field.
It's one of those things that may be hard to think about from an external pov, but it works just fine.
For users of these languages, proto2 was okayish, and proto3 was a massive regression. Another thing that's missing in protobuf is the ability to define union types. That's a feature frequently requested by users of typed functional languages for serialization protocols.
For programming languages, I agree with you. Though I don't really see how "int*" is different from "Optional<int>". You can write:
    func foo(maybeInt *int) {
        if maybeInt == nil { panic("not so optional!!!!") }
        ...
    }
Just as easily as:
    func foo(maybeInt Optional<int>) {
        switch maybeInt {
        case None:
            panic("not so optional!!!!!")
        ...
        }
    }
To me, it just isn't a big deal. Your program is going to crash and return unexpected results when it expects something to exist and it doesn't, and the type system won't save you. (Even Haskell crashes at runtime when a pattern match doesn't resolve. Don't see how that's any different than a nil pointer dereference. Your live demo is ruined.)
For protocols, an Optional type just pushes the problem one level down. Is the optional value "None" because the client didn't know the field existed, or because they explicitly set it to "None"? You can't tell.
I think rather than going 3 levels deep, it's easier to just define the default values and not distinguish between these three cases. If you want an Optional value, you can make yours as complex as you wish:
    message Optional {
        int value = 1;
        bool empty_because_the_user_said_so = 2;
        bool client_has_version_of_proto_with_this_field = 3;
    }
Now if you get (0, false, false), you know that's because the client is outdated. If you get (0, false, true), you know that's because the user didn't feel like sending a value. And if you get (0, true, true), you know the user wanted 0. (Of course, there are all the other cases that you have to handle -- what about (1, false, true), or (1, true, false)?)
I think you'll find that nobody but programming language purists want this feature. If your message is:
    message FooRequest {
        int foos_to_return = 1;
    }
You do the right thing regardless of whether the 0 in foos_to_return is what the user wanted, something the user forgot to set, or the user has an old version of FooRequest ("message FooRequest{}").
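In practice that tends to look like treating 0 as "use the default" -- a sketch (defaultFoosToReturn is made up, not from any real API):

    // FooRequest mirrors the message above; 0 could mean "unset", "forgot",
    // or "old client" -- the server treats them all the same way.
    type FooRequest struct {
        FoosToReturn int
    }

    const defaultFoosToReturn = 50 // hypothetical default

    func foosToReturn(req FooRequest) int {
        if req.FoosToReturn <= 0 {
            return defaultFoosToReturn
        }
        return req.FoosToReturn
    }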
> Though I don't really see how "int*" is different from "Optional<int>"
They wouldn't really be different if a pointer were only used to express optionality.
However, in go, that's not the case. Pointers are used to also influence whether a receiver can mutate itself (pointer receivers for methods), to influence memory allocation, and as an implementation detail for interfaces.
If I have "func makeRequest(client *http.Client)", how can I know if the function will handle a nil client (perhaps using a default one), or if the client expects me to pass in a client, and just uses a pointer because idiomatically '*http.Client' is passed around as a pointer?
The answer is, I can't know. However, if we have what rust has, which is Box and Option as two different things, we can get 'fn makeRequest(client: Option<Client> | Box<Client> | Option<Box<Client>>)'. We've made it so the type system can express whether something is optional, and separately whether something is on the heap.
In go, those two things are conflated.
Similarly, rust has 'mut' as a separate thing from 'Option', which is another case where the type-system can express something the go pointer also is sorta used for.
In practice, I think there's a clear difference. Most of the pointers I see in go (like pointer receivers on methods) are not pointers because they're expressing "this might be nil", they're pointers because the language pushes you to use pointers in several other cases too.
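To make the ambiguity concrete, here's a sketch of the nil-tolerant reading of that signature (the url parameter and the fallback behavior are assumptions for illustration; nothing in the type tells the caller whether a given API actually does this):

    import "net/http"

    // makeRequest treats a nil client as "use the default one".
    // The signature alone cannot communicate that this is allowed.
    func makeRequest(client *http.Client, url string) (*http.Response, error) {
        if client == nil {
            client = http.DefaultClient
        }
        return client.Get(url)
    }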
> If I have "func makeRequest(client *http.Client)", how can I know if the function will handle a nil client (perhaps using a default one), or if the client expects me to pass in a client, and just uses a pointer because idiomatically '*http.Client' is passed around as a pointer?
C/C++ implementations generally have a [[nonnull]] attribute to that effect.
Annoying that the various Go linters I have installed (`go vet`, `golangci-lint`, `gosec`) don't catch "possible use of pointer before nil-check" - you'd think it would be an obvious case for those to handle.
I'd imagine it'd be one of the "disabled by default" in `golangci-lint`, for example - helpful for when you're doing an in-depth review of the code but probably not something you want running on every CI invocation.
For one, the compiler can force you to check for None. Trying to use an Option<T> as a T is a compile-time error, you have to write the pattern match to use the Some case.
For another, and this is the big one, you can write a function which takes a T, and you can't pass it an Option<T>. The compiler can statically confirm if your variable has already been "nil checked".
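For what it's worth, Go generics can approximate that shape, though only approximate it, since nothing forces the caller to look at the boolean -- a hypothetical sketch (Option here is not a standard-library type):

    // Option is a minimal sketch of an optional value.
    type Option[T any] struct {
        value T
        ok    bool
    }

    func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
    func None[T any]() Option[T]    { return Option[T]{} }

    // Get hands the caller the "is it set?" flag alongside the value.
    func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

    func double(n int) int { return n * 2 }

    func example(maybeN Option[int]) int {
        // double(maybeN) would not compile: an Option[int] is not an int.
        if n, ok := maybeN.Get(); ok {
            return double(n)
        }
        return 0
    }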
I used to write a lot of Java code, and since you can't know for sure that null-checking has been performed (among other reasons, that might change with a new code path) you just kinda sprinkle it everywhere. And still forget and get runtime errors.
    func foo(maybeInt Optional<int>) {
        switch maybeInt {
        case None:
            panic("not so optional!!!!!")
        ...
        }
    }
This function would never take an Optional in real life if it's just going to crash on None so I'm not sure what you're getting at here. The benefit of using Optional/Maybe is that you're encoding at the type level whether it makes sense for a given variable to be able to be nothing, and if it does make sense, the compiler makes sure you check whether it's nothing or not, but this is an example of where that doesn't make sense so the type should just be int instead of Optional<int>.
I feel like this discussion is departing from the simple clarification about what alternatives exist in the design space, and entering a debate about the merits of these alternatives vs. Go's choice, which was not my aim.
While I'd be happy to discuss my experience using these alternatives and how they're practical beyond being a purist, it doesn't really belong in a discussion about Go.
One major difference is that *int is mutable. Using it as a replacement for "optional" values is potentially dangerous because you can't guarantee that it is never modified.
> For a language, it doesn't really matter which side you pick (force programmers to select a value, or auto-assign one).
Even (modern) C/C++ handles this aspect of memory safety better: There is no default value, and reading from an uninitialized value is a compile time error (usually, because C/C++ have baggage).
Our project got bitten by this hard. I was under the impression that they did this to enable memcpy/memmove into structs, which didn't feel completely motivated if you ask me.
In my horrible opinion, we shouldn’t ditch null. We must introduce null flavors (subclasses) instead and fix our formats to support these. One null for no value, one for not yet initialized, one null for unset, one for delete, one for type’s own empty value, one for non-single aggregation (think of selecting few rows in a table and a detail field that shows either a common value, or a “<multiple values>” stub - this is it), one for SQL NULL, one for a pointer, one for non-applicable, similar to SQL. Oh, and one for not-there-yet, for async-await (a Promise in modern terms). These nulls should be enough for everyone, but we may standardize few more with time. Seriously, we have three code paths: normal, erroneous and asynchronous. Why not have a hierarchy of values for each?
Semantically all nulls must be equal to just “null” but not instanceof null(<other_flavor>).
Edit: thinking some more, I would add null for intentionally unspecified by data holder (like I don’t share my number, period), null for no access rights or more generic null for “will not fetch it in this case”. Like http error codes, but for data fields.
A usual Maybe(Just, Nothing) doesn’t cover these use cases, because Nothing is just a typesafe null as in “unknown unknown”. Case(Data T, Later T, None E, Error E) could do. It is all about return/store values, because you get values from somewhere, and it’s either data of T, promise of T, okayish no value because of E, or error because of E. Where E is a structured way to signal the reason. No other kinds of code paths exist, except exceptions, it seems. (The latter may be automatically thrown on no type match, removing the need for try-catch construct.)
My point is, there is no one-size-fits-all. Maybe you only have Some(data)/Nothing. Maybe you have a Some(data)/NoData/MissingData/Error(err)/CthuluLivesHere.
It's better to develop one that suits you, rather than adopting a set of null-likes that are similar in meaning but different in semantics.
Indeed: your language needs to support the ad-hoc creation of these primitives in a first-class way. (Which is why I still consider a typed language without union types to be fundamentally crippled.)
undefined is awful because you can use it anywhere. Done properly you would only be able to use it in APIs that specifically need to deal with that form of null.
What you want is a 'bottom' class (as opposed to 'top' = Object), not null. Essentially, a class that subclasses everything to indicate some problem. Look at how 'null' works: the class of 'null' (whether it can be expressed in a language or not) is a subclass of anything you define, so you can assign 'null' to any variable of any class you define. This is how 'bottom' works, if you want it as a class. But you already recognise that this is not really what you want: you want specialised sub-classes representing errors of specific classes you defined, which are all superclasses of a global bottom class.
Such a system can be done, but it is probably super ugly and confusing. The usual answer instead is: exceptions, i.e., instead of instantiating an error object, throw an exception (well: you do instantiate an error object here...). That works, but if overdone, you get programming by exception, e.g., when normal error conditions (like 'cannot open file') are mapped to exceptions instead of return values.
The usual answer to that problem then is to use a special generic error class that you specialise for your type, the simplest of which is 'Optional' from which you can derive 'Optional<MyType>'. You can define your own generic type 'Error<MyType>', with info about the error, of course. I think (please correct me if I am wrong), this is currently the state of the art of doing error handling. It's where Rust and Haskell and many other languages are. I've seen nothing more elegant so far -- and it is an ugly problem.
Yeah, my gp[2][0] comment addresses okayish error values with Case(...). I'm curious what you think of this type. What would a language look like if that was built-in?
As I said, it will get super-ugly, and it has not been done (in any language with more than 1 user), I think. Why? Because you will want an error class for a whole tree of classes you define, and it is not so trivially clear what that should look like. A simple 'bottom' (i.e., 'null') works. But say you have 'Expr' for your expressions and you want 'ExprError' to be your error class for that, subclassing all 'Expr' and being a superclass of bottom. Now when you define 'ExprPlus' and 'ExprMinus' and 'ExprInt' and so on, all subclasses of 'Expr', you still want 'ExprError' to be a subclass of those to indicate an error. That is the difficult part: how to express exactly what you want? What does the inheritance graph look like? At that point, languages introduced exceptions. And after that: generic error classes: 'Optional<Expr>' and 'Error<Expr>', etc., without a global 'bottom'. This forces you to think about an error case: you cannot just return ExprError from anything that returns Expr, but you need to tell the compiler that you will return 'Optional<Expr>' so the caller is forced to handle this somehow.
Most people start using Result/Either[0] when they need to define a reason for a value being missing. Then you can decide how to handle arbitrarily different cases of failure with pattern matching, or handle them all the same. The error types themselves are not standardized as far as I know, but I'm not sure how useful it is to standardize these differences at the language or standard library level. Is the theory that people don't use the Result type correctly as is?
It's very usual in Haskell to define some error enumeration, and transit your data in `Either ErrorType a`. It's not a bad way to organize your code, but there is no chance at all that you'll get some universal error enumeration that will be useful for everybody.
> In a certain way, even getting rid of null/default values doesn't fix all problems when it comes to things like updating data.
The way this is addressed in Google's public APIs is with the use of a "field mask"[1]. You provide a list of dotted paths to the fields you want to update. I'm not sure if that serves as an indictment of the design decisions made in protobuf, or if it's just one less bad tradeoff among several bad ones.
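Roughly, at the JSON level it looks something like this -- a sketch of the idea only, with illustrative names rather than Google's actual API surface:

    // UpdatePersonRequest carries the new values plus a field mask saying
    // which fields the caller intends to change. Listing "age" in the mask
    // while omitting it from the person object means "clear age".
    type UpdatePersonRequest struct {
        Person     map[string]any `json:"person"`
        UpdateMask []string       `json:"update_mask"`
    }

    var req = UpdatePersonRequest{
        Person:     map[string]any{"name": "Brian"},
        UpdateMask: []string{"name", "age"},
    }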
So, given Go's design tenets, using pointers makes a lot of sense to me. It is easy to reason about them in terms of memory and resource consumption, there are only a few ways they can be used, pass-by-value semantics further reduces the centrality of them, and they don't require a JIT compiler to be efficient.
Why does whether a language is JIT or not make any difference? C# is usually jitted but you can AOT compile it just like Go if you want, and you could JIT go if you want.
The memory layout of a pointer is quite different from a full-blown object. E.g. Optionals are only efficient in Java because the HotSpot compiler optimizes them at runtime. And it further obfuscates the memory layout--is it an object or not? How much memory will it use at runtime? How long does it take to become optimized? Am I paying the cost of a method call or not?
Yes this is basically what I was talking about. It becomes pretty tricky to understand memory layout if you try to masquerade as a regular type (hence my apparently controversial reference to JIT compilers). I guess my point is that Go pointers are language primitives for that reason, and they support the fundamental "safe" and "unsafe" access operations that all those other languages have. So I don't think there's anything fundamentally different between the safety of Go pointers and optional types, but they are easier to reason about from a memory model perspective (they are laid out in memory exactly as you would expect).
Relatedly, in practice, a lot of Rust code I've worked with is littered with unwrap() calls.
You only have to deal with the extra complexity if you choose to put non-pointers into an Optional, though. If you use the same capabilities go has, there's no problem.
Not that "maybe it uses an extra byte, maybe it doesn't" is going to matter in 99% of situations.
> the other languages with JIT compiling runtimes aren't really useful comparisons
Interesting side-point re. language comparisons I noticed recently -- Java is often benchmarked together with compiled languages, although I would say it's only half-compiled (to bytecode, not to machine code). That's
Bytecode compilation does extremely little for performance; it's all about the JIT. HotSpot was simply the first JIT compiler good enough to be comparable to gcc's -O2. Java is very much compiled to machine code/assembly.
I had to do a double-take; Rust, one of the languages leading the mainstream away from nullables, was created by Graydon Hoare (I looked it up and there's no relation to Tony)
But yeah: I once heard Go described as "a C fan-fiction". It was designed by and for people who use and like C, and just wanted some niceties built on top of it for their usecase of writing distributed systems (garbage collection, strings, easy networking and multithreading, easy cross-platform builds, very basic polymorphism). It was not designed by people who were interested in stepping back and re-evaluating the big picture beyond those practical necessities
I see it as C++ (literally, "C improved"), for the modern world (instead of the 80s). For all the good and bad that implies.
> But yeah: I once heard Go described as "a C fan-fiction".
It's more of an alt-java than a C fan-fiction.
Most of the stuff people are interested in in C, Go dropped. And while Go clearly disagreed with Java on the specific means and many important details, the core goals and aesthetics are strongly reminiscent of Java's.
I think nil/null is a symptom of the real issue: the language permits partially constructed data types, so it has to assign something.
That's a consequence of the "always mutable" model whereby the responsibility of initialization can be shared by the object constructor and the caller.
But there are many cases where it's very intuitive for the caller to set up an object, especially in the objects as actors model.
I think to make that work, you'd want to track the evolution of the object's state, especially noting when all fields have defined values.
For what it's worth, it's more ceremony, but you can work fine with partially constructed data types without null; it's just that the nature of their construction either needs to be encoded in the types directly, or they need to be initially constructed with defaults and you need to decide if you want them to explode/panic if they're not completed, or just return potentially unexpected defaults.
> That's a consequence of the "always mutable" model whereby the responsibility of initialization can be shared by the object constructor and the caller.
It's not though. An "always mutable" model can require that objects be fully initialised up-front. Rust does exactly that.
My experience writing Go is that nil pointer exceptions are really not that big of a problem in practice. Linters and unit tests are quite good at catching those.
A nil-dereference is very easy to spot in production: you have a panic and a stack trace in your logs.
Which means you can check after a few years how many nil-pointer exceptions you got in your actual system. In my experience that number is really low.
> very easy to spot in production: you have a panic and a stack trace in your logs.
Not a great experience for your users though. And definitely not great for developers who get paged to handle these production incidents that could have been caught at compile time.
My experience is that any Golang codebase that makes use of protobuf ends up having DoS vulnerabilities (because they forget to check for nil somewhere).
Yeah it is a strange choice, maybe just reflecting what the creators of Go are used to. They have always had null and haven't had any problems with it, why change?
Null made sense in the early days of C; an option type back then would have been prohibitively expensive. Today, not so: in any language that can afford garbage collection, the cost of an option type is lost in the noise, and modern compilers can often optimize the cost away entirely (though that would hurt Go's fast compile times a little bit).
Rust has a zero-runtime-cost Option<T>. Presuming T is a non-nullable pointer, it compiles to something that uses the null pointer to store the None variant. (This isn't a special case for Option in the compiler; this optimization applies to any two-variant enum where one variant can store a pointer and the other variant stores nothing.)
Early C compilers were not optimising compilers. The hardware was very limited.
Although I would agree it was possible to design the language much better to avoid Null, most buffer overflows, etc., C was not designed to be good from a PL standpoint. It was designed to write something and then evolved.
Early C compilers were non-optimizing compilers because C is among the hardest languages to optimize, not because the hardware was limited. Frances Allen described C as a "serious regression" in language design which "destroyed our ability to advance the state of the art in automatic optimization" because of how its semantics blocked most known optimizing transformations.
If they required values to always be initialized, that would either require a concept of constructors (the whole OOP thing they tried to avoid), or require all struct fields to always be explicitly initialized at every instantiation (which can get awkward), or allow specifying default field values (which sounds like implicit magic Go tries to avoid too).
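For contrast, what Go actually does is let partially initialized struct literals compile and silently zero-fill the rest (a minimal sketch):

    type User struct {
        Name string
        Age  int
        Tags []string
    }

    // Only Name is set explicitly; Age becomes 0 and Tags becomes nil.
    // Nothing forces the writer to decide whether those zero values make sense.
    var u = User{Name: "Brian"}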
In Go, by-value semantics is more common than in Java with its mostly by-ref semantics, so nil is less of an issue.
It's also easier to implement (the allocator already memsets to zero for safety)
So it's just a combination of factors, they took a bunch of shortcuts. Not defending it, but their choice makes sense, too.
I suppose not, if you are considering "we" as "the set of people that designed languages like Go or justify the design decisions made by such group".
If you are considering "we" as humanity, in general, I'd say that no, "we" actually learned a lot and "we" already know better. But you still have some people building rockets to try to prove the earth is flat, unfortunately.
Well Go's goal is a lowest common denominator language with the least amount of features possible and the easiest to learn for new graduates. This essentially makes it almost a toy language in fancy clothes. A toy language with the money of google backing it so it does actually feel useable.
This was never a goal of go. Go was conceived out of frustration with C++. They wanted to reduce language complexity and build times (among other things). For me personally it's C without the hassle.
Go would be an infinitely friendlier language if it had had built in an Optional type from the beginning. People using nil pointers to indicate nil values is a scourge on the language, and pretty much unavoidable given the current design.
My harebrained idea (obviously we can't change it) is that if nil pointers didn't exist the language would be much better. Require that a pointer always has to point to an address and then people couldn't have built all this infrastructure using nil pointers (something that rightly has to do with reference semantics) to indicate nil values (something that has to do with the meaning of the code)
Having explicitly optional/nullable types is great in languages which are religious about it, since they remove a lot of useless branching and complexity from the code. If it's just tacked on, then it just tends to look ugly, without solving any problems.
It's actually only superficial simplicity. There have been good comments on HN about this issue. Just because the language is "simple", it doesn't hide the complexity of reality, and written code ends up being harder and more verbose and more difficult to manage compared to powerful languages.
With all due respect those comments you’re referencing generally don’t know what they’re talking about. There is a lot hatred on this website for Go, which appears to have more to do with “I don’t understand why it’s designed this way, and I think all good languages should look like lisp/Haskell/rust” than “this design is net negative for developers”.
Practical simplicity is all about hiding complexity. Unless you’re building a race car, you don’t need to know the differences between file handling in Linux Mac and windows. It just never comes up. And when it does, it’s possible to peek under the hood.
A lot of the criticism of go mistakes “difficult to write” or “not trendy” for “bad design”, and again I assert this is because the critics don’t actually understand what Go is designed for, period.
With all due respect, I do understand why go was designed the way it was, and I vehemently disagree with those decisions. I used go for years, and with every passing day I grew more and more disenchanted with the language.
GP is right. Eschewing abstractions in a programming language forces users of that language to deal with it themselves on a recurring basis. Millions of lines of
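    if err != nil {
        return err
    }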
don't help anyone, and only detract from readability which is bar none the most important part of a code base. Sadly, this is one of many symptoms in the language where problems that could have been solved in the language have instead been pushed down to its users to deal with over and over and over.
You claim to understand the decisions, so I’ll push you on that.
Why is go error handling designed the way it is? What are the intended benefits? What are the actual benefits?
A follow up, on your abstraction point: why does go eschew abstraction? Intended upside? Actual upside?
It’s very clear people in these threads can perceive downsides of some of Go’s decisions, but what about upsides? And further, can you recognize how Go copes with the downsides it’s choices produce?
I don't know much about Go, and I'm one of those people who mostly sees the downsides, so I guess I'll ask you about one of my biggest issues with the language (in the hopes of broadening my perspective on this): what is the upside of returning a value and error, rather than a value or an error? Or I guess, more precisely, why is that the default and only way to do things [1]?
For context, I'm a big fan of explicit errors and errors as values over exceptions as a general principle: my preferred language is Haskell, where Either (essentially Result) is the main way to doing error handling, and exceptions are very rarely used.
[1] - I can see a few cases where returning both is useful, but I can't see it being what I want most of the time
I agree that Either is more semantically correct. But is it more readable? Is it easier to manipulate? I don't think so. To have Either, you need two big concepts:
1. Sum types. These can be really useful, and they _might_ improve Go, but they are undeniably a new complication and something users must now understand to work with Go code. This is not free. Is it the best use of developer minds to be paying rent to the Sum type concept? Or might dev teams be better off using that rent on some other domain specific problem?
2. Unwrapping. Now you have a container for your data, and you have to peel it apart to manipulate the data. This is a minor cut, but it adds up over time, and I think it leads to necessarily more convoluted code. `if err != nil` is the cost of separate err types, but visually its minor and Goland at least auto-collapses it so it interferes as little as possible with your code. I think matching on errs, or just propagating the error case, are both more complex for readers to parse. Furthermore, propagating a naked error loses crucial context (where in this function did we fail? what info from this function do I need to debug this?) that is dead simple to add in Go.
I think a sum type has a semantic advantage, but it also has a cost, and I think you can make an argument it's not worth the dough.
Not to get in the way of the fun, but my point at the start was less about the semantics of an Optional type in particular, but more about how in the absence of an Optional (or I suppose a nil value) people abuse nil pointers as a stand-in, and that's been a source of lots of bugs I've seen.
The language was designed without a nil value, opting instead for an empty value. It turns out that in lots of real-world applications some kind of affordance for "this doesn't exist" is so common that even standard libraries use nil pointers to model that. That leads to bugs because now lots of things are pointers that don't actually need to be passed by reference, and the language's affordance for _everything_ being able to be passed by value is being circumvented. An optional type (even without any full blown support for sum types in the language) would have solved this problem neatly.
> You claim to understand the decisions, so I’ll push you on that.
I don't know why I'm biting at this, because it's clearly a set-up.
No matter what I say here, if my explanation of their design is in any way incomplete, I'll be taken to task for that omission and held up as an example of yet another ignorant hater who clearly doesn't understand the brilliant minds of its creators. If I accurately detail most of its purpose but make a handful of minor technical errors (after all, it's been years since I stopped using it), I expect the same.
Here goes anyway.
> Why is go error handling designed the way it is? What are the intended benefits? What are the actual benefits?
Go's error handling is designed in response to the problems its authors perceive with exceptions. There are many genuinely reasonable problems one might wish to design around.
Unchecked exceptions implicitly bubbling up from any function you might call is something they wanted to avoid. They wanted to encourage handling errors as close as possible to where those errors occur. They want to force error-handling to be explicit. And they believe that error-handling code is as important—if not moreso—than the "happy path" code, and so shouldn't be tucked away out of sight.
All of these goals are reasonable. It's ultimately the execution that's turned out awful.
What are the actual benefits? Well, it's hard to argue against the explicitness but personally I wouldn't call it a benefit. Sampling random large projects in Github demonstrates that production go code is something approaching (and potentially even exceeding) 50% error-handling stanzas in practice. In making things explicit, they've swung the pendulum way too far in the opposite direction and managed to make actual program logic dramatically more difficult to decipher.
Unchecked exceptions can't implicitly bubble up through your code, it's true. But most go error handling just... explicitly bubbles those same errors up, "decorating" them with text to serve as breadcrumbs when trying to understand where an error occurred. We've simply created human exception handlers and in doing so have lost stack traces in the process. There appears to be no convention of declaring per-error structs that might help one determine what went wrong programmatically, so every error is effectively "stringly" typed and after it's been bubbled up once it's effectively impossible for a higher layer of code to understand specifics of what might have gone wrong. Was the problem with your HTTP call a network error (try again!) or a server error (fatal)? If for some reason you couldn't handle it right where it happened, you have little chance of being able to tell the difference between the two later.
The benefits they did reap have come with some pretty massive caveats. And with this design, they've brought in additional own-goals that should have been so easy to avoid but somehow weren't.
Calling a function and doing something with the value or bubbling up the error is something like 95%+ of error handling in go. Rust makes this a single character: `?`. With go, you're forced to copypasta the error-handling stanza, hiding the actual logic you're trying to accomplish in pointless administrivia.
Further, with tuple values, you get a value and an error rather than a value or an error. For a function that returns an `int, error`, you get back a real `int` along with your error! If you make a mistake and forget to actually handle the error or bubble it up, it's all too easy to use the value. Its value might be well-defined (usually the zero-value) but the semantics of that value likely aren't. Ask me how many bugs I've seen in production code where a bug in error-handling allowed meaningless zero-values to plow their way forward through happy-path logic before causing problems somewhere completely unrelated to where the original error occurred!
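A sketch of that failure mode (parsePort is a hypothetical helper): the error gets dropped, and the zero value sails on into logic that has no idea anything went wrong.

    import "strconv"

    func parsePort(s string) int {
        port, err := strconv.Atoi(s) // returns 0 and an error on bad input
        _ = err                      // oops: error accidentally ignored
        return port                  // callers now happily dial port 0
    }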
All of this is to say, go's designers had real, valid concerns with exceptions in mind when designing the language's error handling constructs. What they didn't seem to do was consider what problems their design would introduce. Of course, most (but not all) of these problems could have been sidestepped by having an Option/Result type like Rust (or equivalently, a Maybe/Either type like Haskell). There's even precedent in the language for "blessed" generic types like maps and slices! They could have done this, even without introducing full generics.
> A follow up, on your abstraction point: why does go eschew abstraction? Intended upside? Actual upside?
This post has already gotten too long and honestly anyone who wants to love go despite its warts isn't going to be (nor should they be) convinced by someone writing a dissertation on HN so I'll leave the rest as an exercise to others.
But put simply, the authors' insistence on simplicity at all costs has simply put the burden of complexity on everyone else. A computer is a tower of abstractions hundreds of layers deep. Go's authors' thesis is tantamount to saying that Abstraction Level 481 is "just right" and even a single additional one would clearly make things impossible to reason about.
When one considers it in the wider scope of how many layers there already are and how the language hamstrings its users' ability to make meaningful layers below it, the whole thing comes across as absurd.
I understand why you'd expect me to be dismissive, but I appreciate your taking the time to write this. Errors are certainly verbose, but I personally find the benefit to debuggability and readability (where could this function possibly fail?) worth it. I think the considered and hand crafted error messages knock stack traces out of the park. I think the pain of unwrapping a Result type, and the pain of annotating it with function-specific failure information, would be a step down from go's error handling.
Again, I understand why my comment came off as a trap, but trapping you is only one of my intentions! I’m also interested in understanding where you’re coming from, so thank you.
The pain of unwrapping a result type? What's painful about it? If, rather than automatically bubbling it up with ? operator, you want to handle the possibility of failure inline explicitly, it's a simple case of pattern matching that's no more verbose than the `if err != nil` idiom
    match fallible_function() {
        Err(e) => // handle error
        Ok(val) => // do something with val
    }
In this case, you of course don't need to annotate the outer function's type with its possibility of failure. In the case where you use ?, you of course do have to annotate the possibility of failure. However, I think trying to argue that this is more painful as syntactic ceremony than constant nil checks is a non-starter.
It's a strict improvement. You can choose to unwrap on the spot with the same amount of syntactic ceremony as go, except with the compiler checking you've handled the cases. Or, you can do the same thing you were going to do in go anyway, with a single character and a type annotation instead of a stanza.
All this is ignoring the extra power methods like `map`, `map_err`, `map_or_else`, etc, give you.
1. Extra indentation for both cases, instead of shoving only the error case aside.
2. How do you annotate the error with details of the current function? In go you can write `return fmt.Errorf("parsing point (id=%v): %w", id, err)` and easily add crucial context for devs to understand why an error occurred. This seems harder to do in rust.
Calling that a strict improvement is too black and white, and the point of my asking others to name good things about Go is to force a more nuanced conversation.
1. You can use that style as well. You're free to return early in the error arm of the match, and make use of the Ok value in later straight line code. I've done that in fallible_function in this example:
    fn main() {
        // prints "first call worked"
        if let Ok(i) = fallible_function(Ok(1)) {
            println!("first call worked");
        }
        // prints "second call failed: FallibleError("error!")"
        if let Err(e) = fallible_function(Err("error!".to_string())) {
            println!("second call failed: {:?}", e);
        }
    }

    #[derive(Debug)]
    struct FallibleError(String);

    fn fallible_function(x: Result<i32, String>) -> Result<i32, FallibleError> {
        let y = match x {
            Err(s) => { return Err(FallibleError(s)); },
            Ok(i) => i,
        };
        // y now contains the i that was in the Ok.
        // do straight line code with y here
        Ok(y)
    }
2. You can create custom errors for a specific function, and put any data that you would have passed to Errorf inside. This way you get the ability to introspect errors to see what went wrong programmatically, and all that data is available for later inspection. Note that we could also have returned a formatted string on error instead of FallibleError exactly like in Go if we wanted to.
Of course, the way you'd write fallible_function if you weren't going out of your way to be verbose would be like this:
    fn fallible_function(x: Result<i32, String>) -> Result<i32, FallibleError> {
        let y = x.map_err(|s| FallibleError(s))?;
        // y now contains the i that was in the Ok.
        // do straight line code with y here
        Ok(y)
    }
Separately, the point of all this is to be able to statically know whether a function can fail or not. We know for a fact that fallible_function can fail. If we write a function
fn f(x: i32) -> i32 { .. }
We know for a fact it won't fail (unless it panics, but well behaved code should never panic). We don't even have to worry about the possibility of nils getting in there and screwing us up.
A lot of the criticism of go is a little bit like a guy who hates New York pizza saying "New York pizza sucks, it's only 1 inch. Chicago pizza has 4 more inches, which has been state of the art for years."
Sure, it’s different, but that doesn’t mean it’s bad, and if your criticism only focuses on one detail and leaves out the whole picture, no kidding you produce a harsh judgement.
Frankly your dismissal of people's legitimate concerns with the language as just uneducated griping by fools is one of the reasons I and so many others avoid it and its community like the plague.
I don't know what it is about go, but for some reason it seems impossible to have reasoned—if passionate—disagreement about the language. Any criticisms are hand-waved away as just ranting. Everyone who actually uses it knows that none of these things are real problems. And... what are those real problems, anyway? Can't think of any!
I love Rust, but I'm more than happy to dive into its warts and agree with legitimate complaints and concerns. Hell, I love Ruby too and that language is full of questionable decisions. Not only are language designers imperfect, but there's no such thing as a perfect language anyway. Great decisions have downsides, and there's no sense in acting like those downsides don't exist.
Why is it that gophers never seem to be willing to admit that their language—like all others—has warts, bad tradeoffs, good tradeoffs with uncomfortable but acceptable downsides, and flat-out mistakes? To any criticism, the response is the same: "you don't understand the design", "go's simplicity is its strength", "I've never needed that feature", etc. Hell, the inventor of `nil` calls it his billion-dollar mistake and someone is in this wider discussion arguing that nil pointer dereferences aren't that big a deal.
Where on earth are the gophers that will stand up and say, "Yeah, <X> part of golang sucks. I'd change <Y, Z> if I could. But I still really think it hits the right balance overall." Instead, it's all just regurgitation of the same Kool-Aid.
I’m happy to acknowledge real trade offs made by go. Error handling in go makes writing code harder. This thread is very hostile to go, so it makes sense that my comments express similar hostility.
Sure, but I like Go a lot and am going to defend it, particularly from what I see as baseless or misinformed accusations (that quickly veer into resentment and ad hominem).
I disagree. I've seen and written a lot of golang code, and it's a mess once the domain becomes complex. Those comments are saying the right thing.
Golang was designed without any regard to language developments since the 70s, and it shows. It still has null, and for no good reason. No proper enums, let alone pattern matching. These are mainstream features. The only reason golang became popular was because of branding. Its predecessor didn't go anywhere. I admit that concurrency is somewhat ok, but it lacks the expressiveness to make it much more useful. Java is implementing green threads, and is much better equipped to tackle this area (proper concurrent types, immutable types via records, better profiling, hierarchy management, etc.).
> Unless you’re building a race car, you don’t need to know the differences between file handling in Linux Mac and windows.
> “I don’t understand why it’s designed this way, and I think all good languages should look like lisp/Haskell/rust”
False dichotomy. It's possible for languages to be better designed than golang, yet not be lisp/haskell/rust. Java has been making great strides in this area.
In the spirit of keeping this specific, and to demonstrate your understanding, I’d be curious how you’d answer these questions:
1. What is good about go’s file abstraction? What are the specific real world consequences (the article, which I was actually referencing in my comment, doesn’t deal with what happens in practice)?
2. What is the downside of increasing expressiveness? What is the downside of supporting sophisticated abstraction and type systems?
1. golang provides a bare bones, yet not truly OS independent API, for accessing files. This makes it easy for the compiler writer, but difficult for the consumer
2. Increasing expressiveness, if not done correctly, can end up in a situation like C++ and Scala. This makes it more difficult to choose a subset of the language to work with, makes it more difficult for the compiler writer, and slows down compile times. We know that one of golang's supposed goals is fast compile times, seemingly at all costs. So they choose to keep the compiler dumb, while pushing complexity to the end user.
Java has shown that it is possible to have expressivity, while not having an overburdening type system. This results in safer programs, and a language that has strong modeling capability. golang lacks on both fronts.
I don't think I know of a single thing where there are fewer ways of doing something in Go than there are in Java.
There are multiple ways to declare a variable, to pass a value to a function, to declare a constant, to create something similar to an enum, to return errors, to check for errors, to handle closing, to synchronize parallel threads of execution, to initialize a struct, to create a list of items. I can probably go on.
What are some examples where Go is simpler than Java, other than its current lack of generics which has always been a known-limitation?
There are still two ways: the C-style for loop and its variants (for initializer; condition; increment) and the range for loop with its variants (iterate by key, by value, or both). There's also the option of writing a recursive function.
Still less than Java's five (do-while, while, C-style for, range for, recursion), to be fair.
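For reference, the Go variants being counted look roughly like this:

    import "fmt"

    func printAll(xs []int) {
        // C-style loop
        for i := 0; i < len(xs); i++ {
            fmt.Println(xs[i])
        }
        // range loop over index and value (use _ to drop either)
        for i, x := range xs {
            fmt.Println(i, x)
        }
        // range loop over index only
        for i := range xs {
            fmt.Println(xs[i])
        }
    }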
For any complex codebase, people will build their own sugar and that may differ in implementation so it depends whether that is a good idea.
Subtle differences in similar looking code can trip people and increase complexity. Fortunately, go has a good standard library to compensate for some of it.
I don't think that's true at all. Even the sugar they have is strange: see `go` and `make`. Go has plenty of good features and "lack of ergonomic faculties for common programming idioms in Go" is not one of them.
`new` is basically syntactic sugar (and I personally rarely use it), but `make` is the only way to dynamically allocate or to pre-allocate slices and maps.
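A quick sketch of that difference:

    func example() {
        s := make([]int, 0, 16)   // slice with length 0 and capacity 16
        m := make(map[string]int) // initialized, empty map (writing to a nil map panics)
        p := new(int)             // *int pointing at a newly allocated, zeroed int
        _, _, _ = s, m, p
    }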
I think the core of it is the belief that error handling is no less important than "happy path" code. In some domains, this isn't true. In mine, distributed systems, it is. So I don't want to relegate errors to some ghetto, I want them to be front and center, equal to everything else.
Another small part is probably how you think about the error values themselves. I almost never want to pass an error to my caller exactly as I receive it, I almost always want to do something to it first, most often decorating it with relevant context and metadata where I receive it. Sometimes, obscuring it, if I don't want to leak implementation details.
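Concretely, that decorating step is usually just wrapping at the call site with %w -- a sketch, assuming a hypothetical loadConfig helper:

    import (
        "fmt"
        "os"
    )

    func loadConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            // add context about where the failure happened; %w keeps the
            // original error inspectable via errors.Is / errors.As
            return nil, fmt.Errorf("loading config %q: %w", path, err)
        }
        return data, nil
    }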
But ultimately it's about explicitness, obviousness. `?` is easy to miss, and permits method chaining, the outcome of which is incredibly easy to mispredict. And in imperative code, which is the supermajority of all code, `?` gives no meaningful increase in speed-of-reading -- which is a bogus metric, anyway. So for me, strongly net negative.
> I want them to be front and center, equal to everything else.
That’s fine, and that’s why the function itself will have a return type of `Result<T, E>` for some meaningful return type T and error type E.
Even better, if there’s an error, there is no non-error return value. You can’t accidentally use the zero-valued return half of a tuple (as you can in golang) because it simply isn’t there.
Is the important part of error handling having some copy-pasted stanza repeated everywhere? Or is it enforcing that errors are always handled and semantically-undefined return values are never accidentally passed along in the event of an error?
> But ultimately it's about explicitness, obviousness. `?` is easy to miss, and permits method chaining, the outcome of which is incredibly easy to mispredict.
No, it simply is not. `?` early-aborts the function and returns the result straight away if it’s an error, and unwraps the interior value if not. There is no plausible way for someone to mispredict this behavior, and if there was, it would be no different from golang, since the two constructs are semantically virtually identical. One is simply shorter than the other.
`?` is no less explicit than three lines of copy-pasted code and both its existence and behavior are forced due to the function’s return type.
> And in imperative code, which is the supermajority of all code, `?` gives no meaningful increase in speed-of-reading -- which is a bogus metric, anyway.
Ease of understandability is almost hands-down the most important metric given the ratio of frequency to code being read versus written. And to be completely blunt, it is flatly ridiculous that wrapping every line in nearly-identical error handling code somehow doesn’t impair comprehension. The argument is the same for abstractions like `map`, `select`, `reduce` et al. Intent and behavior of code can be understood at a glance when you remove the minutia of looping, bounds-checking, and indexing and focus on just the operation. And as an added bonus, you remove surface area for potential bugs like off-by-one or fencepost errors.
Having nearly identical error-handling everywhere both in theory and in practice obscures the places where something is different. It is hard to notice small differences in largely-identical blocks of visual information—hence the existence of “spot the difference" games—but it is trivial to spot when those differences are large.
I genuinely struggle to comprehend how people can have ideas like this when they fly in the face of what little hard evidence we do have about syntactic differences in programming.
> Even better, if there’s an error, there is no non-error return value. You can’t accidentally use the zero-valued return half of a tuple (as you can in golang) because it simply isn’t there.
That is better! But it's not as better as I think you think it is. The conventions are adequate, here.
> Is the important part of error handling having some copy-pasted stanza repeated everywhere? Or is it enforcing that errors are always handled and semantically-undefined return values are never accidentally passed along in the event of an error?
Neither, really: it's about having the error code path visually equivalent to the non-error code path.
> No, it simply is not. `?` early-aborts the function and returns the result straight away if it’s an error, and unwraps the interior value if not. There is no plausible way for someone to mispredict this behavior, and if there was, it would be no different from golang, since the two constructs are semantically virtually identical. One is simply shorter than the other.
I don't want early abort. Don't know how else to say it. If I have 5 operations, each of which can fail, I want them to be 5 visually distinct stanzas in my source, and I want to be able to manipulate the errors from each independently.
> Ease of understandability is almost hands-down the most important metric given the ratio of frequency to code being read versus written. And to be completely blunt, it is flatly ridiculous that wrapping every line in nearly-identical error handling code somehow doesn’t impair comprehension. The argument is the same for abstractions like `map`, `select`, `reduce` et al. Intent and behavior of code can be understood at a glance when you remove the minutia of looping, bounds-checking, and indexing
I'm sorry, but I just don't agree. You call looping, bounds-checking, index, etc. minutia, but I don't see it that way.
> I don't want early abort. Don't know how else to say it. If I have 5 operations, each of which can fail, I want them to be 5 visually distinct stanzas in my source, and I want to be able to manipulate the errors from each independently.
Sure you do, in 95% of cases. That's why the whole
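    if err != nil {
        return err
    }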
stanza exists in the first place. And if you don't want it (in Rust), Result being a first-class value means you don't have to early-abort. You just don't type `?` and you operate on the Result directly.
I have to say I'm pretty certain at this point that you haven't actually used Rust, because your points here just... aren't how things work in the language. `?` desugars to the golang error stanza almost verbatim, and if you don't want that you have plenty of other options for how to specifically handle your errors.
> I'm sorry, but I just don't agree. You call looping, bounds-checking, index, etc. minutia, but I don't see it that way.
Wild. It's incomprehensible to me that
    foo
        .filter(|v| v % 2 == 0)
        .map(|a| a + 1);
is somehow less clear to anyone, or that anyone could think there's less room for unintended bugs than
    res := make([]int, len(items))
    for _, v := range items {
        if v%2 == 0 {
            res = append(res, v+1)
        }
    }
when literally 95% of that code is boilerplate. It's immediately clear in the first example that we're incrementing every even number. In the second example, you have to visually parse significantly more code to get the gist, you have to remember to allocate a slice of the right capacity to avoid multiple reallocations for large slices, and you even end up with a slice that's too large for the data it contains in the end.
There's no argument for the go loop being "better" than the Rust equivalent that doesn't also argue that the C version with the additional hassle of bounds-checking and manual incrementing is better still.
It's bad practice, though unfortunately extremely common, to return unannotated errors like this. I can't think of the last time I've used this stanza. The proper form is, at a minimum,
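    if err != nil {
        return fmt.Errorf("parsing point (id=%v): %w", id, err)
    }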
and so on. The point is you have in that stanza the space to program with the error, same as any other value in the function. The... semantic equivalence? which the idiom reinforces is actually extremely good! Error handling isn't any less important than happy path code, and, IMO, language features like `?` suggest that it is.
> It's incomprehensible to me that...
It is not immediately clear that the first example is incrementing every even number. To get there, we have to parse the method names, recall and parse the special syntax rules for those methods, and, if we're being diligent, reflect on the ownership requirements and allocation effects w.r.t. their parameters, to make sure we're not doing anything with unintended side effects.
We're doing basically the same work in the second example, minus the ownership stuff. We're using more characters to do it, but that's not a priori worse. Parsing `res = append(res, v+1)` does not take more time than `map(|a| a + 1)`. Using curly brackets and newlines to demarcate transformation steps instead of monads is not more prone to bugs. It's the same stuff, expressed differently, and, IMO, more coherently: code written in the imperative style is generally easier to understand than functional. (I hope that isn't controversial.)
> There's no argument for the go loop being "better" than the Rust equivalent that doesn't also argue that the C version with the additional hassle of bounds-checking and manual incrementing is better still.
Isn’t that purely an interface-equality “issue”? With Maybes you’d be comparing Maybe(interface(Maybe(T))) against T or against Maybe(T), and it would either fail to compile or be a broken comparison again, depending on the == semantics. Then we’d be reading about muddy == semantics that are either broken or disallow easy comparison through an optional interface, forcing you to build ugly ladders of pattern matching in otherwise one-liner lambdas.
The article incorrectly claims that calling a method on a nil value results in an error. Only attempting to dereference the pointer does. It’s fine to call methods on nil, and that’s part of the “make the zero value useful” philosophy.
In Go, the function to be called by the Expression.Name() syntax is entirely determined by the type of Expression and not by the particular run-time value of that expression, including nil.
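A tiny sketch of that rule (the type and method names are made up): the call compiles and runs on a nil receiver as long as the method body never dereferences it.

    package main

    import "fmt"

    type Foo struct{ name string }

    // Safe on a nil receiver: only compares the pointer itself.
    func (f *Foo) IsNil() bool { return f == nil }

    // Panics on a nil receiver: dereferences f to reach the field.
    func (f *Foo) Name() string { return f.name }

    func main() {
        var f *Foo               // nil pointer of a concrete type
        fmt.Println(f.IsNil())   // prints "true", no panic
        // fmt.Println(f.Name()) // would panic: nil pointer dereference
    }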
The fact that calling a function on a type does not inherently dereference the pointer seems... nuts. Do gophers routinely check if the pointer is `nil`? If the function does not utilize the pointer (like your example) and so avoids an error, the function probably doesn't need to be defined on the type in the first place. This seems like a bug-prone rough edge to me, coming from Java.
Yea, you are correct in your understanding, but such code is super rare in my experience. In fact, until recently I didn't even know such behavior existed at all. Answering your question, checking if a pointer is `nil` isn't that common among gophers. A more common idiom is to return both a pointer and an error, and then check whether the error is non-`nil`:

    foo, err := NewFoo()
    if err != nil {
        // handle the error; foo may be nil here
        return err
    }
    // if err is nil, foo is expected to be usable
If the return value is just a pointer, in my book a nil value should be avoided. I guess what's not cool here is that there are no guarantees. If I'm using a library in which a function returns just a pointer, I sometimes jump into the function's body to check if it ever returns nil so that I don't have to pollute my code with `if` statements. Like, you almost never check for `nil` when dealing with simple factories, e.g. foo := NewFoo(). However, when it comes to more complex method calls, it might be safer to add a quick `nil` check and forget about it.
On one hand, it is rational if you think of Go functions "on a type" as just having an extra argument, but then one might ask, why bother with the special syntax at all?
Just go the C way and if you want to take a "self", actually take it as a regular parameter. That is: (sorry if the syntax is wrong, I have never used Go)
func (f *Foo) sayHello() { ... }
Would become
func sayHello(f *Foo) { ... }
Heck, if you want to keep the value.sayHello() syntax, you can, which would still allow you to build fluent interfaces or whatnot, with the bonus point of being UFCS!
Some people do exactly that and avoid methods altogether. It's perfectly valid to use:
func sayHello(f *Foo) { ... }
One major difference between functions and methods in Go is that you can have multiple methods with the same name, while no two functions with the same name can be defined in a package. Also, there is a subtle, more subjective difference between user.IsActive() and IsActive(user). The former explicitly implies that you operate on a user, whereas the latter isn't that obvious: what if I do IsActive(group), is that also allowed? I like that Go is not trying to be super flexible, but it still nicely holds the niche of a slightly improved (in terms of usability) C.
They basically are functions with an additional argument. You can call them with the method syntax, but you can also call them like a regular function. To use your first example, you can also call it like this:
(*Foo).sayHello(f) // the same as f.sayHello()
It's mainly useful when you want to get a reference to the method itself:
fn := (*Foo).sayHello // a function value that takes one arg of type *Foo
For most purposes the two are the same; the main difference is with how interfaces work in Go. Functions declared with the special method syntax can be used to satisfy interfaces, which in Go means keeping some run-time type information around.
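A short sketch of that last point (all names made up): only the method form lets the type satisfy an interface, even though both forms are callable as plain functions.

    package main

    import "fmt"

    type Greeter interface{ SayHello() }

    type Foo struct{}

    // Method form: this puts SayHello in *Foo's method set,
    // so *Foo satisfies Greeter.
    func (f *Foo) SayHello() { fmt.Println("hello from the method") }

    // Plain-function form: perfectly usable, but it does not
    // contribute to any method set.
    func SayHello(f *Foo) { fmt.Println("hello from the function") }

    func main() {
        f := &Foo{}

        var g Greeter = f // only possible because of the method
        g.SayHello()

        SayHello(f)        // ordinary function call
        (*Foo).SayHello(f) // method expression, same as f.SayHello()
    }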
It's fine to call methods on a concrete nil. But...
> that’s part of the “make the zero value useful” philosophy.
... the zero value of an interface is not a concrete nil! And if the interface does contain a concrete nil, it's no longer nil itself!
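The classic way this bites (the error type here is made up) is a function that returns a typed nil through the error interface:

    package main

    import "fmt"

    type MyErr struct{}

    func (e *MyErr) Error() string { return "boom" }

    // Returns a nil *MyErr. Once it is stored in the error interface,
    // the interface value carries a type (*MyErr) and is no longer nil.
    func doWork() error {
        var e *MyErr
        return e
    }

    func main() {
        err := doWork()
        fmt.Println(err == nil) // false: the interface holds (*MyErr)(nil)
    }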
If the goal was to make zero values useful, Go should rather have offered Obj C-style messages to truly nil interfaces - nothing happens and all values returned are zero.
> If the goal was to make zero values useful, Go should rather have offered Obj C-style messages to truly nil interfaces - nothing happens and all values returned are zero.
IMO this is the worst thing about Obj C. The number of times I’ve discovered code that was silently failing in some Obj C project.....
It's literally the whole thread, man. Boxed nils are nils-but-also-not-nils, and knowing whether something gets boxed means knowing whether the exact function parameter signature (in or out) is an interface or a concrete type, not just whether what you are returning is of the appropriate type.
(And Go has no special syntax for interface vs. concrete types in parameters, so you not only need to read the signature but also the definition of the type.)
As someone who writes a fair bit of Go and has dabbled with Rust a little after trying to learn it 3-4 times, I agree. The Option and Result enums make me feel like I'm not missing an obscure error. Admittedly, I find Go considerably more productive, but I wonder how much of that is familiarity and the cognitive burden the GC eliminates.
As someone who only wrote Golang, and then worked for 2 years on a Rust codebase, I agree as well. Sum types, Option, and Result really change everything. The number of crashes we avoided thanks to these is really meaningful. On the other hand, the number of crashes I found in Golang applications due to a nil dereference is worrisome.
Complexity around memory management and lifetimes is a definite tradeoff that Rust makes. A GC'd imperative language that uses sum/product types seems interesting. I guess that would be Swift? I'm not familiar.
Swift, OCaml, Haskell ('bit of a toughie that one),
Elm if you favor "absolute simplicity": it's a pure functional language which eschews all the advanced type stuff found in most functional languages. And I do mean all: Elm actually eschews abstract types entirely; it doesn't have traits, interfaces, … except for a limited number of built-ins (comparable, number, appendable, and that's about it; well, there's also compappendable, which means comparable & appendable, and some people advocate for removing even those: https://discourse.elm-lang.org/t/just-for-fun-what-if-there-...).
It used to have a concept of "extensible records" but I'm not sure even that remains.
Yeah, I'm familiar with the ML derivatives (I'm a Haskeller); I was musing specifically about imperative languages, because I doubt Go programmers would be interested in that much of a switch-up.
>Here we start to see some sources of real bugs. Writing to and reading from a nil channel blocks forever.
Of course they block forever. If you read from a channel that nothing is writing to, you block forever. If you write to a channel that nothing reads from, you block forever. There's nothing special about the channel being nil.
Trying to assign to a nil map is also perfectly understandable: a nil map is read-only. This is the same as nil slices. "But `a = append(a, 1)` works when a is nil," you might be saying. Yes. That's because `append(a, 1)` is not (always) writing to a.
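A quick sketch of those rules in one place:

    package main

    import "fmt"

    func main() {
        var m map[string]int      // nil map
        fmt.Println(m["missing"]) // reading is fine: prints 0
        // m["x"] = 1             // writing would panic: assignment to entry in nil map

        var a []int      // nil slice
        a = append(a, 1) // fine: append allocates a new backing array and returns it
        fmt.Println(a)   // [1]
    }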
>An interface is actually a fat pointer. It stores a pointer to the value, plus information about the type it points to. As it turns out, the information about the type is actually just another pointer.
Internalizing this understanding was a challenge worth mastering, for my own work. What helped was to make the abstractions concrete in a running program, with the help of Delve's "examine memory" CLI command. Similarly for slices and maps.
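You don't even need a debugger for a first look; on a typical 64-bit platform the two-word layout already shows up in the size (a small sketch):

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        var i interface{} = 42
        // Two machine words: one for the type information, one for the data pointer.
        fmt.Println(unsafe.Sizeof(i)) // 16 on 64-bit platforms
    }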
Personally, I never read or write a channel without some sort of time bound, so this behavior doesn't really bother me:
select {
case foo := <-whateverCh:
    log.Printf("hey a foo: %v", foo)
case <-ctx.Done():
    log.Println("the user doesn't care about the answer anymore, so let's not leak the goroutine")
}
Actually yes - it implies that if the read/write executes, then the read/write was successful.
In practice, in cases where a nil read/write could happen, a default "fall through" option on a select statement is used.
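For instance (channel and message names made up), a non-blocking receive that falls through instead of hanging on a nil channel:

    package main

    import "fmt"

    func main() {
        var maybeNilCh chan int // nil: no send can ever complete on it

        select {
        case v := <-maybeNilCh:
            fmt.Println("got", v) // a nil channel case is never ready
        default:
            fmt.Println("nothing there, moving on") // so the default fires immediately
        }
    }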
They could have made the original interface return a possible error for reading/writing, in line with regular Go error handling, but opted instead to use more channel-based conventions.
I think the question is when would it be useful outside of select. Once you block on nil channel, there's no way to unblock, which doesn't seem helpful.
Such is the danger of blocking indefinitely. I think every Go programmer's first concurrent app eventually crashes because it runs out of memory, memory used by goroutines that are waiting for an answer that will never come. When you write "foo := <-ch", you're saying "I am willing to wait forever for an answer". Unfortunately, actual computers in the real world don't have the resources to wait forever.
Honestly, I see the ability to block indefinitely as a bug in the language. An improved language would probably have something like "result, err := <(ctx)- ch", i.e. make it impossible to block without something bounding the duration of the block. (Also kill sync.Mutex. 100% of Go concurrency bugs are from people mixing mutexes and channels.)
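Something close to that can be written today with a select; here's a sketch (Go 1.18+, and `recvCtx` is a made-up helper, not anything in the standard library):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // recvCtx approximates the proposed "result, err := <(ctx)- ch":
    // it never blocks longer than ctx allows.
    func recvCtx[T any](ctx context.Context, ch <-chan T) (T, error) {
        select {
        case v := <-ch:
            return v, nil
        case <-ctx.Done():
            var zero T
            return zero, ctx.Err()
        }
    }

    func main() {
        ch := make(chan int) // nothing will ever be sent on this
        ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
        defer cancel()

        v, err := recvCtx(ctx, ch)
        fmt.Println(v, err) // 0 context deadline exceeded
    }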
(As an alternative, the runtime could be smarter about goroutines that have blocked forever. I am not sure exactly what it would look like, but the hypothetical first Go program I talk about above would probably be saved by something that killed goroutines that were waiting for a message from a TCP connection that is long gone. I think if it were easy to do right, it would have been done, though.)
The only case I can think of right away is some kind of process that should not die after execution and should wait until the user kills it by hand after reading through the execution report.
Including null in golang was a mistake, but that decision is a lot more defensible than the decision to call it nil. Seriously, can we just pick 1 name and stick to it?
nil is Latin. null is English/French. They both mean the same thing - "nothing". As long as people speak more than one language across the globe, there will be multiple words for the same things.
1. I can't think of any other keyword in golang that comes from Latin. It's a dead language.
2. The meaning of null is well-understood by the CS community. The meaning of nil is not; it seems to be redefined by each language that includes it. For example, nil is the empty list in Scala.
> I can't think of any other keyword in golang that comes from Latin.
Yet you probably know more Latin from programming than you think you do. Did you know that integer is Latin?
> It's a dead language.
"In fields as varied as mathematics, physics, astronomy, medicine, pharmacy, biology, and philosophy Latin still provides internationally accepted names of concepts, forces, objects, and organisms in the natural world."
"interface", "import", "struct", "constant", "defer", "func", "select" are all directly Latin words, truncations of Latin words, or obvious portmanteaus of Latin words. "default" is ultimately from Latin but a bit less immediately so.
It sounds like a comment someone would make after spending a day playing around with Golang; spend a week writing Golang and you won't care what it's called.
Have we learned nothing? Many languages without NIL/Nil/nil/null/NULL already existed when Golang was born.
And this attempt to heal the damage hurts, too:
> “make the zero value useful” philosophy
It's as if, instead of programming by exception (Java), it's programming by ignorance (of errors).
I like a few things about Golang (handling of numeric literals, for example), but not this thing.