
Go would be an infinitely friendlier language if it had had a built-in Optional type from the beginning. People using nil pointers to indicate nil values is a scourge on the language, and pretty much unavoidable given the current design.

My harebrained idea (obviously we can't change it) is that the language would be much better if nil pointers didn't exist. Require that a pointer always has to point to an address, and then people couldn't have built all this infrastructure using nil pointers (something that rightly has to do with reference semantics) to indicate nil values (something that has to do with the meaning of the code).




Having explicitly optional/nullable types is great in languages which are religious about it, since they remove a lot of useless branching and complexity from the code. If it's just tacked on, then it tends to look ugly without solving any problems.


Syntactic sugar makes a big difference as well, e.g. Rust's `?` operator.
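For instance (a minimal sketch; the file name is hypothetical):

    use std::{fs, io};

    // `?` early-returns the Err to the caller, otherwise unwraps the Ok value
    fn read_config(path: &str) -> Result<String, io::Error> {
        let contents = fs::read_to_string(path)?;
        Ok(contents)
    }

    fn main() {
        match read_config("config.toml") {
            Ok(c) => println!("{} bytes of config", c.len()),
            Err(e) => eprintln!("could not read config: {}", e),
        }
    }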


Simplicity in Go is one of its most loved features. For the most part, there are fewer ways to do the same thing, and I love that.


It's actually only superficial simplicity. There have been good comments on HN about this issue. Just because the language is "simple" doesn't mean the complexity of reality goes away; the written code ends up being harder, more verbose, and more difficult to manage compared to more powerful languages.


With all due respect, those comments you’re referencing generally don’t know what they’re talking about. There is a lot of hatred on this website for Go, which appears to have more to do with “I don’t understand why it’s designed this way, and I think all good languages should look like lisp/Haskell/rust” than “this design is net negative for developers”.

Practical simplicity is all about hiding complexity. Unless you’re building a race car, you don’t need to know the differences between file handling on Linux, Mac, and Windows. It just never comes up. And when it does, it’s possible to peek under the hood.

A lot of the criticism of go mistakes “difficult to write” or “not trendy” for “bad design”, and again I assert this is because the critics don’t actually understand what Go is designed for, period.


With all due respect, I do understand why go was designed the way it was, and I vehemently disagree with those decisions. I used go for years, and with every passing day I grew more and more disenchanted with the language.

GP is right. Eschewing abstractions in a programming language forces users of that language to deal with that complexity themselves on a recurring basis. Millions of lines of

    if res, err := fn(...); err != nil {
        return nil, fmt.Errorf("...")
    }
don't help anyone, and only detract from readability which is bar none the most important part of a code base. Sadly, this is one of many symptoms in the language where problems that could have been solved in the language have instead been pushed down to its users to deal with over and over and over.


You claim to understand the decisions, so I’ll push you on that.

Why is go error handling designed the way it is? What are the intended benefits? What are the actual benefits?

A follow up, on your abstraction point: why does go eschew abstraction? Intended upside? Actual upside?

It’s very clear people in these threads can perceive downsides of some of Go’s decisions, but what about upsides? And further, can you recognize how Go copes with the downsides its choices produce?


> but what about upsides?

I don't know much about Go, and I'm one of those people who mostly sees the downsides, so I guess I'll ask you about one of my biggest issues with the language (in the hopes of broadening my perspective on this): what is the upside of returning a value and error, rather than a value or an error? Or I guess, more precisely, why is that the default and only way to do things [1]?

For context, I'm a big fan of explicit errors and errors as values over exceptions as a general principle: my preferred language is Haskell, where Either (essentially Result) is the main way of doing error handling, and exceptions are very rarely used.

[1] - I can see a few cases where returning both is useful, but I can't see it being what I want most of the time


I agree that Either is more semantically correct. But is it more readable? Is it easier to manipulate? I don't think so. To have either, you need two big concepts:

1. Sum types. These can be really useful, and they _might_ improve Go, but they are undeniably a new complication and something users must now understand to work with Go code. This is not free. Is it the best use of developer minds to be paying rent to the Sum type concept? Or might dev teams be better off using that rent on some other domain specific problem?

2. Unwrapping. Now you have a container for your data, and you have to peel it apart to manipulate the data. This is a minor cut, but it adds up over time, and I think it leads to necessarily more convoluted code. `if err != nil` is the cost of separate err types, but visually it's minor, and GoLand at least auto-collapses it so it interferes as little as possible with your code. I think matching on errs, or just propagating the error case, are both more complex for readers to parse. Furthermore, propagating a naked error loses crucial context (where in this function did we fail? what info from this function do I need to debug this?) that is dead simple to add in Go.

I think a sum type has a semantic advantage, but it also has a cost, and I think you can make an argument it's not worth the dough.


Not to get in the way of the fun, but my point at the start was less about the semantics of an Optional type in particular, but more about how in the absence of an Optional (or I suppose a nil value) people abuse nil pointers as a stand-in, and that's been a source of lots of bugs I've seen.

The language was designed without a nil value, opting instead for an empty value. It turns out that in lots of real-world applications some kind of affordance for "this doesn't exist" is so common that even standard libraries use nil pointers to model that. That leads to bugs because now lots of things are pointers that don't actually need to be passed by reference, and the language's affordance for _everything_ being able to be passed by value is being circumvented. An optional type (even without any full blown support for sum types in the language) would have solved this problem neatly.
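A minimal sketch of the pattern I mean (the struct and field are hypothetical):

    package main

    import "fmt"

    type User struct {
        // Age is a pointer only so "not provided" can be told apart from 0,
        // not because anyone needs reference semantics for an int
        Age *int
    }

    func main() {
        var u User
        // every consumer has to remember this check, forever
        if u.Age != nil {
            fmt.Println("age:", *u.Age)
        } else {
            fmt.Println("age not provided")
        }
    }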


> You claim to understand the decisions, so I’ll push you on that.

I don't know why I'm biting at this, because it's clearly a set-up.

No matter what I say here, if my explanation of their design is in any way incomplete, I'll be taken to task for that omission and held up as an example of yet another ignorant hater who clearly doesn't understand the brilliant minds of its creators. If I accurately detail most of its purpose but make a handful of minor technical errors (after all, it's been years since I stopped using it), I expect the same.

Here goes anyway.

> Why is go error handling designed the way it is? What are the intended benefits? What are the actual benefits?

Go's error handling is designed in response to the problems its authors perceive with exceptions. There are many genuinely reasonable problems one might wish to design around.

Unchecked exceptions implicitly bubbling up from any function you might call is something they wanted to avoid. They wanted to encourage handling errors as close as possible to where those errors occur. They wanted to force error-handling to be explicit. And they believe that error-handling code is as important as—if not more important than—the "happy path" code, and so shouldn't be tucked away out of sight.

All of these goals are reasonable. It's ultimately the execution that's turned out awful.

What are the actual benefits? Well, it's hard to argue against the explicitness, but personally I wouldn't call it a benefit. Sampling random large projects on GitHub demonstrates that production go code is something approaching (and potentially even exceeding) 50% error-handling stanzas in practice. In making things explicit, they've swung the pendulum way too far in the opposite direction and managed to make actual program logic dramatically more difficult to decipher.

Unchecked exceptions can't implicitly bubble up through your code, it's true. But most go error handling just... explicitly bubbles those same errors up, "decorating" them with text to serve as breadcrumbs when trying to understand where an error occurred. We've simply created human exception handlers and in doing so have lost stack traces in the process. There appears to be no convention of declaring per-error structs that might help one determine what went wrong programmatically, so every error is effectively "stringly" typed and after it's been bubbled up once it's effectively impossible for a higher layer of code to understand specifics of what might have gone wrong. Was the problem with your HTTP call a network error (try again!) or a server error (fatal)? If for some reason you couldn't handle it right where it happened, you have little chance of being able to tell the difference between the two later.
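A rough sketch of the style I mean (names hypothetical); with plain %v decoration, only the string survives, and callers further up can no longer match on the original error:

    package main

    import (
        "errors"
        "fmt"
    )

    var errTimeout = errors.New("connection timed out")

    func callAPI(id string) (string, error) {
        return "", errTimeout // hypothetical low-level failure
    }

    func fetchUser(id string) (string, error) {
        u, err := callAPI(id)
        if err != nil {
            // the "human exception handler": decorate with breadcrumbs and rethrow
            return "", fmt.Errorf("fetching user %s: %v", id, err)
        }
        return u, nil
    }

    func main() {
        _, err := fetchUser("42")
        fmt.Println(err)                        // fetching user 42: connection timed out
        fmt.Println(errors.Is(err, errTimeout)) // false: the identity was lost in the string
    }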

The benefits they did reap have come with some pretty massive caveats. And with this design, they've brought in additional own-goals that should have been so easy to avoid but somehow weren't.

Calling a function and doing something with the value or bubbling up the error is something like 95%+ of error handling in go. Rust makes this a single character: `?`. With go, you're forced to copypasta the error-handling stanza, hiding the actual logic you're trying to accomplish in pointless administrivia.

Further, with tuple values, you get a value and an error rather than a value or an error. For a function that returns an `int, error`, you get back a real `int` along with your error! If you make a mistake and forget to actually handle the error or bubble it up, it's all too easy to use the value. Its value might be well-defined (usually the zero-value) but the semantics of that value likely aren't. Ask me how many bugs I've seen in production code where a bug in error-handling allowed meaningless zero-values to plow their way forward through happy-path logic before causing problems somewhere completely unrelated to where the original error occurred!
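A contrived sketch of that failure mode (parseCount is hypothetical):

    package main

    import (
        "fmt"
        "strconv"
    )

    func parseCount(s string) (int, error) {
        n, err := strconv.Atoi(s)
        if err != nil {
            return 0, fmt.Errorf("parsing count: %w", err)
        }
        return n, nil
    }

    func main() {
        count, err := parseCount("not a number")
        _ = err // oops: the error is silently dropped instead of handled
        // count is a perfectly usable 0, so the mistake surfaces far away,
        // e.g. as an empty allocation or a divide-by-zero somewhere downstream
        fmt.Println("allocating", count, "slots")
    }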

All of this is to say, go's designers had real, valid concerns with exceptions in mind when designing the language's error handling constructs. What they didn't seem to do was consider what problems their design would introduce. Of course, most (but not all) of these problems could have been sidestepped by having an Option/Result type like Rust (or equivalently, a Maybe/Either type like Haskell). There's even precedent in the language for "blessed" generic types like maps and slices! They could have done this, even without introducing full generics.

> A follow up, on your abstraction point: why does go eschew abstraction? Intended upside? Actual upside?

This post has already gotten too long and honestly anyone who wants to love go despite its warts isn't going to be (nor should they be) convinced by someone writing a dissertation on HN so I'll leave the rest as an exercise to others.

But put simply, the authors' insistence on simplicity at all costs has simply put the burden of complexity on everyone else. A computer is a tower of abstractions hundreds of layers deep. Go's authors' thesis is tantamount to saying that Abstraction Level 481 is "just right" and even a single additional one would clearly make things impossible to reason about.

When one considers it in the wider scope of how many layers there already are and how the language hamstrings its users' ability to make meaningful layers below it, the whole thing comes across as absurd.


I understand why you’d expect me to be dismissive, but I appreciate your taking the time to write this. Errors are certainly verbose, but I personally find the benefit to debuggability and readability (where could this function possibly fail?) worth it. I think the considered and hand-crafted error messages knock stack traces out of the park. I think the pain of unwrapping a Result type, and the pain of annotating it with function-specific failure information, would be a step down from Go's error handling.

Again, I understand why my comment came off as a trap, but trapping you is only one of my intentions! I’m also interested in understanding where you’re coming from, so thank you.


The pain of unwrapping a result type? What's painful about it? If, rather than automatically bubbling it up with ? operator, you want to handle the possibility of failure inline explicitly, it's a simple case of pattern matching that's no more verbose than the `if err != nil` idiom

    match fallible_function() {
        Err(e) => { /* handle error */ }
        Ok(val) => { /* do something with val */ }
    }
In this case, you of course don't need to annotate the outer function's type with its possibility of failure. In the case where you use ?, you of course do have to annotate the possibility of failure. However, I think trying to argue that this is more painful as syntactic ceremony than constant nil checks is a non-starter.

It's a strict improvement. You can choose to unwrap on the spot with the same amount of syntactic ceremony as go, except with the compiler checking you've handled the cases. Or, you can do the same thing you were going to do in go anyway, with a single character and a type annotation instead of a stanza.

All this is ignoring the extra power methods like `map`, `map_err`, `map_or_else`, etc, give you.
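For example (parse_port is a hypothetical function):

    // map transforms the Ok value, map_err decorates the error,
    // map_or_else picks a fallback when the Result is an Err
    fn parse_port(s: &str) -> Result<u16, String> {
        s.trim()
            .parse::<u16>()
            .map_err(|e| format!("invalid port {:?}: {}", s, e))
    }

    fn main() {
        let doubled = parse_port("40").map(|p| p * 2);
        println!("{:?}", doubled); // Ok(80)

        let port = parse_port("oops").map_or_else(|_err| 8080, |p| p);
        println!("{}", port); // 8080
    }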


What's painful:

1. Extra indentation for both cases, instead of shoving only the error case aside.

2. How do you annotate the error with details of the current function? In Go you can write `return fmt.Errorf("parsing point (id=%v): %w", id, err)` and easily add crucial context for devs to understand why an error occurred. This seems harder to do in Rust.

Calling that a strict improvement is too black and white, and the point of my asking others to name good things about Go is to force a more nuanced conversation.


1. You can use that style as well. You're free to return early in the error arm of the match, and make use of the Ok value in later straight line code. I've done that in fallible_function in this example:

    fn main() {
        // prints "first call worked"
        if let Ok(_i) = fallible_function(Ok(1)) {
            println!("first call worked");
        }

        // prints "second call failed: FallibleError("error!")"
        if let Err(e) = fallible_function(Err("error!".to_string())) {
            println!("second call failed: {:?}", e);
        }
    }

    #[derive(Debug)]
    struct FallibleError(String);

    fn fallible_function(x: Result<i32, String>) -> Result<i32, FallibleError> {
        let y = match x {
            Err(s) => { return Err(FallibleError(s)); },
            Ok(i) => i,
        };
    
        // y now contains the i that was in the Ok.
        // do straight line code with y here
    
        Ok(y)
    }
2. You can create custom errors for a specific function, and put any data that you would have passed to Errorf inside. This way you get the ability to introspect errors to see what went wrong programmatically, and all that data is available for later inspection. Note that we could also have returned a formatted string on error instead of FallibleError exactly like in Go if we wanted to.
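For instance, a hypothetical error type carrying the same context the Errorf call above would:

    #[derive(Debug)]
    struct ParsePointError {
        id: u64,        // which point we were parsing
        source: String, // the underlying parse failure
    }

    fn parse_point(id: u64, raw: &str) -> Result<i32, ParsePointError> {
        raw.trim()
            .parse::<i32>()
            .map_err(|e| ParsePointError { id, source: e.to_string() })
    }

    fn main() {
        match parse_point(7, "not-a-number") {
            Ok(v) => println!("parsed {}", v),
            // callers can inspect fields instead of grepping a formatted string
            Err(e) => println!("failed on point {}: {}", e.id, e.source),
        }
    }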

Of course, the way you'd write fallible_function if you weren't going out of your way to be verbose would be like this:

    fn fallible_function(x: Result<i32, String>) -> Result<i32, FallibleError> {
        let y = x.map_err(|s| FallibleError(s) )?;
        // y now contains the i that was in the Ok.
        // do straight line code with y here
    
        Ok(y)
    }
Separately, the point of all this is to be able to statically know whether a function can fail or not. We know for a fact that fallible_function can fail. If we write a function

    fn f(x: i32) -> i32 { .. }
We know for a fact it won't fail (unless it panics, but well behaved code should never panic). We don't even have to worry about the possibility of nils getting in there and screwing us up.


A lot of the criticism of go is a little bit like a guy who hates New York pizza saying “New York pizza sucks, it’s only 1 inch. Chicago pizza has 4 more inches, which has been state of the art for years.”

Sure, it’s different, but that doesn’t mean it’s bad, and if your criticism only focuses on one detail and leaves out the whole picture, no kidding you produce a harsh judgement.


Frankly your dismissal of people's legitimate concerns with the language as just uneducated griping by fools is one of the reasons I and so many others avoid it and its community like the plague.

I don't know what it is about go, but for some reason it seems impossible to have reasoned—if passionate—disagreement about the language. Any criticisms are hand-waved away as just ranting. Everyone who actually uses it knows that none of these things are real problems. And... what are those real problems, anyway? Can't think of any!

I love Rust, but I'm more than happy to dive into its warts and agree with legitimate complaints and concerns. Hell, I love Ruby too and that language is full of questionable decisions. Not only are language designers imperfect, but there's no such thing as a perfect language anyway. Great decisions have downsides, and there's no sense in acting like those downsides don't exist.

Why is it that gophers never seem to be willing to admit that their language—like all others—has warts, bad tradeoffs, good tradeoffs with uncomfortable but acceptable downsides, and flat-out mistakes? To any criticism, the response is the same: "you don't understand the design", "go's simplicity is its strength", "I've never needed that feature", etc. Hell, the inventor of `nil` calls it his billion-dollar mistake, and someone in this wider discussion is arguing that nil pointer dereferences aren't that big a deal.

Where on earth are the gophers that will stand up and say, "Yeah, <X> part of golang sucks. I'd change <Y, Z> if I could. But I still really think it hits the right balance overall." Instead, it's all just regurgitation of the same Kool-Aid.


I’m happy to acknowledge real trade offs made by go. Error handling in go makes writing code harder. This thread is very hostile to go, so it makes sense that my comments express similar hostility.


> This thread is very hostile to go

It's a programming language, it'll be OK.


Sure, but I like Go a lot and am going to defend it, particularly from what I see as baseless or misinformed accusations (that quickly veer into resentment and ad hominem).


Don’t, it’s not a good look.


I’m still going to lol. Bad look or not.


I disagree. I've seen and written a lot of golang code, and it's a mess once the domain becomes complex. Those comments are saying the right thing.

Golang was designed without any regard to language developments since the 70s, and it shows. It still has null, and for no good reason. No proper enums, let alone pattern matching. These are mainstream features. The only reason golang became popular was because of branding. Its predecessor didn't go anywhere. I admit that concurrency is somewhat ok, but it lacks the expressiveness to make it much more useful. Java is implementing green threads, and is much better equipped to tackle this area (proper concurrent types, immutable types via records, better profiling, hierarchy management, etc.).

> Unless you’re building a race car, you don’t need to know the differences between file handling in Linux Mac and windows.

And golang does a terrible job at that abstraction: https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...

> “I don’t understand why it’s designed this way, and I think all good languages should look like lisp/Haskell/rust”

False dichotomy. It's possible for languages to be better designed than golang, yet not be lisp/haskell/rust. Java has been making great strides in this area.


In the spirit of keeping this specific, and to demonstrate your understanding, I’d be curious how you’d answer these questions:

1. What is good about go’s file abstraction? What are the specific real world consequences (the article, which I was actually referencing in my comment, doesn’t deal with what happens in practice)?

2. What is the downside of increasing expressiveness? What is the downside of supporting sophisticated abstraction and type systems?


1. golang provides a bare-bones, yet not truly OS-independent, API for accessing files. This makes it easy for the compiler writer, but difficult for the consumer.

2. Increasing expressiveness, if not done correctly, can end up in a situation like C++ and Scala. This makes it more difficult to choose a subset of the language to work with, makes it more difficult for the compiler writer, and slows down compile times. We know that one of golang's supposed goals is fast compile times, seemingly at all costs. So they choose to keep the compiler dumb, while pushing complexity to the end user.

Java has shown that it is possible to have expressivity, while not having an overburdening type system. This results in safer programs, and a language that has strong modeling capability. golang lacks on both fronts.


I don't think I know of a single thing where there are fewer ways of doing something in Go than there are in Java.

There are multiple ways to declare a variable, to pass a value to a function, to declare a constant, to create something similar to an enum, to return errors, to check for errors, to handle closing, to synchronize parallel threads of execution, to initialize a struct, to create a list of items. I can probably go on.
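Just variable declaration alone, for example:

    package main

    import "fmt"

    func main() {
        // at least four spellings of "declare and initialize an int"
        var a int = 1
        var b = 2
        c := 3
        var d int // zero value, assigned later
        d = 4
        fmt.Println(a, b, c, d)
    }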

What are some examples where Go is simpler than Java, other than its current lack of generics which has always been a known-limitation?


There is one way to iterate over things, for any kind of elementwise processing: a `for` loop.

There is one way to format your code ;)


There are still two ways: the C-style for loop and its variants (for initializer; condition; increment) and the range for loop with its variants (iterate by key, by value, or both). There's also the option of writing a recursive function.

Still fewer than Java's five (do-while, while, C-style for, range for, recursion), to be fair.
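Concretely, all of the Go variants hang off the single `for` keyword:

    package main

    import "fmt"

    func main() {
        items := []string{"a", "b", "c"}

        // C-style
        for i := 0; i < len(items); i++ {
            fmt.Println(i, items[i])
        }

        // range, by index and value (or either one alone)
        for i, v := range items {
            fmt.Println(i, v)
        }

        // condition-only, i.e. a while loop
        n := 0
        for n < 3 {
            n++
        }
    }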


Which is simpler?

    if ptr == nil {
        return nil, fmt.Errorf("bad")
    }

    val := *ptr
or

    let value = ptr?;
If `?` is unacceptable complexity, how on earth do you deal with functions—or, even worse—control flow keywords?


For any complex codebase, people will build their own sugar, and those implementations may differ, so it depends whether that is a good idea.

Subtle differences in similar-looking code can trip people up and increase complexity. Fortunately, go has a good standard library to compensate for some of it.


Relative lack of syntactic sugar is one of Go's best features though.


I don't think that's true at all. Even the sugar they have is strange: see `go` and `make`. Go has plenty of good features, and "lack of ergonomic faculties for common programming idioms in Go" is not one of them.


`new` is basically syntactic sugar (and I personally rarely use it), but `make` is the only way to dynamically allocate or to pre-allocate slices and maps.
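For reference, roughly how the two builtins are used (the sizes here are arbitrary):

    package main

    import "fmt"

    func main() {
        s := make([]int, 0, 64)       // slice: length 0, capacity 64
        m := make(map[string]int, 16) // map with a size hint
        ch := make(chan int, 8)       // buffered channel
        p := new(int)                 // new: pointer to a zeroed int

        fmt.Println(len(s), cap(s), len(m), cap(ch), *p)
    }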


`?` makes code substantially less coherent, not more.


I would love to see some rationale behind this opinion.


I think the core of it is the belief that error handling is no less important than "happy path" code. In some domains, this isn't true. In mine, distributed systems, it is. So I don't want to relegate errors to some ghetto, I want them to be front and center, equal to everything else.

Another small part is probably how you think about the error values themselves. I almost never want to pass an error to my caller exactly as I receive it, I almost always want to do something to it first, most often decorating it with relevant context and metadata where I receive it. Sometimes, obscuring it, if I don't want to leak implementation details.

But ultimately it's about explicitness, obviousness. `?` is easy to miss, and permits method chaining, the outcome of which is incredibly easy to mispredict. And in imperative code, which is the supermajority of all code, `?` gives no meaningful increase in speed-of-reading -- which is a bogus metric, anyway. So for me, strongly net negative.


> I want them to be front and center, equal to everything else.

That’s fine, and that’s why the function itself will have a return type of `Result<T, E>` for some meaningful return type T and error type E.

Even better, if there’s an error, there is no non-error return value. You can’t accidentally use the zero-valued return half of a tuple (as you can in golang) because it simply isn’t there.

Is the important part of error handling having some copy-pasted stanza repeated everywhere? Or is it enforcing that errors are always handled and semantically-undefined return values are never accidentally passed along in the event of an error?

> But ultimately it's about explicitness, obviousness. `?` is easy to miss, and permits method chaining, the outcome of which is incredibly easy to mispredict.

No, it simply is not. `?` early-aborts the function and returns the result straight away if it’s an error, and unwraps the interior value if not. There is no plausible way for someone to mispredict this behavior, and if there was, it would be no different from golang, since the two constructs are semantically virtually identical. One is simply shorter than the other.
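A sketch of that equivalence (simplified; the real desugaring goes through the Try trait and converts the error type with From::from):

    use std::num::ParseIntError;

    // roughly what `let n = s.parse::<i32>()?;` expands to
    fn parse(s: &str) -> Result<i32, ParseIntError> {
        let n = match s.parse::<i32>() {
            Ok(v) => v,
            Err(e) => return Err(e),
        };
        Ok(n)
    }

    fn main() {
        println!("{:?}", parse("41"));   // Ok(41)
        println!("{:?}", parse("nope")); // Err(..)
    }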

`?` is no less explicit than three lines of copy-pasted code and both its existence and behavior are forced due to the function’s return type.

> And in imperative code, which is the supermajority of all code, `?` gives no meaningful increase in speed-of-reading -- which is a bogus metric, anyway.

Ease of understandability is almost hands-down the most important metric, given how much more often code is read than written. And to be completely blunt, it is flatly ridiculous to claim that wrapping every line in nearly-identical error handling code somehow doesn’t impair comprehension. The argument is the same for abstractions like `map`, `select`, `reduce` et al. Intent and behavior of code can be understood at a glance when you remove the minutiae of looping, bounds-checking, and indexing and focus on just the operation. And as an added bonus, you remove surface area for potential bugs like off-by-one or fencepost errors.

Having nearly identical error-handling everywhere both in theory and in practice obscures the places where something is different. It is hard to notice small differences in largely-identical blocks of visual information—hence the existence of “spot the difference” games—but it is trivial to spot when those differences are large.

I genuinely struggle to comprehend how people can have ideas like this when they fly in the face of what little hard evidence we do have about syntactic differences in programming.


> Even better, if there’s an error, there is no non-error return value. You can’t accidentally use the zero-valued return half of a tuple (as you can in golang) because it simply isn’t there.

That is better! But it's not as better as I think you think it is. The conventions are adequate, here.

> Is the important part of error handling having some copy-pasted stanza repeated everywhere? Or is it enforcing that errors are always handled and semantically-undefined return values are never accidentally passed along in the event of an error?

Neither, really: it's about having the error code path visually equivalent to the non-error code path.

> No, it simply is not. `?` early-aborts the function and returns the result straight away if it’s an error, and unwraps the interior value if not. There is no plausible way for someone to mispredict this behavior, and if there was, it would be no different from golang, since the two constructs are semantically virtually identical. One is simply shorter than the other.

I don't want early abort. Don't know how else to say it. If I have 5 operations, each of which can fail, I want them to be 5 visually distinct stanzas in my source, and I want to be able to manipulate the errors from each independently.

> Ease of understandability is almost hands-down the most important metric given the ratio of frequency to code being read versus written. And to be completely blunt, it is flatly ridiculous that wrapping every line in nearly-identical error handling code somehow doesn’t impair comprehension. The argument is the same for abstractions like `map`, `select`, `reduce` et al. Intent and behavior of code can be understood at a glance when you remove the minutia of looping, bounds-checking, and indexing

I'm sorry, but I just don't agree. You call looping, bounds-checking, indexing, etc. minutiae, but I don't see it that way.


> I don't want early abort. Don't know how else to say it. If I have 5 operations, each of which can fail, I want them to be 5 visually distinct stanzas in my source, and I want to be able to manipulate the errors from each independently.

Sure you do, in 95% of cases. That's why the whole

    if res, err := fn(...); err != nil {
        return nil, err
    }
stanza exists in the first place. And if you don't want it (in Rust), Result being a first-class value means you don't have to early-abort. You just don't type `?` and you operate on the Result directly.

I have to say I'm pretty certain at this point that you haven't actually used Rust, because your points here just... aren't how things work in the language. `?` desugars to the golang error stanza almost verbatim, and if you don't want that you have plenty of other options for how to specifically handle your errors.

> I'm sorry, but I just don't agree. You call looping, bounds-checking, index, etc. minutia, but I don't see it that way.

Wild. It's incomprehensible to me that

    foo
        .filter(|v| v % 2 == 0 )
        .map(|a| a + 1);
is somehow less clear to anyone, or that anyone could think there's less room for unintended bugs than

    res := make([]int, len(items))
    for _, v := range items {
        if (v % 2 == 0) {
            res = append(res, v + 1)
        }
    }
when literally 95% of that code is boilerplate. It's immediately clear in the first example that we're incrementing every even number. In the second example, you have to visually parse significantly more code to get the gist, you have to remember to allocate a slice of the right capacity to avoid multiple reallocations for large slices, and you even end up with a slice that's too large for the data it contains in the end.
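(For what it's worth, the version presumably intended would pre-allocate capacity rather than length:)

    res := make([]int, 0, len(items)) // capacity, not length: no leading zeros
    for _, v := range items {
        if v%2 == 0 {
            res = append(res, v+1)
        }
    }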

There's no argument for the go loop being "better" than the Rust equivalent that doesn't also argue that the C version with the additional hassle of bounds-checking and manual incrementing is better still.


    if res, err := fn(...); err != nil {
        return nil, err
    }

It's bad practice, though unfortunately extremely common, to return unannotated errors like this. I can't think of the last time I've used this stanza. The proper form is, at a minimum,

    if err != nil {
        return nil, fmt.Errorf("executing request: %w", err)
    }
Or, if there's additional, caller-actionable information about the error you want to provide,

    if err != nil {
        return nil, OtherError{Inner: err, Extra: xyz, ...}
    }
and so on. The point is you have in that stanza the space to program with the error, same as any other value in the function. The... semantic equivalence? which the idiom reinforces is actually extremely good! Error handling isn't any less important than happy path code, and, IMO, language features like `?` suggest that it is.

> It's incomprehensible to me that...

It is not immediately clear that the first example is incrementing every even number. To get there, we have to parse the method names, recall and parse the special syntax rules for those methods, and, if we're being diligent, reflect on the ownership requirements and allocation effects w.r.t. their parameters, to make sure we're not doing anything with unintended side effects.

We're doing basically the same work in the second example, minus the ownership stuff. We're using more characters to do it, but that's not a priori worse. Parsing `res = append(res, v+1)` does not take more time than `map(|a| a + 1)`. Using curly brackets and newlines to demarcate transformation steps instead of monads is not more prone to bugs. It's the same stuff, expressed differently, and, IMO, more coherently: code written in the imperative style is generally easier to understand than functional. (I hope that isn't controversial.)

> There's no argument for the go loop being "better" than the Rust equivalent that doesn't also argue that the C version with the additional hassle of bounds-checking and manual incrementing is better still.

Reductio ad absurdum.


Isn’t that purely an interface-equality-related “issue”? With Maybes you’d compare Maybe(interface(Maybe(T))) with T or with Maybe(T), and it would either not compile or be a broken comparison again, depending on the == semantics. Then we’d be back to muddy == semantics, which are either broken or disallow easy comparison through an optional interface and force one to build ugly ladders of pattern matching in otherwise one-liner lambdas.



