The Next Step for Generics (golang.org)
673 points by ainar-g on June 16, 2020 | 664 comments



Looks like a big step in the right direction. The biggest pain is that methods cannot take additional type parameters, which prevents the definition of generic containers with a map method like

    func (c Container(T)) Map(type V)(transform func(T) V) Container(V)
If you want to see what `Result(T)` and `Option(T)` look like under this proposal, check out a very stupid implementation here: https://go2goplay.golang.org/p/dYth-AQ0Fru It's definitely more annoying.

But, as per https://go.googlesource.com/proposal/+/refs/heads/master/des... it looks like maybe they will allow additional type parameters in methods, which would be REALLY nice.


There's no need to use the pointer in there; you could just use an ok bool instead (saving the indirection): https://go2goplay.golang.org/p/4hr8zINfRym

I think it's interesting to observe that using the Result type is really not that much different from using a multiple return value - it's actually worse in some ways because you end up using "Unwrap" which can panic.
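
For reference, a minimal sketch of that ok-bool layout in the draft's parenthesized syntax (names are mine, not taken from the linked playground):

    type Option(type T) struct {
        value T
        ok    bool
    }

    func Some(type T)(v T) Option(T) { return Option(T){value: v, ok: true} }

    func (o Option(T)) OK() bool { return o.ok }

    // Unwrap is the panicky accessor mentioned above.
    func (o Option(T)) Unwrap() T {
        if !o.ok {
            panic("Unwrap on empty Option")
        }
        return o.value
    }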


I like your solution, although it's a tradeoff when the value type is some kind of nested struct whose zero value could be expensive to create.

Agreed that multiple return is actually quite nice in many situations, although `Unwrap()` is generally only called after checking `OK()` and the real benefit is in using `Map()`.

    a, err := DoCalculation()
    if err != nil {
      return err
    }
    b, err := Transform(a)
    if err != nil {
      return err
    }
is a lot more verbose and harder to read than

    a := DoCalculation()
    b := ResultMap(a, Transform)
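
For concreteness, a hedged sketch of what a package-level `ResultMap` could look like under the draft, given that methods can't introduce the new type parameter V (names are illustrative):

    type Result(type T) struct {
        value T
        err   error
    }

    // ResultMap applies f only when r carries a value; the error otherwise
    // propagates untouched. Type inference lets callers write
    // ResultMap(a, Transform) without spelling out T and V.
    func ResultMap(type T, V)(r Result(T), f func(T) (V, error)) Result(V) {
        if r.err != nil {
            return Result(V){err: r.err}
        }
        v, err := f(r.value)
        return Result(V){value: v, err: err}
    }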


I happened upon this comment a bit late, and caveat I'm not really a software engineer, but this comment made me think of something...

I've written a decent amount of code professionally in the first and second styles.

I sincerely prefer the second style, for reasons that are hard to articulate; maybe the aesthetic, the space, or something that I assume is equally questionable.

After I stopped writing a lot of code, when I got pulled back in to debug or provide context long after the fact, it was way easier in "real" code bases to catch myself back up on things when the code was in style 1 than when it was in style 2!

I may be alone in this sentiment, and I even regret that I have it.

I think there is also a bit of satisfaction in writing the code in the second example, especially when the example is a lot more complex than the one provided here.

Maybe it comes down to how much re-reading/maintenance your codebase actually needs. I wonder whether anyone has mapped coding convention/style to the types of problems that code creates, and how repairing those problems depends on the style... I'm sure if I google it I'll find something :)


This should be the top comment: constructive criticism with a working example of how a simple sum-type-Rust-like-thing would be implemented.


You can improve the non-method syntax slightly by using option.Map in Go style. But it still isn't as nice as a method.

I think type parameters on methods will definitely get added, since they're so standard and, if anything, it's more confusing not to have them.


Yeah, it's a little nicer that way, but methods would be ideal IMO.


I like generics for collections, but that is about it. I've seen "gifted" programmers turn everything into generic classes and functions, which then makes it very difficult for anyone else to figure out what is going on.

One reason I like go is it doesn't have all the bells and whistles that give too many choices.


Here's some generic code I wrote in Rust recently. I had two unrelated iterators/collections and I needed to flatten and sort them into a single buffer.

    struct Data {
        idx: usize
        // ... 
    }
    
    struct Foo {
        stuff: Vec<Vec<Data>>
    }

    
    struct Bar {
        stuff: Vec<Data>
    }


    fn flatten_and_sort(foo: Foo, bar: Bar) -> Vec<Data> {
        let mut output = 
            foo.stuff.into_iter()
               .flat_map(|v| v.into_iter())
               .chain(bar.stuff.into_iter())
               .collect::<Vec<_>>();

        output.sort_by(|lhs, rhs| lhs.idx.cmp(&rhs.idx));
        output
    }
Now you could argue that this is just "generics for collections", but the way those iterator combinators are written makes heavy use of generics/traits that aren't immediately applicable to collections. Those same combinator techniques can be applied to a whole host of other abstractions that allow for that kind of user code, but it's only possible if the type system empowers library authors to do it.


You can actually use the power of generics to get rid of the two inner calls to `into_iter()`:

    let mut output: Vec<_> =  
        foo.stuff.into_iter()
            .flat_map(|v| v)
            .chain(bar.stuff)
            .collect();


I believe you can just use `.flatten()` instead of `.flat_map(|v| v)`:

    let mut output: Vec<_> =  
        foo.stuff.into_iter()
            .flatten()
            .chain(bar.stuff)
            .collect();
I might make it slightly more clear what the intent is by doing this:

    let mut output: Vec<_> =
        Iterator::chain(foo.stuff.into_iter().flatten(), bar.stuff)
            .collect();
But still basically the same.


Yes, functional programming patterns are nice. It's possible to write that even more concisely in JavaScript.


I've certainly encountered codebases that abused inheritance and templates/generics to the point of obfuscation, but you can abuse anything really. Besides, in my experience the worst offenders were in C++, where the meta-programming is effectively duck-typed. Trait-based generics like Rust's go a long way toward making generic code readable, since you're always aware of exactly what meta-type you're working with.

I definitely don't use generics if they can be avoided, and I think preemptive use of genericity "just in case" can lead to the situation you describe. If I'm not sure I'll really need generics I just start writing my code without them and refactor later on if I find that I actually need them.

But even if you only really care about generics for collections, that's still a massive use case. There's a wealth of custom and optimized collections implemented in third party crates in Rust. Generics make these third-party collections as easy and clean to use as first party ones (which are usually themselves implemented in pure Rust, with hardly any compiler magic). Being easily able to implement a generic Atomic type, a generic Mutex type etc... without compiler magic is pretty damn useful IMO.


What's wrong with this?

  class Result<T>
  {
    bool IsSuccess {get; set;}
    string Message {get; set;}
    T Data {get; set;}
  }
On many occasions, I like using result types for defining a standard response for calls. It's typed and success / fail can be handled as a cross-cutting concern.


That's a generic container of 0 or 1 elements ;)

It's also incredibly unsafe, and shows why generics alone aren't enough. C++, Java, and so on have had generics for ages, and with types like the one above, null pointer exceptions are incredibly common. Nothing prevents the user from attempting to retrieve the data without first checking the success status.

On the other hand, this improves on it dramatically:

    enum Result<T, E> {
      Success(T),
      Failure(E)
    }


I'm convinced that the lack of Sum Types like this in languages like Java/C#/Go is one of the key reasons that people prefer dynamic languages. It's incredibly freeing to be able to express "or". I do it all the time in JavaScript (variables in dynamic languages are basically one giant enum of every possible value), and I feel incredibly restricted when using a language that requires a class hierarchy to express this basic concept.


I completely agree. Every passing day I become more convinced that a statically typed language without sum types, or more broadly ADTs, is fundamentally incomplete.

The good news is that many languages seem to be cozying up to them, and both the JVM (through Kotlin, Scala, et al.) and .NET (through F# or C# w/ language-ext) support them.

Even better news is that the C# team has stated that they want to implement Sum Types and ADTs into the language, and are actively looking into it.


I just don't see, in properly designed code, that there would be that much use for sum types if you have generics. When are you creating functions that take or return radically different types that need to be expressed this way?

I dislike dynamic languages where parameters and variables can take on any type -- it's rarely the case that the same variable/parameter would ever need to contain a string, a number, or a Widget in the same block of code.

I find it much more freeing to have the compiler be in charge of exactness so I can make whatever changes I need knowing that entire classes of mistakes are now impossible.


> When are you creating functions that take or return radically different types that need to be expressed this way

Let's say you're opening a file that you think is a CSV. There can be several outcomes:

- the file doesn't exist

- the file can't be read

- the file can be read but isn't a valid CSV

- the file can be read and is valid, and you get some data

All of these are different types of results. You can get away with treating the first 3 as the same, but not the last. Without a tagged union, you'll probably resort to one of a few tricks:

- You'll have some sort of type with an error code, and a nullable data field. In reality, this is a tagged union, it's just that your compiler doesn't know about it and can't catch your errors.

- you'll return an error value and have some sort of "out" value with the data: this is basically the same as the previous example.

- you'll throw exceptions, which usually ends up with people writing code that forgets about the exception because the compiler doesn't care about it, and the code works 99% of the time until it completely blows up.
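
To make the first trick concrete, a small Go sketch (all names invented): nothing stops a caller from reading `Rows` when `Status != CSVOK`, which is exactly the point.

    type CSVStatus int

    const (
        CSVOK CSVStatus = iota
        CSVNotFound
        CSVUnreadable
        CSVInvalid
    )

    // CSVResult is a tagged union in disguise: Rows is only meaningful
    // when Status == CSVOK, but the compiler can't enforce that.
    type CSVResult struct {
        Status CSVStatus
        Rows   [][]string
    }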


If you want to force people to handle the above 3 cases, couldn't you just throw separate checked exceptions (eg in Java)? In that case the compiler does care about it. You can still catch and ignore but that imo is not a limitation of the language's expressiveness.


Checked exceptions would have been an ok idea if it weren't for the fact that at least when I was writing Java last (almost 10 years ago) they were expressly discouraged in most code bases. Partially because people just get in the lazy habit of catch and rethrow RuntimeException, or catch and log, etc. when confronted with them. Partially because the JDK itself abused them in the early days for things people had no hope of handling properly.

They also tend to defer handling out into places where the context isn't always there.

The trend in language design does seem to be more broadly away from exceptions for this kind of thing and into generic pattern matching and status result types.


> Checked exceptions would have been an ok idea

> Partially because people just get in the lazy habit of catch and rethrow RuntimeException, or catch and log, etc. when confronted with them.

After quite a while of thinking this way, I came to the conclusion that:

95% of the time, there's no way to 'handle' an error in a 'make it right' sense. Disk write failed? REST request failed? DNS lookup? There usually isn't an alternative to logging/rethrowing.

When there is a way to handle an error (usually by retrying?), it's top level anyway.

Furthermore, IO is the stuff that can just 'go wrong' regardless of how good the programmer is, and IO tends to sit at the bottom in most Java programs. This means every method call is prone to IOExceptions.


Yes, after a few years of Java we all end up there. Frankly it's a good argument for the Erlang "Let It Crash" philosophy.

https://verraes.net/2014/12/erlang-let-it-crash/

If IOException on a read is truly happening, and it isn't just a case of a missing file, there are serious issues that aren't going to be fixed with a catch-and-log, or be able to be handled further up the call stack.


One benefit I've found with error-enums is just being aware of all the possible errors that can occur. You're right: 95% of the time you can't do anything except log/retry. But that 5% of the time become runtime bugs which are a massive pain. It's really nice when that is automatically surfaced for you at development time.


Note that checked exceptions are essentially the same thing as returning a tagged union, from a theoretical perspective at least.

They're not popular in Java though, because the ergonomics is a lot worse than working with a Result type.


Honest question: do you think this kind of stuff is going to be adopted by the majority in the next decade or two? Because I'm looking at it and adding even more language features like that seems to make it even harder to read someone else's code.


um... you realize the parent post is talking about having sum types in statically typed languages (eg. rust), when you already do this all the time in dynamic languages like javascript and python right?

So, I mean, forget 'the next decade or two'; the majority of people are doing this right now; Python and JS are probably the two most popular languages in use right now.

Will it end up in all statically typed languages? Dunno; I guess probably not in Java or C# any time soon, but Swift and Kotlin support them already.

...ie. if your excuse for not wanting to learn it is that it's probably an edge case that most people don't have to care about now, and probably never will, you're mistaken I'm afraid.

It's a style of code that is very much currently in use.


Are the majority actually writing code like this though? In the case of dynamic languages, this property seems more like an additional consequence of how the language behaves. It's not additional syntax.


> Are the majority actually writing code like this though?

Yes.

For example, some use cases: https://www.typescriptlang.org/v2/docs/handbook/unions-and-i...

This sort of code is very common.

I really don't know what more to say about this; if you don't want to use them, don't. ...but if your excuse for not using them is that other people don't, it's wrong.


Because even with generics, you are not able to express "or"; two different choices of types that have _different_ APIs. With generics, you can express n different choices of types that have all the _same_ API.

It's a good software engineering principle to make control and data flow as streamlined as possible, for similar data. Minimize branches and special cases. Generics help with this, they hide the "irrelevant" differences, surfacing only the relevant.

On the other hand, if there are _actually_ different cases, that need to be handled differently, you want to branch and you want to express that there are multiple choices. Sum types make this a compiler-checked type system feature.


Let's take Rust's hash map entry api[0], for example. How would you represent the return type of `.entry()` using only a class hierarchy?

    let v = match map.entry(key) {
        Entry::Occupied(mut o) => {
            *o.get_mut() += 1;
            o.into_mut()
        }
        Entry::Vacant(v) => {
            update_vacant_count(v.key());
            v.insert(0)
        }
    };
I view sum types as enabling exactly the kind of exactness you describe in your last line, especially since you can easily switch/match on a specific subtype if you realize you need that, without adding another method to the base class and copying it into the x subclasses you have for implementing the different behavior.

[0]: https://doc.rust-lang.org/std/collections/hash_map/enum.Entr...


Rust has both generics and sum types, and benefits enormously from both.

And sum types aren’t for “radically different types”. You can define an error type to be one of different options (I.e. a more principled error code), or to represent nullability in the type system, or to indicate fallibility without relying on exceptions, etc.

Rust uses all of these to great effect, and does so because these sum types are generic.


> I'm convinced that lack of Sum Types like this in languages like Java/C#/Go are one of the key reasons that people prefer dynamic languages.

It doesn't hurt that static languages (TypeScript) or tools (mypy) that lightly lay on top of dynamic languages often do support sum types.


> It's also incredibly unsafe and why generics aren't enough. C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common.

uh, you'd never get a null-pointer exception in C++ given the type that OP mentioned. Value types in C++ cannot be null (and most things are value types by a large margin).


> Value types in C++ cannot be null

They can just not exist. And C++ being C++, dereferencing an empty std::optional is UB. In practice this particular UB often leads to way worse consequences than more "conventional" null-pointer derefs.


Then write your own optional that always checks on dereference or toggle whatever compilation flag enables checking in the standard library you are using.


Instead you can have undefined behaviour in C++.

I don't think get;set is C++, though. It also breaks encapsulation.


You can also constrain a generic type only to value types in C#:

  class Result<T> where T: struct
  {
  ...
  }
In that case it can't be null with C# either.


Then you can't construct it unless it's successful, no?

A Result<T> that can only contain successful values doesn't seem very useful


You can, it's possible to address "missing values" with a default construct. Example:

  int x = default; // x becomes zero
  T x = default; // x becomes whatever the default value for struct is


Then we're back to accessing that value being an enormous footgun, yes?


Then you can't construct it unless it's successful, no?

A Result<T> that can only contain successful values doesn't seem very useful


No, you're just forced to use methods like foo.UnwrapOr(default_value) to get the value out of the Result. Or, depending on the language, you get a compile error if you don't handle both possible values of the Result enum in a switch statement or if/else clause.

See for example https://doc.rust-lang.org/std/result/enum.Result.html#method... in rust, https://docs.oracle.com/javase/8/docs/api/java/util/Optional... in Java, and https://en.cppreference.com/w/cpp/utility/optional/value_or in C++.
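
For illustration, a hedged draft-syntax sketch of such a method on a Result type like the ones discussed upthread (names are mine):

    type Result(type T) struct {
        value T
        err   error
    }

    // UnwrapOr never panics: it returns the fallback when the Result
    // holds an error.
    func (r Result(T)) UnwrapOr(fallback T) T {
        if r.err != nil {
            return fallback
        }
        return r.value
    }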


Who are you replying to? Is any of your elaboration related to this result type?

    class Result<T>
    {
      bool IsSuccess {get; set;}
      string Message {get; set;}
      T Data {get; set;}
    }


Ah you're quite correct.


Yes you can? The equivalent type in C++ is std::expected[1] which doesn't even contain a pointer that could be dereferenced (unless T is a pointer obviously).

[1] unfortunately not standardized yet https://github.com/TartanLlama/expected


Who are you replying to? Is it in any way related to the original comment I replied to and this type?

    class Result<T>
    {
      bool IsSuccess {get; set;}
      string Message {get; set;}
      T Data {get; set;}
    }


I am replying to you and its pretty obviously related to your comment.

You: "C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common."

jcelerier: "you'd never get a null-pointer exception in C++ given the type that OP mentioned."

You: "Then you can't construct it unless it's successful, no?"

Me: "The equivalent type in C++ [to what the OP mentioned] is std::expected". It is not possible to get a null-pointer exception with this type and yet you can construct it.


It sounds quite a lot like you took the type the OP posted and changed it in your reply to a different type that isn't standardized yet, do I have that right?


The code the OP posted is not C++. If you translate it to C++ there is no way to get a null pointer exception.

It sounds quite a lot like you're looking to be pointlessly argumentative, do I have that right?


There are two things being discussed in this thread.

1. The first, my original point, was that a high quality type system enforces correctness by more than just having generics. There's no proper way in C++ to create this class and make a sum type - there's no pattern matching or type narrowing like there is in languages with more interesting type systems and language facilities. Generics are just a first step to a much more interesting, safer way of writing code.

2. The second, my replies to folks who have corrected me, and I'll borrow your little paraphrase here:

> [Me]: "C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common."

>

> jcelerier: "you'd never get a null-pointer exception in C++ given the type that OP mentioned."

>

> [Me]: "Then you can't construct it unless it's successful, no?"

I think this is exactly correct still. If it's impossible to create an instance of Result<T> without... well, a successful result, you may as well just typedef Result<T> to T, right? If it can't store the "failure" case, it's totally uninteresting.

If it _can_ store the failure case, making it safe in C++ is fairly challenging and I dare say it will be a little longer and a little less safe than a Result I can write in a few lines of TypeScript, Scala, Rust, an ML or Haskell derivative, and so on.

Now, I'd love to be proven wrong, I haven't written C++ for a while so the standard may have changed, but is there a way to write a proper enum and pattern match on the value?

It looks like this std::expected thing is neat, but can a _regular person_ write it in their own code and expect it to be safe? I can say with certainty that I can do that for the languages I listed above and in fewer than 10 lines of code.

The C++ answer to that is, well, this:

https://github.com/TartanLlama/expected/blob/master/include/...

I don't think it's even a comparison.


> The C++ answer to that is, well, this:

the linked code has a ton of "quality-of-life" things. For instance, comparing two Result values efficiently: you don't want to compare two Result<T> bitwise, and you don't want the "is_valid" flag to be first in the structure layout (falling back to the automatic default of lexical order can sometimes waste a few bytes), but you do want the "is_valid" flag to be the first thing compared. Do you know of a language that would do that automatically?

It also supports back to C++11 and GCC 4.9, with various fixes for specific compiler versions' bugs, and supports being used with -fno-exceptions (effectively a separate language from ISO C++) - sure, today's languages can do better in terms of prettiness, but so would a pure-C++20 solution that only needs to work with a single implementation.

If you are ready to forfeit some amount of performance, for instance because you don't care that the value of your Result will be copied instead of moved when used in a temporary chain (e.g. `int x = operation_that_gets_a_result().or_else([] (auto&& error) { return whatever; });`), 3/4 of the code can go away (and things will still likely be faster than in most other languages).


Well, T can be a pointer / reference here.


That wouldn't change anything to Result<T>'s implicit safety properties. "safe + unsafe == unsafe" - to have a meaningful discussion we should focus on the safe part, else it's always possible to bring up the argument of "but you can do ((char*)&whatever)[123] = 0x66;"


With C# 8 you have nullable reference types, and you can use the compiler to guard against null pointer exceptions.


> That's a generic container of 0 or 1 elements ;)

Then chances are so are most if not all of the uses of generics OP criticises. The only "non-container" generics I can think of is session types where the generic parameter represents a statically checked state.


Result types are much better than multiple return values. But now the entire Go ecosystem has to migrate, if we want those benefits (and we want consistent behavior across APIs). It'd be like the Node.js move to promises, only worse...


I'm not sure why you'd use a class like this in Go when you have multiple returns and an error interface that already handles this exact use case.


Because multiple return values are a strictly inferior and error-prone way of dealing with the matter.


    func foo() (*SomeType, error) {
        ...
        return nil, someErr
    }

    ...
    result, err := foo()
    if err != nil {
        // handle err
    }
    // handle result
vs

    type Result struct {
        Err error
        Data SomeType
    }

    func (r *Result) HasError() bool {
        return r.Err != nil
    }

    func bar() *Result {
        ...
        return &Result { ... }
    }

    ...
    result := bar()
    if result.HasError() {
       // handle result.Err
    }
    // handle result

I'm not really sure I see the benefit to the latter. In a language with special operators and built-in types it may be easier (e.g. foo()?.bar()?.commit()), but without these language features I don't see how the Result<T> approach is better.


Go can't really express the Result<T> approach. In Go, it's up to you to remember to check result.HasError(), just like it's up to you to check if err != nil. If you forget that check, you'll try to access the Data and get a nil pointer exception.

The Result<T> approach prevents you from accessing Data if you haven't handled the error, and it does so with a compile-time error.

Even with Go's draconian unused variable rules, I and my colleagues have been burned more than once by forgotten error checks.
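
As a sketch of how a generic Go could approximate that compile-time guarantee: keep the fields unexported and make the only accessor demand both handlers (draft syntax, names are mine):

    type Result(type T) struct {
        value T
        err   error
    }

    // Match is the only way to reach the value: the compiler makes you
    // supply both the success and the failure branch.
    func Match(type T, R)(r Result(T), onOK func(T) R, onErr func(error) R) R {
        if r.err != nil {
            return onErr(r.err)
        }
        return onOK(r.value)
    }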


There are linters that will help you with that.

https://github.com/kisielk/errcheck

https://golangci-lint.run/usage/linters/ has a solid set of options.


I just wish the linter were integrated into the compiler, and that code that didn't check errors would simply not compile.


> without these language features I don't see how the Result<T> approach is better.

That's the point! I want language features!

I don't want to wait 6 years for the designers to bake some new operator into the language. I want rich enough expression so that if '?.' is missing I just throw it in as a one-liner.

Generics is one such source of richness.


A language with sum types will express Result as Success XOR Failure. And then to access the Success, the compiler will force you to go through a switch statement that handles each case.


The alternative is not the Result type you defined, but something along the lines of what languages like Rust or Haskell define: https://doc.rust-lang.org/std/result/


It's interesting that you say this, because I've had the opposite experience. I wouldn't say it's strictly inferior, because there are definitely upsides. If it was strictly inferior, why would a modern language be designed that way -- there must be some debate right?

I love multiple returns/errors. I find that I never mistakenly forget to handle an error when the program won't compile because I forgot about the second return value.

I don't use go at work though, I use a language with lots of throw'ing exceptions, and I regularly miss handling exceptions that are hidden in dependencies. This isn't the end of the world in our case, but I prefer to be more explicit.


> If it was strictly inferior, why would a modern language be designed that way

golang is not a modern language (how old it is is irrelevant), and the people who designed it did not have a proper language design background (their other accomplishments are a different matter).

Having worked on larger golang code bases, I've seen several times where errors are either ignored or overwritten accidentally. It's just bad language design.


I cannot think of a language where errors cannot be ignored. In go it is easy to ignore them, but they stick out and can be marked by static analysis. The problems you describe are not solved at the language level, but by giving programmers enough time and incentives to write durable code.


The following line in golang ignores the error:

    fmt.Println("foo")
Compare to a language with exception handling where an exception will get thrown and bubbles up the stack until it either hits a handler, or crashes the program with a stack trace.

And I was referring to accidental ignoring. I've seen variations of the following several times now:

    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }
    res, err = foo("bar")
    if res != nil { ... }


Usage of linters fixes this:

>The following line in golang ignores the error:

   fmt.Println("foo")
fmt.Println() is blacklisted for obvious reasons, but this:

    a := func() error {
        return nil 
    }
    a()
results in:

    go-lint: Error return value of 'a' is not checked (errcheck)
>And I was referring to accidental ignoring. I've seen variations of the following several times now:

    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }
    res, err = foo("bar")
    if res != nil { ... }
results in:

    go-lint: ineffectual assignment to 'err' (ineffassign)


> fmt.Println() is blacklisted for obvious reasons

That's the issue with the language: there are so many special cases for convenience's sake, not for correctness's sake. It's obvious why it's excluded, but that doesn't make it correct. Do you want critical software written in such a language?

Furthermore, does that linter work with something like gorm (https://gorm.io/) and its way of handling errors? It's extremely easy to mis-handle errors with it. It's even a widely used library.


Huh, I have seen enough catch blocks in Java code at work that are totally empty. How is that better than ignoring errors?


Because it's an explicit opt-in, as opposed to an accidental opt-out. And static checking can warn you about empty catch blocks.


In rust, errors are difficult to ignore (you need to either allow compiler warnings, which AFAICT nobody sane does, or write something like `let _ = my_fallible_function();` which makes the intent to ignore the error explicit).

Perhaps more fundamental: it’s impossible to accidentally use an uninitialized “success” return value when the function actually failed, which is easy to do in C, C++, Go, etc.


Or .unwrap(), which I see relatively often.


That’s not ignoring errors, it’s explicitly choosing what to do in case of one (crash).


Error handling is hard, period. Error handling in go is no worse than in any other language, and in most ways it is better, being explicit and non-magic.

> people who designed it did not have a proper language design background

Irrelevant.

> It's just bad language design.

try { ... } catch(Exception ex) { ... }


Exceptions don't lead to silent but dangerous and hard-to-debug errors. The program fails if an exception is not handled.


> try { ... } catch(Exception ex) { ... }

The error here is explicitly handled, and cannot be accidentally ignored. Unlike golang where it's quite easy for errors to go ignored accidentally.


Nevertheless, this is how it is mostly done in Java. I haven't used eclipse in eons, but last time I did it even generated this code.

If you care about this in Go, use errcheck.


Does errcheck work well with gorm (https://gorm.io/) and its way of returning errors? This is not an obscure library, it's quite widely used.


Does any language save you from explicitly screwing up error handling? Gorm is doing the Go equivalent of:

     class Query {
         class QueryResult {
             Exception error;
             Value result;
         }
         public QueryResult query() {
             try {
                 return doThing();
             } catch (Exception e) {
                 return new QueryResult(e, null);
             }
         }
     }
Gorm is going out of its way to make error handling suck.


> Does any language save you from explicitly screwing up error handling?

It's about the default error handling method being sane. In exception based languages, an unhandled error bubbles up until it reaches a handler, or it crashes the program with a stacktrace.

Compare to what golang does, it's somewhat easy to accidentally ignore or overwrite errors. This leads to silent corruption of state, much worse than crashing the program outright.


> It's about the default error handling method being sane.

Gorm isn't using the default error handling.


That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.


> That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.

I've seen a bunch of code that does the equivalent of the Java I posted above. Mostly when sending errors across the network.


because it has try/catch. Without that (which would be similar to not checking the err in go) it explodes or throws to a layer up that may not expect it.

Each language has its quirks.


> Without that (which would be similar to not checking the err in go) it explodes or throws to a layer up that may not expect it.

It's not similar to that at all. Without it, the exception bubbles up until it gets caught somewhere, or crashes the program with a useful stacktrace.

With golang, it just goes undetected, and the code keeps running with corrupt state, without anyone knowing any better.


I would say it is a very ergonomic way of doing this. It allows for writing in a more exploratory way until you know what your error handling story is. Then, even if you choose to propagate it later, you just add it to your signature. Also it is very easy to grok and clear. Definitely not strictly inferior.


It's a lot cleaner to pass a Result<T> through a channel or a slice than to create two channels or slices and confirm everyone's following the same convention when using them.
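
A hedged sketch of that pattern under the draft syntax (all names invented):

    type Result(type T) struct {
        Value T
        Err   error
    }

    // One channel carries both outcomes, so value and error can never get
    // out of sync the way two parallel channels can.
    func produce(jobs <-chan int, work func(int) (int, error), out chan<- Result(int)) {
        for j := range jobs {
            v, err := work(j)
            out <- Result(int){Value: v, Err: err}
        }
        close(out)
    }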


I concede that there are probably scenarios where this design makes sense within that context. I typically find that either I care about a single error and terminating the computation, or I don't care about errors at all. In the former case, the primitives in the sync package (or just an error channel which we send to once and close) are adequate. The latter case presents no issues, of course.

At $work we definitely have examples where we care about preserving errors, and if that tool were implemented in Go a solution like a Result struct containing an error instance and a data type instance could make sense.


It has a bunch of invalid states (message and data both set, neither set, message set but IsSuccess is true, etc.). So you have to either check it every time, or you'll get inconsistent behaviour miles away from where the actual problem is. It's like null but even more so.


Well, for one thing, it doesn't actually work like a proper Optional<T> or Either<T, string> type. It works more like Either<(T, string),(T, string)>, which might have some uses, but isn't typically a thing someone would often reach for if they had a type system that readily supported the other two options.


> What's wrong with this?

That it's mutable, at the very least!


I feel like such a class should either be part of the language, and part of language idioms etc, or it shouldn't be used.


Can you articulate why? It seems to me that 'feel' should not be part of the discussion.


Not GP, but I've sometimes seen libraries implementing similar concepts differently, which causes issues.

E.g.

    libraryA.Result struct {
        Err error
        Data SomeDataType
    }

    libraryB.Result struct {
        err string
        Data SomeDataType
    }
    func (r libraryB.Result) Error() string {
         return r.err
    }
Now you have two different implementations of the same fundamental idea, but they each require different handling. In Go, where many things simply return an error type in addition to whatever value(s), you would now have three different approaches to error handling to deal with as opposed to just whatever the language specified as the best practice.


This is what interfaces are for.

Let your caller bring their own error type and instantiate your library code over that.
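
A minimal sketch of that suggestion under the draft syntax, assuming a hypothetical `Failer` interface as the constraint:

    // Failer is a hypothetical interface a caller's result type satisfies.
    type Failer interface {
        Err() error
    }

    // FirstFailure works over any caller-supplied result type.
    func FirstFailure(type R Failer)(results []R) error {
        for _, r := range results {
            if err := r.Err(); err != nil {
                return err
            }
        }
        return nil
    }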


Not GP but:

It may frustrate coworkers who need to edit the code.

It adds another dependency into your workflow.


> which then makes it very difficult for anyone else to figure out what is going on

Or we can learn to read them. Just treat types like a first class value. You either assign names to types like you do to values, or you can assign a name to a function that returns a type, this being generics.


> or we can learn to read them

That's an awful way to think about hard-to-read code. I could produce the most unreadable one-liners you've ever seen in your life. We should condemn that, not tell others to "learn how to read".


> That's an awful way to think about hard to read code

Most of the time, what I hear described as "hard to read code" is "a pattern I don't currently have a mental model for". We didn't move on from COBOL by letting that be a deterrent.


Fair, I've actually seen both types of situations. I only complain after having some domain knowledge of the project and the language/tools. After sufficient understanding, I will make sure that the code that gets merged into master is highly readable. Simple > complicated. Always. Don't be ashamed to write simple code.


You write code for an audience. In that audience sit yourself in your current state, yourself a year+ from now, your colleagues (you know their level), and the compiler. With bad luck, it's your current self, in a state of pulling your hair out while debugging.

Think about the audience when you code.


I assume you only program in readable languages like COBOL and AppleScript.


Ah, blub. It will never leave us.


I expect that after a flurry of initial excitement, the community will settle on some standards about what it is and is not good for, standards that will tend to resemble "Go 1.0 + a few things" more than "a complete rewrite of everything ever done for Go to be in some new 'generic' style".


> I like generics for collections but that is about it.

What about algorithms (sorts, folds, etc) on those containers? I write a lot of numerical code. It sucks to do multiple maintenance for functions that work on arrays of floats, doubles, complex floats, and complex doubles. Templates/Generics are a huge win for me here. Some functions work nicely on integer types too.


I think this is probably the single best use case for Generics for Go - numerical operations on the range of number types.


At this point I'd like to summon, to go-generics' defense, all the PHP and JavaScript developers who assert unwaveringly: "Bad language design doesn't cause bad code; bad programmers cause bad code."


Counterpoint: languages (and libraries, and frameworks, and platforms) so well-designed that they introduce a "pit of success"[1] such that bad programmers naturally write better code than they would have done otherwise.

For example: what if PHP could somehow detect string-concatenation in SQL queries and instantly `die()` with a beginner-friendly error message explaining query parameterisation? Tens of billions of dollars of PHP SQL injection vulnerabilities simply never would have happened - and people already writing database queries with string-concatenation in VB and Java who gave PHP a try would have been forced to learn the benefits of parameterisation, taking that improved practice back to their VB and Java projects - a significant net worldwide improvement in code quality!

[1]: https://blog.codinghorror.com/falling-into-the-pit-of-succes...

I've been writing in TypeScript for about 5 years now - and I'm in-love with its algebraic type system and whenever I switch back to C#/.NET projects it's made me push the limits of what we can do with .NET's type system just so I can have (or at least emulate as closely as possible) the features of TypeScript's type system.

(As for generics - I've wondered: what if every method/function was "generic", insofar as any method's call-site could redefine that method's parameter types and return types? Of course then it comes down to the "structural vs. nominative typing" war... but I'd rather be fighting for a hybrid of the two than trying to work around a poorly-expressive type system.)


And that's among the reasons it's been left out of Go. Go design was guided by experience working on large software systems; the risk with making a language too flexible is that developers begin building domain-specific metalanguages inside the language, and before you know it your monolingual codebase becomes a sliced-up fiefdom of various pieces with mutually-incompatible metasyntax that defeats the primary advantage of using one language: developers being able to transition from one piece of the software to another without in-depth retraining.

For enterprise-level programming (which is the environment Go grew up in), a language that's too flexible is a hindrance, because you can always pay for more eng-hours, but you can't necessarily afford smarter programmers.


What about generics for phantom types?

Ex.

    class ID<T> {
      int id;
    }
The idea is that an ID is just an int under the hood, but ID<User> and ID<Post> are different types, so you can't accidentally pass a user id where a post id is expected.

Now, this is just a simple example that probably won’t catch too many bugs, but you can do more useful things like have a phantom parameter to represent if the data is sanitized, and then make sure that only sanitized strings are displayed.
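
A hedged sketch of that sanitization idea in the draft syntax, assuming the draft permits a type parameter that never appears in the struct body (all names invented):

    import (
        "fmt"
        "html"
    )

    // Raw and Sanitized are phantom markers: S never appears in the data.
    type Raw struct{}
    type Sanitized struct{}

    type Text(type S) struct{ s string }

    func NewText(s string) Text(Raw) { return Text(Raw){s: s} }

    func Sanitize(t Text(Raw)) Text(Sanitized) {
        return Text(Sanitized){s: html.EscapeString(t.s)}
    }

    // Display only accepts sanitized text; passing Text(Raw) is a compile error.
    func Display(t Text(Sanitized)) { fmt.Print(t.s) }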


Just to note, for this specific example, Go supports this with type definitions:

  // UserID and PostID are distinct types
  type UserID int
  
  type PostID int


This isn't quite the same, because it's just an alias - you can pass a UserID to a function accepting a PostID: https://play.golang.org/p/nSOgcJs_66y

It still provides a documentation benefit of course.

EDIT: Whoops, yes, as lentil points out, they are indeed distinct types not aliases. So it does provide the benefit of the Rust solution.


No, it's not an alias, they are distinct types. You can't use the types interchangeably (unless you cast them).

Your playground example didn't try to pass a UserID to a function accepting a PostID, but if you do that, you'll see the error:

https://play.golang.org/p/vyiJ_sLzy4O


Oh neat! Most languages make it a little bit verbose to create these kinds of wrapper types for type safety (with zero overhead), so it's nice that Go has that.

I think the generic approach is a little bit better because of the flexibility, but this approach is still better than not having it at all.


The go team's attempt at involving everyone in the priorities of the language has meant they lost focus on the wisdom of the original design. I spent 10 years writing go and I'm now expecting to have to maintain garbage go2 code as punishment for my experience. I wish they focused on making the language better at what it does, instead of making it look like other languages.


That said, the go team is incredibly talented and deserves a lot of kudos for moving much of the web programming discussion toward a simpler understanding of concurrency and type safety. Node.js and Go came out at the same time, and Node is still a concurrency-strategy salad.


Considering the vast majority of programming involves loops, I don't see "just for collections" as some minor thing - it's most of what I do.


If you don't understand someone else's code, you can either tell them their stuff is too complicated, or learn and understand better. There can be a middle ground, of course.


Most of the time, if code is hard to understand, it's bad code. Just because someone writes complex code that uses all the abstractions doesn't mean it's good. Usually it means the opposite.


I'd like generics for concurrency constructs. Obvious ones like Mutex<T> but generics are necessary for a bunch of other constructs like QueueConsumer<T> where I just provide a function from T -> error and it will handle all the concurrent consumption implementation for me. And yes, that's almost just a chan T except for the timeouts and error handling and concurrency level, etc.
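
A minimal sketch of such a `QueueConsumer` under the draft syntax, with logging standing in for a real error policy (names are mine):

    import (
        "log"
        "sync"
    )

    // QueueConsumer runs Handler over items from Queue with a fixed
    // level of concurrency.
    type QueueConsumer(type T) struct {
        Queue   chan T
        Handler func(T) error
    }

    func (c QueueConsumer(T)) Run(workers int) {
        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for item := range c.Queue {
                    if err := c.Handler(item); err != nil {
                        log.Println(err) // error policy is up to the consumer
                    }
                }
            }()
        }
        wg.Wait()
    }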


There is an underappreciated advantage to using generics in function signatures: they inform you about exactly which properties of your type a function is going to ignore (this is called parametricity: https://en.wikipedia.org/wiki/Parametricity)

For instance, if you have a function `f : Vec<a> -> SomeType`, the fact that `a` is a type variable and not a concrete type gives you a lot of information about `f` for free: namely that it will not use any properties of the type `a`, it cannot inspect values of that type at all. So essentially you already know, without even glancing at the implementation, that `f` can only inspect the structure of the vector, not its contents.
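
A tiny illustration in the draft syntax: because `T` is opaque inside the function, `Length` provably cannot depend on the elements themselves.

    // Length cannot inspect the elements at all - T is opaque here - so it
    // can only depend on the slice's structure.
    func Length(type T)(xs []T) int {
        return len(xs)
    }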


Not all generics are parametric.


Agreed. From a quick skim of the Go generics proposal I get the impression that they are in fact aiming for parametric generics though (in fact they use the term "parametric polymorphism" in the background section).


I like generics but I find that it is often best to start out writing a version which is not generic (i.e. explicitly only support usize or something) then make it generic after that version is written. As a side benefit, I find that this forces me to really think about if it should actually be generic or not. One time I was writing a small Datalog engine in Rust and was initially going to make it take in generic atoms. However, I ended up deciding after going through the above process that I could just use u64 identities and just store a one to one map from the actual atoms to u64 and keep the implementation simpler.

I agree with the sentiment that it is very easy to overuse generics, though there are scenarios where they are very useful.
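
A hedged sketch of the interning approach described (plain Go, no generics needed; names invented, not the original code):

    // Interner maps atoms to dense uint64 ids, keeping the engine itself
    // non-generic.
    type Interner struct {
        ids   map[string]uint64
        atoms []string
    }

    func NewInterner() *Interner {
        return &Interner{ids: make(map[string]uint64)}
    }

    func (in *Interner) Intern(atom string) uint64 {
        if id, ok := in.ids[atom]; ok {
            return id
        }
        id := uint64(len(in.atoms))
        in.atoms = append(in.atoms, atom)
        in.ids[atom] = id
        return id
    }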


I can think of a few other potential use cases in Go. Some ideas:

- Promises

- Mutexes like Rust's Mutex<T> (would be much nicer than sync.Mutex)

- swap functions, like swap(pointer to T, pointer to T)

- combinators and other higher-order functions
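
For two of these, a hedged sketch in the draft syntax (in the spirit of Rust's Mutex<T>; names are mine):

    import "sync"

    // Swap exchanges two values through pointers.
    func Swap(type T)(a, b *T) {
        *a, *b = *b, *a
    }

    // Mutex pairs a lock with the data it guards; With is the only way
    // to touch the value.
    type Mutex(type T) struct {
        mu    sync.Mutex
        value T
    }

    func (m *Mutex(T)) With(f func(*T)) {
        m.mu.Lock()
        defer m.mu.Unlock()
        f(&m.value)
    }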


For Java / C#, in my experience, I've made that mistake because in both languages class declarations are very verbose. Generics were then the only way to solve a problem that would otherwise require dynamic typing / variables.

In TypeScript I don't need generics as much, or as complex ones, because the type definitions are more lax, and we can use dynamic typing in the very complex scenarios.

I don't know which approach go is taking.


Honestly, as long as you learn when to use generics and when not to use them, there are a lot of very useful ways to encode state/invariants in the type system.

But I also have seen the problem with overuse of generics and other "advanced" type system features first hand (in libraries but also done by myself before I knew better).


I've done this to one of my pet projects (thankfully unreleased). It just makes debugging/editing on the fly more difficult. I'd love to unwind the mess, but that'll take days to fix what I caused in minutes! It's a big foot gun.


Mathematically, almost everything generic can be viewed as a collection.


Functions ;)

s -> (s, a) is generic and a Functor (mappable - often conflated with a collection) but it's no collection!


Yeah, I actually think just having built-in generic linked lists, trees, and a few other abstract data types would solve 90% of everyone's problems. Part of the good thing about Go is that you solve problems more than you create them.


I agree, those grapes are probably sour anyway...


I can't help feeling it is a missed opportunity to add generics to Go this late. A mistake that is copied from earlier languages (C++, Java), similar to other mistakes Go chose not to solve at its inception, like: having implicit nulls (C, C++, Java, JS, C#), lack of proper sum types (C, C++, Java, JS), and only one blessed concurrency model (JS).

While I think I get the reasons for these decisions in theory - make a simple-to-fully-understand language that compiles blazingly fast - I still feel it's a pity that (most of) these issues were not addressed.


Go really should have learned 2 lessons from Java 5:

1. People will eventually want generics

2. Retrofitting generics onto an existing language is hard and leads to unusual problems

(edit: I'm glad Go is doing this, but...Java learned this in 2004.)


There is a design document for Go generics.

If you see "unusual problems" with the design, then tell us what they are.

Otherwise it's just shallow pattern matching: "Java added generics late, they had problems; Go added generics late, therefore they'll have problems too".

Counterexample: C# added generics late and it's a perfectly fine design.

The reason Go team is not rushing to implement generics is precisely so that the design makes sense, in the context of Go.

Over the years Ian Taylor wrote several designs for generics, all of which were found to not be good enough.

They are doing it the right way: not shipping until they have a good design and they didn't have good design when Go launched.


If this follows the monomorphic approach described in Featherweight Go [1][2], they will at least avoid problems caused by type erasure and avoid too much runtime overhead.

1. https://arxiv.org/abs/2005.11710

2. https://news.ycombinator.com/item?id=23368453
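
Roughly, monomorphization means the compiler stamps out one specialized copy per instantiation. Hand-writing what it might generate for a generic `Min` used at int and float64 (purely illustrative):

    // What the compiler might emit for Min(int) and Min(float64):
    func MinInt(a, b int) int {
        if a < b {
            return a
        }
        return b
    }

    func MinFloat64(a, b float64) float64 {
        if a < b {
            return a
        }
        return b
    }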


But then you have compile-time overhead (an issue Rust and C++ have faced). One of Go's design goals was to have very fast compile times, which might be in doubt if they take the monomorphization approach.


Is Rust's slow compile time because of poor LLVM IR generation, or just because of monomorphization? D has generics and compiles fast. I guess Nim compile times are okay, too.


Go is just Java repeated as farce. The histories are almost identical with ~10 years lag.

We all called this when Go was created, too.


Just without the "VM".


JAVA -> Go + WASM is conservation of MEMEs? :D

(Not to disparage WASM, which has some nice ideas both on the technical and ecosystem level.)


Thanks, but I don't need another Java. Enjoy your full Java and keep Go doing things the Go way. :)


Are you saying Go should not have launched in 2009, but should have waited 10 years until generics were ready?


If they had made it a priority, they could have shipped with generics in 2010. There is no new art in this design.


Go has the unfortunate circumstance of having been born before widespread recognition of the value of generics and better type systems. Those ideas existed in many languages and in the PL community, and they were starting to take hold in other languages, but the consensus among most software engineers was that "sum types" are hard, "generics" aren't necessary, and the type system should stay out of the way.

I think TypeScript, Scala, Kotlin, C#, and various others I forget now proved that these things weren't a fad and could yield significant gains in productivity and code correctness.

Had Rob Pike been more forward looking (or hired Anders Hejlsberg or Guy Steele to design the language) or dipped further into the PL research, he might have been so bold himself. I don't think anyone can fault him for it, these were niche and minority views in 2010 and may not even be in the majority today.

I think at the same time, we see what happens when a new language has large corporate backing in more recent years. Swift more closely resembles Rust than Go in terms of its type system.


"Generics are useful" was not even remotely a niche view in 2010. That was already 6 years after Java got them, and the lack of generics in Go was a common criticism from day one.


Philip Wadler introduced generics into Java and had previously designed Haskell, so he must have been thinking about types for at least a further 15 years before Java (20 years before Go).


This is such a bullshit argument. Why is it that any Go post on Hacker News pulls out all the tropes? Lack of exceptions. Lack of generics. Would generics make Go a better language? Maybe. Does the lack of generics make Go objectively bad? Hell no!

I've had a long career coding in C, C++, Java, Lisp, Python, Ruby... you name it, I've done it. Go is my favorite and most productive language by far for solving typical backend issues. My least favorite? Java, by a HUGE mile.


> Why is it that any go post on hacker news pulls out all the tropes. Lack of exceptions. Lack of generics.

It's pretty simple -- those of us who use those feature regularly in other languages know how valuable they are, and we miss them when they aren't there.


> Lack of exceptions

Is that really a problem? I think proper sum types to allow Result types that can encode many success/error states are so much nicer than an exception hierarchy. Rust and Kotlin did not go with exceptions, and for good reasons.

> C, C++, Java, Lisp, Python, Ruby... you name it I've done it.

Let me name a few: Rust, Kotlin, OCaml/Reason, Haskell, Elm. These languages carefully selected a set of features, and they all have two of them: no implicit nulls, and sum types. None of the languages in your list have those features. I really wonder what you think of these features once you've worked with them.


> Kotlin

Kotlin very much did go with exceptions, except for the Result type in coroutines, which wraps a Throwable anyway and is only used at the border between the coroutine runner and the async functions.


Why do you dislike java so much?


It was a common criticism but I don't think it was a majority criticism. Hacker News is not representative of the internet at large, and the idea that Go doesn't need or might not even benefit from generics is still pervasive. (See the first person to reply to you.)


The Go authors acknowledged the usefulness of generics as far back as 2009, shortly after its public release: https://research.swtch.com/generic


If you copy most of your design from Pascal / Modula 2 / Oberon as a safe bet to use a time-proven approach, this is only natural. If you don't want to use a time-proven approach, you need to design your own, and it's a massively more complex and fraught enterprise than adding GC and channels (which are both old, time-proven designs, too, just from elsewhere).

You could say that one could maybe copy the time-proven OCaml. But OCaml doesn't have a proven concurrency story, unlike Oberon and Modula-2 (yes, it had working coroutines back in 1992).

I also wish all these design decisions had not been made in a new language, like they weren't in Rust. Unfortunately, the constraints under which the creators of Go operated likely did not allow for such a luxury. As a result, Go is a language with first-class concurrency which one can get a grip on in a weekend, quick to market, and fast to compile. Most "better" languages, like Rust, OCaml, or Haskell, don't have most of these qualities. Go just fills a different segment, and there's a lot of demand in that segment.


>As a result, Go is a language with first-class concurrency which one can get a grip of in a weekend

Which is a mess. Sending mutable objects over channels is anything but a "proven concurrency story".

Both Rust and Haskell (and OCaml, if we talk about concurrency and not parallelism) have a way better concurrency story than Go. I don't care how fast one can start writing concurrent and parallel code if that code is error prone.

The only difference between Rust/Haskell and Go is that the former force you to learn how to write the correct code, while the latter hides the rocks under the water, letting you hit them in production.


I used "first-class" here to denote "explicitly a feature, a thing you can operate on", like "first-class functions" [1]. I didn't mean to say "best in class". I don't even think we have a definite winner in this area.

[1]: https://en.wikipedia.org/wiki/First-class_function


> first-class concurrency

That's an overstatement.

Also, implicit nulls are not beneficial to anyone. And sum types could have made results (error/success) so much nicer. I see no reason to have gone with nulls at Go's inception, hence I call it a mistake.


The flip side was on the top of HN yesterday:

Generics and Compile-Time in Rust

https://news.ycombinator.com/item?id=23534974

It's easy for spectators / bystanders to call something a mistake because you don't understand the tradeoffs. Try designing and implementing a language and you'll see the tradeoffs more clearly.


D has generics and compiles fast. There are probably other examples too.

The overlooked thing is that rustc produces poor-quality LLVM IR, which is also mentioned in the FAQ.

And generics mostly reduce the amount of manual for-loop juggling one has to write, so I don't think adding generics by itself makes much of a difference to compile times.


> The overlooked thing is rustc produces poor quality LLVM IR which is also mentioned in FAQ.

LLVM is also slow in general. If you use it, it's likely the thing that's bottlenecking your language's compile-time unless you've done something insane like templates and `#include`, etc..

Inevitably it's the case that even if your source language doesn't do nearly as badly at the design stage when it comes to generics as C++ does, if you use LLVM your build stage is probably going to be unacceptably slow.


> LLVM is also slow in general.

Agreed. But so is GCC. And I guess many of the 'zero cost' abstractions require an advanced optimizing compiler like LLVM to actually be zero cost (or they move that complexity to the compiler end).

They specifically mentioned the technical-debt and poor LLVM IR generation issue, though. I wonder if it has gotten attention or been fixed yet. Maybe @pcwalton knows.


OK sure, now write your own backend or use one other than LLVM (which one?). Now you're going to learn about a different set of tradeoffs.


Go and D made the tradeoff. And both have good compile times. Both of them have GCC and LLVM frontends as well as a homegrown one. And homegrown ones have fast compile times.

Edit: well fast compile times at the expense of optimization. But that kind of proves the point.


I feel the same way, but then again Rust (among others) exists so it's not like those of us who dislike this approach are "stuck" with Go. I think it's actually nice to have the choice, reading the comment in this thread it's pretty clear that there are people who don't feel like we do.

Go clearly values "simple and practical" over "elegant". It seems to be quite successful at that.


I have to refute this 'simple and practical' claim.

Not having generics, and therefore not having very common tools, doesn't seem very good. You have to write a for loop for what is a simple function call in Python or JavaScript or <insert modern language here>. Such detail easily interrupts reading/writing flow.
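For example, Python's `x in xs` has no single-call equivalent in today's Go; under the draft you could write the helper once (a sketch, the name and signature are my own):

    // Contains reports whether target appears in xs.
    // One definition covers every comparable element type.
    func Contains(type T comparable)(xs []T, target T) bool {
        for _, x := range xs {
            if x == target {
                return true
            }
        }
        return false
    }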


I agree with the spirit of what you are saying, but I'd nit pick and say that lack of generics and presence of implicit null types make Go not simple and not practical over other options in the same space.


As someone using Java, which has null, and hardly using any generics even though they are available, I find Java immensely practical, with a huge number of libraries and other ecosystem facilities.

Seems you are of the opinion that if you do not find something practical, no one else can.


And I find go immensely practical and Java immensely impractical. But I see its value. It's almost like we have different languages because people are myriad :)

Problem-space and learning styles play a huge role.


Having seen Java shortcomings up and close I can totally understand that. It is just that some folks think their subjective opinion about programing are some universal objective truths.


That's exactly it. There is Rust and Java already. Use it, please. Don't try to make another Java from Go. One Java is enough.


> having implicit nulls (C, C++, Java, JS, C#),

> lack of proper sum types (C, C++, Java, JS)

Incidentally, Java and C# have addressed (or are in the process of addressing) both issues. Both languages/platforms are superior to golang in almost every conceivable way.


I've used a lot of Java and C#, and a decent amount of Go. I'm not sure I would call Go inferior. The design goals were different. I'm also not a language wonk, so maybe that's why I enjoy the relative simplicity of Go. The developer loop of code, run, code is very fast, and the standard library is good out the box. I just want to write code and get stuff done. To that end, Go is another capable, workman type language (like Java or C#).


Neither have addressed it though - they do have works in progress. Same as the lack of decent threading in Java.

It's been a WIP for years. Let's judge based on the status quo rather than being disingenuous.


How does Java not have decent threading? Go doesn't even have actual threads.

I'm assuming you're talking about goroutines aka virtual threads/fibers whatever which are entirely different from actual threads.


apologies, but yes, you're correct - I was referring to green threads


Hopefully will preview in next release (6 months release cycle) http://cr.openjdk.java.net/~rpressler/loom/loom/sol1_part1.h...

EAP build - http://jdk.java.net/loom/


I'm not aware of any current Java project on sum types. There is multi-catch for exceptions, but no immediate plans I know of to use it elsewhere.


A form of pattern matching and switch expression has already made it to the language as of JDK 14. Those are paving the way for full blown pattern matching and sealed types:

https://cr.openjdk.java.net/~briangoetz/amber/datum.html

https://openjdk.java.net/jeps/360


Java isn't fixing nulls, IMHO. They are making it worse.


How are they making it worse?

In any case, there already exist solutions in place:

https://checkerframework.org/manual/#example-use

https://github.com/uber/NullAway


The code ends up looking like:

  Optional<Integer> foo;

  // ...

  if (foo != null) {
      foo.ifPresent(new Consumer<Integer>() {
          public void accept(Integer value) {
              // ...
          }
      });
  }


I would never assign null to an Optional nor use a library that returned one.

To ensure assignment before use, make it final.


Except for green threads.


That's true though Project Loom is working on fibers. There's already an early access release to test.


Minor point, they've settled on the name "virtual threads" instead at this point.


Sum types are not planned for C# 9, are they?


It has pattern matching, but not a full discriminated union implementation yet. That seems to be on the roadmap though:

https://github.com/dotnet/csharplang/issues/113

https://github.com/dotnet/csharplang/issues/485


Swift had generics in place from Day One. It also hides the generics when they don't need to be shown (like many collection types).

I think that most of the standard language features are based on generics, but you would never know it, as a user of those features.


To be fair, Swift has a somewhat strange generics implementation that forces the programmer to jump through hoops to achieve things that are quite common in similar programming languages. The whole standard library is peppered with AnyThis and AnyThat, the language doesn't have generators (yield) and I'm not sure they're possible with the current generics design, and programmers need to learn what type erasure is, just because the core team decided that returning a generic interface from a function is something nobody will want.

I like Swift a lot, for many reasons, but generics design isn't one of these reasons.


Fair 'nuff. I was always a fan of the way C++ did it, but it was a great deal more technical.

Swift is sort of "generics for the masses." C++ is a lot more powerful, but also a lot more "in your face." I really do like the way that Swift allows generics to have implicit types.


Can't get enough of those template errors.


This looks like a good, minimal implementation of generics. It's mostly parameterized types. Go already had parameterized types, but only the built-ins - maps and channels. Like this.

    var v chan int
Somewhat surprisingly, the new generic types use a different syntax:

    var v Vector(int)
Do you now write

    v := Vector(int)
or

    v := make(Vector(int))
Not sure. The built-in generics now seem more special than they need to be. This is how languages acquire cruft.

The solution to parameterized types which reference built-in operators is similarly minimal. You can't define the built in operators for new types. That's probably a good thing. Operator overloading for non-mathematical purposes seldom ends well.

They stopped before the template system became Turing-complete (I think) which prevents people from getting carried away and trying to write programs that run at compile time.

Overall, code written with this should be readable. C++ templates have reached the "you are not supposed to understand this" point. Don't need to go there again.


Presumably you write

    v := Vector(int){}
Or

    v := make(Vector(int))
Just like with unparameterized types.


They're still fleshing out the details, and this is a great point to bring up. I suspect it may be a shim to make the parser more manageable. I for one would love to see all generics harmonize with the existing syntax for map, chan, etc.


Even the current syntax is a bit incoherent. Would you go with "Vector[int]" a la map, or with "Vector int" a la chan? All things considered, I think the proposed syntax is actually better.


In my opinion, `Vector int` is the more logical syntax. `Vector[int]` to me seems like a vector indexed by int (rather than containing int), much like `map[int]T` is a map indexed by int, and `[5]T` is an array indexed by an integer less than 5.

For the same reason, I dislike `std::array<T, 5>` which puts the range type before the domain constraint. This is inconsistent with `std::map<int, T>` which puts the domain type before the range type, and `[](int x) -> T` which puts the domain type before the range type.


> `Vector int` is the more logical syntax.

This introduces several parser ambiguities, esp. related to nested template types or types with multiple parameters. For instance, does `Foo Bar Baz` read as `Foo(Bar, Baz)` or `Foo(Bar(Baz))`?


Right. For consistency, the old built-in generics should be written

    var v chan(int)
    var m map(keytype,datatype)
    
instead of

    var v chan int
    var m map[keytype]datatype
but, for backwards compatibility, that probably won't happen. Thus, cruft.

Map was always a bit too special in Go. Channels really are special; the compiler knows quite a bit about channels, what with "select", "case", and "go" keywords and their machinery. But maps are just a built in library object.


If you want a dynamically sized vector pointing to a datatype, I think `Vector [] Value` is most "semantically meaningful". And if you want to define your own map, `Map [Key] Value` is most self-explanatory... now that I look at it, I'm starting to feel that this introduces too many special cases and complexity into generic syntax to be useful for the actual language parser. I might stick with that in documentation/comments instead.


I wonder if at least they will add this as an additional way to define them and deprecate the old ones without disabling them.


They won't. Go leans very hard towards "there is exactly one way to do it".


I know they won't but now there are 3 ways to express the same idea!


I agree with your point about syntax - we're still not at the point where you could roll your own map-like type, and I think that should be a goal for this implementation.

And yeah, operator overloading leads to utterly illegible code. Let's not go there!


What is missing for a "map-like type" other than operator overloading ([], []=)?


Not a strict requirement, but hashing pointer values.

In Go, a pointer's integer value may not be stable since the GC may move the underlying data. This doesn't happen with today's runtime, but it may in the future.

Why does this matter? What if I want to make a hash table with the key being a pointer? The built-in map can just go ahead and hash the pointer itself (i.e. a 32/64-bit integer) since it's a part of the runtime and will be updated if the GC is changed. But a user map cannot do this: it would have to hash the actual value itself. Depending on your data, this could be significantly more expensive.

This doesn't stop anyone from implementing a hash table, but it may mean that it can't be as fast as the built-in map.


I don't really understand the question. Is [] an operator?


Yes. "[]" would denote the operator for accessing element of collection associated with key/index k by [k] syntax. Similarly "[]=" would denote operator for storing element under a given key/index.

E.g. in python these would correspond to a custom type implementing https://docs.python.org/3/reference/datamodel.html#object.__... & https://docs.python.org/3/reference/datamodel.html#object.__...


OK, yeah, I get it now.

So if I defined my own map type (e.g. orderedmap[T] that kept things in the order they were added), then I'd need to write code for the [] operator to access an element, is that right?

I guess I'm not really sure how that differs from declaring orderedmap(T) (as it is in the proposal). Why do I need to overload one and not the other?


Maybe it's because I'm a newcomer to Go, but I'm surprised by all of the shade being thrown in the comments here. I think the design doc presents a clean and simple way to implement generics in Go -- and therefore it's very much in keeping with the spirit Go. I especially like the constraints system. Constraining generic types via interfaces just feels "right" for the language. It leans on a familiar mechanism with just enough leverage.

I'm not without concerns, but I'm struck by conflicting thoughts: I share the worry that implementing generics will negatively affect Go program readability, which Go developers (rightfully) covet; when you have a hammer, everything looks like a nail. And yet, I also worry that the Go community is at risk of analysis paralysis on iterating the language in meaningful ways; the novel and innovative spirit that created Go must be harnessed to propel it into the future, lest it die on the vine.


Finally. Better late than never. I have to write a lot of Go code at $BIGCORP and keep reaching for generic helper functions like Keys or Map or Filter or Contains. This is going to make my code much more readable.


Will it really?

Old:

  sum := 0
  for _, x := range xs {
      sum += x
  }
New:

  slices.Reduce(xs, 0, func(i, j int) int {
      return i + j
  })


That's the most compelling example you could come up with as someone who presumably writes Go?

Reversing a slice of T. Copy and pasting this:

    for i := len(a)/2 - 1; i >= 0; i-- {
        opp := len(a) - 1 - i
        a[i], a[opp] = a[opp], a[i]
    }
New:

    reverse(a)
Or generic versions of these: https://gobyexample.com/collection-functions


Yep, this is exactly the use case I was talking about.


This is an argument I've seen played out in other language communities. In the JS/TypeScript world, the current evolving consensus is "most of the functional programming methods improve readability, except reduce, which is usually worse than a for loop."

I think the other replies to your comment point out the improvements that non-reduce-based versions of the code would bring, but I wanted to specifically call out reduce as being a pretty awkward function in other languages as well. So I think it's not a strong argument against generics (and the OP didn't mention wanting it in the first place); by and large functional methods like map, filter, reverse, etc are more readable; reduce is the exception that often isn't.

I wrote a fair bit of production Go code in my past, and missed many functional programming patterns. Didn't really miss reduce though. I'm glad that it seems like the ones I missed will become possible with this generics proposal.

And reduce isn't horribly unreadable, it's just a slight degradation compared to a for loop IMO. And the other functions are a large improvement.


New:

    slices.Sum(xs)


Yeah no real gain for that case, but for something like filter, definitely a win.
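A sketch of what that filter could look like under the draft (the name and placement are my own):

    // Filter returns the elements of xs for which keep returns true.
    func Filter(type T)(xs []T, keep func(T) bool) []T {
        out := make([]T, 0, len(xs))
        for _, x := range xs {
            if keep(x) {
                out = append(out, x)
            }
        }
        return out
    }

    // evens := Filter(nums, func(n int) bool { return n%2 == 0 })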


Here's a four-line replacement for the various `sql.Null*` types in the standard library.

    type Null(type T) struct {
        Val   T
        Valid bool // Valid is true if Val is not NULL
    }
Here's what it looks like in practice: https://go2goplay.golang.org/p/Qj8MqYWWAc3


neat example, thanks. It gives you a generic Null type that has a bool is-valid field. Very similar to the sql.Null{String,Int} types, but you don't have to declare each one. Kinda the point of generics :)


Nice. But I need to retrain myself to read Go code again, because

  func Make(type T)(v T) Null(T) {
      return Null(T){Val: v, Valid: true}
  }
looks really confusing and unreadable to me - too many brackets...


I agree. I assume they avoided the standard <> brackets to keep things simple, but really it just makes it harder to parse visually.


I consider the parens () syntax much more readable than the brackets < >


Well, you can try to craft a generic function that returns a function that takes T and returns something of type T, and see how many () you gotta get...
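For instance, here's a hypothetical helper that (as far as I can tell) is legal under the draft; the type-parameter list, the value-parameter list, and the returned function each contribute their own parentheses:

    // Wrap returns f unchanged, purely to demonstrate the pileup.
    func Wrap(type T)(f func(T) T) func(T) T {
        return func(v T) T { return f(v) }
    }

    // With explicit instantiation, the call site stacks even more:
    //   inc := Wrap(int)(func(x int) int { return x + 1 })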


I am glad contracts are being dropped. It was very confusing indeed. Thanks for listening to the community.


Just the concept name is dropped, the concept itself is still there.


Unlike other commenters, I don't think it's too late to add generics to Go. I'm looking forward to this.

My only negative reaction is that throughout these proposals, I've not been a fan of the additional parentheses needed for this syntax. I can think of several syntaxes that would be more readable, potentially at the expense of verbosity.


You just can't please everyone.

I think that there are benefits when taking the time to gather feedback from people who actually use the language and make educated decisions with regards to language features instead of shoving them down in v1.



I'm usually a bit of a skeptic when it comes to Go, but this proposal surprised me; for the most part, it seems like this is exactly what I would want if I were a Go programmer. Being able to define both unbounded generic parameters and parameters bounded by interfaces is key, and while from skimming I'm not sure if it allows giving multiple interfaces as a bound, in practice this shouldn't be a huge issue due to how Go's interfaces are implicitly implemented (which would allow you to define a new interface that's the union of all the other interfaces you want to require, and all the types that implement all of them will automatically implement the new one). Interestingly, the proposal also defines what is essentially a union type interface, which I think is something that Go definitely could use. Although there are a couple of minor warts (e.g. not being able to have generic methods) and the syntax is not what I'd personally pick, overall I'm pretty impressed.
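To illustrate the union-by-embedding trick, here's a sketch (the interfaces are invented for the example, and it assumes `import "fmt"`):

    type Stringer interface {
        String() string
    }

    type Sizer interface {
        Size() int
    }

    // Because Go interfaces are satisfied implicitly, any type with
    // both methods automatically satisfies this combined constraint.
    type SizedStringer interface {
        Stringer
        Sizer
    }

    func Describe(type T SizedStringer)(v T) string {
        return fmt.Sprintf("%s (%d bytes)", v.String(), v.Size())
    }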


As someone who doesn't enjoy Go but is increasingly having to deal with it at work, I'm very happy with it.


“A type parameter with the comparable constraint accepts as a type argument any comparable type. It permits the use of == and != with values of that type parameter”

I think that’s an unfortunate choice. Quite a few other languages use the term equatable for that, and have comparable for types that have =, ≤ and similar defined. Using comparable here closes the door for adding that later.

I also find it unfortunate that they chose

  type SignedInteger interface {
    type int, int8, int16, int32, int64
  }
as that means other libraries cannot extend the set of types satisfying the constraint.

One couldn’t, for example, have one’s biginteger class reuse a generic gcd or lcm function.
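To make that concrete, here is a sketch of such a gcd under the draft; a user-defined big-integer type can never satisfy the constraint, because the type list is closed:

    // GCD works for any type in SignedInteger's closed type list.
    func GCD(type T SignedInteger)(a, b T) T {
        for b != 0 {
            a, b = b, a%b
        }
        if a < 0 {
            return -a
        }
        return a
    }

    // GCD(12, 18) == 6 for int, int64, etc., but a hypothetical
    // BigInt type can never be passed here, even if it supports
    // equivalent operations through methods.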


Languages have been known to use “Ordered”, “Orderable”, or “Ord” for what you’re calling “comparable”. Rust and Haskell are both languages that fit this description.

The design draft refers to “constraints.Ordered”, so they’re definitely thinking about having both “comparable” and “constraints.Ordered”. Although, for consistency with “comparable”, I think it should be called “constraints.Orderable”.


That’s true, but both rust (https://doc.rust-lang.org/beta/std/cmp/trait.Eq.html) and Haskell (https://hackage.haskell.org/package/base-4.12.0.0/docs/Data-...) use eq or similar for what this proposal calls comparable, as do C# (IEquatable), java (through Object.equals), and Swift (Equatable)


Sure, but as someone else already pointed out, “comparable” is an established Go term that refers to “==“ and “!=“ only, and “ordered” refers to the other comparison operators.

My point was that “comparable” is not universally used in place of the “Ordered” term that the Go team is using, as you were seemingly implying. Ordered is a perfectly fine term for it.

You said:

> Using comparable here closes the door for adding that later.

But the door is not closed in any way. It’s just called “constraints.Ordered”, which is perfectly reasonable.


In my opinion, ordered or orderable is a better name than comparable for types implementing (<=). It evokes more of a "totally ordered" vibe than just saying "comparable", which is what we actually want in these types (in order to sort them and so on).


FWIW, “comparable” and “ordered” are well defined terms in the Go language specification:

https://golang.org/ref/spec#Comparison_operators


The vocabulary from that language spec is not even followed consistently in the language's own standard library.

The `strings.Compare`[1] function is used to establish ordering, in the spec sense. You'd think they would name it "Order".

Similarly, the popular go-cmp[2] library provides `Equal` for equality, instead of `Compare`.

[1]: https://godoc.org/strings#Compare [2]: https://github.com/google/go-cmp


So what?


This is the new code branch: https://go.googlesource.com/go/+/refs/heads/dev.go2go

I can't find the actual code review for this. It seems to be hidden/private still?

This was the previous code review: https://go-review.googlesource.com/c/go/+/187317

The comment at the end is where I got the link to the new branch, but as an outsider, I don't have any good way to ask where the new code review is, so I'm leaving this comment here in hopes that a googler will see it and point me in the right direction.

Based on a link that's on the new branch's page, it might be CL 771577, but it says I don't have permission to view it, so I'm not sure.


There isn't a code review for the changes on the dev.go2go branch (though you could construct one using git diff).

The dev.go2go branch will not be merged into the main Go development tree. The branch exists mainly to support the translation tool, which is for experimenting with. Any work that flows into the main Go development will go through the code review process as usual.


In my mind, this generics flip-flop will do no good for the long term survival of Golang. One way to view it is as "listening to the community and changing" but I think a change this big signals a fracture more than an evolution.

Think about how much planning and foresight goes into making a programming language, especially one that comes out of a massive top-tier tech company like Google. They deliberately chose not to include generics and (from what I remember when I wrote Go code) spent time + effort explaining away the need for them. Now the decision is being reversed, and it signals to me that they don't know what they're doing with the language, long term. As a developer, why do I want to hitch my cart to a language whose core visions are very clearly not correct or what people want?


Or alternatively, they took forever to do it not because they don't like generics, but because they are super conservative with design and wanted to do it right.


Is there anything about this proposal that would have been surprising to someone 5 years ago? Anything to show for waiting most of the decade other than a decade lost?


Judging past work by the "obviousness" of the final solution is shallow, juvenile, and dismissive. Every problem is easy when viewed through the lens of a nearly-complete solution. There have been a wide variety of published proposals for generics in Go [1], each of which seemed plausible but had some insufficiency. Who knows how many proposals were conceived but never left the developer's machine.

If it's so damn obvious/you're so brilliant where's your proposal dated Jun 2010 (your 'decade lost') that "solves" generics?

[1]: https://github.com/golang/go/issues/15292


It is surprising to the people who have been feverishly trying to add generics to Go, with references to their efforts dating back even before Go1.

It may not be surprising to everyone. Trouble is that the people who are not surprised now are the ones who sat back and only watched, preventing their knowledge from making it to the project.

Luckily for Go, a small team of domain experts decided to do more than sit back and their efforts are how the latest proposal was reached.


From very, very early in the project-

"Generics may well be added at some point. We don't feel an urgency for them, although we understand some programmers do."

...

"Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it."

...

"The topic remains open"

They haven't flip-flopped whatsoever. Now that the language has more thoroughly been fleshed out and community has matured, and various proposals have come and gone, the discussion continues.


There's a tension in languages between making it easier to write many layers of libraries versus making it easier to write applications.

Languages that want libraries that go ten layers deep need to carefully manage things with sophisticated type systems and compilers that can cut through and inline all the abstractions so it still runs fast. Think rust, C++, haskell (all of which, not by coincidence, have long compile times).

Languages for writing applications and solving problems tend to have a flatter dependency tree and wider standard library (often including practical functionality like JSON parsers). Think erlang, Perl, PHP.

I used to think Go was in the latter category, but it seems to be getting pulled into the former category. Perhaps that's a reflection of Go developers using more layers of abstraction than they have in the past?


Go benefits from a continuous flattening phenomenon due to how interfaces and types interact. The determination of whether a type implements an interface is done at the interface use site rather than the type declaration site (as in C#, Java, etc.)

I don't think Go developers are using more layers of abstraction than they have in the past; they aren't smarter or more sophisticated than they were before -- Raft was written in Go, for example.

Your observation is a good one, however; are there other ways that Go idiomatically flattens type hierarchies?


"they aren't smarter or more sophisticated than they were before"

To be clear, I was not suggesting that more layers of abstraction reflect more sophisticated developers. Just that more layers require something in the language to help developers manage those layers without going insane.

There's a lot of wisdom in choosing a flatter approach when it makes sense.


In your opinion, what languages do have effective tools for managing high towers of abstraction? I'm genuinely curious, having spent many nights building ontologies.


C++ and Rust come to mind. C++ can certainly go awry, but it does seem to have the tools available, even if they might be hard to use. Rust still has a few things to prove, but things are looking fairly good. Both languages can express constraints (e.g. type constraints) so that you don't have to make too many assumptions, and you know when APIs change in subtle ways. And both languages use inlining heavily to get "zero cost abstractions".

I'll include ruby as well. It doesn't have a lot of "tools" per se, but if you're working on a nice codebase, it works out well even if using a lot of gems. Part of this is probably the unit testing/mocking, and part of it is that the code is just smaller. It does nothing to help you with performance though, so it can get really bad.

My experience with Java has not been great. I'm unconvinced that inheritance hierarchies are a good idea at all, or much of the other ceremony around Java-style OOP. Java uses JIT compilation, which can help with performance a lot.

If by "ontology" you mean "hierarchy", I'd caution you against that approach. Hierarchies enforce an order that is just not there in many cases. For instance, if you have an iPhone, would that be Hardware/Apple/iPhone or Apple/Hardware/iPhone? An iPhone is both Apple and Hardware, but there is no real order to those two -- yet if you try to put them in a hierarchy, then you must choose (and the choice will matter). I think interfaces/traits are much better. Both Go and Rust got this right.


Good insights. Sorry, when I said "ontology", I was referring to the Web Ontology Language (OWL).


Neat, thrown together Set implementation https://go2goplay.golang.org/p/FffAhV8aLyN


Might as well use the zero-size struct{} instead of bool, given that you’re not currently using the bool values for anything: https://go2goplay.golang.org/p/9iegVQ2VQCr

Alternatively, use the bool values and rely on the zero value for missing elements: https://go2goplay.golang.org/p/E7yHQqAPseG


And variadic args to NewSet looks a bit nicer yet at the call site: https://go2goplay.golang.org/p/fTp3W9IwuVP


Is there a way to do this without losing range?


The proposal is, for better or worse, quite clear about not providing operator overloading or anything similar in the near future.

You could always offer a method that returns a slice of unique elements, which could then be ranged over.


I don't think there's nearly as much of an appetite for operator overloading as there is for an iterator protocol for range. That isn't to say that the latter wouldn't have a lot to argue over (internal vs. external, etc.) but I think the general usefulness of such a thing would be widely accepted by the community once generics are a settled matter, while I doubt that will ever happen for operator overloading.


I'm not sure that hiding the implementation is worth it here. Why not just make it public that it's actually map[T]struct{} underneath - then anyone can range on it and implement allocation optimisations, etc? I added some other methods too for the crack https://go2goplay.golang.org/p/EI1hYaSohnc


Currently I just want some form of genericity, regardless of which one, but I fear we will go through yet another cycle of community feedback and by August 2021 we'll still not be there.

However, not to be just negative: I will certainly take the opportunity to provide my positive feedback as well, so I'm looking forward to playing with go2go.


I'm sympathetic to being somewhat impatient; I'd really like to have generics in Go years ago too. But I'm also glad they didn't just plow ahead with the first draft of the proposal that came out; the way contracts worked was horrible, and the slow deliberative process means that we don't actually have to deal with those mistakes. Unfortunately, good design takes time.


Me too. A bad generics implementation is profoundly worse than no implementation at all. This angers people who have minimal experience with Go, and it certainly seems counterintuitive until you understand that the way the community writes its code affects your experience: whether it's just a different paradigm or a conflagration of paradigms, whether it's a little abstraction or gratuitous abstraction, etc.


Yeah, glad that work is being done. But surprised to see this status update be that nothing substantial has changed in the last year.


They've reworked the draft, written a generics-capable type checker, a translator tool that lets you use the draft in the current version of Go, and published a generics-capable version of the playground. How is none of that substantial?


The Featherweight Go paper also just recently came out, which is an important step:

https://arxiv.org/abs/2005.11710


Prefer no generics in Go. Like Go simplicity. Don't want all languages to merge into single grey goo form. Want strong trade-offs made.


Why use many words when few do?


For one, not everyone is fluent in English. I guess it could also be a low-bandwidth input situation.


I'm pretty sure shpongled is just quoting Kevin from The Office: https://www.youtube.com/watch?v=VvPaEsuz-tY



It looks like this will make Go a significantly more complex language.

Despite Go's success over the last 10+ years, have its designers concluded that the language is too simple? Is there no place for a simple statically-typed language like current versions of Go?

If not, then for those of us who particularly want a simple language, is dynamic typing the only option?


This is a very conservative, simple implementation of generics. The vast majority of its impact will be to make the code devs deal with on a day-to-day basis simpler to use. It will reduce the necessity for parts of Go that are already complicated, like go gen and reflection.


The only time I get frustrated by the lack of generics in Go is when trying to get keys from a map... I can't have a single `keys(map[string]T) []string` function.


I started to write one in the "Go 2 playground", but then realized they had that exact example in the draft design:

    func Keys(type K comparable, V interface{})(m map[K]V) []K {
        r := make([]K, 0, len(m))
        for k := range m {
            r = append(r, k)
        }
        return r
    }
Usage:

    m := map[int]int{1: 2, 2: 4}
    k := maps.Keys(m)
    // Now k is either []int{1, 2} or []int{2, 1}.



For certain values of can.


Never said it was a good idea. It's inefficient, unsafe and hard to read, but as far as I can tell any generic function with concrete return types can be written using reflect (and any generic function with generic return types can be written too; it just requires the caller to manually cast from an interface{} to the correct type).

So, that's an interesting thing.
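For the curious, a reflect-based keys helper looks roughly like this (a sketch, assuming `import "reflect"`; it trades the compile-time checks of generics for runtime panics):

    // Keys returns the keys of m, which must be a map with string
    // keys. reflect panics at runtime if m is not a map at all.
    func Keys(m interface{}) []string {
        v := reflect.ValueOf(m)
        keys := make([]string, 0, v.Len())
        for _, k := range v.MapKeys() {
            keys = append(keys, k.String())
        }
        return keys
    }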


This is an awesome use of the reflect package!


The Ada language has had generics since the first 1983 standard. When C++ came about, it introduced OOP features that Ada lacked and didn't get for another 12 years, when the 1995 standard was introduced. C++ users went without generics for many years and the language became ubiquitous, while Ada remained marginalized. What's interesting is that all strongly typed languages have now jumped on the generics bandwagon, which to me shows that being ahead of its time doesn't pay off.


The first edition of Stroustrup is the only version without templates (and exceptions). I remember awful macro workarounds for C++ compilers with broken template support, so I think everyone knew that some kind of generics were obviously needed. Ada got this and a few other things right, but it was painfully verbose and the first compilers were so expensive that few seriously evaluated it.


If I had to pick a language that got nearly everything right and shouldn't have failed, it'd be Eiffel. But it was also priced more like a truck than a tool until too late (at a time when programmers didn't generally cost six figures!)


Typical of Go's verbose style, but even that is better than nothing (example from the doc):

    s := []int{1, 2, 3}
    floats := slices.Map(s, func(i int) float64 { return float64(i) })


    Type inference for generic function arguments
    This is a feature we are not suggesting now, but could consider for later versions of the language.


This is really sad; generics add a ton of complexity and get abused over and over in many codebases. Sure, if you have a competent team you can stop the bleeding, but it'll be all over the place in your dependencies.

Every developer who discovers generics has this urge to be clever and write things that are hard to read and maintain.

Reading a Go codebase on the other hand is really a pleasure, this is due to the simplicity and straight-to-the-point aspect of the language, as well as the compile times. I really think Go had found a no-bullshit niche that it catered really well to, and I'm scared of what Go post-generics will look like.

Are there any other languages like Go that have no plans to move to generics?

PS: beautiful quote from rob pike: https://news.ycombinator.com/item?id=6821389


Agree that it adds complexity - nobody wants a repeat of Java enterprise apps from years ago. In the world of scientific computing and recommendation systems, it's a godsend. Writing your own sort function and your own min function gets old, and you're tempted to return to the world of Python and NumPy. This brings Go onto much better footing with some basic generics functionality.


"My late friend Alain Fournier once told me that he considered the lowest form of academic work to be taxonomy"

That's really unfortunate. I think a lot of the point of science (maybe even the entire point?) is to model the world and create taxonomies. Instead of being considered "busy work" or tedious, it should be held in the highest regard. Similar reasoning is why I think Rob Pike's opinion on generics is exactly wrong.


I actually had never read that Rob Pike quote and it's really baffling to me. It's not about generics at all! It's hard not to read it as saying "generics are recommended to me by the same weirdos who like inheritance, and I don't like inheritance, so generics are BS too".


There's C. C11 technically has a form of generics, but it's only really useful for math functions. I think you're safe from ever having parameterized collection types in common use over there.

Go was already an oddball in not having generics when it came out, so I think it's unlikely that any new statically typed language will ever become popular. They're just too useful.


This saddens me :( as a reviewer of code I found Golang to be extremely easy to read due to the lack of expressiveness. The most secure codebases I've seen are all in Golang (or are written by strong engineers).


While I've learned to work without generics, this is a welcome proposal. Many cases where we've been forced to use interface{} + cast could be genericized.


Is there a reason Go is not using <> instead of ()?

So instead of

    func Print(type T)(s []T) { for _, v := range s { fmt.Print(v) } }

we would get

    func Print<T>(s []T) { for _, v := range s { fmt.Print(v) } }

?


The angle brackets increase parsing complexity because `>` is a valid standalone operator (which neither ] nor ) is, usually) making the grammar ambiguous: given

    Foo<Bar>
You still don’t know whether this is a bunch of comparisons or a parametric type, meaning you need either feedback from the type checker into the parser or possibly unbounded lookahead before you can even generate a parse tree.
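The design draft motivates this with an assignment along the lines of

    a, b = w < x, y > (z)

which could be parsed either as two comparisons (`w < x` and `y > (z)`) or as a call of the instantiated function `w<x, y>` with argument `z`.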



Apart from other reasons, it just looks ugly. Any kind of braces is better than <>.


The anti-generics folks in the Go community are pretty prevalent still--meanwhile the Go community frequently lauds Go's strong type system. I suppose it's possible that these aren't the same people, but it seems more likely to me that many Go users don't know the difference between strong types and static types, and think that all static types are strong types.

Without generics, Go does not have a strong type system. It has a type system that verifies just enough to force you to work around it to do things that require generics, meaning that you don't get the benefits of verification because you've worked around it. You get the development speed of a statically-typed language with the type guarantees of a weakly-typed language: the worst of both worlds.


A litmus test I'm excited about is writing a less clunky linear algebra / numerical optimisation package.

Gonum is good, but it can feel like a lot of the powerful abstractions in numpy and Eigen are difficult to replicate.

Whilst numpy and Eigen do "turn the magic up to 11" just a bit too much for my liking, I do like the idea of things like this:

    type Scalar interface {
        type float32, float64, int, int32, int64
    }

    type DenseMatrix(type T Scalar) [][]T

    func Identity(type T Scalar)(size int) DenseMatrix(T)


etc.


> Second, we know that many people have said that Go needs generics, but we don’t necessarily know exactly what that means.

Seems like the Go authors are still bent on the idea that generics are not really necessary and that there is no problem that can only be solved by generics. Then is the generics draft only a thing because they have been pressured into it by the community? I wonder why Rob Pike one day decided that it's okay to change a language he almost vowed never to change after the initial release.


What’s funny is that the stdlib makes extensive use of generics


"For me and not for thee"


This is going to give rise to a utility-belt of sorts, which is what I'm really looking forward to. There are many collections and common utilities that we've all been hand-writing for a while (e.g. waiting for Set(string) instead of map[string]struct{}, or a TreeMap(string, Thing) instead of maintaining sort order separate or using reflection for an existing lib).

> Methods may not take additional type arguments

While I can fathom the complications that arise, I'm hoping they can work past this restriction by launch.
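As a minimal sketch, the Set case from above might look like this under the draft (method set kept deliberately small):

    type Set(type T comparable) map[T]struct{}

    func NewSet(type T comparable)(vs ...T) Set(T) {
        s := make(Set(T))
        for _, v := range vs {
            s[v] = struct{}{}
        }
        return s
    }

    // Has reports whether v is in the set.
    func (s Set(T)) Has(v T) bool {
        _, ok := s[v]
        return ok
    }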


I wish they would just add macros instead and then you can effectively code generate at compile time any specific generic-ness that is needed.


Macros are not a substitute for parametric polymorphism ("generics"). With lisp-style macros you could easily implement something like C++ templates, but that's different in at least one critical way: with C++ templates, the templates are expanded before typechecking, whereas with proper generics, typechecking happens while the type parameters are still opaque. The former has several disadvantages:

- It gets you really garbage error messages, because the type errors are usually about something in the body of the template, wherever it tries to do something that the type you substituted doesn't support, rather than at the offending call site.

- It hurts compile times (above and beyond proper generics), since you need to type check the generic function at every call site, rather than just once.

- It makes it easy to break interfaces, because exactly what is required of a type parameter isn't written down anywhere -- it's just whatever the body tries to do with it.

(Though it is also true that generics are certainly not a full substitute for macros. I would welcome some mechanism for doing codegen that didn't complicate the build system and was a bit lighter weight than what we have now).


C++ templates are not expanded before type checking. Template instantiation is type driven. In fact you can even conditionally expand a template on whether an arbitrary expression typechecks. You wouldn't be able to do it if templates were a different phase.

Only early extremely non conforming compilers had macro-like template expansion.

Edit: although calling templates type level macros wouldn't be completely wrong.

Edit2: a better definition is that templates are type level functions: they take compile time values, types and other type functions and return values, types and type level functions.


Fair enough; I suppose I was oversimplifying. I think the broader point I was trying to make (in addition to the specific downsides I mentioned) still holds though.

The Template Haskell paper has some interesting things to say on the nature of C++ templates:

https://www.microsoft.com/en-us/research/wp-content/uploads/...


A proper macro system is arguably much harder to get right than generics.

Unless you mean preprocessor macros in which case I'd rather cut myself.


You can use `go generate` with the AST package and get the same thing.


Rob Pike is not mentioned in the blogpost or in the sources the authors acknowledge as contributors to this project.

What is he up to nowadays?


He still commits code and responds to Github issues occasionally, and I think it's safe to assume that he's still participating in the internal discussions about the design. But most decisions about language changes these days are made by Russ Cox, Ian Lance Taylor, and Robert Griesemer. At least, that's the impression I'm getting.


That jibes with this article I found:

https://evrone.com/rob-pike-interview

> Rob: Although it's far from certain, after over a decade of work it looks like a design for parametric polymorphism, what is colloquially but misleadingly called generics, is coming in the next year or two. It was a very hard problem to find a design that works within the existing language and feels as if it belongs, but Ian Taylor invested a phenomenal amount of energy into the problem and it looks like the answer is now in reach.

Sounds like he took a welcomed step back. Hopefully the new leaders can deliver.


Ian Lance Taylor has been involved in the language for a long time, since 2008. Here he is in 2017, talking about the built-in function append and why it shouldn't really exist:

https://www.airs.com/blog/archives/559


that's pretty sad; we probably wouldn't have to deal with generics in Go 2 if he were still around :(



I think he was the one to ask Phil Wadler to help out on the Featherweight Go formalization work.


The draft design document is very readable too: https://go.googlesource.com/proposal/+/refs/heads/master/des...

I'm rather excited..


I like this, here's a Sum function that supports all number types:

https://go2goplay.golang.org/p/hPDnvRa-Bif
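For reference, the shape of such a function under the draft is roughly this (my own sketch, not necessarily identical to the linked playground):

    type Number interface {
        type int, int8, int16, int32, int64,
            uint, uint8, uint16, uint32, uint64,
            float32, float64
    }

    func Sum(type T Number)(xs []T) T {
        var total T
        for _, x := range xs {
            total += x
        }
        return total
    }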


Now the abstract sequence type of "LINQ in Go" https://github.com/nukata/linq-in-go can be written as

  type Enumerator(type T) func(yield func(element T))
and the "Select" method can be written as

  func (loop Enumerator(T)) Select(f func(T) T) Enumerator(T) {
        return func(yield func(T)) {
                loop(func(element T) {
                        value := f(element)
                        yield(value)
                })
        }
  }
You can call this method with type-safety as follows. Yay!

  func main() {
        squares := Range(1, 3).Select(func(x int) int { return x * x })
        squares(func(num int) {
                fmt.Println(num)
        })
  }
  // Output:
  // 1
  // 4
  // 9
See https://go2goplay.golang.org/p/b0ugT68QAy2 for the complete code.

And, for generality, you should write the method actually as follows.

  func (loop Enumerator(T)) Select(type R)(f func(T) R) Enumerator(R) {
        return func(yield func(R)) {
                loop(func(element T) {
                        value := f(element)
                        yield(value)
                })
        }
  }
However, you will get the error message then:

  type checking failed for main
  prog.go2:17:33: methods cannot have type parameters
According to https://go.googlesource.com/proposal/+/refs/heads/master/des... this seems an intended restriction:

> Although methods of a generic type may use the type's parameters, methods may not themselves have additional type parameters. Where it would be useful to add type arguments to a method, people will have to write a suitably parameterized top-level function.

> This is not a fundamental restriction but it complicates the language specification and the implementation.

For now, we have to write it as https://go2goplay.golang.org/p/mGOx3SWiFXq and I feel it rather inelegant. Good grief


The "we still can't abstract over zero values" section is only really solvable with ad hoc polymorphism (type classes)


Can `Option(type Value)` and some sort of `option.Unwrap` now totally replace `(value, error)` and `err != nil` checks? :D


What a joke. Go devs spent a decade telling people they were waiting for the right design for generics to emerge. Finally, after a decade, they deliver to us... Java-lite generics, which are going to require the community to do a massive library-ecosystem retooling for the next five to ten years, similar to how Java felt going from pre-1.5 to 1.5 and beyond.

This language is such a frustrating mess at times.


I'm not sure what you expected. With the exception of Ian Lance Taylor, the Go team has expressed skepticism (and, in the case of Rob Pike, outright disdain) about generics since day one.

To anyone who has followed the Go development process at all, this proposal will not be a surprise. The previous proposal was also extremely conservative. Personally, I don't see any other way. Going full-blown generics in the style of Rust/Haskell/OCaml, for example, would drastically alter the language.

This is closer in scope to, say, Modula-3.


I'm more frustrated that it took a decade to get to this point than that the result is not more powerful.


Then why do you use it? I'd rather have no generics in Go, maybe you can try Rust?


Because I have a job that pays good money working in Go. Doesn't mean I have to be happy with it.


Interesting; the reason they chose Go is probably its no-BS aspect. But forcing Java/C++ people to work with it actually makes the language worse. Interesting.


I've been writing a lot of Rust lately after writing Go for 7 years now (unrelated, but that is incredibly odd for me to write out. It still feels like that funny new language that came out of Google). I've always said that generics are overhyped (in Go), but I find that I write a lot of generic stuff in Rust, which is somewhat surprising. For example, we have a JSON API that returns data in the format '{"ok": bool, "data":...}'. In Go I might do something with json.RawMessage and pay for decoding twice, or annotate every struct with Ok/Data. But in Rust I can just have a type ApiResponse<T>. Then I can have functions that operate just on ApiResponse, or ApiResponse<T=Users>. This problem is solvable in Go, but in a different way and with fewer guarantees. However, that power comes at a mental cost, one that creeps into C++ as well. I spend more time playing type-system golf, trying to come up with the optimal type for whatever use case. In Go I might just "do the work", but in Rust I've turned 5-minute functions into 30-minute API design thought exercises. The jury is out on whether that's a good thing or a bad thing.
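For what it's worth, under this draft the Go version of that wrapper becomes expressible too; a sketch, with field tags guessed from the description above:

    type ApiResponse(type T) struct {
        Ok   bool `json:"ok"`
        Data T    `json:"data"`
    }

    // Functions can then operate on the envelope generically:
    func UnpackResponse(type T)(r ApiResponse(T)) (T, bool) {
        return r.Data, r.Ok
    }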

That said, the only feature I'd steal from Rust is sum types and getting rid of null. `nil` interfaces are the only language feature to actually cost me money and Option<T> is just better in every aspect, and Result<T> is much better than err == nil. I'd be perfectly happy with compiler blessed Option and Result types even if the larger language didn't support generics.


The cost you pay for appeasing the type system is the cost you're not paying writing more tests around corner cases ("this function never returns null"), debugging segfaults, null pointer exceptions, "attribute not found" errors, memory leaks (lifetimes help here), etc.

The type system may not always help you get your code out the door faster, but when you finally ship it, you have much easier time running it.


>The cost you pay for appeasing the type system is the cost you're not paying writing more tests around corner cases

I have spent a lot of time in Rust getting carried away with type system stuff that is not required for the actual program I am writing. I realize this when I go over it a second time and reduce line counts drastically by replacing my gee-whiz generics with primitive types. This is balanced out by an equal number of times where I've reduced line counts drastically by using the advanced type stuff to reduce the repetition that comes along with using primitives to represent a type that has way more structure than the primitive.


I feel obsessing over types is often just a form of procrastination, as it feels more interesting than the real work that needs to be done. This seems to be a bigger issue in Rust because the type system is powerful.

I write a lot of Rust nowadays, so I often need to keep myself in check and make sure I'm not getting sidetracked. When I'm writing internal code, the priority is to just get it done but still avoiding shortcuts. This is mainly avoiding .unwrap() / .clone() / taking care to handle Result correctly, as well as regularly running clippy to pick up the obviously silly stuff.

I find the most painful part of this strategy is when you want to revisit completed code to reduce copying as that generally requires a lot of type changes. At least when you get to this point though and haven't taken shortcuts, you've got functioning software and it should be fairly fault tolerant.


> I feel obsessing over types is often just a form of procrastination, as it feels more interesting than the real work that needs to be done. This seems to be a bigger issue in Rust because the type system is powerful.

This is interesting, because I generally feel like getting the types right actually IS most of the work that needs to be done, and once the types are correct the implementation code just follows from the available operations necessary. I've never used Rust, so perhaps it's different in this language (I understand memory management is quite onerous there?), but this is how things feel for me in C# or TypeScript.


Try Haskell, where the type system's ceiling is so high that you can spend a lot of time trying to golf an interface until you are happy that it is maximally elegant. Banal is better in some regards, hence Go. But it's always a trade-off between pushing your boundaries and getting stuff done.


This misses the point of why expressive type systems matter. Beginners and hobbyists (and I'm using these labels in the best possible sense) play with these things, push boundaries, etc... Professionals (in the literal sense, i.e. people getting paid for it) tend to just solve problems whilst being fully aware that they have many powerful tools in their arsenal if/when the need arises. If you have a pro who is constantly lost in type-golfing land, then that's a problem.


Check out type-driven development (there are some good F# example blogs) - honestly it's like TDD on steroids. Having no type system scares me now, having worked in a Python startup before moving to a Haskell shop and seeing the difference in development quality (speed is surprisingly similar too).


I think we are on the same page. I also write Haskell for food. What I'm saying is that if you write Haskell professionally and types slow you down, ie. you always try to come up with the best possible abstraction for everything, then that's a problem. It should do the opposite: give you wings while prototyping, iterating fast and refactoring(!!) as well as great comfort knowing that you have all its amazingly powerful tools in your arsenal if and when you need them.


That seems to idealize professionals quite a bit. In reality, professionals are capable of much pedantry, not to mention political posturing. Some languages give greater scope for these than others.


If you just don’t need the assurances, that seems true.

In my world, anything the type system doesn’t do for me, I must do myself in unit tests.

It’s actually possible to learn a type system and get good/fast at it. But typing out all those goddamn test cases never gets any easier, and updating them when the code changes is a nightmare.


Go is making the wrong trade-offs.

My example for something that's simpler than Haskell but useful would be OCaml.


Name some OSS:

- Written in Haskell

- Not a tool for manipulating Haskell

- With a substantial user base

- Other than Pandoc, xmonad, or shellcheck

Partial credit if you can name closed source software instead.

For Go, off the top of my head, Docker/K8s would qualify, but if we want to eliminate that, Hugo, Caddy, Gogs, etcd, fzf…

https://github.com/search?q=stars%3A%3E1000+language%3AGo&ty...

https://github.com/search?l=&o=desc&q=stars%3A%3E1000+langua...


Huh? What does Haskell's (lack of) adoption have to do with any merits of Go? PHP, Visual Basic (perhaps?) and Javascript have even better adoption than Go, if you want to go down that route.

If anything, you should have asked about the prospects of OCaml, shouldn't you? (Spoiler alert: they are grimmer than Haskell's.)

Mostly, Go is full of wasted opportunities. It would have perhaps been a worthwhile language had they come up with Go instead of C in the 1970s. https://github.com/ksimka/go-is-not-good is a good summary.

I would suggest having a look at D as an example of what an evolution of C could look. An evolution with fewer wasted opportunities.


Funny, I also always claim D is "the better Go". It has a very similar compilation and runtime model, but D is a much more powerful language; it's not "dumbed down to the max". For people who like Go's runtime model but don't see "simplified at all costs" as an advantage, D could be just right.


OCaml is used, for instance, as a main language in Jane Street which is a top prop trading firm. They also are the main sponsors of the language and contribute a lot to the compiler, but that doesn't invalidate the point.


Yes.

We also used OCaml at Bloomberg quite a bit. (And they are still using it, but I'm not there anymore.)

But there's not that much more in the commercial OCaml world.

Haskell has actually a bit of a broader adoption. The open source library situation is also better.

(For the record, I like both OCaml and Haskell. And Jane Street are doing a great job.

And commercial and open source adoption are important. But they are not the end to every discussion.)


[Hasura GraphQL Engine](https://hasura.io/)


Git-annex, Darcs, PostgREST. I might have cheated and used Google, but those are all projects I've heard of before. And it's not really fair to exclude big projects for Haskell, but not for Go.


I explicitly excluded the big Go projects of Docker/K8s.

I think I've heard the name darcs, but have no idea what it is or does. I may have heard of postgrest but I think the whole category of a thin layer over the DB is dumb, so I refuse to learn about it. :-) (If you want to talk to your DB, use the native SQL drivers. Don't just invent the same thing but as REST or GraphQL for no reason. Backend system should connect SQL to SQL. If you want a browser to be able to talk to your DB, then you will need auth and want bundling of calls and object translation and suddenly the thin translation layer isn't thin anymore.)

Anyway, the point is that Haskell has been around for a long time and is very popular with HN/Reddit users, but unlike Go and Rust, it has produced a very small amount of OSS.


Darcs is a distributed version control system. It predates git.

I tried using darcs for a bit. But the designers were a bit too ambitious: they had an elegant concept to solve all rebases automatically, at least in principle. Alas, the early implementations sometimes ran into corner cases that had exponential runtime. Which in practice meant that it hung forever, for all a user could tell.

As far as I can recall, that behaviour is fixed now. But they missed their window of opportunity, when other dvcs became really popular. Like git.

Interestingly, git had the opposite philosophy: they explicitly only resolve simple conflicts in a simple way, and bubble up anything slightly more complicated to the user.

About SQL: Haskell users wouldn't want to use SQL directly. They want their compiler to yell at them when they get their database interactions wrong.

So it's not so much that they want a layer on top of SQL that hides things; rather, they want support for forbidding nonsensical SQL queries.

Interestingly, at one of my previous jobs we had 'relational object mappers' in our Haskell code. What that means is that we used relations as data structures inside our Haskell code, but had to interact with a mostly object oriented world on the outside.

Relations make fabulous data structures for expressing business logic. Take the leap in expressivity coming from eg dumb C to Python's ad-hoc dicts and tuples, and then imagine making a similar step again, and you arrive at relations.

Especially the various kinds of joins are good at expressing many business concerns. But projections and maps etc as well. Basically, everything that makes SQL useful, but embedded in a better language and just as a datastructure.

Codd's paper on relational algebra (https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf) is just as relevant to in-memory programming as it is to databases.

> Anyway, the point is that Haskell has been around for a long time and is very popular with HN/Reddit users, but unlike Go and Rust, it has produced a very small amount of OSS.

The ML family of languages that Haskell is a part of has been around for even longer, and many of the beloved features of Haskell stem from that older legacy.

I don't have high hopes of the ML family becoming more mainstream. But I am really glad to see many advances born in the land of functional programming, and ML or Haskell in particular, making it out into the wider world.

The poster child is garbage collection. Which was invented for Lisp in the first place.

(You can do functional programming without garbage collection, but it requires much more finesse and understanding than they had when Lisp was young. And even then, garbage collection is probably still the right trade-off when your programme is not resource-constrained.)

Garbage collection is pretty much the default for new languages these days. People expect a good rationale for any deviation.

More recently we have seen first class functions, closures and lambdas make it out into the wider world. Even Java and C++ have picked up some of those.

First-class functions relate to a wider theme of 'first class' elements of your programming language. I remember that eg in Perl you had to jump through extra hoops to make an array of arrays. That's because arrays were not by default treated the same as any other value.

I think Python did a lot for the mainstream here: Python is pretty good at letting you assign many of its constructs (like classes or functions or ints or tuples etc) to variables and pass them around just like any other value. In the understanding of the Python community, they see that just as good OOP, of course.

Combinators like map and filter have become popular in mainstream languages.

Algebraic data types have made it to a few languages. With that comes structural pattern matching, and the compiler complaining when you miss a case.

Tuples are pretty much expected in any new language these days.

The very article we are commenting on talks about generics.

Immutable data types are something every new language is supposed to have thought about. Even C++ has const and Java has final. Go's designers were asked to justify their very limited support for immutability.

Many people go so far as suggesting immutability should be the default in new languages. (And you could very well imagine a dialect of C++ where 'const' is implied, and you need 'mutable' everywhere, if you want to override that.)

There's quite a few more examples. Of course, correlation is not causation, and so not everything that showed up in functional programming languages before it showed up in the mainstream means that the mainstream actually got it from FP.

---

In summary, you are right that Haskell has not been used in OSS or commercial products as much as eg Go, but I am just happy that the rest of the world is moving in the Right Direction.


I’ll agree with that: while Haskell/FP has not had a ton of OSS, it has had a huge, positive impact on other languages. Eg Rust would never exist without Haskell being around first.


* "has not had"


Which design choices made for OCaml do you think would be worthwhile to consider making in Go?


I wasn't thinking of any specific design choices, but rather the overall feeling of pragmatism one gets from OCaml when coming from Haskell.

Most of my favourite design choices are the ones that make OCaml a functional programming language. So mostly not apt for Go. But here's a list of what I think one could adapt while staying true to the idea of a C-like language:

C had unions. They had some obvious problems. Go's solution was to get rid of them. OCaml instead went for discriminated unions (https://www.drdobbs.com/cpp/discriminated-unions/240009296) and made the compiler check that you are handling all cases correctly. In OCaml parlance, they are called algebraic data types.

C had null pointers. Go inherited them. OCaml uses the general mechanism of discriminated unions to represent cases like 'end of a linked list' or 'no data' or 'unknown' instead. And the compiler will check for you that you either handled those cases (when declared) or that you don't create those cases (when not explicitly declared to be there).

Some of C's constructs are generic. Like accessing arrays or taking the address of a value or dereferencing a pointer. To give an example, `*p` works for a pointer of any type and does the right thing. And the compiler will tell you off, if you mix up the types.

As a user of C you cannot add your own generic operations. You can use copy-and-paste, cast to and from `void*`, or sometimes use function pointers in a clever way.

Go goes the same route. For example the built-in datastructures are generic, like maps or arrays.

Similar to C, a user of Go cannot add their own generic operations. Go's sorting https://golang.org/pkg/sort/#Interface demonstrates what I mean by clever use of function pointers. (Note how even that sorting interface still requires lots of copy-and-paste to implement, and how it only really works for in-place sorting algorithms. I'm not sure how much trouble it would give in a multi-threaded environment.)
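To make the copy-and-paste concrete, here is roughly what satisfying that interface looks like for one slice type (and again, nearly verbatim, for every other slice type you ever want to sort):

    type person struct {
        name string
        age  int
    }

    // byAge exists only so that []person can satisfy sort.Interface.
    type byAge []person

    func (s byAge) Len() int           { return len(s) }
    func (s byAge) Less(i, j int) bool { return s[i].age < s[j].age }
    func (s byAge) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

    // sort.Sort(byAge(people)) then sorts in place via those methods.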

I suspect Go's aversion to user-creatable generics comes from C++. C++'s templates confuse ad-hoc polymorphism and parametric polymorphism.

If C++'s unholy mess were the only game in town, Go's stance of restricting generics to the grownups (ie language implementors) would be perfectly reasonable.

OCaml carefully separates these two kinds of polymorphism.

Parametric polymorphism basically means 'this function or data structure works identically with any type'. That's what we also call generics. Eg the length of an array doesn't depend on what's in the array. (See https://en.wikipedia.org/wiki/Parametric_polymorphism)

When compiling you only need to create code once regardless of type. (Modulo unboxed types.)

Examples of ad-hoc polymorphism (https://en.wikipedia.org/wiki/Ad_hoc_polymorphism) are things like function overloading to do different things for different data types.

An infamous example are the bitshift operators from C whose cute looks got them roped into IO duty in C++.

In OCaml parametric polymorphism is handled as the simple concept it is. Ad hoc polymorphism (and unboxing for parametric polymorphism) are more complicated, and rightly make up the more complicated bits of OCaml's type system.

Go could have gone with parametric polymorphism and left out ad-hoc polymorphism to stay simple.

(Parametric polymorphism synergises well with algebraic data types. Eg you can model the possibility of a null-pointer via an 'option' type, and have generic functionality to handle it. Same for the slightly richer concept of a value that's 'either an error message or a proper value'.)
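A minimal sketch of that option type, using the parenthesized syntax of the current design draft (go2go) - a syntax that may well still change:

    // Draft go2go syntax, not final Go. The nil-handling logic is
    // written once and works for every element type T.
    type Option(type T) struct {
        value T
        ok    bool
    }

    func Some(type T)(v T) Option(T) { return Option(T){value: v, ok: true} }

    func None(type T)() Option(T) { return Option(T){} }

    // GetOr returns the contained value, or fallback when absent.
    func (o Option(T)) GetOr(fallback T) T {
        if !o.ok {
            return fallback
        }
        return o.value
    }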

But discussions about generics are a bit of a mine-field in Go.

Type inference. Go's type inference only goes one way, and is very limited. OCaml's type inference goes backwards and forwards. Rust's system is perhaps a good compromise: it has similar machinery to OCaml's, but they intentionally restrict it to only consider variable usage information from inside the same function.

(Generics restricted to parametric-polymorphism-only don't make type inference any harder.)
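To illustrate the one-way part with a tiny sketch:

    // Go infers types only from the initializer, left to right:
    x := 42 // x is int

    // It never infers from usage. Parameter types must always be
    // written out, whereas OCaml infers n : int from the body of
    // `let inc n = n + 1`.
    func inc(n int) int { return n + 1 }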

Back to something uncontroversial: Go allows tuples and pattern matching against them only when returning from functions and has lots of specialised machinery like 'multiple return values'.

OCaml just allows you tuples anywhere any other value could occur. You can pattern match against tuples and arbitrary other types. The compiler ensures that your pattern matching considers all cases.
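A sketch of the contrast from the Go side:

    // Multiple return values act like a tuple only at the boundary:
    func divmod(a, b int) (int, int) { return a / b, a % b }

    q, r := divmod(7, 3) // destructuring a return is fine

    // But the pair is not a first-class value:
    //     pair := divmod(7, 3)   // compile error
    //     var ps [](int, int)    // no such type
    // The workaround is an ad-hoc struct per combination:
    type quotRem struct{ q, r int }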

I am sure there are more things to learn. But those few examples shall suffice for now. I feel especially strongly about tuples as a wasted opportunity that wouldn't impact how the language feels at all. (From a practical point of view, discriminated unions would probably make a bigger impact.)


> This is interesting, because I generally feel like getting the types right actually IS most of the work that needs to be done

When I was much younger, I was told by my elders that to a first approximation a function doesn't need a name because it's defined by its range and domain. If it's hard to discern what the function does just by the range and domain then your type system isn't powerful enough. This insight is as powerful as it is superficially wrong.


That insight is wrong. Not just superficially; it goes against the whole point of types.

1) Consider easing functions in animation (see easings.net for examples). They all have domain and range "real numbers between 0 and 1", yet they are all meaningfully different.

2) Consider filters in audio processing. They all have domain and range "streams of real numbers", yet they are all meaningfully different. The fact that they have the same type is important - it allows composition of filters, which I'd hope would sound exciting to a typed programmer!

3) Consider sorting algorithms. Even under a very advanced type system, the domain would be "arrays of numbers" and the range would be "sorted arrays of numbers". Yet there are tons of sorting algorithms that are meaningfully different. The fact that they have the same type is important - it allows a sorting library to evolve without breaking applications.

See the common thread? Types enable mixing and matching functionality. If a type allows only one functionality, it's useless for that purpose.


> They all have domain and range "real numbers between 0 and 1", yet they are all meaningfully different.

I was thinking a similar thought: sine vs cosine. But I think that falls under 'superficially wrong'.


I think the "only types matter" applies when you're simply applying natural transformations between isomorphic types.


> I feel obsessing over types is often just a form of procrastination

Can confirm, this happened to me in Java constantly. I once spent an hour writing a big utility class using generics, even though I needed it for only one type in the code. "What if I need it for something else? Copying the code and replacing the types is very inefficient!"

Turned out that Java, being Java, was unable to instantiate an empty array of a generic type, so the very elegant code in the utility class now meant having to handle a bunch of special null cases in the application. I actually left it in for a while, but thankfully changed my mind before merging it as I'm sure the team wouldn't have appreciated having to deal with even more null in unexpected places because I got attached to my shiny generic code.


That sounds like a problem with Java more than a problem with generics.

In general, I agree with your sentiment. Though in some cases, an abstraction can be justified even when used only once.

Take the simple case of a function. Generally a function communicates with its surroundings only via parameters and return values. A function that respects those conventions makes reading code much easier, and can be worth it even if used only once.

In contrast, a code block inside a much larger stretch of code doesn't give the reader any explicit guidance on how it intends to interact with the rest of the system. Even if it's exactly the same code as in the function.

Similarly, a for-loop can do almost anything. But using map or filter immediately tells you that only a few things are possible here. Even a fold is more restricted than a general for-loop.

'Theorems for free' makes a very similar argument for generics. https://people.mpi-sws.org/~dreyer/tor/papers/wadler.pdf
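To sketch that in the draft generics syntax (go2go, not final Go): the signature of a generic Map already tells the reader "one output element per input element" in a way no bare for-loop header can:

    // Draft go2go syntax: the signature promises an elementwise
    // transformation, far less than an arbitrary for-loop could do.
    func Map(type T, U)(xs []T, f func(T) U) []U {
        ys := make([]U, 0, len(xs))
        for _, x := range xs {
            ys = append(ys, f(x))
        }
        return ys
    }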


For me that type golfing is how I explore the problem. A lot of times you realize things about the problem that you didn’t understand before.

This is especially true with Rust, where you also need to consider who owns what data. With C# it's much too easy to model things that make sense as a type zoo, while the more important problems, like ownership and access patterns, are lost in the cracks.

The typical sign in Rust is when you need a lifetime on a struct so you realize it has someone else’s data. That usually means I stop to think before progressing. In C# it’s very easy to miss that big red flag.

Obviously it’s always worth revisiting and simplifying a design, but I still don’t think the initial design or designs were a waste of time if it gave me an understanding of the problem space.


> I feel obsessing over types is often just a form of procrastination, as it feels more interesting than the real work that needs to be done.

I spent 2 days implementing a (terrible) discriminated union struct type in C# (for types I don’t control, so I can’t just add an interface) rather than just return `Object` from a couple of methods.

Type systems need to be more expressive if we’re going to be able to increase developer productivity without costing program safety.


There are expressive type systems around; it's the industry at large that needs to use them more and educate future generations on why these are useful tools. I 100% agree that spending 2 days on that task is lunacy. But imagine if you could do it instantly with 0 ceremony. At that point it would 100% be worth your while.


So, part of those 2 days was spent toying with Roslyn code-generation to try to re-implement C++ templates in C# - but I ran out of time and reverted back to using T4.

The .NET CLI itself places limits on what .NET-based languages can actually do: for example, structural-typing at method boundaries in CIL is only possible by having the compiler generate build-time adapter classes and interfaces - which means that methods accepting structural-types can’t be easily called by non-structural-typing-aware languages. The CLI also imposes single inheritance - so a .NET language cannot implement multiple inheritance at all - and because non-virtual calls cannot be intercepted at all (without hacks like abusing .NET Remoting’s special-cases in the CLR) it’s not possible to have an “every call is virtual” system like Java’s in the CLR (note to self: find out how J# did it). Implementing Go-style composition is hamstrung by the fact we can’t patch the vtable at runtime either (reified vtables would be nice...).

I dare say it - but I think 2020 marks the decline of C# and the CLR because we’re held-back by decisions made over 15 years ago. .NET 5 isn’t showing any sign of significant improvements to the CLR’s type system or fundamental concepts.


Or, you know, just return Object from a couple of methods. It's not the end of the world.


...or interface{} in Go


> I spent 2 days implementing a (terrible) discriminated union struct type in C#

A type system without native or at least low-impedance support for discriminated unions is, well, not a very good type system.

> Type systems need to be more expressive if we’re going to be able to increase developer productivity without costing program safety.

Lots of type systems support discriminated unions directly, or at least make it trivial to implement them. C# perhaps doesn’t, but that's a problem with C#, not typed programming in general.


One of the most pleasant parts of programming in C# is that an 'int' is an 'int'. If some function takes an integer input, that's exactly what it takes.

In C/C++, the base type system is so weak and squishy that everything redefines every type. You can no longer pass an 'int', but instead pass a "FOO_INT" or a "BARLONG" where FOO and BAR are trivial libraries that shouldn't need to redefine the concept of an integer.

Like C#, Rust has well-defined basic types, which eliminates a crazy amount of boilerplate and redefining the basics.


C++ itself has well-defined (if verbosely named) types (e.g. uint32_t); the problem is as soon as you interface with any large library or platform API with a long history: Win32 and Qt come to mind. It’s 2020 and Windows.h still has macros for 16-bit FAR pointers. I’m disappointed Microsoft hasn’t cleaned up Win32 and removed all of the unnecessary macros (they can start with T()!)

C# and Java might seem to have escaped that problem - but now it means that because `int` was defined back in 32-bit days, programs can’t use `int` (System.Int32) when the intent is to use - what is presumably - “the best int-type for the platform” (i.e. C++’s fast-int types) - or the context (.NET’s arrays are indexed by Int32 instead of size_t, so you can’t have a Byte[] array larger than 2GB without some ugly hacks).

(I know this is moot for function locals as those will be word-aligned and so should behave the same as a native/fast int, but this isn’t guaranteed, especially when performing operations on non-local ints, such as in object instance fields).


>I’m disappointed Microsoft hasn’t cleaned-up Win32 and removed all of the unnecessary macros

Considering how seriously they take backward compatibility, the only way to do that would be to design a completely separate API, like they did with UWP. I'm 99.999% certain these macros are still being used somewhere out there. And who usually takes the blame when some badly written application stops working or compiling properly? Microsoft. (And I don't even like Microsoft.)


(UWP isn't Win32 though)

What I'm proposing isn't really a new API - but you're right about it having to be separate. It avoids the work of having to design a new API (and then implement it!) - what I'm proposing would keep the exact same Win32 binary API, but just clean-up all of the Win32 header files and remove as many #define macros and typedefs as possible - and redefining the headers for Win32's DLLs/LIBs using raw/primitive C types wherever possible.

There's just no need for "LPCWSTR" to exist anymore, for example. And I don't see anything wrong with calling the "real" function names (with the "W" suffix) instead of every call being a macro over A or W functions (which is silly as most of the A functions now cause errors when called).

This would only be of value for new applications written in C and C++ (which can directly consume Win32's header files) where the author wouldn't need to worry about missing macros. It would certainly make Win32 more self-describing again and reduce our dependence on the documentation.


Which is exactly why UWP ended up being an adoption failure, to the demise of those of us who were quite welcoming of its design goals; I still believe that UWP is what .NET v1.0 should have been all along.

Now we have Project Reunion as official confirmation of what has been slowly happening since Build 2018, as Microsoft pivoted into bringing UWP ideas into Win32.

Breaking backwards compatibility is a very high price to pay, as many of its proponents end up discovering the hard way.


> Breaking backwards compatibility is a very high price to pay, as many of its proponents end up discovering the hard way.

I don't believe breaking back-compat was ever the problem: there were (and are) two main problems with UWP (and its predecessors[1]) going back to Windows 8:

* UWP apps were/are unnecessarily and very artificially restricted in what they could do: not just the sandboxing, but the app-store restrictions almost copied directly from Apple's own store.

* And because the then-new XAML-based "Jupiter" UI for UWP did not (and still doesn't, imo) ship with a control library suitable for high-information-density, mouse-first UIs - and because XAML is still fundamentally unchanged since its original design for WPF in .NET Framework 3.0 - the XAML system is now far less capable (overall) than HTML+CSS in Electron (the horror). Microsoft had a choice to maintain progress on XAML or let Electron overrun it for desktop application UIs - instead they've decided to keep XAML alive, but for what gain? There simply isn't any decent exit-strategy for Microsoft now: they've just re-committed themselves to a dead-end UI system that needs significant amounts of re-work just to keep it competitive with Electron, while simultaneously using Electron for new headline first-party applications like Teams, Skype, Visual Studio Code, and more.

Microsoft has completely wasted the past ~10 years of progress they could have made on Windows and the desktop user-experience, letting Apple stay competitive with macOS while still funneling billions into iOS and iPad OS - further weakening the Windows value-proposition.

[1] Metro Apps, Modern Apps, Microsoft Store Apps, Windows Store Apps...


Windows Community Toolkit has taken care of that.

Skype uses React Native, and given that React Native for Windows bashes Electron in every talk they give, with its 300x-overhead bar charts, expect that when React Native for macOS and Linux gets mature enough - which MS is also contributing to - all Electron in use eventually gets replaced with React Native.

Also React Native is built on top of UWP.


> Skype uses React Native

I can't speak for the iOS and Android mobile-apps, but the Skype software on my Windows 10 desktop is still an Electron app.

EDIT: This article from March 2020 says as much - Microsoft is moving away from React Native and sticking with Electron: https://www.windowscentral.com/latest-skype-preview-version-...


Using the classical desktop version?

> Microsoft Skype is one of the largest React Native applications in the world.

https://blog.dashlane.com/exploring-react-native-on-windows/


The iOS and Android apps are using React Native - not the Windows Desktop Skype app.


I admire their efforts in backwards compatibility, but I never saw the point of extreme source compatibility. If I don’t want to recompile then I don’t need to worry that some names were changed. If I do rebuild my app then I’m happy to spend the time fixing the errors, or build against an old library or language version.


As someone who does a lot of work in Java (which lacks typedef), I feel the opposite. I don't like "stringly typed" APIs where everything is a String or int or whatnot - it's only slightly better than Object; you're basically giving up on typechecking.

With generics or typedefs (or a willingness to create lots of classes), you can be certain you never pass an Id<Foo> someplace that expects an Id<Bar>.
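For what it's worth, Go's defined types already give a similar guarantee without generics; a minimal sketch (the names here are made up):

    // Distinct defined types, even though both are strings underneath:
    type FooID string
    type BarID string

    func loadBar(id BarID) {}

    func main() {
        foo := FooID("f-123")
        loadBar(BarID("b-456")) // fine
        _ = foo
        // loadBar(foo) // compile error: cannot use foo (type FooID) as type BarID
    }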


Not having typedefs wouldn't be much of an issue if we could at least inherit from String and int (or Integer).


I haven't used Rust before, but I can relate to the experience.

In general I lean towards it not being a waste. That's not to say it's always useful for that project but it is usually useful as a learning exercise, that allows me to better identify similar patterns later (including whether generics are worth using compared to primitive types, in situations like that).

For example, I spent a good year or two forcing all of my iteration into functional styles. Now I am much better at coding in that style... and at knowing when a for loop is the more appropriate tool.


> I realize this when I go over it a second time and reduce line counts drastically by replacing my gee-whiz generics with primitive types.

I do this too, but I’ve also come to realize that fresh perspectives and newly learned patterns may be more of the reason than being a little over zealous.

Edit: forgot to mention that this happens to me in most languages, not just rust.


A year from now, isn't it the 'reduced line count' that will have value to you, rather than the churn?


Sometimes. Often the reduced line counts contain way more complexity per line and understanding it is just as hard. There's something to be said for code churn, as it keeps it fresh in the mind of whoever is on the team right now, as opposed to "a guy that left eight years ago wrote this".


As simple as possible, but no simpler.

I sometimes use the analogy of haiku versus short story.

You can pack a lot of meaning into a haiku, but few will be able to unpack it in the manner intended. A short story can get you to the point without filling a whole book to do so. And a lot more people can write a reasonable short story.


Exactly: the maintenance tax. You have to load the crazy type into your head, and once you get to 2 or 3 layers of generics it's just painful. C# and TypeScript both have this.

The question is, what is beyond generics? We need a humane alternative that still provides the type-safe guard rails.


That's up for debate; you have to prove that in language x.y.z the time spent on maintenance would have been avoided if x.y.z had had that feature from the start. Especially for a language like Rust, with a big learning curve / blockers: is it justified in the end? I'm not sure.

In Go I never felt "omg, it's missing some big feature, we're screwed, we're going to pay the cost of running this in prod." All the services I worked on, while not perfect, ran like clockwork over the long term, and that has to do with Go's "simplicity": even though you never wrote the code, you can jump into any code base without issues; there is no weird abstraction layer / complexity that takes you days to understand.

For me, the fact that Go is a "simple" language is very powerful, because in the long run it's much easier to maintain.


Moreover, not all domains require you to catch every error. In many applications, it's just fine to ship a few bugs and it's much better to ship a few bugs here and there rather than slow down dramatically to appease a pedantic type checker. This is especially true when bugs can be found and fixed in a matter of minutes or hours and when the bugs are superficial. And I also contend that the more pedantic the type checker, the more likely that the additional bugs that it finds are of diminishing importance--they are less and less likely to be a downtime issue, they are increasingly likely to be in very rarely hit paths.

I like Rust a lot and I hope to get to use it more, but "added type safety at the expense of iteration velocity" is not a good tradeoff for my domain and I suspect many others (although its iteration velocity is improving monotonically!).


Strangely, I found Go to be one of the least productive languages I've ever used, 2nd only to Java. It just makes the programmer do so much of the work for common tasks, and it's incredibly repetitive and while it generally generates a fast program, the quality/fault rate isn't any better than any other iterative language.


I’ve used Python and Go (and JS to a lesser extent) extensively and Go has been consistently faster to iterate with, much better performance, much less headache with build/deploy/dependency issues, much better for teamwork (less pedantry in code review, docs don’t get out of date as easily, types keep people from doing as much pointless magic, etc). Note that the performance point is hard to overstate since with Python it means you run into performance issues far sooner and your options for optimizing are in a much different complexity ballpark which completely eats any iteration velocity gains that you might have had from Python in the first place (in case it was ever a question).


I mean if you're comparing it against a language with an even weaker type system...


Python is not a great productivity language. I find Python less productive than enterprise Java. The tooling in Python is still in the dark ages.


Yeah, although Python was my first language and I used to love it, now I realize it's "the worst of all worlds". It's superficially easy, but it both runs slower than "real" languages and programmer productivity scales much, much worse than them. Of course "real" languages are also a scale and I think they too become worse further the scale. Go, C# and Java are probably around the sweet spot, although I myself prefer stricter languages like OCaml. Definitely something like Haskell is "too much" for my taste, probably will have to try Rust soon to figure out where it stands in my scale.


curious what other languages and domains you are working in. In my experience, Go has been a boon to productivity. I do networked services mostly. Projects that are maintained for years with lots of tweaks. Rewrites, greenfield. Lots of focus on operability, maintainability, and high availability. My work started in php, became more serious under perl (anyevent), and python (twisted), and toss in a smattering of other things here and there (some C, C++, ruby, javascript, lua). The main work horses were perl and python though. I've been using Go now since 1.2 (around 2013/2014). Every time I go back to these other languages in the same domain it is like taking a giant leap backwards.


Interesting, I'd say the ones to challenge in the backend space are Java, C#, Scala, F#, OCaml, Erlang, Kotlin, Clojure, Elixir and Haskell.

Those are the staple languages in the backend space that Go competes in. You could give some of them a try.


Go is most often preferable to any of the languages you mentioned, no doubt about it. It's just that Rust is generally held to an even higher standard.


I think that is your preference and not a generally accepted fact.


So you've only used untyped languages, C++? No wonder you think Go is productive...

Not that Java is even good, but it beats Go. The history of Go is the history of Java, repeated as farce. It's even some of the same academic lineage come to bail things out with the type system for generics!


Go is quite a lot different than Java. Go has held to its promise of simplicity and consistency remarkably well. No need to learn groovy to configure Gradle to build your application; just use ‘go build’. No need to learn javadoc or configure your CI pipeline to build and deploy documentation packages. No need to figure out how to statically link your code, it’s the default. No need to figure out how to compile your code ahead of time, it’s the default. No need to figure out how to tune your GC for low latency and low memory usage, it’s the default. Plus Go has value types; Java is maybe going to get them, but they’ll never be idiomatic. On top of all of this are the misfeatures and antipatterns that are pervasive in Java but which don’t exist in Go—things like inheritance, objects, and the “you just want a banana, but the banana has a reference to the gorilla holding it and the whole damn jungle” design pattern. I know, I know, “that’s just bad design and you can have that in any language!”—true in theory, but Go doesn’t have this in practice. Nor gratuitous inheritance hierarchies or anything else. And while Java folks don’t have to use those patterns/features if they don’t want to, they must still interface with them, because their coworkers probably write code like that, and if not your coworkers’ code, then the dependency libraries and even the standard library.

Go is a dramatically nicer programming experience than Java. There are some benefits in Java however: sometimes JIT is nice and Go has no analog to Spring. But these are very circumstantial benefits and not good tradeoffs in the general case.


Most of the things you listed are not a problem in C#. And you don't have to be tied deeply to Microsoft to use it these days — I develop stuff full-time on top of .NET Core under Linux (and host it on Linux).

>No need to learn groovy to configure Gradle to build your application

I rarely have to configure msbuild, mostly to do some relatively advanced stuff like auto-installing npm dependencies for an SPA.

>No need to learn javadoc

Just use

    /** */
>No need to figure out how to statically link your code

You can build pseudo-statically linked executables by adding one line to .csproj (which builds a single executable file, but it's really a self-extracting archive), and real static linking is in the works. I have no need for it personally though.

>how to compile your code ahead of time

Same thing — AOT compilation is one configuration line away. You don't need it very much though, as the CLR is very performant these days and beats Go in most benchmarks:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

>No need to figure out how to tune your GC for low latency and low memory usage

I would argue CLR's GC is ahead of Go's GC at least for latency. Memory usage, not so much, but damn, look at the execution times above. It depends on your priorities I guess.

>Plus Go has value types

C# has had them since before Go was a thing. Unlike Java, it also has real structs, which you can configure the alignment of, and map a raw chunk of memory to them (which you received from a COM port, for example). This came in handy more often than I care to remember.

As for your forest/banana analogy, I'd argue it's never been as bad in .NET world as it's been in Java world. I have personally never found anything similar to the insane class hierarchies they have in Spring.


> I have personally never found anything similar to the insane class hierarchies they have in Spring.

Try to do something with stuff like XAML Workflows, BizTalk, WCF, SharePoint, or MVVM. :)


I think people get causation and correlation a bit confused here... Java code is like it is not just because of its nature as a language but because a gigantic number of huge corporations use it. And they have bajillions of dollars to pay developers to figure out a Gradle build, or navigate a type hierarchy, etc. That's why equivalents of Spring etc. have appeared for .NET.

Btw, did you know the default Gradle configuration for compiling a Java app is literally 1 line?

    apply plugin: 'java'
I'm not that much a fan of Gradle (too much magic), but it's one of the least verbose and easiest to figure out parts of working with java.


Having spent way too many years in Java, I’d have to say that the build systems are really the weakest part of Java. We have "maven" (wtf) and "gradle" (powerful but yikes), oh, and "ant" (dead now).

Wrt gradle: While it’s super easy to get started, any slight modification is a major pain.

Note to other tooling guys for other languages:

- I don’t want to write in a new language just for my build system

- I don’t want freaking xml hell like maven

And don’t get me even started with Maven as a package manager. But to be honest... it’s at least better than NPM :-)


So I assume you don't like many of the popular build tools at all - Make, Bazel, etc. all effectively define a custom DSL. At least Groovy and Kotlin are actual languages you can learn and use for other things, not something with no alternative purpose that is completely wasted. And their syntax is very close to Java's, so you have a head start on learning it.

In general the concept of a "cross platform" build tool and "not having to learn another language" are at odds with each other, for anybody who isn't lucky enough to already work in the language the build tool is written in.


My only java experience was in college nearly two decades ago, so I def don’t have a valid opinion on Go vs Java. I’ve worked with dozens of former Java devs that were now working in Go. The majority felt it was a breath of fresh air and did not want to return to Java in the future. Granted that was not all of them. The plural of anecdote is not data, so I’d be interested in a larger poll to know how folks feel who have experienced both.


More anecdata for you. I'm a professional Java dev, and I spend my off-hours writing tools for work in Go. It feels so much nicer to use.


Were you using an IDE? For me, yeah, Go has a lot of boilerplate, but because the type system is so simple, the autocomplete is super savvy and code just flies onto the screen. Still a fair amount of plumbing, but it's easy plumbing.

I can easily see it being tedious if you have to type every character.


The trouble with boilerplate is that we have tools that write it for you but almost nothing that reads it for you, much less reviews it for manual edits. Anything that can be generated from a high level description and then thrown away, should be.


The IDE side is, in my experience, the worst part of Go (unless you pay for JetBrains GoLand).

Code navigation is extremely basic (can only find references in the same package!), often breaks, and can't handle stuff like finding all interfaces that a type implements, or finding all implementations of an interface.


It used to be really good across the board, but I think modules broke a lot of things and they never fully repaired? I'm not sure, but I get the feeling that the quality of these editor integrations dropped a year or two ago.


As far as I can tell, they never had support for finding usages across packages, never had support for finding implementations of an interface, never had support for finding what interfaces a certain type matches/implements. These are basic features in any IDE for a language that has packages and interfaces.

Really good IDE support would include refactoring (more than renaming a local variable/struct field, e.g. Extract Function, extract parameter), advanced navigation/analysis (analyze data flow to/from variable, find call stack etc).

Go is somewhat decent in the tooling area, and you can get your job done with it, but it really doesn't have anything I would call good support in any area of tooling except the compiler (not code analysis, not debugging, not monitoring, not profiling, not package management).


> As far as I can tell, they never had support for finding usages across packages, never had support for finding implementations of an interface, never had support for finding what interfaces a certain type matches/implements. These are basic features in any IDE for a language that has packages and interfaces.

I suppose it depends on what you're used to. I've never found myself wanting to do these things, but I'm also strictly from a Python and C++ background so my standards are admittedly lower than Java and C# people. Hopefully gopls solves for these problems.


In C++, you have never tried to go to the definition vs declaration of a function? Or find calls to your class through a superclass?


Which Go IDE's have you tried?


I tried GoLand, which was great.

Then, I tried VSCode with the Go plugin, both the default version and the gopls-based one; and Emacs with the LSP package and gopls.


The boilerplate still has to be read and so the boilerplate still gets in the way of understanding "What is this supposed to do"


I feel like Rust and Go are fundamentally different use cases and I wish people would stop the language debate.

Rust: For code that isn't meant to be iterated upon, that must be safe/correct while also being performant.

Go: For code that is meant to be iterated upon frequently, that must be performant and easy to maintain.

I would like my tools to be written in Rust, and I would like to interface with them in Go.


I think that's totally backwards. A good type system _improves_ iteration speed, it doesn't slow you down. When I refactor a Rust program, I have very high confidence that if it compiles, I haven't accidentally introduced a bug.

I don't write much Go code, but my company does. I have seen so many incidents and bugs whose root cause was: someone changed a library, that broke the semantics that users were relying on, but everything still compiled fine, so nobody noticed the subtly wrong behavior for a long time. An incredible amount of money has been lost like this.

Go simply does not give you the tools to prevent these issues in the type system, you have to rely on really thorough unit tests (which inevitably don't get written in the spirit of "moving fast".)


Wait what?! Most of the major go libs (std and not) have been stable for years. Rust is merging random syntaxes and random packages get upgraded to std lib all the time.

The package ecosystem was so stable that go didn’t ship with a package manager (not saying it’s good but definitely a sign of relative stability).

Besides nil pointers (the type system could do better here) and bad logic (no type system can do better, but language simplicity reduces it), what you're saying is definitely not the experience of the majority. People constantly talk about being able to go back to a Go program from 8 years ago and run it with no problems. Rust changes every few weeks.


> People constantly talk about being able to go to a golang program from 8 years ago and run it with no problems. Rust changes every few weeks.

You stated that "Rust is merging random syntaxes and random packages get upgraded to std lib all the time" as though it were incompatible with "being able to go to a program from 8 years ago and run it with no problems". But there's no incompatibility here. Rust adds new features at a faster rate than Go does, but it also maintains backwards compatibility ("stability without stagnation").


Sure, I conflated 'still compiles' with 'still the right pattern'. It's sorta like JavaScript: ECMAScript has been backwards compatible, but the ecosystem is a constantly moving target that users have to adapt their code to or feel left behind. E.g. crossbeam and parking_lot. Also the bunch of nightly features that disappear but that many use.

Sure, the mainline stays clean, but that's not what matters in practice.


> The package ecosystem was so stable that go didn’t ship with a package manager (not saying it’s good but definitely a sign of relative stability).

Which also explains why you can git clone a Go library/tool and find it doesn't build because some dependency introduced breaking changes in its master branch. That happened the last 3 times I wanted to fix something in a Go project.


I don’t know what Rust you’ve been using, but they’ve been explicitly having all changes be backwards compatible since 2015 - when they hit 1.0 - with a few small changes in the 2018 edition.


> Rust is merging random syntaxes and random packages get upgraded to std lib all the time.

I really don't know where this is coming from. The only package that was pulled into std like this is hashbrown, which replaced the hashmap implementation, and that didn't even change the public API of the standard library at all, besides improving the hashmap's performance.


Not to mention Go compiles at the speed of light, so CI pipelines usually just rebuild and rerun the entire suite on every commit; idk how such bugs would lurk unless the tests were lackluster. Which happens, I totally understand.


> I don't write much Go code, but my company does. I have seen so many incidents and bugs whose root cause was: someone changed a library, that broke the semantics that users were relying on, but everything still compiled fine, so nobody noticed the subtly wrong behavior for a long time. An incredible amount of money has been lost like this.

In all likelihood, that "incredible amount" is a small fraction of the money your company saved in iteration velocity by not using Rust in the first place. This might not be true depending on your domain--if you're writing space shuttle software, bugs can be really expensive. If you're writing ordinary web app backend code, you're probably much better off with Go than Rust.


I'm not sure I understand how this can happen in Go; it's a strongly typed language, so if, for example, you change a method / function signature, the code won't compile.


Go is strongly typed (you can't pass an int to a function expecting a string), but its type system is not very powerful, and it cannot express many of the types people need in practice. Back to the discussion at hand (generics), Go programmers usually work around this lack by using interface{} types, which can point to anything and are essentially dynamic typing.
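A quick sketch of what that means in practice:

    // The compiler accepts any element and any use; mistakes only
    // surface at runtime.
    func first(xs []interface{}) interface{} { return xs[0] }

    v := first([]interface{}{1, 2, 3})
    // s := v.(string) // compiles fine, panics at runtime
    n, ok := v.(int) // the safe assertion form; ok reports success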

As another example, Go makes it impossible to create a Maybe/Option type which leads to a bunch of issues around returning failing or empty values from functions. In one particularly costly example, there was a function in a library that was getting some price information from a file. Originally this returned nil when the price information could not be loaded, but a refactoring caused it instead to return 0 which the client happily used. This was not caught for _weeks_.


I've never seen that claimed heavy usage of the empty interface in the real world, and in the few cases where we have it, there is type assertion, which is safe and well known.

As for the second example, I don't think it's a language issue but more a programmer one. How would Rust have saved you in that case? Anyone could have returned 0 for the price instead of a proper type. Using Rust doesn't magically turn programmers into good ones. You still have to know how to properly design your code and make the best use of the language and its features.

I'm often reading code in various language and I'm baffled at what a giant lib.rs looks like from someone that never really used Rust.


> how Rust would have saved you in that case, anyone could have returned 0 for the price instead of a proper type

You've missed the problem that was posed. It wasn't returning 0 to indicate an error, it was changing the error value from nil to 0. In Rust that would've been a type change (from Option<int> to int). Because the context here was about how stronger type systems (like Rust's) allow you to be more specific about your types, and thus many code changes that would not involve the type changing in other, more weakly-typed languages do cause a type change in Rust.


Except that in Go, you always return the error alongside the value, like this:

    return 0, errors.New("unable to get price")
It's less elegant than Options but still, very common practice. GP's company clearly hasn't bought into Go if they're using a library that doesn't follow well-known conventions.


This is what strikes me: the situation is a programmer being lazy.

They wanted to do: something(getPrice()), but if getPrice can actually fail, then either something needs to accept an error, or they need to handle the error.

No type system imagined can save you from someone who's decided returning 0 is the correct way to indicate an error in this scenario.


Sounds like your company is just using a unidiomatic library. It's poor practice to return a zero value as an error instead of, well, returning the error. As someone else said, it's standard practice in the vast majority of all go code to return a value, error tuple and check at each call site. In fact, it's so common that it's the number 2 complaint about go code after not having generics (error handling too verbose).

Go has its problems, and it isn't a perfect language, but I have never felt unproductive using it. In fact, my company uses it for all backend services. It has a great stdlib and first class tooling. Most of the discussion in this thread seems to be coming from people who barely use it or have only dabbled with go.

As a tangent, it's really tiring to see HN engage ad nauseum in these pointless language debates. Every language has its place; use it when it's appropriate.


What you're describing comes down to "Bad programmers do bad things", which literally no language can or will solve. If your code has lots of interface{} in it, you're not really using the language as intended. Pretty much every Go styleguide/linter/whatever will throw up at interface{} usage.

You can write unsafe code in Rust. Does that mean that Rust is bad too?


100% agree. Same with `any` in TypeScript. When it comes to bugs... I tend to think that a well-configured linter is more important than the strength of the type system. It's nice to have a strong type system, but people forget that a lot of programmers are bad, and a good type system won't change that fact.


>Originally this returned nil when the price information could not be loaded, but a refactoring caused it instead to return 0 which the client happily used.

Hard to feel sympathetic, as this practice violates the Go convention of adding an additional return value to the function to indicate failure, either of type error or bool, as appropriate.

>Go makes it impossible to create a Maybe/Option type

It's very much possible: https://play.golang.org/p/MCxGcZ-rJRU.

Of course, it's not generic, but I have a feeling that might change soon :)


The Option type there isn't type safe. E.g. https://play.golang.org/p/CiQLf4yWhCO doesn't throw a type error for the Get function even though I've omitted the null check.


Sorry, but that's just ridiculous. Obviously the language can't prevent you from defining your Option type incorrectly. The point is that it's not possible to misuse it.


Okay let me rephrase. The type checker doesn't make sure you've checked the value of ok e.g. you can do this: https://play.golang.org/p/x18Rh2iwd2E

Go constantly requires boiler-plate checks for errors, null values etc. without the type checker helping ensure you've checked for an error or null value.
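For example, nothing flags code that simply discards the check (userInput and chargeCents here are hypothetical):

    // Compiles without complaint; on bad input n is just 0.
    n, _ := strconv.Atoi(userInput) // error deliberately ignored
    chargeCents(n)                  // silently charges zero cents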


Maybe you need to think about why your company is not writing in Rust. Is there some quality of Rust that individual users value more than companies in general do?

Whenever I hear about Rust in companies same dozen or so companies get mentioned every time.


The specifics of what languages my company writes are mostly historical, and I would say that Rust was not ideal for writing high-throughput network services until this year with the release of async/await.

But in general, companies adopt Go for the same reason they adopt MongoDB [0]: it's really easy to get started with, and get something that "works." The pain comes months or years down the road, and by then you're stuck with it.

[0] A company I previously worked for started using MongoDB as a small startup, started finding the limitations about 6 months later, and ended up spending an incredible amount of engineering time over 5 years getting rid of it.


You're a person that really does not like the Go programming language, and that's okay, but to try and paint a perfectly fine tool as bad because someone used it incorrectly, and then compare it to a pretty objectively bad product is silly.

We get it. You don't like Go. Don't use it. That's a perfectly fine option. Plenty of very successful companies, open source projects and services all work with Go (and literally every language) without issue. You not liking something does not make it bad.


And you're welcome to think that. YMMV.


What's wrong with iterating on Rust? Comprehensive type checking makes refactorings a lot easier than something like Go. And you can iterate starting from a simple, working solution that relies on marginally more expensive features, such as .clone(), Rc<…>, Cell<…> or RefCell<…>, Any for dynamic data etc. etc.


Refactoring is safer with Rust, but in general Go is not so unsafe that refactoring costs more than what it saves you up front. You can write a lot of Go (and contrary to the memes in this thread, you can still leverage the type system to great effect wrt safety) in the time it takes to get a little Rust to compile; however, Rust is getting better all the time.


I think you are missing the point of type systems. Our current economy doesn't value correctness at all. Code is rushed out, and usually lives longer than it was designed to. This mimics the larger infrastructure debt of America (and, to a lesser extent, the western world as a whole).

But the same shoddy system is also constantly changing the requirements, precisely because nothing is built for posterity and there's tons of make-work churn. Good type systems make code far easier to refactor, and refactoring (as opposed to slapped-on new code) is the only way to handle shifting requirements without being crushed under one's own weight.


I'm not missing anything. I'm well aware of the value of type systems, as I am of our current cultural decision to value speed over correctness.

I'm not confident that average-level developers are able to ship features in Rust as quickly as they would in Go, which is the root of my point. Those building tools, which generally do not have the same success metrics with respect to speed and correctness, should use a language with a focus on safety and correctness. Those whose positions do not lend themselves to safety and correctness should not use that tool.


> I'm not confident that average level developers are able to ship features in Rust as quickly as they would in Go

Even if this was a genuine concern, they should still use Rust. Because it will be far easier to refactor the code that they ship into something that is safe and correct, compared to rewriting a Go codebase.


We can agree to disagree. The whole point of my post is that I do not agree with your statement.


Which code is not meant to be correct?


This is a silly strawman. All code is meant to be correct, but often times close enough to correct is good enough (as is literally the whole point of my post). Most businesses operate on code that is close enough to correct for their individual risk tolerance, and increasing that correctness would reduce speed/throughput and increase cost.


I hope you are kidding mate. I've been solo maintaining 50k lines of Rust iterating over a few months and it's a breeze.


This. Let me give an example. When booting a system you can get errors as concurrently booted subsystems (we're talking tens of thousands of them potentially) don't necessarily finish in the order you want, effectively inducing dependency-error race conditions. You can either write a totally checked system that is brittle to these race conditions, with many nightmares' worth of debugging, testing, and giving up, or you can write loose logic that accepts the problem, fails, and restarts, and moreover very likely gives you lower downtime overall. What do you do?


(1) This is why you can even ship code written in JS or Python. (2) This is why Typescript and mypy exist.


I agree, although my experience with Mypy has been miserable. I'm not sure the costs are less than the gains. Also there are other tradeoffs, like performance and artifact size (numpy and pandas alone are 100MB).


Only because liability still isn't a sure thing with proper fines for those that ship faulty products.


1. There's lots of software that doesn't "ship" - it's used internally.

2. There's lots of software that "ships" as a web service, often used by people who are not paying for the web service.

If you're talking about defects for an actual product that render it unsafe or unsuitable for the intended purpose, then I can see your point (though even there I'm not sure that I completely agree). But there's much more software than that.


Note that the whole point of my post is that there are different risk profiles for different kinds of software (or even within one kind of software--e.g., the data security subsystems in saas apps have a different profile than the UI widget library). If the parent's point is "some bugs are serious!" then that's not a rebuttal to my post--don't use weak type systems for those projects or those parts of the project. You don't rebut "You don't have to write all software like it is space shuttle software" with "some software is space shuttle software!".


Nonsense. No one is served by fining companies for superficial issues. Customers are nearly always better off getting features faster at the expense of superficial issues. Any other take ignores economic reality (I say this with much chagrin, as I would rather move slower and release finely crafted features).


Ideally, you'd have a language that allowed you to do both. Generics/type safety when you want them (e.g. for stuff that is hard to maintain), and total freedom otherwise.


Python with type annotations is the closest I've used to that, and I hate it. It beats no type annotations, but only barely.

As soon as I need to use someone else's code, all bets are off. Half the reason I want types is because it makes my life easier when I'm trying to use something written by someone else -- instead I find myself needing to experiment with a debugger to figure out what functions are actually doing.


Plenty of languages allow you to do this, either by opting into type checking only when you add type annotations, or by using "any" / "dynamic" or unchecked typecasts as an escape hatch when you don't feel like going through the trouble of how to make the typechecker happy.


C# with liberal use of "dynamic" would be pretty close to that.


So basically typescript?


[[citation needed]]. "Everyone knows" that type systems reduce bugs, but actual evidence for that is pretty weak.

https://danluu.com/empirical-pl/


That study calls out significant issues. I can go from personal experience: porting Python to Go. In Python, we had to have tests dedicated to ensuring the right types were handled the right way. Is it a string or an array passed in? Welp, need a new test to cover that. Whole classes of tests go away with more advanced type systems; those tests were needed to prevent those classes of bugs. So a better type system removes whole classes of bugs.
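To make that concrete, a toy Go sketch (the function is hypothetical):

    // With a concrete signature, "caller passed a slice instead of a
    // string" is a compile error, not a test case.
    func greet(name string) string {
        return "hello " + name
    }

    // greet([]string{"a"}) // rejected by the compiler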


Came here to say I've had this exact same debate. We were writing long Python test suites to check types. I constantly (and playfully, I don't care to die on this hill) pointed out this is fully automatable with a static typing system. I remember many times being told to just read thousands of lines of uncommented tests in order to understand some thing I was trying to debug. (Don't get me started on Python stack traces!)


I've started applying `mypy --strict` to my new Python code. It has caught so many issues already that it's definitely worth it. Mostly in cases of uncommon error handling, some in code which was tested, but under wrong assumptions, and would actually fail in production. (This is a problem with the "you can just test types" approach.)


Yes, mypy is terrific, and has changed the way I write Python programs.

Taking this gradual typing tool a step further, you can run your tests with the typeguard plugin for pytest, which will dynamically check the types of most values, pointing out places where your types are a lie.

https://typeguard.readthedocs.io/en/latest/userguide.html#us...


That's amazing. Thank you for mentioning this.


This is where the type system debate breaks down.

What you point out is a developer error. Mixing variable types in a dynamically typed language is, like generics, a code smell.

Using a dynamically typed language is all about not having to babysit the compiler; it's not about playing dumb about types.

If you design your functions in a way that the function will never receive an array, you don't have to test for it. And if some idiot tries one day, it should error very loudly that something is wrong.

Too many abuse dynamic typing to the point that they feel safer offloading this headwork to a compiler.

That is, they get the compiler to make sure nobody will ever send an array into that function that takes a string.

All generics do is bring back the lack of type safety, and all the manual type checking that comes with muddying and generalizing the parameter type set.

I actually don't hate generics; they are super powerful, but I question every usage as suspicious.

I also wish people would remember human brains aren't all the same. We have different ways of thinking about problems. Some people need a compiler to do the busy work; some people need to do busy work while they think.

I just wish people shitting on dynamically typed languages would remember they came to solve the mess and monotony that arose from type safety.

Someone mentioned they felt Go was going backwards from Rust; I think both languages are massive steps backward from Python.


> If you design your functions in a way that the function will never receive an array, you don't have to test for it. And if some idiot tries one day, it should error very loudly that something is wrong.

Then you've moved it from a test to a runtime assertion. You're still re-implementing the type-checker, poorly.
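For illustration, a small Go sketch of what that hand-rolled check looks like once you fall back to interface{} (the names are made up):

    package main

    import "fmt"

    // mustString is the hand-rolled "type check": a runtime panic doing,
    // poorly, what a static checker verifies at compile time.
    func mustString(v interface{}) string {
        s, ok := v.(string)
        if !ok {
            panic(fmt.Sprintf("expected string, got %T", v))
        }
        return s
    }

    func main() {
        fmt.Println(mustString("ok")) // fine
        fmt.Println(mustString(42))   // panics at runtime, not at compile time
    }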


And on top of this, it's usually not "some idiot" calling the function from a REPL. It's "some idiot's code", which may not be executed until the program is running in production somewhere, if it is rare. "Oh, someone uploaded THAT kind of file? Well, let me see... AttributeError: Type SomeObject does not have attribute append. Here's a list of 47 functions that were called between the action you took and this error being raised. None of them guarantee the types going in or out, so you should probably get walking through all this logic. You're welcome! -Python <3, XOXOXO"


Well, if a function's first run is in production, you've got other problems. The "idiot" reference would be right.


“Compile-time and runtime are less relevant notions than before-shipping and after-shipping” - Rich Hickey

https://twitter.com/richhickey/status/1063086406980026370


Hehe. But has anyone ever shipped code that won't compile? I think not.

A compile time error is a caught error. A run time error may be, or perhaps not.


This!


It ran fine with all the file types we tested! A dependency loads the file into a custom type, but it handles this file type differently! Now it's a stinky old NoneType. :( :( :(


Yeah Python has its points where it doesn't shine, but the stacktrace does a consistently good job of telling you exactly where the logic broke down.


I love Python, and generally I agree, but the place where an error is raised can be significantly distant from the event that occurred which caused it. The lack of static typing can really make it difficult to find such bugs sometimes.

A statically typed language with compile-time type checking would flag that immediately, and chances are your IDE will show the error before it even gets to that point.

Type hints are a thing, but then you are already moving in that direction while remaining in python.


Agree that there's a big difference between the error happening at compile time vs. happening at run time in prod, but I personally find the python stacktrace much more descriptive of how we got to the error than a lot of other languages.

A certain amount of it is probably a familiarity thing, but I do usually find myself able to parse it to the point of what line in which file things started to go wrong.


Other languages can do the same, they just deliberately make the decision not to because of performance. Ruby has the same nice stacktrace (even nicer).


There was some work that took ML compile-time errors and generated Python-like stack traces of how the specific error could blow up at runtime. Students found those concrete errors with concrete values easier to reason about than abstract compile-time errors.


> All generics do is bring back the lack of typesafety and all the manual typechecking that comes with muddying generalizing the parameter typeset.

I'm not sure if I'm misunderstanding you here, but generics are type safe. They're statically checked at compile-time.


> If you design your functions in a way that the function will never receive an array, you don't have to test for it. And if some idiot tries one day, it should error very loudly that something is wrong.

That's naïve. Unless you code extra defensively (which is the same type of overhead as excessive testing, and itself needs to be verified by testing), there's very little guarantee that type errors will result in loud failure consistently at the first point that the bad value is passed (because functions that don't do unnecessary work often pass their arguments along without doing much work dependent on the very specific type).

And, in any case, failing consistently at compile time (or, with a modern code editor, at writing time) with static type checking is better than failing at runtime, even if it is loud and consistent.

Of course things like mypy and TypeScript (especially, in the latter case, because it powers tools that work pretty well when the actual immediate source is plain JS, or JS with JSDoc comments from which type information can be extracted, when consuming libraries with TS typings) mean that some popular dynamic languages have static type checking available with more expressive type systems than a number of popular statically-typed languages.


In Python, it’s entirely idiomatic to take different types of input and do different things. Look at the pandas dataframe constructor or just about anything in the datascience ecosystem (e.g., matplotlib). Look at the open() builtin—the value of the second string argument changes the return value type. Things aren’t much better in JS land where a function might take an arg or a list or a config object, etc. We see dozens of type errors in production every day.


For what it's worth, the data science ecosystem is probably the worst place to get general idioms from. They tend to err towards their own domain language and build stuff for people who aren't programmers, i.e. for caller flexibility instead of maintainability.

Besides that, I only said it was a code smell; I don't debate that it's useful sometimes. Just that it's a signal to be careful: check that you really need it, or whether there isn't some other constraint you can break.

My only advice is that narrowing your parameter types at the earliest opportunity is a good practice and results in much easier-to-understand code.


I kinda wish Python took the Rust approach of having multiple factory static-methods, rather than one constructor that does different things based on what arguments you pass in.


It does that sometimes, e.g. the classmethods in https://docs.python.org/3/library/datetime.html .

But perhaps not enough.


Yeah, it seems not very idiomatic. I regularly see "senior engineers" doing I/O, throwing exceptions, etc. in constructors. I staunchly believe a constructor should take exactly one parameter per member variable and set the member variable. For conveniences, make static methods that invoke that constructor. This will make your code far more understandable and testable (you can initialize your object in any state without having to jump through hoops). This is language-agnostic advice, but some language communities adhere to it better than others. Python is not very good about this.


I wonder if dataclasses might change this. They provide an __init__ that does what you describe, and though you can supply your own __init__ instead, classmethods seem the easier way to add custom initialization logic.


Yeah, it would be a very good thing for the Python community if dataclasses became idiomatic and vanilla classes came to be regarded as a code smell. Basically what every language should have is the concept of a "struct" or a "struct pointer". Constructors imply that there is one right way to construct an object which is rarely true except in the one-param-per-member-field sense.

"Convenience constructors" (i.e., static methods for creating instances from a set of parameters) are fine, but should often not be in the same module (or perhaps even the same package) as the class anyway. For example, if you have a Book class and it has a convenience constructor for creating a book from a database connection string and a book ID, this is probably reasonable for certain applications, but if it's likely that others would want to use your book class in non-database applications, then it would be a travesty to make the non-database application take a dependency on sqlalchemy, a database driver, etc (especially since database drivers tend to be C-extensions which may or may not be difficult to install on various target platforms).


I just wish people shitting on statically typed languages would remember they came to solve the mess that arose from dynamic typing.


I'm sorry you feel like I was shitting on statically typed languages.

I can assure you I do believe they have their purpose and place in the world, where raw performance or closer-to-the-metal abstractions are needed.

Personally, my brain much prefers to use a dynamic type system to infer what I mean, leaving the thinking part up to me.

I just find that defining a string is a string, when I'm only ever going to put a string in there, is kind of redundant.

There are other arguments about readability, refactorability, and similar, and I think they have some weight in large, diverse systems with many, many teams interacting.

Personally, that isn't a large issue for me. I'm working at a scope where I can maintain a standard and a level of consistency that alleviates these problems, to the point that the type safety is simply not warranted from a return-on-investment point of view.

I do not, for instance, think the kernel or Chrome should be rewritten in JavaScript. Despite what some React developers seem to think.


> I just find that defining a string is a string, when I'm only ever going to put a string in there, is kind of redundant.

There are two problems with this kind of thinking.

1) You aren't going to be so sure. Humans are imperfect, and as a codebase grows, everyone commits such errors. E.g. comparing a string to an int without converting it to an int in Python: it always returns false (IIRC), and it is hard to detect where the problem is.

2) Most type systems aren't that verbose. I guess you got that impression from Java / mediaeval C++. But most statically typed languages have local variable type inference; OCaml / Haskell / F# / Swift etc. have varying degrees of type inference. At this point, types in method signatures, in languages that require them, serve as documentation. And many of these languages are terser than Python/JavaScript.
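Even Go has a mild version of this with :=. A trivial snippet:

    name := "gopher"           // static type string, inferred
    counts := map[string]int{} // static type map[string]int, inferred
    // name = 42               // still a compile-time error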


Thanks for taking the time to respond.

1) I work on a TypeScript codebase for work. There is a real need for types in this case, because the definitions are about 3 layers of abstraction away from their use. It drives me up the wall, because I know that when you keep definitions close to use, or have some basic conventions, this problem melts away. At least to the point of diminishing returns, where types are no longer relevant.

So, I guess over the years I've been caught out by int + string = 0 all of, like, 1-2 times in production, maybe a few more while in dev. Either way, it's rare enough for me to call FUD on the argument. Stop mixing types and this really doesn't happen. To be fair to your point, at work I know of a recurring error in production where this is almost certainly the same class of error. Nobody can be bothered fixing it, because it's too hard to track down and the cost-benefit isn't there.

2) Type inference is problematic for me. Doesn't it just open you up to type errors of a different class?

2.5) Terse code is not the point. I find Rust cryptic and dense, to say the least. I prefer something like Go, where readability is a priority over density.


> 2) Type inference is problematic for me. Doesn't it just open you up to type errors of a different class?

It's at compile time, so not really? It will tell you exactly what it expects vs. what you provided, at compile time. These kinds of languages read like dynamic languages (for the most part) but they are completely static.


> In Python, we had to have tests dedicated to ensuring the right types were handled the right way. Is it a string or an array passed in? Welp, need a new test to cover that

  def foo(arg: str) -> int: ...
vs.

  def foo(arg: List[Union[Set[int], Optional[Dict[str, Any]]]]) -> str: ...
But...Python has a static type checker with a more expressive type system than Go.

Also, IIRC, better type inference.


But, do you really save tests? You have to test your code anyway. Does a type system buy you anything more in code that's going to be heavily tested for other reasons?


So many tests it's not even funny. A huge chunk of tests in dynamically/weakly typed languages just end up being input/output validations, where in a strongly typed language you don't bother, since you know the compiler would refuse to compile it.


Except that if you are comprehensively testing your code for things other than what's caught by type checking, you get those validations for free.

In my view, type checking lulls one into thinking the code is better than it actually is, just because it compiles. But it doesn't test the algorithms. And testing the algorithms exercises the code enough to reveal the shallow type errors.


Positive vs negative testing. Checking the algorithm is generally positive testing. You still should have negative testing, and in a strong type system language the compiler can be leveraged to do much more of that for you.

But yes, there's a lull with type checking that makes people believe only negative testing is sufficient, which is as foolish as thinking only positive testing is sufficient.


The evidence is pretty clear to anyone who writes in an untyped language. I can't count the number of bugs that I see in Ruby code at work that would simply not exist in Rust, the biggest one being unexpected nils.


In my experience I find it very common to write, rewrite, and re-rewrite code a lot when embarking on a new project. For various reasons, I embark on more new projects than I work on long-term ones, which I think is a factor here.

It takes a while for something to settle down enough to say this version is going to be the code for the long run.

At that point, solidity in testing and gradual typing can be added, usually starting with the public interface.

Languages where I've had to satisfy all type errors in the various fragments of half-brained code I've written for version 0.0.85 have always felt a little cumbersome. The compiler's type checker is a perfectionist looking over my shoulder.

(Go is one of those languages. Ruby is not.)


I'm the opposite. New applications are the ones you refactor the most, and refactoring is an order of magnitude easier/faster when the code is statically typed.

Refactoring any dynamically typed application (regardless of size) is a manual operation, and error prone.

But then again, I rarely deal with type errors; when I do, it's the compiler looking out for my often 'application-ending' mistakes.


As a person who chooses Python whenever it seems reasonable, I agree.

Go, the language, is simple. Hard to argue with, and sounds great.

The code I have to write, though, is quite often simpler to do in Python.


This is a generalized feeling about good static type systems.

What is missing is a language with a good static type system (algebraic data types, streams/iterators, generics) that is simple and has GC, for application domains where prototyping speed matters.


I think Swift satisfies all of those criteria, and doesn't have a steep learning curve, since you can write in an imperative style if you want.

F#, OCaml and Haskell (and other ML-family languages) satisfy the first 3 criteria (type system, simple and GC) and depending on your familiarity with the language, can be fairly suitable for fast prototyping.

For example, I write most of my new personal projects in Haskell, and I'm able to iterate pretty rapidly once I have a skeleton of the system in place (with liberal use of typed holes to ignore things I don't care about implementing right now). For some projects, I find I'm actually able to prototype more rapidly than I would be able to in an untyped language, because the language helps me express the shape of the data through algebraic data types, and then the functions end up having "one obvious implementation" that follows the structure of the data.


Reference counting still requires you to keep track of cycles. Also, performance is worse than GC, especially in multithreaded environments. And Swift is not a serious language outside the Apple ecosystem.

OCaml is actually a very good language. I hate how people outright dismiss it by mentioning multicore; neither JS nor Python has a great multicore story, and most applications don't need multicore.

I wish there were a good static compilation toolchain for .NET Core. F# would gain much more traction then.


Both JavaScript and Python are perfect for this, IMO - both languages have syntax for type annotations which are ignored by the interpreter, but are respected by external typecheckers.

Mypy for Python (https://mypy.readthedocs.io/en/stable/) and Flow for Javascript (https://flow.org/en/docs/) are my personal favorites, and have all the features you describe. And importantly, both languages have very strong library support.


Types are also massively useful in making better tooling. The ability to refactor with confidence and get meaningful autocompletes is already enough justification for a static type system.


Fun fact: because json is a reflective package in Go, solving your problem is quite trivial: https://play.golang.org/p/-TZ5b9Su1or .
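For anyone who doesn't want to click through, a self-contained sketch along the same lines (the exact ApiResponse shape here is hypothetical):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // The concrete shape of Data is only known at runtime, so it is
    // deserialized reflectively into an interface{}.
    type ApiResponse struct {
        Status string      `json:"status"`
        Data   interface{} `json:"data"`
    }

    func main() {
        var resp ApiResponse
        raw := []byte(`{"status":"ok","data":{"id":7}}`)
        if err := json.Unmarshal(raw, &resp); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", resp) // Data comes back as a map[string]interface{}
    }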

As for programming with types, that's partly what Go was trying to avoid: https://news.ycombinator.com/item?id=6821389 . And I agree with Pike on this one. The nice thing about Go for me is that I'm just writing code. Not defining a type hierarchy, not rewriting some things to get them just right, not defining getters, setters, move and copy constructors for every type :). Just telling the computer what to do and in what order. When I'm writing a library I'm defining an API, but that's about it; and you can usually steal whatever the standard library's patterns are there.

I disagree about nil as well; I think Go's zero value approach is useful, and basically impossible without nil (it wasn't necessary in my code, but I may want to instantiate an ApiResponse object without a data structure ready to be passed in).

A little bit of a rambly response from me, but, all in all, I think I'll be one of the stubborn ones who refuses to use generics in his code for a long time.


The first comment below your link to the message about Rob Pike's feeling on types was a more succinct form of my reaction:

> Sounds like Rob Pike doesn't understand type theory.

I'd be a bit more charitable; Pike is a smart dude, and I'm sure he does understand type theory but has decided he'd prefer to write imperative code. And he's actually good at writing imperative code correctly, so for him, that works. But most people are not very good at writing imperative code correctly, at least not the first time, and not without writing a large volume of tests (usually several times as much test code as program code) to verify that imperative code.

Or perhaps Pike was just assuming the person he was talking to was only talking about types in the OOP sense. If that's the case, I agree with him: taxonomies are boring, and inheritance often leads to leaky abstractions.

But types let you do so much more than that. I much prefer being able to encode behavior and constraints into the types I define over writing a bunch of tests to verify that the constraints I've expressed in code are correct. Why do the work that a compiler can do for you, and do it much better and more reliably than you?


> But most people are not very good at writing imperative code correctly, at least not the first time, and not without writing a large volume of tests (usually several times as much test code as program code) to verify that imperative code.

How are languages other than Go (like Rust, Swift, etc.) not imperative? They are nearly as imperative as Go, but have FP features like ADTs and pattern matching; those don't make them "functional". This point might be valid if you were talking about Haskell, the MLs, and the like.

> I much prefer being able to encode behavior and constraints into the types I define over writing a bunch of tests to verify that the constraints I've expressed in code are correct. Why do the work that a compiler can do for you, and do it much better and more reliably than you?

IMO, this "types replacing tests" might be true when comparing dynamic vs static typing, but for already static typed languages, extra typing cannot be a substantial gain. Personally, I find type-level programming beyond a certain extent to be counter productive and it doesn't seem to have much impact on correctness (unless we bring in dependent types).


You're acting like a language is either 100% imperative or 100% functional, but there's a lot of middle ground.

I just fundamentally disagree that "extra typing" (as you put it) doesn't give you much of a gain in correctness confidence.


I'm going to be a smartass about this and say I think neither of you are wrong. The "extra typing" gives you a large boost in your confidence that you've written something correct, without necessarily boosting the correctness of what you have written.

The best example of this which I have seen is Amos' https://fasterthanli.me/blog/2020/i-want-off-mr-golangs-wild... in which he chastises Go for allowing him to print filenames which contain non-UTF-8 data to the terminal which does not display correctly, instead of forcing them be escaped. Rust does that, so he is confident that the program he writes in Rust handles filenames properly: if they are UTF-8 they are printed normally, if they are not they are escaped. Since the Rust type system allows him to do this, it must be correct, right? Of course not. Filenames which contain vt100 escape codes would do a lot more damage and they are not handled at all.

At the end of the day you still have to think of all the possible cases. Types help to constrain the number of possible cases, but the more time you're spending making type constraints as narrow as possible, the less time you're spending handling the cases which can never be caught by the type system.


Parent comment isn't implying that functional vs imperative is a binary; they're pointing out that the power of a type system is orthogonal to the imperative/functional spectrum.


How about just having the bread-and-butter tools of FP available? You can't have a typesafe map operation without generics, so that alone is a major impediment to FP in Go.
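For reference, here's roughly what that would look like under the draft design's syntax (a sketch; it compiles on the go2go playground, not in current Go):

    // A type-safe Map: no interface{}, no runtime type assertions.
    func Map(type T, V)(xs []T, f func(T) V) []V {
        out := make([]V, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    // lengths := Map([]string{"a", "bc"}, func(s string) int { return len(s) })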


You are conflating FP with statically typed FP. There are dynamically typed FPLs too (Scheme, Elixir, etc.). Yes, I agree that Go could've been better if it had simple generics and discriminated unions.

By the way, generics are not at all exclusive to FP. Both C++ and D provide advanced generic/metaprogramming facilities like template template parameters (HKTs) and const generics, which Rust currently lacks. But I'd certainly not use these if there were no absolute need.

What I objected to was abstracting and generalizing too much to the point of over-engineering. Some type systems, like those of Rust and Haskell, provide more room for such abuse and it takes some discipline to keep it simple. In contrast, OCaml is a very good sweet spot. ML modules are as powerful as type classes and lead to much cleaner and simpler APIs. OCaml compiles even faster than Go!


>I disagree about nil as well; I think Go's zero value approach is useful, and basically impossible without nil (it wasn't necessary in my code, but I may want to instantiate an ApiResponse object without a data structure ready to be passed in).

I like your solution to my imaginary problem, but I am going to counter this one. Now I'm salty, because a nil interface[1] actually bit me once and cost me. Go's zero value approach also exists in Rust, but is type safe. Basically every primitive type, and Option<T>, has a zero value. Any struct that is made up of those values can also safely have a zero value. Then for those that don't, you can implement your own zero values. This is called Default in Rust. Go's zero value approach is perfectly possible, and arguably better, without nil. And even then, the number of times I've seen a service crash because someone "forgot" to initialize a nullable type and passed it to a function that then exploded isn't an issue I should deal with in 2020.

[1] https://medium.com/@glucn/golang-an-interface-holding-a-nil-...


First of all, interfaces are not nil if they point to a nil value, for the same reason a **T is not nil if the *T is nil. That is the correct decision. You cannot call a function on a nil interface, but you can call a function on an interface which is not nil but where the value of the type is nil[1].
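A minimal, runnable demonstration of that distinction:

    package main

    import "fmt"

    type T struct{}

    func (t *T) Hello() string { return "hello" }

    func main() {
        var p *T // a nil *T
        var i interface{ Hello() string } = p
        fmt.Println(p == nil)  // true
        fmt.Println(i == nil)  // false: i holds a (*T)(nil), not nothing
        fmt.Println(i.Hello()) // fine: calling a method with a nil receiver
    }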

As for your proposal, there are some issues with it: not all structures have a sane default for all non-optional, nilable parameters. What is the default underlying reader for a bufio.Reader? A reader which returns zero bytes? Certainly that would be more confusing to debug than a simple panic, which is what we have now [2]. There's also the fact that a zero value is just a value with all the bytes zeroed, and allocating an object via language means (new) never allocates more than the size of the object and doesn't do computation.

But I guess the main point would be that I simply do not have a problem programming with nil-able types. Failing to initialize in Go means writing var a *T as opposed to a := &T{} or a := NewT(), which seems like an odd mistake to make - or forgetting to initialize a member of a struct in the NewT() function. Fundamentally, I do not want to spend hours of my life dealing with safeguards which are protecting me from a few minutes of debugging.

But hey, that's just me. Go isn't Rust and Rust isn't Go and that's a good thing.

[1]: https://play.golang.org/p/L7iy9YBC55c [2]: https://play.golang.org/p/Vgv73KhegKI


> not all structures have a sane default for all not optional, nilable parameters. What is the default underlying reader for a bufio.Reader?

I think you're actually agreeing with nemothekid here, what you're saying is that there is no sensible default value for a bufio.Reader. In Rust terminology, that would mean bufio.Reader would not implement the Default trait. Types need to explicitly implement the Default trait, so only types where that makes sense implement it.

> I simply do not have a problem programming with nil-able types

Yes, that's going to make a big difference to how you feel about features that reduce the chance of errors like that. I'd hazard a guess that in C# the most common exception that is thrown is the NullReferenceException; after a quick search[1] it looks like NullPointerException is a good bet for the most common exception in Java. Most of those IllegalArgumentExceptions are probably from null checks too.

> Fundamentally, I do not want to spend hours of my life dealing with safeguards which are protecting me from a few minutes of debugging.

Similarly, I've already spent hours of my life dealing with null and undefined value errors, and I'd like to stop doing that. So I welcome new languages and new language features that help to stop those errors before they happen.

[1] https://blog.overops.com/the-top-10-exceptions-types-in-prod...


Important distinction: Default is something you have to opt into for your types. It is not pervasive.


Go is designed as a language for the average: the average programmer doing averagely complex things in the current average environment (i.e. web services, slinging protobufs or the equivalent). That is its specific design goal for Google.

It's designed to be simple, to be boilerplate, to be easily reviewable/checkable by coding teams.

Not sure why Go and Rust are always the compared languages. Go is designed to replace Java/RoR/Python in Enterprise-land, not to replace C/C++.

Rust is designed to replace C/C++ in system-land, embedded, kernel, thick app components (browsers are probably the most complex apps running these days on end user systems). The entire focus is zero-cost abstractions.


Because for a long time people were trying to shove Go into use cases that were previously covered by C applications. Some infrastructure has been developed in Go (Kubernetes being quite prominent), and so the overlap between systems programming and web/enterprise got muddy.

Rust can't cover the enterprise use cases of Go, the same way that Go won't ever cover what Rust can do at the systems/embedded level, but there is enough overlap in some cases to confuse people into trying to compare them directly.


You mean like the USB Armory security key running Go bare metal?

https://www.f-secure.com/en/consulting/foundry/usb-armory


I don't follow what you mean by that, care to elaborate?


That is a use case that many would assert should be written in C, yet F-Secure decided that given the security scenario, bare metal Go was the way.


Ah! Yes, that is a case that I would consider could have an overlap between Go/Rust and C. Thanks for sharing it, had no idea it was implemented in bare metal Go, will check it out :)


Wait...

I have seen average programmers who can write less dumbed-down code than the most succinct code possible in Go.

I have seen below-average programmers who understand how to use generic data structures / programs in C++ / Java / C#, etc.

> It's designed to be simple, to be boilerplate, to be easily reviewable/checkable by coding teams.

Boilerplate and easily reviewable are at odds. Unless enterprise-style Java is the bar, it is hard for anyone to say that. Go has far more boilerplate than an average programmer's Python code, for example. Don't tell me Python is dynamic; the average programmer doesn't do metaclass magic.

So much boilerplate, the verbose `if err != nil`, no generics, and no methods/functions for common operations like finding the index of an element in an array: all this leads to for-loop-inside-while-loop-inside-for-loop atrocities where it is harder to decipher what the intention is. Compare to Python, where you have methods on collections to carry out common manipulations, and a list comprehension is so much cleaner than 4 lines of imperative Go code.
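E.g. the index-of helper that gets rewritten for every element type today (a sketch):

    // Hand-rolled per element type; a generic version needs the
    // draft's type parameters.
    func indexOfString(xs []string, target string) int {
        for i, x := range xs {
            if x == target {
                return i
            }
        }
        return -1
    }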

Go seems to miss why Python/JS/Ruby are so popular. It is because so much is built in that you can communicate __intent__ clearly without getting bogged down in details. Compared to any modern language that's not C and not mediaeval C++, Go is so much more verbose. Even Java has enough shortcuts to do these common things.

And don't start telling me this leads to incomprehensible code. Coding standards are there. What's unreadable is the 8-level-indented imperative atrocity of the blue-collar language Go.

> Not sure why Go and Rust are always the compared languages.

They both emerged at the same time and have some overlapping scope - e.g. static native compilation, memory safety, etc. - but there the similarities end. However, there is a lack of a popular, succinct, natively compiled language which gets out of the way to write software. Everyone knows expressive languages need not be slow or difficult to deploy. Some people have to write Go in their day job, and the sibling Rust, having the quality-of-life improvements that anyone expects in a post-2000 language [0], seems to be the closer candidate for comparison (even though Rust isn't the optimal language for the things Go is used for, given it is a systems language with static memory management). Others like D and Nim have a fraction of the users. Of course there are also some vocal Rust fanboys who think the crab god is not popular because the world is anti-intellectual.


Well, you have seen a lot of things, which I guess is fine. Others may have seen different things. In my last 10 workplaces and dozens of projects I have seen code which would be at least 5-10 times more verbose than equivalent Go code. I also differentiate the verbosity of an individual expression from the verbosity of the overall project due to dependencies, code arrangement, and the other associated files/resources needed to produce a deliverable.

> Go seems to miss why Python/JS/Ruby are so popular.

Not sure what there is to miss, especially since Go is pretty popular for its age, considering it is not even mandated or officially supported like Swift by Apple, or Dart/Kotlin by Google/Android.


This is pretty much correct, except for two minor gripes. Firstly, Go was meant to replace C++ in server-land. The fact that it ended up being a suitable alternative to Java/Python in many cases was a happy accident.

Secondly, I don't think the fact that Go makes the opposite trade-offs in terms of (let's say) programmer effort vs. CPU clock cycles means it is a "language for the average". Go is great for any number of interesting high-level server-side components where being done in half the time is better than being 15% faster.


I'm just going to comment on one point of your comment:

> to be easily reviewable/checkable by coding teams.

I strongly disagree with this. I find Go code to be incredibly difficult to read, for two main reasons.

The first is that the lack of expressive power in the language means that many simple algorithms get inlined into the code instead of using an abstraction with a simple and understandable name. My first pass over the code is spent reading the lines (each of which is very simple and understandable) and mentally abstracting them into what the code actually does. The interesting business logic is lost in all of the boilerplate.

    out := []int{}
    for _, v := range input {
        number, err := fetchData(v)
        if err != nil {
            return nil, err
        }

        if math.Abs(float64(number)) <= 2 {
            continue
        }

        out = append(out, number)
    }
    return out, nil
vs

    input.iter()
        .map(|&v| fetch(v))
        .filter_ok(|number| number.abs() > 2)
        .collect()
As I said, every line in the Go version is simple (except maybe for append, but generics can help with that). However, the actual business logic is lost in the boilerplate. In the second example (Rust, with one existing and available helper function) there is much less boilerplate, and each line basically expresses a point in the business logic. There are really two bits of boilerplate here: `filter_ok` instead of `filter` to handle the errors from `fetch`, and the `collect` to turn the iterator into a collection (although maybe you could improve the code by returning an iterator instead of a collection and simplify this function in the process).

Secondly the "defaults are useful" idea is in my opinion the worst mistake the language made. They repeated The Billion Dollar Mistake from C. I have seen multiple expensive production issues as a result of it and it makes code review much harder because you need to check that something wasn't uninitialized or nil. It is absolutely amazing in Rust that I don't have to worry about this for most types (depends on your exact coding style, in the above example there is never a variable that isn't "complete").

So while Go may be quick to write, I think the understandability is deceiving. Yes, I can understand every line, but understanding the program/patch as a whole becomes much more difficult because of the lack of abstraction. Humans can only hold so much in our heads, making abstraction a critical tool for understandable code. So while too much of the medicine can be worse than the disease, I think Go aimed - and hit - far, far below the ideal abstraction level.


> So while Go may be quick to write, I think the understandability is deceiving. Yes, I can understand every line, but understanding the program/patch as a whole becomes much more difficult because of the lack of abstraction. Humans can only hold so much in our heads, making abstraction a critical tool for understandable code. So while too much of the medicine can be worse than the disease, I think Go aimed - and hit - far, far below the ideal abstraction level.

I think this is the fundamental point of contention. Go aims at abstraction at the package level. Each package exports a set of types and functions which are "magic" to outsiders and can be used by outsiders - an API, if you will. Rust seems to aim at abstraction at the line level - each line is an abstract "magic" representation of what it is meant to do.

In your Go code, the only pieces of line-level magic are `range` and arguably `append` (even though I would argue it is integral to the concept of slices). And of course the API magic of `fetchData` and `abs` which is in both versions.

On the other hand, Rust has .iter(), .map(), .filter_ok() and .collect(). So, while I think anyone could understand the Go code if you explained `range` to them, I do not understand the Rust code. Yes, I understand what it does, but I have no clue how it does it. What is the type of .iter()? Why can I map over it? Why can I filter a map?

But that's not the point of the Rust code. The Rust code expresses what should be done, not how it is to be done.

The way Rust deals with complexity is by offering tools which push that complexity into the type system, so you do not have to keep everything in your head. The way Go deals with complexity is by eliminating it and, when that is not possible, by making sure it does not cross API boundaries. In Go you do keep everything in your head.

That's a Go feature.


> Go aims at abstraction at the package level. Each package exports a set of types and functions which are "magic" to outsiders and can be used by outsiders - an API, if you will. Rust seems to aim at abstraction at the line level - each line is an abstract "magic" representation of what it is meant to do.

Are line-level and package-level abstractions necessarily mutually exclusive? One could argue that the functions Rust iterators expose are "'magic' to outsiders and can be used by outsiders - an API, if you will".

> but I have no clue how it does it

The question here is whether it actually matters whether you know how the Rust code works. You don't necessarily need to know how `range` or `append` work; you just need to know what they do. Why should the Rust code be held to a different standard in this case?


I seem to be missing the point of your argument.

> I think this is the fundamental point of contention.

I agree with your point. Part of Go was definitely the removal of unnecessary abstraction and complication. However my argument is that they went too far.

> Go aims at abstraction at the package level [...] Rust seems to aim at abstraction at the line level

I don't understand the difference, in your mind, between package level and line level. Whatever is exposed by a package is surely intended to be used in a line elsewhere in the program.

> On the other hand, Rust has .iter(), .map(), .filter_ok() and .collect(). So, while I think anyone could understand the Go code if you explained `range` to them

If you explain range, continue, return and append then you could understand the Go snippet. I don't see how this is meaningfully different from explaining iter, map, filter_ok and collect. Sure, the former are language features while the latter are library features but that doesn't seem to be a meaningful difference when it comes to comprehension.

> Yes, I understand what it does, but I have no clue how it does it.

This is the whole point of my argument. You don't need to understand how it works. Much like you don't need to understand how continue or append work in go. That is the point of abstraction. You need to know what they do, not how they do it. In my opinion Go forces you to leave too much of this usually-irrelevant plumbing in the code, which distracts from the interesting bits.

> The way Go deals with complexity is by eliminating it

This is again my key point. I'm arguing that in most cases Go hasn't managed to eliminate the complexity. Maybe it got rid of a little, as your "map" loop doesn't need to be as generic and perfect as the Iterator::map in the standard library. But the complexity that is left is now scattered around your codebase, instead of organized and maintained in the standard library.

The intrinsic complexity has to live somewhere. And in Go I find a lot more lives inline in your code. In other languages I find it is much easier to move the repetitive, boilerplate elsewhere. And when this is done well, it makes the code much, much easier to read and modify as well as leading to more correct code on average.

> That's a Go feature.

I agree with that. But what I am trying to express that in my experience this is actually a flaw. It looks good at the beginning. But once you start reviewing code you start to see it break down. I think Go had a great idea, but based on my experience I don't think it worked out.


> Sure, the former are language features while the latter are library features but that doesn't seem to be a meaningful difference when it comes to comprehension.

Absolutely. The difference is that Go has a limited number of such features, and once you have learnt them, that's all you need to know, in that sense, to understand any code base.

One thing which I have not expressed very well in my reply is what exactly I meant by "understanding what the code does". When you look at func fetchData(T1) (T2, error), it's easy to understand what it does: it fetches some data from T1 and returns it as T2, with the possibility of it failing and returning an error. If you know what T1 and T2 are (which you should if you're inspecting that code), that's usually sufficient. You understand (almost) all of its observable behavior, which is different from its implementation details. Similarly, `abs` returns the absolute value of a number.

`append` also has easy-to-understand observable behavior: it appends the elements starting at position len(slice), reallocating if necessary (generally, if the capacity is not big enough), but its actual implementation is undoubtedly very complex. `range` is harder to explain, but rather intuitive once you get the hang of it.

Of course you also want to keep in mind the behavior of all the language primitives as well: operators, control flow etc. In Go, you have to keep all of these things in your head to understand what is happening in the code, but once you do you really understand it.

We can call all of these things: variables, language primitives, API functions etc. atoms of behavior. In Go, to understand a piece of code, you first have to understand what the observable behavior (but usually not the implementation) of all of the atoms in that code are, and then understand all of the interactions between those atoms that happen as a result of programmer instructions.

What I mean by line-level vs package-level abstraction is quite simple (maybe not the best names, but hey, I'll stick with them). With package-level abstraction, the atoms, as well as the interactions between them, remain conceptually easy to understand, but become more powerful as you move up the import tree. The observable behavior of an HTTPS GET is easy to understand, but very complex under the hood.

With line-level abstractions the atoms, and especially the interactions between them, become very complex. The programmer no longer "has to understand" the observable behavior of every single function he uses. Odd one-off mutators are preferred to inlining the mutation because it "makes the code more expressive" - in that it makes it look more like English, it makes it easier to understand what the programmer is trying to do. It does not, however, make it easier to understand what the programmer is actually doing, because the number of atoms - and their complexity - increases substantially. If you want to get a feel for this, look at the explanation for any complex feature in C++ on cppreference.com. I must have read the page on rvalue references 20 times by now and I still don't grok it.

Of course, with line-level abstraction, the programmer doesn't need to constantly keep in mind 100% of the behavior of the atoms he's using, much less whoever's reading.

I can't tell you which one's better - probably both have their place - all I'm saying is that I, personally, can't work with C++/Rust/other languages in that style. I've tried to use them but I can't. C is easier to use - for me.


I'm honestly still confused what point you're trying to make.

Paragraphs 2 through 6 seem like they would apply to most, if not all, languages, even if the language is more complex. For example, here's the text with a few minor alterations:

> One thing which I have not expressed very well in my reply is what exactly I meant by "understanding what the code does". When you look at `fn fetch_data(input: T1) -> Result<T2, Error>`, it's easy to understand what it does: it fetches some data from T1 and returns it as T2, with the possibility of it failing and returning an error. If you know what T1 and T2 are (which you should if you're inspecting that code), that's usually sufficient. You understand (almost) all of its observable behavior, which is different from its implementation details. Similarly, `abs` returns the absolute value of a number.

> `Vec::push` also has easy-to-understand observable behavior: it appends the elements starting at position vec.len(), reallocating if necessary (generally, if the capacity is not big enough), but its actual implementation is undoubtedly very complex. `Vec::iter` is harder to explain, but rather intuitive once you get the hang of it.

> Of course you also want to keep in mind the behavior of all the language primitives as well: operators, control flow etc. In Rust, you have to keep all of these things in your head to understand what is happening in the code, but once you do you really understand it.

> We can call all of these things: variables, language primitives, API functions etc. atoms of behavior. In Rust, to understand a piece of code, you first have to understand what the observable behavior (but usually not the implementation) of all of the atoms in that code are, and then understand all of the interactions between those atoms that happen as a result of programmer instructions.

The details differ, but the overall points remain true, do they not?

Unfortunately, I'm still confused after your description of line-level vs. package-level abstraction. Just to make sure I'm understanding you correctly, you say an HTTPS GET is an example of a package-level abstraction, and an "odd one-off mutator" (presumably referring to a functional/stream-style thing like map()/filter()) is an example of a line-level abstraction?

If so, I'm afraid to say that I fail to see the distinction in terms of package-level vs. line-level abstraction, since what you said could apply equally well to both HTTPS GET and map()/filter(). For example, if a programmer uses a function for an HTTPS GET instead of inlining the function's implementation, you can say "using the function is preferred to inlining the request because it 'makes the code more expressive' --- in that it makes it look more like English, it makes it easier to understand what the programmer is trying to do. It does not, however, make it easier to understand what the programmer is actually doing..."

That's the nature of abstractions. You hide away how something is done in favor of what is being done.

> With line-level abstractions the atoms, and especially the interactions between them, become very complex. The programmer no longer "has to understand" the observable behavior of every single function he uses.

I believe your second sentence here is wrong. Of course the programmer needs to understand the observable behavior of the functions being used --- how else would they be sure that what they are writing is correct?

> Odd one-off mutators are preferred to inlining the mutation because it "makes the code more expressive"

I think you might misunderstand the purpose of functions like map() and filter() if you call them "odd one-off mutators". The same way that `httpsGet` might package the concept of executing an HTTPS GET request, map() and filter() package the concept of perform-operation-on-each-item-of-stream and select-stream-of-elements-matching-criteria.

> If you want to get a feel for this look at the explanation for any complex feature in C++ on cppreference.com. I must have read the page on rvalue references 20 times by now and I still don't grok it.

You need to be careful here; in this case, I don't think rvalue references are a great example because those aren't really meant to abstract away behavior; on the contrary, they introduce new behavior/state that did not exist before. It makes sense, then, that they add complexity.

In the end, to me it feels like "package-level" and "line-level" abstractions are two sides of the same coin. The functions exposed by a "package-level abstraction" become a "line-level abstraction" when used by a programmer in a different part of the code.


> I think this is the fundamental point of contention. Go aims at abstraction at the package level. ...

I think this is an excellent point. I have Go code which might make a Rust fan apoplectic with its verbosity and basicness. But for me it is like magic every time it runs and gets stuff done on any of my remote/local machines.

I made a similar point elsewhere about expression verbosity vs. project verbosity. To me, all the 3rd-party dependencies, the substantial number of outside tools, and the obscure setup files are a type of verbosity when I work on a project. Though I do not mind them if that's what is needed.


Because Go was originally designed while Pike and others were waiting on C++ builds, and he couldn't grasp why C++ developers didn't jump for joy at adopting Go.

> We—Ken, Robert and myself—were C++ programmers when we designed a new language to solve the problems that we thought needed to be solved for the kind of software we wrote. It seems almost paradoxical that other C++ programmers don't seem to care.

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


> Fun fact: because json is a reflective package in Go, solving your problem is quite trivial: https://play.golang.org/p/-TZ5b9Su1or .

If I understand it correctly, your code doesn't check, when unmarshalling, whether the content of Data has the expected structure. Using interface{}, it has always been possible to achieve something like what you'd do with generics, but it's ugly and can't check types at compile time.


> I spent more time playing type system golf trying to come up with the optimal type for whatever usecase. In Go I might just "do the work", but in Rust I've turned 5 minute functions into 30 minute api design thought exercises. The jury is out if thats is a good thing or a bad thing.

The way I tend to look at this is the 30 minutes I spend up front making my types work will usually save me hours of time later (sometimes much later, and spread over different coding sessions), because if I'm using the type system to enforce constraints, I'm not going to have to worry about writing buggy constraint code.

When I was learning Scala (after working in Java most of the time), I noticed that I would sometimes spend quite a bit of time getting short snippets of code correct so the types would line up. The result would be shorter than the equivalent Java code. But I had much more confidence in that code actually working, without needing to write tests (or at least as many tests), because I was leaning hard on the type system to prove the code correct. You can do that to some extent in Java, but the type system there is a bit weaker, and the standard library and common idioms make it harder to do. Ultimately if I have a chunk of Java code and a chunk of Scala code that does the same thing, I'm much more confident in the Scala code because I know scalac is providing me stronger guarantees than javac, assuming I've written the code to make good use of the compiler's abilities.
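The same idea works even in Go's comparatively modest type system. A sketch of pushing a constraint into a type (names and validation are made up):

    package mail

    import (
        "fmt"
        "strings"
    )

    // Outside this package, the only way to obtain a non-zero Email is
    // ParseEmail, so downstream code can skip re-validating: the
    // constraint lives in the type rather than in scattered checks/tests.
    type Email struct{ s string }

    func ParseEmail(raw string) (Email, error) {
        if !strings.Contains(raw, "@") { // toy validation, not RFC 5322
            return Email{}, fmt.Errorf("not an email: %q", raw)
        }
        return Email{raw}, nil
    }

    func (e Email) String() string { return e.s }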


Do we have a name yet for the internet phenomenon where every time Go is discussed, Rust must be immediately brought into the conversation, and vice versa?


They both came on the scene at the same time, aiming for similar (though not exactly the same) markets, with Go aiming for something between a "better C" and "statically-compiled Python" and Rust a straight-up "better/safer C++". There's enough overlap between those two markets that it's natural for people to want to compare the two languages and how they've evolved over time.


To be clear, Mozilla first mentioned starting sponsorship for something called "Rust" a few months after Go was publicly announced. However that prototype "Rust" was a language almost entirely unlike Rust 1.0 (which didn't appear until 2015, five years after sponsorship was first announced). Go was already years into development before being announced and was used internally before the public 1.0 release.

Today Rust and Go exist in different spaces. Go is a Python/Ruby/Java/etc alternative. Whereas Rust is an alternative to C/C++/etc. Obviously there is overlap between those sets of languages but that doesn't really mean Rust and Go are as directly comparable as HN comments might lead people to believe.


>Today Rust and Go exist in different spaces. Go is a Python/Ruby/Java/etc alternative. Whereas Rust is an alternative to C/C++/etc.

You make it sound as if Python/Ruby/Java on one hand and C/C++ on the other are two well established, well delineated language blocks with completely different uses cases. That's never been the case in my experience, especially with the pair Java vs. C++ which might be the most apt comparison for Go vs. Rust.

It's absolutely obvious to me that Go and Rust are competing for market share, there's no reason why ripgrep couldn't be written in Go or fzf in Rust for instance.

Maybe it's irrelevant and there's enough room for both languages to coexist and thrive in the long term, but in my experience they really don't exist in different spaces at all.


> Go is a Python/Ruby/Java/etc alternative. Whereas Rust is an alternative to C/C++/etc

Except that in this year's Rust survey, there were more people using Rust for "Backend Development" than any other purpose. Rust allows you low-level control, but in many ways it's higher-level than Go (it allows for more sophisticated abstractions that can be wrapped up in libraries). It's an excellent choice for web services. And many people find it more productive than Go (although the reverse is also true).


"Backend Development" is basically anything that happens on the server side. That covers a multitude of sins, from the lowest levels to the highest.


I would say that Rust and Go started in a similar space, but diverged over time, with Go finding a higher-level, less strict niche and Rust finding a lower-level, stricter niche. In retrospect this seems completely unsurprising, with the development team of Go primarily targeting servers and the development team of Rust primarily targeting Web browser engines (though, as both are general-purpose languages, they can be used in many niches, and plenty of people are productive with e.g. emulators in Go and servers in Rust).


Personally I find Rust to be a much higher level language than Go, though I understand this is a bit of a weird case since it also involves manual memory management to some degree.

In terms of “feeling”, Go feels like a polished form of C to me. Rust doesn’t feel like C++ to me — it actually feels much more like OCaml and even Haskell. Though Haskell no doubt has a more sophisticated type system, I still find that a shocking number of design practices that I used in Haskell port naturally to Rust.


The original Rust compiler was written in OCaml, and OCaml was one of the chief design inspirations for the language. The similarities between Rust and ML-like languages are no coincidence.


+1

I don't understand people trying to make "a language that does it all". There has been a lot of that in Go, in Rust, in JavaScript, in Java, in TypeScript, in C++, in Haskell, in D...


It is called HN. You can’t ever discuss a language without also discussing Rust.


Hopefully not vice versa. In one way I put Swift, Java, and Go in the same category: their authors rarely show up in discussions outside their own subreddit, mailing list, or other dedicated forum.

Whereas Rust users and committers are much more interested in discussing what good programming is, the nuances of PL theory, and so on. It does not matter whether the post is about Rust or not, and especially if it is about Go.


It shouldn't be that surprising. Both are "low-level" languages that came onto the scene around the same time, are highly opinionated, but stake out very different points on the design space.

If you start talking about changing the recipe for Coca-Cola, surely Pepsi is going to enter the discussion at some point, right?


Sorry, either you are ignoring the facts or you have not noticed at all. The point is that if an article is about Rust, it is quite unlikely that Go will appear in the discussion, but if it is a Go discussion, Rust will appear with almost 100% certainty.


this. Every conversation about Go gets interrupted by a Rust fanboy saying "Rust does this better".

The HN comments on this article have been derailed from any interesting discussion about Generics in Go to an utterly pointless religious war about Rust and type systems.


It's gotten so bad that I wish @dang would start to intervene.


In spirit I agree. But Rust fans would claim they are 1) expressing opinions in a non-abusive way, and 2) enlightening programmers who are unaware of a "better" way of doing things. And both activities are fine, at least technically.


I totally agree with both those things. But the Rust fan should realise:

1. This opinion gets expressed on every single comment thread about Go, multiple times. Maybe have some respect for other people's right to have a discussion on a subject without derailing it with a tangential opinion about another language.

2. We all know that Rust is the one true programming language for every possible context. However, because we're clearly idiots, we're writing a program in Go. Pointing out how Rust would do it better isn't helping.


This doesn't explain the obsession with Rust.

Fitting analogy: If there's an article about McDonald's, people would always talk about Wendy's instead. No one would mention BurgerKing (C++), Subway (D), KFC, ...


I see Zig and D mentioned a lot on the Rust/Go posts about features, and C++ on the posts about performance. It doesn't seem to be Go/Rust specific.


> BurgerKing (C++), Subway (D), KFC, ...

PHP must be Taco Bell. It solves a lot of problems quickly, but you'll pay for it later.


The thing is, most discussions about Go are about adding features Rust has or making changes where Rust has something relevant to say.

I chose my analogy carefully. Coca-Cola tried to change its recipe to make it sweeter because they did studies and found people liked the taste of Pepsi more. So any discussion of changing Coke's flavor means Pepsi is highly relevant. But the converse is not necessarily true. Pepsi doesn't have any (recent) history of trying to match their flavor to Coke.

Maybe a better analogy is this: if Mercedes comes out with an electric car, you should expect to see Tesla in the discussions. But if Tesla comes out with a combustion engine, there's nothing particularly relevant about Mercedes in that discussion.

I'm sure that if there were discussions about adding structural typing or lightweight fibers to Rust, then Go would show up. But most of the discussions tend to be around adding generics or sum types to Go.


I wouldn’t say that I’m obsessed with Rust. However, it’s the sort of language I’ve wanted essentially since I started programming: a Haskell/OCaml-like language with the sheer performance and practicality of C++.

I’ve been a long-time user of Haskell, but I can safely say Rust has supplanted it almost entirely for me at this point. I never would have felt comfortable betting on Haskell in a commercial environment. Rust? Absolutely.


Can you expand on why you would not bet on Haskell for commercial work? What are those things that make Rust suitable for it but not Haskell in your opinion?


This is cool to hear. I've been interested in Haskell/OCaml to learn functional programming. Rust has other cool applications like WebAssembly too. I think I'll start learning it after I learn Clojure (trying to get into functional stuff first with Lisp).


Everybody that talks about rust is talking from a different perspective with different values. What makes rust different from other languages is the breadth of use cases for which rust is a candidate worth mentioning. That's not so much an obsession as it is an availability bias on your part. You're seeing different people talk about rust obsessively in different contexts and assuming that those same people talk about rust in all contexts.

If any of these features is of significant value to a programming use case, it is worth talking about rust:

* Speed

* Type safety

* Memory safety

* Memory determinism

* Latency

* RAII-based resource acquisition and destruction

* Concurrency correctness / safety

* Minimal runtime requirements

* Low level bit manipulation (eg. Cryptography, compression)

* Startup time

* Networking

This isn't exhaustive, but it gives a lot of different people in different domains working on different problems a reason to talk about the same language when comparing to their usual choices.


I don’t know if this explains the entirety of the phenomenon, but I do think this post helped me understand at least a portion of the pervasive mention of Rust in almost any programming conversation on HN, which makes the fact that it happens slightly less frustrating.


yes, but the article is discussing Generics in Go. Nothing to do with Rust. Rust is absolutely NOT worth talking about in this context.


You do realize that the lack of generics in go pushed a lot of go programmers towards rust, right? This feature of go is long overdue, and many people gave up on trying to get them implemented. The fact that they are now available has many programmers wondering if it is even worth switching back.

That is context for discussion, and yes, it is relevant. Directly relevant, in fact.


I get that. And that's a great discussion for the Rust community to have.

I wanted to hear what Go devs have to say about this latest iteration of the generics proposal. But to do that I have to wade through endless comments about type systems and Rust. Even on comments directly talking about this iteration and how to use it, every other reply starts with "In Rust..."

If implementing generics leads more of the Rust community back to Go, then from what I've seen here: no thanks. I have a feeling we'd see every conversation in every community forum start with "In Rust.."


> I get that. And that's a great discussion for the Rust community to have.

It's also a great discussion for the go community to have. The only problem here is that it's not a discussion you wanted to have. Should every discussion about go revolve around you and your wants?

If all you wanted was to hear what the go devs wanted to say, you could have clicked on the link and read it. Or could you possibly use the feature that was built for you: the [-] button to collapse a thread. Or maybe, just maybe, you could go to a place that is explicitly only about go, like /r/golang.


I'd be totally cool if it was this once, and actually interested.

But it's every freaking time. Every single time Go is mentioned, there's a ton of Rust folk commenting on how Rust does it (better).

Just for once, it'd be nice to have a comment thread discussing a Go-specific item, that has nothing to do with Rust, without talking about Rust.

And yeah, /r/golang is at least Go-focused. But it has its own problems too. Mostly PHP and JS folks learning Go and asking basic questions (which would be great if they didn't then reply to the answers with "but that's not how PHP/JS does it, why does Go do it so strangely?" and without reading any of the billion answers for that question already).

Sorry if my exasperation is showing ;)


No, I do not realize that at all. It's like saying "A lot of Americans will renounce their citizenship if Trump wins." Yes, a lot of people said that, but I am not sure there is data to conclusively say it happened in either case.

The only cases I know of are people moved to Rust to avoid GC pauses.


No, but I wish I saw Go vs Clojure, more often.

To me, Go vs JVM ecosystem languages is a much more difficult choice. I would have to actually think a bit.

To me, Rust and Go really don't occupy the same ecosystem niche. I can't think of a project where I would have to choose between them--the choice between them is almost always dead obvious given the project description.


Clojure and Go seem like two of the least comparable languages to me. They basically have no commonalities. Comparing Java, Kotlin, or Scala to Go seems more valuable to me.


Superficially they're very different. But lots of people use Go because it's a "first class" concurrency language. Concurrency was given a huge consideration in Clojure too, with its immutability, STM, agents, etc. The reasons you might choose Go can overlap heavily with why you might choose Clojure, Erlang, or Elixir.


Is it really first class though? Anything CPU-bound is basically a no-op since Go doesn't have actual threads.


Go automatically multiplexes goroutines on top of OS threads. You can peg all the cores of a multicore CPU in a single Go program using goroutines.

Edit: last I looked you could see how many OS threads would be used by looking at runtime.GOMAXPROCS. There was a time when it would just default to 1, but it looks like they changed it. When I last programmed in Go you had to override it via an environment variable or set it in code.
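
For anyone unfamiliar, a minimal sketch (the arithmetic is just filler work): spawning one CPU-bound goroutine per core keeps every core busy, with no thread management in user code.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // GOMAXPROCS(0) reports the current setting without changing it;
        // since Go 1.5 it defaults to the number of logical CPUs.
        n := runtime.GOMAXPROCS(0)
        fmt.Printf("NumCPU=%d GOMAXPROCS=%d\n", runtime.NumCPU(), n)

        var wg sync.WaitGroup
        sums := make([]int, n)
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func(i int) { // one CPU-bound goroutine per core
                defer wg.Done()
                for j := 0; j < 1e9; j++ {
                    sums[i] += j
                }
            }(i)
        }
        wg.Wait() // all cores stay pegged until this returns
        fmt.Println(sums[0])
    }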


Both are great choices for networking, concurrency, latency, and throughput-oriented use cases. Networked applications, databases, streaming, caches, etc.


Not a 100% fit, but we have: https://en.m.wikipedia.org/wiki/Cargo_cult_programming

> a style of computer programming characterized by the _ritual_ inclusion of code or program structures that serve no real purpose.


Was cargo famous before Rust, or maybe it should be npm-cult programming?


"oxidation"


It's called the Rust Evangelism Strike Force (RESF).


gorrosion!


I think the word you're looking for is marketing


Fearless shilling or zero manners evangelism.


I like to think that how much time a language makes you spend designing the API is a spectrum, and Rust in particular is very much towards the extreme. There are a lot of languages out there with generics that do not place the cognitive load that Rust's borrow checker puts on you. I suggest you give those languages a try if you feel that 30 minutes is too long.

I must say I am biased, since I develop a language that I believe fits the sweet spot here. I would encourage you to give it a try: https://nim-lang.org


I don't consider myself a particularly good programmer, and I don't find that the borrow checker is a substantial source of cognitive load.

I think I get way more friction from:

- library authors that go crazy with generics.

- the million ways rust makes performance tradeoffs explicit (even when you're writing part of the program where performance simply doesn't matter).

- the million-and-one things you can do with an option type.

- the arcane macro syntax.

I could probably go on. I've been writing a lot of rust because I wanted a low-ish level language with good C interop, and I didn't want to learn C++, so I wasn't at any point a rust fanboy. I very much appreciate not having to be a genius to avoid segmentation faults, but other than that, I think rust is a somewhat ugly language - for the simple reason that it has a lot of features, some of which partially overlap, and most of which are vaguely inconsistent.

I'm probably sounding a bit harsher than I mean to - obviously, you can't expect a language like rust to have the simplicity and consistency of something like lua, and by and large, it's better than anything else I write programs in.

Still, the point is, as somebody with a lot of (fair and unfair) complaints, the borrow checker is not one of them. It complains rarely, and when it does you're almost always doing something stupid, unless you're doing something fairly arcane, in which case you should probably use unsafe.


Then you're a good programmer.

Rust effectively forces you to code like you're writing a multi-threaded app even when you aren't. There's a reason why people suck at writing multi-threaded apps: because it's hard. This is what people are fighting when they fight the borrow checker. And this is why so many people find it frustrating. There's all sorts of designs that flat out don't work or are way more effort than they're worth.


This is a really common misconception, but all of the borrow checker's rules are necessary to fully verify the memory safety of single-threaded programs too.

It just turns out that mathematically proving GC-free code to be free of use-after-free and double-free errors is really difficult unless you disallow a lot of things.


It's not just about multi-threading; it's also about dealing with memory in a way that's safe in a single thread without introducing GC, which we know from C is hard too.


In my experience, most of the painful difficulties people have with the borrow checker arise because it works like a read-write lock. AFAIK this aspect isn't required to have safe single-threaded code without a GC.



It is if you want to have automatic deallocation with a strict guarantee of memory safety including no use-after-free bugs.


How does this relate to whether or not you plan to mutate things? Wouldn't plain ole counting work for this, regardless of whether or not they're mutable, rather than what Rust does - allowing multiple readers and no writers, or zero readers and 1 writer?


Refcounting is GC; it automatically detects at runtime that an object has no references. It also adds a substantial cost that Rust is trying to avoid, often by copying immutable objects (which have better cache locality than random access to refcounts everywhere).


I'm not talking about GC. I'm talking about Rust references:

https://doc.rust-lang.org/book/ch04-02-references-and-borrow...

> At any given time, you can have either one mutable reference or any number of immutable references.


Oh, I get it, you're suggesting all references could be mutable, but still limited to the lifetime of the object. I guess the downside would be that you can't design an API like iterator invalidation that relies on "some (mutable) methods can't be called while any other (immutable) references exist".


It would mean that, once you obtain a derived pointer – say, if you start with a Vec<Foo>, a pointer to Foo that refers to the first element – you would have to throw away that pointer as soon as you made any function call whatsoever. After all, that function call might mutate the Vec<Foo> and cause the backing storage to be reallocated.

In practice, this is unworkable enough that it would basically force you to reference count the Foo, so you could preserve pointers across calls by incrementing the reference count.

On the other hand, Rust's approach is suboptimal in cases where you're using reference counting anyway, or are willing to pay the cost of reference counting.
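
For what it's worth, Go happily compiles the stale-pointer version; the GC keeps it memory-safe, but the derived pointer silently stops tracking the slice. A small sketch of the hazard Rust's rule guards against:

    package main

    import "fmt"

    func main() {
        s := []int{1, 2, 3}
        first := &s[0] // derived pointer into the current backing array

        // append outgrows the capacity, allocates a new backing array,
        // and copies the elements; "first" still points at the old one.
        s = append(s, 4, 5, 6, 7)
        s[0] = 99

        fmt.Println(*first, s[0]) // 1 99 -- the pointer went stale
    }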


The immutable borrows also help prevent iterator invalidation in single threaded code when using an iterator.


Yes, exactly. One of my first big fights with the borrow checker was over not realizing I was invalidating an iterator by mutating the collection I was looping over. Then it clicked for me.
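
It's instructive to compare the same pattern in Go, which accepts it and quietly iterates a snapshot (range captures the slice header once), whereas Rust rejects the equivalent loop at compile time because the iterator borrows the collection:

    package main

    import "fmt"

    func main() {
        s := []int{1, 2, 3}
        for _, v := range s {
            s = append(s, v*10) // mutating the collection mid-loop
        }
        fmt.Println(s) // [1 2 3 10 20 30] -- only 3 iterations ran
    }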


This is exactly the same thing I can relate to coming from a Go background trying to use Rust.


Instead of a spectrum, I think of it more qualitatively. Rust says that lifetime is a fundamental, necessary part of an API's design. To call an API, you have to think about the ownership of what you send to it and what you receive.

C also makes you think about ownership in your API design, it's just the language doesn't give you any real tools to express or enforce it. C++ gives you a bunch of tools and options, but you have to hope that you and the API you want to use agreed on which subset of the tools to use.

Garbage collected languages specifically take memory management out of the API design by declaring that the runtime will take care of it for everyone.

If you want lifetime to be something an API can control, then I think Rust's approach makes sense even though it obviously adds complexity. If you don't, then, yes, removing it from the equation definitely lowers the API design burden.

In many ways it's analogous to having language support for strings. In C/C++, you gotta hope that the library you're using has the same approach to strings that you want (std::string? char*, wchar_t?, something else?). In newer languages, it's just a given. (For better or worse: because then you end up stuck with UTF-16 in some languages.)


I've always said if you can write Ruby/Scala, you can probably write simple Rust with very similar levels of productivity after you get past the initial learning curve. But apparently there's a sizable population that thinks Ruby/Scala is hard/confusing/a sizable cognitive load too.


> The jury is out if thats is a good thing or a bad thing.

My vote goes towards good thing.

One of the things I like most about Rust is that it forces you to think through your design and address complex tradeoffs, and often won't even compile until you do.

What it costs you in development time, you'll make back in reduced maintenance. The former is a visible cost, so you feel it more than the latter, which is invisible when things run well.

This is why Rust can feel less productive even though it makes you more productive in the long term, once you take into account the reduced maintenance.


> One of the things I like most about Rust is that it forces you to think through your design and address complex tradeoffs, and often won't even compile until you do.

I have heard this bullshit enough times. "Orange crab god doesn't like it so you are doing it wrong." No, you shouldn't have to care about memory management nuisances when they're not relevant; Rust, being a language suited to systems domains with correct static memory management (a good thing for the systems domain), makes those irrelevant details surface in the implementation. What you said isn't true for other domains.

This kind of shilling is what leads to rust evangelism strike force memes.


I don't think they are talking about memory management here, but simple API design.

In C++ you can write:

    template <typename Animal>
    void roar(Animal animal) { ... }
and call it like this:

    roar(airplane); // How do Airplanes roar?
You can try to fix that with an "Animal" concept, but it turns out that if someone adds a roar method to a Vehicle, you can still:

    roar(truck); // Trucks can roar... but roar expected Animal
In Rust, these things just cannot happen. If you write:

    fn roar(animal: impl Animal) { ... }
then trying to call it with anything that's not an animal will fail to compile.

That's a good thing, but maybe somebody did want to call roar with a truck, and now you have to invest time in thinking more precisely about which types roar should actually take, which type constraints make sense to define, how you categorize types, etc.

In Rust thinking about these things costs time. In C++ it also costs time, usually more than Rust, because you have to go way out of your way to prevent mistakes like the above. In Go you don't have to think about these issues at all because the language does not allow you to do most of this (except for interfaces, but that's another story).
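
On that interfaces aside: Go's interfaces are satisfied structurally, so the truck accident from the C++ example can happen there too. A sketch with made-up Lion/Truck types:

    package main

    import "fmt"

    // Animal is satisfied implicitly by anything with a Roar method.
    type Animal interface {
        Roar()
    }

    type Lion struct{}

    func (Lion) Roar() { fmt.Println("ROAR") }

    // Truck has a Roar method for unrelated reasons (engine noise),
    // so it satisfies Animal as well.
    type Truck struct{}

    func (Truck) Roar() { fmt.Println("vroom") }

    func MakeItRoar(a Animal) { a.Roar() }

    func main() {
        MakeItRoar(Lion{})
        MakeItRoar(Truck{}) // compiles fine: interfaces are structural
    }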


I should say first that I don't hate Rust. It is a well-designed language for what it aims at (statically guaranteed memory safety for systems programming). I have deep respect for Rust's creator and some members of the core team for their technical work.

What irritates me is claims like "if the borrow checker rejects your code, it was wrongly structured anyway" and "Rust's restrictions lead to well-structured code". The code is not wrong if the program logic requires self-referential structures, for example. Neither does it make sense to evangelize Rust using blanket statements like this. I get the impression that some people are trying to evangelize Rust as a language for everything.

The mental overhead of Rust (or maybe ATS, or Ada/SPARK, depending on requirements) makes sense for systems programming, where C/C++ would make things even more complicated and you can't compromise on performance. The applications domain is realistically better served by something like F#, OCaml, or Go once it gets generics.


> "If borrow checker rejects your code, it was wrongly structured anyway"

Nobody is claiming that? At least not in this thread.

I think your confusion stems from thinking that "Rust == borrow checker". The borrow checker is a tiny part of Rust.

> The mental overhead of rust

I find the mental overhead of Rust to be smaller than that of Python, Go, Java, Haskell, C, C++, Lisp, and pretty much any other language that I've ever used.

In Rust I don't have to think about multi-threading, data-races, memory management, resource acquisition and release... the compiler does the thinking for me. I also can refactor and reshuffle code at will, and trust the compiler to catch all errors.

Changing a type that pretty much every translation unit uses in a 500k LOC code base and adding multi-threading to it? No problem: just try it out, and if it compiles, it is correct.

I can't say the same of Go, Python, Java, etc. where threading errors are impossible to debug. And well, it wouldn't even be worth trying. It would be impossible to make sure that the code is correct, even after fuzzing for weeks.


right tool for the right job.

I wouldn't want to spend that time on a one-off script I only expect to run a few times.

I also wouldn't want to spend that time only to find that my assumptions were wrong or the requirements changed and I have to throw it all away for a new class puzzle.


> I wouldn't want to spend that time on a one-off script I only expect to run a few times.

You don't have to. No generics, `.clone()` and `.unwrap()` everywhere, and the Rust code goes brrrrr. :D


Or just...not deal with any of that, and literally bash my way through it. I hated the bash learning curve because it's so zany compared to "sane" langs, but once you hit a point in comfort, it's "muck around with syntax for a few minutes, applying to a test case of a few examples, dial it in, and let it rip on a few GBs of data and go work on other things". stringly typed pipes go brrrr XD. Composing unix utilities is horrific and beautiful all at once. Shellcheck helps a ton.

Recently started getting into Xonsh cause it still makes piping programs super easy, while still being fairly easy to provision.

Really wish there were a (stable) Go-based scripting language and shell. There's tengo and some other things, but nothing primetime-ready.



There are some Python-based ones.


I'm laughing at how true this is, but at the same time, if you're doing this, it's probably the one case go excels at.


I would do it with D, but sure: whatever rocks anyone's boat. All I'm saying is that Rust is not particularly worse for writing "quick & dirty" scripts that ignore good design and corner cases. And if any quick & dirty script becomes "an important part of our legacy production environment", the story of improving such a script is much better in Rust.


I can relate, maybe this is the paradox of choice?

That said I agree completely, `Option`/`Result` themselves (even if they are just intrinsic language elements, not user-implementable types) are valuable.


I can dramatically shorten development time with generics + Codable in Swift using a similar approach - compared to the same code in Objective-C it's like night and day.


> I spent more time playing type system golf trying to come up with the optimal type for whatever usecase. In Go I might just "do the work", but in Rust I've turned 5 minute functions into 30 minute api design thought exercises. The jury is out if that's a good thing or a bad thing.

I usually do stuff like this as a YAGNI exercise. Don't spend the extra 25 minutes thinking about it until you know you have 3 or so cases where you'd actually be sharing code. But at that point... the consistency you can get starts to become attractive, if you want the behavior to stay consistent in the future!


> I spent more time playing type system golf trying to come up with the optimal type

> The jury is out if that's a good thing or a bad thing

Well said. I have this same thought with TypeScript on a frequent basis.


I am just learning the ropes of TypeScript with React, and Christ, sometimes it is hard not to think it is a veritable clusterfuck.


One thing I realized is that properly using the type system is a skill you have to (or at least should) learn.

It includes:

- Having some understanding of how type systems tend to work and common patterns around them, preferably in an abstract, non-language-specific way. Do not confuse this with knowing formal CS type theory; it's not the same.

- Knowing how to effectively use the type system to encode important invariants.

But probably even more important:

- Not getting obsessed with trying to "perfectly"/"optimally" use the type system.

- Knowing that the moment your types and their relations get too complex, it might not be worth it at all, even if you encode useful invariants. Sometimes just having runtime checks is so much better from an API UX point of view. Getting complex type-system-based APIs right is hard, and from what I see in the open source community most people do not have the skill to do it right. (Though this doesn't mean not using the type system to encode constraints, just not overusing it!) A common case where this happens is when people try to use the type system to force a DSL into a language which isn't very compatible with that kind of DSL.

I often see people learning how to use advanced features of a type system, but not learning when _not_ to use them.

Disclaimer: I just realized that what I see as simple usage of the type system, the parent post might already see as advanced usage.


This is spot on.


> but in Rust I've turned 5 minute functions into 30 minute api design thought exercises.

This is very real. It's one reason I like Rust for serious projects that deserve this level of thinking time, but I'm wary of using it for simple "toys". Go was far easier to jump into but also far easier to push against its limits.

Some sort of compromise might seem better here, but I like using the best language for the job rather than some all-encompassing developer experience or language for all use cases.


I've also done this. Maybe a good rule would be to just write the function for the single type you need it for, and only introduce a generic type parameter when the need for it arises?
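
That refactor is mostly mechanical under the draft design; a sketch using the proposal's (type T) syntax as I read it (Numeric is a made-up constraint using the draft's type lists, and only the go2go translator understands this today):

    package main

    import "fmt"

    // Written first for the one concrete type that's needed...
    func SumInts(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    // ...and lifted to a type parameter only once a second element
    // type actually shows up.
    type Numeric interface {
        type int, int64, float64
    }

    func Sum(type T Numeric)(xs []T) T {
        var total T
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(SumInts([]int{1, 2, 3}))  // 6
        fmt.Println(Sum([]float64{1.5, 2.5})) // 4, with T inferred
    }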


Funny, I also have an `ApiResponse<T>` in my backend Rust code that serves exactly the same purpose (same name even) !


I think (disclaimer: not a language expert, don't know Rust, etc) that it's a tradeoff between verbose and repeated but readable and simple code, vs generic and reusable but more difficult code. In Go you can take most code at face value; with generics you have to take use cases and possible types into consideration.


Exact same boat as you. Been doing Rust for 2 years now and it is so true about the time spent thinking about how to be "more Rust-like".

I think you summarize it perfectly though, my favorite parts of Rust are Option/Result.


Substituting null with optional helps with getting a stack trace of what's going on. However, it doesn't get rid of programming errors.

What does get rid of errors is IDE support for nullable types, which warns the programmer that they have accessed a property without checking for nulls. Take Kotlin and IntelliJ IDEA, for instance.


An optional is the compiler telling me to check for nulls. My program won't compile if I don't check for nulls.

IDE support is half baked, someone can always turn it off.


This doesn't sound right. Could you paste an example of a misuse of optional (for example: an unchecked unwrap) causing a compilation failure instead of a panic?


In Swift `let x = things.first` won't compile but `let x = things.first ?? "empty"` and `let x = things.first!` will. Does that count?

(Probably mangled the Swift, doing this from memory)


My Swift isn't good, but since you didn't specify a type for x, I think it should compile anyway. But if things is an empty array of T, first may return nil and x will be of type T?. If things is an array of optionals, x will be of type T??


From someone who hasn't learned/considered Go for any real project so far, would there be any benefits in going Go instead of Rust? Or learning Go instead of Rust?

The way I look at it, and I could totally be wrong, is that if you go for a different language for your back-end than your front-end, it had better pay off big time. I find Go and TypeScript to be in a very similar ballpark in terms of productivity, but with TypeScript having a huge advantage since you can reuse all the learning/tooling from the front-end.

Rust on the other hand really brings something to the table with the memory safety and zero-cost abstractions.

TL;DR: It feels like either you eat the bullet and go Rust for memory safety, or TypeScript is a better choice. Of course I'm overly simplifying.

Thoughts?


Go is a significantly simpler language than Rust, and has a garbage collector. You can learn Go in a weekend, and experiment with it in a couple more. Rust on the other hand has a steep learning curve which will take at least a couple of months to get good at. So if I were you, I'd start by learning Go, seeing if it fit my needs, and then learning Rust later on if I had a use case that Go wasn't a good fit for.


I’m not familiar with Rust’s tooling, but I’m wondering whether the dev workflow involves explicit manual recompilation. E.g. in Haskell, modern workflows are about conversing with the compiler in real time, i.e. you make a type error and it is immediately pointed out, so explicit recompilation is not part of the loop anymore. I’d be mildly surprised if something like this does not exist for Rust.


Rust is a compiled language, so you will have to recompile. Incremental compilation helps to a certain degree. There is IDE tooling that helps with a lot of issues you may encounter, but in the end you still have to recompile to run your code. The same applies to Go as well. I have never found this to be a shortcoming of any language, but YMMV.


> Rust is a compiled language

So is Haskell. What I'm saying is that in 2020 the dev workflow is not: code, compile, run, repeat. Thanks to modern tooling it is now a near-instant feedback loop of: code, red wiggly line(s), fix, repeat. Ie. real-time conversation with the compiler.


rust-analyzer brings this to the table, but it's a pretty recent development. The RLS existed for a while before that, and gave a basic version of this story, but it was much slower.


Languages are tools, so it depends on your problem.

Go shines when you have a lot of parallelism and not a lot of concurrency (although the sync package is fantastic): web servers, data processing pipes, task queue consumers...

Also the language is simpler. Personally I think of it like C vs C++ or RISC vs CISC.

And last, but not least: Go has faster compilation times. Rust compilation is slow, and for tasks that require rapid iteration (like learning a new language), compiling fast is very valuable.


It depends on what you want to do. It is not about picking one or the other.

For the majority of software (I guess you are referring to web apps), simpler/GC/scripting languages like TypeScript or Go are better.

But, if you really need something low-level, then you need C, C++, Rust, Zig, etc. and there is no way around it.


Decent points, I like TS too -- but commenting to help with the idiom: it's "bite the bullet" (i.e., accept the pain and deliberately suffer through it now), not "eat the bullet" (commit suicide).


Thanks, english isn't my native language so I appreciate the help


One thing other comments didn't mention is that Go has an order of magnitude more jobs than Rust.


Go has had generics for years, but they live in a preprocessor, not in the main compilation.


Indeed. If Go had a built-in macro preprocessor, generics would be less necessary, but macro programming has plenty of problems of its own...


Same here. Go tends to make me "just do it". I personally think it's a big win, because I believe systems need to be composed and the components they are composed of as simple as possible. As a human though I tend to waste time coming up with fancy abstractions. Generics are exactly that. Fancy, nice to have, but I completely fail to see how they are necessary.


Maybe you think Go is simple. But I don't.

You need a for loop for what is done in other languages with one method call.

Removing an element from an array doesn't look like a method call either; it is some slice-appending trickery.
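
For reference, the usual order-preserving removal is a slice splice (a minimal sketch):

    package main

    import "fmt"

    func main() {
        s := []string{"a", "b", "c", "d"}
        i := 1
        // delete s[i], preserving order: copy the tail left by one
        s = append(s[:i], s[i+1:]...)
        fmt.Println(s) // [a c d]
    }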

Having null, and having to check errors every fifth line or so...

It lacks idioms like list comprehensions or map/filter. Reading code means working out details like whether a given for loop is really an index(), count(), map(), or filter().

Instead of some simple code like x = condition ? a : b you are writing imperative verbose code like: if condition { x = a } else { x = b }

Go may be simpler than enterprise-style Java. But that's not "simple".

http://archive.is/YPxJP


I mostly disagree with this post, but vouched for it because there is absolutely no reason for it to be dead.


Is there an old quote about `as simple as possible but no simpler`? Go rides the simple line really well occasionally, but generics is an area where it falls into the 'simpler' category.


I won't use Go until they add generics. Hopefully they'll add them someday. I can wait as long as it takes.


I feel like Go lost its chance by being way too slow to innovate. Of the people in my circle who I remember being extremely excited about Go when it first came out, no one really uses it anymore. In our circle it has become a "meh" language. Heck, Apple's Swift on the server is way more exciting than Go these days.


It isn't supposed to be an exciting language. It is specifically not fancy. Great for teams. Forced style consistency. Clear expectations. Difficult to mis-read compared to most others. Terse, almost rude.


There is a lot of beauty in Go's simplicity. Very straightforward to reason about what's going on.

I find it much easier on the eyes than java.


Great summary. If one has to maintain a program full of "exciting" code, or just come back to it after some years, one is extremely happy for code which is written in a clear, simple style.

Unfortunately, I have had to deal with too much "exciting" code. So I am very happy with Go so far and this latest concept of generics seems to add the least amount of complexity for the problems it solves.


Exciting is not a criterion you should use in choosing a programming language. Go is boring, and that's a good thing for those using it, because they can get on with their real work without worrying about the language changing.

I'm pleased to see they've made this proposal even more boring and removed contracts.


I wonder if you don't hear about it because it 'just works.' That's been my experience with Go. No frills, get stuff done type of language. No language is perfect, but Go doesn't have leftPad issues or a new language construct weekly like JS. It also doesn't have decades of baggage to lug around like Java. There just isn't much for people to talk about. It's a boring language that is, for the most part, easy to read and write.


I have not seen any data to indicate that server-side developers (outside of your circle) are more excited about Swift than Go.


Yeah, for me personally it's tricky. I like Swift as a language, but I'm pretty happy with Rust and C++ for cross-platform system/graphics work (there are a ton of great libraries), and there are a million options for network services, including Go, that feel more built up.

I feel like I'm at a point of language overload as it is these days; I don't have the mental space for yet another language.


Go is huge in the Kubernetes world... mainly because both are Google inventions. But almost all of the third-party integration I've seen in k8s has involved Go.


Kubernetes actually started out in Java, and was only rewritten into Go later (before 1.0 though). The choice of Go could also have been influenced by Docker being written in Go (which also meant that some Docker-related libraries were already available for Kubernetes to use).


The super early Java prototype of Kubernetes lived for about a week or two and never saw a public repository. It's kind of misleading to say it "started out in Java".


When I first encountered Go in 2015, it was exciting because it made concurrent programming accessible and easy compared to the competition. Fast forward to 2020, and many languages have either caught up or improved on the advances Go made (e.g. Kotlin, Rust, Elixir, Scala, etc.). That being said, I think you might be in a bubble if you don't know anyone using Go. It's everywhere nowadays, for better or worse.

I agree with you that I think they're slow to improve. I'm not sure if innovation is necessary though, since most people rely on the language being stable going forward.

Personally I struggle with deciding when to use Go for a new project. With so many languages now supporting different concurrency paradigms, and containers abstracting away prior deployment pain points, I'm not sure why I'd reach for Go over any of the others.


That's why there are other exciting languages for you to adapt to, like Kotlin, Swift, C++xx, Nim, or even wonderful functional languages. Hopefully no one forces you to use the boring Go, and if someone does, you can start a new startup and write in "the exciting" language exclusively. ;) Enjoy!



