What I'd like to see in Go 2.0 (sethvargo.com)
215 points by AtroxDev on Feb 4, 2022 | 219 comments



I'd add one more to this list: proper enum types.

We use enums heavily to force devs who use our code into good choices, but the options are currently:

1) Use int-type enums with iota: no human-readable error values, no compile-time guard against illegal enum values, no exhaustive `switch`, and no autogenerated ValidEnumValues for validation at runtime (we instead need to create ValidEnumValues and remember to update it every time a new enum value is added). See the sketch just after this list.

2) Use string-type enums: human-readable error values, but same problems with no compile-time guards, exhaustive switch, or validation at runtime.

3) Use struct-based enums per https://threedots.tech/post/safer-enums-in-go/ : human-readable error values and an okay compile-time check (callers can only obtain the all-default-values struct or the values we define), but it still doesn't have exhaustive switch, is a complex pattern so most people don't know to use it, and suffers from the mutable `var` issues the post author detailed.
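
To make option 1 concrete, here is a minimal sketch (Color and Paint are hypothetical names, not from our codebase) showing both failure modes:

    package main

    import "fmt"

    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    func Paint(c Color) {}

    func main() {
        Paint(42)        // compiles fine: no compile-time guard against illegal values
        fmt.Println(Red) // prints "0", not "Red": nothing human-readable without codegen
    }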

To my naive eye, it seems like a built-in, compile-time-checked enum type with a FromString() function would help the community tremendously.


Just adding to the discussion:

I find this comment from Griesemer [0] on one of the github issues for enums in Golang quite insightful:

>> [...] all the proposals on enums I've seen so far, including this one, mix way too many things together in my mind. [...] Instead, I suggest that we try to address these (the enum) properties individually. If we had a mechanism in the language for immutable values (a big "if"), and a mechanism to concisely define new values (more on that below), then an "enum" is simply a mechanism to lump together a list of values of a given type such that the compiler can do compile-time validation.

Like with generics, I like the team's approach of taking features seriously, not adding them just because other languages have them, but actually trying to figure out a way for them to work in Go, as cleanly as possible. I think computer science, as a field, benefits from this approach.

And I also dislike many things from Go, and I want "enums" badly too, but that's for another comment.

[0] https://github.com/golang/go/issues/28987#issuecomment-49679...


Personally I’ve run into more problems with strict enum types in distributed systems in a team setting, than I have with Go’s lack of them. In that setting, strict enums are usually over-strict and eventually you back yourself into a corner in terms of being able to roll out new enum values.

When there’s no clear winner in terms of tradeoffs, I prefer to leave it out of the language like Go has done.


> strict enums are usually over-strict and eventually you back yourself into a corner in terms of being able to roll out new enum values.

Yes, they are poison for the evolution of a public API.


Swift solved this problem with [non-exhaustive enums](https://github.com/apple/swift-evolution/blob/main/proposals...).


Is it solved? How do you decide which enums are final and which ones need to be non-exhaustive to allow evolving the code?

I think that Go's decision stems from protocol buffers, as they allow pushing new values through old binaries, which is a must once you grow enough.

https://developers.google.com/protocol-buffers/docs/proto3#e...


Well, one thing is that enums are non-frozen by default, so you have to actively tag it as `frozen` if you want to put yourself in a scenario where you're never allowed to add cases.

When clients use `switch` on a non-frozen enum from outside its defining module, Swift emits a warning if they don't have an `@unknown default:` case... so consumers of your enum will have to have default logic for handling new cases in order to avoid this warning. (Not for frozen enums though, for frozen ones it's enough to just cover the known cases in calling code, since the expectation will be that you can't update them.)

So basically, if you don't bother thinking much about the problem, you can just avoid adding `frozen` and you'll probably get reasonable behavior where you can add more cases later. Using `frozen` should only be the case if there is some sort of logical impossibility for there to be more cases. Something like how `Optional` has .some and .none, but it's pretty obvious that nobody's going to go add a new case to it (what would a new case even mean?) Same with Result, and probably a bunch of other types I can't think of at the moment.

Also worth noting that Swift treats intra-library code very differently than code that links from another library... if you use your own enums in your own module and don't make them public, it treats them as if they're always frozen... which is nice because it's your internal code and you can always update your own usages without having to worry about compatibility.


> Well, one thing is that enums are non-frozen by default, so you have to actively tag it as `frozen` if you want to put yourself in a scenario where you're never allowed to add cases.

Yeah, non-frozen by default makes a lot of sense. The only gotcha left is that you can't walk back `frozen` once you've added it, but that's ultimately behavior you want and something that must be able to bite you back.

> if you use your own enums in your own module and don't make them public, it treats them as if they're always frozen... which is nice because it's your internal code and you can always update your own usages without having to worry about compatibility.

That's neat


> which is nice because it's your internal code and you can always update your own usages without having to worry about compatibility.

Unless your services talk to each other or share some external data storage? Which is actually really common?


Services talking to each other, and storage persistence, are only tangentially related to internally defined enums. At some point you need to marshal data in and out of a serialization boundary, and it is at that point that you must handle cases you didn’t anticipate. But it’s just serialized data; it may be intended to represent the same value your enum describes, but it’s up to the deserialization code to do the right thing if it encounters a value it doesn’t recognize.

What I mean is, code that deals with serialization cannot by definition avoid the problem of “what if the data is invalid”. It’s not just enums but every aspect of your type system that must deal with this problem (typically by just throwing an error if the data is invalid, etc.)
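
In Go terms, that's the deserialization boundary doing the validation; a sketch with a hypothetical Status type (imports of encoding/json and fmt elided), throwing an error on unrecognized values as described above:

    type Status int

    const (
        StatusUnknown Status = iota
        StatusActive
        StatusInactive
    )

    func (s *Status) UnmarshalJSON(b []byte) error {
        var raw string
        if err := json.Unmarshal(b, &raw); err != nil {
            return err
        }
        switch raw {
        case "active":
            *s = StatusActive
        case "inactive":
            *s = StatusInactive
        default:
            return fmt.Errorf("unknown status %q", raw)
        }
        return nil
    }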


"Invalid" is different than "unknown but safely be round-trippable" though. We round-trip unknown-to-the-local-unmarshaller enums through our protobuf services or DB layers all the time.


If you want to carry marshaled data around without knowing what it is, carry the marshaled data around. If you want to know what that data is and deal with it, unmarshal it into a known Swift enum.

I honestly think we’re talking about different things… the guarantees a programming language gives you are independent of the guarantees a serialization format gives you.


That's a fair concern: if everything is strict, then there's no option to incrementally roll out a new value. Maybe a proper enum type could always have an `Unknown` value, which would allow for the leniency while still forcing the user to think about (and handle) it at compile time?
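
Even without language support, you can approximate that today by reserving the zero value for the unknown case (a sketch; Mode and its values are hypothetical):

    type Mode int

    const (
        ModeUnknown Mode = iota // the zero value doubles as "unrecognized"
        ModeRead
        ModeWrite
    )

    func ParseMode(s string) Mode {
        switch s {
        case "read":
            return ModeRead
        case "write":
            return ModeWrite
        }
        return ModeUnknown // callers must decide how to handle this
    }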


Rust supports #[non_exhaustive] attributes, forcing users to cover the generic/wildcard case even when they have already covered all existing ones. Although I'd rather do versioning and a breaking change if possible. Put it on the parsing/interop level rather than deep in the code at runtime, because it is very likely that your code is not correct without handling the extra case either way.

https://doc.rust-lang.org/reference/attributes/type_system.h...


Oooor the language can have proper type-safe enumerated types of some sort, and if you're in a domain where that's an issue you don't use them.


Go is focused on the distributed systems domain though. It's fine, even desirable, to have languages focused on particular domains, that make design decisions based on the constraints of the domain. In this domain, closed enums are footguns with costly consequences if you get it wrong.


As someone that doesn't work in that domain, could you give a short example?


You push Thrift clients out into the world expecting to a certain API field to be typed according to a 3-element enum. You add a 4th element to support a new feature in a new client. If you ever accidentally serve this 4th element to an old client, it will crash on deserialization. Bonus points if the client is old enough that it's not part of your testing regime anymore.


I would be okay with an open-by-default enum type. (I think...I'm not sure I've ever encountered a language with open-by-default enums.)

I'm still not sure if it's worth it; it's idiomatic Go to "fall through" to the default branch of if-chains, which is the same when checking quasi-enums. The symmetry is nice and makes it very easy to read. But I could be convinced.
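
The fall-through idiom in question looks roughly like this (all names hypothetical):

    switch c {
    case Red:
        handleRed()
    case Green:
        handleGreen()
    default:
        // any unknown or future value lands here, exactly like the final
        // else branch when checking a quasi-enum with if-chains
        handleUnknown()
    }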


What problems did an enum cause you, and how was the enum responsible for the problem?


This is a known issue with Rust, for example. I have an enum with variants A and B. Somebody writes an exhaustive switch (match statement) that handles A and B with no default case. I add a variant C. Their code breaks because they don’t handle C. Adding an enum variant was a breaking change.

In Rust, the answer is #[non_exhaustive], which forces consumers to always add a default case. It’s not a huge deal, just a known issue with a well-understood solution.


I guess I don't quite get this. What I'm seeing here is that a non-obvious breaking change is being turned into an obvious one.

If something is handling A and B, but you add C, the code probably needs to make sure it's handling C correctly.

I use Java in my day job and the behavior you've outlined is how I always code things, but it's manual and doesn't happen at compile time: I provide a default that throws a runtime exception.


If your package updates and users of your package update and their code ceases to compile, that seems... fine? It's the system working as intended. They can just downgrade back to the previous known good version. It would be much worse if you made a breaking change to code but consumers' code that used yours continued to compile but no longer functioned as expected


IME the problem is the default behavior. Rust, Java, et al have that same default behavior of defaulting to closed enums, and you have to opt in to open enums, whether that's adding a type attribute in Rust or implementing special code in Java to handle the case. This is a footgun for distributed systems. Even if you don't get it wrong, clients writing their own client-side software will get it wrong.

I'm not disparaging closed enums, they are very useful in certain contexts, but they make it really easy to do the wrong thing when reading data off the wire. Given Go is focused on this exact domain (distributed systems), I am glad the language doesn't have them.


I don't think you would need a 2.0 (backward-incompatible language change) for any of this.


Exhaustive switch seems likely to be backward incompatible if done well.

What you want here is something akin to Rust's match behaviour on enumerated types. If your alternatives aren't exhaustive, it doesn't compile. Now, Rust is doing that because match is an expression. Your day-matching expression needs a value if this is a Thursday, so not handling Thursday in your day-matching expression is nonsense. Even though the value of a match often isn't used and might be the empty tuple, it necessarily must *have* a value.

It seems to me that today a Go switch statement operating on days of the week can omit Thursday and compile just fine. Exhaustive switch means that's a compile error. If your "exhaustive switch" is optional or just emits a warning, it won't catch many of the problems for which exhaustive switch is the appropriate antidote.
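
A sketch of today's behavior with a hypothetical Day type; the switch below forgets Thursday and the compiler is perfectly happy, which is exactly what an exhaustive switch would reject:

    type Day int

    const (
        Monday Day = iota
        Tuesday
        Wednesday
        Thursday
        Friday
    )

    func abbrev(d Day) string {
        switch d {
        case Monday:
            return "Mon"
        case Tuesday:
            return "Tue"
        case Wednesday:
            return "Wed"
        // Thursday omitted: no compile error today
        case Friday:
            return "Fri"
        }
        return "???"
    }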


> Exhaustive switch seems

This would only make sense on an enum type, which would be a completely new thing, so it can be introduced without breaking backward compatibility. Constants and switch on non-enum values would stay, because they are useful independent of enum types.


Also makes sense on numbers and other patterns where you can logically enforce exhaustiveness.


Yep, agreed. Maybe a new hypothetical `exhaustiveswitch` keyword could be added that would be backwards compatible, but it doesn't seem very Go-like to have such similar functionality as separate keywords.


I believe new keywords are not backwards-compatible as they will invalidate any code using that as an identifier.


Sum types overlap a lot with what interfaces offer. So enhancing the language with proper sum types would benefit from enhancing interfaces and switch statements.

However, zero values throw a wrench into this. The zero value of an interface is nil, so enhancing interfaces would require you to address what happens with an uninitialized variable. One of the current proposals suggests that nil continue as the zero value.

They could introduce a totally different type, like a sealed interface, which doesn't require a zero value. But that distinguishes between different kinds of interfaces, and I'm not sure how that'll be received.
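
For reference, the closest thing in today's Go is an interface "sealed" by an unexported method, so only the defining package can implement it (a sketch; Shape, Circle, and Square are hypothetical, and the math import is elided):

    // Only types in this package can satisfy Shape.
    type Shape interface {
        isShape()
    }

    type Circle struct{ Radius float64 }
    type Square struct{ Side float64 }

    func (Circle) isShape() {}
    func (Square) isShape() {}

    func Area(s Shape) float64 {
        switch v := s.(type) {
        case Circle:
            return math.Pi * v.Radius * v.Radius
        case Square:
            return v.Side * v.Side
        }
        return 0 // s can still be nil: the zero-value problem mentioned above
    }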


> We use enums heavily to force devs who use our code into good choices

Beware, tho, that with many languages today you’re not really doing that even when they advertise enums e.g. in both C# and C++, enums are not type-safe (not even `enum class`). Iota is, at least, a fair acknowledgement of that.

> with a FromString() function

That seems like a step way too far, is there any such "default method" today? And I don't think Go has any sort of return-type overloading, does it?


Really not sure what you're talking about. If you use enums in a real language like Swift, Kotlin, Rust, etc., you can only construct the values of the enum. There are no ways to get around it.


> Really not sure what you're talking about.

Have you considered reading?


Have you considered elaborating? I am likewise unsure how a language could get enums wrong, even after reading your post.


> I am likewise unsure how a language could get enums wrong, even after reading your post.

By allowing any value of the underlying type, even if they were not values of the enum. The language most famous for this is obviously C:

    #include <stdio.h>

    enum foo { A, B, C };

    int main() {
        enum foo x = A;
        printf("%d\n", x);
        enum foo y = 42;
        printf("%d\n", y);
    }
This will print `0` and `42` (https://godbolt.org/z/Yq8qq5bzW), because C considers enums to be integral types and will thus implicitly convert to or from them. And as you can see from the link, there is no warning under `-Wall`. Clang will warn with `-Wassign-enum` (which is included in `-Weverything`), I don't think there's any way to make gcc complain about this.

Now you might argue that this is C so obviously it fucks this up, however that C does this leads to further issues:

- For compatibility with C, C# enums have the same property. I don't know about you, but that surprised me a great deal since Java has type-safe enums (even if they're not sum types).

- Even more so, C++ added `enum class` in C++11, and obviously has inherited C's enums, but enum class still is not type safe, afaik the differences are that `enum class` is scoped (so in the snippet above you'd need `foo::A`) and it's not implicitly convertible with the underlying type. But it's still explicitly convertible (via a cast or list initialisation), meaning you can't assume a caller will remain within the enum.


This is only a problem in languages that don't support checked type safe enums / discriminated unions, but this whole thread was started to request that feature. I don't get the point of your comments unless it's to state "be careful or they'll do it wrong"... which is obvious and already acknowledged several times in this thread.


Friend - this is a learning opportunity. Check the upvotes.


> Check the upvotes.

Check the upvotes on my comment.


Every time I see discussion about Go and enums, there are people referencing these mythical C-like enums that have never existed. It's some sort of constructed memory. And I'm sure there are languages that do enums "properly", but it's always C/C++ that gets referenced.


Ada does them well. They have attributes that allow for converting to and from integers and strings [1], and case statements have to be exhaustive or use a default "others" clause.

[1] https://en.wikibooks.org/wiki/Ada_Programming/Types/Enumerat...


The point here has little to do with Go, which you can see by the quote it replies to having nothing to do with Go:

> We use enums heavily to force devs who use our code into good choices

the note is that this is very much language-dependent and there are languages which get it very, very wrong.

> And I'm sure there are languages that do enums "properly", but it's always C/C++ that is referenced.

That makes no sense, the original comment obviously assumes a language which does it "properly".


Sorry, my comment wasn't trying to address (let alone bash) the original comment in any way here, just a tangent based on your note about C# enums.


The type sets proposal for Go has already been accepted as part of the generics proposal [0]:

    type SignedInteger interface {
        ~int | ~int8 | ~int16 | ~int32 | ~int64
    }
Interfaces that contain type sets are only allowed to be used in generic constraints. However, a future extension might permit the use of type sets in regular interface types:

> We have proposed that constraints can embed some additional elements. With this proposal, any interface type that embeds anything other than an interface type can only be used as a constraint or as an embedded element in another constraint. A natural next step would be to permit using interface types that embed any type, or that embed these new elements, as an ordinary type, not just as a constraint.

> We are not proposing that today. But the rules for type sets and methods set above describe how they would behave. Any type that is an element of the type set could be assigned to such an interface type. A value of such an interface type would permit calling any member of the corresponding method set.

> This would permit a version of what other languages call sum types or union types. It would be a Go interface type to which only specific types could be assigned. Such an interface type could still take the value nil, of course, so it would not be quite the same as a typical sum type.

> In any case, this is something to consider in a future proposal, not this one.

This along with exhaustive type switches would bring Go something close to the sum types of Rust and Swift.
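
For now, an interface with a type set like SignedInteger above can only appear as a generic constraint, e.g.:

    func Abs[T SignedInteger](v T) T {
        if v < 0 {
            return -v
        }
        return v
    }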

[0]: https://github.com/golang/go/issues/45346


Another possibility is to use boolean flags. Of course, the compiler then will not enforce that only one of the flags is set. On the other hand, on a few occasions I have observed how an initial design with disjoint cases evolved into cases that can be set at the same time, modeled with flags.


Or even better, proper sum types. They're a superset of enums anyway.

https://github.com/BurntSushi/go-sumtype is great, but a bit unwieldy. Language support would be much better.


Proper compile-time sum types would be great. I find myself reimplementing sum types at runtime far too often, especially when it comes to parsing JSON. My go-to is a function with a signature along the lines of the reflect package's dynamic select:

    func UnmarshalOneOf(data []byte, args []interface{}) (index int, err error)

Which I use like this:

    variants := []interface{}{&T1{}, &T2{}}
    i, err := UnmarshalOneOf(data, variants)
    // …
    return variants[i]
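
The parent didn't show their implementation, but one plausible sketch of that signature (imports of bytes, encoding/json, and errors elided; DisallowUnknownFields keeps the first loosely-matching struct from winning) is:

    func UnmarshalOneOf(data []byte, args []interface{}) (int, error) {
        for i, v := range args {
            dec := json.NewDecoder(bytes.NewReader(data))
            dec.DisallowUnknownFields()
            if err := dec.Decode(v); err == nil {
                return i, nil
            }
        }
        return -1, errors.New("no variant matched")
    }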


> no exhaustive `switch`

This is what linters will do for you by default.

> we instead need to create ValidEnumValues and remember to update it every time a new enum value is added

Code generators are first-class citizens in Go, and writing an icky but reliable test won't be too hard either.


> Use int-type enums with iota: […] no compile-time guard against illegal enum values

Create a new int type and use that for your enums. While you can still create an illegal enum value, you basically have to be looking for trouble. It's not going to happen accidentally. It's even harder if it's an unexported type in a different package.

See:

https://github.com/donatj/sqlread/blob/91b4f07370d12d697d18a...


This compiles:

    type Test int

    const (
        T1 Test = 0
        T2 Test = 1
    )

    func TestSomething(t Test) {}

    ...

    TestSomething(17)
So this isn't a good suggestion, because you can easily pass any int value and will not get a compiler error. You may as well be using strings at that point.


I generally tend to use enumer[0] to generate some boilerplate code that can help with addressing this, e.g. the below would compile, but would error at runtime. There are probably linters out there that could catch this. With Go, linters are generally pretty good at catching this kind of stuff.

    package main
    
    import "fmt"
    
    type Test int
    
    const (
     T1 Test = 0
     T2 Test = 1
    )
    
    func main() {
     t, err := TestString("T1")
     if err != nil {
      panic(err)
     }
    
     TestSomething(t)
    }
    
    func TestSomething(t Test) {
     fmt.Println(t.String())
    }
Having said that, it seems weird to have to mimic enums, as opposed to actually having them. Doesn't feel like it would add much complexity, if at all.

[0] https://github.com/dmarkham/enumer


> as opposed to actually having it

Like C or C++ do? :)


Right. However, C and C++ are far more complex, with no memory safety. Everything has its ups and downs.


Your point is valid, but the Go philosophy depends on you following conventions to have reliable code. This is true all over the place, e.g. you can easily ignore errors.

Other languages take a stricter approach, and maybe that's better. Not defending (although I like Go), but it's really more a language philosophy than a singular defect.

As the other commenter noted, this should fail code review and you should be using the provided constants, and it should be clear to you. And if you disagree (which again is totally valid), you should use a stricter language—there's plenty out there!


Fwiw, a literal 17 in a function call, let alone anywhere outside an equation or constant definition, is a code smell that should never make it past review.

I see your point however.


Depending on code review instead of a static type system does not scale. Look at all of the memory safety security vulnerabilities that are solved by "simply making sure to manage memory correctly."

Also:

  variableDefinedInAFarAwayModule := 17

  ...

  TestSomething(variableDefinedInAFarAwayModule)
  
It's not always as clear as a constant value being passed to an incorrect type.


I don't understand your example here, that's not going to compile.

variableDefinedInAFarAwayModule is definitionally type int and will not be implicitly converted. It is also unexported, so you couldn't be using it from a faraway module?

Your 17 in the previous example has its type determined at compile time, which is why it can be a problem.

see: https://go.dev/play/p/jEdAhKDeLy6


Ah thank you. That's slightly better then.


This will not compile because the type of the variable is int not Test.


I mean, that's a pretty common usage of enums, isn't it?

    TestSomething(T1 & T2)


And this is the big, probably irreconcilable, difference in culture between the sum-typers and the compiler-assisted-named-valuers....


As amw-zero pointed out though, a user could accidentally create an invalid value of the int type, and it would only fail if you have runtime checking (which requires you to build it, either via a custom enum constructor that returns `error` or an `IsValid` function, both of which then require you to maintain the ValidEnums list).

Int types also don't give you any guards when deserializing.


One caveat here is serialization. Writing your (or another package's) enum to a database will get you in trouble if you ever want to add another value in the middle. Sure, you can be careful and should document this, but who knows.
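
One common mitigation, by convention only, is to pin explicit values instead of leaning on iota ordering, so adding a name never renumbers values that have already been persisted (Plan is a hypothetical example):

    type Plan int

    const (
        PlanFree Plan = 1
        PlanPro  Plan = 2
        // Append new values with fresh numbers; never renumber or
        // reuse a number once it has been written to storage.
        PlanTeam Plan = 3
    )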


Not only that, but the person sending you the serialized object might be looking for trouble. Sending you an enum value that is outside the legal range might help an attacker get into your system.


From Rob Pike on reddit regarding this post[0]:

The and and or functions in the template packages do short-circuit, so he's got one thing already. It was a relatively recent change, but it's there.

Non-deterministic select is a critical detail of its design. If you depend on a deterministic order of completion of tasks, you're going to have problems. Now there are cases where determinism might be what you want, but they are peculiar. And given Go's general approach to doing things only one way, you get non-determinism.

A shorthand syntax for trial communication existed. We took it out long ago. Again, you only need one way to do things, and again, it's a rare thing to need. Not worth special syntax.

Some of the other things mentioned may be worth thinking about, and some of them have already (a logging interface for instance), and some we just got wrong (range). But overall this seems like a list of things driven by a particular way of working that is not universal and discounts the cost of creating consensus around the right solutions to some of these problems.

Which is not to discount the author's concerns. This is a thoughtful post.

0: https://old.reddit.com/r/golang/comments/s58ico/what_id_like...


> Again, you only need one way to do things, and again, it's a rare thing to need. Not worth special syntax.

This really is one of the parts I like the most about Go. It really makes so many things simpler: discussing code, tutorials, and writing it.

Every time I'm trying to do something in JS I have to figure out why every guide has a different way of achieving the same thing and what are the implementation differences.


It'd be nice if he had at least hinted towards what 'the way' is for this problem.


It’s in the article he’s responding to.


> but (deterministic-select cases) they are peculiar.

It looks like, for most select blocks in Go code, it doesn't matter whether they are non-deterministic or deterministic.

But if the default were deterministic, user code could simulate non-deterministic behavior without much performance loss. Not vice versa (the current design).


Er, when it comes to concurrency, non-determinism is usually cheaper than determinism. As soon as you care about ordering, you almost always have to synchronize, and that has a cost.

Austin Clements (of the Go runtime team) wrote a paper that explores this in detail [1]. That was before joining the Go team, but the concepts are universal.

[1] https://people.csail.mit.edu/nickolai/papers/clements-sc.pdf


> Not vice versa (the current design).

    select {
    case <-chan1_whichIWantToCheckFirst:
    default:
    }

    select {
    case <-chan2_whichItreatTheSameAsChan3:
    case chan3_whichItreatTheSameAsChan2 <- 0xFF:
    }


Yes, as I have mentioned, there is a performance loss compared to

    select {
    case <-chan2_whichItreatTheSameAsChan3: // a higher priority
    case chan3_whichItreatTheSameAsChan2 <- 0xFF:
    }


The real use cases where I need deterministic select are so few that a small performance loss doesn't matter to me.


Sometimes it is not related to performance loss; it is related to implementation cleanness and complexity.


A separate `select` with empty `default` is about as simple and clean as it gets. It is easy to read, easy to reason about, and, most importantly, conveys the intention of the code perfectly.


1) Is there really a performance loss compared to if select was deterministic?

2) What in the world do you need such code for?


1) surely.

2) just read:

     https://groups.google.com/g/golang-nuts/c/SXsgdpRK-mE/m/CT7UjJ3aBAAJ

     https://groups.google.com/g/golang-nuts/c/ZrVIhHCrR9o

     https://groups.google.com/g/golang-nuts/c/lEKehHH7kZY/m/SRmCtXDZAAAJ


> user code could simulate non-deterministic

I'm curious how?

> Not vice versa

There are pretty common patterns for this. At least for real world cases where you might have one special channel that you always want to check. Ugly, but in relation to the previous question, I don't see how one is doable and one isn't?


> > user code could simulate non-deterministic

> I'm curious how?

    if rand.Intn(2) == 0 {
        select {
        case <-chan2_whichItreatTheSameAsChan3: // a higher priority
        case chan3_whichItreatTheSameAsChan2 <- 0xFF:
        }
    } else {
        select {
        case chan3_whichItreatTheSameAsChan2 <- 0xFF: // a higher priority
        case <-chan2_whichItreatTheSameAsChan3:
        }
    }
Yes, it increases verbosity compared to the other way, but with no performance loss.


How in the world is generating a random number and branching and doubling the number of instructions "no performance loss"?


Now the non-deterministic implementation does more work than a deterministic implementation. It generates a random number and sorts the branches. The latter (sorts the branches) is not needed in the above pseudo code.

Doubling the number of instructions has no impact on run-time performance.

And there are more optimization opportunities in implementing a deterministic design. Now, the non-deterministic implementation needs to lock all involved channels before subsequent handling, a deterministic implementation might not need to.


Any time there is more than one channel being selected on, it needs to cover them all equally.


Equality is meaningful only if at least two case operations are always non-blocking. This is rare in practice.

In fact, in practice, sometimes, I do hope one specified case has a higher priority than others if they are all non-blocking.


As far as priority goes, most interesting cases will have priority based on the data in the read, except for this specific case of a done channel and a data channel. I used that pattern at first but have been moving away from it. To be sure, I am mostly writing long-lived processes with fixed pools of worker goroutines that either never exit or exit based on WaitGroups determining the work is all done.


Yes, it (the lack of deterministic-select) is only annoying for several special cases. For most cases, it doesn't matter whether or not the default behavior is deterministic.


Wouldn't it be the case if one worker was pulling work asynchronously delivered from two places? I only use one goroutine / one channel myself, but the name select itself very strongly implies it is a yield-type operation where any of a number of async actions can wake it for their callback to run. Albeit without a callback syntax, it is async and had better be fair.


Everyone has their own gripes. Modules are what cause me the most pain in Go - especially where they're in github and I need to fork them and now change all the code that references them. I don't know if the problems are even tractable because the way it all works is so incredibly complicated and any change would break a lot.

I would like to remove all the "magic" that's built in for specific SCMs/repository hosting services and have something that operates in a simple and predictable manner like C includes and include paths (although obviously I don't like preprocessing, so not that).

As for the language, I like the range reference idea, but my own minor pet peeve is an issue with pre-assignment in if-statements etc., which makes a neat feature almost useless:

  // This is nice because err only exists within the if, so we don't have to
  // reuse a variable or invent new names, both of which are untidy and have
  // potential to cause errors (esp. when copy-pasting):
  if err := someoperation(); err != nil {
    // Handle the error
  }


  // This however won't work:
  func doThing() string {
      result := "OK"

      // result does not need to be created but err does, so we cannot do this:
      // the := would declare a new result shadowing the outer one.
      if result, err := somethingelse(); err != nil {
          return "ERROR"
      }

      return result
  }
I don't have any good ideas about how to change the syntax unfortunately.


You should be able to use "replace" to use your forked module instead of the original and you don't have to change anything.


Unfortunately, replace is not supported well with modules, and they are hell-bent on not supporting it well.


I've been doing forks of modules and using replace to use them in my code base extensively. I had 0 problems, it works marvelously.

If you're doing some kind of global find/replace on fork, then something's definitely not right.


Yeah it works fine. It also allows local co-development of the module and its user without waiting for new versions to be noticed


yes that is one approach. it shouldn't be necessary.



Yes. Now try to `go install` your application with the replace directive: it will silently build the code with the original upstream version instead.


Easily fixed using lexical scopes:

    func doThing() string {
        result := "OK"

        {
            var err error
            if result, err = somethingElse(); err != nil {
                return "ERROR"
            }
        }

        return result
    }
`err` is introduced in the lexical scope, `result` isn't so it still refers to the string from the surrounding scope. `err` does not pollute the surrounding scope.

You can also try the complete version here: https://go.dev/play/p/kDEB11YdvSs


In my experience this isn’t idiomatic Go and would definitely turn heads in a code review. But in such a small function, it’s fine to just not worry about the lifetime of your variables (and in bigger functions you can often decompose into smaller functions).


I agree. The idiomatic thing here would be to simply declare `var err error` in the surrounding scope, especially since this example is a small function where this wouldn't matter.

But "idiomatic" always takes a backseat compared to technical necessity. If there is a requirement, for some reason, to limit the scope of `err` and still allow `result` to be changed in the if's pre-assignment, then this is the way to do it.


The fix is pretty simple, just declare err ahead of time:

    func doThing() string {    
        result := "OK"
        var err error
        if result, err = somethingelse(); err != nil {
            return "ERROR"  
        }

        return result
    }


Part of the reason OP liked the `if assignment` was to avoid polluting the higher-level scope with a variable that is only needed during the if statement.

Your solution fixes the error, but at the cost of losing the upside OP saw.


Is there no equivalent to a lexical scope let definition?

    let {
        var err := error
        scoped code
    }


Of course there is:

    {
        var err error
        scoped code
    }


yes, you can wrap code in {} to scope it.


The issue is now you have a potential nil value hanging around in the code with no real reason for its existence. Three refactors and some moving around later and you manage to hit a runtime panic in production.


Yeah, it sucks. I think IDEs should help here: one click to uplift `var err` to the parent scope.


This is a great post, and I agree with much of what he said (range shouldn't copy - I would love a range that iterates by const-reference by default, to lift a phrase from C++).

Deterministic select I hard disagree with. The code in the blog post is race-y, and needs to be fixed, not select. If anything, making select deterministic will introduce _more_ subtle bugs when developers rely on that behavior only to find out in the real world that things aren't necessarily as quick as they are in development.


How would you fix that code?


I would write the author's example as follows:

    for ctx.Err() == nil {
        select {
        case <-ctx.Done():
            return nil
        case thing := <-thingCh:
            // Process thing...
        case <-time.After(5*time.Second):
            return errors.New("timeout")
        }
    }
The extra check for ctx.Err before the select statement easily resolves the author's issue.


It really doesn't though. It handles the case where the context might have expired or be cancelled, but there's still a race when entering the select between the ctx.Done() and reading from thingCh. You may end up processing one additional unit of work. In situations where the exit condition is channel-based, this won't work.

Additionally, this would only work if you had one predominant condition and that condition was context-based. If you have multiple ordered conditions upon which you want to exit, I can't think of how you'd express that as a range.


I'm not sure what you mean. There's always going to be a race condition between ctx.Done and thingCh, just depending on whether there's data available. This race condition is unavoidable.

I guess you're thinking of "what if thingCh and ctx.Done activate simultaneously?"

There's no real difference between happening simultaneously and happening one after another.

As for your other point, you can just write code like

    select {
    case x := <-conditionA:
        return x
    default:
    }
    select {
    case x := <-conditionB:
        return x
    default:
    }
    ...
But I've personally never needed code like this.


Also, this isn't semantically correct. In order to ensure that `conditionaA` is _always_ preferred over `conditionB`, you must also check if `conditionA` has received a value inside of `conditionB`:

    select {
    case a := <-conditionA:
        return a
    default:
    }
    select {
    case b := <-conditionB:
        // re-check conditionA so it is still preferred
        select {
        case a := <-conditionA:
            return a
        default:
        }
        return b
    default:
    }


It would be easier to discuss with a more concrete example. If I ever had to write code like the above I would reconsider the design and try to come up with something simpler.


I was also curious so I picked a likely-looking project on his github and indeed found an attempt to handle a channel "deterministically" at https://github.com/sethvargo/go-retry/blob/main/retry.go#L51

Honestly the whole first select seems redundant; any code that relies on this is broken, as there are no other synchronization points to hang onto. You simply can't pretend the clock on the wall has anything to do with the values transiting the program unless you introduce an actual synchronization point.

But OK, maybe you do have some strange performance case where this matters? In that case the whole thing could be more succinctly solved by looping on `for ctx.Err() == nil` instead of infinitely. Exactly as suggested at the start of the thread. (This would also likely be faster unless the context is under massive contention.)

It also leaks the timer until it fires if the context cancels, which seems like it would be more of a practical performance problem than any overhead to the additional select.


Right, which is noted in the post. That verbosity is, well, verbose. I generally need this in 20% of things I write.


In this use case, is it bad if the Done signal arrives the instant after you check it?


The context was introduced by the commenter. The original post does not use contexts. In general, there's a pretty common set of patterns in which multiple goroutines are writing data to different channels, and you need to ensure the data from those channels are processed with some level of priority.


Two channels is a poor way to handle priority. If data comes in on the low priority channel just before the high priority channel, you would still be blocked waiting for the low priority task to complete.

In a case like this, maybe just run two different consumer routines for the two channels, then neither would be blocked waiting on the other.


Context cancellation propagates (potentially) asynchronously anyway, so if you're relying on something canceling your context and that immediately appearing you already have a bug.

I've written `select { ..., default: }` enough times I also wish it had shorthand syntax - sometimes it's even clearer to range one "primary" channel and lead the code block with that check - but I cannot think of a case where relying on a deterministic select would not have led to a bug.


I think you could do the following, which I'd argue is more idiomatic:

    for {
      if _, ok := <- doneCh; ok {
        break
      }
      select {
      case thing := <-thingCh:
        // ... long-running operation
      case <-time.After(5*time.Second):
        return fmt.Errorf("timeout")
      }
    }
Which goes along w/ https://github.com/golang/go/wiki/CodeReviewComments#indent-... of "Indent error flow".

edit: nvm, your break would be blocked until one of the other channels produced a value. you'd need to check for the doneCh redundantly again in the select.


Proper sum types, if only for proper error handling. The if err != nil dance is extremely verbose and error prone with consecutive checks.


That could also alleviate the no-exhaustive-enum-checking problem mentioned in another top-level comment.


Mine would be a much larger collections library. Having come to go from Java and finding that the standard go library has no trees, no stack, no skip list, etc. was quite a surprise. Possibly the advent of generics will stimulate development of a more robust standard collections library.


It does have a doubly-linked list, which can easily serve as a stack, though. And it has a heap, which is a poor man's replacement for some uses of a tree. I've found that I can get quite a bit farther with the built-in Go slices, maps, and lists than I thought.
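
For example, a plain slice already makes a serviceable stack:

    stack := []int{}
    stack = append(stack, 1, 2, 3)  // push
    top := stack[len(stack)-1]      // peek
    stack = stack[:len(stack)-1]    // pop
    fmt.Println(top)                // 3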

But yeah. Now that generics are in, I do hope they add a handful of common collections.

[0] https://pkg.go.dev/container/list@go1.17.6

[1] https://pkg.go.dev/container/heap@go1.17.6

[2] https://pkg.go.dev/container/ring@go1.17.6


> Alternatively, Go 2.0 could implement "frozen" global variables

A more general change would be to implement the "var" and "val" distinction that exists in some languages.

    const x = 1 // x is a compile-time alias for the untyped abstract number 1
    var x = 1   // define x at runtime to be (int)1, x is mutable
    val x = 1   // define x at runtime to be (int)1, x is immutable
Then the globals can be defined with "val".


No need to invent a new keyword, it is ok to just use "const": https://github.com/go101/go101/wiki/An-immutable-value-propo...


No, they are very different conceptually.

- A const is an abstracted value.

- A variable is an allocated piece of memory.


Isn't that an implementation choice?

Like in C++, a const is absolutely allocated since you can get a pointer to one. And then you can do horrible stuff like const_cast that pointer and mutate the value, and the possibility of that occurring prevents the compiler from doing certain const-related optimizations.


> Like in C++, a const is absolutely allocated since you can get a pointer to one.

If I understand you correctly, you claim you can get a pointer to a Go const. This is not the case. For example, the following code will not compile:

    const a int = 1
    var b *int = &a
    // ./prog.go:5:15: cannot take the address of a

See https://go.dev/play/p/QPxP-tF6qIs for a live example.


I'm not an active golang user, so it wasn't a comment on that, more just on the GP's assertion that a const had to be an abstracted value. It sounds like that is indeed the case in Go (an implementation choice), but more broadly, I wouldn't expect "const" to mean anything more to most programmers than "give me an error if I or anyone else try to modify this thing."

And in C++, it certainly isn't. Even a constexpr in C++ is a thing you can get a pointer to; C++'s only guarantee with a constexpr is that it has to be possible to evaluate it at compile time.


Ah gotcha. Makes sense. The way I typically think of a Go const is more akin to a typed macro that is expanded in place, rather than a first-class value reference. Notably, Go constants cannot be structs or arrays, so they have much less to do with mutability than in, say, C++.


Yes, it is an implementation choice. I feel it's a choice that makes sense, too. But regardless, it's how things work right now in Go.


Concepts are defined as needed.


That wasn't exactly what I meant. What I meant was

- as of right now they are different, and clearly distinct, and it's actually important to unlearn thinking of a const as a var, because they don't do the same thing in practical terms

- that proposal would muddy the distinction.


Yes, they are different; const values aren't allocated in memory NOW. But who cares? Most gophers just think of const values as immutable values.

I mean, we could let some const values be allocated in memory.


Thankfully the people who make Go do care about conceptual muddles.


Could be interesting, however I'd hope for something more visually distinctive than val/var, as it took about 2-3 reads for me to notice what was even the diff between the var and val lines above.


Fair point. "value" and "var" then, perhaps?


perhaps, the real details would get hammered out in a proposal.


How about `mut var`?

Edit: this likely wouldn’t fly as it would be completely backwards incompatible


the val and const cases should hardly be different if the compiler has constant folding, except maybe for the typing.


From the language reference

> Constant expressions may contain only constant operands and are evaluated at compile time.

and a "const" can only be defined with a constant expression.

So the difference would be that a "val" can be assigned a value that is evaluated at runtime.


I don’t understand why Scala chose “var” for mutable variables. A variable is not defined by being mutable—it is defined by being variable, i.e. not a constant. And it is immutable in math (where we don’t have to care about performance). So “val” is also a “var”, conceptually.


I think it was Fortran which introduced the equivalence between programming variables (that are a mutable piece of memory) and mathematical variables (which describe a relationship). But they aren't really the same. And "variable" has become too fixed in computer science usage to be repaired back to its earlier mathematical meaning.


if it is not constant, it must be variable i.e. changing. Mutation = change.


def plusOne(x: Int): Int = { val y = x + 1; return y }

In the line of code above, `y` is an immutable variable. It does not mutate, yet it "varies" as different values of x come in.


No. A variable in mathematics does not change (mutate). And yet it is not a constant.

You’re thinking inside the imperative programming box.


I would like to add proper JSON5 support.

The template problem is a real problem. I used it once and it was a pain, and I moved away instantly. I would vote for Go inside Go as a template system, so you can effectively write Go code.

With the help of yaegi[1], a Go templating engine can be built, e.g. here[2].

[1]: https://github.com/traefik/yaegi

[2]: https://github.com/Eun/yaegi-template


You don't need official Go support for JSON5. Same for templating. The templating library in the standard library isn't anything special, it's just in the standard library. If you don't like it, go get one of the dozens of others on offer.

I think there's a number of language communities you can "grow up" in that teach you that things in the standard library are faster than anything else and have had more attention paid to them than anything else, like Python or Perl, because those languages are slow, and things going into the standard library have generally been converted to C. I think that because I look in my own brain and find that idea knocking about even though nobody ever said it to me directly. But that's not true in the compiled languages, of which Go is one. The vast majority of the Go standard library is written in Go. The vast majority of that vast majority isn't even written in complicated Go, it's just in Go that any Go programmer who has run through a good tutorial can read, it's not like a lot of standard libraries that are fast because they are written impenetrably. (Although the standard library may have subtle performance optimizations because it chooses this way of writing it in Go rather than that way, it's still almost entirely straightforward, comprehensible code.)

If you want JSON5 or a different template library, go get or write one. The Go encoding/json or text/template don't use any special access or have any magical compiler callouts you can't access in a library or anything else; if you grab JSON5 library (if such a thing exists), you've got as much support for it as JSON or text/template.

It's even often good that it's not in the standard library. I've been using the YAML support a lot lately. The biggest Go YAML library is on major version 3, and both of the jumps from 1 to 2 and 2 to 3 were huge, and would have been significantly inhibited if v1 was in the standard library and the backwards compatibility promise applied. v1 definitely resembles encoding/json, but that's missing a lot of things for YAML. If Go "supported YAML" by having v1 in the standard library, everybody would be complaining that it didn't support it well, and by contrast, asking anyone to jump straight to the v3 interface would be an insanely tall ask to do on the first try without any experimentation in the mean time. And I'm not even 100% sure there won't be a v4.


I disagree with this stance.

To me, the community take on "stdlib vs libraries" is a cyclical thing; we're coming out of a cycle led by JavaScript/NPM, where everything is a library due in no small part to how HORRIBLE JS/NodeJS's standard library is. Go back further, and you run into the Python/Java world which had far more comprehensive stdlibs, and today Go's (and to a lesser degree, Rust's) rising popularity is bringing back more feature-complete stdlibs.

So, it changes. And its alright for different languages to have different stances on how comprehensive their stdlibs should be. Go is absolutely an example of a language that wants tons of stuff in its stdlib; but its also a language which despises change, and thus we got a very awesome stdlib at v1, and limited improvements to it over the years.

I don't feel the "YAML changes a lot" argument is valid. It does; but if an app needs YAML, they can choose to use the stdlib, or a library, and they'll have to keep up regardless.

Putting it in the stdlib has tons of advantages.

First: it increases the scope of parties affected by any breaking change, which naturally forces more deliberate thought into the change's necessity and quality.

Second: it reduces the number of "things" code consumers need to update; from the go version itself & the YAML library & consuming code, to just the go version & consuming code (this has network productivity effects in only having to source one "breaking changes" changelog for your hit list on what needs updating).

Third: it reduces multivariate dependency-dependency issues (e.g. YAMLv2 requires Go 1.14, but we're on Go 1.13, so first we have to upgrade to Go 1.14, then we can upgrade to YAMLv2).

Fourth: it reduces the number of attack surfaces which security professionals need to monitor (all eyes on the stdlib implementation, versus N community implementations, strength in numbers).

Fifth, less about YAML, but: stdlib encourages what I call the "net/http.Request effect"; if I want to write a library that does stuff with http requests, it's nearly impossible to do that in a framework-agnostic way in JavaScript, because express does things differently than hapi etc; but in Go, everything is net/http.Request. So even if the stdlib doesn't have everything one needs, plugging in libraries to solve it is easier because everyone is using the same interfaces/structures.

Obviously not everything can be in stdlib, so it comes down to the question: what belongs there. And in my opinion, every language is too conservative (except maybe Python, which strikes a strong balance). Many language teams will say "we're not including X because X isn't an ISO standard, or because its still changing, or because we're not sure on the implementation". These are all arguments out of cowardice and fear of asserting an opinion. Language designers have developed a deep, deep fear of opinions; because they believe, maybe correctly, that one of their opinions will be bad, and it will hurt adoption. The issue is; having no opinion just hurts productivity, and people will also move from your language toward more productive ones (see: Go's rise in popularity, versus JS's dependency issues).


My point isn't really an argument about what does and does not need to be in the standard library, as much fun as that may be to argue about. My point is really in that first sentence: "You don't need official Go support for JSON5."

Whether the Go team agrees with you or not about any particular library, you do not need to wait around. The standard library does not get any particular special access, with the exception of a couple of very special packages (unsafe, reflect). Things outside of the standard library can have all the properties you're talking about being good, and they're available now, without needing to get an entire language team to sign off on it.


What I would like to see:

- Enum Types

- A special operator to cut down on `if err != nil { return err }` and just return at that point.

- Named arguments, and optional params with defaults

- Default values on structs

- ...macros? We already have `go generate ./...`

( edit: Removed unformatted source code )


Sum types would satisfy the first two on that list, as well as making an ergonomic optional type trivial to define.


I would add: an extended standard library for "common stuff". I don't want to import a third-party library nor write my own "utils.go" to do:

    func contains(s []int, e int) bool {
        for _, a := range s {
            if a == e {
                return true
            }
        }
        return false
    }
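
With generics (landing in Go 1.18), that helper at least only needs to be written once; a sketch:

    func Contains[T comparable](s []T, e T) bool {
        for _, v := range s {
            if v == e {
                return true
            }
        }
        return false
    }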


If you need to look up or worse, delete, a value in a slice, then you probably shouldn't be using a slice in the first place. You probably want to replace your slice with a set.



Two Go 2 proposals that interested me were:

* nillability annotations: https://github.com/golang/go/issues/49202

* Change int from a machine word size (int32 or int64) to arbitrary precision (bigint): https://github.com/golang/go/issues/19623

Sadly the nillability annotations were rejected because they weren't backwards compatible. The bigint change is also unlikely to be accepted because the issue is already five years old and there are concerns about performance.


If this was implemented:

    func foo(a, b int) int {
        return a * b
    }
At compile time, what is the return type of foo? Answer: Unknown. The compiler has no way of determining if the resulting type will be small or big int. This has to be taken into account not just in foo, but in every function calling foo, and every function that is called with the return of foo as a param.

One of the best things about golang, is how little "magic" there is. Everything that happens is immediately obvious from the code. An int that is upgraded to a completely different type when it reaches certain values, goes directly counter to that; it would be hidden behaviour, not immediately obvious from the code.

Should golang have a builtin arbitrary sized integer type? Maybe. Using math/big can become cumbersome. But a much better solution would be to introduce a new builtin type like `num`
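
For comparison, a trivial expression via math/big today, next to what a hypothetical built-in arbitrary-precision `num` might allow:

    a, b := big.NewInt(3), big.NewInt(4)
    c := new(big.Int).Mul(a, b)
    c.Add(c, big.NewInt(1)) // c = a*b + 1, i.e. 13
    // vs. the hypothetical built-in: c := a*b + 1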


I would really love to see default parameters and struct values. I jump between Python and Go in my day job and Go is a much better language overall but things like this make it painful


This is also my #1.

Parameter / Option ergonomics.

The current best practice of "functional options" and long function chains results in far too many function stubs ... it's a minimum of 3 extra lines per parameter. Parameter structs require a whole extra struct...

Borrowing optional / named parameters from Python would cut down length and complexity of Go code drastically.
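
For anyone unfamiliar, the functional-options pattern being referred to looks something like this (a hypothetical Server example; time import elided); note that every optional parameter costs another With... stub:

    type Server struct {
        addr    string
        timeout time.Duration
    }

    type Option func(*Server)

    func WithTimeout(d time.Duration) Option {
        return func(s *Server) { s.timeout = d }
    }

    func NewServer(addr string, opts ...Option) *Server {
        s := &Server{addr: addr, timeout: 30 * time.Second}
        for _, o := range opts {
            o(s)
        }
        return s
    }

    // srv := NewServer("localhost:8080", WithTimeout(5 * time.Second))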


Default parameters have always struck me as dangerous. (And for my day job I use Python.)


I always hear this argument but I’ve never seen it be dangerous in my day to day. We deal with defaults all the time in programming and it’s generally fine


A better template library would be a killer feature. Similar to what PHP has been since forever: the ability to write HTML using all the Go language's features.


I'd like to see ergonomics improvements, particularly to function parameters.

The current best practice of "functional options" and long function chains results in far too many function stubs ... it's a minimum of 3 extra lines per parameter. Parameter structs require a whole extra struct.

Suggestion: Just borrow named / optional parameters from Python. It would cut down length and complexity of Go code drastically.


This was a surprisingly good list. Most of these kinds of articles just consist of someone bemoaning the fact that Go isn't Haskell or whatever language they like more. But this is a legitimate list of things that could and should (in my opinion) be changed without turning Go into not-Go.


Oddly enough the list doesn't resonate much with me.

I'd love to see better mocking support. Doing mock.On("fun name", ...) is so backwards, confusing and brittle. It's also a great source of confusion for teammates when tests fail.

I miss better transaction management. I regularly juggle db, transactions and related interfaces and it's a continuous pain.

Then there's the "workaround" for enforcing interface implementation: `var _ InterfaceType = &Struct{}`. This could easily be part of the struct definition rather than living in the var section.
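
The common form of the assertion, for anyone who hasn't seen it (type names hypothetical):

    // Build fails here if *MyType stops satisfying InterfaceType.
    var _ InterfaceType = (*MyType)(nil)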

As was mentioned by others, doing x := 5 only to later do JsonField: &x is just a waste of intellectual firepower. Maybe this can be alleviated by generics, but the lang should be able to make this a one-liner.


I have a much smaller ask: struct type elision in function calls.


I have to say I’ve no idea what you mean. In function definitions I’d interpret it as type inference (and would disagree) but you specifically talk about function calls, and consider it a small change. Can you describe what you’re thinking of?


I think I understand what they're asking for. Consider a function that takes in a struct as one of the arguments.[0]

Currently, you would have to invoke it like so:

    svc.GetObjectWithContext(ctx, &s3.GetObjectInput{Bucket: bucket, Key: key})
But... why do you have to type "s3.GetObjectInput"? The function is taking in a concrete type (not an interface) for that argument, and there is only one possible type that you can pass in... so I agree with the person above that it should be possible to elide the type like so:

    svc.GetObjectWithContext(ctx, &{Bucket: bucket, Key: key})
Go already supports type elision in some places, such as...

    []someStruct{{Field: value}, {Field: value}}
instead of having to type

    []someStruct{someStruct{Field: value}, someStruct{Field: value}}
which would be equally pointless repetition.

[0]: https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#S3.Ge...


Yep exactly this ^^ thank you for providing great context that I should've added to the comment in the first place!


I would also love this. The current rules for when you're allowed to elide are nonsensical.


Ah yes, that does make sense; I would agree. That Rust doesn't have that (in any context that I know of) is one of its annoyances.


The issue with the order of range seems like using the same name for satisfying a different requirement: in a template, you're much more likely to want the value than the index, so it makes sense that a looping construct with a single parameter would be putting the value in that parameter. In a loop in normal code, you're more likely to want to do math on the index. So, I'd say the problem is more about punning the name of these two behaviors than the behavior itself being bad.


> In a loop in normal code, you're more likely to want to do math on the index.

It really is not, no. The number of loops using `enumerate` (or working on range / indices directly) in Python or Rust are a small fraction of those just iterating the sequence itself.

That would be even more so for Go, which has no higher-level data-oriented utilities (e.g. HOFs or comprehensions, which would usually replace a number of non-indexed loops, and would thus increase the ratio of indexed to non-indexed loops).


Serious question: what are the odds that go 2 ends up like python 3 and it takes the world over a decade of pain to migrate? (I like both python and go, and I’m still maintaining a sizable body of py2 code.)

“Backwards compatibility forever” seems like unnecessary shackles, and the language should be able to grow — I’ve seen some nice proposals for improvements. I just wonder what the strategy is going to be for migrating code from go1 to go2 and how painful that’s going to be.


Not a Go developer here, but let's take the example of Java when it got (for example) generics or functional features. There was NO going back. These were such fundamental improvements that the past was absolutely outdated as soon as the new features were available. Maybe the same thing will happen for Go.


While I agree with what you're saying, that's not the GP's point. They're talking about backwards compatibility, i.e. old code running on new versions. Java has been extraordinarily good at that (maybe too much so), with the possible exception of Java 9. Try running Java 1.0 code on JDK 17 and it'll probably run fine. Try running Python 2 code on Python 3 and you're generally going to be in for a very rocky ride.


Not sure how relevant that is, the old code worked fine in post-generics Java.


Technically yes. But, to me, the question is how developers are changed by a given paradigm shift. It happened with generics, and it's happening with functional programming. So an evolution in the language can make its past versions instantly outdated. [AFAIU, Python 3 did not do that. But my point is that it can happen.]


But that’s a completely different matter.

The issue of Python 2 / Python 3 is that the two were not compatible, which made the transition extremely complicated. That has nothing to do with the new version making the old one essentially outdated because of the usefulness of its contents.


Ok. My bad.


> Serious question: what are the odds that go 2

The Go maintainers already said that they don't have any plans to do an actual version 2.0 anymore. Generics turned out to be possible without breaking backward-compatibility.


Rust solved this with editions[0]; I don't know if this is feasible with Go.

[0] https://doc.rust-lang.org/edition-guide/editions/index.html


IMHO the long python3 migration was well executed and we're now comfortably on the back side of it. Reminds me of perl4=>5 and other big lifts.

Yes, commercial codebases understaffed for maintenance are kinda stuck, just like any legacy system. IMHO the solution must come from the business model down. Also, security, compliance & cost can help drive priority.


I've never heard that take; it seemed completely botched to me. The first couple versions of py3 didn't even work well, and they were still pushing people to cut over.


It wasn't well executed. The standard migration tool (2to3) was kind of considered a failure and the better one (six) wasn't ready, or even around at the time.

Python came this || close to dying during the migration, its users all having moved to other languages. Primarily data science saved it and then some time passed and libraries moved on, etc, and after a while the cost benefit calculation started swaying towards Python3, probably after 3.3 at least, so 4 years after Python3's launch.


This is an absolutely implausible take. Perl 4's entire lifetime was about 5 years; released in 1991 and Perl 5 utterly dominant by 1996. Python 2 shows no signs of an actual EOL to this day!


It would be nice if there was a tool that re-wrote your Go-1 code in Go-2.


Go-1 and Go-2 code could even cohabit using something similar to Rust’s editions system e.g. the `range` behaviour could be something like a package opt-in.

And half the article could be fixed by adding new APIs and deprecating the old ones.


I remember that someone on the Go core team (Ian?) said Go 1.18 is Go 2. That means Go 2 is backward compatible with Go 1.


Nope, this is exactly what killed Python 3. There is only one way forward: old code still has to work, or people will simply stay on 1.0 forever.


I mean… yes, but also, not even remotely.

Languages like Rust have solved this through the concept of editions (or whatever you want to call it), essentially package or module-level “compilation mode” which allows evolving the language without breaking the old code.

The issue of the Python transition is that it was way too massive and the repercussions were way too widespread (e.g. they necessarily leaked into the APIs) for this to be possible, despite Python having an entire mechanism to handle this with `from __future__` (though admittedly that being a file-level attribute makes it less than ideal for moving the language forwards).


If you can make the compiler work with both new and old code, then yes, I totally agree. But Python expected you to transpile your code to the new version and recompile, while also updating many of your dependencies. And that's not something you can easily do in large codebases.

I'm not sure how editions in Rust work, but if the compiler can compile all editions and link them into a single binary, then it's something I'm totally for. But if someone expects people to go through millions of lines of old code and make sure it all works fine after migrating to a new version of the lang, then I'm not sure your language can succeed.


> If you can make compiler work with both - new and old code, then yes, I totally agree.

Which you can, technically even Python has that mechanism (`from __future__ import <thing>` can change the language syntax).

However the ability to use that for the 2/3 transition was basically nil as an enormous number of apis were impacted, the syntax changes were really the easy bits, the semantics were much more difficult to deal with (talking from experience).

> I’m not sure how editions in rust work, but if compiler can compile all editions and link them in single binary, then its something I’m totally for.

That is exactly how it works.


Not so serious question: what if Go 2 and Python 4 turned out to be the same thing?


i.e. they both become Nim ;)


To expand on the logging improvements, I would like to see a `context.WithLogger` context function, to allow cross package passing of a common logger instance.
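
A minimal sketch of how such a helper can already be written in user code on top of context.WithValue (the helper names and the choice of the stdlib *log.Logger are my own):

    type loggerKey struct{}

    func WithLogger(ctx context.Context, l *log.Logger) context.Context {
        return context.WithValue(ctx, loggerKey{}, l)
    }

    func LoggerFrom(ctx context.Context) *log.Logger {
        if l, ok := ctx.Value(loggerKey{}).(*log.Logger); ok {
            return l
        }
        return log.Default() // sensible fallback
    }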


> Go's templating packages should support compile time type checking.

Does anyone know of a decent type safe templating package out there (for any language)?


React with TypeScript. Very popular and unlike Go's templates has proper type checking.


My biggest annoyance so far is the inability (and inconsistency) to have a one-liner to get a pointer to something other than a struct.

I can write x = &Foo{...}, but somehow x = &42 and x = &foo() are not allowed, which forces me in some cases to declare useless variables that hurt readability.
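
With Go 1.18 type parameters, a tiny helper at least makes the call sites one-liners; a sketch (the name `ptr` is mine, nothing like it is built in):

    func ptr[T any](v T) *T { return &v }

    x := ptr(42)    // instead of: tmp := 42; x := &tmp
    y := ptr(foo()) // works for function results too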


I think Golang is awesome, but I have two major gripes that I hope can be fixed:

Dependency management:

Go mods is a dumpster fire. `go get` and `go install` are finicky and inconsistent across systems.

It's difficult to import local code as a dependency. Using mods with replace feels like a shitty hack, and requires me to maintain a public repo for something I may not want to be public. I end up using ANOTHER hack that replaces mod references to private repos and I have to mess with my git config to properly authenticate.

I've never used another language that made it so difficult to import local code. Rust's cargo is so much easier to use!

Sane dynamic json parsing:

Having to create a perfectly specified struct for every single json object I need to touch is terrible UX. Using `map[string]interface{}` is just gross.

Again, I think Go should copy the existing rust solution from serde. With serde, I define the struct I need, and when I parse an object, the extra fields just get thrown out.

If anyone thinks I'm misunderstanding something, please enlighten me. I hope reasonable solutions already exist and I just haven't found them yet.


> It's difficult to import local code as a dependency. Using mods with replace feels like a shitty hack, and requires me to maintain a public repo for something I may not want to be public. I end up using ANOTHER hack that replaces mod references to private repos and I have to mess with my git config to properly authenticate.

With regards to this concern at least, Go 1.18 is adding workspaces, which should help (https://sebastian-holstein.de/post/2021-11-08-go-1.18-featur...).


What you describe for JSON is already the case in Go; the stdlib JSON parser simply throws out any extra fields on deserialisation.


In fact it's difficult-to-impossible to get the opposite (only json.Decoder supports a strict mode, and as soon as one intermediate type implements UnmarshalJSON it stops getting propagated).
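
Both behaviors in a minimal sketch:

    type Event struct {
        Name string `json:"name"`
    }

    var e Event
    data := []byte(`{"name": "a", "unexpected": 1}`)

    // Default: the unknown field is silently dropped.
    _ = json.Unmarshal(data, &e)

    // Strict mode is only available via json.Decoder.
    dec := json.NewDecoder(bytes.NewReader(data))
    dec.DisallowUnknownFields()
    err := dec.Decode(&e) // err: json: unknown field "unexpected"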


Hardly important, but in the parts of the std lib where they added context support by adding a `FooContext` func for every `Foo` (where `Foo` just calls `FooContext` with `context.Background()`), I hope we can eventually just have `Foo` take a context argument.
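
For example, database/sql does (roughly) this:

    // Query just wraps QueryContext with a background context.
    func (db *DB) Query(query string, args ...interface{}) (*Rows, error) {
        return db.QueryContext(context.Background(), query, args...)
    }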


Can someone explain this one? I couldn't see why go gave this result.

> What is the value of cp? If you said [A B C], sadly you are incorrect. The value of cp is actually: [C C C]


The "trap" of the snippet is that `cp` is an array of pointers.

What it shows is that Go doesn't have a `value` per iteration, it has a single `value` for the entire loop which it updates for each iteration. This means if you store a pointer to that, you're going to store a pointer to the loop variable which gets updated, and thus at the end of the loop you'll have stored a bunch of pointers to the last item.

This is most commonly an issue when creating a closure inside a loop, as the closure closes over the binding, and since Go has a single binding for the entire loop all the closures will get the same value.
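
A minimal reconstruction of the trap (variable names assumed from the quoted output):

    values := []string{"A", "B", "C"}
    cp := make([]*string, len(values))
    for i, value := range values {
        cp[i] = &value // address of the one loop variable, not of values[i]
    }
    // Dereferencing cp now yields [C C C].

    // Conventional fix: shadow the variable so each iteration gets its own.
    for i, value := range values {
        value := value
        cp[i] = &value
    }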


This has surprised me twice: once in my own code, where I ended up not really understanding the problem and just mutated the code till it worked, and later when I was helping someone with their code and the way they structured the question made the still-surprising answer memorable.

A fix wouldn't be unwelcome, but it seems like it would have a good chance of causing performance regressions: a lot more allocated values, maybe in a lot of inner loops. I guess escape analysis might help avoid the allocations in the general case?


> A fix wouldn't be unwelcome, but it seems like it would have a good chance of causing performance regressions: a lot more allocated values, maybe in a lot of inner loops. I guess escape analysis might help avoid the allocations in the general case?

It seems unlikely unless the code is already incorrect (aka you're closing over or otherwise leaking the iteration variable).

But regardless of how I dislike the current loop's scoping this is a significant semantics change so it would obviously have to be opt-in (and it would hopefully come alongside making range loops less crummy e.g. with an actual iterator interface).


In the loop, you set cp[i] to a reference to the variable `value`. `value` is the same variable throughout the loop, with the different values copied into it: first A, then B, then C. So at the end, cp holds three references to `value`, which contains the last value, namely C.


Most important items on my plate:

    1. a unified, improved *error* type in the stdlib, with stack trace support.
    2. a unified log interface (mentioned in the article).
    3. an STL-like standard library, as in C++.
    4. official shared library/module support, so we don't end up with 100 static binaries that each duplicate 90% of their content.


About the optimization of the regexp package: a talk is scheduled tomorrow [0] at FOSDEM 2022.

[0] https://fosdem.org/2022/schedule/event/go_finite_automata/


I've heard Go programs' execution performance is near Java's.

Is this true? Because I can't think of anything more useless than that.


I want golang to stay as minimal as possible. I think of Go as the 2020s version of C. If you want all the madness of templating, reflection, and (arguably) needless features, can some privileged PhD student out there please make a 2020s C++… Go++?


Not a Go developer, just curious about the language and ecosystem, which I really like. For me it would be nice to have more functional features. For example, a way to explicitly say that a variable is mutable/immutable, preferably with immutability by default. Also native support for map/filter/reduce/etc.: those are good abstractions and easier to read than `for` loops, since you don't have to keep looking over your shoulder for side effects. I guess the latter would be easier to add since there is support for generics already.


"Also native support for map/filter/reduce/etc."

Native support for them, in the context of the existing Go spec, is coming with the next release. To reserve the right to evolve it before committing to the backwards-compatibility promise, it will initially appear in the https://pkg.go.dev/golang.org/x/exp external repository, which is the official Go repo for things either too unstable to be included in the standard library or still experimental. The general expectation is that it'll land in the standard library the release after that. Until then it isn't technically in the standard library, but it's as official as it can be.

I carefully phrased that with "in the context of the existing Go spec" because I think expectations of this support are wildly out of whack with reality. It's still going to be a very unpleasant style to work in, with many and manifold problems: http://www.jerf.org/iri/post/2955 . I think people would be crazy to turn to that style in Go. Go wasn't just missing generics to support this style; it was missing many things, and "solving" the generics problem still leaves it missing many things.
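
For the curious, the kind of thing Go 1.18 type parameters permit (the function name and shape are mine; x/exp may spell it differently):

    func Map[T, U any](s []T, f func(T) U) []U {
        out := make([]U, 0, len(s))
        for _, v := range s {
            out = append(out, f(v))
        }
        return out
    }

    doubled := Map([]int{1, 2, 3}, func(n int) int { return n * 2 })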


What would you need those for? For checking exhaustive type switches? Seems like an extra keyword on "interface" would do the trick.


Is this comment meant for this article? I can't for the life of me grok what you mean!


Guessing it was meant to go on https://news.ycombinator.com/item?id=30205232 ? No idea how it ended up here


Completely true. How it got here, no idea.


We'll move it. Thanks!



