Hacker News
Go Enums Suck (zarl.dev)
137 points by pionar 11 months ago | 230 comments



`iota` is maybe the only language feature of Go that I would actually support removing. Obviously, they never will because it would be a breaking change. It's just so vestigial. There's literally no reason to use it, and a very big reason why you shouldn't: The encoded value can change any time you re-compile your program, so you can't actually use it for anything where the value of the enum leaves the process that instantiated it (e.g. marshaling to JSON and sending over the wire). That's a terribly poor characteristic for a feature in a "systems" (emphasis on plural) programming language to have, one would think.


> The encoded value can change any time you re-compile your program, so you can't actually use it for anything where the value of the enum leaves the process that instantiated it

This is not true, iota is stable in its ordering. https://go.dev/ref/spec#Iota


I think they may be referring to if someone accidentally changes the ordering, either by inserting a new variant between two existing ones or by shuffling the order of the existing variants the value can change and cause problems.


Yes, but even with that interpretation, they claimed something much stronger:

> The encoded value can change any time you re-compile your program

Any value (not just enums) can change any time you re-compile your program, if some programmer goes in and messes it up.

The real, much softer criticism would be that Go requires its programmers to understand the potential consequences of inserting or shuffling enum values (where iota is involved). It's a much weaker case against iota than what they stated.


Ok, fair, yes what I meant is really that 'iota' is capable of introducing action-at-a-distance, albeit in uncommon situations, because new, preceding iota declarations within a const block change the values of subsequent iotas. This wouldn't happen every time you re-compile your program; I meant that more as a shorthand for "potentially can happen when your program changes"; and I can understand why that shorthand is confusing, because a much more poorly designed implementation of iota could actually, conceivably, change the iota values on every re-compile (in much the same way Go randomizes map iteration order, for example); and this is not what Go does.


Agree, iota is terrible. I think I encountered exactly one time where iota was a good fit for what I wanted to do (an internal representation of some kind, so whether the value changes in the future was irrelevant), but even then it was just needlessly opaque compared to just assigning values.

Real enumerations/discriminated unions are the one thing I consistently wish for in the Go annual surveys


Why iota show you a knuckle sandwich


Ada has excellent enumeration types, and subtypes, and compile-time checking of case statement coverage for enum values, and an optional representation mechanism to control the binary value for each (symbolic) enum value. I'm not aware of any other language with that kind of enum type system.

See e.g. https://adaic.org/resources/add_content/docs/craft/html/ch05...


Not only that, but when you define an enum you get 'Pred and 'Succ to move between values, range iteration over all values with 'Range and 'First and 'Last, string conversion with 'Image and parsing with 'Value.


Ada rocks. It's just about the most underappreciated language ever.


Rust, I think, covers most if not all of the Ada features you mentioned


Rust's "enum" mechanism is really an algebraic data type, and corresponds to Ada's enumeration types when applied as discriminants in variant record types. But Ada's enumeration types have a wider context of use, separate and independent from variant record types, including compile- and run-time features related directly to a) symbolic order and symbol names, b) binary representation mapped to these symbols, and c) subtyping/ranges.

I'm not aware of a good online presentation focused exclusively on Ada's enumeration types and their various uses. It's not even singled out in the Rationale documents for the design of the language and the (3?) design revisions since the 1980 launch; maybe the AARM (Annotated Ada Reference Manual) has more focused discussions? I'm not sure, I haven't looked at these since ~20y ago.


Rust's enum model is a hybrid.

It allows you to set discriminants explicitly, and it allows unit-only enums (enums with only discriminants and no structures associated with them). You can control the underlying type of the discriminant too.

With a little bit of derive macro sugar you can even iterate through all the values of an enum.


If you're using protobufs anyway, you can also get this functionality generated by using a proto enum: https://protobuf.dev/reference/go/go-generated/#enum. It works really nicely and has all the features mentioned in the article.

One added benefit is it serializes/deserializes safely (even when you add / remove values), so you can persist and read back values without a problem - even to a different language.


A thing that so many enum solutions miss is that you have to have a path for a value outside of the current definition, or you lose a lot of flexibility in compatibility with any data that crosses wires or disks. Sounds fine for a lot of cases, of course, but in a world of mixed deployment fleets working on data, you pretty much have to have a way to allow a value that is not part of your current definition, or you are basically placing a "poison pill" on your system.


I'm not bent out of shape about Go enums like the OP. However, when it comes to writing and reading data over networks or disk I/O, a naked enum was never going to work anyway, not really. Then you turn to protobuf etc. so one has cross-OS/arch/CPU interoperability.


I don't disagree, I don't think. Just wanting to float a reason simpler enums are usually preferred. In particular, you likely want to use the enum to restrict what values you will introduce into a system. You often, sadly, cannot use them to restrict what values are actually there. Which is why the place you pass them will see the raw int.


Yeah, but then should structs ever have a bounded size? Someone might well have gone and done did added a couple of fields to a struct over the wire.


I think I touched on what I feel is the right answer here. The data is separate from the enum. That part is clear and I don't think anyone really disagrees.

What, then, is the enum for? It is specifically to restrict what your code will do. To that end, it is a good way to restrict data your code can introduce. It is not a restriction on what is happening outside of your code, though.

You can, of course, make similar arguments for structs. Or really any data in the code. How do you know your number won't go over some arbitrary size?

Enums are, largely, the most restrictive data in code. Which is why we discuss them more, I think. If folks did work with more big numbers, I'm sure we would be more curious on why things don't act like common lisp where big numbers basically work with no extra work.


>> Sounds fine for a lot of cases, of course, but in a world of mixed deployment fleets working on data, you pretty much have to have a way to allow a value that is not part of your current definition, or you are basically placing a "poison pill" on your system.

I had the joy of a numeric ID (an int) getting a B added to the end of it to distinguish the product as being the "same" but sourced from another vendor... (and vendor is part of the system so this is layers of silly, but useful on the floor if there is a problem).

This is the down side of "fleets" of applications with differing degrees of type safety working in concert.

Should an enum be fixed, and its change need to be reflected in every system? Because current Go out of the box isn't that. Should it be open-ended as you're suggesting... because a "new int value" can flow through the system in an "unsafe" way depending on the permissiveness or quality of your code.

I don't like any of the answers, but I candidly don't have a lot of problems with the current enum system in Go. Is it great? No. But if you're validating at your borders/boundaries (storage, API, etc.) and being responsible, I don't really give it much thought.


I found this lesson out the hard way by finding that a new "status" had been added to our service and blew up some of our monitoring code. Since our monitoring code was still correct for the cases that they were checking for, and had default clauses to note things that were not relevant, we were crashing purely because I had coded it to convert the incoming value into a java enum. Oops...


well thank you for the example anyway :)


Go and Python have OK enums. I will use them, but they could be simpler/more expressive. This raises the question: Is there an obstacle to releasing better enums in the next Python and Go versions? If the concern is about breaking backwards compatibility, I would be OK with a new type. Is it a culture issue, i.e. that Python and Go programmers don't use enums much? (Chicken + egg here)

Rust's enums are great. No "auto" boilerplate if not mapping to an integer, exhaustive pattern-matching, sub-types etc.


> Rust's enums are great.

Rust doesn't have enums. It has sum types – that for some reason it arbitrarily decided to call enums.

Sum types are great. There is a good case to be made that Go would benefit from the addition of sum types. But until that day there isn't much more you can do with enums. That's all enums are – a set of named constants.


Rust's enums are entirely unlike C enums, and reasonably similar to Java enums.

Not the first time a word has been used for several nearly-unrelated concepts, and it won't be the last.


I've heard this before, but I struggle to understand the abstraction.

I make heavy use of rust and Python enums (Are they both misnamed?) Those + structs are generally the base of how I structure code.

The "enums" in the article also seem to be of the same intent. Is this a "no true Scotsman" scenario?

Some research implies the difference is a True Enum involves integer mapping, while a Sum Type is about a type system. I think both the Rust and Python ones can do both. (repr(u8) and .value for Rust/Python respectively)

The use case is generally: You have a selection of choices. You can map them to an integer or w/e if you want for serialization or register addresses etc, but don't have to. Is that a sum type, or an enum? Does it matter?

Another thought:

Maybe:

  #[repr(u8)]
  enum Choice {
    A = 1,
    B = 2,
  }
Is an enum, while

  enum Choice {
    A(C),
    B(D),
  }
Is a sum type?


Enumerations map back to integers. Enumerations can have iterators written on them that exhaustively enumerate the possible values. (Sum types either have no such enumeration at all, or in general, they're useless, so you don't see them.) Enumerations can be represented by a canonical and small set of strings, if you want a string backing them.

This is what an enumeration is, partially because that's precisely what the word "enumeration" means; the ability to assign an ordinal number to each value in the enumeration. To "enumerate" a set is to assign integers to them. In Python, for instance, see the "enumerate" function, which does exactly enumeration on the output of some iterator.

Sum types can be used to represent enumerations, but it's a very restrictive subset of sum types. Trying to understand what a "sum type" is through the lens of a single integer would be a very strange way to approach them. Nor are sum types a "superset" of an enumeration; a base sum type is not an enumeration. You need to add more things to it to get an enumeration. In a Venn diagram they're the classic two circles with some overlap in the middle but with distinct bits on each side.

I do not understand the strangely active desire some people seem to have to erase the distinction between these two things, as if some advantage will result, as if sum types will somehow become more useful than they are or as if they will somehow lose their abilities if we don't also call them enumerations. There is no advantage to smudging these two unique things together. Not saying that you are promoting this per se, the__alchemist, just that I've seen it a lot and I don't get it. It's like someone wanting to claim that databases and files are really the same thing; well, sure, there's some overlap, but each does many things the other doesn't, and trying to squint until they actually are the same thing is generally the exact wrong direction to go to attain understanding.

To put it another way, when adding an "enumeration" into a network protocol, you allocate some fixed number of bits to hold a given sized integer. When you add "a sum type" into a network protocol, you have a lot more work to do in general.

To put it yet another way, enumerations have meaningful implementations of a ".Next()" that a sum type really doesn't. If you have a sensible implementation of a given method on one type of thing and it's not sensible on some other thing, then clearly they can not be the same thing.

(I say multiple times that a sum type doesn't have such an implementation in this message. By that I mean that while it is trivial to have a "data Color = Red | Green | Blue | RGB Int Int Int" and implement an iterator to walk through all possible values, it is not something that is generally done for all sum types, and if the sum type also includes functions or other complex values it isn't in general possible at all in common programming languages. Again, writing an iterator for "all possible functions" is perfectly theoretically possible, but in engineering terms not something anyone would actually do. All enumerations can be iterated.)


This still seems to point towards Rust's enums being both, no?

Example: For a network protocol, see the first code sample I posted.

For a `.Next()`, add the method. (I did this recently)

Regarding sum types over a network not working, this again sounds like the wrapped enums. One way to do this is use an integer for the enum variant at index 0, and conditionally assign bytes of an appropriate size based on the type wrapped for the next set of bytes.


OK, so now you are advocating for erasing the distinctions.

Why? Why is it so important that they be seen as the same thing to you? What benefit is gained from it? What benefit is gained from blending together a data structure that is fixed bit size from a family of data structures of variable size, a fairly fundamental difference? What benefit is gained from failing to consider the fundamentally different uses they are put to? What benefit is gained from looking at someone list a set of differences between the two, and basically saying, "yeah, they're different, but what if not?"

I can name further properties that differ between them. All sum types can embed arbitrary other existing sum types within themselves, without practical limit. Enumerations can not, because A: they may collide on which numbers they use and B: even if you remap them, you can run out of integers, especially with smaller values like byte-sized enumerations. Enumerations may have further structure within themselves, such that particular bits have particular meanings or values a certain number apart may have relationships to each other, or other arithmetic operations can be given some meaning; sum types themselves do not generally have any such relationships. (At least, I've never seen a sum type in which two clauses are somehow related; that'd be bad design of a sum type anyhow. Even if you did this to an internal integer contained in a sum type, it would be that integer composed in to the sum type that had that relationship, not the sum type.) Sum types have a rich concept of pattern matching that can be applied, enumerations generally do not (some languages can do some pattern matching with bits but there's still no deep structure matching concept in them).

I mean, how many differences are necessary before they are not the same thing? They can not fit into the same amount of memory; one is fixed in size, the other highly variable. One is simple to serialize into memory, the other has lots of complicated machinery. Each has operations generally valid on one but not the other (enumeration and pattern matching, plus the composition sum types have and enums generally do not). The range of valid values (or domain, whichever you prefer) is not the same. There are languages that have enumerations without sum types, in that enumerations appeared in mainstream languages decades before sum types were a mainstream conversation. In what other ways could they be different?

It strikes me like arguing that ints and strings are the same, because honestly, what's the difference between 11 and "11" anyhow? Even if you're working in a language that strives to make the distinction as small as possible, you're still going to get in trouble if you believe they really are completely the same thing. And any programmer who goes through life truly thinking 11 and "11" are the same thing is in for a lot of confusion, treating concepts they should understand as separate, even if at times superficially related, as if they were actually the same.


I'm not advocating for anything; I love Rust Enums and use whatever is close to them in other languages, which is usually better than the alternative of matching strings or similar (A convention in Python).

When I hear "These aren't really enums", my first reaction is to dive in and do research. (I'd been down this road before, probably after a similar HN comment...), but I haven't found usable or practical conclusions. It seems like the distinction is too subtle to be of use.

Stated more succinctly, let's call Rust enums "Choices", as I think this is causing semantic trouble. "Choices" are an excellent tool.

I'm looking at this from an engineering perspective; not a CS or abstract mathematics one.

I am curious what your pure Enum, and pure SumDataType look like in practice. I am also curious what existing implementations of either exist. Are they Haskell conventions?


Think about the way each alternative branch of a Rust "enum" can model arbitrarily nested data. That's completely different from the things in Go and Python that are called "enums".


That's because what you insist is the only thing deserving of the name "enum" is just a sum type of unit types.


Sum types are great, but I don't think they fit in Go's type system.


They also tend to require proper pattern matching to be particularly useful, something which I can't see being added given Go's design philosophy.


True Scotsman spotted!


If people are spending this much time circumventing your language design, you oughta take a look inside. Go "enums" suck and limit the language.


> Go doesn’t technially have Enums and it is a missing feature in my book but there is a Go idiomatic way to achieve roughly the same thing.

Oh, so it's a lot like Python then.

> This is fine however it is nothing but an integer under the covers this means what we actually have is:

Oh, so it's a lot like C++ then.

> But what you notice here is we have no string representation of these Enums so we have to build that out next

Have to!

Yes this still all sucks. (But at least there's ugly historical precedent!)


Go is in this weird middle ground where it's modeled after C, so it's got things like no enums, return codes for errors, mutable everything, nulls, and pointers (that don't support arithmetic, so it's really just this "*" sigil that you have to remember to use sometimes), but it's also fully garbage collected and has built-in, stackful green threads. I have no idea what it's actually trying to be.


It's all of the ergonomics of C combined with the bare metal performance of a garbage collector.


and the verbosity of very old java


I would wager that the vast majority of backend software jobs are for people writing REST API microservices, exchanging JSON, with a mindset that is more practical and "blue-collar" than academic.

Golang is an absolutely ideal language for writing REST API microservices, that exchange JSON, with a practical and blue-collar mindset.

Plus it compiles to small-ish native executables, which renders Docker superfluous in many use cases and also makes it well-suited for writing DevOps tooling (e.g. Docker, everything from HashiCorp, etc).

It's not trying to out-cool Haskell and Rust on online message boards. But I would never in a million years evangelize either of those two languages for routine REST API work in most real-world shops, whereas I could suggest Golang without losing professional credibility.


It’s funny to emphasize the vibes of a language contra other ones when you’re supposed to be on the supposedly pragmatic side. Haskell and Rust are popular on “message boards”? Better not mention them among my peers and risk my blue collar street cred.

But it’s a red herring in any case since enum types are such a basic programming language feature. No need to evoke the Cool Kids languages at all.


Completely true. And most of the time, I enjoy writing Go. But the enums are weak. However, if the Go team wants to tackle an important addition to the language, it should be non-nillable pointers. No need for Option sum types, just a type annotation that says the pointer cannot be nil (but a nillable pointer can be assigned to it once you've tested it for not being nil). I've thought about building a linter (like the Uber one), but the way the types in the syntax tree are resolved makes it too complex for a small project.


> Golang is an absolutely ideal language for writing REST API microservices

Those are strong words for a language with all the flaws I just mentioned. :D Yes, green threads are great for network programming, but it's not the only language with them, and one feature does not make it "ideal". If I had to pick the best networking language... I'd probably say Elixir.

But even if we agree that it's ideal, it doesn't change my point.


I find go distasteful, but are there really many other languages with an m:n threading model? Only other popular one I can think of is Erlang/Elixir.


Java 21, and I assume like every scripting language (Ruby, Python, etc). Though I guess with scripting you can't use more than one OS thread (not totally sure). Rust started off with it, and C# tried it too, but there are huge downsides to the model, so it's not like it's perfection incarnate and every other language just can't pull it off.


Does 21 actually automatically schedule fibers onto open cores?


Yup. It's all 100% automatic. Just use a different name for the thread pool and you're done, I believe.


Elixir performance is pretty average and it's not a statically typed language, things will blow up at runtime.


The better language at this than Java, Go, and, god forbid, Elixir is C#. It properly implements generics, pattern matching, and a task-based asynchronous model, which allows you to trivially interleave or dispatch all kinds of method calls in ways that require extremely verbose and bulky code in Go.


completely agree Elixir is a much better language all around for this type of work


More efficient than a scripting language, less performant than a systems language. I really like it for anything network-related.


C does have enums. Not trying to detract from your point, which I agree with, but enums are definitely a thing, which C has.


> C does have enums.

Then again, so does Go. Go doesn't have an enum keyword like C, but that isn't what defines enums.


But neither C nor Go have type-safe enums.


They are type safe. They are not value safe, but neither language supports value constraints, so that isn't unexpected.


What is value safety? Why should value constraints be pertinent here? Near as I can tell this is a neologism (introduced here? https://itnext.io/we-need-to-talk-about-the-bad-sides-of-go-...) that just happens to be near the top of search results in this area.

The introduction rule for enum values in C is _not_ type safe. You know how you can tell? Well-typed programs go wrong. A language absolutely does not need value constraints of any kind to get this right.


> Why should value constraints be pertinent here?

Because that's where it is unsafe: You can introduce a value of the same type that is outside of the enumerable range. You cannot introduce a value of a different type, though. It is type safe.

Yeah, any language with a type system worth its salt has value constraints, but if you choose to forego them as C and Go have, you're not going to bother adding them just for enums. It would be kind of silly to leave developers with a noose to hang themselves with everywhere else, but then think you need to tightly hold their hand just for enums.

In fact, I'd argue that if you are short on time and need to make compromises to reach a completion state, enums are the last place you would want to take the time to add value constraints. The types more often used would find much greater benefit from having value constraints.

Case in point: TypeScript. When was the last time you cared that its enums behave just like C and Go? Never, I'm sure, because it having value constraints everywhere else more than makes up for it. Giving up value constraints for safer enums is a trade you would never consider.


> Because that's where it is unsafe: You can introduce a value of the same type that is outside of the enumerable range. You cannot introduce a value of a different type, though. It is type safe

C’s type system is unsound, and not all compileable programs respect its dynamic requirements. We cope with this by referring to some code as “not type safe”.

foo bar = NOT_FOO;

You say this "typedef enum {…} foo" is not a type, naming a set of values, but just a convenient alias for whatever the representation is, thus all "enum" (regardless of actual decl) name the same set, and every constructor expression shares the same "type". Consistent with the language specification, and passes the type checker, so you could say this code is "type safe"? But it's one hell of a foible and not consistent with any lay (non-PLT) understanding of type safety, where type safety means the type written in the code and the runtime representation won't desync (no runtime type errors).

If you simply forbid UB and refer to only strictly conforming programs, I will accept this modified meaning of “type safe”, but grumble that this meaning is not very good

edit to encompass parent edit: as a typescript nonprogrammer, I have nothing to add :) I am confused why you are putting the features in opposition. gradual + value-sensitive typing is a good feature, but doesn’t conflict with sums. in ocaml, we support both, real sum types as well as variant [`A | `B] etc that are structural in the way you’d want C to be


> C’s type system is unsound

Along with every other programming language under the sun. A complete type system is not exactly an easy feat – especially if you want it to be usable by people.

> We cope with this by referring to some code as “not type safe”.

Value constraints are an application of types, so yes, if C/Go had value constraints then violation of those constraints would leave it to not be type safe. But they don't have value constraints. Insofar as what the types can constrain, the safety is preserved.

It seems all you are saying is that C (and Go) do not have very advanced type systems. But that shocks nobody. Especially in the case of Go, that was an explicit design decision sung by its creators. You'd have to be living under a rock to not know that.

Was there something useful you were trying to add?


> Was there something useful you were trying to add?

Yes, the clarification about value safety, which you’ve done quite well.

Not every language is unrepentantly unsound.

I continue to identify a confusion in this thread between a property of the languages, and a property of particular code, but I have clearly exhausted your patience. thank you.


> Not every language is unrepentantly unsound.

For sure. Coq does a decent job, but it's also a complete bear to use. Tradeoffs, as always.

> I continue to identify a confusion in this thread between a property of the languages, and a property of particular code

Go on. The original statement was that C and Go do not have type-safe enums. But there is no evidence of that being the case. The types are safe.

Indeed, the types are limited. An integer type, for example, cannot be further narrowed to only 1-10 in these languages. But the lack of a feature does not imply lack of type-safety. It only implies a lack of a feature.


The disagreement is over whether the program is type-safe just because it was type-checked. BECAUSE the system is unsound (completeness is irrelevant), type-checking doesn't imply type safety.

… where I am using type safety to mean “no runtime type errors/UB manifest”, ie, the property that a sound typesystem would guarantee _if we had one_. You seem to be saying that just because our type system is impoverished, does not make its resulting claim of “program is type safe” any less valid, whereas I am saying “type safety is a semantic property of programs, not of languages, and this value safety idea seems like it’s what PLTers think type safety means”.

It’s a violation of the C semantics to assign the wrong value to an enumeration, so I would say that fact the language doesn’t do anything at all to enforce or check this promotes this beyond “lack of a feature” and straight into “type unsafe”. However, I’d feel less strongly if at least initializers were checked.

As you say, different language design philosophies lead to this, and it’s not surprising. Most of these ideas came _after_ C anyway!

phone dying… no response soon.


Python does have enums in its standard library

https://docs.python.org/3/library/enum.html


Heh. Python's enums are not real types, they're just funny classes, so you still have to do dumb things like assign an internal value (commonly a dumb int), and for a very long time they were (and probably still are) wildly deficient in many other commonly desired ways. A bunch of things covered in this blog post (like StrEnum and EnumCheck) weren't added until pretty recently.


C++ added enum classes a while back, though.


It's kind of strange to see them complain about enums and then promote a DSL-specific tool they made for generating enums.

At the same time, Go has generators built in and can generate enum tables, enum to strings, and other things they have shown. I am unsure why they didn't do it the "Go" way.


DSL? The Go way is to use go generate, I would say, and this can be used with go:generate. I would like to use the AST lib to parse Go files, to remove the need for any JSON and to be more like the stringer command.


Yes. Your DSL is JSON in this case:

  {
    "enums": [
      {
        "package": "cmd",
        "type": "operation",
        "values": ["Escalated", "Archived", "Deleted", "Completed"]
      }
    ]
  }

My point was the "Go" way to do this isn't parsing a custom format (like your JSON), but it's to use go generate.


But all go generate does is run a binary like stringer? This can be used in go generate.


I'm not sure I understand exactly what you mean, but you can combine go:generate with go run so you can execute code from the current module/project that does what you want.

//go:generate go run ./internal/enumhelper -flag1 -flag2


They complain that Go has no usable, type safe enum and then show a way to build them in Go.


I love Go's lightweight, somewhat implicit style of doing enums.

You just declare an int type, and then a list of constants of that type.

People are complaining about 'iota' here, but I think it's slick and great. It combines so nicely with eliding types and values from subsequent const declarations:

  type MyEnum int

  const (
      Value1 MyEnum = iota
      Value2
      Value3
      ...
  )
Nice and simple. Most of the syntax is just the enum value identifiers. And it works well for bit flags too:

  type MyFlags int

  const (
      Flag1 MyFlags = 1 << iota
      Flag2
      Flag3
      ...
  )
Most of the above syntax isn't specific to enums (so you're already getting a lot of other things from it). The only enum-specific syntax is iota and the eliding type/value rule.

People seem to want their languages to have all sorts of guardrails, but I find many of these cumbersome. Go gives me the one enum guardrail I care about: The enums are different types, so I can't use a MyEnum as a MyFlag, or vice versa.

I've worked on giant Go codebases, with Go-style enums all over the place, and the lack of compiler-enforced exhaustive enum switches just hasn't been a problem. And it's nice to be able to use non-exhaustive switches, when you want them. Go is simple and flexible here.

The article criticizes Go incorrectly with statements such as these:

> This also means any function that uses these or a struct that contains these can also just take an int value.

> Anywhere that accepts an Operation Enum type will just as happily accept an int.

This is just not true. Here's an example you can run: https://go.dev/play/p/8VGufuxgK6b

The above example tries to assign an int variable to a MyEnum variable, and gives the following error: "cannot use myInt (variable of type int) as MyEnum value in variable declaration"

This error directly contradicts what is claimed in the article. Perhaps they mean that MyEnum will accept an integer literal, in which case I would argue that a guardrail here is silly, because again the problem just doesn't really come up in practice. Regardless, the author is not being very precise or clear in their thinking.


> Anywhere that accepts an Operation Enum type will just as happily accept an int. This is a real pain as it almost completely negates the work we have done here.

Is this a real problem? If there's a function signature that accepts `Operation`, the caller must explicitly cast the `int` to `Operation`. At that point, it's the caller's own fault.

So I'm not really following what this is solving. As demonstrated in the article, sometimes you want string constants, sometimes you want `iota`, other times you want `1 << iota`. I like that Go doesn't dictate which I have to use if I declare an "enum".


> Is this a real problem? If there's a function signature that accepts `Operation`, the caller must explicitly cast the `int` to `Operation`. At that point, it's the caller's own fault.

You would think that, but that isn't always the case: https://play.golang.com/p/Ze3pfNEVTVs

It's very easy to create an enum value that isn't actually in the defined range


Can you explain the thought process of a developer when they write 'performOperation(2)'? What do they believe '2' signifies in this context? I struggle to believe that this could occur by accident.


You struggle to imagine a programmer passing a value of the wrong type to a function?


Or passing a wrong value and the compiler allows it because the programmer trusted the compiler to "always do the right thing".


I have a totally tangential ramble queued up on this topic.

I like philosophy and I read it as a total amateur. Naming is a big topic in modern philosophy [1] with a huge amount of depth. I think of it in terms of my naïve understanding of Wittgenstein's later work and the idea that the meaning of a word actually comes from its usage within the context of a set of collaborating agents.

If I say to a programmer "use a vector", that will mean something specific if we are writing C++ and I want to use a resizable array. And it could mean something totally different in the context of a 3d rendering engine.

I think of how often I see words like "Context", "Session", "Kernel" and all of their myriad uses.

So I see articles like this as just a pointless argument because we are crossing some boundary between distinct language games. The author of this article thinks "Enum" means one thing. But it is actually the case that "Enum" is unspecified outside of some particular context. And in this case, the author is bringing some outside context and trying to reuse it inappropriately.

1. https://en.wikipedia.org/wiki/Naming_and_Necessity


So I guess you have a point that since Go does not even have enums as a formal concept, they can't suck. But this presumes that the only context where you can criticize Go is the context that contains only concepts expressible in Go. This is basically saying "Go does not have enums, therefore it cannot be criticized for lacking enums".

In the wider context of programming languages, enum is a fairly well-defined concept. Features like being able to convert a value to a string and exhaustiveness checking on switch statements are widely implemented. The iota feature in Go is clearly imitating C's enum keyword. It is fair to compare Go's built-in ability to declare an enum-like type against other languages' ability to declare the concept.

To be clear, I'm not saying every language has to have every feature. I'm just saying that a language not formally defining a feature is not a sufficient excuse for lacking it.


“Enumeration” according to a handy dictionary: “the act or process of making or stating a list of things one after another”

I am—for some reason—reminded of philosophers who go looking for metaphysical problems, disputes, etc. where there aren’t any.[1]

https://en.wikipedia.org/wiki/Quietism_(philosophy)


I'm working on an LLM project in Go and the term "context" is overloaded to the hilt--I use it to refer to the LLM context as well as Go's `context.Context` which I'm using all over the place. It's made worse since the most natural plural of context is... context. My solution is to use `ctx` for Go context, `contexts` for arrays of LLM context structs, and `contextPart` for individual context structs.

"model" is another one like that since I'm using it to refer to both data models and ML models. And "prompt" since it can be an LLM prompt or a terminal prompt for the user (this is a CLI tool).

Differentiating all these overlapping terms in ways that aren't super confusing is definitely a challenge.


Context is actually one of my least favorite parts of go. I wish they at least reimplemented it with generics so it’s not just a unlabeled box of maybe some stuff that can also cancel things


Yeah it's definitely a bit of a junk drawer. I'm using it for timeouts, cancelation, and cancelation trees and it's not so bad, but I know it can also be used to pass data around which doesn't strike me as a great idea.


This is the "you're holding it wrong" of linguistic arguments.


To be clear, I'm not saying the author is wrong. The author wants a thing that Go doesn't have (a thing that I would also like to have in Go). He is calling the thing he wants an "Enum". Then he is insinuating that Go actually has the thing he wants but it sucks. It is confusing because Go does have a thing that some people also call "Enum".

What I'm saying is the thing he wants and the thing Go has may share a name but they aren't the same thing. Just like C++ has "vectors" and OpenGL has "vectors" that share some superficial similarities but are ultimately totally different things.

If someone wrote an article that said "OpenGL vectors suck" and then mentioned it missed a bunch of features available in std:vector as justification, most people would recognize this error and dismiss the discussion.


If you are already using gRPC in your codebase, then you can define your enums with Protobuf, which does much of the same as the tool shown in this article.


Ah yes, gRPC, using which requires more ceremony and has a worse UX in Go than in... C#, of all languages.


What I like about Go is not the language itself (I'm not a language designer but I dislike a lot of choices that Go makes) but the entire culture around it of doing things the idiomatic way and moving on. I'm someone who, if you give them a tool that's flexible, will spend time optimizing it. And I'm already busy optimizing other stuff so it's nice to have something constant to build upon.

Oh and you don't have to use large power-hungry IDEs that don't integrate with any sort of config management to get a decent experience! (/hj)

If I ever learn Haskell it's over for y'all though.

(Agree with OP btw, using codegen to get the enums I want is a workable remedy for Go's lack of enums.)



I like the "go generate" approach to integrate such tools. I used a different (but identically named) go-enum tool [0], which accepts the go type and generates implementations for a bunch of interfaces. The neat thing is that the starting point is the "idiomatic" go enum definition, rather than description via a separate DSL:

  //go:generate go-enum -type=State
  type State int
  const (
    Unknown      State = 0
    Disconnected State = 1
    Connected    State = 2
  )
which then generates a separate file with implementations such as:

  func (i State) MarshalJSON() ([]byte, error) { ...

[0] https://pkg.go.dev/github.com/searKing/golang/tools/go-enum#...


Go's iota is probably one of the worst ideas in all programming languages.

It's not a full type-safe enum type, just the same clunky "enums" (assigned constants) available in C, yet they bothered to implement an auto-incremented counter.

So you can't depend on the enum for exhaustiveness warnings (e.g. on switch statements), type checking, or correctness, but you do get a useless numeric association autogenerated with iota, so you can lose the association if you re-order enum values that you serialized earlier and want to reload later.


I love iota! It comes in handy everywhere.

Don't serialize to raw integers unless you absolutely have to. Serialize to a string value: it's future/oopsie proof and helps with debugging. The nature of iota is pushing people away from bad habits.

But yes, getting warnings about missing enums in switch statements is very handy. But Golang's type system never aspired to be as rigid and encompassing as C++, Haskell, Rust, etc.


>But Golang's type system never aspired to be as rigid and encompassing as C++, Haskell, Rust, etc.

Well, didn't have to aspire to all that to at least make an effort to be more helpful, especially in trivial aspects, like having an actual enumerated type, or an Optional/Error type...


I don't think you understand how minimalist Golang is... the std lib only has room for anything you would ever need for a web service including a full web server, builtins like hashmaps, a bunch of things for concurrent programming like channels, go routines, etc. There's also language features like returning tuples and destructuring them which is actually more advanced than its peers.

To include an optional type would be against go's identity.


> I don't think you understand how minimalist Golang is... the std lib only has room for anything you would ever need for a web service including a full web server, builtins like hashmaps, a bunch of things for concurrent programming like channels, go routines, etc.

I can't tell if this is sarcasm.

If it isn't, I don't see how what you've mentioned is minimalistic or how adding an option type would be against Go's identity.


Why use ints in the first place if what you really want is strings?


Integers are far faster, smaller, and golang is strongly typed. Once you're past the serialization phase, strings have many disadvantages.


> rigid and encompassing

Rigid makes it sound bad. I would suggest "reliable" instead.


This is well said. I understand wanting to keep the language smallish and simple, but enums are such a basic and well-known feature that it's almost asking for trouble or dissatisfaction not to fully implement them.


> Go's iota is probably one of the worst ideas in all programming languages.

iota is a great idea, especially if you have to define bit mask constants, e.g. 1 << iota. I wish other languages (yes, also those with enums) had it as well.


Having an actual set type is much better than having to manually fiddle with bitwise operations.


...also:

* no null safety

* working with errors

* poor type system


The root cause is wanting to support fixed-size array types in structs, which means nothing can require initialization and everything needs a zero value. The caller can just modify every field directly.

This is sort of like how network protocols can’t statically guarantee enums are valid either. When sending bits over the wire, you can send any bits you like. There can be “values reserved for future use,” but to deny their use, you need a runtime check.

A similar solution works in Go. A runtime check in a constructor function will fix it. The enum’s value would need to be returned as an unexported field in a struct, which is the only way to guarantee that it’s not writable, except by copying it from another valid value.

I don’t see a particular reason why Go couldn’t make this easier.


Anyone writing a compiles to go, go++ yet? There are generators for better enums, sum types, and more. Bring them all together! I'm only kinda joking.

I'm also curious now whether the TypeScript checker was written in a way that it could be adapted to new languages easily.


https://github.com/goplus/gop, but they go slightly too overboard imo.


It reminds me of Vlang (vlang.io), with Goplus being a halfway step between Go and it. It strikes one that a lot of people felt there were things missing in Go that they wanted, but given how Go is governed, it would never be possible to add such features to the language. So we have Vlang and Goplus.

In the case of Goplus, it compiles to Go. Speaking of which, Vlang allows transpiling from Go and possibly to Go, but from and to C is more of their priority.

The intent of the Goplus author and contributors seems to be that their people could easily switch between both, but that their version is more feature rich.


That looks nothing like go. I wonder why even call it go+ at that point


They already exist: D, C#, Odin, Nim... no need to insist when the community is so deep into minimalism.


That actually sounds like an ok idea imo - some more quality of life features but with the small executables and fancy runtime of Go would be pretty nice.


You should not use these enums with incrementing ints in security-sensitive applications, because they're sensitive to rowhammer attacks.

Use uint64s with minimal bit overlap.

Maybe nice to include in this ultra-advanced enum library


It has always been surprising to me how primitive Go really is, for no really good reason.

I understand the evolution of C, it made perfect sense back when it was invented. And the limitations were necessary due to the wide array of architectures and extremely limited computers of the time in every dimension (CPU speed, IO speed, RAM size, disk size, etc).

Many of those dimensions have been improved by several orders of magnitude, and both compilers and runtimes can afford to be comprehensive. Yet we get this ham-strung language out of the gate.

Very disappointing.


Simplicity is a feature. Sure, how complicated can effective enums really be, but Go's general philosophy is to think hard (& sometimes for a long time) before adding every bell and whistle.

I have a far easier time delving in to previously unknown Go code for the first time compared to something like Scala (or even Java). Go is a solid language for those who value that and want to enable the experience for others.


Simplicity can be taken too far. Take this to its logical conclusion and you would have Forth or Basic.

Also, we have been doing computer language design for quite awhile now. This isn’t a new frontier. The deficiencies in Go aren’t in areas of “oh, we never thought of that!”, but are in very well known areas with known solutions.

I find Go code is obscured with house keeping code that isn’t necessary in better languages.


That simplicity can lead to more complex and hard to read/support code.

For example, to encode a JSON structure with a dynamic top-level key you need to write a custom marshaller OR marshal twice. That's... awful. Like bonkers level insane.


You're right.

Go is not simple, it is idiotically designed to deliberately exclude common sense features that ironically makes it less simple and more error prone to code in and read Go.

Other languages are objectively better than Go for every imaginable use case. Rust is better for embedded. Kotlin is better for back end. I could go on.

The creator of Go is very open and candid that he thinks his target audience, Google Engineers, are too stupid to use "advanced" features like oh I don't know, sane error handling? and any number of basic things other languages have.

I know how cringe it is to start flame wars about programming languages, but srsly, Go, PHP, Perl, JS and a few others really are objectively worse (for every context and use case) than widely used alternatives.


> sane error handling?

Golang _has_ sane error handling. It just considers errors a normal and expected situation.

When you perform a http request, and the result is successful you expect the result to be assigned a variable, right? Then why would you expect non-successful outcome to be returned in a different way? Why is it different? Why do you unwind the stack? Something terrible happened? Definitely not, it's as real life as 200 OK.

For unrecoverable things golang has panics, and if you don't like the idiomatic way of handling errors, you can just throw them like exceptions.


100% agree, and Go gets oh so close

But the correct and only sane way to do this is Either<Error, Success> that you can then pass on, map over both or either of the two, flatMap to chain with other Eithers, fold into a single thing etc etc. Not endless sprinkling of

if err != nil { log.Fatal(err) }

everywhere (and no, those operations are not obscure, esoteric or difficult to learn or understand - they're the same for other types like Option, List etc and are trivial to learn in a day for people who aren't familiar with them)

+ not making the compiler distinguish between nullable and non-nullable values (as e.g. Kotlin, Rust and Haskell do) is in itself inexcusable for a modern language


Btw, I tried to implement Result[T], flatmaps, etc., and it looks uglier than err != nil:

  func myfunc(url string) Result[string] {
    tup := FromTuplePtr(http.Get(url))
    return FlatMap(tup, func(r http.Response) Result[string] {
      return Map(FromTuple(io.ReadAll(r.Body)), func(b []byte) string {
        return string(b)
      })
    })
  }


I agree that the type system should be better. And for some reason, golang didn't even implement proper tuple types. However, now with generics, you can actually do Result[T] with all the functions you described.

> in itself as well is inexcusable for a modern language

In Go, you can assign nils only to pointers


What language typically returns the `Either<Error,Success>` you refer to here? I get (and love) the idea but have never seen it in official documentation (sure I could go off the beaten path and implement in my language of choice).

Also, did you come up with this on your own, or were you exposed to it?



not as many languages as you'd hope unfortunately, but plenty do (see eg other reply you got, there are more still including F# etc etc)

+ other languages get close, eg Kotlin has nullable types (which is a poor substitute) and Result (which is also poor because it's not a true Either)

that said lots of languages these days have libraries that do it (Arrow, Vavr and countless others)

IMO the killer simple language that Go tries and fails to be would be something like a Kotlin+Arrow with heavily reduced syntax and features, eg

no exceptions (use Either or a correct Result type)

no loops (use map, fold etc)

no nulls (use a correct Option/Maybe type)

etc etc

= in such a language, we learn that methods return things, those things will be what they say they are (guaranteed by the compiler), they will tell you what you can do with them, and if a program compiles, you can be pretty damn sure it works as intended

insert "all the languages are broken, I should create a new language" meme here...


C# gets pretty close with NRTs, pattern matching, terse record declaration and task-based async syntax, lambdas and Result libraries if you like those. Also nicely builds to self-contained binaries, both JIT and AOT.


> Why do you unwind the stack? Something terrible happened? Definitely not,

Definitely do.

In Go we just have to emulate it, badly, by manually writing code to forward the error up the stack so you can finally top-level print “error bad thing happen” or maybe some unholy stringification of wrapped errors possibly collected along the way.


Please explain how errors are fundamentally different so they require drastically different way of returning.


I still can't understand how anyone designing a language can defend self-rolled stack traces as a good thing.


You can in theory unwrap them but that seems to rarely be something people use in the real world


It is basically Limbo with updated syntax, and AOT instead of a JIT.


> Very disappointing.

Disappointing it may be, but productive it is not, considering people can easily move to far better languages.


“Anywhere that accepts an Operation Enum type will just as happily accept an int.”

Hey! Just like in C! I digress, I think the issue is that the author comes from another language where enums are a thing. In go, they aren’t. Enums should be types. Types that don’t infer to an int. Use an interface. Be happy.


enums are handled poorly in so many languages. very confusing.


I generally agree that this is a big problem with Go, so I don't want to quibble too much. But the author acknowledges that the language doesn't have enums and that they're just trying to use this feature like enums (TBF, this is common advice on the internet and a lot of code does this). Instead, the author should be thinking "how do I solve this problem without enums, since they don't exist?"

I'd be willing to bet that there's just a better way to do whatever the actual real-world example they want to achieve is (this was not entirely clear to me from the examples in the post).

Like I said though, that doesn't mean that (real) enums wouldn't be an even better way to do it than whatever the Go way is for a given problem, so I don't want to quibble too much since I think this is one of my biggest day-to-day complaints about Go, but it's worth pointing out that the premise can be flawed and that it's still a problem in the language, these two things aren't completely orthogonal.

TL;DR — Instead of pulling in a code generator and another library, it may be good to think of alternate ways to do the same thing without a lot of extra code footprint.


>instead the author should be thinking "how do I solve this problem without enums since they don't exist?"

Which is exactly what they do in the post.

They still have every reason to complain about Go's oft suggested lame substitute.


> They still have every reason to complain about Go's oft suggested lame substitute.

Well, yeah, this is also the reason they deserve quite a bit of ridicule from actual Go users.


The author considered the Go pattern for “enums”. Found it lacking. Made their own code generator for their preferred pattern.

That’s two options. What are the others that are meaningfully different? You have to be able to deal with simple “sum types” in the sense of: this type could take on the value of one of these X predetermined constants. This requirement doesn’t disappear just because the language doesn’t directly support it.


I can't really think how this could be done.

Other languages either substitute enum with primitive type, string, or use strong type system tricks.

Go does duck typing, so...


Enums suck in a bunch of languages including C#. It has binary compatibility issues that need consideration along with some other gotchas and shortcomings.

So much so that much of the dotnet official stuff, ie asp.net, use static classes with string fields instead of enums.

Unfortunately that doesn't play well with libraries that have enum support like entity framework. PITA.

One saving grace is the ability to create extension methods on enums.


It's actually the coolest thing about Go enums, that they are just that: enums.

Developers with a background in other languages assume all enums' use cases need string representation. Well, no. They are needed sometimes, but not always.

The same with the ability to pass int to the enum. Author says:

> Anywhere that accepts an Operation Enum type will just as happily accept an int.

Well, this is simply not true. [1] You'll have to cast your int into your enum, which is totally fine if it's your intent. Granted, there are plenty of valid cases where you need validated input, especially for the public libraries. But hey, not every code is a publicly facing library, and not all need this validation. Why spend CPU and memory on something that probably won't be needed, and that can be implemented with the existing language primitives?

In the end, the author did a great job of solving his own requirements around enums and even wrote a code generator that helps him generate this for millions of enum types per second. :)

[1] https://play.golang.com/p/Ch-IZ26p0v8


>> Anywhere that accepts an Operation Enum type will just as happily accept an int.

>Well, this is simply not true.

As comment above [1] pointed out `printOperation(2)` is still valid.

[1] https://news.ycombinator.com/item?id=39565640


Not trying to nitpick here, but '2' is not an int here. It's an untyped constant, converted to the enum type at compile time. But yeah, it's a valid case if users of your code are in the habit of typing magic numbers into a function that expects an enum.

My understanding is that on a practical level, these kinds of issues arise from misusing types (or not caring about them) and naively putting variables of one type into the function that expects another. Examples with number constants typed in manually do not hold ground.


This feature that technically doesn’t exist sucks


I love Go. Especially how the devs stubbornly refuse to learn anything from Java, but stumble boneheaded into everything that Java solved over the years. Generics? We don't need that .. (time goes on) .. okay, damn it. Here! Generics! Enums? We don't need that, just do iota/integers!

How long will it be this time until the Go devs accept that Java Enums are a safer and better abstraction over integers for the cases where you'd want an Enum? And that they allow something like EnumSet, which are type-safe bitsets, without everyone having to do that by hand?


I think boneheaded is a good way to describe the evolution of go. It seems like the original authors were convinced most of the complexity of modern languages was unjustified and have slowly proven themselves incorrect over the years


Not a great take. The go team made a lot of decisions that weren't mainstream at the time, and nailed them. Fast builds, native binaries, language simplicity, new concurrent primitives, interface model, defer, no build flags, package system.

Yeah it has evolved a bit since, but keeping the language simple is a worthwhile goal, so they didn't make rapid changes. It was intentional and thoughtful. If you want lots of language features, pick another language. I'll take my simple one.

Also: go enums do suck.


> new concurrent primitives

You could make a case that concurrency primitives weren't mainstream in programming languages at the time. But there's not a strong case for saying that Go introduced new concurrency primitives unless you just ignore the history of programming and programming languages. Nothing in Go's concurrency model was new. Not quite mainstream, sure. But not new. [EDIT: By primitives I take you to mean built-in to the language, not brought in via libraries like pthreads or something.]

I'd also question the statement that "native binaries" were not mainstream. That seems to ignore a lot of code out there, including the C++ code that Go was (in part) meant to replace at Google.

Defer as syntax is maybe new? But some form of finally construct was in a lot of languages used at the time Go was developed. Defer flattens the code by reducing indentation levels, but it introduced nothing new in terms of concepts that weren't already being used by programmers of mainstream languages.


Faiiir. Nothing was a net-new concept. But the package they made was quite unique. Garbage collected but always native. No thread access, native channels and coroutines instead. Defer is pretty much net new in language design terms. No while loop?!? “If err != nil”!!? Lots of bold ideas, in a good package, and it worked so well. Calling the evolution boneheaded dismisses how hard it is to make so many opinionated bets in one go, and still make something successful.


Just pointing out that it was not really as novel as you seem to believe it was at the time it came out.

> Garbage collected but always native.

Ok, sure. There were no other native garbage collected languages. Ignoring history, this is true.

> No thread access, native channels and coroutines instead.

If we ignore history again, also new with Go.

> Defer is pretty much net new in language design terms.

I can't think of an equivalent in the form of syntax, so sure. This is a point to Go. It's a small change, but useful for flattening code.

> No while loop?!?

I don't know why the exclamation mark. They have one named type of loop with `for`, but they definitely have a while loop:

  for x <= 10 {
    ...
  }
That's a while loop, it's not an infinite loop, it's not a do-while loop. That they reduced their looping constructs to one name (and then determine which actual loop kind by what's between `for` and `{`) does not mean they actually removed while loops. This does simplify the syntax, maybe.

> “If err != nil”!!?

[edit: missed this one]

  if (some_c_lib_fun(...) == -1) {
    // check the errno
  }
> Calling the evolution boneheaded

I didn't. Why are you putting this here?


defer is a poor man’s C# IDisposable (or even IAsyncDisposable)

    // The file handle will be freed at OS level when exiting current scope
    using var file = File.OpenHandle("somefile");


defer is best compared to try/finally or unwind-protect and friends. You can defer any action, not just whatever the IDisposable happens to cleanup. It also gets access to the lexical scope for its deferred actions.

You could imitate that with IDisposable, but it would be overkill compared to just using finally (in C#).


^ thread


I totally agree that Go hit the nail on the head with a lot of things, including the list you provided (minus the package system). I'm not convinced the evolution was intentional from the start, though. From my memory, the _attitude_ of the Go ecosystem was that generics were not worth the complexity (for example). I don't have any concrete evidence of that; it's just the vibe I got from talking to folks about it.

I also have great distaste for error handling in go but that’s a distinct argument to have.


>>It seems like the original authors were convinced most of the complexity of modern languages was unjustified and have slowly proven themselves incorrect over the years

Python was the same way a few years back. People would give elaborate lectures on why Perl's features were so bad, only to agree to add many such features to Python within a decade, even at the extreme of breaking backwards compatibility.

This seems to be a common arc for so many things. When you start, you are all about principles; as you age, you realize the practicalities of everyday life demand lots of tradeoffs and deviations from founding principles.


> convinced most of the complexity of modern languages was unjustified

You don't think that most of the complexity of modern languages is unjustified?


I think most modern languages have complexity in the wrong areas. The type systems are not complex enough to capture the things I want (algebraic data types with exhaustiveness guarantees).


Not even Java. The entirety of programming language development. Go is a language written by very good software developers but very bad language designers. It's an entire language of "why don't you just..." statements. Errors? Why don't you just return a value? Generics? Why don't you just use duck typing? Packages? Why don't you just use vendoring?

In some cases these statements have some merit, but, as in most cases, they demonstrate that the authors didn't really do their homework before making a language. Or they willfully ignored all of these issues. I don't know which is worse.


> Generics?

Go does have generics, though.

> Why don't you just use duck typing?

Go doesn't have duck typing. It has structural typing, which is not duck typing. Duck typing is dynamic typing (at runtime), structural typing is static (at compile time).

> Packages?

Go does have packages.

> Why don't you just use vendoring?

In Go it's recommended to use versioned modules, not vendoring.


It has generics now. It took them a solid 10 years to get it. Packaging took them what, 8 years? They had that ridiculous GOROOT stuff for the longest time. And interfaces are basically duck typed.


Go interfaces are not duck typed.


So the inventors and maintainers of V8 and C are very bad language designers? I don't like either of those languages, but those are some pretty strong words.


Did any V8 people work on Go? And V8 is an implementation of an existing language, so that fits under "very good software engineer, very bad language designer". As for C, well, it's not a fantastic language. Not to mention, it's been 50 years since C was created, ignoring those 50 years of progress to build essentially C with GC is pretty poor language design.


Robert Griesemer worked on V8 and Ken Thompson invented C. V8 is obviously not a language, but it is a massive project to make it as obscenely fast as it is and used in as many places, which should give Griesemer some credibility. I personally don't like C, but everything is obvious in hindsight and it's used everywhere, so you gotta give some credit to Thompson as well.


I'm struggling to understand the context of this comment. What do the inventors of V8 and C have to do with this topic?


> Go is a language written by very good software developers but very bad language designers

That's exactly why I love it so much


> We don't need that .. (time goes on) .. okay, damn it.

To prove your point further, at the time I got the impression that most of the community was against that decision and didn't see the point in introducing generics in the language.


> Generics? We don't need that

You make it sound as though the developers were opposed to generics, which isn't accurate. Perhaps some in the community expressed such sentiments. The plan has always been to possibly include generics at some point, which they then did.


You cannot expect to get the Turing award for just creating a language that uses stuff already known and just changes the syntactic sugar.[1]

To be explicit though, this comment is spot on. As someone who was part of the original Java development group when it was called "First Person Inc", I found the language "equivalence" concept debates the most interesting. For example, is Boolean a first class type? Or is it just a one bit integer? Is integer always signed? If you have 1 bit integers, 8 bit integers, 16, 32, and 64 bit integers, why not make 1024 bit integers a type too? Why is the number of bits fixed? If you want to be super radical, is it bits in an integer or is it digits? Is the integer type (radix, digits)? At one time there were discussions about real (signed), integer (unsigned), frac (fractional) and float (split).

And then a product manager type walks in and says something like "Love the architectural purity y'all are going for here but nobody else uses all these things so let's not make something that is so complicated we'll never ship it."

The author does a good job of exploring the characteristics of "good" enums, and I think it would be even better if it was understood that if your language is going to be used to implement finite state machines (which most programming languages do) then having strong protections against injecting invalid states into those machines is essential. If the language provides a way, that is great, otherwise you end up like the author did generating 30 - 50 lines of code for something that should take 3 - 5 lines to express.

[1] This is an inside joke, IYKYK


Forget about Java, not even Pascal or C enumerations from 50 years ago!


Java generics aren't even proper generics. For that better look at Rust and C#.


Everything in Java is a compromise. The leaders of Java will often admit that. Josh Bloch openly talks about Java being a working man's language, a blue-collar one. Every idea has to be tamped down to make it fit Java's purpose (Bloch has said that they blew Java's complexity budget on closures and wildcards). BGGA was the real proposal for adding closures to Java, but instead it went with CICE. Java works, and it's good enough to work in, and it's got enough jobs in it. But no one thinks it's the perfect language. There are no Java equivalents of C++ or Haskell die-hards.


This sounds like a strange way to excuse deficiencies of a language in verbosity or otherwise.


In what sense? Because they only apply to non-primitive types?


I'm assuming because of erasure?

In C#, List<T> and List<U> follows the same assignment rules as T and U, and at runtime are represented by distinct types. That means that going from List<T> to object to List<U> causes a runtime error at the point of casting.

In Java, every generic type is erased to object at runtime, so the runtime type is just List, and you could cast List<T> to object to List<U> and only get an error later, when you try calling U methods on the contents of the list.

(Yes in C# List is a concrete vector type and in Java it is a random-access collection interface, but that is not relevant here)


The delayed error happens only when you ignore unchecked warnings, which would have been compiler errors if not for backward compatibility. One can turn them into errors with `-Werror`.

The type erasure has occasional benefits, like allowing objects that are polymorphic in their type argument when that’s still safe semantically (a simple example being emptyList() and emptySet()), where the type system isn’t expressive enough to otherwise allow it. This is a bit like the “unsafe” escape in other languages.


if T and U don't have the same erasure the compiler will forbid the cast

if they do the compiler will warn you that an List<T> to List<U> cast is naughty

but in that case the only methods you can call on it are that of the erased type anyway

in practice I don't think I've ever seen a bug as a result of this type of erasure (and I've probably worked with at least several million lines of Java)


To allow backward compatibility Java introduced Generics with type erasure, which in short means they only exist at compile-time, not at runtime (there are some hacks around that, which various devs have used with great success to still get the information). That is another reason to just start with Generics from the beginning if you design a new language, so you won't have compatibility problems when you introduce them. It's not like Generics were a controversial feature when Go came out.

C#, which is often cited as an example of "generics done right", chose another path that allowed generics at runtime: it made a hard break and, iirc, just threw backward compatibility out of the window. The reason Java's designers didn't do that is not only that Java introduced generics far later in its lifecycle, but also that Java has always followed the hard rule that breaking backward compatibility should only ever be a last resort, and never happen between two directly consecutive versions.


IIRC Arthur Van Hoff, one of the original Java developers, actually advocated for the inclusion of generics in the initial version, but it was dropped due to time constraints. It's one of those features that a statically typed language will always regret not including from the beginning.


The "threw backward compatibility out of the window" happened in .NETFW 2.0, in 2005 (it did not, non-generic code that targets the pre-2.0 spec would work even today, the SDK is dead long ago but copied verbatim it would just run).


No, Java generics are basically syntactic sugar over casts, which is why types are erased at runtime when you're trying to debug. Performance also isn't as good as for C# generics since the Java approach limits optimizations.


Haskell also has type erasure. It’s an implementation detail, or are you saying that Haskell does not have proper generics?


I don't know the specifics of Haskell's implementation but if it's mostly the same as Java's then yeah?


That's a minor implementation detail that devs almost never have to think about.


You'll run into it with generic arrays[0] in Java which are reasonably common.

[0]https://www.baeldung.com/java-generic-array


As someone who works with Java all the time, I'm the first to admit that something like TypeToken in GSON, or comparable things in other libraries, is not the greatest of things. I've also more than once wished I could do if (xy instanceof List<Something>), which you cannot do in Java. Can you work around it? Sure. Do I understand why Java has it? Yes. But "minor"... no, it's not so minor in my experience.


Go doesn't have enums and as such they cannot suck. Something that is non-existent can't be good or bad. The title is clickbait. Go has constants, and they have a great feature for defining constants (iota).


> Go has constants

That's literally what enums are: A set of named constants.

You might be thinking of what is traditionally known as sum types, which some people have recently started calling enums[1]. Indeed, Go does not have sum types.

[1] Presumably because of Rust using the wrong term when specifying its sum types


is there any further information on why Rust and other recent languages have started using `enum` to refer to sum types? I don't use Rust or TypeScript (edit: apparently TS doesn't have this, my memory is bad) or any of those languages and it's been very strange to see this redefinition occur


Maybe to appeal to C and C++ developers. Rust makes its syntax superficially similar to C/C++ syntax in many other ways: pointer/reference syntax, declaration of "struct" types, generic types using <T>, curly brace block structure, and the naming conventions enforced by their lints. To be fair, many of these traits of C and C++ are also copied by other programming languages (e.g. curly braces). But they could have gone in a different direction and had pointer and record syntax more like Pascal, or made a syntax more like OCaml, Standard ML, or Haskell.


I think it might be due to the O'Caml influence on early Rust. They call them enums there[0].

[0] https://www.ocamlwiki.com/index.php?title=Enums_in_OCaml


They are called variants in OCaml (and inductive data types in Rocq).

But way more important: this OCaml wiki is an AI generated mess full of, well, bullshit. https://discuss.ocaml.org/t/whats-up-with-ocamlwiki/13605


Ah, I see. That's sad :(

... and now that you mention it, I do remember the variants terminology, esp. around the polymorphic variants feature. It's been 20+ years since I used OCaml, I'm afraid...


My best guess is Java.

Edit: I guess if you've never seen that this is, uh, controversial. Or something. Anyway, Java enums are full-strength classes, look at the planet example here https://docs.oracle.com/javase/tutorial/java/javaOO/enum.htm...

This is more like a Rust enum than a C one, I think you'll find.


Typescript has actual enums. They behave just like Go's (for better or worse).

It's not clear why Rust got confused.


yep, sorry about that, I misremembered TypeScript as having the same "kind of enums" as Rust.


TypeScript has discriminated unions if that's what you were thinking of

https://www.typescriptlang.org/docs/handbook/2/narrowing.htm...


The article is about enums, which are lacking in Go.


No, Go definitely has enums. It is sum types (that some people have recently started calling enums) that Go lacks.


Go does not model enums as separate types (like e.g. Pascal), they are essentially just integers like C.


No, Go enums are definitely separate types. Sure, technically there is also an integer (or some other base representation) hidden in there somewhere, but that's what an enumeration is. Without that you don't have an enum.


Creating new types wrapping int is not really the same thing. It's not a closed set. Presumably one could define additional overlapping constants with the same integer type elsewhere?


Go types do not support value constraints, no. That has nothing to do with enums, though. That's a different feature altogether.


It has a lot to do with enums, especially if you are claiming statically typed enums. When defining a type, more often than not, we want to define the values that make up the set. For example, 'type boolean = true | false'


> we want to define the values that make up the set. For example, 'type boolean = true | false'

Sure, or, more relevant, `type monthOrdinal = 1-12` or `type email = {string}@{string}`. Any advanced type system will allow for that, of course, but Go does not. It does not even pretend to claim to be an advanced language. It has, quite explicitly, chosen to not be.

Yes, you are right that if Go had value constraints then an enum type could utilize those constraints, but, again, nothing to do with enums themselves. You are confusing unrelated features.


> You are confusing unrelated features

Actually I think you are. For example, almost all statically typed languages since Pascal do not have value constraints but support typed enums as closed sets. There's no advanced type system needed - no need to define enums as integers and then put additional constraints in the type system to try and restrict this. There is also no need to model enums as integers in the type system in order to use integers as a runtime representation.


You can't have closed enums without value constraints. Yes, some languages have been lazy and provided value constraints only for enum types.

Which is an interesting choice: Give a noose for developers to hang themselves with for every single other type other than enums – the types they are going to use most often – and not think twice, but then go full on helicopter parent when using enums – the one type that isn't particularly interesting.

It's a neat parlour trick, don't get me wrong, but I guess that's why almost all of the popular statically typed languages since Pascal (C, C++[1], Typescript[2], etc.) didn't bother with closed enums. They put their time into features that actually mattered to developers instead.

[1] Added later in life, granted.

[2] Ironically, does support value constraints except in the case of using enum.


> You can't have closed enums without value constraints

Sum types and closed enums don't need to constrain existing sets of values, they define the set of values. Again, I think you might be confusing the type system with runtime representation.

> It's a neat parlour trick, don't get me wrong,

It's a step towards sum types which are the mathematical dual of product types. Not a parlour trick at all, every modern language should have algebraic data types.


Using integers to model optional behaviour sucks, whatever you choose to call it. It's what one does in assembly language and C. Even Pascal from the early 70s had type-safe enumerations.


> The title is clickbait.

The article is an advertisement for the authors own Go package that addresses the "problem."


    func (o Operation) IsValid() bool {
        if o == Unknown {
            return false
        }
        return true
    }
Why, oh why don't people just write

    return o != Unknown
This is so common in the code that I'm seeing on the Internet, on GitHub etc. Is it because people don't understand booleans?


This is in line with 'guard clauses', which is why it is accepted as idiomatic code.

For this example we only have a single condition, but as soon as you add more conditions it starts to get out of hand.

I prefer the original code because it makes the codebase as a whole easier to read. But I don't think there are any 'hard facts' to support using either of these styles over the other in these simple cases.


There is one simple hard fact: shorter code is faster to read and understand, period. If someone doesn't understand what the result of `o != Unknown` is then they should probably go back and read more about programming. Sorry to sound a bit condescending.


I would’ve written a shorter letter if I had more time. I’ve also noticed people don’t want to think through Boolean logic enough to write things legibly. Or the other extreme is way too many Boolean operators on a line where it gets very hard to parse. Even worse when there are multiple cases like that in one function


That's clearly not true, and one of the primary design choices Go makes. It trades concise code for more readable code.


A good linter can help people see where their code isn't concise and give nudges toward better style. Clearly, being concise can sometimes mean being opaque, but in this case it's not, the boilerplate dilutes the meaning without adding value.

Go was developed as a highly opinionated language in which this style of imperative code is apparently preferred. Similarly, the ternary operator shines as a way to use a single assignment to obtain a value that follows from a concise boolean expression, but the ternary operator is wholly omitted from Golang.

Although, being opinionated is not in itself a bad thing. If you want to create something, it helps to have a viewpoint that gives you something to say.


I think it depends on your definition of "clean code". I do personally like the more succinct version, but "never use a negative condition" makes people do.... stuff like this


Hey, did you read to the bottom of the article? It shows that that was just a temporary solution; the real IsValid code is

    func (o Operation) IsValid() bool {
        _, ok := strOperationMap[o]
        return ok
    }

Tbh it has been updated to use an array instead for string indexing.


Most people aren't good programmers. And most also don't make a concerted effort to improve.

Sorry to say, but that's the simplest truth. Many go years and years in this industry with little improvement to quality/succinctness/readability. If you cared, and tried, you would.

Programming skill is uniquely distinct from domain knowledge, by the way. e.g. all of the research code written by domain experts that is full on spaghetti.


They don't use linters either.


Pascal in its original 1970's design,

    type myEnum = (value1, value2, value3, value4)
Naturally that is too advanced and slows compile times.


Wirth regretted it later and didn't add it to Oberon. Quote from "From Modula to Oberon":

"Enumeration types appear to be a simple enough feature to be uncontroversial. However, they defy extensibility over module boundaries. Either a facility to extend given enumeration types has to be introduced, or they have to be dropped. A reason in favour of the latter, radical solution was the observation that in a growing number of programs the indiscriminate use of enumerations (and subranges) had led to a type explosion that contributed not to program clarity but rather to verbosity. In connection with import and export, enumerations give rise to the exceptional rule that the import of a type identifier also causes the (automatic) import of all associated constant identifiers. This exceptional rule defies conceptual simplicity and causes unpleasant problems for the implementor."


> enumerations give rise to the exceptional rule that the import of a type identifier also causes the (automatic) import of all associated constant identifiers

I am confused by this assertion. I mean, if I had a module `Source` defining an enum:

    type MyEnum = (value1, value2, value3, value4)
and another module wanted to import it:

    import ( MyEnum ) from "Source"
I don't see how I have automatically imported all of the associated constant identifiers. Unless he was assuming that this would force me to import `value1`, `value2`, etc. as distinct identifiers? But it seems these ought to be namespaced inside `MyEnum`, e.g.

    MyEnum.value1
And one could easily imagine an import syntax to selectively import Enum values if desired:

    import ( MyEnum: ( value1, value3 ) ) from "Source"
Of course, I'm just making up syntax, but I hope the meaning is clear.


I know, and there is a reason why my favourite descendant from Oberon is Active Oberon, and not what Wirth pursued after 1992.

Oberon-07 minimalism doesn't make Go better.


You can disagree with design philosophies, but Wirth and the Go designers probably have thought more about these things than you.


Appeal to authority.

Also note that none of Wirth's Oberon variants have achieved any commercial success.

Modula-2, which had enums, on the other hand did enjoy limited success, across UNIX, PC and Amiga, and is nowadays even available as a standard GCC frontend.

Go would have been a failure if the authors weren't Google employees, like it happened with their Limbo.


Dart is a really good counterexample. A dead language, yet it was created by Google. What gives?


Everyone giving this example keeps missing the Google politics involved: the team lost the support of the Chrome team; language development was rescued by the AdWords team, which had just migrated from GWT to AngularDart before this occurred; key language designers like Gilad Bracha and Kasper Lund left Google; and Angular migrated to TypeScript, leaving Dart's only reason for existence being powering AdWords, until Flutter came to be.

Nowadays I would bet there are more people using Flutter/Dart than either Xamarin, Cordova or React Native on mobile apps.

Also, anyone using JavaScript tooling like Sass is dependent on Dart, at least until they remember to rewrite it in Rust, as is now fashionable in the JavaScript tooling ecosystem.


> Naturally that is too advanced and slows compile times.

Sarcasm ?


On the spot.


>>> But what you notice here is we have no string representation of these Enums so we have to build that out next, for this I just use a map[Operation]string and instantiate this with the defined string constants.

Cries in I18N...


Or just define the constants as string constants in the first place: "type Operation string"



