Enums, immutables, and generics aren't good because they're exciting. They're good because they give us expressive tools to write descriptive, type-safe systems that manage state better.
I don't like "modern" systems because I have a fetish for novelty. There's nothing novel about these concepts, they've been around since the 60s and 70s. I like these tools because they improve my ability to reason about the code, but more importantly they let the compiler and other static analysis tools reason about my code.
I am getting old and lazy. I want the compiler to do more for me, not less.
What I see is a situation where Go is gaining traction from two communities:
a) people who attempted to build large systems at scale in dynamic, late-bound languages like Python, Ruby, NodeJS, etc., and hit the wall from a performance and maintainability POV. I could have warned you...
b) people who came from the Java world and got frustrated with the language and tooling there
People coming from a) especially but also b) to some degree will be perfectly comfortable with Go missing the nicer aspects of modern static typed languages because they never had them in the first place.
As for...
"Go doesn’t have a virtual machine or an LLVM-based compiler."
How is this any kind of pro or con? It's just an implementation detail.
Exactly. I don't want sum types because they're exciting, I want them because they're the simplest way to communicate to the compiler that "my data can either be this or this and nothing else", so I don't blow my foot off by accidentally making it something else.
When I was younger and less jaded, it actually was fun and exciting thinking up ways to hack around language limitations like that. Now it's just frustrating.
That actually surprises me a lot. I've seen plenty of critique about lacking generics and exceptions, while haven't seen much complaint about ADTs.
From my rather dilettante point of view, they are neither too complex for a "philosophy of simplicity", nor do they add much compile time. It seems to me they could even be implemented as syntactic sugar on top of type casts.
I certainly cannot see them being more complex than generics.
Yeah, it's unclear to me why there's not more demand for them either. I think it might be that most Go users with prior static-language experience are coming from Java, where there aren't sum types either.
From my naive perspective, they seem like an easy win, and are 100% more important to me than generics.
The problem with this is it's extremely limiting and conflicts with interfaces. So it's not very useful: you can't define enumerations with no associated data, or enumerations with the same associated data types, without the syntactic overhead of explicitly wrapping and unwrapping those types. So you'd end up with
type thing1 int
type thing2 int

type foo either {
    thing1
    thing2
}

var f foo = …

switch v := f.(type) {
case thing1:
    data = int(v)
case thing2:
    data = int(v)
…
and frankly that's a bit gross.
I think `select` would be a better basis for dispatching than switch as it already supports getting data out of stuff, and it better represents the linear top-to-bottom dispatching.
I see; that's because in this case we'd be fusing the discriminator with the payload.
With destructuring pattern match constructs you'd be binding variables to "inner" members of the sum type.
I do understand that. I'm not so sure it's so important, compared to just being able to say that a box can contain either A or B or C.
Interfaces are great when you don't care what a box contains, as long as it quacks like a duck.
Sometimes though you need to carry around one out of N types of something and currently all you can do is to use an interface{} and deal with the possibility of a runtime error if somebody breaks the invariant.
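A minimal sketch of that runtime failure (the "box" and its intended invariant are made up):

package main

func main() {
    var box interface{} = "hello" // intended invariant: box holds an int; nothing enforces it
    n := box.(int)                // compiles fine, panics at runtime: interface conversion
    _ = n
}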
> I do understand that. I'm not so sure it's so important
It absolutely is, even more so because of Go's interface. The difficulty of properly discriminating between interfaces extending one another is a reason the FAQ gives for rejecting union types.
> compared to just being able to say that a box can contain either A or B or C
I'd argue that this is by far the lesser use case, and furthermore trivially and completely subsumed by the alternative.
> Sometimes though you need to carry around one out of N types of something and currently all you can do is to use an interface{} and deal with the possibility of a runtime error if somebody breaks the invariant.
And sometimes you need to carry around one of N values some of which have overlapping contents and currently all you can do is get bent.
I believe sum types are not added because they come close in functionality to interfaces, the idea being that if something should be X or Y, you make an interface that X and Y implement.
That's fair for one use, but the other, way more common in Go, is the whole "I'm going to return either an object OR an error". There's no common interface between the two; it's a distinct two options. Because Go has no native support for sum types, you get all this nonsense where every function returns a tuple of an object and an error, with the implicit assumption (not at all checked by the compiler) that if the error is nil, then the object is valid. It's awful.
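A minimal sketch of that convention (the function and path here are just illustrative):

import (
    "io"
    "os"
)

// Nothing in the types says "err == nil implies f is usable";
// callers simply have to honor the convention.
func readConfig(path string) ([]byte, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer f.Close()
    return io.ReadAll(f)
}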
That's not even true in the stdlib: there are some io errors that aren't errors per se, and at the same time perform an action and return a value, e.g. short writes.
Sure, and in those cases you could continue to return a tuple. In fact having those cases not return a Result<T, E> when everything else does would actually make it more discoverable; right now people assume err means failure.
There is some similarity, but it is so agonizingly superficial. At their core, they're for two very different, arguably orthogonal, purposes, and they behave in two very different ways. Sum types are for composing datatypes, and interfaces are for abstracting behaviors.
In practice, that means that there's just not much overlap between their semantics. Sum types let you say, "A Widget is either a Foo or a Bar, but nothing else." Interfaces give you no way to set boundaries like that. They say, "A Widget is anything that declares itself to be a Widget." And then you can declare Widgets Foo and Bar, sure, but anyone else can come along later and create Baz and Bof.
Interfaces, on the other hand, place restrictions on behavior. You say, "A Widget must support these operations," and, if Foo and Bar are Widgets, they must support those operations. Sum types don't let you do that. A sum type Widget with elements Foo and Bar places zero restrictions on what Foo and Bar look like. They could be literally anything.
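To make the contrast concrete (all names hypothetical), here is an open interface next to the closest Go gets to a closed set, the unexported-method trick:

// Open: any package can declare a new Widget at any time.
type Widget interface {
    Render() string
}

// "Sealed": only this package can implement widget, approximating a
// closed sum, but switch sites still get no exhaustiveness checking.
type widget interface{ isWidget() }

type Foo struct{}
type Bar struct{}

func (Foo) isWidget() {}
func (Bar) isWidget() {}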
The question, "What would happen if the elements of a variant type were themselves interfaces?" leaves me wondering if the authors' consideration of variant types consisted of taking a cursory look at Scala, which does (confusingly and hackily) co-opt Java's inheritance mechanisms in its implementation of sum types. Which does lead to some serious language warts. There are plenty of other languages which have both interfaces (or typeclasses) and sum types implemented more independently, though, and it does not typically lead to confusion.
That last paragraph is also somewhat bothersome, and makes me think once again that this response is more of an offhanded dismissal than a considered answer. The full question is essentially, "Why don't you implement this feature that would greatly increase the language's type safety?" and the response is, "Because you don't need it. See, if you just abandon type safety, what you can do is..."
I suspect that the real answer, even if the FAQ authors or the people who designed the language don't realize it, is that generics are practically (if not technically) a precondition for an algebraic type system. You could implement one without generics, but it wouldn't be very useful.
There is not a lot of criticism about the lack of ADTs because most people have never been exposed to them and don't even know what they are, or the concepts of product and sum types. Generics are much more common.
The reason they gave for not including tagged unions is that they overlap in confusing ways with interfaces. Whether or not that reasoning is correct, their goal was to make the language dead simple.
Now with the new generics proposal, it seems they have hacked up some crude closed interface types, which are something like union types. :/
Maybe something like a union of struct type T and interface type I, where T also satisfies I. Then you have to specify some extra things / add features to keep it symmetric and make it constructible both ways equally easily. The Go authors have a primary goal of simplicity.
Secondly, they think interfaces cover many of the needs of variant types and they don't want some non-orthogonal feature.
Imagine an interface{} that can be annotated to say "you can only assign values of types A, B and C to it", e.g. interface(A,B,C){}.
Since it's an interface{} it has no behaviour, the only thing you can do is a type assert / type switch.
By extension, you could add some methods: interface(A,B,C){Foo(bool) int}
This fails statically unless A, B, and C implement said interface.
If they do, then this interface has exactly the same semantics as normal Go interfaces, except that only types A, B and C can be assigned to it (and not type D, even if it implements Foo(bool) int).
I think it's just down to population sizes. There's a mass of people who already have (non-higher-kinded) generics and exceptions in a language they already know, so they know what they're missing. Far fewer people have access to ADTs.
If anything, nil checking in Go makes it quite exciting (will I get a nil pointer error somewhere? Who knows!). Sum types would make this quite boring though.
You would never initialize an enum like that, since you would use one of the enumerated values instead. That is, after all, the reason behind defining an enum in the first place.
When could you end up with an enum that has an invalid value? When you get the value during runtime and typecast from an int. In that case, though, you should obviously have a runtime check that verifies the value is legal.
Your example case is something that I can't think of a reason to do. It isn't a case where you would need to be careful and knowledgeable to avoid it - there's just no reason to do it. You would use On and Off instead of defining a variable with this type.
This happens all the time for enums in structs, especially during (de)serialization, where you have to return a zero value alongside an error.
func DeserializeState(raw []byte) (State, error) {
    type Foo struct {
        State State             `json:"state"`
        Data  map[string]string `json:"data"`
    }
    f := Foo{}
    if err := json.Unmarshal(raw, &f); err != nil {
        return ???, err // forced to return *some* State, and every State value looks valid
    }
    return f.State, nil
}

s, _ := DeserializeState([]byte(`{"data": {"key": "value"}}`))
fmt.Println(s) // prints 0, oops
Nobody did back then though. Getting a free C compiler before GCC wasn't easy. Even commercial Unix systems shipped with licensing restrictions on their C compilers.
I bought Pascal, and then Modula-2, for my Atari ST. They were cheaper or the same price as a C compiler. Though C was a better choice for that system, since the OS was written in it and the calling conventions, etc. were all C.
On the Macintosh (and Lisa before it), by contrast, Pascal was the way to go.
So I think part of the reason C won out over Wirth languages back in the late 80s was because it just "fit" better with the systems that were emergent then. The swing towards Unix/Posix or Unix-like machines meant that the syscall interface for most things was defined with C calling conventions, and most example code was done that way.
And C++ also became quite popular, while the various object oriented Wirth / Wirth-like languages were not well standardized or available.
I appreciate your comment. I think it proves the point of how important free compilers and runtimes are. The web took off not due to C, but Perl: it was free and came with many of the servers people rented. After that came the 90s, with Java and other free higher-level languages. At the same time, free OSes like Linux allowed people to be more technical without cost.
By the time this came around, universities taught C++ or Java in their intro to programming courses. Students used those, then found the free versions through discussions with peers.
It really is interesting just how much tooling plays a part in a language's success. It has to "just work" without any kind of crazy build steps. Rust with cargo handles this well, as does ReasonML with esy, Elm, and so on.
One might ask how NPM succeeded and well, it was the only way to write code on the Web, so people had to use it, one way or another.
npm did a fantastic job. Before it, JS "dependency management" meant copying other people's scripts from random websites, embedding them in countless performance-degrading <script> tags, and never updating the code, ever.
On the other hand, the tedium of manually copying <script> tags onto each page of an application naturally encourages developers to try to limit the total number of them.
Nowadays, no one bats an eyelid when `npm install foo` drops 1000 packages in your node_modules directory.
Pricing is the major reason that Delphi is not selling. The version of Delphi that can actually connect to a "client-server" database across the network costs $3,999, plus $999 for annual renewals.
However, since Delphi doesn't sell well we have the classic catch-22 situation - there aren't enough developers and the existing ones are retiring and hardly any new developers are using Delphi. So naturally companies are reluctant to commit to Delphi for new development.... further reducing the chance of bringing in more developers.
At the time, Borland and Watcom C++ just blew Turbo Pascal out of the water. There was no competition, feature- and performance-wise. Also OO, which became another thing.
I think parent meant "Borland c++ blew away Borland turbo pascal". I can confirm this, as in highschool I (painfully) ported my science fair project from tp to Borland c++ specifically for this reason and saw a ~2x speedup.
It caught on during the '80s and mid-'90s. Object Pascal was originally designed for Lisa and Mac OS development, before Borland, TMT and others adopted it.
But then Borland decided selling Lifecycle tooling to Fortune 500 was more interesting than small developers.
Delphi and C++ Builder are still around, unmatched in several of their RAD capabilities, but a couple of generations were lost, even if now they try with the community editions.
- Turbo Pascal's difficulty in dealing with 32-bit x86. Turbo C++ had different memory models (but I think it was still under the 640K limit), and then DJGPP came with protected memory
Ok then we got Delphi. But to talk to Windows you needed the C/C++ interface. And don't get me wrong, Borland C++ libraries were much, much better than the MS VC 6.0 MFC. Really.
But for the times where you needed a "direct line" with Windows, C was the way to go, so maybe that was it.
It's still used in defense and aerospace. Last time I talked to people who were making inertial navigation units (the movement sensors like in your phone, but much better, and for a plane), all the software was done in Ada, millions of lines of Ada.
After using Go for ~5 years, so many little things in Rust blew me away. I thought I was okay with Go's "enums" but then I saw Rust's Enums. Most notably, the real enum type combined with pattern matching was an eye opener over what I've been missing.
Iterators were another one. The ability to express data transformations (maps/filters/etc) in a very concise way blew me away. I had no idea what I had gotten used to in Go, though that's definitely not to say that I didn't feel the pain.
There are advanced features in Rust I could live without, but most of it just feels empowering. The beauty of it, in my mind, is that you don't need to use all that advanced stuff. You can write Rust shockingly similar to Go.
The only thing Go truly nailed in my eyes is green threads. Those will always be better in Go than Rust (though futures are getting way better). Go nailed green threads.
But all the other "lack of features as a feature" left me frequently wanting more tools to solve simple problems. And I was a Go nut. I have a Gopher plushie in my car, for Pete's sake.
This applies in any case where one language is more complex than another. You can write almost any style of any language in C++, for example.
The problem is that every team ends up writing in their own subset of these languages, which means it's impossible to ever really achieve expertise. Each team's definition of the language is different, and no one has worked on every team. Ergo no one in the world is actually a C++ expert at any given company's "version" of C++, even if you know every C++ feature independently. You have to follow the style guide which tells you what subset of the language to use and how to use it. This isn't an insurmountable problem but it is a problem. Rust has the same issue.
With Go, everyone can feel free to use the entire language and every team's code ends up looking and feeling incredibly familiar, making it straightforward to contribute to most parts of any code base.
> This applies in any case where one language is more complex than another. You can write almost any style of any language in C++, for example.
True, but my point was primarily that Rust and Go share many of the same patterns. Structs, methods and interfaces can look nearly identical.
This matters in my view when people think you need to go the most efficient and complex way to achieve similar goals as you might in Go. You were fine with performance loss in Go, so why complicate your life in Rust?
It's, at least to me, a useful lesson. Using every tool is a form of premature optimization. Go forced me not to do that, sure. Rust doesn't, sure. So I do hold more responsibility in Rust than I do in Go, but that doesn't mean I can't learn the positives of Go _(boring code/etc)_ without suffering some of the extremes of their decisions _(no enums, pattern matching, etc)_.
> With Go, everyone can feel free to use the entire language and every team's code ends up looking and feeling incredibly familiar, making it straightforward to contribute to most parts of any code base.
Yea, it's a trade-off I suppose. My problem with that though is when I realized I don't like Go's version of verbosity and spreading out logic. I've had pages full of helper functions just to do some minor iteration mapping, flattening, etc.
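The kind of helper being described, sketched for a single element type (pre-generics Go forces one copy per type):

// Each "minor iteration mapping" needs a hand-rolled loop like this.
func mapStrings(in []string, f func(string) string) []string {
    out := make([]string, 0, len(in))
    for _, s := range in {
        out = append(out, f(s))
    }
    return out
}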
Having every team keep to the same standard of _(in my view)_ bad still feels bad. Consistent, sure, but consistently bad.
> With Go, everyone can feel free to use the entire language and every team's code ends up looking and feeling incredibly familiar, making it straightforward to contribute to most parts of any code base.
And it goes a step further. I read Go library code and it looks just like code I would have written. It's easy to understand and makes sense.
I came to Go from C++ and was so glad about how few sharp objects were lying around: working in software with just one allocator, just one string type. I'm possibly damaged by previous experiences, but ultimately I'm also glad not to have to pretend I'm clever. I just go brrr, solve the problem with for loops, and go home to my loved ones at 5:30 exactly.
Like proper enums might be nice I guess, but really what I love are all the other things which are not there. Lack of enums has not hurt me deeply. What hurt me was languages where someone might conceivably express the COM apartment model and also think it was a good idea.
Do you somehow labor under the impression that Rust programmers stay up at nights and on weekends to learn to use "clever" tools like iterators, enums, and sum types?
These aren't clever tools. They're dumb tools like chisels and hammers. Yeah, you can just use a screwdriver to chip things away or to whack something in, but there are better tools that are purpose-built and let you do the job faster, more precisely, and with less effort.
> The only thing Go truly nailed in my eyes is green threads. Those will always be better in Go than Rust (though futures are getting way better). Go nailed green threads.
If you think Go's green threads are great, you should see Erlang's. In Erlang, if you need to construct a mutual failure domain (have two green threads mutually destruct if one fails), you can use the link() function. Done. You can also trivially choose to have one of them be notified of the failure of the other instead of being destroyed.
The crazy thing is that sum types (Rust enums) and pattern matching have been around for at least 30 years. I'm simply not interested in learning any new language that doesn't have sum types, they allow you to write incredibly expressive and terse code.
For all the complaints about the breakneck change in rust, it's a very "boring" language: all of its features with the single exception of the borrow checker already exist in other languages.
I know this won't be popular to say on HN, but I think Rust has the same problem Perl does. The language has so many systems to learn and a tough syntax that it looks unreadable to people just starting out.
I mean I could see learning Rust being really hard if you only know something like Python or JS. The only "system", as you say, that is present in Rust that doesn't have something analogous in C++ is the borrow checker, and lifetimes still exist in C and C++. Rust is significantly simpler and easier to learn than C++
C++ has a much shorter time to first non-"hello world" program than Rust. C++ has a lot of features, but few of them are mandatory for general development. With Rust you have a pretty steep hill to climb before your first non-trivial program compiles.
C++ and Rust, IMO, have a very similar feature set; Rust just puts it up front as properly part of the language. Those C++ features are pretty much mandatory for general development, and likewise you will find them in most open-source and production projects. Programming without them is just one of the many ways C++ gives you enough rope to hang yourself.
Yes, you could program C++ without even knowing what std::unique_ptr is (and I talk to many college grads with C++ on their resume who don't know what unique_ptr is, or that C++ has more than one type of pointer). But Rust won't let you use raw pointers (outside of unsafe code), whereas in C++ you will be told "make sure you have read Google's 10,000-word style guide before committing any code".
I believe https://news.ycombinator.com/item?id=23715759 works as a response to your point. In my eyes, syntax is the least interesting part of any language; semantics are way more important, and quite a bit of syntax ends up being derived from them, with the rest boiling down to aesthetics. The syntactic complexity that Rust has is there because it is encoding a lot of information, modulo things like "braces vs whitespace blocks" and "<> vs []" which, again, come down purely to style. Also, having a verbose grammar is useful for tools like the compiler and IDEs, because having lots of landmarks in your code aids error recovery and helps glean intent from not-yet-valid code.
It's not any particular feature that makes a language a mess. It's the interaction between the features. It's a bit like mixing paint, it's very easy to end up with greyish poop.
Go was designed by very experienced programmers that understood the cost of abstraction and complexity well.
They didn't do an absolutely perfect job. It's probably true that Go would be a better language with a simplified generics implementation, enums, and maybe a bit more. That they erred on the side of simplicity shows how they were thinking. It's an excellent example of less is more.
Most programmers never gain the wisdom and/or confidence to keep things boringly simple. Everyone likes to use cool flashy things because it makes what can be a boring job more interesting.
But if your goal is productivity, and the fun comes from what you accomplish, then the code can be relatively mundane and still be very fun to write.
Precisely, and this is one area where go fails completely. The features don't interact well at all!
Tuple returns are everywhere, but there are no tools to operate on them without manually splitting the halves, checking conditionally if one of them exists, and returning something different based on each possibility. Cue the noise of subtly-different variants of `if res, err := f(); err != nil` in every function.
Imports were just paths to repositories. Everything was assumed to just pull from the tip of the branch, and this was considered to be just fine because nobody should ever break backwards compatibility. They've spent years trying to dig themselves out from under this one.
Everything should have a default zero value. Including pointers. So now we go back to having to do manual `nil` checking for anything that might receive a nil. But thanks to the magic of interfaces, if a function returns a nil pointer wrapped in an interface, the result will fail a nil comparison check! This is completely bonkers.
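A minimal illustration of that gotcha (names made up):

package main

import "fmt"

type myError struct{}

func (*myError) Error() string { return "boom" }

func doWork() error {
    var e *myError // a nil *myError
    return e       // the interface now holds (type=*myError, value=nil)
}

func main() {
    if err := doWork(); err != nil { // true! the interface itself is not nil
        fmt.Println("non-nil error:", err)
    }
}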
Go has implicit implementation of interfaces which makes exhaustive checking of case statements impossible. So you type-switch and hope nobody adds a new interface implementation. So you helpfully get strong typing everywhere except for the places you're most likely to actually mess something up.
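For example, with a hypothetical Shape hierarchy: add a Triangle implementation later and this still compiles unchanged, failing only at runtime:

import "math"

type Shape interface{ Name() string }

type Circle struct{ R float64 }
type Rect struct{ W, H float64 }

func (Circle) Name() string { return "circle" }
func (Rect) Name() string   { return "rect" }

// Nothing forces this switch to cover every implementation of Shape.
func area(s Shape) float64 {
    switch v := s.(type) {
    case Circle:
        return math.Pi * v.R * v.R
    case Rect:
        return v.W * v.H
    default:
        panic("unhandled Shape") // discovered at runtime, not compile time
    }
}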
Go genuinely feels like a language where multiple people each had their pet idea of some feature to add, but nobody ever came together to work on how to actually make those features work in concert with one-another. That anyone could feel the opposite is absolutely incomprehensible to me.
Given that I am involved in the Rust project I'm very likely biased, but given that I've focused on the learnability of the language (diagnostics and ergonomics) I have a bit of context on this subject.
When designing a language there are intrinsic (what things the project wants to focus on, be they features of the language or the associated tooling that affect the language, like generics or compilation speed) and extrinsic (external impositions like being able to run on certain platforms, or interfacing with existing technologies like being able to run a statically linked binary in Linux or being able to debug using gdb or calling C libs without runtime translation) design constraints. All languages have (or should have) an objective of being easy to learn, pick up and use long term. It might just not be the top priority.
For the sake of argument you can take Python where expressiveness at runtime and clean syntax are prioritized over speed, Go where fast compilation and multithreaded microservices are prioritized over more complex language features, and Rust where fast binaries and expressiveness are prioritized over ergonomics (when push comes to shove this is the case, otherwise you wouldn't need to call `.clone()` or add `&` to arguments when calling a method ever), you can see how these objectives permeate every decision throughout the language.
When it comes to Rust in particular, I feel it is still a boring language despite the appearance of too many features, precisely because of how they interact and fit together naturally. It is not the best fit for every use case, but it is one of the projects out there that is embracing the fact that it can't be as easy to learn as it could be (without sacrificing some of the constraints that make it interesting as a systems language), but we can rely on the compiler being a necessary part of the developer toolchain, making the compiler understand the user's intent when they do things that make sense from an extrapolated misunderstanding of the language, and helping them write the "correct" code instead.

This has the added benefit that reading the code is easier, because you have to "guess" much less about what it is doing. Remember that if the code can confuse a parser, it will also confuse humans. On the opposite end of the spectrum you have JavaScript, whose grammar has a lot of optional or redundant ways of doing the same thing (think semicolon insertion), which makes the act of reading and debugging code harder. That is a reasonable approach in a case like the web, less so in a compiled language that can evolve independently from the end users' platform.
The thing it nails is that mediocre programmers (like me) can easily understand and reason about source code written by others. In the C++ world, this can be very hard and thus time-consuming. Go, on the other hand, lets me focus on the business value as opposed to becoming a language lawyer.
You're right that you can't mix named types with the base type.
However, they are not enums in any way either, since their possible values are not limited to some restricted set: they really are just integers, even more so than in C# or C++, even though there is no implicit conversion.
Your statement implies that errors for using invalid values are guaranteed at runtime as a feature of the language. That entire statement is incorrect.
You really don't get errors... I literally pasted a link to a working, non-erroring example in the comment that you responded to. You clearly saw the code. Did you click "run"?
You only get an error at runtime if you add "-fsanitize=undefined" (or "-fsanitize=enum"), where the compiler will inject some code into your binary.
But the error doesn't actually stop code execution: it just prints a warning!
Here is a link with the sanitizer enabled, and no warning is even printed for using an invalid value: https://godbolt.org/z/NA7FNQ
So, not only is the sanitizer not even comprehensive, it's not actually a feature of C++. It's a best-effort feature from the compiler to add a non-standard runtime code sanitizer to your binary. Warnings for using invalid values are not guaranteed.
You had to use casts to get those to compile. Go will let you shoot yourself in the foot by accidentally writing "z = 3", which is way more likely than accidentally writing "z = (Something)3".
The point is that those languages’ enums are very much integers too. Do those languages allow you to write out as many types of integers literally in your code? No. But that’s not the point being addressed here. You can reread the comment I was responding to, and there’s no hint that I can see anywhere that they’re talking about untyped literals.
C#, C++, and Go all share the same flawed enum representation strategy. Use Rust if you want to get better enums... those other languages aren't any better for enums. C#, C++, and Go enums are all type safe, and they're all perfectly capable of holding completely unexpected values. (Although, C++ in the older permissive mode was not type safe period when it came to enums, if I remember correctly. I admit it's been long enough that I could be misremembering.)
I already addressed your point in detail with you elsewhere.
I agree that being allowed to write any type of integer as a literal is theoretically an ergonomic issue when it comes to enums, but it’s one issue I’ve literally never seen happen even once in practice. I point out that there are linters available to address the issue if it’s one you feel so strongly about. Surely you use linters in every language?
I have been paid professionally for years to work in both Rust and Go codebases. Linters are essential for both languages, and CI is where you guarantee that no lint-failing code is allowed to pass.
I’m well aware of Go’s flaws, but people throughout this thread (yourself included) have made tons of baseless claims about Go. Just because it’s popular to hate on a given language doesn’t make it okay to use incorrect statements for that purpose, such as saying that Go’s enums aren’t type safe. They are a separate type. The language does allow you to write integer literals of any integer type. It also does this with untyped float and untyped string literals as well, as a fun fact.
Go has a number of legitimate flaws, including the absence of both generics and sum types. You could even legitimately complain about untyped literals: they are really nice in some ways, but they do have some trade-offs.
No, it doesn't. Go's enums are type safe. You can't accidentally mix two different kinds of enum values, or accidentally use some random value of type "int" where an enum value is expected. The type system protects you from values of the wrong type being used. This was demonstrated by that Go Playground link.
Literals in Go are untyped until they're used. How they're used determines the type, and then they have a very real type, with very real type safety being enforced. So, if you're using a literal "3" where an integer of type "state" is expected, the Go language specifies that the type of "3" is "state". This is an ergonomic issue when you're expecting exhaustive Sum Types, but not a type safety issue.
Should all literal integers just always be a specific type? Let's decide that all literals should be of type "int". Great. Now you can't type a large, 64-bit integer literal to pass as a value to a function, because that would overflow the "int" type, even though the argument is desired to be "int64".
There are trade-offs to every approach.
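Concretely, the behavior both sides are describing:

type state int

const (
    off state = iota
    on
)

func demo() {
    var s state = on // fine
    s = 3            // also compiles: the untyped literal 3 is given type state

    n := 3       // n has type int
    // s = n     // compile error: cannot use n (type int) as type state
    s = state(n) // an explicit conversion is required
    _ = s
}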
> In C++ or Haskell, implicitly assigning integer literals like that isn't valid.
I can't comment on how Haskell does things, but C++ is more complicated than you seem to think.
"The type of the integer literal is the first type in which the value can fit, from the list of types which depends on which numeric base and which integer-suffix was used."
Go's approach is equally type safe here. It assigns the type to the literal based on the expression the literal is used in.
As I said before: I really wish Go had proper Sum Types. But enums in Go are type safe, contrary to what you have claimed in several comments here.
A desire to restrict the polymorphism of integers is fine, but it doesn't really change the type safety argument at all.
"Enums" here are just another type of integer, exactly like in C# and C++. They're not a separate concept. I wish Go had Sum Types or even just exhaustive, non-integer (as far as the programmer knows) enums, but neither of those are requirements to have a type safe enum. The only difference vs C++ and C# is that Go has untyped literals, which are quickly handed a type based on where they're used.
Enums in Go have type safety, which is the point you disagreed with. You can't assign values of the wrong type to an enum-typed value without explicit conversion in Go. That's type safety.
Linters that fit your team's expectations are a good thing to use in any language, and Go is no exception here.
Would this linter be unnecessary in other languages? Sure. I would argue it's still unnecessary, because I've never even once seen anyone accidentally type a literal value in Go where they meant to use one of the predefined enum values. It's certainly possible, but it hasn't caused me any lost sleep.
Fair enough. I would say if the way someone uses the Go "enum pattern" is causing them to have issues with type safety then their code could probably use a refactor, but the point does stand.
`iota` is one of the strangest features in Go by far for me. So much complexity for such a simple feature with so few applications, instead of implementing even simple enums...
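For anyone who hasn't run into it, the classic example of the complexity in question (the byte-size constants in the style of Effective Go):

const (
    _  = iota             // skip iota == 0
    KB = 1 << (10 * iota) // 1 << 10
    MB                    // the expression repeats implicitly: 1 << 20
    GB                    // 1 << 30
)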
The pure magic of Delphi was in its component architecture, and especially the ability to define properties, with its own sophisticated property pages.
Also, you could specify whether a property has read and write capabilities or just read or write, and even what sort of accessor and setter methods are to be used.
.NET really does not have the component architecture with property pages etc. in the same way that Delphi has, along with the ability to register itself into an IDE, etc. I could be wrong though.
But when you do need them, you can very well use "PhD-level languages". The existence of Go for non-PL-enthusiasts like me does not stop anyone from using whichever language they favor most.
That's not the point. In the given Pascal, you can add N different cases and the compiler will check that every instance of `state` is one of those N cases and nothing else. In Go you can't do that, since it's really just an int, which has 2^32 or whatever cases. There is no way to communicate to the Go compiler that a `state` can be either `on` or `off` but not anything else. You can only communicate that a `state` is an integer (2^32 different `state`s) and that, in particular, state=0 is called off and state=1 is called on. That's useless.
Bool can be either true or false. So your statement that there is no way to communicate to the Go compiler that a `state` can be either `on` or `off` but not anything else is "bool false".
I'm not sure what you're trying to say? Bool can have 2 states, but in software we need things that have 3, 4, or N states too. So we need an abstraction that is like bool but can have any number of cases. Moreover, bool is clearly not sufficient: it not only has just 2 states, it can only represent exactly 1 bit of information. Whereas, e.g., in Haskell "Maybe X" has 2 states, either "just X" or "nothing", but can represent an arbitrary amount of information. (Imagine the Go compiler forcing a (bool, X) pair to have only two states, either "(true, X)" or "(false, null)", such that the compiler doesn't allow you to construct e.g. "(false, X)".) So bool is a very, very specific example of what is being discussed here.
Speaking as someone who technically would fall into b) in your community list: Java now has all of those nice features and tooling from other static languages, and when I tried out Go I found it tedious to work without them, especially generics.
I mean sure, I'd rather be working in Kotlin, but Java is at least acceptable now.
Yeah, I am speaking as a person who was a professional Java developer for years, but the last time I worked in it was 8 years ago, and it was painful given I had experience working with OCaml, Erlang, and Scala. Kotlin wasn't a thing yet.
I've been working almost exclusively in C++ @ Google since then.
Java looks much improved now. But I still feel like there's a culture of complexity and bloat in Java code ... But I work in the Chromium code base and it has the same problem :-)
Old code and programmers with old ideas hold back "real life" Java projects quite a bit from what they technically could be in a more modern form.
Complexity and bloat definitely come with that. I think some folk get used to building EnterpriseTurboWidgetFactoryImpl type horrors and never consider the language doesn't need to do that kind of thing any more... Java being so backward compatible also means it doesn't do enough to discourage that, which is also a reason why Kotlin is so nice: the lowest-effort solution is miles more understandable.
> I think some folk get used to building EnterpriseTurboWidgetFactoryImpl type horrors and never consider the language doesn't need to do that kind of thing any more.
Did it ever though? I feel this junk comes from the Spring side more than pure Java. I'd love to know a good way to get rid of it as Spring is everywhere.
Oh man, I remember when Spring came on the scene and it was a breath of fresh air compared to the 10,000 lb J2EE gorilla that was common then.
No, Java has always had ExcessivePatternFactoryImpl-itis.
And what's worse is that the language never had good support for the patterns that the community seemed to prescribe. So you had an insistence on value and transfer objects and JavaBeans with pointless getters and setters leaking out their eyeballs, but no language support for properties or for automatically managing these data objects. Or a desire to push the visitor pattern etc., but no pattern-matching constructs. Apart from generics, which ended up being excessively complex and practically Turing-complete, the language was horribly anemic and repetitive.
I've seen a lot of that kind of thing in non-Spring code (I can think of at least one codebase where they rejected using Spring as terrible, but their hand-rolled alternative eventually grew to something much worse), and honestly good Spring Boot code looks surprisingly nice and free of that kind of thing.
Maybe it was old-school Spring that started it (it seems to have seen a big uptick in enterprise grossness in the 2000s, so it's an interesting hypothesis), but even Spring has moved on from it.
I write in Kotlin regularly, and occasionally in Swift, but I use Go for anything server-related, if I can help it. I find the stdlib to be an incredible simplifying force for these programs, not least because deployment is trivial. I guess you could say I belong to the 'b' community. My issues with programming circa 2010 were in part related to tooling, but also just bloated, over-engineered code bases that obscure all basic computing behind layers of nigh impenetrable (but still leaky) abstractions.
I can see how if you became fluent in a nicer language like Swift, it would be frustrating to move to Go and find your typical methods for expressing certain patterns are unavailable. They have been sacrificed for keeping the language overhead small, which in turn creates various warts and edge cases that give more ammunition for being frustrated with the language. I accept these tradeoffs when working in Go because I am typically thinking about concurrency and memory overhead in those projects, and Go makes measuring and reasoning about these properties of your program straightforward.
How often does "hit the wall from a performance and maintainability POV" actually happen with Python / Ruby / NodeJS? I mean seriously, how many more huge projects do we need to build until we can show they're a sane alternative? Shopify, worth $120 billion, isn't enough? Maybe Instagram then? GitHub?
When you see the effort some of those companies put in to get decent performance, choosing the right language from the start would have been easier on that front. Look at Facebook with Hack, or YouTube (Python) re-implementing a VM in Go. Shopify is also doing a lot of work to improve performance because of Ruby's limitations. When you pick a language with good performance by default, you don't have to go down that route.
Are there as many examples of the inverse, big successful companies that started off with a "fast" stack and didn't need to optimize the one they grew with? It seems like flexibility is more important than performance until you actually have a winning product.
Ruby is working hard on performance but Sorbet is hardly a common thing. The Rails community isn't gonna adopt types anytime soon imo. To be fair most likely never, at least not as a community. Some companies may go that route but I hardly anticipate a big movement.
Those are all grafted on afterwards, and they don't see much use apart from by the people who made them (i.e. Stripe with Sorbet). Compare that to a language that has first-class support for all of these built in. As you learn such a language you're forced to learn these concepts as well, so more people in the community use them, thereby increasing their effectiveness through sustained development, because they're popular features.
This is a bad faith argument, the parent is not saying to use C, but surely you can concede that there are better languages for performance and are useful for web development? Even Node is faster than Python and it's chugging along well in the web dev community.
Node is probably faster than Python but so what? It comes with its own set of problems and drawbacks.
If you're optimising for performance I don't see why you would go for Node anyway, and if you're not optimising for performance then I still don't see why Node, but to each his own :)
C++ as it is typically written can be arbitrarily slower and may have hidden bottlenecks due to copy constructors and poorly thought-out STL usage. But C++ fanboys refuse to admit this.
Do these sites also have backend services written in other languages for the heavy lifting? Anecdotally, there seem to be a lot of stories to that effect.
And as noted in the article, JavaScript/NodeJS is in a different performance category from Python and Ruby, more like Java or Go. But the simplicity of Go might make the performance and memory usage more predictable.
Shopify is absolutely a Ruby / Rails company. If, out of the thousands of developers that work there, you have a team or two writing some performance-heavy C++ / Go code, that says absolutely nothing.
Wouldn't it actually say a lot? Engineering is about trade-offs, and the trade-off that says "use this special language for 1% of our most critical services instead of the company-wide standard language" is not "absolutely nothing".
Lines of Code is a bad proxy for mission critical.
All of Shopify is mission critical. The code that constitutes the Shopify monolith is mission critical for sure. The company won't function without it.
This is somewhat circular, but: when your team decides to rewrite a component in a more complicated or more obscure (at least at Shopify) language to increase throughput rather than spinning up more instances of the current solution, then you're talking about a component that's even more critical than the other critical components.
It's just common practice: in Ruby, some popular gems that parse XML or JSON come with a C extension that does the performance-heavy work. Most gems don't do this, but when performance is very important, some do.
Us Rubyists see no problem with that, and we don't take this as some kind of incentive to leave Ruby, maybe even the opposite; knowing that you can do 99% of your app in Ruby, and if ever needed, easily fill in the 1% with Rust/Go/C++ is very reassuring imo.
It entirely depends on what you're doing. There are two issues as I see it:
a) Lack of static analysis tools means having to do more testing (manual or automated) for simple mechanical errors.
b) For super high throughput low latency systems, they are not up to snuff. When I worked in real time bidding in ad-tech this was actually a serious concern. I'd reach for Rust (or C/C++) in this scenario now.
Speaking from experience, the JVM is fast enough for real time bidding in adtech. Most major ad exchanges require the latency to be below ~80ms, which is not that hard to achieve.
In contrast to high-frequency trading, there is no competitive advantage in having a lower latency.
Are compile times and program start-up times not a factor?
One of the things I really appreciate about golang (from a completely different field) is how quick the builds are, and how fast binaries start up (it's like I wrote it in C).
Java can compile quickly, a few minutes at most when C++ would take hours, so I am tempted to say that's not a problem.
The startup time is negligible in my experience (a few seconds for the JVM or Python imports). I have to take over slow-starting applications from time to time and it's always because of loading data and doing stupid shit on startup, regardless of the language. It's not a problem for production, because server applications only reboot once in forever.
It's still a problem for microservices architectures, unfortunately, especially if you want to support dynamic scaling of some kind. A few seconds is nothing if you expect that your server will be up forever, but it becomes a lot if sometimes it goes down for a bit to move to a different machine, and that takes seconds for your customers.
Also, JIT languages have a very poor habit of making a terrible first impression because of the warm-up time, especially Java. If you are delivering applications to customers, that becomes a real burden: the very first time they use your shiny new application, everything is moving like molasses, until the JVM decides it's JIT time...
In a microservice architecture you'd probably have more than one instance running at any given time though, and do a rolling restart so there's always at least one instance available.
Yes there are niches where Ruby won't work, even big niches like Kernel development or real time systems maybe.
But for web development, in general, these languages have proven themselves for so long it's getting quite ridiculous now to say they won't work.
As for testing - I disagree. Frameworks like Rails are so easily testable it's a breeze. Java/Spring dependency injection is jumping through hoops just to provide a testable framework; I find it hard to believe it's any easier.
Yes, I don't do web development. At least not mostly.
But I understand that that's what most people out there are doing. They should not delude themselves, though, that the toolsets appropriate for that environment are appropriate everywhere else. I see this bias a lot, even on HN: that everyone now is a "full stack developer" doing this kind of development.
Having the code execution paths radically change by adding an annotation to a method or a class makes it very difficult to reason about what it will do when deployed.
If that annotation you found through Google does what you want and expect, that's great. But if it doesn't, or fails in unexpected ways, debugging it can be a nightmare.
This always baffled me -- when Guice came on the scene, and when Spring also adopted annotations...
The whole original point of dependency injection was to decouple dependency management from the code, to make it easier to test, and easier to reason about and analyze.
DI via annotations goes ahead and sticks them right back in there. And now we have, like you say, magic code that is difficult to reason about.
Yes, that's what makes it so terrible. All of the "action at a distance" complexity of Ruby meta-programming, with none of the concise and easy to read code!
just out of curiosity -- for ad-tech real time bidding, what's your network latency and bandwidth like? I can buy that C/C++ is needed if you are colocated to an exchange, but if you're bidding online, those few fractions of an ms you save in C/C++ vs node.js you could have also saved by locating your server closer to whatever ad-tech exchange you're bidding on.
The issue I had back in this line of work was with garbage collection and was an issue in the 99th percentile.
When you have an expectation from the exchange that you respond in under 80ms, and 25-50ms of that is eaten up by transport, you don't have a lot of time to mess around.
So you spend the first chunk of time optimizing I/O and how you're accessing it.
Then you start looking at computation -- improving caching, etc.
At a certain point you start noticing in your graphs that you're on average doing quite well. But there's those hiccups every Nth request...
And now you're in the game of fighting with your language's garbage collection algorithm...
Try to improve allocations where possible. There are often plenty of small allocations happening in Java that can be avoided (string usage is one major driver). Fewer allocations mean less frequent garbage collections.
Then one hack is to disable garbage collection entirely: let the software consume all 100 GB of the server's memory, crash, and restart. There is no garbage collection pause when there is no garbage collection.
If it's not enough, the last resort is to write native code or switch to C++.
At this level of scrutiny, replacing algorithms is appropriate and if the algorithms are built into the language then replacing the language may be one* way to clear the issue, but you have to do your homework to get there.
> b) For super high throughput low latency systems, they are not up to snuff. When I worked in real time bidding in ad-tech this was actually a serious concern. I'd reach for Rust (or C/C++) in this scenario now.
What did you do? You wrote servers in C? you wrote a Redis for instance?
Optimized for GC and figured it out. But that point also coincided with a job switch; I went to another ad tech company, but on the exchange side rather than the bidding side. And all our ad server infrastructure for that was written in C/C++ (with embedded V8 JS for biz logic stuff). Then that company was bought by Google, and I worked on the exchange side at Google, too, where everything was also in C++.
My successor at the original startup rewrote everything in Python. And I watched from the exchange side as they struggled for two months to meet basic performance constraints. They eventually got it though. It certainly can be done.
It's worth pointing out this was 10 years ago. And in the meantime we've had the usual improvements in machine performance, and SSDs are a thing in data centres, etc.
> How is this any kind of pro or con? It's just an implementation detail.
It is a pro in the sense that Rust developers spend an inordinate amount of time blaming LLVM for slow compilation. Having no virtual machine is a pro in that the steps to install, set up, or upgrade a VM on the target platform are no longer required.
Coming from the world of developing on the absolutely massive Chromium code base where compilation on a non-specialist workstation can take 3-4 hours, but build caching (goma) etc makes it mostly a non-issue [most of the time] ... I have a hard time believing that compilation times are honestly still a major issue for most developers.
There are only a few code bases out there that are massive enough that compilation time on a modern machine is a major impediment to productivity. And in those cases I find that IDE and analysis tools on large codebases are an even bigger problem than compilation speeds...
Also, the VM argument you give makes little sense -- you can have your language run on a VM without the VM being a separately bundled package like Java or .NET. For example: Python runs on a VM. Its own VM, which is part of the Python runtime itself. Now, it's not a particularly good VM, and Python is a late-bound, dynamically typed language so it's also pretty slow, but there's nothing stopping one from having a fast JIT VM with a static language like Go. Not saying one should do that, but it's entirely possible and there are arguments to be made either way.
I think compilation times in template-heavy C++ are still a major headache, especially since even modifications to private fields can trigger massive re-compilations.
> There are only a few code bases out there that are massive enough that compilation time on a modern machine is a major impediment to productivity.
Depends upon what you mean by "major impediment to productivity". I think someone like Bret Victor would argue that if your feedback loop is greater than a couple of seconds, you're toast.
From that perspective, _the majority_ of _all_ codebases are an impediment to productivity.
> It is pro in sense Rust developers spend inordinate time in blaming LLVM for slow compilation.
Some of it really is LLVM, witness the latest LLVM release where Rust lost 10% in compile time and changed absolutely nothing other than the LLVM version.
However, the Rust folks are also painfully aware of just how much slow compilation is the fault of the Rust side. The whole idea of moving to MIR is to enable optimizations to be done with more context so LLVM doesn't have quite so much code that it has to generate and then spend all its time trying to optimize away.
However, Rust is always going to be slower to compile than a language like Go where compilation speed was an actual primary goal.
c) Minimalist-loving hipsters that read articles like this and bandwagon on Go just like any other trend (merits of the language aside). These are the same folks that use a hand crank to grind their coffee beans.
Hand-cranker here. What we seek is bliss, nirvana, not trying to shove square pegs into round holes. We like F#, Haskell and Rust more.
On a more serious note, Go pisses me off every other day. Just off the top of my head: it's pedantic where it shouldn't be, and vice versa. I get that commented-out usages of a variable are not OK for production, but why not give me an escape hatch, like a --dev flag? And it won't complain about dead code at all if you stick an unconditional return in the middle of a function! Ugh.
I mean, strictly speaking I should downvote you for tone... but I am laughing ;-)
I know people who work at Capital One, I wonder how extensive Go usage is there...
I have started looking at job postings recently and I see a lot that have aspirational-Go in their postings. I have managed to avoid Go during my tenure here at Google (which isn't hard since it's not used much at all, despite what people outside of Google seem to think), but I'm starting to worry that if I move on I'm going to have to take a job writing it.
Who is hiring remote for Rust, Zig, OCaml, or Erlang? Those seem more palatable to me? :-)
Before learning a new language I always look at LinkedIn for open jobs. Last week I could find next to nothing for Rust or OCaml. I eventually decided to modernize my C++ skills to be relevant again.
I assumed it was job posts in the vein of "new code written in Go" or "migrating systems to Go", kinda like when somebody says "monolith moving to microservices" but actually you spend 95% of the time working on the monolith.
In Go you use the pattern of prefixing enum constants with the enum category to indicate that the constants are related to each other (e.g. StatusOk, StatusNotFound instead of status.OK, status.NotFound). Which is funny, because just today I came across the quote that "patterns are a demonstration of weakness in a language" in a Go talk: https://youtu.be/5kj5ApnhPAE?t=276. I hope enums will eventually be implemented; generics are coming, after all.
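Concretely, the conventional workaround looks something like this (Status is a made-up example):

type Status int

const (
    StatusOK Status = iota // no namespacing, so the "namespace" goes into the name
    StatusNotFound
    StatusInternal
)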
Variant types are very different from enums. Variant types allow you to define a type that is "one of these N types". Enums are the same thing at the value level: they allow you to define a type that is "one of these values". You can't implement one in terms of the other.
Variant types do have some similar use cases with interfaces, but I don't see any relationship with enums in terms of use cases, so I don't think that applies.
Also, let's not forget that even C has enums, so their lack of inclusion in Go is baffling (especially since they went and implemented the much more arcane yet limited `iota` feature).
Not in any useful sense of the word. A C enum is a typedef and a few named constants; it's absolute shit, and if the choice was restricted to "C enum or nothing", then "nothing" was absolutely the right call.
Go's typedecl + iota is actually a step up from C's enums, because (aside from "untyped constants") you need an explicit conversion from "any random integer" to your typedecl. It's not any more reliable or any safer at point of use, but it will prevent some misuses, or at least force the developer to consider what they're doing.
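A quick sketch of what that buys you (Weekday is a hypothetical example):

type Weekday int

const (
    Sunday Weekday = iota
    Monday
)

var d Weekday = Monday // fine
d = 7                  // also compiles: untyped constants convert implicitly
var i int = 7
d = i                  // compile error: cannot use i (type int) as type Weekday
d = Weekday(i)         // OK: the conversion is at least explicit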
Yeah, I think the confusion stems from the word "enum" being used to mean different things in different langs.
A great illustration is the contrast between enums in Objective-C, which are essentially just named integers, and enums in Swift, whose associated values make them proper sum types.
> "Go doesn’t have a virtual machine or an LLVM-based compiler."
> How is this any kind of pro or con? It's just an implementation detail.
This is actually one of my favorite things about Go. The fact that you get a single statically-linked binary at the end is so much nicer than having to mess around with JVMs and JAR files and CLASSPATHs. Sure, there's a lot of tooling to handle this in the Java world, but it adds significant complexity, and makes life difficult when you need to do something slightly different.
You are underestimating the impact of that choice on language design. When Java was designed, garbage collection was one of its major features, and almost all languages with GC were either interpreted or ran on a VM. Statically linked native binaries with GC and goroutines are relatively unusual in the design space; OCaml and Haskell are among the few doing something similar.
Sure, and if the moon was made of cheese, it would still be a sphere lighting up the night sky.
The fact that the go designers prioritize things like statically compiled binaries is the whole magic of Go in the first place. The language is whatever, it's fine, it's cool. But like the article says, the magic is in its holistic attitude towards process, tooling, and distribution.
> I am getting old and lazy. I want the compiler to do more for me, not less.
Ironically, and for the same reasons, I want the opposite. I want the compiler to stop fighting me on every little detail. I'm exceptionally tired of writing boilerplate interfaces, thinking excessively about what size of integer I want to use, or digging into lambda calculus just to make a stone-stupid compiler (or worse, a borrow checker) happy.
Also, ironically, Go doesn't even do this: it fails compilation on the most useless and pedantic of errors, unused variables and imports. Despite the fact that it's common practice to comment out sections of code during development and debugging, which inevitably leads to unused variables/imports.
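Hence the standard escape hatches while debugging (computeThing is a stand-in for real code):

func debugSession() {
    x := computeThing()
    // fmt.Println("x =", x) // commented out for now...
    _ = x                    // ...so this silences "x declared and not used"
    _ = fmt.Sprint           // ...and this silences "fmt imported and not used"
}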
Funnily enough, that was one of the first things I missed in Java: neither the compiler nor the standard linter (checkstyle) flags unused variables.
Don't get me wrong, I find Rust has pretty high mental overhead. The borrow checker really is not intuitive to me.
For the niche Go is often applied in... basically middleware-type things, long-running services, etc., I wouldn't necessarily jump to Rust.
I think Rust is well suited to lower level systems development, as a replacement for C/C++ there.
What's frustrating about Go to me is that we _need_ a nice clean compiled language with a static type system and garbage collection for services development. To replace Java, etc. But ... not this one. Go to me feels like a huge step backwards, and in my experience with code review with the Go folks at Google, it's a very strident and dogmatic language community, too.
I'd like something like OCaml, but with less functional religion, maybe.
Coming from C and areas where C is used, enums tend to be used with explicitly defined values, because they usually don't represent an abstract set but specific values that have to be respected.
An example at random from a domain Go is used a lot in (networking): when you define an enum of the resource records in DNS, you want the values to match the RFC definitions, as they are used in DNS packets.
With that in mind, Go's approach to enums makes sense: we usually want to label explicit values, potentially of an explicit type; we don't just want an abstract semantic label.
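For instance, a sketch with the TYPE values straight out of RFC 1035:

// The numbers are the wire format, so the constants must match the RFC,
// not merely be distinct labels.
type RRType uint16

const (
    TypeA     RRType = 1
    TypeNS    RRType = 2
    TypeCNAME RRType = 5
    TypeSOA   RRType = 6
    TypeMX    RRType = 15
    TypeTXT   RRType = 16
)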
I haven't touched Java in a long time, but as far as I recall low-level operations on fields, bitmaps, etc. were not a strong point.
Sure, there is value in having your enums have specific values, but even in C they at least get a bit of namespacing.
And yes, Java enums are very verbose if you want to use them for low-level operations, but that is almost entirely the fault of the rest of Java and not of enums themselves specifically.
You still need to be an expert on the virtual machine to not have things blow up. It's another factor of a complex equation.
My last job had a group of admins that thought any Java app from the Apache Foundation was the solution. Whenever things went wrong, their solution was, "just up the max heap size!" However, they're admins and had no idea about the app and the users had no Java experience to actually suggest tuning the JVM.
Yes, this was an org issue and a failure of tech leadership, but nevertheless, the JVM was a point of failure.
Go has a runtime with a GC with a heap size, just like the JVM does. People will end up having to tune their Go runtime GC's parameters, too. Whether the runtime is executing compiled native code, or executing JITted bytecodes is of little relevance to the user.
What you're speaking to is the issues with a _separate_ VM process and a cultural thing with Java generally, not a VM in particular.
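For what it's worth, the Go dial is just less visible; a sketch:

// GOGC=200 ./service    (the environment-variable form)
// or equivalently at runtime:
import "runtime/debug"

func init() {
    // Trigger the next collection once newly allocated data reaches
    // 200% of the live heap - the same kind of knob as JVM heap tuning.
    debug.SetGCPercent(200)
}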
My understanding was that a golang program's heap size will automatically grow as needed, and later (eventually) shrink as memory goes unused.
Is this true of java? Is it possible to run a java program in a way where it is allowed to use up all of the system's heap if needed, but also plays nicely with other processes on the system (i.e. yields the memory back to the OS after it becomes unused)?
Maybe this has improved in the last 10 years, but heap management was IMHO one of the not-great things about the JVM back when I was using it. What's happening is quite invisible to sysadmins used to monitoring traditional Unix processes. To them it looks like a giant, non-cooperative memory hog, when in reality the program might only be using a small % of what top shows allocated to the process.
The original JVM philosophy seemed to be to cohost a bunch of stuff in one monolithic JVM process, rather than run a bunch of separate communicating processes. And in fact this "container" philosophy is what the original J2EE servers operated with.
>How is this any kind of pro or con? It's just an implementation detail.
Not exactly. It's hardly a detail, and affects use.
The first (not having a VM) in Go's case translates into AOT static compilation, and you not needing to have a VM installed to run your programs.
The second (not using LLVM) translates, in Go's case, to faster compile times than LLVM-based compiled languages.
While both have their drawbacks (e.g. LLVM-compiled code would probably be faster), both are pluses for my preferences/use cases - not mere implementation details.
I would love sum types (which would allow for real enums), too. Russ Cox has commented on Reddit on why the Go team considers them incompatible with Go [1].
The first challenge is zero values: For a sum type like (int, float), there's no natural zero value. I think that sum types would have to follow the same design as Go's interfaces; a sum type value has to be nillable. This is unfortunate, but the Go team burned some bridges when it decided to support nils, and they now have to deal with that problem.
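For comparison, interfaces already behave that way:

var r io.Reader       // the zero value of any interface is nil
fmt.Println(r == nil) // true; a sum type would presumably inherit this wart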
Secondly, there's the matter of what a sum type of interfaces means. I think this is solvable, too, and I don't agree that it presents a conflict. Sum types express the range of allowed values. What you do with a variable, once it holds a value, is no different than today:
type Source sum { io.Reader, io.ReaderAt }
var src Source
switch t := src.(type) {
case io.Reader:
case io.ReaderAt:
}
This is no different from:
var src interface{}
switch t := src.(type) {
case io.Reader:
case io.ReaderAt:
}
> Russ Cox has commented on Reddit on why the Go team considers them incompatible with Go [1].
I wish people would stop beating around the bush and own up to these being design mistakes of the language. If you can't have nice things because earlier poor decisions (e.g., supporting nils) have walled you into a corner, that's a problem. It's not an example of "refreshing simplicity", it's a shortcoming of the language. Period. Every language has some warts; just acknowledge them!
This is not a sum type, it's something similar but different, sometimes called a union type, because the point is to store a closed set of different types under a single umbrella.
A sum type has variants with associated data, but the type of that associated data is orthogonal to the variant itself: you can have multiple variants with no associated data, or several variants with the same associated data type (both are very common use cases), which would simply be impossible to express here.
With a proper sum type, this is not an issue because the Reader and ReaderAt associated data would be stored in explicitly and specifically different variants. In Rust parlance:
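// A sketch: the variants do the discriminating, not the payload types,
// so two variants can happily carry the same payload (File here).
use std::fs::File;

enum Source {
    Reader(File),
    ReaderAt(File),
}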
> The first challenge is zero values: For a sum type like (int, float), there's no natural zero value. I think that sum types would have to follow the same design as Go's interfaces; a sum type value has to be nillable. This is unfortunate, but the Go team burned some bridges when it decided to support nils, and they now have to deal with that problem.
The proper way to fix this would of course be the same one C# took: every C# type used to be default-able (although to their credit it was not generally implicit). With the introduction of non-nullable reference types, they had to change the language to handle cases where types would not be default-able.
Of course, for Go that has wider-ranging implications: currently I don't think there's any validity tracking, because `var i <type>` tacks on an implicit zeroing of the value; the language has no concept of "uninitialised values", whereas C# had that even before non-defaultable types. It also breaks the Go team's assumption and assertion that you should be able to add new fields to a struct and have them automatically filled in without callers being aware (also a terrible idea).
My understanding is that there's no real distinction between "sum type" and "union type" as such.
I may be wrong, but whether you have an indirection through a type constructor or not doesn't alter the meaning of the union. In the fictional Go syntax, you could also have the same indirection:
type Reader struct { R io.Reader }
type ReaderAt struct { R io.ReaderAt }
type Source sum { Reader, ReaderAt }
The difference is that Reader and ReaderAt are completely separate types, not type constructors, and that Rust has a shorthand for essentially valueless type constructors, allowing Rust's enums to act both like classic C enums and like sum types. But in Rust, as I understand it, enum variants aren't actually types. In your example, you can't have:
let r: Source::Reader;
Of course, you can do the same thing in Rust, at the cost of readability:
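// The same indirection, spelled out (ReaderAt is a stand-in name;
// Rust's std only has the io::Read trait):
struct Reader(Box<dyn std::io::Read>);
struct ReaderAt(Box<dyn std::io::Read>);

enum Source {
    Reader(Reader),
    ReaderAt(ReaderAt),
}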
> My understanding is that there's no real distinction between "sum type" and "union type" as such.
There's a difference in the ambiguity (or lack thereof), which is the purported reason why Go could not have sum types.
> But in Rust, as I understand it, enum variants aren't actually types.
That's correct. They're just constructors for values of the enum type.
> It's just too late for Go to do anything here, I think. Zero values permeate the language. For example, consider structs. It's normal to do
It’s not too late for anything. Existing types do not have to change and don’t prevent adding new non-default-able types (which would be transitive).
Those types, with those new guarantees, would not allow for the magical zeroing and zero-extending of existing types but they would not break anything.
> Enums, immutables, and generics aren't good because they're exciting.
That's your opinion, and maybe even that of other programmers with good intentions. However, advanced PL features are still sometimes overused because their users find them very interesting and can't resist applying them everywhere.
As an extreme example I read a statement from a Rust programmer about liking framework XYZ, because it creates those nice type-system puzzles and makes things super-safe. However for an outsider this super-safe code could also be undecipherable gibberish.
Overall I think that while type-level help can be good for avoiding errors, there is also a line where the complexity of the type-system features outweighs the complexity of the initial problem. Once you cross that line, programmers spend their time trying to understand complex but less error-prone code instead of simple but easy-to-understand-and-fix code.
Exactly where that line is depends on the problem space. I certainly think it is somewhere between Go's extremely simple approach and the extremely advanced feature sets that some other languages allow for.
For a while I thought an LLVM-based compiler would have been nice, but in reality a lot of the tooling is C- or C++-based, and the runtimes are fundamentally different.
This is why putting Go and C code together is so inconvenient (and slow): the calling conventions and stack setup are very different.
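Even a trivial cgo call makes that boundary visible; a minimal sketch:

package main

/*
#include <math.h>
*/
import "C"

import "fmt"

func main() {
    // Every C call switches from the small, growable goroutine stack
    // to a C stack and back; that crossing is the slow, inconvenient part.
    fmt.Println(C.hypot(C.double(3), C.double(4))) // 5
}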
I really favor the C/C++ binary interface for its simplicity but Go is really good for what it is built for.
But yes, compilers are always a good argument. I can understand why C++ frustrated people: getting this beast to work for you, even after C++11, requires a lot from the developer. Things are a lot better today, though, and they keep getting better.
I agree with this sentiment. A lot of this depends on purely the angle of how you view things.
One example: People claiming the borrow checker gets in their way in Rust because it’s so strict.
Guess what? If your code doesn't compile due to the rigidity of the borrow checker, it's incorrect, I'm sorry to say. It has a data race, a use-after-free bug, or some other invariant violation. You don't want that behavior in your code at all.
I’m happy to offload the responsibility to the compiler to check for this.
> Enums, immutables, and generics aren't good because they're exciting. They're good because they give us expressive tools to write descriptive, type safe systems that manage state better.
The main problem is just that these expressive tools scale poorly with the number of people involved in the code base and with its age.
The time it takes to write and/or duplicate code is just a fraction of the time it takes to read/understand/debug code that has degenerated over time due to the number of people involved and their varying notions of "expressiveness".
And no, a style guide isn't enough, because these idioms change over time and there are a lot of social factors involved. The senior devs, for example, are often the very ones who a few years back decided that this overly expressive code was the best thing since sliced bread.
I completely disagree. Code duplication is one of the worst things for readability, and generics quickly prove worth their weight in gold on this front.
Go code is very hard to read and review precisely because it has so much repetition in which little details can hide. "Is that a regular error check that logs and returns, or did I miss some actual logic there?" "Is that loop just trying to find a value in an array, or is it also doing something else?" Boilerplate upon boilerplate, which you learn to skim over, until it's not really boilerplate and you miss something.
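The canonical block you end up skimming dozens of times per file (fetchUser, ctx, and id are stand-ins):

res, err := fetchUser(ctx, id)
if err != nil {
    log.Printf("fetch user %d: %v", id, err)
    return nil, err
}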
I think Go is pretty well known for being easy to read so I'm not sure what exactly you find hard to read. Duplicated code is not inherently unreadable. Code shouldn't be abstracted away and be made "reusable" before it's been duplicated once anyways. I also believe that the amount of additional code duplication created by the lack of expressive language features is overestimated.
The examples you give don't seem to be specific to Go? In Go the error checks are, as most people quickly point out, very explicit and often just err != nil checks. Not sure how they can be confusing, albeit repetitive.
I'm a bit surprised by the number of downvotes my reply has received, given that this is a pretty common sentiment among very experienced devs in the game dev industry.
Readability has multiple dimensions, of which "difficulty" is only one. The parent post is not talking about golang being confusing, but rather being verbose and low density, which makes it easier for bugs to hide when reviewing.
Yeah, it does indeed. But compared to what? Reviewing high-density code can surely hide bugs as well? Either way, it's also easier to debug low-density code, so there are more things to consider.
The biggest problem when reading Go code is that, like C, it tends to contain a lot of details that have to do with the language implementation, instead of the problem domain.
Errors are bubbled up manually. Loops often use array indices, instead of expressing the desired operations. You often need to use pointers instead of values just to avoid the cost of constantly copying structs.
And every time you encounter one of these things, you get to spend some time thinking about why this or that was chosen, or whether you're missing some detail.
Java is much more readable, because it has far fewer of these concerns. Every time you see a catch block, you know it is there for a reason. If someone is making a copy of an object, you know they wanted to make sure the original isn't changed. With Streams, if I want to write an operation which filters a list, I can use filter(), not 'create a new array, go through the original, every time you find an element matching the condition, write it at some index in the new array, increment that index'.
I think we'll have to agree to disagree on this one. That standard for loops are less readable just doesn't make any sense to me. Neither does the preference for exceptions, which are pretty much the first thing most experienced people turn off and ban in C++.
Sure, most experienced people dislike exceptions, except for Bjarne Stroustrup, Herb Sutter, everyone else who designs the three popular C++ compilers, everyone who designs the most popular C++ libraries (Boost, newer Qt), Java, .NET, Swift, JavaScript, OCaml, Python, and a few others.
And if you think that

j := 0 // filtered assumed preallocated; see the append() discussion below
for i := range someArray {
    if hasSomeProperty(someArray[i]) {
        filtered[j] = someArray[i]
        j++
    }
}

is more readable than a one-line filter() call, we'll just have to agree to disagree.
Language design and lib design are different things than building, maintaining, and shipping a game involving hundreds of programmers. To use Boost is somewhat of an internal joke within the game industry.
You don't need to use indexes like that in Go. You would append to the new array.
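i.e., with the same hypothetical names:

var filtered []Item // Item stands in for the element type
for _, v := range someArray {
    if hasSomeProperty(v) {
        filtered = append(filtered, v)
    }
}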
The thing is that in practice such loops usually go beyond just filtering on a boolean basis, so you combine whatever you want to do with that array into a single full iteration.
Sure, games (and probably real-time systems in general) are one domain where exceptions are not a good control-flow mechanism. But there is much, much more to the software industry than real-time software, and extremely large systems built by huge teams of programmers do successfully use exceptions as a core error-handling mechanism, sometimes in C++, much more often in managed-memory languages.
You're right about the indexes, I should have used append() there.
Related to loops though, the more you need to do in a single loop, the less clear the code becomes in the traditional loop style. Note that stream-style constructs don't iterate more than once either (not that doing M things per iteration vs doing M iterations is necessarily a clear performance win, depending on the size of the array etc).
For example, I would say that the readability difference is even more pronounced between these 2:
allLocalMoney, numLocalPlayers := 0, 0
for i := range players {
    isLocal := false
    for _, localPlayer := range localPlayerIds {
        if localPlayer.firstName == players[i].firstName &&
            localPlayer.lastName == players[i].lastName {
            isLocal = true
            break
        }
    }
    if !isLocal {
        continue // "break" here would stop at the first non-local player
    }
    allLocalMoney += players[i].money
    numLocalPlayers++
}
avg := allLocalMoney / numLocalPlayers
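// vs. roughly this LINQ (C#, using System.Linq; names assumed
// from the loop above):
var locals = players.Where(p => localPlayerIds.Any(l =>
    l.FirstName == p.FirstName && l.LastName == p.LastName));
var avg = locals.Average(p => p.Money);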
Add a little bit of grouping and things will get even worse for the first example. And sure, you could extract the checking into a separate function, but I wanted to come up with something that takes a few operations.
On the other hand, I fully agree that sometimes you just need to do various actions (i.e. anything with side-effects) for each element in a list, and there there is nothing better for readability than plain loops. I just like the option of more explicit transformations when that's what I'm doing.
I think the C# example is pretty dense to parse, tbh; I need to match parentheses and partial results back and forth while mentally parsing it. Also, how would you go about stepping through those iterations in a debugger? It's also fragile, because people will insist on using these constructs even when a loop becomes more appropriate as the code grows; that's why it scales badly across a lot of folks. Things become more and more opaque and hard to debug.
I guess in the end readability is in the eye of the reader. For me the C# one is much clearer about what it wants to do, and the formatting helps me ignore the parentheses entirely (assuming it compiles).
Debugging is not difficult at all: you can put breakpoints inside the lambda and use continue instead of single-stepping. If required, you can also step through the library code, but that is not usually necessary.
And as the code grows, the high-level representation of what the code is meant to do stays clear, while the loop-based version grows in incidental details that you have to take a step back to understand.
Note that I've written commercial software in this style on a team with about 100-200 other programmers. I am not coming at this from some hobby, 3-5 person project experience. It's just that different industries and different areas value different aspects of code. In my case, this is the middleware portion of a traffic generation solution that can simulate all commonly used L2-7 networking protocols, at the scale of a small city and beyond. Of course, the actual traffic generation code is extremely performance-sensitive and is written in C and C++ (and quite a bit of Verilog) with a very different style, probably closer to what you are familiar with[0]. But there are many layers of configuration above that where we usually value clarity and correctness more than raw performance, and where we can afford to use these types of constructs, and our experience has always been that they vastly improve cooperation, not hinder it like you seem to imply.
[0] though we do have a real-time traffic stats analyzer library that is written in template-heavy, boost-heavy C++, and is on the critical performance path with soft real-time constraints, so this is also possible.
I don't like "modern" systems because I have a fetish for novelty. There's nothing novel about these concepts, they've been around since the 60s and 70s. I like these tools because they improve my ability to reason about the code, but more importantly they let the compiler and other static analysis tools reason about my code.
I am getting old and lazy. I want the compiler to do more for me, not less.
What I see is a situation where Go is gaining traction from two communities:
a) people who attempted to build large systems at scale in dynamic late bound languages like Python and Ruby and NodeJS etc, and hit the wall from a performance and maintainability POV. I could have warned you...
b) people who came from the Java world and got frustrated with the language and tooling there
People coming from a) especially but also b) to some degree will be perfectly comfortable with Go missing the nicer aspects of modern static typed languages because they never had them in the first place.
As for...
"Go doesn’t have a virtual machine or an LLVM-based compiler."
How is this any kind of pro or con? It's just an implementation detail.