Enums, immutables, and generics aren't good because they're exciting. They're good because they give us expressive tools to write descriptive, type-safe systems that manage state better.
I don't like "modern" systems because I have a fetish for novelty. There's nothing novel about these concepts; they've been around since the 60s and 70s. I like these tools because they improve my ability to reason about the code, but more importantly they let the compiler and other static analysis tools reason about my code.
I am getting old and lazy. I want the compiler to do more for me, not less.
What I see is a situation where Go is gaining traction from two communities:
a) people who attempted to build large systems at scale in dynamic late bound languages like Python and Ruby and NodeJS etc, and hit the wall from a performance and maintainability POV. I could have warned you...
b) people who came from the Java world and got frustrated with the language and tooling there
People coming from a) especially but also b) to some degree will be perfectly comfortable with Go missing the nicer aspects of modern static typed languages because they never had them in the first place.
As for...
"Go doesn’t have a virtual machine or an LLVM-based compiler."
How is this any kind of pro or con? It's just an implementation detail.
Exactly. I don't want sum types because they're exciting; I want them because they're the simplest way to communicate to the compiler that "my data can either be this or this and nothing else", so I don't blow my foot off by accidentally making it something else.
When I was younger and less jaded, it actually was fun and exciting thinking up ways to hack around language limitations like that. Now it's just frustrating.
That actually surprises me a lot. I've seen plenty of critique about lacking generics and exceptions, while haven't seen much complaint about ADTs.
From my rather dilettante point of view, they are neither too complex for a "philosophy of simplicity", nor do they add much compile time. It seems to me they could even be implemented as syntactic sugar on top of type casts.
I certainly cannot see them being more complex than generics.
Yeah, it's unclear to me why there's not more demand for them either. I think it might be that most Go users with prior static-language experience are coming from Java, where there aren't sum types either.
From my naive perspective, they seem like an easy win, and are 100% more important to me than generics.
The problem with this is it's extremely limiting and conflicts with interfaces. So it's not very useful: you can't define enumerations with no associated data, or enumerations with the same associated data types, without the syntactic overhead of explicitly wrapping and unwrapping those types. So you'd end up with
type thing1 int
type thing2 int

type foo either {
	thing1
	thing2
}

var f foo = …

switch v := f.(type) {
case thing1:
	data = int(v)
case thing2:
	data = int(v)
…
}
and frankly that's a bit gross.
I think `select` would be a better basis for dispatching than switch as it already supports getting data out of stuff, and it better represents the linear top-to-bottom dispatching.
I see; that's because in this case we'd be fusing the discriminator with the payload.
With destructuring pattern match constructs you'd be binding variables to "inner" members of the sum type.
I do understand that. I'm not so sure it's so important, compared to just being able to say that a box can contain either A or B or C.
Interfaces are great when you don't care what a box contains, as long as it quacks like a duck.
Sometimes though you need to carry around one out of N types of something and currently all you can do is to use an interface{} and deal with the possibility of a runtime error if somebody breaks the invariant.
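To make that point concrete, here's a minimal sketch (with made-up names) of carrying "one of N" via interface{}: nothing stops a caller from passing a value outside the intended set, and the break only surfaces at runtime.

```go
package main

import "fmt"

// Describe is meant to accept "either an int or a string", but the
// compiler can't enforce that: interface{} admits any type at all.
func Describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int: %d", x)
	case string:
		return fmt.Sprintf("string: %q", x)
	default:
		// Nothing stops a caller from passing a float64 here;
		// the invariant can only be checked at runtime.
		return "invariant broken!"
	}
}

func main() {
	fmt.Println(Describe(42))
	fmt.Println(Describe("hi"))
	fmt.Println(Describe(3.14)) // compiles fine, breaks the intended invariant
}
```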
> I do understand that. I'm not so sure it's so important
It absolutely is, even more so because of Go's interface. The difficulty of properly discriminating between interfaces extending one another is a reason the FAQ gives for rejecting union types.
> compared to just being able to say that a box can contain either A or B or C.
I'd argue that this is by far the lesser use case, and furthermore trivially and completely subsumed by the alternative.
> Sometimes though you need to carry around one out of N types of something and currently all you can do is to use an interface{} and deal with the possibility of a runtime error if somebody breaks the invariant.
And sometimes you need to carry around one of N values some of which have overlapping contents and currently all you can do is get bent.
I believe sum types are not added because they come close in functionality to interfaces, the idea being that if something should be X or Y, you make an interface that X and Y implement.
That's fair from one use, but the other, way more common in Go, is the whole "I'm going to return either an object OR an error". There's no common interface between the two, it's a distinct two options. Because go has no native support for sum types you get all this nonsense where every function returns a tuple of an object and an error, with the implicit assumption (not at all checked by the compiler) is that if the error is nil, then the object is valid. It's awful
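A small illustration of that convention, with a hypothetical Divide function: the "result is only valid when err is nil" rule lives entirely in the programmer's head, not in the types.

```go
package main

import (
	"errors"
	"fmt"
)

// Divide follows the usual Go convention: the int result is only
// meaningful when the returned error is nil. Nothing in the signature
// records that relationship; it's pure convention.
func Divide(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	q, err := Divide(10, 0)
	// The compiler happily lets us use q even though err != nil.
	fmt.Println(q, err)
}
```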
That's not even true in the stdlib: there are some io errors that aren't errors per se, and at the same time perform an action and return a value, e.g. a short write.
Sure, and in those cases you could continue to return a tuple. In fact having those cases not return a Result<T, E> when everything else does would actually make it more discoverable; right now people assume err means failure.
There is some similarity, but it is so agonizingly superficial. At their core, they're for two very different, arguably orthogonal, purposes, and they behave in two very different ways. Sum types are for composing datatypes, and interfaces are for abstracting behaviors.
In practice, that means that there's just not much overlap between their semantics. Sum types let you say, "A Widget is either a Foo or a Bar, but nothing else." Interfaces give you no way to set boundaries like that. They say, "A Widget is anything that declares itself to be a Widget." And then you can declare Widgets Foo and Bar, sure, but anyone else can come along later and create Baz and Bof.
Interfaces, on the other hand, place restrictions on behavior. You say, "A Widget must support these operations," and, if Foo and Bar are Widgets, they must support those operations. Sum types don't let you do that. A sum type Widget with elements Foo and Bar places zero restrictions on what Foo and Bar look like. They could be literally anything.
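A quick sketch of that openness, using hypothetical Widget types: nothing prevents anyone from adding implementations later, so the set can never be closed.

```go
package main

import "fmt"

type Widget interface{ Render() string }

type Foo struct{}

func (Foo) Render() string { return "foo" }

type Bar struct{}

func (Bar) Render() string { return "bar" }

// Anyone, in any package, can add another implementation later; there
// is no way to say "a Widget is Foo or Bar and nothing else".
type Baz struct{}

func (Baz) Render() string { return "baz" }

func main() {
	for _, w := range []Widget{Foo{}, Bar{}, Baz{}} {
		fmt.Println(w.Render())
	}
}
```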
The question, "What would happen if the elements of a variant type were themselves interfaces?" leaves me wondering if the authors' consideration of variant types consisted of taking a cursory look at Scala, which does (confusingly and hackily) co-opt Java's inheritance mechanisms in its implementation of sum types. Which does lead to some serious language warts. There are plenty of other languages which have both interfaces (or typeclasses) and sum types implemented more independently, though, and it does not typically lead to confusion.
That last paragraph is also somewhat bothersome, and makes me think once again that this response is more of an offhanded dismissal than a considered answer. The full question is essentially, "Why don't you implement this feature that would greatly increase the language's type safety?" and the response is, "Because you don't need it. See, if you just abandon type safety, what you can do is..."
I suspect that the real answer, even if the FAQ authors or the people who designed the language don't realize it, is that generics are practically (if not technically) a precondition for an algebraic type system. You could implement one without generics, but it wouldn't be very useful.
There is not a lot of criticism about the lack of ADTs because most people have never been exposed to them and don't even know what they are, nor the concepts of product and sum types. Generics are much more common.
They gave the reason for not including tagged unions that they overlap in confusing ways with interfaces. Whether or not that reasoning is correct, it was their goal to make the language dead simple.
Now with the new generics proposal, it seems they have hacked up some crude closed interface types, which are something like union types. :/
Maybe something like a union of struct type T and interface type I, where T also satisfies I. Then you have to specify some extra things / add features to keep it symmetric and be able to construct it both ways equally easily. Go's authors have simplicity as a primary goal.
Secondly, they think interfaces cover many of the needs of variant types, and they don't want a non-orthogonal feature.
imagine an interface{} that can be annotated to say "you can only assign values of types A, B and C to it", e.g. interface(A,B,C){}
Since it's an interface{} it has no behaviour, the only thing you can do is a type assert / type switch.
By extension, you could add some methods: interface(A,B,C){Foo(bool) int}
This fails statically unless A, B, and C implement said interface.
If they do, then this interface has exactly the same semantics as the normal Go interfaces, except that only types A, B and C can be assigned to it (and not type D, even if it implements Foo(bool) int).
I think it's just down to population sizes. There's a mass of people who already have (non-higher-kinded) generics and exceptions in a language they already know, so they know what they're missing. Far fewer people have access to ADTs.
If anything, nil checking in Go makes it quite exciting (will I get a nil pointer error somewhere? Who knows!). Sum types would make this quite boring though.
You would never initialize an enum like that, since you would use one of the enumerated values instead. That is, after all, the reason behind defining an enum in the first place.
When could you end up with an enum that has an invalid value? When you get the value during runtime and typecast from an int. In that case, though, you should obviously have a runtime check that verifies the value is legal.
Your example case is something that I can't think of a reason to do. It isn't a case where you would need to be careful and knowledgeable to avoid it - there's just no reason to do it. You would use On and Off instead of defining a variable with this type.
This happens all the time for enums in structs, especially during (de)serialization, where you end up returning a zero value alongside an error.
func DeserializeState(raw []byte) (State, error) {
	type foo struct {
		State State             `json:"state"`
		Data  map[string]string `json:"data"`
	}
	f := foo{}
	err := json.Unmarshal(raw, &f)
	if err != nil {
		// No meaningful State to return here; we're forced to hand back
		// the zero value, which is indistinguishable from a real State.
		return 0, err
	}
	return f.State, nil
}

s, _ := DeserializeState([]byte(`{"data": {"key": "value"}}`))
fmt.Println(s) // prints 0, oops: the missing "state" silently became the zero value
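One way to cope today, sketched with a hypothetical State type, is to define the zero value as "unknown" and add a manual validity check after unmarshalling; the compiler still can't enforce it, but at least the invalid value is caught explicitly:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type State int

const (
	StateUnknown State = iota // the zero value doubles as "not set"
	StateOn
	StateOff
)

// Valid is the manual runtime check the compiler can't perform for us.
func (s State) Valid() bool { return s == StateOn || s == StateOff }

func DeserializeState(raw []byte) (State, error) {
	var f struct {
		State State `json:"state"`
	}
	if err := json.Unmarshal(raw, &f); err != nil {
		return StateUnknown, err
	}
	if !f.State.Valid() {
		return StateUnknown, fmt.Errorf("invalid state %d", f.State)
	}
	return f.State, nil
}

func main() {
	_, err := DeserializeState([]byte(`{"data": {"key": "value"}}`))
	fmt.Println(err) // the missing "state" field is now reported, not silently 0
}
```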
Nobody did back then though. Getting a free C compiler before GCC wasn't easy. Even commercial Unix systems shipped with licensing restrictions on their C compilers.
I bought Pascal, and then Modula-2, for my Atari ST. They were cheaper or the same price as a C compiler. Though C was a better choice for that system, since the OS was written in it and the calling conventions, etc. were all C.
On the Macintosh (and Lisa before it), by contrast, Pascal was the way to go.
So I think part of the reason C won out over Wirth languages back in the late 80s was because it just "fit" better with the systems that were emergent then. The swing towards Unix/Posix or Unix-like machines meant that the syscall interface for most things was defined with C calling conventions, and most example code was done that way.
And C++ also became quite popular, while the various object oriented Wirth / Wirth-like languages were not well standardized or available.
I appreciate your comment. I think it proves how important free compilers and runtimes are. The web took off not due to C, but Perl: it was free and came with many of the servers that people rented. After that came the 90s, with languages like Java and other free higher-level languages. At the same time, free OSes like Linux allowed people to be more technical without cost.
By the time this came around, universities taught C++ or Java in their intro to programming courses. Students used those, then found the free versions through discussion with peers.
It really is interesting just how much tooling plays a part in a language's success. It has to "just work" without any kind of crazy build steps. Rust with cargo handles this well, as does ReasonML with esy, Elm, and so on.
One might ask how NPM succeeded and well, it was the only way to write code on the Web, so people had to use it, one way or another.
npm did a fantastic job. Before it, JS "dependency management" was copying other people's scripts from random websites, embedding them in countless performance-degrading <script> tags, and never updating the code, ever.
On the other hand, the tedium of manually copying <script> tags onto each page of an application naturally encourages developers to try to limit the total number of them.
Nowadays, no-one blinks an eyelid when `npm install foo` drops 1000 packages in your node_modules directory.
Pricing is the major reason that Delphi is not selling. The version of Delphi that can actually connect to a "client-server" database across the network costs $3,999, plus $999 for annual renewals.
However, since Delphi doesn't sell well we have the classic catch-22 situation - there aren't enough developers and the existing ones are retiring and hardly any new developers are using Delphi. So naturally companies are reluctant to commit to Delphi for new development.... further reducing the chance of bringing in more developers.
At the time, Borland and Watcom C++ just blew Turbo Pascal out of the water. There was no competition, feature- and performance-wise. Also OO, which became another big thing.
I think parent meant "Borland c++ blew away Borland turbo pascal". I can confirm this, as in highschool I (painfully) ported my science fair project from tp to Borland c++ specifically for this reason and saw a ~2x speedup.
It caught on during the '80s and mid-'90s. Object Pascal was originally designed for Lisa and Mac OS development, before Borland, TMT and others adopted it.
But then Borland decided selling Lifecycle tooling to Fortune 500 was more interesting than small developers.
Delphi and C++ Builder are still around, unmatched in several of their RAD capabilities, but a couple of generations were lost, even if now they try with the community editions.
- Turbo Pascal's difficulty in dealing with 32-bit x86. Turbo C++ had different memory models (but I think it was still under the 640K limit), and then DJGPP came along with protected memory
Ok then we got Delphi. But to talk to Windows you needed the C/C++ interface. And don't get me wrong, Borland C++ libraries were much, much better than the MS VC 6.0 MFC. Really.
But for the times where you needed a "direct line" with Windows, C was the way to go, so maybe that was it.
It's still used in defense and aerospace. Last time I talked to people who were making inertial navigation units (the movement sensors like in your phone, but much better, and for a plane), all the software was done in Ada: millions of lines of Ada.
After using Go for ~5 years, so many little things in Rust blew me away. I thought I was okay with Go's "enums" but then I saw Rust's Enums. Most notably, the real enum type combined with pattern matching was an eye opener over what I've been missing.
Iterators were another one. The ability to express data transformations (maps/filters/etc.) in a very concise way blew me away. I had no idea what I had gotten used to in Go, though that's definitely not to say that I didn't feel the pain.
There's advanced features in Rust I could live without, but most of it just feels empowering. The beauty in it though, in my mind, is that you don't need to use all that advanced stuff. You can write Rust shockingly similar to Go.
The only thing Go truly nailed in my eyes is green threads. Those will always be better in Go than Rust (though futures are getting way better). Go nailed green threads.
But all the other "lack of features as a feature" left me frequently wanting more tools to solve simple problems. And I was a Go nut. I have a Gopher plushie in my car, for Pete's sake.
This applies in any case where one language is more complex than another. You can write almost any style of any language in C++, for example.
The problem is that every team ends up writing in their own subset of these languages, which means it's impossible to ever really achieve expertise. Each team's definition of the language is different, and no one has worked on every team. Ergo no one in the world is actually a C++ expert at any given company's "version" of C++, even if you know every C++ feature independently. You have to follow the style guide which tells you what subset of the language to use and how to use it. This isn't an insurmountable problem but it is a problem. Rust has the same issue.
With Go, everyone can feel free to use the entire language and every team's code ends up looking and feeling incredibly familiar, making it straightforward to contribute to most parts of any code base.
> This applies in any case where one language is more complex than another. You can write almost any style of any language in C++, for example.
True, but my point was primarily that Rust and Go share many of the same patterns. Structs, methods and interfaces can look nearly identical.
This matters in my view when people think you need to go the most efficient and complex way to achieve similar goals as you might in Go. You were fine with performance loss in Go, so why complicate your life in Rust?
It's, at least to me, a useful lesson. Using every tool is a form of premature optimization. Go forced me not to do that, sure. Rust doesn't, sure. So I do hold more responsibility in Rust than I do in Go, but that doesn't mean I can't learn the positives of Go _(boring code/etc)_ without suffering some of the extremes of their decisions _(no enums, pattern matching, etc)_.
> With Go, everyone can feel free to use the entire language and every team's code ends up looking and feeling incredibly familiar, making it straightforward to contribute to most parts of any code base.
Yea, it's a trade-off I suppose. My problem with that though is when I realized I don't like Go's version of verbosity and spreading out logic. I've had pages full of helper functions just to do some minor iteration mapping, flattening, etc.
Having every team keep to the same standard of _(in my view)_ bad still feels bad. Consistent, sure, but consistently bad.
> With Go, everyone can feel free to use the entire language and every team's code ends up looking and feeling incredibly familiar, making it straightforward to contribute to most parts of any code base.
And it goes a step further. I read Go library code and it looks just like code I would have written. It's easy to understand and makes sense.
I came to Go from C++ and was so glad about how few sharp objects were lying around: working in software with just one allocator, just one string type. I'm possibly damaged by previous experiences, but ultimately I'm also glad not to have to pretend I'm clever. I just go brrr and solve the problem with for loops, and go home to my loved ones at 5:30 exactly.
Like proper enums might be nice I guess, but really what I love are all the other things which are not there. Lack of enums has not hurt me deeply. What hurt me was languages where someone might conceivably express the COM apartment model and also think it was a good idea.
Do you somehow labor under the impression that Rust programmers stay up at nights and on weekends to learn to use "clever" tools like iterators, enums, and sum types?
These aren't clever tools. They're dumb tools like chisels and hammers. Yeah, you can just use a screwdriver to chip things away or to whack something in, but there are better tools that are purpose-built and let you do the job faster, more precisely, and with less effort.
> The only thing Go truly nailed in my eyes is green threads. Those will always be better in Go than Rust (though futures are getting way better). Go nailed green threads.
If you think Go's green threads are great, you should see Erlang's. In Erlang, if you need to construct a mutual failure domain (have two green threads mutually destruct if one fails), you can use the link() function. Done. You can also trivially choose to have one of them be notified on the failure of the other instead of being destroyed.
The crazy thing is that sum types (Rust enums) and pattern matching have been around for at least 30 years. I'm simply not interested in learning any new language that doesn't have sum types, they allow you to write incredibly expressive and terse code.
For all the complaints about the breakneck change in rust, it's a very "boring" language: all of its features with the single exception of the borrow checker already exist in other languages.
I know this won't be popular to say on HN, but I think Rust has the same problem Perl does. The language has so many systems to learn and a tough syntax that it looks unreadable to people just starting out.
I mean I could see learning Rust being really hard if you only know something like Python or JS. The only "system", as you say, that is present in Rust that doesn't have something analogous in C++ is the borrow checker, and lifetimes still exist in C and C++. Rust is significantly simpler and easier to learn than C++
C++ has a much shorter time to first non-"hello world" program than Rust. C++ has a lot of features, but few of them are mandatory for general development. With Rust you have a pretty steep hill to climb before your first non-trivial program compiles.
C++ and Rust, IMO, have a very similar feature set; Rust just puts that up front as properly part of the language. Those C++ features are pretty much mandatory for general development, and likewise you will find them in most open-source and production projects. Programming without them is just one of the many ways C++ gives you enough rope to hang yourself.
Yes, you could program C++ without even knowing what std::unique_ptr is (and I talk to many college grads with C++ on their resume who don't know what unique_ptr is, or that C++ has more than one type of pointer). But Rust won't let you use raw pointers (outside of unsafe code), whereas in C++ you will be told "make sure you have read Google's 10,000-word style guide before committing any code".
I believe https://news.ycombinator.com/item?id=23715759 works as a response to your point. In my eyes syntax is the least interesting thing of any language, their semantics are way more important, and quite a bit of syntax ends up being derived from it, and the rest boils down to aesthetics. The syntactic complexity that Rust has is there because it is encoding a lot of information, modulo things like "braces vs whitespace blocks" and "<> vs []" which, again, come down purely to style. Also, having a verbose grammar is useful for tools like the compiler and IDEs because having lots of landmarks in your code aids on error recovery and gleaning intent from not-yet-valid code.
It's not any particular feature that makes a language a mess. It's the interaction between the features. It's a bit like mixing paint, it's very easy to end up with greyish poop.
Go was designed by very experienced programmers that understood the cost of abstraction and complexity well.
They didn't do an absolutely perfect job. It's probably true that Go would be a better language with a simplified generics implementation, enums, and maybe a bit more. That they erred on the side of simplicity shows how they were thinking. It's an excellent example of less is more.
Most programmers never gain the wisdom and/or confidence to keep things boringly simple. Everyone likes to use cool flashy things because it makes what can be a boring job more interesting.
But if your goal is productivity, and the fun comes from what you accomplish, then the code can be relatively mundane and still be very fun to write.
Precisely, and this is one area where go fails completely. The features don't interact well at all!
Tuple returns are everywhere, but there are no tools to operate on them without manually splitting the halves, checking conditionally if one of them exists, and returning something different based on each possibility. Cue the noise of subtly different variants of `if res, err := f(); err != nil` in every function.
Imports were just paths to repositories. Everything was assumed to just pull from the tip of the branch, and this was considered to be just fine because nobody should ever break backwards compatibility. They've spent years trying to dig themselves out from under this one.
Everything should have a default zero value. Including pointers. So now we go back to having to do manual `nil` checking for anything that might receive a nil. But thanks to the magic of interfaces, if a function returns a nil concrete pointer wrapped in an interface, the result will fail a nil comparison check! This is completely bonkers.
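The typed-nil behaviour described above can be demonstrated in a few lines (made-up error type): an interface value is nil only when both its type and value parts are nil, so a nil concrete pointer stored in an interface compares non-nil.

```go
package main

import "fmt"

type MyErr struct{}

func (*MyErr) Error() string { return "boom" }

// mayFail returns a nil *MyErr through the error interface. The
// interface then holds the pair (type *MyErr, value nil), and an
// interface compares equal to nil only when BOTH parts are nil.
func mayFail() error {
	var e *MyErr // nil pointer
	return e
}

func main() {
	err := mayFail()
	fmt.Println(err == nil) // prints false, to many people's surprise
}
```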
Go has implicit implementation of interfaces which makes exhaustive checking of case statements impossible. So you type-switch and hope nobody adds a new interface implementation. So you helpfully get strong typing everywhere except for the places you're most likely to actually mess something up.
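A small sketch of that hazard with hypothetical event types: adding a new implementation later leaves existing type switches silently falling through to default, and the compiler says nothing.

```go
package main

import "fmt"

type Event interface{ isEvent() }

type Created struct{}

func (Created) isEvent() {}

type Deleted struct{}

func (Deleted) isEvent() {}

// handle looks exhaustive today, but the compiler cannot verify that:
// the Event interface is open, so new implementations slip through to
// the default branch without any build-time warning.
func handle(e Event) string {
	switch e.(type) {
	case Created:
		return "created"
	case Deleted:
		return "deleted"
	default:
		return "unhandled event"
	}
}

// Renamed is added later; nobody remembered to update handle.
type Renamed struct{}

func (Renamed) isEvent() {}

func main() {
	fmt.Println(handle(Renamed{})) // silently hits the default branch
}
```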
Go genuinely feels like a language where multiple people each had their pet idea of some feature to add, but nobody ever came together to work on how to actually make those features work in concert with one-another. That anyone could feel the opposite is absolutely incomprehensible to me.
Given that I am involved in the Rust project I'm very likely biased, but given that I've focused on the learnability of the language (diagnostics and ergonomics) I have a bit of context on this subject.
When designing a language there are intrinsic (what things the project wants to focus on, be they features of the language or the associated tooling that affect the language, like generics or compilation speed) and extrinsic (external impositions like being able to run on certain platforms, or interfacing with existing technologies like being able to run a statically linked binary in Linux or being able to debug using gdb or calling C libs without runtime translation) design constraints. All languages have (or should have) an objective of being easy to learn, pick up and use long term. It might just not be the top priority.
For the sake of argument, take Python, where expressiveness at runtime and clean syntax are prioritized over speed; Go, where fast compilation and multithreaded microservices are prioritized over more complex language features; and Rust, where fast binaries and expressiveness are prioritized over ergonomics (when push comes to shove this is the case; otherwise you wouldn't need to call `.clone()` or add `&` to arguments when calling a method). You can see how these objectives permeate every decision throughout each language.
When it comes to Rust in particular, I feel it is still a boring language despite the appearance of too many features, precisely because of how they interact with each other and fit together naturally. It is not the best fit for every use case, but it is one of the projects out there that is embracing the fact that it can't be as easy to learn as it could be (without sacrificing some of the constraints that make it interesting as a systems language), but we can rely on the compiler being a necessary part of the developer toolchain, making it understand the user's intent when they do things that make sense from an extrapolated misunderstanding of the language, and helping them write the "correct" code instead. This has the added benefit that reading the code is easier, because you have to "guess" much less about what it is doing. Remember that if the code can confuse a parser, it will also confuse humans. On the opposite end of the spectrum you have JavaScript, whose grammar has a lot of optional or redundant ways of doing the same thing (think semicolon insertion), which makes the act of reading and debugging code harder. This is a reasonable approach in a case like the web, less so in a compiled language that can evolve independently from the end users' platform.
The thing it nails is that mediocre programmers (like me) can easily understand and reason about source code written by others. In the C++ world, this can be very hard and thus time-consuming. Go, on the other hand, lets me focus on the business value as opposed to becoming a language lawyer.
You're right that you can't mix named types with the base type.
However, they are not enums in any way either, since their possible values are not limited to some restricted set: they really are just integers, even more so than in C# or C++, even though there is no implicit conversion.
Your statement implies that errors for using invalid values are guaranteed at runtime as a feature of the language. That entire statement is incorrect.
You really don't get errors... I literally pasted a link to a working, non-erroring example in the comment that you responded to. You clearly saw the code. Did you click "run"?
You only get an error at runtime if you add "-fsanitize=undefined" (or "-fsanitize=enum"), where the compiler will inject some code into your binary.
But the error doesn't actually stop code execution: it just prints a warning!
Here is a link with the sanitizer enabled, and no warning is even printed for using an invalid value: https://godbolt.org/z/NA7FNQ
So, not only is the sanitizer not even comprehensive, it's not actually a feature of C++. It's a best-effort feature from the compiler to add a non-standard runtime code sanitizer to your binary. Warnings for using invalid values are not guaranteed.
You had to use casts to get those to compile. Go will let you shoot yourself in the foot by accidentally writing "z = 3", which is way more likely than accidentally writing "z = (Something)3".
The point is that those languages’ enums are very much integers too. Do those languages allow you to write out as many types of integers literally in your code? No. But that’s not the point being addressed here. You can reread the comment I was responding to, and there’s no hint that I can see anywhere that they’re talking about untyped literals.
C#, C++, and Go all share the same flawed enum representation strategy. Use Rust if you want to get better enums... those other languages aren't any better for enums. C#, C++, and Go enums are all type safe, and they're all perfectly capable of holding completely unexpected values. (Although, C++ in the older permissive mode was not type safe period when it came to enums, if I remember correctly. I admit it's been long enough that I could be misremembering.)
I already addressed your point in detail with you elsewhere.
I agree that being allowed to write any type of integer as a literal is theoretically an ergonomic issue when it comes to enums, but it’s one issue I’ve literally never seen happen even once in practice. I point out that there are linters available to address the issue if it’s one you feel so strongly about. Surely you use linters in every language?
I have been paid professionally for years to work in both Rust and Go codebases. Linters are essential for both languages, and CI is where you guarantee that no lint-failing code is allowed to pass.
I’m well aware of Go’s flaws, but people throughout this thread (yourself included) have made tons of baseless claims about Go. Just because it’s popular to hate on a given language doesn’t make it okay to use incorrect statements for that purpose, such as saying that Go’s enums aren’t type safe. They are a separate type. The language does allow you to write integer literals of any integer type. It also does this with untyped float and untyped string literals as well, as a fun fact.
Go has a number of legitimate flaws, including the absence of both generics and sum types. You could even legitimately complain about untyped literals: they are really nice in some ways, but they do have some trade-offs.
No, it doesn't. Go's enums are type safe. You can't accidentally mix two different kinds of enum values, or accidentally use some random value of type "int" where an enum value is expected. The type system protects you from values of the wrong type being used. This was demonstrated by that Go Playground link.
Literals in Go are untyped until they're used. How they're used determines the type, and then they have a very real type, with very real type safety being enforced. So, if you're using a literal "3" where an integer of type "state" is expected, the Go language specifies that the type of "3" is "state". This is an ergonomic issue when you're expecting exhaustive Sum Types, but not a type safety issue.
Should all literal integers just always be a specific type? Let's decide that all literals should be of type "int". Great. Now you can't type a large, 64-bit integer literal to pass as a value to a function, because that would overflow the "int" type, even though the argument is desired to be "int64".
There are trade-offs to every approach.
> In C++ or Haskell, implicitly assigning integer literals like that isn't valid.
I can't comment on how Haskell does things, but C++ is more complicated than you seem to think.
"The type of the integer literal is the first type in which the value can fit, from the list of types which depends on which numeric base and which integer-suffix was used."
Go's approach is equally type safe here. It assigns the type to the literal based on the expression the literal is used in.
As I said before: I really wish Go had proper Sum Types. But enums in Go are type safe, contrary to what you have claimed in several comments here.
A desire to restrict the polymorphism of integers is fine, but it doesn't really change the type safety argument at all.
"Enums" here are just another type of integer, exactly like in C# and C++. They're not a separate concept. I wish Go had Sum Types or even just exhaustive, non-integer (as far as the programmer knows) enums, but neither of those are requirements to have a type safe enum. The only difference vs C++ and C# is that Go has untyped literals, which are quickly handed a type based on where they're used.
Enums in Go have type safety, which is the point you disagreed with. You can't assign values of the wrong type to an enum-typed value without explicit conversion in Go. That's type safety.
Linters that fit your team's expectations are a good thing to use in any language, and Go is no exception here.
Would this linter be unnecessary in other languages? Sure. I would argue it's still unnecessary, because I've never even once seen anyone accidentally type a literal value in Go where they meant to use one of the predefined enum values. It's certainly possible, but it hasn't caused me any lost sleep.
Fair enough. I would say if the way someone uses the Go "enum pattern" is causing them to have issues with type safety then their code could probably use a refactor, but the point does stand.
`iota` is one of the strangest features in Go by far for me. So much complexity for such a simple feature with so few applications, instead of implementing even simple enums...
The pure magic of Delphi was in its component architecture and especially the ability to define properties, with their own sophisticated property pages.
Also, you could specify whether a property has both read and write capability or just read or write, and even what sort of accessor and setter methods are to be used.
.NET really does not have the component architecture with property pages etc. in the same way that Delphi has, along with the ability to register itself into an IDE and so on. I could be wrong though.
But when you do need them, you can very well use "PhD level languages". The existence of Go for non-PL-enthusiasts like me does not stop anyone from using whichever language they favor most.
That's not the point. In the given Pascal example you can add N different cases and the compiler will check that every instance of `state` is one of those N cases and nothing else. In Go you can't do that, since it's really just an int, which has 2^32 or whatever cases. There is no way to communicate to the Go compiler that a `state` can be either `on` or `off` but nothing else. You can only communicate that a `state` is an integer (2^32 different `state`s) and that, in particular, state=0 is called off and state=1 is called on. That's useless.
Bool can be either true or false. So your statement that there is no way to communicate to Go compiler that a `state` can be either `on` or `off` but not anything else is "bool false".
I'm not sure what you're trying to say? Bool can have 2 states, but in software we need things that have 3, 4 or N states too. So we need an abstraction that is like Bool but can have any number of cases. Moreover, bool is clearly not sufficient as it not only has 2 states, it can only represent exactly 1 bit of information. Whereas, e.g. in Haskell "Maybe X" has 2 states either "just X" or "nothing" but can represent arbitrary amount of information. (imagine having Go compiler forcing (bool, X) pair to have only two states either "(true, X)" or "(false, null)" such that compiler doesn't allow you to construct e.g. "(false, X)"). So bool is like very very specific example of what is being discussed here.
Speaking as someone who technically would fall in b) in your community list: Java has all of those nice features and tooling now of other static languages, and when I tried out Go I found it tedious to work without them, especially generics.
I mean sure, I'd rather be working in Kotlin, but Java is at least acceptable now.
Yeah I am speaking as a person who was a professional Java developer for years, but the last time I worked in it was 8 years ago, and it was painful given I had experienced working with Ocaml, Erlang, and Scala. Kotlin wasn't a thing yet.
I've been working almost exclusively in C++ @ Google since then.
Java looks much improved now. But I still feel like there's a culture of complexity and bloat in Java code ... But I work in the Chromium code base and it has the same problem :-)
Old code and programmers with old ideas hold back "real life" Java projects quite a bit from what they technically could be in a more modern form.
Complexity and bloat definitely come with that, I think some folk get used to building EnterpriseTurboWidgetFactoryImpl type horrors and never consider the language doesn't need to do that kind of thing any more... Java being so backward compatible I think also means it doesn't do enough to discourage that either though, which is also a reason why Kotlin is so nice -- the lowest effort solution is miles more understandable.
> I think some folk get used to building EnterpriseTurboWidgetFactoryImpl type horrors and never consider the language doesn't need to do that kind of thing any more.
Did it ever though? I feel this junk comes from the Spring side more than pure Java. I'd love to know a good way to get rid of it as Spring is everywhere.
Ho man, I remember when Spring came on the scene and it was a breath of fresh air compared to the 10,000lb J2EE gorilla that was common then.
No, Java has always had ExcessivePatternFactoryImpl-itis.
And what's worse is that the language never had good support for the patterns that the community seemed to prescribe. So you had an insistence on value and transfer objects and JavaBeans with pointless getters and setters leaking out their eyeballs, but no language support for properties or for automatically managing these data objects. Or a desire to push the visitor pattern etc., but no pattern matching constructs. Apart from generics, which ended up being excessively complex and practically Turing complete, the language was horribly anemic and repetitive.
I've seen a lot of that kind of thing in non-Spring code (I can think of at least one codebase where they rejected using Spring as terrible, but their hand-rolled alternative eventually grew to something much worse), and honestly good Spring Boot code looks surprisingly nice and free of that kind of thing.
Maybe it was old school Spring that started it (it seems to have seen a big uptick in enterprise grossness in the 2000s, so it's an interesting hypothesis), but even Spring has moved on from it.
I write in Kotlin regularly, and occasionally in Swift, but I use Go for anything server-related, if I can help it. I find the stdlib to be an incredible simplifying force for these programs, not least because deployment is trivial. I guess you could say I belong to the 'b' community. My issues with programming circa 2010 were in part related to tooling, but also just bloated, over-engineered code bases that obscure all basic computing behind layers of nigh impenetrable (but still leaky) abstractions.
I can see how if you became fluent in a nicer language like Swift, it would be frustrating to move to Go and find your typical methods for expressing certain patterns are unavailable. They have been sacrificed for keeping the language overhead small, which in turn creates various warts and edge cases that give more ammunition for being frustrated with the language. I accept these tradeoffs when working in Go because I am typically thinking about concurrency and memory overhead in those projects, and Go makes measuring and reasoning about these properties of your program straightforward.
How often does "hit the wall from a performance and maintainability POV" happen with Python / Ruby / NodeJS ? I mean seriously, how many more huge projects do we need to build until we can show it's a sane alternative? 120 Billion worth Shopify isn't enough, maybe Instagram then? Github ?
When you see the effort some of those companies go to to get decent performance, choosing the right language from the start would have been easier on that front. Look at Facebook with Hack, YouTube (Python) re-implementing a VM in Go, and Shopify doing a lot of work to improve performance because of Ruby's limitations. When you pick a language with good performance by default, you don't have to go that route.
Are there as many examples of the inverse, big successful companies that started off with a "fast" stack and didn't need to optimize the one they grew with? It seems like flexibility is more important than performance until you actually have a winning product.
Ruby is working hard on performance but Sorbet is hardly a common thing. The Rails community isn't gonna adopt types anytime soon imo. To be fair most likely never, at least not as a community. Some companies may go that route but I hardly anticipate a big movement.
Those are all grafted on afterwards, and they don't see too much use apart from the people who made them (ie Stripe with sorbet). Compare that to a language that has first-class support for all of these built in. As you learn such language you're forced to learn these concepts as well, so more people in the community use them, thereby increasing their effectiveness through sustained development because they're popular features.
This is a bad-faith argument; the parent is not saying to use C. Surely you can concede that there are languages with better performance that are still useful for web development? Even Node is faster than Python, and it's chugging along well in the web-dev community.
Node is probably faster than Python but so what? It comes with its own set of problems and drawbacks.
If you're optimising for performance I don't see why you would go for Node anyway, and if you're not optimising for performance than I still don't see why Node but to each his own :)
C++ as it is typically written can be arbitrarily slower and may have hidden bottlenecks due to copy constructors and poorly thought-out STL usage. But C++ fanboys refuse to admit this.
Do these sites also have back-end services written in other languages for the heavy lifting? Anecdotally, there seem to be a lot of stories to that effect.
And as noted in the article, JavaScript/NodeJS is in a different performance category from Python and Ruby, more like Java or Go. But the simplicity of Go might make the performance and memory usage more predictable.
Shopify is absolutely a Ruby / Rails company. If out of the thousands of developers that work there you have a team or two writing some performance heavy c++ / Go code that says absolutely nothing.
Wouldn't it actually say a lot? Engineering is about trade-offs, and the trade-off that says "use this special language for 1% of our most critical services instead of the company-wide standard language" is not "absolutely nothing".
Lines of Code is a bad proxy for mission critical.
All of Shopify is mission critical. The code that constitutes the Shopify monolith is mission critical for sure. The company won't function without it.
This is somewhat circular, but: when your team decides to rewrite a component in a more complicated or more obscure (at least at Shopify) language to increase throughput rather than spinning up more instances of the current solution, then you're talking about a component that's even more critical than the other critical components.
It's just common practice - in Ruby some popular gems that parse XML or JSON come with a C extension that does some performance heavy calculation. Most gems don't do this, but when performance is very important some do.
Us Rubyists see no problem with that, and we don't take this as some kind of incentive to leave Ruby, maybe even the opposite; knowing that you can do 99% of your app in Ruby, and if ever needed, easily fill in the 1% with Rust/Go/C++ is very reassuring imo.
It entirely depends on what you're doing. There are two issues as I see it:
a) Lack of static analysis tools means having to do more testing (manual or automated) for simple mechanical errors.
b) For super high throughput low latency systems, they are not up to snuff. When I worked in real time bidding in ad-tech this was actually a serious concern. I'd reach for Rust (or C/C++) in this scenario now.
Speaking from experience, the JVM is fast enough for real time bidding in adtech. Most major ad exchanges require the latency to be below ~80ms, which is not that hard to achieve.
In contrast to high-frequency trading, there is no competitive advantage in having a lower latency.
Are compile times and program start-up times not a factor?
One of the things I really appreciate about golang (from a completely different field) is how quick the builds are, and how fast binaries start up (it's like I wrote it in C).
Java can compile quickly, a few minutes at most when C++ would take hours, so I am tempted to say that's not a problem.
The startup time is negligible in my experience (a few seconds for the JVM or Python imports). I have to take over slow-starting applications from time to time, and it's always because of loading data and doing stupid shit on startup, regardless of the language. It's not a problem for production because server applications only reboot once in forever.
It's still a problem for microservices architectures, unfortunately, especially if you want to support dynamic scaling of some kind. A few seconds is nothing if you expect that your server will be up forever, but it becomes a lot if sometimes it goes down for a bit to move to a different machine, and that takes seconds for your customers.
Also, JIT languages have a very poor habit of making a terrible first impression because of the warm-up time, especially Java. If you are delivering applications to customers, that becomes a real burden: the very first time they use your shiny new application, everything is moving like molasses, until the JVM decides it's JIT time...
In a microservice architecture you'd probably have more than one instance running at any given time though, and do a rolling restart so there's always at least one instance available.
Yes there are niches where Ruby won't work, even big niches like Kernel development or real time systems maybe.
But for web development, in general, these languages have proven themselves for so long it's getting quite ridiculous now to say they won't work.
As for testing - I disagree. Frameworks like Rails are so easily testable it's a breeze. Java/Spring dependency injection jumps through hoops just to provide a testable framework; I find it hard to believe it's any easier.
Yes, I don't do web development. At least not mostly.
But I understand that that's what most people out there are doing. They should not delude themselves, though, that the toolsets appropriate for that environment are appropriate everywhere else. I see this bias a lot, even on HN: that everyone now is a "full stack developer" doing this kind of development.
Having the code execution paths radically change by adding an annotation to a method or a class makes it very difficult to reason about what it will do when deployed.
If that annotation you found through Google does what you want and expect, that's great. But if it doesn't, or fails in unexpected ways, debugging it can be a nightmare.
This always baffled me -- when Guava came on the scene, and when Spring also adopted annotations...
The whole original point of dependency injection was to decouple dependency management from the code, to make it easier to test, and easier to reason about and analyze.
DI via annotations goes ahead and sticks them right back in there. And now we have, like you say, magic code that is difficult to reason about.
Yes, that's what makes it so terrible. All of the "action at a distance" complexity of Ruby meta-programming, with none of the concise and easy to read code!
just out of curiosity -- for ad-tech real time bidding, what's your network latency and bandwidth like? I can buy that C/C++ is needed if you are colocated to an exchange, but if you're bidding online, those few fractions of an ms you save in C/C++ vs node.js you could have also saved by locating your server closer to whatever ad-tech exchange you're bidding on.
The issue I had back in this line of work was with garbage collection and was an issue in the 99th percentile.
When you have an expectation from the exchange that you respond in under 80ms, and 25-50ms of that is eaten up by transport, you don't have a lot of time to mess around.
So you spend the first chunk of time optimizing I/O and how you're accessing it.
Then you start looking at computation -- improving caching, etc.
At a certain point you start noticing in your graphs that you're on average doing quite well. But there's those hiccups every Nth request...
And now you're in the game of fighting with your language's garbage collection algorithm...
Try to improve allocations where possible. There are often plenty of small allocations happening in Java that can be avoided (string usage is one major driver). Fewer allocations mean less frequent garbage collections.
Then one hack is to disable garbage collection. Let the software run until it consumes all 100 GB of memory on the server, then crash and restart. There is no pause from garbage collection when there is no garbage collection.
If it's not enough, the last resort is to write native code or switch to C++.
At this level of scrutiny, replacing algorithms is appropriate, and if the algorithms are built into the language, then replacing the language may be one way to clear the issue, but you have to do your homework to get there.
> b) For super high throughput low latency systems, they are not up to snuff. When I worked in real time bidding in ad-tech this was actually a serious concern. I'd reach for Rust (or C/C++) in this scenario now.
What did you do? Did you write servers in C? Did you write a Redis, for instance?
Optimized for GC and figured it out. But that point also coincided with a job switch; I went to another ad tech company, but on the exchange side rather than the bidding side. And all our ad server infrastructure for that was written in C/C++ (with embedded V8 JS for biz logic stuff). Then that company was bought by Google, and I worked on the exchange side at Google, too, where everything was also in C++.
My successor at the original startup rewrote everything in Python. And I watched from the exchange side as they struggled for two months to meet basic performance constraints. They eventually got it though. It certainly can be done.
It's worth pointing out this was 10 years ago. And in the meantime we've had the usual improvements in machine performance, and SSDs are a thing in data centres, etc.
> How is this any kind of pro or con? It's just an implementation detail.
It is a pro in the sense that Rust developers spend an inordinate amount of time blaming LLVM for slow compilation. Not having a virtual machine is a pro in that installing, setting up, or upgrading a VM on the target platform is no longer required.
Coming from the world of developing on the absolutely massive Chromium code base where compilation on a non-specialist workstation can take 3-4 hours, but build caching (goma) etc makes it mostly a non-issue [most of the time] ... I have a hard time believing that compilation times are honestly still a major issue for most developers.
There are only a few code bases out there that are massive enough that compilation time on a modern machine is a major impediment to productivity. And in those cases I find that IDE and analysis tools on large codebases are an even bigger problem than compilation speeds...
Also the VM argument you give makes little sense -- you can have your language run on a VM without the VM being a separately bundled package like Java or .NET. For example: Python runs on a VM. Its own VM, which is part of the Python runtime itself. Now, it's not a particularly good VM, and Python is a late bound dynamic typed language so it's also pretty slow, but there's nothing stopping one from having a fast JIT VM with a static language like Go. Not saying one should do that, but it's entirely possible and there's arguments to be made either way.
I think compilation times in template-heavy C++ are still a major headache, especially since even modifications to private fields can trigger massive re-compilations.
> There are only a few code bases out there that are massive enough that compilation time on a modern machine is a major impediment to productivity.
Depends upon what you mean by "major impediment to productivity". I think someone like Bret Victor would argue that if your feedback loop is greater than a couple of seconds, you're toast.
From that perspective, _the majority_ of _all_ codebases are an impediment to productivity.
> It is pro in sense Rust developers spend inordinate time in blaming LLVM for slow compilation.
Some of it really is LLVM, witness the latest LLVM release where Rust lost 10% in compile time and changed absolutely nothing other than the LLVM version.
However, the Rust folks are also painfully aware of just how much slow compilation is the fault of the Rust side. The whole idea of moving to MIR is to enable optimizations to be done with more context so LLVM doesn't have quite so much code that it has to generate and then spend all its time trying to optimize away.
However, Rust is always going to be slower to compile than a language like Go where compilation speed was an actual primary goal.
c) Minimalist-loving hipsters that read articles like this and bandwagon on Go just like any other trend (merits of the language aside). These are the same folks that use a hand crank to grind their coffee beans.
Hand-cranker here. What we seek is bliss, nirvana, not trying to shove square pegs into round holes. We like F#, Haskell and Rust more.
On a more serious note, Go pisses me off every other day. Just off the top of my head, it's pedantic where it shouldn't be and vice versa. I get that commented-out usages of a variable are not OK for production, but why not give me an escape hatch, like a --dev flag? And it won't complain about dead code at all if you stick an unconditional return in the middle of a function! Ugh.
I mean, strictly speaking I should downvote you for tone... but I am laughing ;-)
I know people who work at Capital One, I wonder how extensive Go usage is there...
I have started looking at job postings recently and I see a lot that have aspirational-Go in their postings. I have managed to avoid Go during my tenure here at Google (which isn't hard since it's not used much at all, despite what people outside of Google seem to think), but I'm starting to worry that if I move on I'm going to have to take a job writing it.
Who is hiring remote for Rust, Zig, OCaml, or Erlang? Those seem more palatable to me? :-)
Before learning a new language I always look at LinkedIn for open jobs. Last week, I could find next to nothing for Rust or OCaml. I eventually decided to modernize my C++ skills to be relevant again.
I assumed it was job posts in the vein of "new code written in Go" or "migrating systems to Go", kinda like when somebody says "monolith moving to microservices" but actually you spend 95% of the time working on the monolith.
In Go you use the pattern of prefixing enum constants with the enum category to indicate that the constants are related to each other (e.g. StatusOk, StatusNotFound instead of status.OK, status.NotFound). Which is funny, because today I came across the quote that "patterns are a demonstration of weakness in a language" in a Go talk: https://youtu.be/5kj5ApnhPAE?t=276. I hope enums will eventually be implemented; generics are coming, after all.
Variant types are very different from enums. Variant types allow you to define a type that is "one of these N types". Enums are the same thing at the value level: they allow you to define a type that is "one of these values". You can't implement one in terms of the other.
Variant types do have some similar use cases with interfaces, but I don't see any relationship with enums in terms of use cases, so I don't think that applies.
Also, let's not forget that even C has enums, so their lack of inclusion in Go is baffling (especially since they went and implemented the much more arcane yet limited `iota` feature).
Not in any useful sense of the word. A C enum is a typedef and a few named constants; it's absolute shit, and if the choice was restricted to "C enum or nothing" then "nothing" was absolutely the right call.
Go's typedecl + iota is actually a step up from C's enums, because (aside from "untyped constants") you need an explicit conversion from "any random integer" to your typedecl. It's not any more reliable or any safer at point of use, but it will prevent some misuses, or at least force the developer to consider what they're doing.
Yeah, I think the confusion stems from the word "enum" being used to mean different things in different langs.
A great illustration is comparing enums in Objective-C (essentially just named integers) with enums in Swift, whose associated values make them sum types.
> "Go doesn’t have a virtual machine or an LLVM-based compiler."
> How is this any kind of pro or con? It's just an implementation detail.
This is actually one of my favorite things about Go. The fact that you get a single statically-linked binary at the end is so much nicer than having to mess around with JVMs and JAR files and CLASSPATHs. Sure, there's a lot of tooling to handle this in the Java world, but it adds significant complexity, and makes life difficult when you need to do something slightly different.
You are underestimating the impact of that choice on language design. When Java was designed, garbage collection was one of the major features, and almost all languages with GC were either interpreted or using a VM. Static native binaries with GC and goroutines is something relatively unusual in the design space. I can think of OCaml and Haskell doing something similar for example.
Sure, and if the moon was made of cheese, it would still be a sphere lighting up the night sky.
The fact that the go designers prioritize things like statically compiled binaries is the whole magic of Go in the first place. The language is whatever, it's fine, it's cool. But like the article says, the magic is in its holistic attitude towards process, tooling, and distribution.
> I am getting old and lazy. I want the compiler to do more for me, not less.
Ironically, and for the same reasons, I want the opposite. I want the compiler to stop fighting me on every little detail. I'm exceptionally tired of writing boilerplate interfaces, thinking excessively about what size of integer I want to use, or digging into lambda calculus just to make a stone-stupid compiler (or worse, a borrow checker) happy.
Also, ironically, go doesn't even do this - it fails compilation on the most useless and pedantic of errors - unused variables and imports. Despite the fact that it's common practice to comment out sections of code during development and debugging, which inevitably leads to unused variables/imports.
How ironic; that was one of the first things I missed in Java. Neither the compiler nor the standard linter (checkstyle) flags unused variables.
Don't get me wrong, I find Rust has pretty high mental overhead. The borrow checker really is not intuitive to me.
For the niche that Go is applied in often.. .basically middleware type things, long running services, etc. I wouldn't necessarily jump to Rust.
I think Rust is well suited to lower level systems development, as a replacement for C/C++ there.
What's frustrating about Go to me is that we _need_ a nice clean compiled language with a static type system and garbage collection for services development. To replace Java, etc. But ... not this one. Go to me feels like a huge step backwards, and in my experience with code review with the Go folks at Google, it's a very strident and dogmatic language community, too.
I'd like something like OCaml, but with less functional religion, maybe.
Coming from C and areas where C is used, enums tend to be used with explicitly defined values, because they usually don't represent an abstract set but specific values that have to be respected.
A random example from a domain Go is used a lot in (networking): when you define an enum of the resource record types in DNS, you want the values to match the RFC definitions as they are used in DNS packets.
With that in mind Go's approach to enums makes sense: we usually want to label explicit values, potentially of an explicit type, we don't just want an abstract semantic label.
I haven't touched Java in a long time, but as far as I recall low-level operations on fields, bitmaps, etc. were not a strong point.
Sure, there is value in having your enums have specific values, but even in C they at least get a bit of namespacing.
And yes, Java enums are very verbose if you want to use them for low-level operations, but that is almost entirely the fault of the rest of Java and not enums themselves specifically.
You still need to be an expert on the virtual machine to not have things blow up. It's another factor of a complex equation.
My last job had a group of admins that thought any Java app from the Apache Foundation was the solution. Whenever things went wrong, their solution was, "just up the max heap size!" However, they're admins and had no idea about the app and the users had no Java experience to actually suggest tuning the JVM.
Yes, this was an org issue and a failure of tech leadership, but nevertheless, the JVM was a point of failure.
Go has a runtime with a GC with a heap size, just like the JVM does. People will end up having to tune their Go runtime GC's parameters, too. Whether the runtime is executing compiled native code, or executing JITted bytecodes is of little relevance to the user.
What you're speaking to is the issues with a _separate_ VM process and a cultural thing with Java generally, not a VM in particular.
My understanding was that a golang program's heap size will automatically grow as needed, and later (eventually) shrink as memory goes unused.
Is this true of java? Is it possible to run a java program in a way where it is allowed to use up all of the system's heap if needed, but also plays nicely with other processes on the system (i.e. yields the memory back to the OS after it becomes unused)?
Maybe this has been improved in the last 10 years but heap management imho is one of the not-great things about the JVM when I was using it. What is happening is quite invisible to sysadmins used to monitoring traditional unix processes. To them it looks like a giant memory hog, non-cooperative, when in reality the program might only be using a small % of what top is showing allocated to the process.
The original JVM philosophy seemed to be to cohost a bunch of stuff in one monolithic JVM process, rather than run a bunch of separate communicating processes. And in fact this "container" philosophy is what the original J2EE servers operated with.
>How is this any kind of pro or con? It's just an implementation detail.
Not exactly. It's hardly a detail, and affects use.
The first (not having a VM) in Go's case translates into AOT static compilation, and you not needing to have a VM installed to run your programs.
The second (not using LLVM) translates in Go's case to faster compile times compared to LLVM based compiled languages.
While both have their drawbacks (e.g. LLVM compiled code would probably be faster), both are pluses for my preferences/use cases - not mere implementation details.
I would love sum types (which would allow for real enums), too. Russ Cox has commented on Reddit on why the Go team considers them incompatible with Go [1].
The first challenge is zero values: For a sum type like (int, float), there's no natural zero value. I think that sum types would have to follow the same design as Go's interfaces; a sum type value has to be nillable. This is unfortunate, but the Go team burned some bridges when it decided to support nils, and they now have to deal with that problem.
Secondly, there's the matter of what a sum type of interfaces means. I think this is solvable, too, and I don't agree that it presents a conflict. Sum types express the range of allowed values. What you do with a variable, once it holds a value, is no different than today:
type Source sum { io.Reader, io.ReaderAt }
var src Source
switch t := src.(type) {
case io.Reader:
case io.ReaderAt:
}
This is no different from:
var src interface{}
switch t := src.(type) {
case io.Reader:
case io.ReaderAt:
}
> Russ Cox has commented on Reddit why the Go team considers them incompatible with Go [1].
I wish people would stop beating around the bush and own up to these being design mistakes of the language. If you can't have nice things because earlier poor decisions (e.g., supporting nils) have walled you into a corner, it's a problem. It's not an example of "refreshing simplicity", it's a shortcoming of the language. Period. Every language has some warts, just acknowledge them!
This is not a sum type, it's something similar but different, sometimes called a union type, because the point is to store a closed set of different types under a single umbrella.
A sum type has variants with associated data, but the type of that associated data is orthogonal to the variant itself: you can have multiple variants with no associated data, or with the same associated data type (both are very common use cases), which would simply be impossible to express here.
With a proper sum type, this is not an issue, because the Reader and ReaderAt associated data would be stored in explicitly and specifically different variants; in Rust parlance, in two distinct variants of a single enum.
> The first challenge is zero values: For a sum type like (int, float), there's no natural zero value. I think that sum types would have to follow the same design as Go's interfaces; a sum type value has to be nillable. This is unfortunate, but the Go team burned some bridges when it decided to support nils, and they now have to deal with that problem.
The proper way to fix this would of course be the same one C# took: every C# type used to be default-able (although to their credit it was not generally implicit). With the introduction of non-nullable reference types, they had to change the language to handle cases where types would not be default-able.
Of course, for Go that has wider-ranging implications. E.g. currently I don't think there's any validity tracking, because `var i <type>` tacks on an implicit zeroing of the value; the language has no concept of "uninitialised values", whereas C# had that even before non-defaultable types. It also breaks their assumption and assertion that you should be able to add new fields to a structure and have them automatically filled with garbage without the caller being aware (also a terrible idea).
My understanding is that there's no real distinction between "sum type" and "union type" as such.
I may be wrong, but whether you have an indirection through a type constructor or not doesn't alter the meaning of the union. In the fictional Go syntax, you could also have the same indirection:
type Reader struct { R io.Reader }
type ReaderAt struct { R io.ReaderAt }
type Source sum { Reader, ReaderAt }
The difference is that Reader and ReaderAt are completely separate types, not type constructors, and that Rust has a shorthand for essentially valueless type constructors, allowing Rust's enums to act both like classic C enums and like sum types. But in Rust, as I understand it, enum variants aren't actually types. In your example, you can't have:
let r: source::Reader;
Of course, you can do the same thing in Rust, at the cost of readability.
> My understanding is that there's no real distinction between "sum type" and "union type" as such.
There’s a difference in the ambiguity — or lack thereof — which is the purported reason why Go could not have sum types.
> But in Rust, as I understand it, enum variants aren't actually types.
That’s correct. They’re just constructors for values of the enum type.
> It's just too late for Go to do anything here, I think. Zero values permeate the language. For example, consider structs. It's normal to do
It’s not too late for anything. Existing types do not have to change and don’t prevent adding new non-default-able types (which would be transitive).
Those types, with those new guarantees, would not allow for the magical zeroing and zero-extending of existing types but they would not break anything.
> Enums, immutables, and generics aren't good because they're exciting.
That's your opinion, and maybe even that of other programmers who have good intentions. However, advanced PL features are still sometimes overused because the users find them very interesting and can't resist applying them everywhere.
As an extreme example I read a statement from a Rust programmer about liking framework XYZ, because it creates those nice type-system puzzles and makes things super-safe. However for an outsider this super-safe code could also be undecipherable gibberish.
Overall I think that while type-help can be good to avoid errors there is also a line where the complexity in type system features outweighs the complexity of the initial problem. And that after you cross the line programmers spend their time trying to understand complex but less-error-prone code instead of simple but easy-to-understand-and-fix code.
Where that line exactly is depends on the problem space. I certainly think it is somewhere between Go's extremely simple approach and the extremely advanced feature sets that some other languages allow for.
I thought a LLVM-based compiler would have been nice for a while but in reality a lot of the tooling is C or C++ based and the run-times here are fundamentally different.
This is why putting Go and C code together is so inconvenient (and slow), the calling conventions / stack setup is very different.
I really favor the C/C++ binary interface for its simplicity but Go is really good for what it is built for.
But yes, compilers are always a good argument. I can understand however why C++ frustrated people. To get this beast to work for you, even after C++11 - it requires a lot from the developer. Today, things are a lot better though and they keep getting better.
I agree with this sentiment. A lot of this depends on purely the angle of how you view things.
One example: People claiming the borrow checker gets in their way in Rust because it’s so strict.
Guess what? If your code doesn’t compile due to the rigidness of the borrow checker, it’s incorrect I’m sorry to say. It’s either got a data race, use after free bug or some other invariant. You don’t want that behavior in your code at all.
I’m happy to offload the responsibility to the compiler to check for this.
> Enums, immutables, and generics aren't good because they're exciting. They're good because they give us expressive tools to write descriptive, type safe systems that manage state better.
The main problem is just that these expressive tools scale poorly with regard to the number of people involved in the code base and its age.
The amount of time to write and/or duplicate code is just a fraction of the time it takes to read/understand/debug code that has degenerated over time due to the number of people involved and various amounts of "expressiveness".
And no, a style guide isn't enough, because these idioms change over time and there are a lot of social factors involved. Where, for example, the senior devs are actually the ones who a few years back decided that this overly-expressive code is the best thing since sliced bread.
I completely disagree. Code duplication is one of the worse things for readability, and generics quickly make up for their weight in gold on this front.
Go code is very hard to read and review especially because it has so much repetition where little details can hide. "Is that a regular error check that logs and returns, or did I miss some actual logic there?", "Is that loop just trying to find a value in an array, or is it also doing something else?" etc - boilerplate upon boilerplate, which you learn to glaze over, until it's not really boilerplate and you miss something.
I think Go is pretty well known for being easy to read so I'm not sure what exactly you find hard to read. Duplicated code is not inherently unreadable. Code shouldn't be abstracted away and be made "reusable" before it's been duplicated once anyways. I also believe that the amount of additional code duplication created by the lack of expressive language features is overestimated.
The examples you give don't seem to be specific to Go? In Go the error checks are, as most quickly point out, very explicit and often just err != nil checks. Not sure how they can be confusing, albeit repetitive.
I'm a bit surprised by the amount of down votes my reply has received given that this is a pretty common sentiment among very experienced devs within the game dev industry.
Readability has multiple dimensions, of which "difficulty" is only one. The parent post is not talking about golang being confusing, but rather being verbose and low density, which makes it easier for bugs to hide when reviewing.
Yeah, it does indeed. But compared to what? Reviewing high-density code can surely hide bugs as well? Either way, it's also easier to debug low-density code, so there are more things to consider.
The biggest problem when reading Go code is that, like C, it tends to contain a lot of details that have to do with the language implementation, instead of the problem domain.
Errors are bubbled up manually. Loops often use array indices, instead of expressing the desired operations. You often need to use pointers instead of values just to avoid the cost of constantly copying structs.
And every time you encounter one of these things, you get to spend some time thinking about why this or that was chosen, or whether you're missing some detail.
Java is much more readable, because it has far fewer of these concerns. Every time you see a catch block, you know it is there for a reason. If someone is making a copy of an object, you know they wanted to make sure the original isn't changed. With Streams, if I want to write an operation which filters a list, I can use filter(), not 'create a new array, go through the original, every time you find an element matching the condition, write it at some index in the new array, increment that index'.
I think we'll have to agree to disagree on this one. That standard for loops are less readable just doesn't make any sense to me. Neither does the preference for exceptions, which are pretty much the first thing most experienced people turn off and ban in C++.
Sure, most experienced people dislike exceptions, except for Bjarne Stroustrup, Herb Sutter, everyone else who designs the 3 popular C++ compilers, everyone who designs some of the most popular C++ libraries (Boost, newer Qt), Java, .NET, Swift, JavaScript, OCaml, Python, and a few others.
And if you think that

    j := 0
    for i := range someArray {
        if hasSomeProperty(someArray[i]) {
            filtered[j] = someArray[i]
            j++
        }
    }

is as readable as a single filter() call, we'll have to disagree.
Language design and lib design are different things than building, maintaining, and shipping a game involving hundreds of programmers. To use Boost is somewhat of an internal joke within the game industry.
You don't need to use indexes like that in Go. You would append to the new array.
The thing is that in practice such loops usually go beyond just filtering on a boolean basis, so you combine whatever you want to do with that array into a single full iteration.
Sure, games (and probably real-time systems in general) are one domain where exceptions are not a good control-flow mechanism. But there is much, much more to the software industry than real-time software, and extremely large systems built by huge teams of programmers do successfully use exceptions as a core error-handling mechanism, sometimes in C++, much more often in managed-memory languages.
You're right about the indexes, I should have used append() there.
Related to loops though, the more you need to do in a single loop, the less clear the code becomes in the traditional loop style. Note that stream-style constructs don't iterate more than once either (not that doing M things per iteration vs doing M iterations is necessarily a clear performance win, depending on the size of the array etc).
For example, I would say that the readability difference is even more pronounced between these 2:
    for i := range players {
        isLocal := false
        for _, localPlayer := range localPlayerIds {
            if localPlayer.firstName == players[i].firstName &&
                localPlayer.lastName == players[i].lastName {
                isLocal = true
                break
            }
        }
        if !isLocal {
            continue // skip non-local players
        }
        allLocalMoney += players[i].money
        numLocalPlayers++
    }
    avg = allLocalMoney / numLocalPlayers
Add a little bit of grouping and things will get even worse for the first example. And sure, you could extract the checking into a separate function, but I wanted to come up with something that takes a few operations.
On the other hand, I fully agree that sometimes you just need to do various actions (i.e. anything with side-effects) for each element in a list, and there there is nothing better for readability than plain loops. I just like the option of more explicit transformations when that's what I'm doing.
I think the C# example is pretty dense to parse, tbh; I need to match parentheses and partial results back and forth while mentally parsing it. Also, how would you go about stepping through those iterations in a debugger? It's just so fragile, because people will insist on and/or keep using these things even when a loop is more appropriate as the code grows, etc. This is the reason why it scales badly across a lot of folks: things become more and more opaque and hard to debug.
I guess in the end readability is in the eye of the reader. For me the C# one is much clearer in what it wants to do, and the formatting helps me ignore the parentheses entirely (assuming it compiles).
Debugging is not difficult at all: you can put breakpoints inside the lambda and use continue instead of single-stepping. If required, you can also step through the library code, but that is not usually necessary.
And as the code grows, the high-level representation of what the code is meant to do stays clear, while the loop-based version grows in incidental details that you have to take a step back to understand.
Note that I've written commercial software in this style in a team with about 100-200 other programmers. I am not coming at this from some hobby, 3-5 person project experience. It's just that different industries and different areas value different aspects of code. In my case, this is the middleware portion of a traffic generation solution that can simulate all L2-7 commonly used networking protocols, at the scale of a small city and beyond. Of course, the actual traffic generation code is extremely performance sensitive and is written in C and C++ (and quite a bit of Verilog) with a very different style, probably closer to what you are familiar with[0]. But there are many layers of configuration above that where we usually value clarity and correctness more than raw performance, and where we can afford to use these types of constructs, and our experience has always been that they vastly improve cooperation, not at all hinder it like you seem to imply.
[0] though we do have a real-time traffic stats analyzer library that is written in template-heavy, boost-heavy C++, and is on the critical performance path with soft real-time constraints, so this is also possible.
I’ve been very happily using Go the last 7 years (disclaimer, I’m employed to work on Go as of 2 years ago), some random personal thoughts.
It’s interesting how a language can be boring and so fun to use at the same time.
I guess I don’t want excitement at the language layer because there’s plenty of opportunity for it in other layers.
A boring language to me means there's usually roughly one way of doing something, so you just need to write up that one thing and move on to the next part of what you're building, knowing that part is finished. Once something works well and has a good API, I almost never want to come back to it and rewrite it with a different set of language features that I happen to learn.
The process of writing Go is fun to me because starting with an empty Go program to do a new task is always a refreshing experience, it’s easy to reuse any past functionality that you’ve already created (or found) and you can focus on the new task. In contrast, when I was primarily using C++, starting a new codebase wasn’t pleasant because of all the housekeeping and boilerplate (like defining a uint16 type in a platform-agnostic way) I needed to copy/paste just to have a reasonable starting point.
Also, the feeling of gofmt/goimports running on save and formatting the code on save is a huge part of what makes it fun (fortunately, code formatters are much more commonplace now; I wouldn’t want to go back to not having them).
I guess one thing that Python, and later other languages like Go, got right is this "batteries included" mindset, where a language can be used without many dependencies on other codebases.
I was following Go in the beginning (2009/2010) and the team behind Go had a great architectural understanding of how to make a platform library in a "cathedral" fashion. This is one of the big reasons Go did great as a new language.
People from Python and Ruby could use Go as an alternative for server/backend software because the core platform was there, together with AOT compilation and GC, and it was a much faster alternative to those languages. And people from Java could be freed from the burden of too many abstractions, a heavy runtime, and big memory pressure on servers.
Go was a good savior back then, and it's still a good alternative for infrastructure kinds of software.
Given that the crowds using Go were distinct, each of them missed different things from their former runtimes.
> it’s easy to reuse any past functionality that you’ve already created
If you look closely, those aren't angle brackets, they're characters from the Canadian Aboriginal Syllabics block, which are allowed in Go identifiers.
The language spec at https://golang.org/ref/spec is a surprisingly approachable one-pager, it’s worth at least glancing over.
One of my all-time favorite talks is https://vimeo.com/53221560 from 2013. It shows off how Go scales by starting with a small task and continuously adding more requirements into the mix, while also demonstrating its concurrency support.
If you have built a program in any other language, I would say rewrite that program in Go. The benefit being you already solved the domain logic of the program; the only thing you need to focus on is writing/learning Go.
If this is your first programming language or your first attempt to learn to program, I would say just pick a small but simple app/program and go ahead and build it. Don't spend too much time reading articles or video tutorials; just get your hands dirty as soon as possible.
I think a lot of the commenters are missing the point.
Golang is less expressive than other languages, like Rust or Haskell. You can't create fancy custom types and things like that. Of course there are drawbacks.
The benefit however, is that it's much harder to write "clever" code, which translates into being harder to write unmaintainable code. If you know Go, you can sit down at nearly any Go codebase and know what's going on pretty easily. That certainly isn't the case for C or C++, and probably others (other people have told me this is true of Rust, but I have not used it, so I will not claim that).
Golang is not the fastest, or the most expressive, or whatever else. But it's got enough expressiveness to not be painful for most use cases (slices, maps, and structs cover a lot of ground). It's fast enough for most use cases (faster than most scripting languages).
In my opinion, Golang has a very good ratio of (effort required to learn) / (utility of learning). I have no doubt that Haskell is extremely powerful... if you can learn how to use it.
I wouldn't be signing up to write a new OS kernel in Go, but for a userland program that doesn't have hard-RT requirements? I think Go strikes a better balance compared to C, C++, Java, Python, and others.
> The benefit however, is that it's much harder to write "clever" code, which translates into being harder to write unmaintainable code. If you know Go, you can sit down at nearly any Go codebase and know what's going on pretty easily.
I don't think this necessarily follows. I don't think "clever"ness is what makes code unmaintainable. You can still create a tangled mess in Go just as easily as other, more expressive languages. In fact, I think it's more likely in Go simply because you _have_ to write more code in Go. More code is more maintenance. Go encourages you to repeat yourself and to not create / use abstractions. Nil pointers are still a thing. It's incredibly easy to accidentally shadow a variable and not handle an error. These are all solved problems in other languages, yet they persist in Go, and they are all maintenance headaches.
> I don't think "clever"ness is what makes code unmaintainable. You can still create a tangled mess in Go...
It's not the only factor, but it is certainly _a_ factor: "Clever code" is typically meant as derogatory. Fancy abstractions for simple use cases is a common cause of tech debt IME, and harder to unwind than under-abstracted code (tedious as that is to fix).
> It's not the only factor, but it is certainly _a_ factor: "Clever code" is typically meant as derogatory
Yep, but here's the thing - if the problem is misuse of the tool, then the solution should be to not misuse the tool. Saying "we will remove features because some people misuse them" is like saying "we will remove headlights from cars because some people drive irresponsibly[1]".
Go remains my language of choice if I have to build efficient webservices, but I really feel sad when people make it sound like Go is awesome because it misses language constructs. No! It is awesome because it is FAST (quite near C) but it is very easy to do simple things like writing http servers, parsing JSON and so on (like Python et al).
Overall, I agree with this sentiment, but it doesn't mean Go couldn't be improved without going all "fancy". Rust isn't the paragon of type systems.
For example, I'd look to languages like Pascal, Modula, Oberon (the last two an important influence on Go in the first place), Ada, and Nim for how to provide practical mechanisms that don't descend into Haskell/ML-style type-system magic. Several of those languages have also implemented generics successfully.
> Golang is less expressive than other languages, like Rust or Haskell. You can't create fancy custom types and things like that. Of course there are drawbacks.
You can absolutely do whatever you want at runtime with reflection. The problem is that the compile-time type checking possibilities are poor in contrast. It's exactly like C in this regard: you can't type-check a linked list in C at compile time. Go has typed arrays, but you get my point; replace arrays with generic stacks, generic circular buffers, or what not. This isn't "fancy types", it's basic CS.
I sometimes wonder if Go is a long-running joke by Google.
Like someone internal at Google said people will use and adopt en masse anything they make, regardless of how good it is.
Someone else said no way, a bet was made, and Golang was released. Ignoring 20-30 years of programming language theory advancements. Yet everyone laps it up.
And here we are today. I'm sure someone at Google is laughing.
I find Go completely tedious and inelegant to work with; it's too verbose, and I think verbosity hides intention and expressiveness.
I think the sense that this is a "Google language" is exaggerated. What it is is a language created by a certain group of people at Google with Rob Pike at its origin, and they would have created this language wherever they were working. It's a continuation of the work they did on Limbo/Inferno, etc., not something adapted for Google's needs, as it is sometimes advertised.
So I'm in no position to criticize Rob Pike, he's a far smarter person than me who has achieved a lot more than I ever will. But I also don't think Go solves any problems that I have, and my one experience with going through Go code review at Google turned me off the language probably forever.
While hard to say quantifiably, from what I’ve seen almost any new backend service is written in Go (apart from ML-related, which Go doesn’t support). Given that many-many older services were implemented in C++ and Java, these still dominate.
Weird quirk of using Go at Google: any backend service at Big G mostly copies protos around, and in Go you end up with these many-page inline proto initializations, which are unreadable.
Google has a "readability" restriction for reviewers/authors on CL (changelists, I guess like a pull request). You need a reviewer with readability in a given language to approve your CL before it can land. When I first started, to get readability involved putting together a good archetypal CL in the language, showing good knowledge of the language, lots of tests, good documentation, etc. and then it would go into to a readability queue, get reviewed, and you'd get your readability capability.
Go was different in that they had introduced an incremental readability process. Instead of one CL, you had to do many, each reviewed by different people, and over time after doing a whole bunch of CLs you could get readability. This is the process that Google eventually adopted for all languages, but they started with Go.
What I found was the Go reviewers were a) exceptionally pedantic [even by Google standards] but, worse, b) very contradictory. One would chastise me for using channels where they were unnecessary, so I'd remove their use and just use simple functions, but then a later reviewer would scorn me for not using channels in the same code. It became a painful game of tennis. And the whole thing had a feeling of ideology over practicality.
Googler here. I somewhat agree and have often wondered about perceptions of Go's popularity within Google. It seems like teams at G are much more likely to choose Java or C++ for new projects, at least outside the SRE org. Meanwhile Go's reputation is tightly linked to Google.
> Ignoring 20-30 years of programming language theory advancements. Yet everyone laps it up.
I’d argue Go didn’t ignore those things; it just made a different set of trade-offs than most people are used to or expect from a modern programming language. The rationale for this is well explained in https://youtube.com/watch?v=rFejpH_tAHM&t=306.
> I’d argue Go didn’t ignore those things, it just made a different set of trade-offs than most people are used to or expect from a modern programming language.
Another beast that had Google's branding on it, had all sorts of technical awfulness, and was not popular within Google either. Used for a few projects, but nothing consumer facing other than some ads products.
> But the people who design and build bridges, they're great at it. Bridges get built on time, on budget, and last for dozens, hundreds, even thousands of years. Bridge building is, if you think about it, kind of awesome. And bridges are such a common occurrence that they’re also incredibly boring. No one is amazed when a bridge works correctly, and everyone is kind of amazed when software does.
Emphasis mine.
Sorry, source needed. You're a software developer, not a bridge builder. Yes, bridges in my country seem to work, fair enough. But I haven't the faintest clue whether the design/building process really goes that smoothly. To write this down as a fact to build your argument on is, IMO, a pretty big assumption.
Especially if you consider the state some infrastructure is in around the world. I'm impressed to see the occasional aqueduct from 2000 years ago in Europe, but I also see the occasional modern bridge collapsing in the same country (https://www.theguardian.com/cities/2019/feb/26/what-caused-t...).
From what I've heard and seen, infrastructure in the US (roads, bridges) isn't exactly the example for "build once, use for thousands of years".
And that's not even considering how many of those road or bridge building projects really are in time and budget.
I am a US citizen with a bunch of civil engineer friends who are Professional Engineers (the capital letter, licensed kind). For a good number of years I had a roommate whose job was literally designing bridges. We've talked about this very comparison. Bridge building is not as boring as this author thinks. It can be pretty stressful.
There are a few factors that make bridges very reliable (but not perfect) in the US. A real engineer can probably describe this far better than I can.
1) Physics and material science are pretty well known quantities.
There is innovation and new materials come to market, but by the time they are approved to be used in real bridges their qualities are fairly well known and tested. They have solid evidence of the capabilities of a material or part. (Tensile strength of an item under various environmental conditions, for example).
2) Standards & Very Firm Requirements (Where waterfall works!)
They have a set of strict approved industry standards to rely upon. Not the "industry standard by convention" kind that you see around IT a lot, but the "tested, reviewed, approved, published by a standards body" kind. When a civil engineer is assigned to build a specific bridge they reference these standards that tell them "given this type of ground do this", "given this length of span do that". They know the lengths and widths, they check the traffic patterns, they know the desired weight limits. Challenges do come up, yet they have fewer unknown unknowns. Something changed from their expectations, maybe the ground is different than expected, but they still have standards that guide them in what to do in those cases.
3) Reviews, reviews, reviews.
Other Professional Engineers have to sign off on the work of the initial designer(s). Deep reviews. Not what typically passes for software architecture and code reviews I've seen.
4) Legal responsibility. Professional Engineers have an amount of personal accountability at stake because real lives are at stake if something fails.
5) "User" common sense and intuition
Nobody builds a 100,000 ton vehicle and then tries to drive it over a bridge. Most any human that thought about doing this can probably reason that doing so would break the bridge. In my experience software users have no idea that trying to push a 1TB file into a system that was designed to only handle 1MB files will cause it to fail. They often have no idea when their data volume increases. "I'm doing the same thing I did last month!"
All that and more, yet bridges still go over budget and over time and fail and people die.
I agree with this. I live in Austin, TX, and we’ve constantly got five to ten big construction projects going, both in the private sector (new buildings) and the public sector (usually roads and other transportation infrastructure). It’s certainly possible I just haven’t been paying good enough attention, but it seems like the vast majority of such projects that have been completed over the last six years have been over budget and behind schedule.
Being boring really is what I love most about Go. It doesn't change much and there is a culture of simplicity, so it's quick to onboard new devs and there isn't a lot to get your head around. That also makes it good for building large projects, as the complexity is in the domain and the application, not clever uses of the language itself.
I almost wish they'd make it more boring and take a few things out, but it's probably too late now to change the language much and people would complain vociferously. Things I never use and wish it didn't have: panic, goto, labels, struct tags, executable comments, even arrays.
The only things I'd like to see improved significantly are enums, errors, and user-space generic collections (coming soon!).
You and I must experience time differently. I read a couple days ago that the end of 2022 would be the earliest they might appear but probably further out than that.
I've been using it about 8 years at work, and don't mind if they take another year, it'll take a while to work into large code bases anyway. Personally I've felt generics as a lack but not a huge unfillable hole, and prefer language stability to churn, so am completely fine with it taking time to add large features. I'm pleased with the evolution of the generics proposal so far - it's getting simpler and more transparent.
I really wish in their mooted Go 2 they could just remove some features, as listed above, rather than add some. Perhaps everyone prefers a slightly different subset of the language though so that isn't really possible.
Boring technology that does great things has been a huge part of my career for the past eight years at Precision Nutrition. With that said, the following opinions are mine and mine alone.
Here are some boring choices we made that paid dividends:
1. Postgres. It does what it does, it's stable, it's reliable, and continues to improve as time goes on.
2. Ember. It's by far the most stable front end framework going. It's made our job easy in uncountable numbers of ways.
3. PostCSS. Write plain CSS, use modern features, remove plugins as browser support grows.
A few years ago I'd have said Rails was a boring technology we chose too. These days I'd say it's a bit of a thorn. It's clear that some of the magical choices made make long term maintainability challenging, as are some of the performance characteristics. A few of the libraries used by the community have bitten us hard too insofar that they're either a maintenance nightmare, poorly maintained, or exceptionally difficult to migrate away from (e.g. CanCanCan, Active Model Serializers).
These days I'm playing with Elixir, which is also boring but allows you to do exciting things. I especially appreciate ExUnit having come from the RSpec world — assert, refute, end of story.
I'm a frontend dev and amateur solo founder interested in tools that empower me to build more with less effort. Should I finally bite the bullet and learn Postgres/SQL? I have the (ill-informed) impression that it's better suited for large and complex systems whereas I'm mostly building SaaS and simple e-commerce apps.
Aren't there usually full-time SREs focused on managing and maintaining SQL databases? I can get really really far, really quick with Cloud Firestore (for free at my scale), and stay focused on my frontend. But as complexity grows and requirements change I can sometimes find myself in sticky situations-- mostly because there are no tools for schema migration in Firestore.
But on the other hand, with Firestore and similar NoSQL products I can get scalability, security, speed, stability and a lot more all out of the box.
Not OP but you will get far with Amazon RDS or another hosted Postgres offering if you can afford it.
I'm always terrified of ecomm using NoSQL for carts and transactions, but I'd say that's overblown on my part; I just prefer to rely on the durability of a SQL database, using guardrails at the schema level to keep my data correct. Either tool is easily misused, but Postgres really can handle a lot. It just doesn't get the shiny attention some other hosted services do, because it doesn't over-promise things you probably won't need. That said, you can do really slick NoSQL-style stuff with Postgres HStore and JSONB columns.
There are a lot more data-store-as-a-service offerings for NoSQL but I don't find any of them particularly beneficial for "plain" apps but agree you can get started a lot faster. Firestore is a lot of fun when you need things to sync between clients and there are tools to do the same / similar with Postgres (noted below).
There's a difference between operating the database [SRE type job] vs consuming a database [developer]. They're both very valuable skills, but learning the basics of SQL [i.e. being a consumer] is a no brainer. The skills will serve you well for the rest of your career. It's not something that changes every few years like frontend frameworks do.
This is my favorite learning material for ANY programming related skill ever: https://use-the-index-luke.com/, and it's all about SQL queries.
At the start of this year we talked to 50+ developers. Many of them wanted to use Postgres, but chose Firebase because it was easier.
We created Supabase specifically to solve this pain point. It’s just Postgres, with a nice UX and features. You get direct access to your PG box so you can modify it in any way you want. We’re also open source, so there is no lock-in.
I hope you try out PG (regardless of the hosting solution you choose). It's an amazing tool.
> Among compiled languages, garbage collection is rare
??????
> Rust's borrow checker is a fascinating way to get high performance and memory management, but it effectively turns the developer into the garbage collector, and that can be hard to use correctly
Isn’t the whole point of rust that you cannot get memory management incorrectly because compiler guides you?
> all Go code is formatted the way that go fmt says code should be formatted.
just like rustfmt, elm-format, prettier and I hope most modern or future languages
I agree with his main idea, I also like the idea of a boring language, but then his arguments miss the point completely, talking about benchmarks and such.
He should talk about why having no exceptions can be a good thing, or having no generics.
Having a very small API surface and a stable language is one of the reasons I love Elm as well; nobody is able to build too-smart abstractions with monads and such. It's about code, readability, explicit better than implicit, etc. — nothing to do with garbage collecting, performance, blablabla.
I worked at a shop where there was some Ruby code and autoformatting was argued against from one of the senior tech leads because "[ruby] wasn't designed for an autoformatter. go was built that way"
Absolutely blew my mind. I love go fmt, prettier, black, etc and cannot understand why people still love to argue over styling at this point. I get that he was saying the formatter ships with Go but I don't subscribe to the idea that you can't use an opinionated formatter because the language authors didn't make one.
I can kind of understand it. If it's a first class part of the language, then it's never going to get painted into an ugly corner.
As a nearly related example, the LESS css pre-processor language was developed prior to `calc()` expressions. As a result, the LESS language parses arithmetic expressions. So `calc(10em + 10px)` compiles to `calc(20em)`. So in order to do `calc()` expressions, you have to use nasty hacks like `calc(10em ~"+" 10px)`. If LESS "shipped" with CSS, this wouldn't have happened.
I don’t think it’s about being opinionated so much as being configurable. If you can change its behavior then sooner or later you will find someone who insists you do, and then you don’t actually have a standard anymore.
I like the authoritarian approach of go fmt and the strong idioms in Golang generally. Maybe because I lost many an hour arguing about the correct way to configure PerlTidy, only to have my hard-won righteous perfectionism destroyed when everyone joined the Cult of Moose.
SML, D, etc., are compiled GC languages but not in wide use. These days you can make a good argument for .NET/Java, although the traditional implementations are not compiled.
The compiler does “guide you” in memory management in Rust, but its “guidance” consists largely of refusing to compile buggy programs. Getting the program to compile e.g. doing the memory management correctly, is still a difficult matter, although this varies a lot by experience and program requirements.
To put it another way, if you are interested in shortening the time between writing a program and having a crash-free version, Rust will improve your situation. Although it will be a long time either way, because bug-free programs are hard to write. If on the other hand you wanted to get something running on your machine by this evening, Rust will spend a lot of time complaining about issues you are unlikely to immediately encounter.
> Nope, manual memory management can be done properly and this is not rocket science.
Outside of NASA (often, literally rocket science!), where has manual memory management ever been done properly? I don't think that it ever has. For that matter, I would be unsurprised if even NASA had manual memory management failures.
I think this also ignores that video games are _vastly_ more complicated than the space shuttle - from a "computers" perspective. They did some very impressive things with some novel and primitive computers, but the amount of "state" being kept track of and data flowing through those computers is tiny.
Obviously this is all a matter of opinion but I don't consider a lack of exceptions, enumerations and generics "boring", I just consider them to be missing features.
Certainly there's an argument to be made for simpler languages being better (maybe we should all be writing in Assembly?) but I'm personally not convinced.
I hear these sorts of arguments often, as someone who likes both Go and C I do have some responses: I would hardly call exceptions in C++ a "feature" but more of a "liability." Enumerated types (probably referring to sum types here) have trade offs that are not necessarily clear from simply reading the code. Generics often lead to code which is (IMO) difficult to follow, debug, and reason about. The point about assembly is jest? Or dishonest? I would argue that C, C++, Go, Haskell, etc etc are "simpler" than assembly, if they weren't we probably would be writing assembly, and I suppose following that logic maybe we'd be hand constructing opcodes with a two key keyboard.
Terraform is I/O bound on cloud provisioning APIs (its CPU performance doesn't matter), and its complex object models need to be extensible. Python would have been a better choice.
As a former maintainer of Terraform, I 100% assure you that Python would have in no way been a better choice.
While Terraform is I/O bound to some degree, it has a large degree of concurrency - something quite miserable to do in Python.
Terraform does not take full advantage of even the type system available in Go, but a dynamic language would make things even worse. I do not think Go is the ideal language for programs like Terraform, but having spent many thousands of hours in that codebase now, the answer is stronger types (likely Rust), not fewer.
Go is like C. AKA roll your own unsafe type system at runtime with void pointers (or interface{} in Go). This isn't acceptable for a language released 10 years ago. On one hand you have a rigid type system very limited in terms of polymorphism; on the other hand, with interface{} anything goes, even creating ad hoc functions at runtime.
Go should have taken cues from Ada and how it implements generics. There is enough polymorphism to solve most cases where generics are useful, but also enough limitations so that a codebase isn't cluttered with `Type A<Type B<C,D,Type E<F,G,H>>>` horrors everywhere.
My favorite thing about Go is that I can jump into any codebase, written by any developer, and the mental load required to unwrap their logic is very low.
The "stylistic opportunity" for other languages is such that I can usually tell which developer on the team wrote what just at a glance. Not so with Go. It's all the same bland but beautiful gray.
Something to be said about languages that help external people understand them, as opposed to languages developed to improve developer satisfaction.
Re. the bridge analogy in the article: we had a bridge being built behind our building at work and they had announced the opening date a year in advance. Sure, when the date came the bridge was opened exactly on time. I was walking with my colleague across it and asked him, "How come we can't ever predict when we'll be done?". His answer was deeply insightful: "They build the same bridge every time while we build a different one".
The manual memory management thing -- like needing GC -- feels like a false dichotomy. Most of these languages that have GC need you to "manually" manage resource lifetimes anyways for a lot of resources: files, network connections. If you're making a UI, you think about when components are mounted and unmounted. The logic for all this is domain logic and part of the code for your program. Memory is just one such resource, and I think there should be more acknowledgement of it as such by languages, instead of just piling it up and kicking a "GC cycle" to clean it up whenever, leading to pauses in interactive programs and whatnot (Go's GC, and Go in general, do not perform well under WASM in my experience).
In Go you have to "manually" defer a file.Close() call -- vs. the destructor getting automatically called in C++ (it even gets automatically called when a containing object is destructed, if it's a member, and so on). Go's version doesn't seem very "automatic" to me.
The simplicity of Go actually hides a lot of runtime complexity that is happening: the GC, map lookups, scheduling of goroutines and so on. Which you have to understand and then ultimately learn how to tune from a distance (because you don't have direct control) for a lot of applications where it matters. For server applications (where the hardware is under your control) and also CLI tools (which can just run, only allocate memory, then just exit), this matters less. If your code is running on user's devices as an interactive application, it feels to me like how it performs on and uses their actual hardware is part of your responsibility as the developer, and some things about Go make that harder to do. Like you have to think about the GC and try to avoid it ("from a distance") if you run into that (which you do quickly in WASM, and which eg. Gio tries to explicitly design around, and so on).
I want to see languages that care about simplicity and also include such deterministic resource management as part of their design and offering.
> In Go you have to "manually" defer a file.Close() call -- vs. the destructor getting automatically called in C++ (it even gets automatically called when a containing object is destructed, if it's a member, and so on). Go's version doesn't seem very "automatic" to me.
To clarify this one point, Go allows you to attach a destructor (finalizer) to an object, but since the language is garbage collected, there’s no guarantee of when the destructor will be called, if ever, since disabling the garbage collector is entirely possible, even if not advisable.
The File object will Close the file when the Finalizer runs, since the standard library attaches a finalizer to it, but you should defer the Close to deterministically ensure that the file is closed when you expect it to be closed, and to free that OS resource of a file handle as soon as possible, since some OSes limit how many you can have at a given time. Finalizers also won’t be run when the program exits, which is fine for most things, but you’ll want to have ensured any buffers were flushed before then, and defer can guarantee that.
Languages like Rust and C++ have deterministic destruction at the end of a given scope thanks to RAII, so it makes sense for them to lean more heavily on destructors there.
Java File*Stream objects should have “close()” called on them, from what I’m seeing on a Google search, so this doesn’t seem to be a surprising pattern for a garbage collected language. I know in Python you’re encouraged to use a “with” block to ensure that file objects get closed.
One of the questions I have always wondered about is how financial companies work with math libraries in any language other than Python - which not only has the entire math/statistical software mindshare, but pandas/scipy + BLAS/LAPACK is optimised to the gills in Fortran.
Go is on average about as fast or as slow as Java, with different strong and weak points. But there is definitely a lot of optimisation potential; the compiler is only "so so" in that regard.
In my experience Go uses less RAM than Java, but perhaps it has absolutely nothing to do with the language but more with how libraries are written. I rarely see fixed sized arrays used in Java for instance. It's common in Go. When it comes to speed, on a running server, Go isn't faster at all at scale, and DB drivers tend to be less mature for certain RDBMS.
Go's GC has always had pretty low throughput compared to a JVM, but with value types (which Java doesn't have yet) it can be faster if you manage to keep everything on the stack.
Right, thank you for this correction! I've just spent some more time with Go, looks like I was wrong and it has something closer to Type Classes (or Traits), but not subtyping.
Turns out I was wrong (and corrected thankfully), Go does not have subtyping, which is good news (at least it makes the language way more tolerable for me).
IMO, the thing that makes go good, from an enterprise point of view, is that you can take a team of mediocre coders who have never used it before, and deliver working code quickly. It might be ugly, it might be dumb, but it'll get the job done, and will generally be easier to maintain than python or ruby in the long run and with good performance.
I think a few things contribute to that:
1) Relatively few language primitives -- it takes like a day to learn 90% of the keywords you'll be using day to day.
2) Transparency -- it's easy to look at any code base, and track down all the relevant code, even without an ide. Compared to a language like ruby, there's not a lot of 'magic' happening.
3) Conciseness. Unlike a lot of statically typed languages -- go code isn't verbose. It's statically typed, but the inference works well enough that you're very rarely having to use type hints all over the place -- Java has started to be more like this, but back when go started, it was a breath of fresh air.
4) Kubernetes. If you want to build on top of k8s, go is pretty much the only game in town. It just makes everything easier. Other languages have k8s support, but any language other than go will have more friction.
5) Go channels and concurrency -- I actually don't think this is that important, but knowing that it's available if you have that kind of performance demand is nice.
> Conciseness [...] you're very rarely having to use type hints all over the place
This hasn't been my experience. When I tried porting some Python code to Go, I found it wouldn't compile without lots of type casts (mostly between different integer sizes and signedness).
My experience moving to Go from languages like Python and Ruby required a significant mind shift. I couldn't port my solution - I had to revisit the problem from the perspective of a Go programmer.
I think that is because you're coming from a weakly typed language to a strongly typed one. If from the outset you make your types match, you don't end up with so many casts.
It really shouldn't need any type annotations. A Hindley-Milner type system would have been more flexible and require no type annotations (as well as far less interface {} and unsafe casts), and still support everything Go has. The language as a whole seems to just commit all of the mistakes of C, except with garbage collection.
I wonder what the actual statistics on bridge building are. Are they really typically done on time and on budget? How many times do the drawings or engineering specifications for a specific bridge have to change while getting built to adapt to new information?
We trot out this particular analogy all the time in software engineering and I can't remember ever seeing any real data to back up the assertions.
From having lunch with guys from the mechanical engineering department (they were right next door at uni) my anecdotal second-hand wisdom is: Most larger engineering projects are as broken, tedious, expensive and unpredictable as most software projects. You will always find unsteady ground that nobody told you about, new last-minute requirements, shitty subcontractors making a mess and substandard parts that happened to be cheapest on the tender. From what I've heard, around 2 in 3 projects are over budget or over time in a significant way because something was massively screwed up somewhere and needed to be hotfixed, patched, rebuilt or -my favourite- reevaluated to be statically sound without structural changes.
I think the common software engineering myth about "real engineering" being somehow better and something one should aspire to is just really a myth.
...functions returning multiple values, confusing operators that are different 'just because', types on the right-hand side instead of the left 'just because'. I don't think GoLang is solving as many problems as it creates. It's different because being different is cool, not because it solves any sort of practical problem.
If I were going to go learn some strange non-C-like-outer-worldly-syntax, I think your effort is best spent learning Rust, which is way faster, offers some strong failure guarantees, and fills a lot of use cases from embedded systems to WASM.
It strikes me how this gives a value proposition for go that is very similar to COBOL: Easy structure, predictable to build and run, minimize individual coding style differences,....
Not necessarily disparaging for either language, as COBOL, boring as it is, has brought tremendous value in large scale software engineering.
There are of course big differences too, like i18n, variable width strings, ... But if you give COBOL a C-style syntax, some dynamic-size data structures, and you squint a bit, you end up quite close to Go.
But is Go boring? I'm less interested in language syntax and semantics than I am about the runtime. I mean this generally and with all languages. Yes Go the language is simple and boring but the runtime, to me at least seems somewhat magical. By that I'm really talking about the concurrency. A lot of people sell Go by talking about Go routines but to me it's the one thing that makes me wary of adopting it. I'm very uncomfortable just blindly using concurrency features and not really knowing what's going on underneath.
Immutability is much more difficult to reason about except for some very mathy problems.
It is extremely natural to think in terms of state, so much so that almost all algorithms you'll find in computer science are commonly expressed in imperative terms.
In a concurrent language, shared mutable state is the trap that Hoare's CSP book was urging us to avoid. Worse, in Go builtin maps are always writable yet parallel writes and reads can segfault. There's a sync.Map type but it isn't a drop-in replacement (because builtin maps don't implement any interfaces) and it isn't typesafe (because no generics).
Yes, Go has very bad primitives whichever way you look at them. I still think you can't say a language with built-in immutability is more boring technology than one with built-in mutability.
There's nothing stopping you from using Go's maps without mutation. You can go ahead and send copies of maps through channels, just like you would if they were really immutable. Sure, the interface is clunky, but so is every other interface in Go, so I don't see too much reason to complain here...
It is fine to use local mutable variables to implement an algorithm, however global objects modified all over a complex program can be risky - you might have bugs caused by getting into an "unexpected" state.
And what does that have to do with everything being immutable by default? Either the language supports mutability as a first-class concept, or it doesn't. Sure, you shouldn't be using mutable global state, either in Go or in Haskell.
The bad part about Go is that it's impossible to write type-safe immutable collections. Apart from that, I don't think it encourages mutable global state anymore than any other language with first-class mutation.
Maybe, Idk who that appeals to.. But Go lacks the even most trivial of array / slice manipulation functions. Is this loop an index() or apply() or filter() or count() or sum() or wutt? Don't even talk about generic data structures.
To be fair, Go authors are pretty smart having created something more usable than C while not even seriously looking at how things are done in other languages.
If you were trying to choose between Go, D, Nim, and Zig for a general-purpose compiled language, what are the criteria for differentiating and what are the pros and cons of these languages? Are there other languages that should be included in this list? (Common Lisp?)
(FWIW, chief on my list of criteria is the "debug story": what kind of hell am I in when things inevitably go wrong?)
I often try to jam OCaml in that list based on my own interests, and people also suggest Crystal a lot. To me Nim is maybe the most interesting to the widest audience.
To make things interesting, can we add the following choice: wading chin-deep over a half-mile wide tract of land flooded with raw sewage, to reach a C compiler.
There's been other boring, fantastic languages... Lua comes to mind. There also always seems to be one or two "crippling" missing features for these kinds of languages as well. I actually think it's a good thing! We should stop adopting one-size-fits-all thinking.
I think the main advantage of Go is its learning curve. The standard lib is quite small, so even with no experience, in a few weeks you could be fully productive on any Go code base. With e.g. C, Rust, Java, Ruby this would probably take months.
While the article overall might not be totally wrong, at least two details are dead wrong.
Go DOES have a virtual machine; that's what gives you all the lightweight threads and garbage collection. You may call it a "runtime", but there is no difference between a heavyweight runtime and a lightweight VM.
And Go DOES have exceptions; that's what panic does. They are just used less frequently than in other languages (e.g. Java, where even the happy path may contain exceptions, e.g. in Swing).
A "runtime" is not a virtual machine any more than the standard C library is a virtual machine... it's clear what virtual machine means in this context: an abstraction layer between machine code and program code that translates one to the other at runtime.
Yes, but here they don't. A VM is a non-physical machine that executes instructions. Nothing more, nothing less. And yes, this means that even a library might be considered a VM. I don't see why people get so wound up about that.
Because half of your point is nonsensical without that stretch of the meaning in regards to VMs.
Go programs do not run on a VM. They are compiled for a specific CPU architecture and a specific OS; you would need an actual VM to run the same binary on something different. That is unlike Java, where the program itself targets a VM.
I would argue the other half of your point is nonsensical in the same manner. You just conflated two different definitions of exceptions, which are similar and have intersections, but are not identical.
> there is no difference between a heavyweight runtime or a lightweight VM.
There's a big difference. A runtime is just functions that get called, but a VM means that you have bytecode that it's interpreting instead of native code.
Go's error type is something declared as being returned from a function, there's no try-catch with an exception being handed up a call stack. That's like saying ASM has exceptions because you can leave an error code in one of the registers.
The VM bit doesn't make sense to me at all. By that logic any language which implements a GC or has a native support for threads constitutes a VM?
There's no difference between a runtime and a VM? As an embedded developer I would argue that there's quite a difference between the two.
edit: Clarification on error type being "always" returned from a function.
> Go's error type is something declared as being returned from a function, there's no try-catch with an exception being handed up a call stack. That's like saying ASM has exceptions because you can leave an error code in one of the registers.
But Go's functions "recover()" and "panic(...)" are very much like "catch" and "throw" and are all about the callstack. There's just no "try", because this special snowflake implementation of exceptions works with slightly coarser granularity than is typical (function granularity instead of block granularity), since it's living under the same roof as the error type.
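The mechanics being compared to catch/throw, in sketch form — a deferred function intercepts the panic as it unwinds the call stack:

```go
package main

import "fmt"

// safeDivide uses recover the way try/catch is used elsewhere: the
// deferred closure catches the panic unwinding the stack and converts
// it into an ordinary error return.
func safeDivide(a, b int) (result int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return a / b, nil // panics with a runtime error when b == 0
}

func main() {
	if _, err := safeDivide(1, 0); err != nil {
		fmt.Println(err) // the divide-by-zero panic, caught as an error
	}
	n, _ := safeDivide(10, 2)
	fmt.Println(n) // 5
}
```

Note the granularity: recover only works in a deferred function, so the "try block" is always a whole function, never an arbitrary statement block.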
Coding Go is like an old bridge. Because almost all Go code will be removed or replaced in short order. And only a very few survivors will still exist years from now to feed the confirmation bias of "history."
That analogy should go the way of most old bridges.