Very interesting ideas in there. Especially the thought of an upcoming Go 2 perhaps being done in such a compatible way that "Go 2" is actually "Go 1.1X"
He also hints that it would likely only be done to address major issues/concerns/changes such as:
* generics
* modules (perhaps done? time will tell)
* errors (the `if err != nil {}` boilerplate, extra type information, chaining, etc.; see the sketch after this list)
* context
Still too early to tell if anything I listed or he listed will be part of "Go 2" or whatever it ends up being called.
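For the error-handling item, a minimal sketch of the boilerplate in question (the function and file names are made up for illustration):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// loadConfig is a hypothetical helper; note the same three-line check
// after every call, with error context strung together by hand.
func loadConfig(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("open config: %v", err) // manual "chaining"
	}
	defer f.Close()

	data, err := ioutil.ReadAll(f)
	if err != nil {
		return nil, fmt.Errorf("read config: %v", err)
	}
	return data, nil
}

func main() {
	if _, err := loadConfig("app.conf"); err != nil {
		fmt.Println(err)
	}
}
```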
> Very interesting ideas in there. Especially the thought of an upcoming Go 2 perhaps being done in such a compatible way that "Go 2" is actually "Go 1.1X"
The person you replied to is quoting the slides that posit that generics are a consideration for Go 2.0, so your spittle-laden post betrays your own bizarre hangups.
"Cue hate, give me c# 1.1 anyday"
You brought the hate. Take it when you leave. Thanks.
Stability is good if what's stable meets your needs.
The more they stall on adding stuff like generics, the more painful it will be (and the more work it will take for the tools and ecosystem to catch up) when they eventually cave in and do it...
And there will always be all the workarounds, third-party solutions, and wasted effort that could have been avoided if something that could have been delivered early on (e.g. package management) hadn't come a decade late for no good reason.
At least now they've come up with a module/dependency system, which, even though a decade late, doesn't bring anything extraordinary to the table compared to what other platforms already had for ages.
Generics and error management remain to be addressed before it makes sense to make the language stable and boring for eternity...
When I think of generics I think of custom data structures. Currently you are stuck with interfaces, but you can also wrap a library returning interfaces with methods operating on a specific type.
I would be interested to hear about use cases that couldn't be solved in this manner.
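To make the wrapper idea concrete, here is a minimal sketch, assuming a hypothetical library type Stack that traffics in interface{}:

```go
package main

import "fmt"

// Stack stands in for a library container that stores interface{} values.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }
func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

// IntStack wraps Stack with methods operating on a specific type, so the
// type assertion is written once here instead of at every call site.
type IntStack struct{ s Stack }

func (is *IntStack) Push(v int) { is.s.Push(v) }
func (is *IntStack) Pop() int   { return is.s.Pop().(int) }

func main() {
	var st IntStack
	st.Push(42)
	fmt.Println(st.Pop() + 1) // 43; callers never see interface{}
}
```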
"Solved" by having lots of meaningless unsafe glue code. That's not a solution.
It sounds like you're going to reject proposals for generics on the grounds that "you could just cast values of type `interface{}`" unless someone can show you a situation where you can't do that. The problem is that the people who want this sort of feature don't think that's even a solution.
You cannot have a custom collection class containing objects _only_ of a certain type, for example ConcurrentHashTable<Int, String>. That is something that is used every single day. It is used so frequently that Go has generic slices and maps built into the language. That is, instead of creating generics, they created a language with special-cased generic data types that cannot be written in ordinary Go precisely because of its lack of generics.
It is a bit like asking a Java programmer to use Java 1.4.
When you ask a Java programmer, or a C# programmer or Haskell programmer or Swift programmer or... what they would change in the language, you do not get the answer: remove generics.
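To make the collection-class point concrete, a hedged sketch (the Table type is made up): the built-in map is type-checked at compile time, while a user-defined container must fall back to interface{} and runtime assertions.

```go
package main

import "fmt"

// Table is a user-defined hash table; without generics its keys and
// values can only be interface{}.
type Table struct{ m map[interface{}]interface{} }

func NewTable() *Table                         { return &Table{m: map[interface{}]interface{}{}} }
func (t *Table) Set(k, v interface{})          { t.m[k] = v }
func (t *Table) Get(k interface{}) interface{} { return t.m[k] }

func main() {
	builtin := map[int]string{1: "one"} // the compiler rejects wrong types here
	fmt.Println(builtin[1])

	t := NewTable()
	t.Set(1, "one")
	t.Set("oops", 3.14)    // compiles fine; no type error
	s := t.Get(1).(string) // the caller must assert at runtime
	fmt.Println(s)
}
```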
>I dig that it's a few extra lines of code, but I guess I'm looking for a really mean example.
Well, if you don't care about repeating yourself, mistakes by omission, needless boilerplate, and/or loss of type safety, then, sure, everything is possible in "a few extra lines of code" in a Turing-complete language...
Back when Borland C++ 2.0 came out for MS-DOS, templates were still being designed.
So the first version of BIDS, Borland's C++ data structures library, used pre-processor tricks where one defined the types before including the respective data structure.
Oddly, I have pushed back on many devs introducing generic interfaces. For library writers, they can be nice. For everyone else, they typically just get you in trouble.
So, I don't want to remove them from the language. However, I do question a large percentage of their uses in everyday programming.
> You cannot have a custom collection class containing objects _only_ of a certain type, for example ConcurrentHashTable<Int, String>.
I'll start by saying - I largely agree with you. Libraries, particularly of data structures, are the biggest weakness of Go as a language.
However, this bit is not really fair:
> When you ask a Java programmer, or a C# programmer or Haskell programmer or Swift programmer or... what they would change in the language, you do not get the answer: remove generics.
Much like values, language features need to be compared to each other, not evaluated in isolation.
Everyone likes loyalty, but some people value it over honesty and some value the reverse.
Everyone likes generics, but some people value faster compile times, a simpler language implementation and a lower learning curve more. Some people value them less.
If you value compile times (and you actually think generics make a difference; I do not), you would not make slices and hash tables generic like in Go, would you?
If you wanted a simpler language, you would not make slices and hash tables generic, would you?
If you wanted a lower learning curve, you would not make special-case generics, would you?
If you wanted to wait for the perfect generics solution because a sub-optimal one is not good enough, you would not create a sub-optimal, non-perfect temporary solution with special-cased generics for certain data types, would you?
If you argue that I am loyal and not honest, and therefore like generics because it is used in my favorite language, I think you are wrong.
The languages I use most of the time have a huge design flaw. It is called null. But when they created Go they chose to (except in special cases) get rid of (useful) generics instead of the disaster that is null.
The problem is that Go has not learnt from other languages. It is too imperative, not expressive enough, lacking in static typing and horrible at fault handling.
>It's pretty difficult for generics not to adversely affect compile times.
Compared to what? Not having type-specific code?
Because if you manually write (or use the code-generation madness some propose to churn out) the type-specific code you need, then the compiler will take the same time to compile it as it would if generics were a language feature.
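A tiny illustration of that duplication: whether the copies below are written by hand or emitted by a code generator, the compiler still processes one specialization per type, much as it would with compiler-instantiated generics.

```go
package main

import "fmt"

// One copy per element type...
func MaxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

// ...and the compiler compiles each copy separately.
func MaxFloat64(a, b float64) float64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(MaxInt(1, 2), MaxFloat64(0.5, 1.5))
}
```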
The compiler also has to generate the specialized code. But I suspect that the main factor here is just that the presence of generics in a language encourages the use of generic programming to an extent that wouldn't be practical in a language without generics. Take e.g. the boost geometry library (C++). Theoretically you could write a similarly generic library in Go using code generation, and it would probably take ages to build too. Of course, no-one would actually do this.
I don't know anything about CLU. ML compilers are a mixed bag. OCaml (which is a different dialect, of course), has an acceptably fast compiler, but it's not near-instant like Go. Ada compilers are famous for being slow, at least historically. C# is, again, acceptably quick to compile, but not as quick as Go. I don't know about D. I've heard good things about Delphi's compile times, but no-one uses it now.
In that comparison you forgot the part about turning off optimizations, so that the code quality is similar to what the standard Go compiler spends its time producing.
The compilation speed drop is easily seen when using gccgo instead.
A side note on Delphi: it is still in relatively wide use in European enterprises, with a yearly conference in Germany.
And I could have mentioned other languages like Eiffel, with its mix of a JIT for development and AOT compilation via C compilers for final delivery.
No, I know all of the languages that you mentioned except CLU. And since I don't think there are even any practical CLU implementations available for modern hardware, I'm not sure how you are able to compare CLU compile times to Go compile times. Are you maintaining a legacy CLU codebase or something?
Now, if there are scientific comparisons available, I'll happily defer to those. But unless you spend inordinate amounts of your time comparing compile times between languages, I doubt that you have any scientific info to go on either.
> I don't know about D. I've heard good things about Delphi's compile times
Doesn't look like actually knowing them to me.
> Are you maintaining a legacy CLU codebase or something?
No, it's just that you are apparently stating that a CLU compiler developed to be usable on 1971 hardware would run slower than Go on 2018 hardware, which doesn't make much sense.
Talking numbers, D was taking 1.24s to compile its complete standard library in 2010, (too lazy to try out the latest version) including the piles of template code that it has.
> Talking numbers, D was taking 1.24s to compile its complete standard library in 2010, (too lazy to try out the latest version) including the piles of template code that it has.
Quite a lot of that time appears to be spent linking. I wonder if the post on the mailing list was reporting the literal compile time, rather than the build time?
It was a bit difficult to find a go project of similar size that was easy to build. The backend component of limetext is about 10k LOC compared to 35k for Phobos. When I do a ‘time go build -o foo’ in go/src/github.com/limetext/backend, I get the following:
    real 0m0.163s
    user 0m0.140s
    sys  0m0.207s
Multiply that by 3 plus a bit to compensate for the LOC difference, and it's still pretty good.
Still no idea what you are getting at with regard to CLU. We don't have any info on how fast it compiled.
> At least now they've come up with a module/dependency system, which, even though a decade late, doesn't bring anything extraordinary to the table compared to what other platforms already had for ages.
I don't think you fully grasp the breakthrough that the theory behind Go modules is for dependency management in general. It vastly improves on and simplifies anything that existed in this domain before.
It really does. Our codebase is very large and I'm a member of the team responsible for the build system and language tooling. Java library resolution in our repo can take 45 minutes from a clean build. Python is better, and we've managed to lock it down to offline-only sources, so libraries never update on us out of the blue. I've been a gopher since the first announcement, and I'm so excited that Russ Cox discovered a simple way to declare and resolve dependencies that avoids the SMT-solver nightmare I've seen bog down progress in every major project I've been involved in, from OCaml and Haskell to Python, Java, Rust and JavaScript (security nightmares every night here).
One of my greatest hopes is that I can convince my teammates it'd be worth the time investment to transition all our dependency-management ecosystems to the Minimal Version Selection algorithm.
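For readers who haven't seen it, here is a heavily simplified sketch of the core MVS idea (the requirement-graph encoding and the naive lexical version comparison are illustrative only; the real algorithm in the vgo papers compares semver properly):

```go
package main

import "fmt"

// Requirements maps "module@version" to the module@version pairs it requires.
type Requirements map[string][]string

// mvs keeps, for each module, the highest version that any reachable
// requirement asks for. It is a plain graph walk, not a SAT/SMT search.
func mvs(root string, reqs Requirements) map[string]string {
	selected := map[string]string{}
	seen := map[string]bool{}
	var visit func(modAtVer string)
	visit = func(modAtVer string) {
		if seen[modAtVer] {
			return
		}
		seen[modAtVer] = true
		mod, ver := split(modAtVer)
		// Naive lexical max; real MVS compares semver properly.
		if cur := selected[mod]; ver > cur {
			selected[mod] = ver
		}
		for _, dep := range reqs[modAtVer] {
			visit(dep)
		}
	}
	visit(root)
	return selected
}

func split(s string) (mod, ver string) {
	for i := len(s) - 1; i >= 0; i-- {
		if s[i] == '@' {
			return s[:i], s[i+1:]
		}
	}
	return s, ""
}

func main() {
	reqs := Requirements{
		"app@v1.0.0": {"lib@v1.2.0", "util@v1.0.0"},
		"lib@v1.2.0": {"util@v1.1.0"},
	}
	// util is required at both v1.0.0 and v1.1.0; MVS selects v1.1.0.
	fmt.Println(mvs("app@v1.0.0", reqs))
}
```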
It’s a slightly simplified version of Cargo which itself is an evolution of what yarn/npm/sbt/etc are doing. This is explicitly called out in the vgo paper.
It only seems like a revolution because the previous situation in Go was such a mess. As someone that writes a lot of both Rust and Go I’m not nearly as impressed.
>It vastly improves on and simplifies anything that existed in this domain before.
I've read the specs and the justification posts. I don't find that it gives anything better (and in some ways it's worse) than what can be achieved with, e.g., Cargo.
>It's like saying Copernican heliocentrism didn't bring anything extraordinary to the table compared to Ptolemaic geocentrism
Yeah, it's like saying that, with the only difference being that we're actually talking about something that brought nothing extraordinary here (as opposed to in the contrived example).
For all the emphasis they put on stability, there is one part that doesn't seem particularly stable – on slide 32:
"Go 1.11 supports the upcoming OpenBSD 6.4 release. Due to changes in the OpenBSD kernel, older versions of Go will not work on OpenBSD 6.4."
This is, I assume, because Go makes direct syscalls rather than going through the C library. Many operating systems, including Windows, don't consider that a stable public interface.
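A rough illustration of what "direct syscalls" means here, assuming a Linux box (syscall numbers are OS-specific, which is exactly why kernel ABI changes can break Go binaries):

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Equivalent to calling getpid(2), but invoked by raw syscall number
	// rather than through the C library wrapper. The Go runtime makes its
	// own calls this way instead of linking against libc.
	pid, _, _ := syscall.Syscall(syscall.SYS_GETPID, 0, 0, 0)
	fmt.Println("pid:", pid)
}
```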
So does this mean that older Go binaries will stop working when you upgrade the OS? I guess that would make the source compatibility extra important to them, since you must keep recompiling your old programs as you upgrade to a newer OS. (Perhaps it's also why they're dropping support for older OS versions so quickly?)
OpenBSD doesn't make any promises about ABI stability between releases. That means that all older binaries may stop working when you upgrade the OS, not just older Go binaries.
This might be seen as a problem, but it allows the OpenBSD devs to push the envelope of expectations and find bugs across the entire software ecosystem.
An example of this was the change to 64-bit time_t back in 2013 (talk: https://www.openbsd.org/papers/eurobsdcon_2013_time_t/ ), and evolving syscalls like pledge(2) that changed syntax between releases as the problems/benefits of specific implementation choices were learned.
This is not related to syscall vs libc. OpenBSD 6.4 requires stacks to be mapped with MAP_STACK, so it requires a runtime change. I’m not sure how using libc would have prevented this.
Windows calls their syscall wrapper ntdll.dll, not libc.so; but it's conceptually the same thing. The ntdll.dll ABI is public & stable, the underlying syscall ABI isn't. Several unixoids do the same thing, where their libc.so ABI is public & stable, but the underlying syscall ABI isn't.
One really minor, but powerful, thing I'd love to see from the Go team is a better-designed documentation experience. Here's the Google-produced GitHub library godoc [1]; it's absolutely horrendous to browse, as is any sufficiently large package. Compare that to Elixir's [2] auto-generated documentation. There's a lot of work that could go into just improving that static HTML and making it more easily grokkable.
I've not used Elixir, but I'm surprised by the strong dislike for GoDoc. I've used Java, C#, C/C++, Python, and JavaScript and I've never found anything to be as nice as GoDoc.
In particular, Godoc is absolutely effortless. No build step in your project, no cryptic documentation syntax, no need to tell the documentation generator how to link to your dependencies' documentation pages, etc. Everything just works, and the output is quite a lot cleaner than the aforementioned as well.
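For anyone who hasn't seen it, the entire "syntax" is an ordinary comment directly above a declaration; a minimal sketch with made-up names:

```go
// Package mathx provides tiny numeric helpers.
//
// This comment becomes the package overview in godoc; nothing else is
// required, and there is no separate build step.
package mathx

// Abs returns the absolute value of x. godoc renders this sentence
// alongside the function signature automatically.
func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}
```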
No doubt there's a little room for improvement in how the text is laid out and organized, but it's still leagues better than all other documentation systems I've used. With Python (Sphinx) in particular, everything gets rendered into a giant page with no clues about which class's `__str__()` method I'm currently looking at.
That's mostly what I mean. I love godoc and the philosophy behind it; it's a massive improvement over the manual documentation build steps of Java, JS, Python, etc. But even more modern languages, like Rust and Elixir, do it better. There's a lot of low-hanging UX and usability fruit to be picked in godoc that could make the experience better.
Ah, I see. I guess I think of these UX improvements as “the last mile”, and it’s nice to have something that gets us 95% of the way there compared to everything else that makes you work to get to 40%.
Also, I've never found Rust's docs to be very easy to read; maybe the complexity of the type system makes for a more challenging UX, or maybe my opinion is simply an outlier.
In any case, it’s good to know things can get even better. Thanks for sharing!
Godoc is an excellent forcing function when it's not ignored. Designing your package's public namespace so that it looks sensible in godoc yields great results for consumers.
Godoc is too plain and almost cumbersome to read for me, while Elixir's is too pretty and not so efficient at displaying type information, in my opinion.
It's funny how these things are so personal. I love Godoc, mostly because everything is on one page, with very little JS and a good, almost monochrome colour palette. At the same time, I really can't stand Rust's docs, because everything seems to get its own page, it's colours galore, and of course there is a sidebar.
Godoc compensates by being more functional. The Elixir example is missing links to the source code of the functions/methods etc. Godoc also generates an index with all the functions/methods/types in the package.
ExDoc does not miss links to the source code. (Click on any Module/function and there should be brackets "<>" that point to the source code.) Types are also present in any module that defines them.
Have a look at modern web development. Because they have no macros, they wrote a bunch of tools which "compile" different domain-specific languages (DSLs) to "the real stuff". Nowadays you compile pseudo-HTML, pseudo-CSS and JavaScript (modern style) to HTML, CSS and JavaScript (backward compatible, a.k.a. old-style JS). That's insane. On the other hand, without those code-transformation tools, which are called macros if they are built into the language, all those fancy and simple web tools would not be feasible.
Ignoring macros means, to me, oversimplifying programming, which results in not solving the real (meta) problem. For a while you can get away without macros/code generation, but in the long run they are inevitable.
One big problem with macros is that they make it very difficult to build good tools. You can't really understand the meaning of any particular piece of syntax without completely evaluating any macros that might be in use.
That sounds exactly like what a compiler has to do, and what all type-aware Go tools currently have to do. So we're already doing that; we just have no language-level support for it, and instead we have incredibly less capable workarounds like makefiles and go:generate.
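For context, the go:generate workaround looks roughly like this (the Color type is made up; stringer is the real golang.org/x/tools tool). The directive is inert during compilation and only runs when someone explicitly invokes `go generate`:

```go
package color

// Color is an enumerated type; running "go generate" invokes stringer,
// which writes a color_string.go file containing a String() method.
type Color int

//go:generate stringer -type=Color

const (
	Red Color = iota
	Green
	Blue
)
```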
> Macros are a cop out of language design. They mean "everybody do what you want and create your own language, we don't care anymore".
I prefer safe AST-based macros to the runtime reflection Go has. The latter is actually the cop-out, unlike the former, in the context of type safety at compile time.
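A tiny illustration of that distinction, using nothing beyond the standard reflect package: the misuse below type-checks fine and only fails when executed.

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	v := reflect.ValueOf([]int{1, 2, 3})
	fmt.Println(v.Len()) // fine: slices have a length

	// This compiles without complaint but panics at runtime, because the
	// value is a slice, not a map. A compile-time AST macro system could
	// reject the equivalent mistake before the program ever runs.
	fmt.Println(v.MapKeys())
}
```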
At this point, Go would just see macros as automatically invoked code generation, which leads to an unpredictable slowdown in compile times, especially with large dependency trees. Slowing down compile times at any noticeable level is probably a non-starter given the language's goals.
I still don't get it: so you invented a language based on the idea of not doing the "evil" things that C++ and other languages did, and now you have to re-invent all those "evil" things because the community wants them?
It isn't necessarily a bad idea to start with a minimal design and add features later once the implementation is more mature. Even if Go never gets generics, it's still a good solution for a number of problem domains (for example, it seems to have found something of a niche for infrastructure; Kubernetes, Prometheus, juju, etc.)
Kubernetes has enormous masses of repetitive code that is essentially a workaround for the absence of generics or other kinds of improved types. The reliance on "union types" -- really, exclusive arcs -- gives me the heebies and also guarantees that there is a bottleneck for new features: PRs to core. And then there's the whole TypeMeta thing. An entire dynamic type system because Golang can't express what Kubernetes needs.
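For the curious, the TypeMeta pattern looks roughly like this (the struct mirrors k8s.io/apimachinery's TypeMeta, reproduced from memory for illustration; the dispatch is a sketch, not Kubernetes' actual decoding machinery):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TypeMeta carries an object's type identity as plain strings that are
// inspected at runtime: a dynamic type system on top of Go's static one.
type TypeMeta struct {
	Kind       string `json:"kind,omitempty"`
	APIVersion string `json:"apiVersion,omitempty"`
}

func main() {
	raw := []byte(`{"kind":"Pod","apiVersion":"v1"}`)
	var tm TypeMeta
	if err := json.Unmarshal(raw, &tm); err != nil {
		panic(err)
	}
	switch tm.Kind { // dispatch on runtime type information
	case "Pod":
		fmt.Println("decode the rest of the object as a Pod")
	default:
		fmt.Println("unknown kind:", tm.Kind)
	}
}
```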
And no, I don't see code generation as a good thing. It's fragile and difficult to safely upgrade.
Kubernetes has more or less invented their own type system. I agree that much of it is horrible to deal with, but I don't know how you could easily express it in any language.
Of course someone could always prove me wrong by doing just that.
That's kind of my point; the code might not be as beautiful as we could ever imagine, but as a product it works extremely well, and would be worse if it carried around a lot of runtime baggage and poor performance which many of the alternatives would have given.
This is not about beauty. It is about reliability, simplicity and safety, all of which contribute to development velocity.
Golang is simple only for toy examples.
At scale Golang codebases are difficult to navigate, understand and test due to the repetitive repetition of repetitious repeats.
And the idea that at-the-time-viable alternative languages like Java, C++ or D are too slow for something as relatively lightly trafficked as the core Kubernetes controller codebase is just plain silly.
When I mentioned speed, I obviously wasn't ruling out C++ on that basis, but I would rule out Python or Node. Java is fast enough once it gets going, but the awkwardness of needing a JVM plus abysmal runtime memory usage makes it pretty unattractive to me, especially for the k8s agents, where the resource needs have to be subtracted from the total node size.
I've not found Go difficult to navigate because of repetition. Quite the reverse really - it's easy to reason about where everything comes from and (usually) easy to work out how everything connects together. I would take that any day over a large Java codebase with the usual obfuscations of dependency injection, or a Python or C++ codebase where somebody has tried to be "clever".
so coming back full circle to realize they should've hired actual programmers who write C++, so they wouldn't have to invent a new language to babysit them