As someone who writes Go every day for work, I can't agree that Go is simple. Using a language for analytics without generics can be quite painful and error prone.
Go is a language that pushes remembering corner cases and failure conditions onto the programmer rather than the language and runtime itself.
When you already have to remember a myriad of corner cases for business logic, having to also remember so many corner cases in the language itself hurts productivity.
I also believe that languages exist to make getting to an end result in given domains easier. Go does not make my life easier.
I really hope it gets generics. I wish it would do away with nil/null.
Nim is a very good language that actually accomplishes the simplicity Go wanted imo.
Go affords simplicity to the Go compiler writers at the cost of burdening Go users with having to remember inane things.
I don't think that's possible to retrofit onto a language. Best you can do is to add non-nullable types. But zero values are so core to Go's semantics that I kind of doubt it's possible to even add those in a sensible way.
Examples that bother me sometimes. YMMV of course.
Writing to a closed channel panics, but writing to a nil channel blocks forever.
Appending to a nil slice works fine, but inserting into a nil map panics.
If you have a function that returns an error struct, and you wrap it with another function that returns the error interface, nil returns from the inner function will no longer test equal to nil.
Defining a method with a receiver type of Foo, rather than *Foo, means all modifications to the Foo get silently dropped. This can also happen to methods that correctly take a pointer receiver, if their caller incorrectly takes a value receiver. (Both of these last two gotchas are sketched below.)
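To make the last two concrete, here's a small, self-contained sketch (the names MyErr and Counter are just illustrative):

    package main

    import "fmt"

    type MyErr struct{ msg string }

    func (e *MyErr) Error() string { return e.msg }

    // inner returns a concrete *MyErr; nil means success.
    func inner() *MyErr { return nil }

    // outer wraps inner but declares the error interface as its return type.
    func outer() error { return inner() }

    type Counter struct{ n int }

    func (c Counter) IncByValue() { c.n++ } // value receiver: the increment is lost
    func (c *Counter) Inc()       { c.n++ } // pointer receiver: the increment sticks

    func main() {
        // The interface value wraps a typed (*MyErr)(nil), so it is not == nil.
        fmt.Println(outer() == nil) // false, even though inner returned nil

        var c Counter
        c.IncByValue()
        c.Inc()
        fmt.Println(c.n) // 1, not 2
    }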
What you describe is all true, but honestly, this is such a small number of corner cases compared to traditional languages (C, C++, Python, JavaScript, etc.) that it is really not a big deal. Also, most of this is clearly documented.
Agreed about C/C++/JS, though I'm curious to hear your thoughts about Python. There are only two really common gotchas in Python that I tend to notice:
- Using an iterator more than once silently produces nothing. I notice this when I insert print statements to debug something, but then accidentally turn the following for-loop into a no-op. The Python 3 change that made more top-level functions return iterators made this problem more common, though I agree with the performance justification for doing it. It would be nice if iterating again after hitting the end raised an Exception, though I'm sure that would break all sorts of code that assumes it doesn't.
- Mutating function default arguments affects all subsequent calls. This is the most common Python gotcha people seem to talk about.
A corner case about Go which makes me absolutely crazy: calling Reset() on a timer which has already fired has the biggest gap between "What I expect to happen" and "What actually happens" of any stdlib I've ever worked with.
(You can see it here in the playground, but try it on your local machine if you don't believe me and/or think the playground has an inconsistent understanding of what time actually means: https://play.golang.org/p/ltdV9dI609 )
You never drained the longTimer channel, so when you say "We agree that longTimer has fired, right?"; that's not quite true. After you call Reset(), you're still getting the value from the first firing, because that's the first time you read from the channel at all.
The docs are quite clear on this behavior and say "Timer will send the current time on its channel after at least duration d." -- the key words being "at least", and the docs say nothing about when you choose to read from the channel.
It's even worse. The longTimer has fired, and it sent a message on the channel just as it was supposed to. When Reset() is called, it causes a second firing and a second message. Here is the code, corrected to illustrate. The output times are exactly as one would expect.
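The corrected playground code isn't reproduced here, but the fix it describes amounts to draining the channel before reusing the timer. A minimal sketch of that pattern (the durations are arbitrary):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.NewTimer(100 * time.Millisecond)

        // Receive the first firing, so nothing stale is left in the channel.
        fmt.Println("first firing: ", <-t.C)

        // Reset now starts a genuinely fresh countdown, and the next receive
        // reports the second firing rather than a leftover value.
        t.Reset(100 * time.Millisecond)
        fmt.Println("second firing:", <-t.C)
    }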
I presume an example would be something like remembering to check `rows.Err()` in `package sql` at the end of iterating through all rows. If you check the error each iteration while calling `rows.Scan()` but forget to check the `rows.Err()` at the end, it could potentially be much, much later that you find out something went wrong.
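For reference, a minimal sketch of that pattern (the printNames helper and the users table are hypothetical):

    import (
        "database/sql"
        "fmt"
    )

    func printNames(db *sql.DB) error {
        rows, err := db.Query("SELECT name FROM users")
        if err != nil {
            return err
        }
        defer rows.Close()

        for rows.Next() {
            var name string
            if err := rows.Scan(&name); err != nil {
                return err
            }
            fmt.Println(name)
        }
        // Easy to forget: rows.Next() returning false can mean an error
        // occurred mid-iteration, not just that the result set is exhausted.
        return rows.Err()
    }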
The way I personally use it ... I'd say you're exactly right. And adding that to an easy cross compilation/platform story that can side-step C toolchains in many cases, garbage collection, and intelligible concurrency, and you've got an extremely useful tool.
So the main reasons for Go are the fact that the 'coproc' keyword, added to Bash in version 4, is still considered experimental[1] and the fact that the vast majority of modern developers don't know shell well enough.
> As someone who writes Go every day for work, I can't agree that Go is simple.
You may be confusing "simple" with "good". A simple solution to a complex problem may not be a good one. Go can be simple and still not a great solution because it pushes complexity to a higher level.
Kind of. Nim offers a thread pool for CPU intensive tasks[1] and async await for IO intensive tasks. Currently, the two don't really mix, but work is ongoing to change that.
> Nim is a very good language that actually accomplishes the simplicity Go wanted imo.
Coincidentally, I just looked at Nim this evening, wrote some code and came away with the opposite impression. I'll copy-paste my sort-of-blogpost on this from [1]:
Nim itself feels a lot unlike Go and, interestingly, a lot like C++:
1. Non-orthogonal features. In C++, there are references, which are like pointers, but not quite, so you always have to think about which one to use. In Nim, non-orthogonal features include tuples vs. objects, and sequences vs. openarrays.
2. Feature creep. Just look at the language manual [2]: generic functions, type classes, 10 calling conventions (10!), garbage collection (not sure if optional or not), inline assembler, operator overloading, exceptions, an effect system, AST macros. Name any contemporary programming language feature, chances are that Nim has it.
Then there's the documentation. The language manual [2] is okay-ish, considering the size of the language. But when I dive into the library reference, the built-in module "system" [3] is so outlandishly huge and the documentation so badly formatted that it takes a lot of hunting to find what's in this module (and what isn't). Just look at all the duplicate entries in the navigation bar on the left. The system module should definitely have been split into multiple modules for the multiple concerns it covers, possibly with reexports in the actual "system" module to make sure they're all imported by default. Some parts should just be moved out of it, for example the whole file IO business belongs into "os" IMO.
Another thing that concerns me about the standard library is how much duplication is going on in there. There are at least 4 different XML parsers AFAICS, and two regular expression libraries, both based on the same backend (pcre). This might be the same confusion that most languages suffer from in their pre-1.0 stabilization phase, but especially then, it's a strong argument not to use the language in production pre-1.0.
The thing that really killed it for me was that I was able to produce a SIGSEGV by accident, without the compiler warning me about it. I think I wrote something along the lines of:
    type Config = tuple
      someSetting: string
      anotherSetting: string

    proc readConfigFile(path: string): Config =
      var file = open(path)
      defer: file.close()
      # TODO: implement the rest
      var cfg: Config
      return cfg

    var cfg = readConfigFile("./example-config")
    echo(cfg.someSetting) # this produces a SIGSEGV; probably because
                          # the memory backing `cfg` is not initialized
In 2016, I expect any new language to warn me about (or outright refuse to compile) code that might access uninitialized memory or do any other unsafe stuff, especially for a program that will run as root.
Your example doesn't produce SIGSEGV for me. It prints out the text, "nil".
However, if I write
echo(cfg.someSetting[0])
it will segfault. This makes sense because it dereferences a null pointer (in Nim-speak, a nil value). I would guess that's what you experienced.
Dereferencing a null pointer is not unsafe. The program cleanly exits. Compiling that via C is sketchy though, because the C compiler may treat provable null pointer dereferences as undefined behavior.
I would tend to agree, based on instinct, that Go is poor for data science work (as it happens I am about to find out for real in the coming weeks :| ). To build systems to feed data into some other analysis platform: definitely, but dynamic number crunching, not so much. Rob Pike probably knows this better than most; Sawzall would probably not exist otherwise.
I was just reading an article describing Sawzall; its need to sandbox code really makes me think of Haskell, or some other language with an effect system.
What we keep missing is that golang innovates not as a language, but as a tool that contributes to software project success.
Project success in the software industry is abysmal, and we still keep thinking we can spin up another language that will contribute to project success because it lets us express ourselves in new ways. Well, how's that working out so far?
The reason why golang appears to have such wide adoption in such a short period of time is that it really does seem to contribute to helping devs get their shit done. Massive amounts of working code are being written in golang, and that's good for the software industry as a whole.
Currently I run a massive project written in the standard issue kitchen sink corporate language (C#). It's got generics, functional extensions, all kinds of shit to make the most discriminating programmer happy. Well guess what, IMO C# for all its features still doesn't serve the business of software dev as well as golang, because it doesn't pull off what golang is brilliant at (easy to code for a wide range of skill levels, easy to mentor, easy to test, easy to hire for). The result is difficulty finding productive devs, and a code base that is not up to my preferred quality standards.
This may be hard to swallow, but it might really be the case that you can get more quality work done with more devs if toolchain simplicity is emphasized over language features. If the evidence continues to bear this out for golang, then it's time for me to shed some language biases just so I can remain competitive.
Is it really the case that languages with more features cause more project failures? Has the success rate of Java (the prototypical blub language) decreased after they've introduced new features like generics?
My experience has often been that whatever feature is lacking in the language tends to be made up for by huge code bases that are impossible to navigate, or using frameworks that abuse whatever dynamic features you have in the language horribly with added complexity in tooling and debugging.
C#, Java, and Go serve basically the same demographic and have most of the same strengths. Both Java and C# have larger ecosystems but more intellectual baggage. People without long-standing affinities to Java or C#, but who have the systems programming problems these Java-likes solve, are likely to appreciate Go, because it's simpler to pick up and use.
I think if you're proficient in one Java-like, you're single-digit weeks from being proficient in any of them, so if you're choosing your first, choose whichever one is easiest for you to go with. For a lot of Unix people coming to a Java-like from Python or Ruby, that easiest choice is going to be Go: it's fully unixy but doesn't have the heavyweight runtime.
> C#, Java, and Go serve basically the same demographic and have most of the same strengths.
Agreed. Which is why switching your org from C# to Go is going to do nothing (at best) for productivity or project completion rates.
Edit: Likewise if you are having trouble hiring C# developers it is very unlikely you will have an easier time hiring Go developers (although for a few geographic locations it might be true).
Neither Java nor C# is really adequate for 'systems' programming problems, assuming the term is being used colloquially - OS level problems using system API calls, processes, threads, etc.
Both Java and C# are really more suited to enterprise services. It is difficult at best to communicate with the underlying OS behind the VM. You really need to be using C / C++ to reach the kernel on Unix. Probably the same on Windows, though I'm guessing .Net provides some sort of integration.
And I teach them Go in a week, or even less. I've had programmers go from absolutely no Go experience to deploying working, idiomatic Go code to production in 2 days.
Go has been around nearly a decade with the backing of none less than Google and yet it remains a fairly fringe language. Elixir is on a much steeper adoption curve. So is Swift, but Elixir doesn't even have a tech heavyweight behind it.
Wikipedia says Elixir first appeared in 2012, while Go first appeared in November 2009. Go is not that much older than Elixir...so I don't buy your analysis. By time Go was four years old, it had much greater adoption than Elixir does today.
What leads you to believe Elixir is on a much steeper adoption curve? I'm a bit surprised by that statement because I've seen a lot of open source projects written in Go (Docker, IPFS, Kubernetes, etc.) but none in Elixir.
Find me one reference for Go in "use" at Google in 2007. The wikipedia page does not make this claim, it is only stated that this is when Go itself was being created.
As some people like to point out, I'd also like to remind everyone that in 1968 Algol had:
- user defined record types
- user defined sum types
- switch/case statement with support for sum types
- unified syntax for value and reference types
- closures with lexical scoping
- parallelism support
- multi-pass compilation
Given that many mainstream languages don't offer even what Algol 68 had, I personally understand how a Go developer might think that "nothing is new under the sun" since the 80's. After all, Go ignores all progress in programming languages for the last 40 years.
I do love the attempts of Go developers to rationalize Go's choices. But in the end it will end up being a hated language, universally recognized as a net negative in the industry. But that won't stop the working programmer from making the same mistake again and again.
>After all, Go ignores all progress in programming languages for the last 40 years.
I've seen this meme being spouted so much every time Go's mentioned it's ridiculous.
No, piling up feature upon feature is not progress; otherwise we wouldn't be using anything but C++.
Go is a language you pick for the right situation. If it's not enough for what you're trying to do, go for a different one instead of trying to expand it in the wrong direction and leaving it with warts, as Java, C++, Python, JavaScript etc. have done... warts which you will have to end up avoiding in order to write performant and clear code, counting on luck not to have to deal with code that abuses those features to create anti-pattern upon anti-pattern.
> After all, Go ignores all progress in programming languages for the last 40 years.
> I've seen this meme being spouted so much every time Go's mentioned it's ridiculous.
Is it a meme when it is true? To support this question, witness the statements of Rob Pike[0] below.
---
Regarding the utility of supporting first-order functions[1]:
> I wanted to see how hard it was to implement this sort of thing in Go, with as nice an API as I could manage. It wasn't hard. Having written it a couple of years ago, I haven't had occasion to use it once. Instead, I just use "for" loops. You shouldn't use it either.
---
Regarding progress in programming languages[2]:
> One thing that is conspicuously absent is of course a type hierarchy. Allow me to be rude about that for a minute.
And[2]:
> Programmers who come to Go from C++ and Java miss the idea of programming with types, particularly inheritance and subclassing and all that. Perhaps I'm a philistine about types but I've never found that model particularly expressive.
---
The part about "particularly inheritance and subclassing and all that" is ironically a meme spouted by Go's community so much it is, if you'll pardon my borrowing your description, ridiculous. For the curious, there are many community "Go-isms" explainable by the Pike talk[2].
Even a casual reading of the "list of significant simplifications in Go"[1] (35 in all) is enough to reasonably support the "ignoring all progress" position.
I don't see how that helps your argument. For loops are more than enough for that task, it's easily readable and universal.
And Go does support first-class and higher order functions so not sure what you're talking about here.
> [2]:
How's ditching inheritance in favor of composition "ignoring the last 40 years"? It's the biggest example together with goroutines that proves that phrase is a meme, and that we've learnt a lot on typing best practices as an industry over the past couple decades.
And as an added bonus, another thing that's a good example of Go actually looking back and improving upon what's been done before is the select statement. Most popular languages fall through by default with the switch statement.
Outside of examples on the internet, I can't recall right now the last time I've seen a switch statement in the wild that didn't break at the end of every case. Making the case (pun not intended) for a fallthrough statement and having switch/select break by default.
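A quick illustration of that default behavior (a sketch; assume it sits inside a function with fmt imported):

    switch n := 2; n {
    case 1:
        fmt.Println("one")
    case 2:
        fmt.Println("two") // execution stops here; no implicit fall-through
        fallthrough
    case 3:
        fmt.Println("three") // reached only because of the explicit fallthrough
    }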
I'm no language historian, but I did limit my timeframe to my personal experience of 36 years of development for a reason. There was, naturally, incredible research in the 20 years prior to that which made it into usable languages; of course this is no hard cutoff, just an arbitrary personal choice, and I am happy to stand corrected. My point was simply that the languages making up 2σ of all code running in production in the last 20-30 years very rarely include "language" features that did not appear as truly innovative sometime before that. It may be argued that simply including some prior research in a usable language is innovative, or that the incremental improvements are themselves innovation, but then we are bogged down in semantics. In summary, every time anyone in recent years has claimed to be innovating in language design, someone else has countered it with prior art, and Go never pretended to be innovative in this direction. Given the lineage and experience of the Golang authors, it is more likely, in general, that they considered and rejected rather than ignored.
And where is Algol now? It's dead. Theoretical superiority on paper is worth absolutely nothing when there is no usable implementation for modern computing environments out there.
This article was written to make Go look bad and unoriginal, but it inadvertently proves that Go is Algol's legitimate successor exactly _because_ it has all these features _and_ a working implementation that is available for wide variety of architectures and operating systems.
I think Pascal (and Modula and Oberon) are more legitimate successors to Algol.
I wasn't aware that all of those working implementations had stopped working.
Popularity is just one dimension a language can be placed on. Java utterly dominates the volume of new code being written in regular industry and has most likely done so for the entire lifetime of Go.
And this says little about their other relative virtues.
I think it could easily be INTERCAL (and INTERCAL#, of course) dominating the world now, if Sun, IBM and other giants had chosen it instead of Java and pushed it with the same amount of force.
The author is quite correct. Go is super boring, and runs fast. Two great points for it.
For me however I just never felt happy writing Go code. I have a couple of open source projects in it, so I have put it through its initial paces to see if we fit.
The language that did make me happy was Elixir. Everything about the language and the surrounding tooling is polished. You end up with significantly fewer lines of code that are easy to understand.
Here's just one example from me - both examples scrape some info from HTML:
+100. Go is super boring, and that's a selling point. Code is a tool, not a device for entertainment. I'm yet to meet a 20+ year developer who is wowed by extensive/unique/complex features, which makes me think that as I mature as a developer I'm going to find those things less important too.
However, the Go version is way easier to understand. Mind you, I have very little experience with Elixir. In the interest of being pragmatic, the easier code is to understand, the easier it will be to maintain, and we spend much more time maintaining code than writing it fresh.
> Go is super boring, and that's a selling point. Code is a tool, not a device for entertainment.
It is a balancing act.
As an industry we don't "do" training on work time.
So how do you convince developers to work on learning and development during their own time?
One way is to make the language fun and interesting.
> I'm yet to meet a 20+year developer who is wowed by extensive/unique/complex features
20+ year devs don't like jumping onto the latest unproven technique/language. Don't mistake that for wanting few/limited features in a language.
20+ year developers aren't driving transitions to Go. It is fairly new developers wanting to switch because it's cool, it's new, and it helps level the playing field by bringing experienced developers down a peg or two.
Having a lot of experience in Elixir and Go, I will say that the advantage of Elixir is not excitement, but that you can reason about your code and expect it to work. For years. Half-assed Go programs will crash. Half-assed Elixir programs can run for decades without maintenance.
Elixir is way easier to understand than Go. If you put one week into learning it, you'll be a lot further along than you are with Go.
I have a couple decades of professional experience. If you are building a serious system, you want Elixir. If you are building a devops tool, where a single binary with no install process is what's important, you want Go.
But given all the containerization and distributed programming people are doing these days, using Go for that kind of work shows me that most engineering is done by people who don't understand distributed systems.
> You end up with significantly fewer lines of code that are easy to understand.
That is a very interesting claim, but there is something more than just tooling and number of lines of code: the paradigm. Elixir is a functional [1] language, while Go is an imperative [2] one. That changes the discussion quite a lot, especially when you say 'code is easy to understand'. Personally I prefer Elm over JavaScript/React, because it is 'easier and simpler', but I remember a situation in college when, after a C# course, we were introduced to Prolog and F#, and many of the newbies found functional programming very difficult... But maybe it is a matter of taste.
> Everything about the language and the surrounding tooling is polished.
I don't have strong Elixir experience, but playing with the Phoenix framework made me really happy to see how many packages are well documented for creating the backend of a web application. But it is almost perfect, only almost, because it is not Go.
Go is way more performant (Elixir has results comparable to Python or PHP[3]), has a great virtual file system [4], auto-generated docs (godoc), gofmt, gorename, golint, gocode (no matter which editor you use - VSCode, SublimeText, Vim - you have great autocompletion) and a lot of other things (i.e. examples) which make learning Go easy for newcomers (i.e. devs who are bored of PHP).
Go is more performant if there is some number-crunching work, but when it comes to APIs or web applications, I don't think so. That TechEmpower benchmark for Phoenix is seriously flawed [3]. We use Elixir in production and according to our benchmarks, the performance is very close to Go or sometimes even better. We also use Plug (which is used by Phoenix underneath) directly if it is just a small API. These benchmarks relate more closely to our experience. [1] [2]
Thank you for sharing results of other benchmarks with the Phoenix framework. It significantly changes the perspective if we speak about Phoenix performance, but Go still has very good results [1][2] (Gin) in terms of throughput, latency and consistency.
You made me curious how release 1.7 affects performance of popular routing packages for Go (Gin, Echo, Httprouter).
For instance, without changing too much, you could implement scrapeProfile like this: https://play.golang.org/p/sP34n9acy7 I think that reads quite nicely, although I'm sure someone else can do even better.
If you modified dumpToCSV to take an interface instead of the concrete type, you wouldn't even have to prepare the user structure. You could pass in the vcard directly.
I'm not sure if they're idiomatic. However, I did write both projects with the same amount of experience in each language. It goes to show then, how much easier it is to write idiomatic Elixir.
Also the elixir-lang Slack channel is just full of incredibly nice people. :P
This has been my experience as well. I think a lot of people came to Go looking to solve some limitations from Ruby, Python, JS. While it does that, you get a lot of trade offs that make it a great solution where you had a problem but not a migration path for everything.
From what I've found so far, Elixir gives a migration path for just about everything except heavy number crunching. Several people who came to Go from dynamic languages have seemed to echo this sentiment.
I find "nice to look at" and "easier to understand" tend to converge over time. Elixir pushes me to expand my mental maps a bit more than Go, but once I grok it the code reads much more coherently and concisely.
In some ways they converge, in some ways not. On some level "nice to look at" means everything conveys some meaning, and nothing is obviously awkward. "Easy to understand" means everything you might need to know is on the screen. Those things are often aligned, but sometimes orthogonal.
Put another way, you can make something that looks simple and pleasing, but hides a lot of complexity that's required to understand in order to be able to do your work.
>I find "nice to look at" and "easier to understand" tend to converge over time. Elixir pushes me to expand my mental maps a bit more than Go, but once I grok it the code reads much more coherently and concisely.
Sounds like something that won't hold up as well over time.
Having written a lot of elixir code and now spending my days writing Go code, I think of Go as a tragedy of missed opportunity. If go did concurrency correctly, the way elixir and erlang do it, it would be the language I want to use for everything (well and the pipe operator from elixir is really special.)
But concurrency in Go is a terrible hack; it's just slightly better multi-threading, with all the deadlocks, mutexes and hassles that come with it. Channels and goroutines are not Erlang processes.
That so many people think of go as a concurrent language shows how little people understand concurrent programming.
Every engineer who thinks they are decent needs to learn at least one good concurrent language (elixir or erlang would be my suggestions.)
So, Go is designed to be an engineering language and not an academic toy. Contrary to other languages, Go programmers "deliver" and have a pragmatic view of the real development world, not just their own commits. Go programmers need a deeper understanding of computer science because other programmers are lazy and have everything given for free and probably don't need to know how it works.
A whole page discussing the virtues of Go by insulting people.
It may be harder to write some things, but it definitely is easier to read Go code.
Besides, while the language itself may be more verbose than it could be, the standard library is extremely pragmatic and terse. It's like the opposite of the standard C++ library. E.g. to see if a string starts with another string in C++:
In your C++ example you're giving `std::mismatch`, which is a generic algorithm. If you're so inclined you could provide a wrapper that has the same interface as the Go example, but you're comparing apples to oranges. I'd argue that `std::mismatch` is much _more_ pragmatic than the Go example, in that I can use it to check any lists of user defined types.
In reality, these two methods do completely different things. `std::mismatch` is a completely generic algorithm that 'returns the first mismatching pair of elements from two ranges', which can be used for much more than `strings.HasPrefix`.
Did you read the article? The fact that there is 1 way to do it in Go, and 100 different ways to do it in C++ (or some other older language) is a feature, not a bug.
Or checking for the presence of an item in a slice. Because Go doesn't supply sets, or allow you to write them yourself, this is something I find myself needing to do a lot, and every time, I'm writing that idiotic function from scratch.
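For the record, this is the sort of helper that has to be rewritten each time; a sketch for strings (pre-generics Go needs one per element type, or an interface{}-based workaround):

    // containsString reports whether target appears in xs.
    func containsString(xs []string, target string) bool {
        for _, x := range xs {
            if x == target {
                return true
            }
        }
        return false
    }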
> The Go standard library is full of things that do exactly what you want them to, whereas in other languages you have to manually do it yourself.
No.
Python: toCheck.startswith(prefix)
JavaScript: toCheck.startsWith(prefix)
Haskell: prefix `isPrefixOf` toCheck
What other languages does Go compete with that don't have a "string starts with" in their standard library?
Since I feel I've proven my point about all other languages Go competes with having this function, what other helpful functions do you think Go has that it's competitors do not?
...and I haven't felt this kind of pinch when using C#, ever. But what do I know? I'm just a .NET wage-slave pleb who's too mentally handicapped to see Go's glory.
Yep. Add those and a proper type system and you have yourself a decent language. But when the creator of the language doesn't see the value in abstractions [0], then it's probably never going to happen.
His formulation of reduce() is strikingly clumsy, both in signature and implementation. I daresay I wouldn't have much use for such a function either!
Most languages which provide a reduce() permit programmers to provide an initial "carry-in" value. This is a neater and more useful way to handle the cases of a zero- or one-element list. Moreover, it lets you do more interesting things with the reduction. Consider the following, using ES6-style JavaScript to collect a set of the unique values of a list via reduce():
    function unique(list) {
        return list.reduce(function (sofar, item) {
            if (!sofar.includes(item)) { sofar.push(item); }
            return sofar;
        }, []);
    }
It was just the first example that came to mind to illustrate the general pattern. We could also make this particular example shorter by using your naming conventions and writing it in a more functional manner instead of mutating the list:
    function unique(list) {
        return list.reduce((r, i) => r.includes(i) ? r : r.concat(i), []);
    }
Clearer? I dunno; probably depends on the reader's background and preferences.
I still don't see what you are gaining here. Outside of code golf, the goal should not be to try and write code as short as possible, regardless of the language you are using. But this just seems to validate Pike's assertion that a for loop is more suitable to the problem.
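For comparison, a plain-loop Go version of the unique() example above might look like this (a sketch, using a map as a makeshift set):

    func unique(list []string) []string {
        seen := make(map[string]bool)
        var result []string
        for _, item := range list {
            if !seen[item] {
                seen[item] = true
                result = append(result, item)
            }
        }
        return result
    }

Whether that reads better than the reduce version is exactly the judgment call under discussion.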
Perhaps an example of where map/reduce is a significant improvement to the expressiveness would be appropriate for the discussion?
Transform that into a list of actual points in worldspace: the hitboxes for players, the AABB centers for buildings. (Not all player models have the same hitbox count.) Unless we are holding a weapon that does splash damage; then go for the feet on players (their origin). And if we hold a projectile weapon, do prediction based on the player's velocity.
Now transform each point into a 2-tuple (position, score), based on some heuristics implemented in another function.
Do a raycast to each point. Filter out those that can't be hit.
Take the point with the highest score, if there is one, and set our viewangles to aim at it. Otherwise leave them alone.
The actual implementation of this looked something like this:
(Note that max_by is just a special case of reduce/fold; in my experience, you rarely want to use reduce directly; there's probably a more ergonomic wrapper. Sometimes you do, though.)
To me, that's pretty readable (stuff specific to the game aside, like the trace.fraction ugliness; fraction is "how far" the trace got before hitting something, 1.0 meaning there's nothing in the way, and the comparison is there to handle some floating-point inaccuracy), and it handles some really annoying cases properly.
I agree wholeheartedly with the notion that you should rarely use reduce directly. It is much less useful than map or filter.
Suppose that you have a bunch of things implemented using map or filter. When someone writes parallelized versions of map and filter, all of the existing code gets the benefits.
Now suppose you have a bunch of basic functions implemented using reduce (sum, product, min, max, reverse, ...). Can these be parallelized? Yes - by throwing away the 'reduce' implementation, and starting from scratch.
The problem with reduce, compared to its more useful cousins map and filter, is that it is too powerful. Map and filter are more limited than reduce, but if you can express your computation in terms of maps and filters, you get something valuable in return. If you can express it in terms of reduce, you save a few keystrokes, and that's about it.
For anyone interested in this kind of stuff, I recommend Guy Steele's talk "Organizing Functional Code for Parallel Execution; or, foldl and foldr Considered Slightly Harmful": https://vimeo.com/6624203
Reduce can be parallelized when the reducing function is associative. You can split the input sequence into chunks that are reduced in parallel and merge the results with another sequential reduce.
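A sketch of that idea in Go, summing (an associative operation) over chunks in parallel and then merging the partial results with a second, sequential reduce (assumes "sync" is imported; the chunk count is arbitrary):

    func parallelSum(xs []int, chunks int) int {
        partials := make([]int, chunks)
        size := (len(xs) + chunks - 1) / chunks

        var wg sync.WaitGroup
        for i := 0; i < chunks; i++ {
            lo, hi := i*size, (i+1)*size
            if lo > len(xs) {
                lo = len(xs)
            }
            if hi > len(xs) {
                hi = len(xs)
            }
            wg.Add(1)
            go func(i, lo, hi int) {
                defer wg.Done()
                for _, x := range xs[lo:hi] { // sequential reduce of one chunk
                    partials[i] += x
                }
            }(i, lo, hi)
        }
        wg.Wait()

        total := 0
        for _, p := range partials { // merge the chunk results with another reduce
            total += p
        }
        return total
    }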
I like that the function in your link is called preduce and not just reduce. Reduce has a standard definition, which doesn't require associativity. To eliminate confusion, a function that does require associativity deserves a different name, just like here.
And using these names, I would say that preduce seems much more useful to me than reduce.
Picture map, filter, and fold as for loops with annotations that restrict what they can do. This helps both the source code reader (you and I) and the source code compiler more easily understand what's going on.
For the compiler this makes optimization easier. For the reader it makes reading easier after learning what these recursion primitives do.
Well no need to be so aggressive I guess. Many of the points make some sense. But ultimately for me I get the feeling that although the managers might be happy with Go, I'd definitely not want to be such a programmer day-in day-out.
The article mentions a keynote speech by Rob Pike* from 2012 which is quite illuminating. The trade-offs made were all centered around google-scale and the pain points of such a massive operation. It stands to reason that people working outside of that environment may be less pleased with the language.
Google is not the only entity that operates at scale, and simply because google does it does not mean it is the correct choice. That's kinda cargo cultish.
In distributed systems, Go is fragile and dangerous -- because it will panic. It has no supervision system, and it has the potential for deadlocks; in fact, unless you engineer around it, all coroutines and channels will produce deadlocks and can silently kill your program. When that happens you have no idea why things are broken -- nothing's happening.
> In distributed systems, go is fragile and dangerous -- because it will panic.
Do you know when it will panic? Do you know you can recover from a panic if, for example, you want to tell other systems that this node is going offline?
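For instance, a deferred recover can turn a panic into a last-gasp notification before the process goes down (a sketch; notifyPeers is a hypothetical function):

    func serve() {
        defer func() {
            if r := recover(); r != nil {
                // Tell the rest of the cluster this node is going away,
                // then re-panic so the failure stays visible.
                notifyPeers(fmt.Sprintf("node going offline: %v", r))
                panic(r)
            }
        }()
        // ... actual work that might panic ...
    }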
> it has the potential for deadlocks
I could write that for most languages that have mutexes. This is a design problem, not a language problem.
> When that happens you have no idea why things are broken-- nothings happening.
It's only true if you do not know how to use a debugger and don't know how the language features you use work.
tbh I don't think workers even inside of that environment will be much pleased either. The managers are likely pleased because it speeds up organizational efficiency, but that doesn't necessarily have anything to do with the happiness of the developers who actually write the code and have to bear its many unpleasant aspects. Those are just two separate things.
> To provide any solution in Go that needs a dynamic data structure you can choose between hand rolled linked structures or a Slice or Map (or compose with them). As they are quite different the choice is normally obvious. Contrast this to the choice between map, set, hashset, bag etc etc, or rolling your own in a language that makes this a lot harder.
I can't help but think the whole article is filled with bursts of dishonesty.
A language like C++, which lets you use the proper data structure in about two lines of code, is a lot easier when it comes to data structures. While the Go programmer implements a multi-map, priority queue or red-black tree, anyone else will have moved on to an actual topic of interest.
If you need a particular data-structure, surely having one ready in the toolbox is a net positive, not a negative.
I'm not much of a Go programmer but I would definitely regard Go's multiple return and error handling (save the 'no assertions' clause) as very cool. I'm not sure if any other languages have experimented with that approach before the rise of Go, but to me at least it appears much saner than the prevalent ridiculousness of exception handling.
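For anyone unfamiliar with it, the approach is just a pair of return values checked at the call site. A minimal sketch (parsePort is a made-up example):

    package main

    import (
        "fmt"
        "log"
        "strconv"
    )

    // parsePort returns either a usable value or an error, never both.
    func parsePort(s string) (int, error) {
        n, err := strconv.Atoi(s)
        if err != nil {
            return 0, fmt.Errorf("invalid port %q: %v", s, err)
        }
        return n, nil
    }

    func main() {
        port, err := parsePort("8080") // the error must be handled or explicitly ignored
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("listening on port", port)
    }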
Some languages support tuples - you can also use stuff like ADTs to the same effect. I think Go's advantage here is that it's the only way to handle non-panic exceptions, so you won't have systems where half of the errors are handled with exceptions and half with returning tuples, for instance...(I personally like the lack of "throwing exceptions" part, but find the multiple-return somewhat exotic)
Handling it as tuples is just as fine, but it's important that a language intending to do so be devoid of verbosity and cruft. For example, D supports tuples, but I would not want to attempt Go-style error handling in it: https://rosettacode.org/wiki/Return_multiple_values#D
You make a good point, too, that multiple-return being the only way to do errors is more ideal than the language saying "oh, we support that, but we also have exceptions, too!" At least in a language with Go's philosophy, you can expect other people's libraries and your own code to play by the same rules.
Multiple return is great, though sending a tuple back is how we've been doing it in erlang for 20 years. Not much difference between {foo, bar} and (foo, bar).
Go's error handling however is terrible, absolutely the worst, and its tendency to panic is atrocious, especially without supervision or restart capability. Here's a spot where Elixir has it right and is vastly superior.
I have large go programs in production for several years which have never panicked from the first line of code written. Go's error handling is perhaps simplistic but it doesn't encourage the use of panic, quite the reverse.
I'd be happier if panic didn't exist, but it is extremely rare in real world programs and the std lib.
> Custom data structures can be composed from the well understood builtins, rolled in under 100 lines of code and can exist close to the place they are used (yes repeated!). The effect of this approach on readability, maintainability, decoupling, ... adds so much more value to the whole lifecycle, than the cost of the omission.
It's interesting to me that this philosophy comes from the Go designers at Google, and that Google is also well known for keeping vast amounts of source code advancing in lock-step in a single repository. From reading the recent article on Google's source code repository structure, I believe that being able to reuse code (e.g. data structure implementations) without versioning headaches is one of the intended and actual benefits. It's of course not that surprising that two different areas (Go design and repository structure) might pull in two different directions, but these are two important high level issues so it does seem a little inconsistent to me.
> It is not ‘missing’ comprehensions, or inheritance, or generics, they are omitted (and I pray, always will be). In some way, in the context of the current fashion of returning to more functional languages, or the evolution of good old languages to include more functional paradigms (I’m looking at you Javascript and Python for two examples) then in a tenuous convoluted way Go has ‘innovated’ by avoiding that trend.
That's such a weird statement. If anything, those are likely more OOP-related than FP-related, and he didn't really point out what's so bad about "more functional paradigms", besides the implication that it might be harder for new hires to pick up etc.
Anyways, I see that Go reduces the learning curve and simplifies the lifecycle of huge projects, but at considerable cost in language features and expressiveness. I myself, working as a developer, would rather not bear those costs just for the sake of the cogs of the organization running a bit more smoothly, and also so that I would not just program day-in day-out en masse with everybody else out there in an overly simplified language that potentially puts me at more of a disadvantage in my career path. Maybe the leaders of huge companies would have other thoughts and there will definitely be developers who are happy to fill those roles; it's just not me.
> “There is nothing new under the sun” rings true in all languages since the 80’s.
Really? Nothing? Sure a language like Rust has drawn from many other concepts in other languages, but it has done so while actually bringing high level features to a language that has zero overhead costs. But yes, it's not simple like Go.
Did Go need to make all errors unchecked? There are no guard rails telling you that you forgot to check an error result. This is a runtime thing you need to discover. Is this actually simpler?
Go made the decision to allow for null, even as nearly every other modern language, and some older ones, is trying to kick it to the curb: Swift, Rust, Scala, Kotlin, no nulls (the JVM ones have a compatibility problem, as does Swift with ObjC, but still). Is it simpler to delay discovery of null data to runtime?
Go decided to not have generics, to keep the language easier to learn and more approachable. It's hard to argue with this one. Like lambdas, it can be a complicated concept to learn, but once you unlock this in your code, you write less code and accomplish more. So yes, it's simpler, but at too high a cost IMO.
To me the innovative feature of Go is the small runtime built into the binary making deployment dead simple and easy. This is a million times better than JVM, Ruby, Python, Perl, etc. This is a huge improvement over Java, and something every language should have an option for. Ironically this is also the least innovative feature, because this is how static binaries in C and C++ have worked for years.
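For example, producing a Linux binary from any development machine is typically just a matter of setting two environment variables (assuming a pure-Go project without cgo):

    GOOS=linux GOARCH=amd64 go build -o myservice .

The resulting file can be copied to a server and run with no interpreter or VM to install.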
I think this article is very well written, but I don't think it's fair to the innovation going on in other languages.
(Disclaimer: I used Go, discovered the three primary flaws as I listed above, and then searched for a better language. It would be fair to call me a hater, usually I try to avoid this, but in this case that's fine with me)
> Go decided to not have generics, to keep the language easier to learn and more approachable. It's hard to argue with this one.
Plain parametric polymorphism is super easy to understand. Standard ML could be learnt in a week by someone who doesn't know how to program.
Admittedly, the interaction between parametric polymorphism and subtyping is tricky and subtle. And it seems most programmers have gotten used to taking subtyping for granted. But what if subtyping isn't always a good idea? Say, because it forces you to reason about variance (which humans always seem to do wrong!).
(inb4: Yes, Go has subtyping. When a struct conforms to a given interface, that's subtyping.)
Honestly parametric polymorphism is a big slippery slope feature. There's always Just One More Thing -- higher rank/order/kinded types, where clauses, dependent types, specialization... or else you force people to use dynamic checks/allocations/casts that reduce your type-safety and bog the code down relative to the "optimal" design.
Don't get me wrong, I love me some parametric polymorphism, but it's by no means a simple thing as far as I've seen. Especially if you care about the efficiency of things (you can fudge a lot more with lots of indirection/allocation, like Java does).
> Don't get me wrong, I love me some parametric polymorphism, but it's by no means a simple thing as far as I've seen.
I disagree. You should look at OCaml (and Standard ML, though the eqtypes in SML are a botch): generics are dead simple there. Much simpler than Go interfaces, in fact.
Sure, there's always "one more thing" you could add, but that's always true for anything in any language. Slippery slope is a fallacy for this reason.
> or else you force people to use dynamic checks/allocations/casts that reduce your type-safety and bog the code down relative to the "optimal" design
Which is what happens even more if you don't have generics!
(0) They ensure that `op=` can only be used on things that actually have decidable equality.
(1) If SML were to be equipped with dependent types in the future, it would make sense to only allow `eqtypes` as type indices. First-order unification can be used on syntactic values of `eqtypes`, so the basic architecture of a Damas-Milner type checker can be retained, in spite of having dependent types.
OTOH, equality and comparisons in OCaml are completely broken.
A language designer can provide let polymorphism, refuse to add more, and call it a day.
> (higher) order/kinded types,
This is orthogonal to parametric polymorphism. Higher-kinded types are problematic for inference, and the way Haskell has implemented them has unfortunate consequences for modularity.
> where clauses,
This is just syntactic sugar. (FWIW, what I think Rust needs is better inference, rather than ways to make type signatures less verbose.)
> dependent types,
This is unrelated to parametric polymorphism.
> specialization
This is antithetical to parametric polymorphism.
> or else you force people to use dynamic checks/allocations/casts that reduce your type-safety and bog the code down relative to the "optimal" design.
Standard ML doesn't have dynamic checks or unsafe casts, and I don't find myself longing for them.
Higher-rank types are not orthogonal to parametric polymorphism, instead they are a special case. You can see this when you realise that rank-k polymorphism is a subsystem of System F (the paradigmatic typing system for parametric polymorphism) for any k. The let-polymorphism of the ML-family is just rank-1. See Chapters 22 and 23 of Pierce's great "Types and Programming Languages".
> Higher-kinded types are problematic for inference
That is true, but already type inference for rank-3 polymorphism is undecidable, and the same is true for System F polymorphism.
In practise, Haskell needs only few kind-annotations to make kind inference possible. This is helped by unannotated kind variables having kind * in Haskell (IIRC).
> Higher-rank types are not orthogonal to parametric polymorphism, instead it's a special case.
Errr, sorry, I only saw “higher-kinded”, not “higher-ranked”. But, of course, you are right.
> That is true, but already type inference for rank-3 polymorphism is undecidable, hence also System F polymorphism.
Let polymorphism covers 95% of what most programmers need. So if a language designer feels particularly risk-averse (a perfectly legitimate position), they can provide just let polymorphism and ML-style type inference, and then call it a day.
Of course, higher-ranked polymorphism is a nice thing to have, and you can require type annotations when you use more (as Haskell does).
> In practise, Haskell needs only few kind-annotations to make kind inference possible.
A more serious problem with higher-kinded types IMO is that they wouldn't interact very well with an ML-style module system, where you can define an abstract type whose internal implementation is a synonym. `newtype` is an ugly hack.
> provide just let polymorphism and ML-style type inference
I mostly agree with this, and this should be the default starting point for any new programming language. If B. Eich had built Javascript on this basis, the world would have been a better place.
My main caveat would be that even a basic language needs a mechanism to glue related code together, objects, modules, structs with row-typing, existentials, not sure. But something.
> My main caveat would be that even a basic language needs a mechanism to glue related code together, objects, modules, structs with row-typing, existentials, not sure. But something.
While not perfect, I think ML's solution is pretty reasonable: a separate module language, whose complexity doesn't infect the core language.
I'm aware of it. But my previous suggestions were in part shaped by the stated goals of Go's designers: to keep the language simple and easy to learn for non-language geeks. 1ML is really cool, but its type system can be intimidating: small vs. large types, incomplete inference, type-checking as elaboration into System F-omega, etc. OTOH, plain Damas-Milner is dead simple.
Another problem with 1ML is that it's only a research prototype for now, so we can't take it "for a spin". Given the 1ML inventor's main job, this is unlikely to change any time soon, unless some kind soul takes on the 1ML project lead.
Because, in ML, a type-level function that one module views as an abstract type constructor might be viewed by another module as a type synonym. Haskell's type language allows type constructors to appear partially applied, but requires type synonyms to appear fully applied, so it can't deal with this discrepancy.
Structs implementing interfaces can be viewed as subtyping, but they don't have to be. An alternative is to take a typeclass-like view. Basically, a function "(A, A) -> A where A implements I" could be implemented as "(pointer to vtable of interface I, voidptr, voidptr) -> voidptr".
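A rough Go rendering of that reading (the names are illustrative, and this is the conceptual equivalence rather than how the compiler actually represents interfaces):

    // Interface version: the method table travels inside the interface value.
    type Lesser interface {
        Less(other interface{}) bool
    }

    func minIface(a, b Lesser) Lesser {
        if a.Less(b) {
            return a
        }
        return b
    }

    // Dictionary version: the "vtable" is passed explicitly and the values
    // stay untyped, mirroring the (vtable, voidptr, voidptr) -> voidptr shape.
    type LessDict struct {
        Less func(a, b interface{}) bool
    }

    func minDict(d LessDict, a, b interface{}) interface{} {
        if d.Less(a, b) {
            return a
        }
        return b
    }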
Not having "implementation inheritance" between structs helps a lot, though I'm not sure if Go's anonymous fields might pose a problem.
> An alternative is to take a typeclass-like view.
Type classes alone don't give you anything like Go's interfaces.
Type classes plus existentials give you something kinda like Go's interfaces, but requires explicit casts (in the form of unwrapping the contents of an existential constructor and putting them into another existential constructor).
Type classes plus rank-N types can express Go's interfaces, but at that point Haskell already has subtyping, induced by subclasses. (Or else how do you think “a Lens is a Traversal” is possible?)
I just spat out my coffee. Standard ML can be learned in a week by someone who doesn't know how to program?
Have you ever actually tried to teach someone who doesn't know how to program? It takes months, even when using a simple language like Python. Or even BASIC, which was designed specifically for beginners.
Standard ML is a good language (especially considering when it was developed, in the 1970s). Somehow a cult has grown up around it that prevents people from seeing that it isn't the solution to all problems, just another tool in the toolbox. Sad.
> Have you ever actually tried to teach someone who doesn't know how to program?
Yes.
> It takes months, even when using a simple language like Python. Or even BASIC, which was designed specifically for beginners.
I never said anyone can learn everything there is to programming in a week. I only said anyone can learn the Standard ML language in a week. You might encounter far more difficult things along the way, but they shouldn't be related to Standard ML itself.
The biggest irony of Go is that they have invariant parametric types for channels, arrays, etc. and indeed primops like channel sends, make, len, etc. are parametric functions. So for all the defensiveness about how parametric polymorphism is "difficult to understand", Go programmers seem to deal just fine with it on a day-to-day basis. What they lack is the ability to let programmers introduce their own parametric types and functions, presumably because we're too dumb to be anything but consumers of this functionality.
> What they lack is the ability to let programmers introduce their own parametric types and functions, presumably because we're too dumb to be anything but consumers of this functionality.
Here's what my experience is with certain features in languages: They enable some programmers to do great things, while also enabling a few programmers blinded by hubris to do maddening things. Over the life of a large project, the unfortunate coincidence of different pieces of hubris driven code sometimes causes an outsized amount of frustration.
Analogy: Most people, most of the time, have the good sense to operate cars, drones, and high powered laser pointers without becoming a dangerous nuisance. However, there is a potential for a minority of users of such devices to cause far more than their share of public nuisance. Therefore, there are rules and restrictions about how such things are used by many people at scale.
So yes, as an individual you are probably just fine. But you, aggregated with a whole bunch of other programmers, are likely to be a different story.
You can make this argument with nearly anything we naturally accept as a feature of a programming language. The ability to name things has a long history of abuse. The ability to define types, implement programming patterns, define new syntactic features via higher-order functions, use concurrency primitives. Hell, simply the idea of programming is rife with potential for abuse.
This is a narrative without specifics, and unfortunately always where the conversation seems to end with gophers. We accept some amount of features that can be abused because they offer utility that outweighs their potential for misuse. So how exactly are generics a worse offender than these other features or a worse tradeoff for their utility? Because from my perspective, being able to define parametric data types and functions is a huge win for safety and terseness of code without a lot of downside.
> Hell, simply the idea of programming is rife with potential for abuse.
Exactly. Everything you add has a cost/benefit for a particular context. Evidently you disagree with how the Golang team has calculated cost/benefit with regards to generics.
> Because from my perspective, being able to define parametric data types and functions is a huge win for safety and terseness of code without a lot of downside.
Terseness is a good thing? Some people say terseness is bad. Is safety the only issue or always the top priority? All production code exists in a specific context. It's best to tailor to your specific context. This may well mean that you may encounter a context where you do not want to use Go.
> Terseness is a good thing? Some people say terseness is bad.
Clarity is good. Clarity comes from both including every relevant detail (which pulls away from terseness) and excluding irrelevant details (which pushes towards terseness). Clarity also comes from saying everything that has to be said exactly once and no more than that (which pushes towards terseness).
Unfortunately, when you program in Go, you often have to pay attention to irrelevant details, and you have to say what you want more than once.
> Is safety the only issue or always the top priority?
The benefits of typeful programming go beyond type safety. They also include: “economy of thought”, “fearless refactoring”, “less time wasted on fixing stupid mistakes”, etc.
> The benefits of typeful programming go beyond type safety. They also include: “economy of thought”, “fearless refactoring”, “less time wasted on fixing stupid mistakes”, etc.
Funny, but that's exactly what we Smalltalkers had in Smalltalk -- with far less of the "type system" enforced by the compiler and almost all of it in our heads. (That said, back in the day, we had tooling which was more advanced while also being more responsive, years ahead of everyone else, so our viewpoint might be skewed.)
> with far less of the "type system" enforced by the compiler and almost all of it in our heads
That's why reasonable people want to have good type systems: People who think that they can keep the type system in their head are exactly the people whose opinion should be ignored.
> Funny, but that's exactly what we Smalltalkers had in Smalltalk
Smalltalk doesn't let you say “this object responds to message Foo only when used in this part of the program”. In other words, there's no separation of concerns.
> and almost all of it in our heads
What you realistically can't produce entirely in your head is a proof that your program is correct, unless the language is explicitly designed to lift part of this proof obligation. That's exactly the role of parametricity: to help you separate concerns, allowing you to prove one small thing at a time.
Smalltalk doesn't let you say “this object responds to message Foo only when used in this part of the program”.
In VisualWorks, there were two different ways of writing a short script to verify this in a matter of seconds. You could also sometimes achieve this with a few cascaded searches through the Refactoring Browser.
This doesn't scale to either large programs or programs not entirely written by yourself.
My industry experience clearly shows that you're just flat wrong -- across multiple large systems written by other people over more than a decade.
As a library author, you can't search code written by users of your library.
What kind of nonsense is this? The library author doesn't need to do such a search! The library users in Smalltalk would do such searches. Access to source was the norm. Decompilation in Smalltalk is trivially perfect, excluding local variable names, so closed source was fairly pointless.
> My industry experience clearly shows that you're just flat wrong
Please tell me how a code search performed by library author Foo will ensure that library user Bar won't break invariants Foo intended to enforce.
> What kind of nonsense is this?
In ML, I can prove that users of my library can't use my library wrong. Maybe they won't be able to use my library at all - the type checker will reject every attempt. But it guarantees that, if they can use my library, they will use it right, in the sense that every invariant I enforce won't (and can't possibly) be broken by users.
For example, there may be multiple ways to realize the same ordered set as a red-black tree (balanced differently), but I can arrange things so that the difference can't be observed by users of the ordered set abstraction.
> The library users in Smalltalk would do such searches.
Library users shouldn't be in the business of enforcing invariants that are only relevant to the library's author. See? This is what I mean by “Smalltalk can't separate concerns”.
I'm not saying all of this to be mean. It's been known for quite a while that parametricity is the mathematics of abstraction and separation of concerns [0]. If you need to insulate users of your code from your design choices, you absolutely need parametricity. (Or “social conventions”, but those don't work in the long run.)
If we consider Go in the context of a systems (their definition) engineering language within Google's enterprise, limiting choice is not about how dumb the programmers are, but about ensuring conformity.
Rewriting code is expensive. As my office knows well. We maintain lots of old embedded systems and have to periodically rewrite or rehost them because the old hardware platforms aren't available or aren't performant enough for new features. These become multi-year, multi-million dollar projects, for relatively little gain.
By ensuring that developers and architects conform to certain conventions, it means (in theory) that this code maintenance is much cheaper, and that rewrites can be avoided or minimized. This is a good thing and lets organizations be more flexible and productive, as their time and money is no longer wasted on the old things, but can be spent on the new things.
> limiting choice is not about how dumb the programmers are, but about ensuring conformity.
What you say doesn't make sense given how Go reflection is implemented. If it were really about limiting choice, Go would have no reflection. Go reflection is basically a way to opt out of its (poor) type system. You should never have to do that in a statically typed language, yet Go reflection is used a lot in the standard library itself.
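To make that concrete, here is a rough sketch (a hypothetical helper, not actual stdlib code) of the reflection-based style of "generic" code that pre-generics Go pushes you towards: the signature promises nothing, and mistakes surface only at runtime.

    package main

    import (
        "fmt"
        "reflect"
    )

    // Keys returns the keys of any map. The signature says nothing useful:
    // it accepts any value, and the caller gets back untyped interface{}s.
    // Passing a non-map compiles fine and only fails at runtime.
    func Keys(m interface{}) []interface{} {
        v := reflect.ValueOf(m)
        keys := make([]interface{}, 0, v.Len())
        for _, k := range v.MapKeys() {
            keys = append(keys, k.Interface())
        }
        return keys
    }

    func main() {
        fmt.Println(Keys(map[string]int{"a": 1, "b": 2})) // works
        // fmt.Println(Keys(42)) // also compiles, but panics at runtime
    }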
Furthermore, let's be honest: which do you think is more complicated, generics or concurrency? Generics aren't complicated at all.
> We maintain lots of old embedded systems and have to periodically rewrite or rehost them because the old hardware platforms aren't available or aren't performant enough for new features.
But Go isn't for embedded system programming. You can't run Go on bare metal without an OS.
> By ensuring that developers and architects conform to certain conventions
Enforcing conventions is of course a good thing! The problem is how Go enforces conventions:
(0) When Go enforces a convention mechanically, it's a triviality that can be adequately handled by external tools (e.g., naming, formatting, unused variables, etc.).
(1) When a convention is actually useful (e.g., the correct way of using an interface), Go's type system is too dumb to understand it, let alone enforce it.
> aren't performant enough for new features
Second-class parametric polymorphism (“generics”) is purely a compile-time feature. It can be completely eliminated (that is, turned into the non-generic code you would've written otherwise) using a program transformation called “monomorphization”, before any target machine code is generated. So there's no runtime price to be paid.
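For illustration, here is a minimal sketch of what that transformation means, written in post-1.18 Go generics syntax (which this discussion predates). The names are made up, and the actual gc compiler uses a coarser "GC shape" stenciling, so treat this as the textbook idea rather than a description of any particular implementation.

    package main

    import "fmt"

    // Generic source: one definition covering both element types.
    func Max[T int | float64](a, b T) T {
        if a > b {
            return a
        }
        return b
    }

    // What monomorphisation conceptually produces: the specialisations you
    // would otherwise have written by hand, with no generic machinery left
    // at runtime.
    func maxInt(a, b int) int {
        if a > b {
            return a
        }
        return b
    }

    func maxFloat64(a, b float64) float64 {
        if a > b {
            return a
        }
        return b
    }

    func main() {
        fmt.Println(Max(1, 2), maxInt(1, 2), maxFloat64(1.5, 2.5))
    }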
To be precise, you need to outlaw polymorphic recursion to be able to do full monomorphisation. I'm not sure if that's what you meant by "second-class" in this context
First-class polymorphism is what System F gives you: functions from types to values.
Second-class polymorphism is what Damas-Milner gives you: let-bound identifiers may admit more than one type, in which case every type they admit is subsumed by a type schema.
Second-class polymorphism rules out polymorphic recursion if you consider every recursive definition as syntactic sugar for applying a fixed point combinator to some expression of type `a -> a`, for whatever monotype `a`.
That's a new turn of phrase for me. I've only ever heard the expression "second-class parametric polymorphism" used in reference to enforcing predicativity (which does not rule out polymorphic recursion).
FWIW I'm not trying to strawman the argument behind not having features like this. Rob Pike said this in a talk about Go:
"The key point here is our programmers are Googlers [...] They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."
I'll concede there's the possibility for some weird tongue-in-cheekness here, but it definitely seems to be the canonical view among gophers that Go's paucity of features is about accessibility for programmers that don't understand them or find them cumbersome to work with.
I think this idea that "paucity = good" is so easily abusable that whenever this comes up from gophers I wish they would concede that it is an unhelpful simplification of what they must actually believe. Assembly language has possibly the greatest paucity of concepts, given that it offers no ability to introduce language-level abstractions (other than, say, conventions about calling, etc.), but Go is nothing like this.
The argument can't be that paucity is good as a general condition, it's that there are forms of abstraction and programming language features that Gophers find unhelpful or difficult to understand. The problem I have with this when applied to parametric polymorphism is that Gophers already work with these concepts daily, so it can't be that use of them is complicated.
I also have a hard time believing that the ability to define parametric types and functions costs you anything. It's almost always self-evident when to use parametric types or functions, things that are "wrappers" or "collections" probably account for 80% of their use. I also don't think I've ever experienced ambiguity of choice with the feature. For instance I don't think I've ever been in the situation where I had to trade off implementing a generic definition vs. N specialized definitions. The frustration of using Go is actually that I now have to consider the latter as a possibility or trade off type safety by using unsafe casting.
If there's a place where parametricity truly introduces complexity, I'd love to hear about it from a Gopher instead of a blanket statement about how "programmers don't understand it", "it decreases readability", or "Go is simpler without it".
seems to be the canonical view among gophers that Go's paucity of features is about accessibility for programmers that don't understand them or find them cumbersome to work with
Please keep in mind that there are differences at scale. What is "easy to work with" for 1 programmer over a month might not be so for 20 programmers over years.
The argument can't be that paucity is good as a general condition, it's that there are forms of abstraction and programming language features that Gophers find unhelpful or difficult to understand
The argument is that simpler is better at scale. Airplanes can move freely in 3 dimensions, but airliners are constrained to fly in particular ways around busy airports and cross country.
I also have a hard time believing that the ability to define parametric types and functions costs you anything. It's almost always self-evident when to use parametric types or functions, things that are "wrappers" or "collections" probably account for 80% of their use.
I could see an argument for parametric collections and parametric sorting in Go. Not, however, for wrappers.
The frustration of using Go is actually that I now have to consider the latter as a possibility or trade off type safety by using unsafe casting.
In your experience, what kind of "cost" has there been in unsafe casting to use collections? Even in environments like Smalltalk, where all use of collections amounts to "unsafe casting," I've rarely seen situations where a mistake of this type wasn't found trivially. Does your frustration come from having to abandon the "assured safety" the type system would give you, or does it come from an experience of the costs?
> Does your frustration come from having to abandon the "assured safety" the type system would give you, or does it come from an experience of the costs?
For me, it's entirely about expressiveness. This:
reverse: 'a list -> 'a list
where the type encodes that `reverse` is a function from a list of some type of elements to another list of the same type of elements, is more informative than this:
reverse : list -> list
where it's obvious that this list has elements of some type, yet it's not clear what the element type is, and it's not clear that the resulting list has elements of the same type as the input list.
Beyond being able to express and communicate intent, there's the added benefit that the type system can statically check that the input elements are the same type as the resulting elements. There's also no worry about the information loss associated with subsumption (the subtyping rule that allows a value of a subclass to "become" a value of one of its superclasses, losing specificity that can only be regained with a type cast), because no subtyping is involved in this case of parametric polymorphism. This is also one reason I tend to favor row polymorphism: no subsumption means no information loss and no need to cast.
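The same contrast can be written out in Go terms; the generic version uses post-1.18 syntax, which didn't exist when this thread was written, so read it as a hypothetical sketch rather than current practice.

    package main

    import "fmt"

    // Untyped-element version: the signature cannot promise that the output
    // elements have the same type as the input, and callers must cast on the
    // way back out.
    func ReverseAny(xs []interface{}) []interface{} {
        out := make([]interface{}, len(xs))
        for i, x := range xs {
            out[len(xs)-1-i] = x
        }
        return out
    }

    // Parametric version: "a slice of T in, a slice of the same T out" is now
    // stated in the signature and checked by the compiler.
    func Reverse[T any](xs []T) []T {
        out := make([]T, len(xs))
        for i, x := range xs {
            out[len(xs)-1-i] = x
        }
        return out
    }

    func main() {
        fmt.Println(Reverse([]string{"a", "b", "c"})) // [c b a], still a []string
        anys := ReverseAny([]interface{}{"a", "b", "c"})
        s := anys[0].(string) // the cast (and any resulting panic) is back on the caller
        fmt.Println(s)
    }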
> The argument is that simpler is better at scale.
Parametric polymorphism is simple and well understood. And not exactly new either: it has been understood for some 40 years already.
> I could see an argument for parametric collections and parametric sorting in Go.
C++'s <algorithm> header is proof that there are lots of algorithms that benefit from being expressed generically, not just sorting.
> In your experience, what kind of "cost" has there been in unsafe casting to use collections?
Without type safety, there's a disincentive for decomposing things into smaller parts, because the cost of manually verifying that the parts are compatible is greater than the benefits of decoupling them. Would a Go programmer even dream of bootstrapping fancy data structures from simpler ones?
> Even in environments like Smalltalk, where all use of collections amounts to "unsafe casting," I've rarely seen situations where a mistake of this type wasn't found trivially.
At scale, the law of large numbers says that even improbable events will occur every now and then. Unfortunately, a program with even one bug is still incorrect.
Would a Go programmer even dream of bootstrapping fancy data structures from simpler ones?
What use is there for a fancy data structure? In practice, these occasions aren't that common. Many "fancy" data structures tend to exhibit bad cache behaviors if implemented naively.
At scale, the law of large numbers says that even improbable events will occur every now and then. Unfortunately, a program with even one bug is still incorrect.
Are you an undergraduate? Depending on how you interpret the spec (which isn't cut and dried when business requirements meet the real world) almost every page of production code has some kind of bug in it. Also, the law of large numbers isn't that relevant for most codebases and developer populations -- the numbers aren't that large. The effect of hubris is much larger in practice.
Improving asymptotic bounds. Providing functionality typically not supported by common data structures. (e.g., I want a key-value container that's a priority queue on keys, but concatenates multiple values associated to the same key)
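As a rough sketch of that kind of container in pre-generics Go (hypothetical names, bootstrapped from container/heap plus a map): note that without parametric types the stored values are stuck being interface{}, which is part of the complaint elsewhere in this thread.

    package main

    import (
        "container/heap"
        "fmt"
    )

    // keyHeap is a min-heap of int keys built on the standard container/heap.
    type keyHeap []int

    func (h keyHeap) Len() int            { return len(h) }
    func (h keyHeap) Less(i, j int) bool  { return h[i] < h[j] }
    func (h keyHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
    func (h *keyHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
    func (h *keyHeap) Pop() interface{} {
        old := *h
        x := old[len(old)-1]
        *h = old[:len(old)-1]
        return x
    }

    // MultiPQ is a priority queue on int keys; values pushed under the same
    // key are concatenated and come back together.
    type MultiPQ struct {
        keys keyHeap
        vals map[int][]interface{}
    }

    func NewMultiPQ() *MultiPQ { return &MultiPQ{vals: map[int][]interface{}{}} }

    func (q *MultiPQ) Push(key int, v interface{}) {
        if _, ok := q.vals[key]; !ok {
            heap.Push(&q.keys, key)
        }
        q.vals[key] = append(q.vals[key], v)
    }

    // PopMin removes the smallest key and returns all values stored under it.
    func (q *MultiPQ) PopMin() (int, []interface{}) {
        k := heap.Pop(&q.keys).(int)
        vs := q.vals[k]
        delete(q.vals, k)
        return k, vs
    }

    func main() {
        q := NewMultiPQ()
        q.Push(3, "c1")
        q.Push(1, "a")
        q.Push(3, "c2")
        fmt.Println(q.PopMin()) // 1 [a]
        fmt.Println(q.PopMin()) // 3 [c1 c2]
    }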
> Are you an undergraduate?
No.
> Also, the law of large numbers isn't that relevant for most codebases and developer populations
The law of large numbers certainly applies to >100 KLOC codebases, unless your bug rates are somehow magically two or three orders of magnitude lower than the average.
> Improving asymptotic bounds. Providing functionality typically not supported by common data structures.
It's nice to know that you remember stuff from the textbook. However, how often do you need to do something like this for real production code? Depending on what it is you normally do, it's entirely possible that you need to do this every other project. It's also possible that you never have a real need to do these things. (It's also possible that you never have a real need to do these things, but you do them anyways, which is far, far worse.)
> The law of large numbers certainly applies to >100 KLOC codebases, unless your bug rates are somehow magically two or three orders of magnitude lower than the average.
True, which is why I'm pretty confident that 15 years of Smalltalk development on many large code bases without running into a heterogeneous collection debugging conundrum is possibly a valid data point.
Contrast that with an endless parade of "hubris coding" in the same timeframe. My impression is that the damage caused by "hubris coding," or the gratuitous worship of "cleverness," far outweighs that caused by insufficient type information by 2 or 3 orders of magnitude. If you're going to be clever about applying your clever, you need to apply it in a fashion where it gives your company's business the biggest bang for the buck. Most coders in their 20's are just trying to impress their fellow programmers.
> It's also possible that you never have a real need to do these things.
What I don't have a real need for is the ability to destroy the internal invariants of other modules. :-p
> True, which is why I'm pretty confident that 15 years of Smalltalk development on many large code bases without running into a heterogeneous collection debugging conundrum is possibly a valid data point.
Who says homogeneous collections are the only use case for parametricity? Parametricity is useful whenever you need to make sure that unrelated parts of your program don't accidentally rely on (or, even worse, alter) each other's implementation details. “Modularity”, as they call it elsewhere. Of course, Smalltalk has none of this.
> My impression is that the damage caused by "hubris coding," or the gratuitous worship of "cleverness," far outweighs that caused by insufficient type information by 2 or 3 orders of magnitude.
I don't separate concerns to be “clever”. Au contraire! I separate concerns to deal with my own brain's limited ability to simultaneously process multiple pieces of information. (And I'll be perfectly honest: I also separate concerns because it's beautiful.)
“Hubris” is a term I would reserve for those who write large programs whose constituent parts don't have fixed structure, yet claim they understand what's going on in the code. (Or perhaps they claim “the tests do the understanding”?)
What I don't have a real need for is the ability to destroy the internal invariants of other modules. :-p
I only ever recall this happening when someone inadvisedly modified or added a method to a library. In my actual industry practice, type information never did anything to "preserve the internal invariants of other modules." The only time we lamented the lack of type information was in large scale refactorings.
Who says homogeneous collections are the only use case for parametricity?
No one. However, that was your example. My argument is that's a really poor example. And now you are abandoning it.
I don't separate concerns to be “clever”.
I also like separating concerns. I'm also not a dynamic typing bigot, though you seem to be imagining you are arguing with one. You seem to have devolved into abandoning your points of arguments and portraying your discussion partner as a series of strawmen. How in the heck did you get here from parametric algorithms? This smacks of intellectual dishonesty.
“Hubris” is a term I would reserve for those who write large programs whose constituent parts don't have fixed structure
And earlier, you were claiming something about refactoring. Do you see a contradiction here?
> In my actual industry practice, type information never did anything to "preserve the internal invariants of other modules."
Most languages don't have abstract types (not to be confused with abstract classes!), so there's that. Abstract types protect invariants of modules from external tampering. This is a mathematical fact.
> However, that was your example. My argument is that's a really poor example. And now you are abandoning it.
I'm not abandoning anything. I'm only saying that the use cases of parametricity go far beyond parametric collections.
> I also like separating concerns.
Good! Then what do you gain from the existence of reflection (which is pretty much the opposite of type abstraction), or the possibility of sending wrong messages? This is as anti-separation-of-concerns as it gets.
Even more worrisome is what you have said in another post: “Decompilation in Smalltalk is trivially perfect, excluding local variable names, so closed source was fairly pointless.” (https://news.ycombinator.com/item?id=12340864) How can you pretend this is compatible with separating concerns? You're talking about inspecting the structure of arbitrary parts of a program!
> I'm also not a dynamic typing bigot, though you seem to be imagining you are arguing with one.
I've just made technical claims. I haven't personally attacked you. If you think I did, my apologies.
> How in the heck did you get here from parametric algorithms? This smacks of intellectual dishonesty.
I also request that you refrain from making personal attacks.
Anyway. Parametricity means more than you think. The inability to inspect the representation of an abstract type is an example of parametricity too.
> And earlier, you were claiming something about refactoring. Do you see a contradiction here?
Nope, I don't see it. Refactoring produces a different program with a different fixed structure. And the difference shows up when the old and new programs have different types. The reason why types are helpful is precisely because they guide the evolution from the old to the new program.
> Please keep in mind that there are differences at scale. What is "easy to work with" for 1 programmer over a month might not be so for 20 programmers over years.
> The argument is that simpler is better at scale. Airplanes can move freely in 3 dimensions, but airliners are constrained to fly in particular ways around busy airports and cross country.
For one, I just dispute the premise that simplicity has anything to do with the cardinality of features/concepts. But let's take that argument at face value: then why _not_ assembly if this is the case? Why not a language with the absolute minimum number of concepts? I think if you interrogate this premise you'll find it doesn't hold a lot of water and that Go doesn't really aspire to this goal anyways. I think we have some amount of working memory for being able to intuit programming with a certain number of concepts. There's a valid argument that some languages suffer by breaking that barrier (though I personally think Go underestimates where that barrier is), but it seems incorrect that language designers should be optimizing for a minimal number of features.
I think complexity at scale has more to do with features that interact poorly (or cause poor interactions more frequently with a larger number of people). Specifically, it's about composition. For instance, there's a valid argument to be made that asynchronous exceptions (i.e. the ability to interrupt another thread with an exception) and locks compose poorly. Mutable state is a common example of a feature that's a detriment to composition. But parametric polymorphism, if anything, gives us a much greater ability to compose. It allows us to define functions that work on data arbitrarily parameterized by other types, which makes them conducive to composition. And likewise, we don't lose the ability to reason about composition at scale with parametric types. A parametric function does not gain complexity as more team members are added, more code is written, more dead code accumulates, etc. Parametricity changes nothing at scale.
> In your experience, what kind of "cost" has there been in unsafe casting to use collections? Even in environments like Smalltalk, where all use of collections amounts to "unsafe casting," I've rarely seen situations where a mistake of this type wasn't found trivially.
That's an argument for Go to not have types. But Go does have types, and type safety is often espoused as a benefit of Go. If you're going to have types, it makes zero sense to me why you should not have parametric polymorphism, since this is the only way to have things like typed collections without opening yourself up to the possibility of casting errors. Frankly I find it bizarre that people claim that they have found type errors to be trivially fixable, because the scope of where a type error can be introduced is enormous in an untyped language... it's literally every location that potentially calls into the code where the error occurs.
> Does your frustration come from having to abandon the "assured safety" the type system would give you, or does it come from an experience of the costs?
Yes, type safety is an enormous advantage to writing correct code in my opinion. It's one of the best mechanisms a programming language can give you for enforcing invariants about data. The Curry-Howard correspondence is a huge advantage to writing correct code. Every place a type checker isn't being used to delimit acceptable data is a potential source of a huge number of bugs. It's also a frustration because casting introduces conversion and type-checking boilerplate that a type checker could ultimately take care of for you.
> But let's take that argument at face value: then why _not_ assembly if this is the case?
Okay, then you can throw away the rest of your post and stop right here. The overwhelming historical evidence is that assembly doesn't scale.
> That's an argument for Go to not have types.
Sorry, that doesn't follow. Is the logic here that, just because I mention Smalltalk, I'm advocating late binding and the only type being Object for Go? The argument is that Go doesn't need a more complicated type system to avoid problems with heterogeneous collections -- because practice shows that even a simpler one can suffice.
> Frankly I find it bizarre that people claim that they have found type errors to be trivially fixable, because the scope of where a type error can be introduced is enormous in an untyped language...
Sounds like you're invoking freshman level false "common knowledge." Have you ever worked in an "untyped" language in a real project? What if a project simply used runtime asserts? Then a type error in a heterogeneous collection would be caught in unit testing. If it got out to production, it could be easily caught and logged. In 15 years of Smalltalk industry work I never encountered the kind of heterogeneous collection type error you're referring to in production. The closest thing I can recall involved the heterogeneous typed reuse of a local variable. (Which is simply bad coding style in Smalltalk.) In Go, you have a type system that provides much more feedback at compile time, and workable mechanisms for detecting the problem at runtime. So at least in this one instance (heterogeneous collections) there is arguably almost no practical benefit to parametric polymorphism.
(P.S. Technically speaking, Smalltalk is strongly typed with message passing semantics for methods implemented through late binding. It's not "untyped.")
> Okay, then you can throw away the rest of your post and stop right here. The overwhelming historical evidence is that assembly doesn't scale.
Huh? I'm not actually arguing that assembly is a scalable language. I'm invoking a counter-example to the idea that a smaller cardinality of concepts is inherently a good thing. Assembly has a smaller number of concepts than Go, so by the espoused benefits of having a language with fewer features, assembly should be favored. But obviously this is not true, so I doubt that Gophers actually subscribe to this version of "simplicity".
My point here is that Gophers need to examine their rhetoric a little more and get better at honing their definition of "simplicity", since it's clearly not just having less "stuff", as Rob Pike seems to claim in every Go presentation.
> Sorry, that doesn't follow. Is the logic here that, just because I mention Smalltalk, I'm advocating late binding and the only type being Object for Go? The argument is that Go doesn't need a more complicated type system to avoid problems with heterogeneous collections -- because practice shows that even a simpler one can suffice.
Your original question was how does unsafe casting introduce cost. It adds cost in exactly the same way that every other means of circumventing a type system or not having a type system introduces cost: it allows runtime errors to occur at points where data is illegally used.
Type systems are effectively proof solvers. Just like making an improper assumption in a logical proof can lead to a faulty conclusion, forcing a type system to assume a type for a value that it cannot prove can lead to a buggy program. This is why programmers who are strong believers in static type checking take issue with casting: it's a way of circumventing the protection that a type checker gives you, when instead you could add power to the type system for expressing your constraints or add means of showing the equivalency of different types.
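A minimal, made-up illustration of that cost in Go: a heterogeneous "collection of anything" compiles without complaint, and the mistake only surfaces as a panic at the point of the cast.

    package main

    import "fmt"

    func main() {
        ids := []interface{}{1, 2, "3"} // the string sneaks in somewhere far away
        total := 0
        for _, id := range ids {
            total += id.(int) // compiles fine; panics at runtime on the string
        }
        fmt.Println(total)
    }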
> Sounds like you're invoking freshman level false "common knowledge."
There's no need to get personal here. I'm making a factual point: it's true that any code path calling into the point where a type bug occurs is potentially responsible. Nothing in my comment is invoking "common knowledge". Also you should keep in mind that invoking your "personal experience working on X large scale system in industry" is not a compelling argument. It's not even a comparative argument about an untyped language vs a statically typed language.
> Have you ever worked in an "untyped" language in a real project? What if a project simply used runtime asserts? Then a type error in a heterogeneous collection would be caught in unit testing. If it got out to production, it could be easily caught and logged. In 15 years of Smalltalk industry work I never encountered the kind of heterogeneous collection type error you're referring to in production. The closest thing I can recall involved the heterogeneous typed reuse of a local variable.
Yes, I have. I've worked in Python and Javascript for a couple large projects. I'm not going to get into my feelings about this, because I don't think it forms the basis of a compelling argument.
However, I take issue with the claim that these kinds of bugs are always trivially caught in unit tests. One thing to note about untyped languages is that they allow an infinite number of values to be passed to a function by virtue of being untyped, so there's no way to write an exhaustive unit test. This isn't unique to untyped languages (for instance, I can't write an exhaustive unit test in Haskell for a function that accepts strings), but a sufficiently expressive typed language always gives me the ability to reduce the scope of my tests by writing more constrained types (for instance, with sized collection types using DataKinds in Haskell). Similarly, languages that disallow parametric types cannot express constraints about contained values in a type, which allows exactly the same sorts of bugs that an untyped language can have.
Unit tests are great, but they are better suited for probing the edges of acceptable inputs based on assumptions about the code under test, and are generally poorly matched to providing the guarantees of a type system. They are not perfect: they can suffer from laziness, code rot, faulty assumptions, etc. I've seen bugs in test code far more frequently than I've seen bugs in a type checker (in fact I don't ever think I've seen a bug in a type checker).
My argument here boils down to the fact that you can trivially show there's potential for human error here that a type system can protect against. The point of contention you have is that these kinds of bugs don't manifest in practice. In my experience they do, and they occur more frequently in larger scale systems where there are more invariants to juggle that a type system doesn't ensure for you. I'd also argue that this largely explains the resurgence of typed languages with more expressive type systems (like Scala, Rust, Swift, Idris, Hack, etc.). Ultimately I think we just have to agree to disagree here.
> (P.S. Technically speaking, Smalltalk is strongly typed with message passing semantics for methods implemented through late binding. It's not "untyped.")
Untyped is commonly used in academic literature to refer to "dynamically typed" languages[1]. The strong/weak typing distinction is arguably imprecise or a useless distinction, especially for dynamically typed languages. For example, how does smalltalk prevent "type punning" when functions do not declare the types of values they may be called on? Perhaps you can make the argument that dynamic languages like these can justify their claim to "strong typing" by having builtin operators that do not make implicit conversions of the values they work on, but this guarantee doesn't hold in general in user defined code, so it seems like a useless distinction.
Ouch. That quote is incredibly unkind to his colleagues. I remember back when Google practically required at least a master's degree for their new hires. Throwing out Georgia Tech grads with 4.0 GPAs and only a bachelor's.
Guess they don't think so highly of their hires anymore.
We are yet to be convinced that a degree or GPA has any correlation with actual abilities related to programming with regard to quality, performance, abstractions, languages, tools or anything.
> There are no guide rails telling you that you forgot to check an error result. This is a runtime thing you need to discover. Is this actually simpler?
This may be considered cheating since it isn't baked into the language but there are tools to do this at build time, here's one: https://github.com/kisielk/errcheck
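For example, this is the kind of call errcheck reports (a made-up snippet, not from that repo): error returns dropped entirely, which the compiler itself never warns about.

    package main

    import "os"

    func main() {
        f, err := os.Create("/tmp/out.txt")
        if err != nil {
            panic(err)
        }
        f.Write([]byte("hello\n")) // Write's error is ignored; errcheck flags this
        f.Close()                  // Close's error is ignored; errcheck flags this too
    }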
There are lots of tools in Go that make up for issues people have with the language. Another example is the IDE macros people use for the standard if err != nil {} block.
Thank you, and may I say what a well written comment.
Poor old Tony Hoare (ALGOL was mentioned earlier, rightly, as an exemplar of innovation), but null in a safe, memory-managed context is a different beast from a true null reference.
Null appears then as something between a known state and not quite an exception; it carries different semantics from either, and whilst this could be seen as more complexity, I think the "I just don't know" case is useful in practice, if harder to reason about.
Your point about the runtime is very true. Partly because of this - error checking is overrated! Yes, I said it! We have Go code that has been running in a reasonably high-scale production environment for over two years, and there are `if ...; err != nil` blocks that have never been touched in millions of calls per day, for 2 years. We have redundant services and trap panics in the rpc handlers, the nil becomes very clear, and the rest of the system makes good progress. We save lines, save tests, and release a single binary fast. One example of where Go helps us deliver value faster, by being able to choose to ignore exceptions. Many people find this very uncomfortable. I say they are mistaking where the true project risks lie.
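A minimal sketch of that trap-the-panic pattern (hypothetical handler and names, not their actual rpc plumbing): the panic from an unchecked-error path is logged and turned into an error response instead of taking the process down.

    package main

    import (
        "fmt"
        "log"
    )

    // safeCall recovers any panic raised by handler, logs it, and converts it
    // into an ordinary error so the server keeps making progress.
    func safeCall(name string, handler func() error) (err error) {
        defer func() {
            if p := recover(); p != nil {
                log.Printf("panic in %s: %v", name, p)
                err = fmt.Errorf("%s: internal error", name)
            }
        }()
        return handler()
    }

    func main() {
        err := safeCall("lookup", func() error {
            var m map[string]int
            m["boom"] = 1 // writing to a nil map panics
            return nil
        })
        fmt.Println(err) // lookup: internal error
    }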
> being able to choose to ignore exceptions. Many people find this very uncomfortable. I say they are mistaking where the true project risks lie.
It's funny, every language I know gives you an option to basically ignore the error and just pray. I get what you're saying, but if this is the type of code you want to produce, you can still do that in other languages that have strong types around Null and Errors.
In Rust for example:
my_possible_error.unwrap()
In places where you are explicitly making that choice. And to me that's the big difference. Is it explicit or implicit/unknown?
Fair point, just differing levels of hoops to jump through, or general expectations of best practice. In that sense I would say Java and C# do not allow you to 'ignore' exceptions.
I suspect your "pray" == monitor closely and fix fast (often never).
I agree it can be problematic that there is nothing in the language that indicates that the author is explicitly in 'pray' mode. It could well be that even a brief defence of a missing error check during a PR code review is not worth avoiding (un-triggered) err handling.
OCaml and Haskell build self-contained binaries, and have done for decades. OCaml has very fast compile times, and an ordinary C-like linking system. You can even directly link C *.o files. Go certainly isn't "innovative" here.
And if you don't need types, there's LuaJIT with all the dynamic features, the best inline C FFI ever done, runtime performance on par with natively compiled languages, and tools for producing self-contained binaries if needed - https://luapower.com/bundle
Rust is not an alternative to Go. Go is significantly easier to learn as it's very simple. Crystal and Nim come close but they do not have the backing of a large tech company and their ecosystems are not as mature.
There are much more robust and advanced programming languages with native code compilers, imperative programming support and decent code performance - e.g. OCaml.
How is it dangerous? Your qualifying remarks with regards to Haskell's domain make no sense. If you need a fast, compiled language with managed memory, high ease of development and a strong ecosystem then you can't go wrong with Haskell.
'Type stable computation' and a strong correctness guarantee are some added benefits of Haskell, though any strongly typed language (like for example Rust) will have these qualities.
A nice benefit of Haskell that most other languages don't have is that it is explicit about side effects which gives you some extra confidence in the behaviors of your code. Related to this is its unusually powerful type system, which allows you to make some abstractions for generic code that are not possible in most other languages.
FWIW, strictness may be introduced into Haskell programs. Weak-head normal form evaluation is built in with the "seq" function, and the "deepseq" library is commonly used for full normal form evaluation of expressions.
GHC 8 also introduces the Strict and StrictData pragmas[1] which allow you to make a module (or its types) fully strictly evaluated.
Can you give me an example of a domain where laziness is not desirable? I only do Haskell for side projects, so perhaps I lack exposure to some domains.
Laziness can make it hard or counterintuitive to determine the runtime properties of your program, especially with regards to memory. Same with real-time systems. But you can turn it off, or force evaluation if needed.
Almost all static compiled languages have AOT compilation to native code, even Java and .NET ones, although many tend to ignore it.
So all the ML languages, Java, .NET, Pascal dialects, Modula-2, Oberon and its descendants, Ada, Crystal, Nim, D, Rust, Swift, Objective-C, ...
"Like lambdas, it can be a complicated concept to learn, but once you unlock this in you code, you write less code and accomplish more"
With the caveat that anyone who may be maintaining/using/enhancing your code will also need to be able to "unlock" this. Keeping the language simpler has benefits beyond the initial code development.
This is one of my biggest frustrations when these sorts of discussions come up. It seems a great many programmers -- amateurs, students, and even professionals -- resist learning anything new.
As programmers, we deal primarily in abstractions. Our programming languages offer formal tools for creating and manipulating abstractions. In my view, any language that offers more tools for abstractions is better than another language that offers fewer such tools. As a professional whose primary job is to deal with abstractions, any new kind of abstraction is of interest. All programmers should be not only willing to be constantly learning new techniques and new abstractions, but we should be eager to learn and apply these things. Bigger toolbox => better quality of life w.r.t. work.
Even at my day job, I've heard things like "that's too computer sciency for mere mortals". I'm sorry, are we not computer scientists? Are we in the habit of employing people who are not professional programmers to write our software? And to think I'm the only developer in the office without a master's degree, as if they all decided that once they graduated they were finished learning...
Heaven forbid you should have to learn something! To educate yourself! To grow in terms of knowledge and skill! Do we have "development goals" every year for no reason at all?
As if spending an hour or two learning something would kill you!
> This is a huge improvement over Java, and something every language should have an option for.
I am not sure what improvement you are talking about. Deploying Go apps requires recompiling for the target platform. On JVM, you only need to install the JVM. There are also tools to wrap JVM apps in executable files that will automatically download and install a suitable JVM.
I'm specifically talking about the classpath, jars, separate jvm install. These are a pain to manage across environments.
I'm a longtime Java engineer, it's a great language, but I do think the compile once thing isn't as big an advantage anymore.
With the advent of LLVM, it's easy to target specific machines. rustup even makes it possible to build binaries for every target environment you have.
And let's be honest, how many people target more than Linux/x86_64 on the server side? Even if you target FreeBSD or Windows, my bet is that you're still generally only targeting one platform.
Java is battle-hardened and has seen almost every situation in business programming. Go has miles to go before it can even be eligible to be compared to Java in terms of productivity and maintainability.
The JVM is, but sometimes there are things that require specific bug fixes in the GC, for example, and then it's just a big issue combining the jars with the JVM, etc.
My only point is that a single thing to deploy is easier than multiple.
The irony in all of these comments is that they almost all fall back to, or start with, discussing language design, and on the whole ignore the tools and processes that have a consistency from Go team to Go team. The value of this power and consistency is probably overlooked in this and many other conversations because they are complex to discuss, and it is simply easier to focus on the almost provable value of the language features, missing or present (maybe another availability bias at work?). The point of the article was to try and refocus on Go as an engineering tool in a much broader context.
You'd have a point if the default with Go was a good choice. In practice it's not. It's a terrible choice that gives you no ability to use multiple versions of a package (across projects), control the versions of a project's dependencies, vendor a dependency, or build from your system build directory on a case-by-case basis, etc.
The entire build chain with Go is probably one of the most frustratingly limited build tools I've ever used, which is probably why nearly every Go developer I've met has switched to using one of the third party options.
If you make a program that is executed a million times or more a day, it makes sense to have a language that is "near the CPU" and allows you to optimize and speed things up the most. This is what Go is. It would be a mistake to use it elsewhere.
Go (the language) has at least two implementations: the official Go implementation and GCC (yes, Go is included in GCC, along with Fortran and Ada).
The latest Go implementation (Go 1.7) has made Go a lot faster. I would argue that it is closer to the speed of executables generated with GCC (gcc/g++) than to OpenJDK, the Oracle JVM, Mono or the .NET compiler for C#.
Go (the language) can be made just as fast as C (the language), for many cases. Go has the advantage of making it much easier to use multiple processors, though.
> The latest Go implementation (Go 1.7) has made Go a lot faster. I would argue that it is closer to the speed of executables generated with GCC (gcc/g++)
Go 1.7 performs nowhere near the set of optimizations that GCC and LLVM do. GCC/LLVM have a huge number of algebraic simplifications (InstCombine), aggressive alias analysis, memory dependence analysis, instruction scheduling, an optimized instruction selector, a highly tuned register allocator with stuff like rematerialization, SCCP, etc. etc. It will take years and years for Golang to come close.
> Go (the language) can be made just as fast as C (the language), for many cases.
No, it can't. The M:N scheduling model will always have some overhead relative to 1:1 if you don't need the performance profile of C10K-style servers. The dynamic semantics of "defer" is an unavoidable performance tax over RAII. Unwinding is mandated by the language, inhibiting some optimizations. There is little control over allocation: language constructs allocate in ways that are not immediately obvious. The fact that interfaces result in huge numbers of virtual calls results in a good amount of overhead that (unlike Java) Go can't even eliminate with inline caching, because it's AOT compiled. This is just off the top of my head.
> Go has the advantage of making it much easier to use multiple processors, though.
Not really. Go's parallelism primitives are just as low-level as those of C. The "one size fits all" scheduling algorithm is a poor fit for getting the most performance out of multicore. The lack of generics is a real problem: it prevents you from using optimized concurrent data structures without paying the tax of interface{} or going through code generation hoops.
In any case, the lack of SIMD basically kills Go's applicability in these domains.
Everything you wrote is true, but I think you're overstating the performance cost of Go's design and implementation choices. I think the parent comment is fair in saying that Go is somewhere between GCC and the JVM in terms of performance. But I agree that Go is not designed for extreme performance (the kind of software where even a 1% gain matters), and that C/C++/Rust are better for that purpose.
Until you have control over stack/heap and data locality you're never going to be able to approach C/C++/Rust speeds.
Conversely if you're using C/C++ through a ton of heap/virtual pointers then you're losing a lot of the value the language brings and should be using something higher level.
Go allocates on the heap unless it can prove something doesn't escape, in which case it's on the stack. Not explicit programmer control, but I think you can reasonably make it do what you want.
Because it exposes pointers as a first-class concept, you also have good control of how data is laid out in memory (=> locality).
It's not like Python or Java where everything is a pointer and gets spread out all over memory.
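A small sketch of the layout control being described (nothing Go-specific beyond the syntax): a slice of structs is one contiguous block, while a slice of pointers scatters the data.

    package main

    import "fmt"

    // Point occupies 16 bytes; []Point keeps the data contiguous, []*Point does not.
    type Point struct{ X, Y float64 }

    func main() {
        packed := make([]Point, 1000)     // one backing array, cache-friendly to iterate
        scattered := make([]*Point, 1000) // a contiguous block of pointers only
        for i := range scattered {
            scattered[i] = &Point{} // each Point is a separate allocation
        }
        fmt.Println(len(packed), len(scattered))
    }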
> Not explicit programmer control, but I think you can reasonably make it do what you want.
Escape analysis, like any such analysis, gets much more difficult in the presence of higher-order control flow. Currently the Go compilers punt on higher order control flow analysis. And Go uses higher-order control flow in spades, due to its heavy reliance on interfaces.
The end result is that lots of stuff is heap allocated.
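A hedged illustration of that point, checkable with `go build -gcflags=-m`: passing a value through an interface boundary (here fmt.Println's ...interface{} parameter) typically forces it to the heap, while the plain call does not.

    package main

    import "fmt"

    // sum only reads the slice, so the slice can typically stay on the caller's stack.
    func sum(xs []int) int {
        t := 0
        for _, x := range xs {
            t += x
        }
        return t
    }

    func main() {
        xs := []int{1, 2, 3} // usually reported as "does not escape"
        t := sum(xs)
        fmt.Println(t) // t is boxed into an interface{} and usually "escapes to heap"
    }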
> Java where everything is a pointer and gets spread out all over memory.
That's not true for Java. Its generational garbage collector performs bump allocation in the nursery, yielding tightly packed objects with excellent cache behavior. Allocation in HotSpot is like 3-5 instructions (really!)
I think the HotSpot approach makes the most sense: instead of trying to carve out special cases that fall down regularly, focus on making heap allocations fast, as you'll need to make them fast anyway. After that, add things like escape analysis (which HotSpot has as well).
Java: But you're still chasing pointers for an array of objects right? Vs being able to just say, "I want this array to be X objects, all laid out in a row in memory." I'm not a java programmer, but I'm pretty sure I've seen code that used primitive types rather than classes to get around this.
Actually, even Go isn't helping as much as it could here -- sometimes you want to have an array of objects that lays out each column (field) of memory contiguously, which Go gives you no easy way to do. But then neither does C or C++.
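For what it's worth, the manual workaround looks like this in Go (a rough sketch with made-up names); the complaint stands in that neither Go nor C/C++ will derive this layout for you from an ordinary struct definition.

    package main

    import "fmt"

    // Particles stores each field as its own contiguous column ("struct of arrays").
    type Particles struct {
        X, Y []float64
        Mass []float64
    }

    func (p *Particles) Add(x, y, m float64) {
        p.X = append(p.X, x)
        p.Y = append(p.Y, y)
        p.Mass = append(p.Mass, m)
    }

    func main() {
        var ps Particles
        ps.Add(0, 1, 2.5)
        ps.Add(3, 4, 1.0)
        fmt.Println(ps.X, ps.Y, ps.Mass)
    }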
Isn't allocation just a couple of instructions for basically all GC'd languages?
> Isn't allocation just a couple of instructions for basically all GC'd languages?
If those GC'd languages have a precise generational GC with bump allocation in the nursery. Go doesn't (and the proposed GC design doesn't allow for this, unfortunately).
> That's not true for Java. Its generational garbage collector performs bump allocation in the nursery, yielding tightly packed objects with excellent cache behavior. Allocation in HotSpot is like 3-5 instructions (really!)
That post's numbers are entirely based on a giant multi-gigabyte long-lived array: the classic worst case for a generational GC. That is not representative of most memory allocations. The generational hypothesis, which has been empirically verified in real world code again and again, is that most allocations are short-lived and small.
There were 3 points mentioned in the post, with only 1 about GC. Performance and memory efficiency were major points not delivered by standard idiomatic Java code.
Considering the popularity of memory-compact Java collections like Fastutils etc., I feel that the memory bloat of standard Java is a very common issue plaguing Java applications.
Can you do an arena allocator in Go with disparate types? If not then you're really missing out on data locality.
I'm also not a huge fan of a compiler "automatically" performing escape analysis. It makes it very easy for a single change to cause cascading perf problems that are hard to catch.
No, not with disparate types (maybe if you resort to weird tricks with unsafe). I'd be curious to hear a use-case for this that ends up being different from just 'normal GC allocation' (not doubting you, just curious).
Two different things here: previous versions of the compiler got slower as it was moved, in automated fashion, from C to Go. Code compiled with Go was not slower. The new version of Go has both improved compile time (though not quite to the speed of the first few versions of the compiler) and improved code generation, so code is faster than any previous version of Go.
One thing I can credit Go for is leading me from the untyped world to the typed one. I soon found the holes in the Go type system, and went looking for a stronger type system.
Perl has some magical built-in types (scalar, array, hash, typeglob, regexp, IO handle) that you cannot confuse; interface{} is like a scalar holding a reference; and the built-in datatypes (arrays and hashes) are magical, so you cannot construct something similar yourself. Well, at least there's the 'tie' mechanism in Perl that makes types extensible.
Go is more from the typed world than Python where I came from. It will tell you at compile time you are using a string rather than an int (unless you use interface{}).
The only difference is the selection of basic types. There are just no such types as string or int in Perl; Perl is a contextually polymorphic language whose scalars can be strings, numbers, or references (which includes objects). Although strings and numbers are considered pretty much the same thing for nearly all purposes, references are strongly-typed, uncastable pointers with built-in reference counting and destructor invocation.
Reflection, whether with interface{} in Go or with a $scalar in Perl, is a runtime thing. reflect.TypeOf(tst) vs. ref($tst) - there's not much difference.
Type mismatch errors for basic types are handled at compile time, be it Go or Perl.
It is a small thing because there are other languages that offer the same safety with more features and, let's face it, if a C coder is willing to embrace a GC-enabled language there are lots to choose from, with AOT compilation to native code.
I just expected more from Google, especially if one compares it to the other company-sponsored languages.
Comparing apples to apples, the first compiler sponsored by the 'other company' was for PHP, so Go doesn't look that bad in comparison. Reason is their second; maybe Google's second language would be 1ML.
I love how C++ has had simple features like default arguments / function overloading for decades, while modern languages like Go and Rust require awkward workarounds.
Swift 3 looks good, though. They've learned the right lessons.
Default arguments, at least as done in C++, complicate the language's semantics (e.g., template specialization selection) far more than they raise the level of abstraction. Definitely not a well-designed feature.
And Rust's traits are a far more principled (and thus better!) approach to overloading than anything C++ has (Boost's concept checks?). Traits turn concepts into language entities that are directly expressible in Rust syntax, rather than in awkward English documentation.
I certainly wouldn't say these are "simple features" in C++. Overload resolution in particular is one of the most complicated parts of the language. Default arguments can get weird since the right hand side of the default, more or less, just gets inlined at the call site; I do recall there were a couple bugs in the last year at work because of C++ default arguments, though I don't remember their details.
It's also fun when you don't realize a particular function has a default argument until you make a function pointer to it and assign/pass it to something that you mistakenly think is compatible. Depending on how nasty your codebase's use of templates and overloaded functions is, this can be a nightmare to debug.
Android predates Go. And it wasn't even started by Google: Android was its own company and had already made its decision on Java well before Google decided to buy it.
Not to mention Go is focused on a different use case. Go is gunning for microservices (with its concurrency chops) and CLI-based tools (being a single compiled binary), whereas Android apps are a totally different beast that stands little to gain from either of those. In fact, shipping multiple binaries for different architectures is a bit of a detractor for Android considering it supports MIPS, ARM, and x86.
> until I realized AlphaGo was not actually written in Go!
Again, AlphaGo was based off technology from DeepMind, a company Google acquired.
Please at least do some quick Wikipedia browsing before spewing FUD.
As someone who was there, let me say that the decision to use Java was definitely not made before the Google acquisition. The codebase that came with the acquisition was C++/JavaScript and was completely rewritten.
So? Apple introduced Swift after Obj-C to make developing iOS apps easier. What about GOOG? Couldn't they at least use Dart or Go (both of which they developed) for android app development after Java? BTW, last time I checked, they're still in for a lot to come from Oracle.
> Again, AlphaGo was based off technology from ...
So let me get this straight. They bought a technology which was apparently written in C++/JS and rewrote it in another lang, but then again, they did not choose Go or Dart.
Uhh, as someone who was there when Android began: there was no Go in 2005. And anyway, Go started with an explicit initial goal of being a server-side language, with choices (e.g., only static linking) appropriate for that goal.