The type system in Go is as important as the concurrency system. Sure, Erlang can do concurrency as well as (or better than) Go, but it doesn't have the type safety that Go offers.
For one thing, Erlang's shared-nothing design makes migrating processes to different machines easier and doesn't require a stop-the-world garbage collector.
Erlang is memory safe in the presence of many-core (or many-system) parallelism. Go is not (you can segfault, possibly in an exploitable way, if GOMAXPROCS > 1).
Erlang's unbounded channels reduce deadlocks because your sender can continue execution without waiting for a receiver.
Have we heard a lot of stories about deployed Golang apps having problems because of garbage collection? It's true that this is a significant designed-in advantage for Erlang, which can run its collector on a process-by-process basis.
What's also true and potentially compensatory is the C-like degree of control Golang gives you over how you allocate memory and lay it out.
I'm not sure the unbounded channel thing is a real advantage for Erlang. I'm happy to be convinced I'm wrong. What's a real, correct design which would be hard to realize in Golang (without unbounded channels) that relies on unbounded channels?
> Have we heard a lot of stories about deployed Golang apps having problems because of garbage collection? It's true that this is a significant designed-in advantage for Erlang, which can run its collector on a process-by-process basis.
In general most Go apps that have been deployed are Web apps and server infrastructure, where concurrent garbage collection is not too much of a problem in practice. So Go's choice makes sense in Go's context. It does limit parallel scalability in some contexts—which of course are not the contexts that most people have been using Go for at this point.
> What's also true and potentially compensatory is the C-like degree of control Golang gives you over how you allocate memory and lay it out.
Go doesn't give you C-like control over allocation of memory. Language constructs will allocate memory in ways that are not immediately obvious, to quote Ian Lance Taylor [1]. It does give you control over layout of memory.
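To illustrate (a rough sketch, not an exhaustive list, and the exact behavior depends on the compiler's escape analysis):

    package main

    import "fmt"

    func main() {
        s := "hello"

        // Converting a string to []byte copies the bytes into a new allocation.
        b := []byte(s)

        // Putting a plain value into an interface can heap-allocate a box for it.
        var i interface{} = 42

        // append allocates (and re-allocates) backing arrays as the slice grows.
        var xs []int
        for n := 0; n < 100; n++ {
            xs = append(xs, n)
        }

        // Closures that outlive their frame force captured variables onto the heap.
        counter := func() func() int {
            n := 0
            return func() int { n++; return n }
        }()

        fmt.Println(len(b), i, len(xs), counter())
    }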
> I'm not sure the unbounded channel thing is a real advantage for Erlang. I'm happy to be convinced I'm wrong. What's a real, correct design which would be hard to realize in Golang (without unbounded channels) that relies on unbounded channels?
Suppose you're pulling down images from the network and printing out a sorted list of URLs of all the images you find. You might structure it as two goroutines A and B. Make two channels, "urls" and "done". Goroutine A is the network goroutine and simply crawls looking for images to stream to B over the channel "urls". When it's done it sends "true" on "done". Goroutine B is the sorting goroutine and first blocks on the channel "done" before it proceeds, after which it drains the "urls" channel and sorts the results.
This program contains a deadlock due to synchronous message sends. If there are more URLs to be downloaded than the buffer size of "urls", then the program will deadlock. If "urls" were an asynchronous channel, however, this would be fine.
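Here is a minimal sketch of that structure (the crawling is stubbed out and the URLs are made up), which trips the deadlock as soon as the buffer fills:

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        urls := make(chan string, 2) // small buffer; unbuffered deadlocks even sooner
        done := make(chan bool)

        // Goroutine A: pretend to crawl, streaming image URLs to B.
        go func() {
            for i := 0; i < 10; i++ {
                // Once the buffer fills, this send blocks forever, because B
                // below is still waiting on "done"...
                urls <- fmt.Sprintf("http://example.com/img%d.png", i)
            }
            // ...so this line is never reached.
            done <- true
        }()

        // Goroutine B (here, main): block on "done", then drain and sort.
        <-done
        close(urls)
        var all []string
        for u := range urls {
            all = append(all, u)
        }
        sort.Strings(all)
        fmt.Println(all)

        // With a buffer of at least 10 (or an unbounded/asynchronous channel),
        // this would print the sorted URLs; as written, the runtime reports
        // "all goroutines are asleep - deadlock!".
    }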
Of course this can be restructured to fix it, for example by doing the send in another goroutine (although that costs performance). But hopefully that's a good illustration of the subtleties of synchronous message sending.
> Go doesn't give you C-like control over allocation of memory. Language constructs will allocate memory in ways that are not immediately obvious, to quote Ian Lance Taylor [1]. It does give you control over layout of memory.
These are two sides of the same coin. Having control over memory layout allows you to implement what are in effect allocators.
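For instance, here's a sketch (names and sizes invented, not production code) of a toy free-list allocator built on one contiguous slice of structs:

    package main

    import "fmt"

    // node is the kind of value we want to hand out without a per-object heap allocation.
    type node struct {
        value int
        next  int // index of the next free slot, -1 if none
    }

    // pool is a toy free-list allocator backed by a single contiguous allocation.
    type pool struct {
        nodes []node
        free  int
    }

    func newPool(n int) *pool {
        p := &pool{nodes: make([]node, n)}
        for i := range p.nodes {
            p.nodes[i].next = i + 1
        }
        p.nodes[n-1].next = -1
        return p
    }

    // alloc hands out an index into the backing array instead of a fresh heap object.
    func (p *pool) alloc() int {
        i := p.free
        if i == -1 {
            panic("pool exhausted")
        }
        p.free = p.nodes[i].next
        return i
    }

    // release returns a slot to the free list.
    func (p *pool) release(i int) {
        p.nodes[i].next = p.free
        p.free = i
    }

    func main() {
        p := newPool(4)
        a := p.alloc()
        p.nodes[a].value = 42
        fmt.Println(p.nodes[a].value)
        p.release(a)
    }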
With something like the sketch below, you will never actually block on your send, since it runs in its own goroutine. I can't see an actual use case for this kind of thing, but since you keep using this argument over and over... :)
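A rough sketch (the channel and URL values are just placeholders):

    package main

    import "fmt"

    func main() {
        urls := make(chan string) // unbuffered: a bare send would block until someone receives

        // Each send runs in its own throwaway goroutine, so this loop never
        // blocks on the channel; the cost is one goroutine per send.
        for i := 0; i < 3; i++ {
            u := fmt.Sprintf("http://example.com/img%d.png", i)
            go func(u string) { urls <- u }(u)
        }

        // Drain the three sends (arrival order is not guaranteed).
        for i := 0; i < 3; i++ {
            fmt.Println(<-urls)
        }
    }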
It's mostly FUD, but it's a rathole of an argument since nobody has defined what "exploitable" means. Rather than nail down the term so we can all have ourselves an even more pointed language war, we should probably just let this go.
In general I consider segfaults exploitable, because of heap spray and virtual method calls. Even if not "exploitable", I consider it "very very scary".
This is far fetched. The threat scenario that page contemplates is "what happens if you're trying to safely run untrusted Golang code, as if it was content-controlled Javascript and you were a browser". It's not the case that Golang in its natural environment gives attackers the ability to paint the heap with malicious addresses and then provide themselves with a statistically significant shot at corrupting memory to exploit those addresses.
Your point is a reason that Golang couldn't be dropped in as a browser Javascript replacement. But nobody has ever suggested that it could be; it can't, just like Java (which was designed for the purpose but failed at it) and Erlang (which wasn't) can't.
> It's not the case that Golang in its natural environment gives attackers the ability to paint the heap with malicious addresses and then provide themselves with a statistically significant shot at corrupting memory to exploit those addresses.
What if a Go program allowed Go objects to be scripted by untrusted user code written in JavaScript? In browsers it is very possible to corrupt the Frame (Gecko)/RenderObject (WebKit) tree, which is in a C++ heap that is separate from the JavaScript heap.
The memory safety issues in C++ are a problem not just because they mean that untrusted code written in C++ can't safely be executed (although they do mean that). They also mean that safe languages become easily weaponizable. JavaScript (or Lua, or whatever) embedded in a Go program could paint the heap with malicious addresses.
This argument reduces to, "what if a Golang process exposed enough of its runtime to Javascript so that Javascript would be able to simulate an attacker just having access to Golang in the first place". In reality, if you were wacky enough to bolt Javascript onto Golang, you probably wouldn't do it in a way that would enable heap spray exploits, even if you had no idea what a heap spray exploit was.
The Javascript/C marriage problem isn't "heap spraying"; it's that the interpreters themselves are full of exploitable C bugs, which is made much worse by the fact that the Javascript object lifecycle is expressed in an inherently unsafe language, so every tuple of [reference, event] has to be diligently checked. The same simply wouldn't be the case for any realistic marriage of Golang/Javascript, if only because the number of exploitable code conditions in Golang is minuscule compared to that of C.
All heap spraying does is make bugs that are very plausible to exploit easy to exploit reliably. You still have to start with "plausible".
> All heap spraying does is make bugs that are very plausible to exploit easy to exploit reliably. You still have to start with "plausible".
I'm not confident that race conditions in a shared-everything language are not "plausible". My experience is that race conditions are subtle and hard to find, even with a race detector. All you have to do is race on a map or a slice. And virtual calls are everywhere in Go.
Sure, we don't know that it's a problem so far, as nobody has created such a scenario. We're in violent agreement there. I grant that for server-side use cases, it doesn't matter—people use enormous C++ server codebases in production all the time and memory safety issues rarely bite them to the same degree that we see in browsers.
All I'm saying is that I don't have the same level of confidence that Go is free from memory safety exploits that I have for, say, Erlang or Java.
The use-after-free bugs that people are heap-spraying to exploit are plausible because the people who find them can tell you a simple story about how a program writes attacker-controlled data to attacker-controlled addresses. Unlike the Golang hypothetical you offer, they aren't plausible just because someone says they are.
You're wildly off the mark when you say that C++ server-side code tends to survive against attackers looking for memory corruption bugs. It does not. It fails with memory corruption flaws routinely. That was a cheap shot (you tried to create an equivalence class of unsafety between two totally unrelated languages and two totally unrelated sets of bug classes) and it won't work. You're going to have to try harder to make a case, if it's worth it to you.
Nothing is as bad as browser Javascript (it would be hard to conceive of a harder software security problem to design against), but C++ server software is pretty far towards the "unsafe" side of the security spectrum, and Golang and Erlang probably occupy virtually the same spot on that spectrum.
Unfortunately, I think we're basically at an impasse here.
I've described a scenario whereby a Go program that embedded untrusted safe code could fall to memory safety vulnerabilities. To be exact, it creates a slice of interfaces and accesses the slice in a racy way, calling virtual functions at the same time it inserts, causing the slice to be reallocated. Then an attacker sprays the heap with addresses of shellcode. This results in arbitrary code execution when calling a virtual method.
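For concreteness, here is a sketch of that access pattern (the types are invented, and there is no exploit here, just the race itself):

    package main

    import (
        "fmt"
        "time"
    )

    // Calls through speaker go via an interface value, i.e. an indirect
    // ("virtual") call through a (type, data) pair of words.
    type speaker interface{ Speak() string }

    type dog struct{ name string }

    func (d dog) Speak() string { return d.name + " says woof" }

    func main() {
        animals := []speaker{dog{name: "rex"}}

        // Writer: appends with no synchronization, which periodically
        // reallocates the backing array and rewrites the slice header.
        go func() {
            for i := 0; i < 1000000; i++ {
                animals = append(animals, dog{name: fmt.Sprintf("dog%d", i)})
            }
        }()

        // Reader: makes indirect calls through the interface values while the
        // writer mutates the slice. `go run -race` reports this as a data
        // race; without synchronization the reader can observe a torn slice
        // header or a half-written interface value.
        deadline := time.Now().Add(100 * time.Millisecond)
        for time.Now().Before(deadline) {
            for _, a := range animals {
                _ = a.Speak()
            }
        }
    }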
You're saying that this is so unlikely as to be implausible, as it's never been observed in practice and might not even work. That's fine, I respect that position. We'll leave it there, and agree to disagree about whether this is a concern relative to languages like Erlang that are designed to be 100% free of memory safety problems. :)
What you've done in this comment is recapitulate the idea of a web browser executing content-controlled code, which is a problem that neither Erlang nor Golang could safely solve, and which no reasonable designer would ever use Erlang or Golang to solve, but layered on just enough abstraction to make that observation sound symptomatic of a problem with Golang.
I'm not trying to be pissy about it; I make bogus arguments all the time too, often without realizing it. You obviously know what you're talking about. I just think in this one subthread, you're wrong.
I don't dispute that segfault almost always means exploitability (I've seen friends write amazing exploits using only the most tightly restricted memory corruptions).
But the data races don't bother me at all: first, you won't encounter them if you embrace a program design built around goroutines and channels (the often quoted "don't communicate by sharing memory; share memory by communicating"), and second, we now have the tools to detect data races.
Segfaults do not almost always mean exploitability. In fact, the largest class of segfaults (un-offsetted NULL pointer dereferences) are rarely exploitable. The argument that says "look at that program, it segfaulted, it is probably exploitable" is not really valid.
Honestly, I don't know. I know Go does concurrency well and I like it; the only reason I said it was equivalent or better was to avoid a flamewar. I've never even used Erlang.