Rust's ownership model, and the borrow checker that enforces it, are a major breakthrough in language design. The idea is so simple, yet it solves so many problems.
This is what Go should have done. Then Go's "share by communicating, not by sharing" would be real, not PR. Go code often passes references over channels, which results in shared memory between two threads with no locking. In Rust, when you do that, you pass ownership of the referenced object. The sender can no longer access it, and there's no possibility of a race condition. A successor to Go with Rust ownership semantics has real potential. Go has a simpler type system than Rust, and seems to be well matched to the back-end parts of web-based systems.
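A minimal sketch (hypothetical code, not from the thread) of what that looks like in Rust: sending a value over a channel moves ownership, so touching it afterwards is a compile error.

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();
        let data = vec![1, 2, 3];

        tx.send(data).unwrap();     // ownership of `data` moves into the channel
        // println!("{:?}", data);  // compile error: use of moved value `data`

        let handle = thread::spawn(move || {
            let received = rx.recv().unwrap(); // the receiver now owns the Vec
            println!("{:?}", received);
        });
        handle.join().unwrap();
    }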
When Go was first announced in 2009, I thought "this is a nice idea, but I really wish it had generics and non-nullable references". Then Rust was announced a few months later, and I thought "wow, this is what Go should have been."
However, in the intervening time, Rust has diverged further from Go. It no longer has green threads, nor does it emphasize message-passing-based concurrency. In addition, Go was ready to use much sooner; by having a much simpler type system and a less ambitious concurrency story, Go has been usable, and building up a library ecosystem, for many years now, while Rust is only just now hitting stability to the point where you can write code and not have it break unless you constantly keep it up to date with the latest nightly.
So, I think that they are both valid strategies. There has been lots of good code written in languages which don't provide as many safety guarantees or as strong a type system as Rust has; it's not an absolute requirement for all code to be written with such guarantees.
By staying simpler and not trying to solve as many problems, Go has been ready for production use for quite a lot longer, and I think that there are many things for which it is still simpler and easier to use.
That said, I do prefer Rust now that it's actually hitting a stable and usable state. I wish it had been ready sooner, but that's the price you pay for the years of iteration it took to make these stricter semantics actually usable in practice.
It has never been true, so there's nothing to forget.
The number of full-time Rust people paid by Mozilla is roughly the same as the number of Go people paid by Google.
Mozilla is a wealthy company, and while Google is even wealthier, that doesn't mean it spends all its money on Go. Both V8 and Dart are staffed with more people, and Dart is not taking off the way Go is.
People are trying to rationalize away the explosive popularity of Go with all sorts of explanations, "it's all because it's Google" being one of them.
The simple explanation is: people use Go because it's a good language. That's the big factor in Go's success.
Go fits a niche and is good enough, for now. But Go as a language is seriously flawed, and its flaws will become more and more obvious as the language gets more popular. People can certainly understand that Go made the choice of "minimalism". But minimalism doesn't mean the language shouldn't have features people really need. The infamous "You don't need that in Go" line will not fly that much longer, especially given how the Go team and community try to answer Go's shortcomings. The biggest insult to intelligence is that "go generate" feature. It's just embarrassing.
Go for me is a missed opportunity. I still like it, but I keep a close eye on alternatives that don't have their core library riddled with interface{}-style types.
Because there is a need for something simpler and safer than C, that is easily accessible to "script kiddies", that is fast, that has easy concurrency, that compiles to machine code, that has type safety, and that isn't tied to Windows, in order for people to produce executables that can easily be distributed, or concurrent servers.
Whatever language succeeds in that niche will be the biggest language of the next 10/20 years. Mark my words.
Actually, there is no need for that. Type safety has nothing to do with software being more reliable or more secure. There is, however, a need for language creators who understand that a large standard library, static binaries, and cross compilation are super important in a fast-moving world where many different operating systems live together on many different architectures.
> Type safety has nothing to do with software being more reliable or more secure.
It is true that type safety doesn't necessarily mean that software is more reliable and secure (it depends on the type system, the semantics of the language, and a host of other things), but saying it has "nothing" to do with it is incorrect. In particular, preventing undefined behavior certainly has an effect on security, whether or not it's through a static type system. As pcwalton has mentioned on several occasions, something like 50% of critical CVEs in Firefox exploited attack vectors that would have been prevented by Rust's type system.
This is where I disagree and believe that such a way of thinking about bugs cannot take us anywhere.
Bugs have nothing to do with formal properties of the language. They are the result of people's thinking process. And a type system may just as well be the thing someone was dealing with while making his next CVE-worthy mistake.
Factually, Rust's type system guarantees, if consistently upheld, would have eliminated those bugs. Empirically, we know that in typesafe languages like Java, these types of vulnerabilities are far less frequent. These are facts: you cannot "disagree" with them.
Even if we assume that people write buggy code at about the same rate in C++ and Rust, as long as the vast majority of Rust code is not inside an unsafe block, Rust still reaps this benefit, because bugs in safe Rust shouldn't cause memory unsafety.
Finally, yes, bugs do have something to do with formal properties of a language. Really. They do. You can prove that software matches its specification, or fulfills certain properties. People have. SeL4 is a thing. Relying on a static type checker means you only need to trust the static type checker to be correct in order to enforce the appropriate properties, not all code written in the language, and "the Rust type checker" is a much smaller kernel to trust, and will be more widely exercised and heavily scrutinized, than "all other Rust code ever written." This is literally the entire point of static type systems.
I apologize, but this is going to be my last response to you, because I don't think we can have a productive conversation about this.
> Factually, Rust's type system guarantees, if consistently upheld, would have eliminated those bugs.
You are missing my point. It doesn't guarantee to not cause other bugs. And saying that something eliminates some type of bugs is completely misleading, because this is not what matters. It matters to not cause other bugs as well as eliminate some bugs. Which is impossible to guarantee with formal methods. And which is why current academic approach to languages cannot bring us anything, until the whole system changes.
Psychological approach to programming language design is what has to happen. Anything else is broken.
It's impossible to eliminate all bugs, no matter what you do. Psychological design will eliminate some bugs but not all. Static type system will eliminate some bugs, but not all. The best you can do is try to eliminate as many bugs as possible, by doing multiple things that give the best returns, given the bugs you're targeting.
> Actually, there is no need for that. Type safety has nothing to do with software being more reliable or more secure.
If you said this to me in an interview I would politely end the interview there and you would not be hired. A high school student with no programming knowledge at least can be taught. If you're confident in what you just said, you're both ignorant and unteachable.
I really can think of no more scathing indictment of Go than this is the thinking of a typical Go user.
Let's make a small amendment to your simple statement.
>The simple explanation is: people use Javascript because it's a good language. That's the big factor in Javascript's success.
>The simple explanation is: people use PHP because it's a good language. That's the big factor in PHP's success.
Doesn't feel right, does it?
Google and corporate sponsors are incredibly important in building the ecosystem of a programming language. Traction may be the single most important characteristic of language success, and one can certainly buy traction with money and influence.
But it's true. Google's association is a big factor in its success. It's not the only reason nor likely the biggest now but it was important at least at the beginning. The fact is that every week we hear about new programming languages. Almost none of them end up being widely used.
Having someone like Google, Mozilla, Apple, Microsoft etc behind it makes a big difference.
It's not so much that Google promotes Go. It's that Google uses Go internally for production code. Thus, the compiler, and the library modules used for web-type services, are thoroughly exercised, and used by in-house people who can insist that bugs be fixed quickly. It took Go far fewer years to achieve stable libraries than, say, Python.
Not to take anything away from your comment on this being a language that has been brewing for years, but people really should give Robert Griesemer more credit for his influence on the language design.
I know he's the least publicly visible of the three, but IIRC godoc and gofmt are mostly his babies. After getting used to those, I think I miss them more in other languages than I miss the concurrency.
Rob Pike is explicit about early Go development; everything that went into Go was agreed upon by Robert, Ken, and Rob. If all three didn't agree it didn't go in.
But it's not as if they threw a massive amount of manpower or budget at it. I think Go succeeded more by association (to Google) and the fact that it hits a sweet spot for many people - it produces native code, but does not have the complexity of C++ or the unsafety of C.
I have written a fair bit of Go code. But I will switch to Rust without a second thought once the ecosystem takes off. The lack of simple things like generics or algebraic data types in Go is jarring.
I think that in hindsight, this may be true, but it took a _very_ long time to figure it out. It took _years_ of iteration before we got to where we are today, and the previous work in this space, which used regions but not borrowing, didn't go further because of the ergonomic issues.
In retrospect, hard things often seem obvious, but that's because the work was already done!
Exactly! I came to Rust thinking I knew all I needed due to trying to solve the same things by hand in the CLR. Like I figured I had a pretty good idea what I needed to allow F# to have fast memory management.
And while the core idea is correct, holy shit there's so many details that the Rust team has had to deal with and iterate over and reduce until it's nice. I suppose this is why the CLR and everyone just punt on the whole issue.
Cyclone definitely had borrowing. See section 3.1 of this paper: https://www.cs.umd.edu/~mwh/papers/ismm.pdf. Generally speaking, every attempt to use substructural types for resource management in a practical language has allowed for temporarily treating a restricted resource as unrestricted.
I had an undergrad senior thesis in 2006 that futzed around with regions and borrowing, and I know I read some papers on their interactions, but I'm not immediately finding anything by chasing references.
edit: Of course, I don't mean to diminish the accomplishments of Rust, which has turned these moving parts into a working language. In particular, Rust makes it easy to ignore these issues most of the time.
To put it in startup terms: other people working on your idea only validates it, it's not a threat. Execution is ultimately what matters. I prefer letting academia blaze the intellectual trail, and then have a number of people implement it after. :)
"Should have done" is a silly phrase to use here -- Rust is attempting to advance the state of the art, Go was attempting to use only very-well-understood ideas to make a very-well-understood, simple language.
Perhaps some day we will see a successor language that follows the philosophy of Go using the new stuff we understand thanks to Rust.
> Go was attempting to use only very-well-understood ideas to make a very-well-understood, simple language.
Well-understood like code generators (go generate)? Were generics not well-understood enough, so they decided to go with something that's well-understood to be a poor solution to the problem?
Actually, now that you mention it, I think generics are poorly understood in general. C++, Java, and C# have major differences in their approach to generics, but they have similar syntax so most people gloss over the differences. I think doing generics well requires incorporating higher order types, higher kinded types, and other concepts which don't exist at all in the wild west of C++ templates. This clumsy implementation leads to monstrosities like std::allocator_traits<std::allocator<T>>::rebind_traits<U>, which would be completely unnecessary if C++ had HKT. Java and C# make other trade-offs but also lack HKT and suffer for it.
>I think generics are poorly understood in general
Perhaps by programmers (and the designers/maintainers of certain mainstream languages...), but there are some very strong models out there; we've had well-behaved parametric polymorphism à la Hindley-Milner for decades and Haskell's typeclasses are in my opinion a very well-thought-out approach to ad-hoc polymorphism (and which the more ambitious C++0x version of Concepts in some ways resembled).
C++'s templates are like the untyped lambda calculus of the type-level world; they're perfectly capable of doing most anything you'd want from them, but using them effectively requires a lot of boilerplate and discipline (and as for std::allocator_traits et al., I think that's more an issue of library design than the design of templates).
I've never found subtype polymorphism particularly convincing beyond the case where independent single classes implement meaningful interfaces, and that's basically like a member function–oriented version of typeclasses.
Now, all this said, I haven't actually used Go, so I don't know how well the rest of the language would interact with these ML/Haskell-style systems, but I imagine at least basic parametric polymorphism would be relatively unobtrusive.
Concerning std::allocator_traits, it reflects a deeper limitation of the language rather than just a library design flaw. If you had HKT, you could pass std::allocator to a template. You can't do that, you can only pass std::allocator<T> for some concrete T. Equivalently, in Haskell, you can pass IO as a type parameter, so you don't see the same kludges.
I definitely agree with the sentiment about subtype polymorphism.
>I haven't investigated it but variadic template template arguments might help there, though.
They do:
    #include <iostream>
    #include <vector>

    // Build a container of 0..n-1, with the container type passed as a
    // variadic template template parameter.
    template <template <typename...> class Container, typename T>
    Container<T> fin(T n) {
        Container<T> set;
        for (T i = 0; i < n; ++i)
            set.push_back(i);
        return set;
    }

    int main() {
        for (int i : fin<std::vector>(10))
            std::cout << i << std::endl;
    }
You know, as Go gets more popular, people will take a critical look at the language and its shortcomings. That's inevitable.
If the only answer to the issue they raise is "you don't need that in Go", "Use go generate" or "Go isn't for you", Go is going to get a lot of bad rap that will stop the adoption of the language.
If the Go team thinks the language doesn't need generics, they are either out of touch (their right, they don't owe us anything) or arrogant. Either way, Go will not be the successful language it could have been.
One should never have to resort to interface{} to code anything in Go. Developers want type safety, not to write type assertions everywhere. The fact that 1/ Go has generics (maps/slices/arrays) but you can't create your own and 2/ the core lib is full of these interface{} functions proves the language has a big problem.
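For contrast, this is the kind of type-checked generic code people are asking for, sketched in Rust rather than Go (purely illustrative; the function name `max_of` is made up):

    // A generic `max` over any ordered type; the compiler checks every call
    // site, with no casts or runtime type assertions needed.
    fn max_of<T: PartialOrd>(a: T, b: T) -> T {
        if a > b { a } else { b }
    }

    fn main() {
        println!("{}", max_of(3, 7));          // works for integers
        println!("{}", max_of("abc", "abd"));  // and for strings
        // max_of(1, "two");                   // rejected at compile time
    }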
If Rust had been missing an important feature like generics, the Servo people would have been telling the compiler people "you're being silly, of course we need generics!". And because Servo is the official test-case project for Rust, the compiler people would have listened.
Go has nothing like this. There's no formal effort to ensure practicality.
It's not about whether someone is using the language, but about whether the language designers decide to listen to a user above themselves.
It's hard to design a language. Everybody is always trying to give their conflicting opinions. Many language designers react by turning inwards and ignoring everybody else.
Generics are maybe poorly understood by the average programmer, but what I meant with my rhetorical question was that there is a lot of literature and there are a lot of people who understand generics very well. Maybe the creators of Go don't understand generics, but then again, maybe if you don't understand generics you shouldn't be developing a high-profile statically-typed programming language.
The Go team might not understand generics, but that doesn't mean generics aren't well-understood.
This is my core problem with Go: they clearly have no idea what a decent programming language would look like and they're getting undeserved attention because of their previous (admittedly impressive) work.
I don't think the Go team has any problem understanding anything. I think they have different subjective goals for a programming language than many critics, which leads to a feature set which includes some things critics would leave out and excludes some things critics would prefer (like generics.)
I think Go has gotten fairly decent traction because there are roles for which real programmers find it reasonably well suited compared to existing alternatives, and I think that, not the previous work of the creators, is what drives attention to it.
OTOH, I think Go's period in the sun is going to be a lot shorter than, e.g., Rust's unless we see a Go 2.x that addresses some of the "not in the Go 1.x timeline" things with Go fairly soon.
I do agree that the problem relates to language goals, but their poor choices of goals to pursue do demonstrate a lack of understanding. Their stated goals are frequently solutions that don't really address good use cases. Take for example fast compilation time: the stated use case for that goal was compiling Google's massive C++ code base, which took too long to reasonably compile on a dev machine because of template expansions. Refusing to add generics is an idiotic solution to this problem:
1. Generics aren't templates, and are in fact a solution to this problem. They don't expand at compile time.
2. The long compilation times are indicative of deeper problems: an over complicated code base that should be refactored down, and lack of modularity which would allow partial compilation (only compile the changed modules).
3. Removing generics creates new problems with increased bugs and duplicated code. It's trading compile-time for development time, which has historically rarely been a good trade.
4. Even if this is a legitimate concern for Google (which I still don't think it is), it's not a legitimate concern for 99.9% of projects out there. How many projects actually have millions of lines of code that can't be factored down and modularized? If this is really Go's target then they should admit it's a highly specialized language and not a general-purpose programming language, and that most people shouldn't be using it.
Calling these goals "subjective" as if this justifies them is just a smarter-sounding way of saying "That's just, like, your opinion, man." Opinions can be wrong, and the opinions of the Go team are wrong.
Code generators like RATFOR, lex, yacc, and the C preprocessor? I think it’s fair to say that those are well understood. Generics like Ada Generics, C++ templates, and Java Generics? I think it’s fair to say that we are only beginning to understand the benefits and drawbacks of that approach to code generation, which is why every time I recompile the Java application I’m currently working on, I get erroneous warnings telling me that my varargs are causing heap pollution in my generic static methods. (Which I can suppress, but instead I’m planning to delete those methods.)
Even OCaml, whose approach to parametric polymorphism is worlds simpler than any of the monstrosities mentioned above, just changed its approach by adding GADTs, which I still don’t understand fully despite the best efforts of generous HN commenters.
So I think that, given that Golang/Issue9 decided not to include inheritance, linear types, exceptions, or overloading (even of operators!), in large part because they aren’t well-understood, it’s entirely unsurprising that it decided not to include generics, either.
> Code generators like RATFOR, lex, yacc, and the C preprocessor? I think it’s fair to say that those are well understood.
Yes, that is exactly what I said in the post you are responding to.
> I think it’s fair to say that we are only beginning to understand the benefits and drawbacks of that approach to code generation, which is why every time I recompile the Java application I’m currently working on, I get erroneous warnings telling my that my varargs are causing heap pollution in my generic static methods. (Which I can suppress, but instead I’m planning to delete those methods.)
Just because you don't understand something, doesn't mean that nobody does?
I guess you didn't understand the problem I was describing. I wasn't saying that the problem is that I don't understand parametric polymorphism, or for that matter its realization via code generation, although I’m sure e.g. Philip Wadler understands those things a whole heck of a lot better than I do. The problem was that the Java language designers didn't understand generics, and accidentally implemented varargs in 1.5 in a way that produced a completely unnecessary and avoidable collision with the parametric polymorphism system (designed by Wadler) that they also introduced in 1.5. And there are a variety of similar problems in various parametric-polymorphism systems, which leads me to think that the problem is not merely that some loser at Sun didn't understand these things (and didn't bother to ask Wadler, who surely could have warned him off the particular stupidity I mentioned) but rather that there is a general lack of understanding of parametric polymorphism.
> Go was attempting to use only very-well-understood ideas to make a very-well-understood, simple language.
Many of the ideas in go were not well understood and had only been demonstrated in research languages prior to go. Its concurrency model and associated primitives, and the automatic interface system were in roughly the same state then as Rust's new ideas are now.
That said the ideas in Rust were not as well understood 5 years ago and I doubt that the go creators could have incorporated them well at the time. It took Rust 5 years of experimentation to come to what it is now (and its still got a number of additions to go!).
> Many of the ideas in go were not well understood and had only been demonstrated in research languages prior to go. Its concurrency model and associated primitives, and the automatic interface system were in roughly the same state then as Rust's new ideas are now.
To Go's authors, the CSP model was very well understood through three different implementations: Alef, Limbo, and Plan 9's libthread.
First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
> Go code often passes references over channels, which results in shared memory between two threads with no locking
Having a complex type system would cut into compile times (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).
As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.
Also, I would think it's pretty obvious that once you've sent something over a channel you shouldn't try and write to it anymore[1]. Do you have an example of the code you're referring to?
[0] well, technically "at most once", since some files can be skipped entirely.
[1] I'd need to think about this, but I don't think it'd be difficult to detect this statically[2] at compile-time and enforce that stack-allocated rvalues into channels are never used again in the same scope. It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.
[2] Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to go along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.
> (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).
Same in Rust. Thanks to the module system, the Rust compiler never rereads a file more than once.
> [1] I'd need to think about this, but I don't think it'd be difficult to detect this statically[2] at compile-time and enforce that stack-allocated rvalues into channels are never used again in the same scope. It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.
That trivially fails to solve the problem, due to aliasing.
> [2] Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to go along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.
Rust's grammar is also context-free, as far as I know.
Your proposed static analysis is not reliable. The "more complex" type system in Rust exists precisely so that we can do more reliable static analysis.
Technically, byte-strings (and nested comments) mean the lexer is not regular, but the grammar is still context-free. But anyway, parsing is not typically the bottle-neck these days.
> Thanks to the module system, the Rust compiler never rereads a file more than once.
Are you sure about this? When you use a generic function accepting and/or returning values of type T, I guess the compiler has to generate a version of the function for each instance of T. But the compiler cannot know all the possible use of the generic function without having walked through the whole code first, which implies at least two passes. How does it work?
Ok, you're right, but we're playing with words here :) I understand the file containing the source code of the generic function is read only once, but I guess the AST of the generic function is processed many times, at least once for each instantiation of the function? And I'd venture in guessing that the biggest cost is in processing the AST and compiling it, not in reading/parsing the source?
Generating specialized code from a generic AST is in no way analogous to the exponential-time explosion that is the C++ header system, which is what Go is referencing when it says that it has been designed to read each source file only once. All languages with proper module systems have this property (which is to say, basically all languages that aren't C or C++).
Yes, designing a proper module system and getting rid of the header system is the best way to improve compilation time. But it was not my point. There are other factors impacting compilation time. Look at Scala for example: it has a proper module system, but compilation is still relatively slow, even if better than C++. This is the reason why I'm interested in learning how Rust compiles generic functions, and what is the impact on the compilation time (because the compiler has to generate a version of the function for each possible T), the binary size, and the pressure on the CPU cache (because having many versions of the function makes harder to keep all of them in the cache at the same time).
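To make the monomorphization question concrete, here is a hedged sketch (illustrative Rust, not from the thread) of what happens: the compiler emits one specialized copy of a generic function per concrete type it is used with, which is where the codegen-time and binary-size costs come from.

    // One generic definition...
    fn double<T: std::ops::Add<Output = T> + Copy>(x: T) -> T {
        x + x
    }

    fn main() {
        let a = double(2i32);   // instantiated as double::<i32>
        let b = double(2.0f64); // instantiated as double::<f64>
        println!("{} {}", a, b);
    }

    // Conceptually, rustc emits two specialized functions (one per T used),
    // much like C++ templates, even though the source file is parsed only once.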
The Rust compiler actually has built-in diagnostics to let you profile the time that each phase of compilation takes. On mobile, but I believe that it's `rustc -Z time-passes`, and whenever I use it on my own code 95% of the time consists of LLVM performing codegen. Nearly every other compiler pass takes approximately 0% of the compilation time (coherence checking being the notable exception, for reasons I have yet to discern).
It's not playing with words: the parser only runs once, and processing/reprocessing the AST is much faster than interpreting the original source multiple times (e.g. it can be indexed cheaply).
> > (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).
> Same in Rust. Thanks to the module system, the Rust compiler never rereads a file more than once.
As I said right at the beginning of my post, I'm not trying to compare Rust and Go directly, because I don't think that's meaningful. I'm explaining why these particular features would be difficult to incorporate into Go. Note that I never said that Rust reads a file more than once, or that Rust's grammar is not context-free. In fact, the word "Rust" doesn't appear anywhere in my comment at all except in that very first paragraph.
I love talking about PLT and would otherwise be interested in having a discussion about static analysis and hearing why you think it would not solve the problem, but I have to say, it's both frustrating and discouraging to post an in-depth response and then get downvoted twice, with the only reply being one which very clearly ignores the very first line of my entire response.
Bystander here, neither upvoted nor downvoted you, but I think one reason for the skepticism is that people have heard the "It shouldn't be hard to implement X in language Y..." refrain many times before, and until somebody actually does implement X in Y, it means nothing. "It shouldn't be hard to make Python at least as fast as V8." "It shouldn't be hard to implement static typing on top of Python 3 function annotations." "It shouldn't be hard to add lambdas to Java [well, they finally did with Java 8...which we still can't use on Android]".
If it's actually not that hard, go implement a checker for Go that does check that statically-allocated rvalues are never used again, and post it here. You'll probably shoot to the top of Hacker News, it'd be a nice open-source project for the resume, and it'd provide a very useful tool for the Go community.
> I would otherwise be interested in having a discussion about static analysis and hearing why you think it would not solve the problem
Stack allocation isn't in the semantics of Go, so that's a pretty weird thing to use a basis of a static analysis. It's also not sound because of interfaces or closures. You would want something more like "fully by-value data with no pointers in it, no interfaces, no closures".
You outlined why they shouldn't be compared because they are different and outlined differences. The response (from one of the rust devs, if I'm correct) explained how some of your purported differences weren't actually different. The response wasn't ignoring your first line, it was explaining how portions of your evidence backing up that first line were factually incorrect.
> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
It's perfectly reasonable to compare any two programming languages, especially where their use cases overlap.
> As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.
> They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
But everyone keeps trying to, though; that's kind of interesting.
The reason, I am guessing, is that Go billed itself as a "systems programming language". It did; go find the original announcement video (or was it a press release?). That is what it said.
Later, after the "replacing C" idea didn't quite materialize, they clarified that they meant "systems programming language" to mean something else entirely. Now, if I or some other anonymous user said that, OK, fine, people get confused, don't know enough, etc. But I don't believe Rob Pike doesn't know what a systems programming language is.
Anyway, I am not saying one way or another but just illustrating that everyone doing the comparison is not completely insane.
Now, the way I see Go, it's more like a Python++ or Ruby++. All those nice concise languages, + concurrency, + speed, + some type safety, + easy static binary deployment. So I agree with you that Rust and Go should not be compared.
My perspective is that Go seems to be trying to aim at network aware services where Java might have been used before.
Internally what I've seen @ Google is it seems to be used here and there for services where Google teams have typically used C++ (but other companies might use Java.)
And yes, some places where Python has been used, Go makes a decent replacement.
Not my cup of tea, but I can see its niche. It's not the same niche as Rust. At least not right now.
> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
Almost all languages are different and thus have different design goals. That doesn't mean it should be off-limits to compare them.
> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases.
Then why does everyone insist on comparing Rust and C++? Objectively Rust is much closer to Go than to C++.
That's just dumb. Rust and Go were, in fact, contemporarily designed to similar constraints and with similar goals. That they made significantly different design choices is an interesting point that should be discussed and not swept under the rug just to win internet points.
At the language level, Rust is substantially more similar to C++ than it is to Go. C++ and Rust both have many properties (lack of GC, pervasive stack allocation [even for closures], move semantics, overhead-free C FFI compatibility) that many other languages lack, and the Rust developers actively work to match C++ on features. None of this is true for Go; Rust's similarities with Go are shared by many other languages as well, many of which are much more widely used than either (hence would probably represent a more informative comparison; e.g. I think posts contrasting Rust and Java would be quite useful, but I have seen very few of them). As such, Go and Rust comparisons tend not to be very illuminating.
> At the language level, Rust is substantially more similar to C++ than it is to Go. C++ and Rust both have many properties (lack of GC, pervasive stack allocation [even for closures], move semantics, overhead-free C FFI compatibility) that many other languages lack, and the Rust developers actively work to match C++ on features.
I think this is absolutely true, for what it's worth.
The Rust on Rosetta code is exceedingly old. There's a community project to update the examples, but they're waiting until 1.0 to move upstream, so as not to cause them too much churn.
Rust is very similar to modern C++, just without backwards compatibility considerations, and with the (fantastic, IMO) additions of compiler-enforced ownership rules and generics instead of templates, which makes certain type checking work better.
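To illustrate the "certain type checking works better" point, a small hedged sketch: a Rust generic is checked against its declared trait bounds once, at the definition, rather than erroring separately at each instantiation the way a template can. (Illustrative code; `show_twice` is made up.)

    use std::fmt::Display;

    // The bound `T: Display` is part of the signature; the body is checked
    // once, here, not separately at every call site.
    fn show_twice<T: Display>(value: T) {
        println!("{} {}", value, value);
    }

    fn main() {
        show_twice(42);
        show_twice("hello");
        // show_twice(vec![1, 2, 3]); // error at the call: Vec<i32> doesn't implement Display
    }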
What objective criteria are you using by which Rust is much closer to Go? Go being garbage collected and Rust/C++ not being seems like a much larger gap than anything between Rust and C++.
A managed heap; emphasis on the use of functional constructs; rejection of lots of previously-thought-to-be-essential OOP features; built-in concurrency based on a rejection of the traditional primitives; general de-emphasis of metaprogramming constructs like templates and macros; broad design emphasis on preventing common programming mistakes.
You're saying, I think, that the Go implementation and runtime look more similar to Java's, and that C and Rust are clearly in the same family (in the sense of being only loosely coupled to a runtime environment). That's the way a language implementer might look at the question, I guess. It's certainly not the only one.
I'm not sure what you mean by this, but when I hear "managed heap" I think of heap memory managed by a garbage collector. This is not a feature of Rust.
> emphasis on the use of functional constructs
Rob Pike wrote (unavoidably, because of the lack of generics) crippled versions of map and reduce for Go and declared that the almightly for loop was superior. I don't think an emphasis on the use of functional constructs follows from this.
> rejection of lots of previously-thought-to-be-essential OOP features
Yep, this is an important similarity between Go and Rust. Rust brings traits, which are (extremely) similar to Haskell's typeclasses, into the mix as well.
> built-in concurrency based on a rejection of the traditional primitives
Sure, but the differences between the languages is crucial here: Go builds safe(ish) concurrency primitives into the language; Rust does not, but provides language features powerful enough to build memory-safe concurrency primitives in the standard library. And, perhaps surprisingly, mutexen and the like are actually encouraged precisely because Rust can make them safer.
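A minimal sketch (hypothetical code, not from the article) of what "make them safer" means in practice: the data lives inside the Mutex, so the only way to reach it is through the lock guard.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // The counter is stored *inside* the Mutex; forgetting to lock is a
        // compile error rather than a latent race.
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..4).map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        }).collect();

        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4);
    }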
> general de-emphasis of metaprogramming constructs like templates and macros
I don't agree that Rust de-emphasizes these. Generic programming is strongly encouraged, and indeed Rust's generics are implemented very similarly to C++ templates. Rust also strongly encourages the use of macros for cleaning up repetitive blocks of code, both inside of and outside of function bodies. And very powerful metaprogramming is on the way (already available in nightlies) in the form of compiler plugins, which allow arbitrary code execution at compile time.
> broad design emphasis on preventing common programming mistakes.
I think this is a feature, or at least an intended one, of every higher-level programming language :-)
On the rejection-of-OOP thing; I agree that seems like the biggest similarity between Go and Rust, but even then, the major replacements for those features are completely different – Rust's traits are more like typeclasses and C++ templates (to some extent) than they are to Go's interfaces (though "trait objects" are like interfaces, but used less often). It also seems like Rust will eventually add some form of more traditional OOP features like inheritance, because the servo project would like such things to implement the DOM (and I think others are also interested for different reasons), which I doubt Go will ever do.
> It also seems like Rust will eventually add some form of more traditional OOP features like inheritance, because the servo project would like such things to implement the DOM
There's been a great community and core team effort to design small, orthogonal language features (or extensions to existing features) that can be used to regain the performance + code reuse benefits one gets from using inheritance, without all the associated warts. The DOM/Servo problem is a tough one and it's going to be very interesting to see if Rust can solve it without resorting to the blunt instrument of inheritance.
Yeah I've followed the evolution of that discussion with interest (or I did until a few months ago, so I'm likely out of date), and came to the, perhaps incorrect, conclusions that it is likely there will be some solution added at some point, and it is likely to share some trade-offs with inheritance.
> Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
Oh give this lame argument a rest already. They're both supposedly general-purpose programming languages. Rust shoots for a little lower level than Go, but they can certainly be compared, and the comparisons are valid.
> Having a complex type system would cut into compile times (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).
I'm not sure how this is a defense of Go: yes, that's a primary design goal; it's also a terrible design goal. Trading an adequate type system for a short-term gain in compile time is an amateurish mistake.
Compilation time can be mitigated on even the largest projects by only rebuilding changed modules. If you're getting to the point that this isn't working, then maybe it's time to start working on reducing the size of your code.
I've worked on projects that were 500,000 lines of code and compile time was sometimes an issue, but not nearly as large an issue as bugs which would have been caught by an adequate type system (large C#/Java codebases pre-generics).
> As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.
We've tried this approach before, many, many times, and the result has been high-profile bugs.
The fact is, these "idioms and best practices" will not be followed perfectly on projects of any reasonable size if they are not enforced by code. Why would you not enforce them with code? And if you're designing a language, why not design the language in such a way as to allow code enforcing best practices? Like, a type system?
> Also, I would think it's pretty obvious that once you've sent something over a channel you shouldn't try and write to it anymore[1].
Yes, just like it's pretty obvious that when you free memory you shouldn't write to it any more. We've never had any issues with that, now have we?
> It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.
I didn't know about `go vet` and I wish I didn't.
So, to avoid a second pass over the code during compilation, you add a second pass over the code during `go vet` that has less capabilities. Good thinking. I'll add that onto the list of other stupid ideas that have been tried decades ago and didn't work but are finding new life in Go.
> Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to go along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.
Yes, adding syntax for static type checking will definitely make static analysis like type checking harder. Are you fucking kidding me?
> The fact is, these "idioms and best practices" will not be followed perfectly on projects of any reasonable size if they are not enforced by code. Why would you not enforce them with code?
Compiler shall not be an obstacle to a man. We tried enforcing things before, many times. Things got abandoned.
Sigh. We build machines to automate the repetitive, to eliminate the daily drudgery and to repeat steps with perfection (that we would repeat with perfection ourselves if only we were as focused as a machine). So why do we keep finding ourselves arguing that a lazy compiler, which offloads the work of a machine onto a dev team, is an acceptable compromise?
Meta-comment: I believe the difference in opinion here (which seems to recur, over and over, and has for decades) is because the job title of "software engineer" actually encompasses many different job duties. For some engineers, their job is to "make it work"; they do not care about the thousand cases where their code is buggy, they care about the one case where it solves a customer's problem that couldn't previously be solved. For other engineers, their job is to "make it work right"; they do not care about getting the software to work in the first place (which, at their organization, was probably solved years ago by someone who's now cashed out and sitting on a beach), they care about fixing all the cases where it doesn't work right, where the accumulated complexity of customer demands has led to bugs. The first is in charge of taking the software from zero to one; the second is in charge of taking the software from one to infinity.
For the former group, error checking just gets in their way. Their job is not to make the software perfect, it's only to make it satisfy one person's need, to replace something that previously wasn't computerized with something that was. Oftentimes, it's not even clear what that "something" is - it's pointless to write something that perfectly conforms to the spec if the spec is wrong. So they like languages like Python, Lisp, Ruby, Smalltalk, things that are highly dynamic and let you explore a design space quickly without getting in your way. These languages give you tools to do things; they don't give you tools to prevent you from doing things.
The second group works in larger teams, with larger requirements and greater complexity, and a significant part of their job description is dealing with bugs. If a significant part of the job description is dealing with bugs, it makes sense to use machines to automate checking for them. And so they like languages like Rust, C++, Haskell, Ocaml, occasionally Go or Java.
The two groups do very little work in common (indeed, most of the time they can't stand to work in the opposing culture), but they come together on programming message boards, which don't distinguish between the two roles, and hence we get endless debates.
My point was: tools that prevent you from doing things should not do that without explicit permission. Because thinking is hard and any interruption by a tool or a compiler will impose unnecessary cognitive load and will make it even harder, which may lead to a logical mistake. It is much better to deal with the compiler after all the thinking is done, not during.
I'm pretty sure you've never used a language with a good type system then.
You describe a system where you have to keep everything a program is doing that's relevant in your head at once, and when you're forced out of that state, it's catastrophic. You seem to be assuming that's the only way to get productive work done while programming. I happen to know it's not.
If a language has a sufficiently good type system, it's possible to use the compiler as a mental force multiplier. You no longer need to track everything in your head. You just keep track of minimal local concerns and write a first pass. The compiler tells you how that fails to work with surrounding subsystems, and you examine each point of interaction and make it work. There is no time when you need the entire system in your head. The compiler keeps track of the system as a whole, ensuring that each individual part fits together correctly. The end result is confidence in your changes without having to understand everything at once.
So why cram everything into your brain at once? Human brains are notoriously fallible. The more work you outsource to the compiler, the less work your brain has to do and the more effectively that work gets done.
Yes, but tools that stop you from doing things you would prefer not to have done in the first place (but still grant you permission to override when desired) would be a fairer assessment of what a strict compiler is.
We all agree that a null dereference is a bad thing at runtime. I see no advantage for me as a programmer to be allowed to introduce null dereferences into my code as a side effect of "getting things to work" if then when the code runs it doesn't work right. This increases my cognitive load as a programmer, it does not decrease it.
I would argue that you don't think about the compiler any more when using a language like Haskell than you do when using Python. But you do get more assurances about your program after ghc produces a binary than after Python has finished creating a .pyc -- and that is a win for the programmer.
Agreed. But every production language that I'm aware of has an out that allows you to escape its type system, with the exception of languages whose type systems are intended to uphold strong security properties and verification languages that feature decidable logics. I can't remember for sure, but I think even Coq--which is eminently not a language designed for everyday programming--may diverge if you explicitly opt out (though I could be wrong about that).
The questions, to my mind, are
1. How easy it is to opt out?
2. How often do you have to opt out?
3. How easy it is to write well-typed expressions?
4. What guarantees does a program being well-typed provide?
For example, you almost never have to opt out of the type system in a dynamic language, but the static type system is very basic. In a language like Rust, you opt out semi-frequently (unsafe isn't common but it's certainly used more often than, say, JNI), and it can be hard to type some valid programs, but opting out is simple and the type system provides very strong guarantees. In a language like C, you never have to opt out of the type system, and the annotation burden for a well-typed program is minimal, but the type system is unsound--being well-typed guarantees essentially nothing in C.
All languages fall somewhere along this spectrum, including Go. It's just a question of what tradeoffs you're willing to make.
I think it's worth clarifying that Rust's `unsafe` doesn't opt-out of the core type system per se, it allows the use of a few additional features that the compiler doesn't/can't check. I think this distinction is important because, as you say, `unsafe` isn't very uncommon and so it is nice that one still benefits from the main guarantees of Rust by default inside `unsafe`. :)
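A small sketch of that distinction (illustrative, not from the thread): inside an `unsafe` block you can, for example, dereference a raw pointer, but type checking and borrow checking still apply to everything else in the block.

    fn main() {
        let x: i32 = 42;
        let p: *const i32 = &x;

        // `unsafe` unlocks a few extra operations (here, dereferencing a raw
        // pointer); it does not turn off type checking or borrow checking for
        // the rest of the code in the block.
        let y = unsafe { *p };
        assert_eq!(y, 42);
    }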
Sophisticated type systems have not been abandoned by any stretch of the imagination.
We tried static code generators before, as well as linters and static code analysis on untyped code, and those have pretty well been proven to be ineffective. All of which are supposedly "new innovations" in Go. So if you want to defend Go that's not an approach you can really take.
My understanding is that most languages are context free in their syntax, but that type checking and namespace "stuff" are (almost?) always context-sensitive.
On a somewhat interesting note: transferable objects work similarly in web workers. There have been complaints, of course, that this makes it impossible to take advantage of shared memory in cases where it provides clear performance benefits.
Actually, it's possible to share a &mut into a scoped thread too, and perform mutation through it. It's safe because the &mut only ever exists in one place at a time.
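A minimal sketch of that, written against current Rust's std::thread::scope (at the time of this thread it required a library-provided scoped-threads API, so treat the exact API as an assumption): the &mut is handed to the child thread, and the parent can't touch the data until the scope ends.

    use std::thread;

    fn main() {
        let mut data = vec![1, 2, 3];

        thread::scope(|s| {
            // The closure captures `data` by &mut; while the scope is alive the
            // parent thread cannot also use it, so the mutation is race-free.
            s.spawn(|| {
                data.push(4);
            });
        });

        assert_eq!(data, vec![1, 2, 3, 4]);
    }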
Rust has definitely been a pleasure to work with. I have been experimenting with a Future & Stream [1] abstraction in Rust that would allow easily describing complex concurrent and async operations and to allow easy composition, not unlike ES 6 promises.
The interesting thing is that, thanks to Rust's ownership system, it is easy to discover when handles to a future value go out of scope. This allows the library to reuse internal resources to make the overhead of using the library as light as possible.
For example, a Stream is modeled as a sequence of futures (more specifically a future of the next value and a new stream representing the rest of the values, kind of like lazy-seq in clojure but async), but instead of allocating one future for each iteration of the stream, the same internal structure is reused while still being able to expose this API.
Well, Rust doesn't have any opinions regarding blocking or not. Currently, std::io implements blocking IO, but as you pointed out, Rust allows libraries to implement non-blocking / async paradigms.
FWIW I thought the article was quite informative and well-written enough to make the concepts easy to grasp. I'd agree with the opinion that Rust is breaking new ground in an interesting way, and over the next several years I wouldn't be at all surprised to see its ideas becoming quite influential.
The discussion points out something else that too often receives too little attention. That is, the quality of documentation is extremely relevant to product success. Sadly, software is more often than not poorly documented, not only frustrating for potential users, but an unnecessary impediment to uptake of the program.
It appears the Rust developers are serious about describing how Rust works. That's a very favorable sign of their commitment to the project and speaks volumes about dedication to grow the Rust community.
Just a comment: I've noticed Rust literature generally errs on the side of feeling a bit dense and esoteric, which might make things less approachable for some. Then again, that does depend on your target audience. This article doesn't make me feel immediately "fearless" about concurrency, and I'm a fairly experienced programmer and a fan of the language. Perhaps it would have been wiser to start with an approachable example and then take a deep dive into all the nuance from there?
Also, are there any plans to integrate tutorial-style guides similar to those from Rust for Rubyists into the Rust book? There's something incredibly fun and real about learning by example. It also would give the reader a chance to read some idiomatic code.
Which has a bunch of examples that utilize most of Rust's features and are runnable/editable in the browser.
And there was a comment on /r/rust[1] that made a good point about the documentation: It's better for searching for known-unknowns rather than unknown-unknowns. Since the community is still relatively small and idiomatic Rust is still being defined, it's hard to find those known-unknowns.
That being said, I've found that the Rust user forum[2] is very helpful and generally the community is full of people who understand the language way better than I do and who are willing to help.
> Also, are there any plans to integrate tutorial-style guides similar to those from Rust for Rubyists into the Rust book?
I'd been waiting for beta to drop to do a re-organization of the TOC of the book. You can see it on nightly here: http://doc.rust-lang.org/nightly/book/
I've carved out a whole section, "Effective Rust", specifically for this kind of thing. I'm looking for a better name than "Effective Rust", since that's a very common book title, though. Don't want to be too greedy with that namespace!
We do also have Rust by Example, but I admittedly don't give it as much love. Once the book is done...
Never knew Rust by Example existed until today, actually! You really should link to it somewhere more prominent. I'm fairly far along, but would have loved the example book had I known about it sooner.
It was originally a community project, but was then abandoned for a while, and then donated to Rust proper. As such, it's always been sort of secondary. I plan on linking it more prominently once I've actually gone through it and cleaned it up to my own personal standards; for now I just keep CI green. The text has gotten out of sync with the code in a number of places, and I don't want to mislead anyone.
Fair enough: it's definitely not a good idea to promote something half baked.
It actually took me a while to find the link there just now. I don't think "External Documentation" really imparts the right meaning. Perhaps something like "Other Guides and Resources" would make more sense?
It is actually linked to from the front page of http://www.rust-lang.org/; it's the "more examples" link under the example. But I guess that's just far enough down to be easy to miss; it's just below the fold on my screen.
Sorry if I'm missing something obvious, but how do these locks stop a deadlock from happening? To explain, say that I have some code that locks A, then locks B, then does something, and another bit of code that locks B, then locks A, then does its thing. There's a race condition that can happen where one thread has A and the other thread has B.
This has bitten me so much that I avoid fine-grained locking when possible and instead use a single global lock everywhere, have new threads start in the locked position, and then only selectively unlock the bits of code that I can prove are thread-safe, or add fine-grained locks only where benchmarking shows they're absolutely necessary.
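To make the scenario concrete, here is a small sketch of that hazard in Rust (the data and thread are made up for illustration). Nothing in the type system rejects it, because acquiring locks in an inconsistent order is not a memory-safety violation: the program compiles cleanly and may or may not hang depending on timing.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let a = Arc::new(Mutex::new(0u32));
    let b = Arc::new(Mutex::new(0u32));

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let worker = thread::spawn(move || {
        let _ga = a2.lock().unwrap(); // this thread: lock A, then B
        let _gb = b2.lock().unwrap();
    });

    let _gb = b.lock().unwrap(); // main thread: lock B, then A -- potential deadlock
    let _ga = a.lock().unwrap();
    drop((_ga, _gb));

    worker.join().unwrap();
}
```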
> Sorry if I'm missing something obvious, but how do these locks stop a deadlock from happening?
This is the difference between a data race and a race condition. We can't generally prevent deadlocks; I would imagine that (like all race conditions) it's an unsolvable problem at the language level. Maybe some PhD will prove me right or wrong, though...
Preventing deadlocks is not an unsolvable problem. Linear-session-typed process calculi are guaranteed deadlock-free and race-free. (The connection with Rust's ownership type-system makes me wonder if there might be some way to backport this guarantee to Rust, but I suspect the connection doesn't go far enough.) This comes at a price, however: certain process topologies are impossible to construct, and there's no way to ask "give me the first message to arrive on either of these two channels" (no Unix `select()` analogue). The latter in particular bugs me, because `select()` is a necessity in practice.
In general, problems of "guarantee property X" can almost always be solved with a sufficiently advanced type system, but at the cost of (a) inventing and using a sufficiently advanced type system and (b) disallowing some perfectly good programs.
> Linear-session-typed process calculi are guaranteed deadlock-free and race-free
While this may be true for certain restricted systems, this claim is misleading. Session types, as invented by K. Honda and refined by numerous others, guarantee deadlock and race freedom only within any given session. Session initiation (basically messages to replicated inputs) can deadlock and race in many (most?) session typing systems. The decision not to guarantee the absence of deadlock at session initiation points is what makes session types so simple.
It is possible to restrict session types so that they are completely deadlock-free and race-free, but -- as far as we know -- that guarantee comes at the heavy price of making the typing system completely inexpressive, so you cannot type many natural forms of concurrent programming. Alternatively you can give up on simplicity and make the typing system very complicated (e.g. the system pioneered in "Strong Normalisation in the Pi-Calculus" by Honda et al, and its successors). The former is a real problem, as witnessed by the continuing inability of the session typing crowd to get a handle on message passing in Erlang.
Finding a system that is expressive, simple and guarantees deadlock-freedom is a major open research problem (and I suspect infeasible).
I mostly know about the linear logic side of session types; I'm not super familiar with the older line of research starting with Honda. I'll have to read up on it!
I'm not sure what the equivalent in my world of "session initiation" is. If by "replicated inputs" you mean what in linear logic is expressed by the "!" connective (often called "replication"), it's definitely possible to have ! without deadlocks or any observable nondeterminism. See Caires & Pfenning, "Session Types as Intuitionistic Linear Propositions", for example.
I already listed a few of the limitations of this approach (limited process topologies, and no `select()` analogue). I'm not sure whether these qualify as "completely inexpressive", but I do think the lack of any `select()` analogue is pretty limiting in practice. I'm not sure how hard it might be to remove that limitation, but it wouldn't surprise me if that were a major sticking point. Still, overall I think I'm more optimistic about the possibility of capturing most reasonable forms of concurrent programming in a deadlock & race-free manner.
Honda's pioneering papers on session types (and on everything else for that matter) are always worth reading.
Yes, replicated input is !x(v1).P in pi-calculus, and Milner was directly inspired by linear logic's replication.
> Still, overall I think I'm more optimistic about the possibility of capturing most reasonable forms of concurrent programming in a deadlock & race-free manner.
I don't think you can, or indeed would want to, excise all forms of race condition from concurrent programs, for if you did, there would be little concurrency left. You could only do deterministic concurrency, which is not what you want in many cases, e.g. when you use concurrency to mask latency. For example, how would you define a lock or mutex if all races are forbidden? You need race conditions, but you want them in homeopathic doses; you want them constrained and tightly controlled.
I also doubt that concurrent programming patterns like dynamic deadlock detection and removal are amenable to a type-theoretic analysis in linear-logic based systems such as Caires & Pfenning's.
The safe subset of Rust (which is more-or-less linearly typed with elided drops) is indeed deadlock-free. Features like mutexes are written using the unsafe sublanguage because shared memory concurrency is useful. Because potential deadlocks are opt-in rather than opt-out, it's not unreasonable to imagine that future extensions to Rust (or some other language in the same mold) could bridge this gap in a practical way.
This is a bit of a misunderstanding of what Rust's safety guarantees are and what 'unsafe' actually means in Rust.
Rust safety guarantees are based around a few rules involving references, ownership, lifetimes and borrowing. None of them forbid the creation of mutexes or anything else which you will find in Rust.
'unsafe' Rust code shouldn't break these rules any more than the 'safe' code does, the only difference is that the compiler lets you do something which it doesn't know is safe. It's up to you to design the code such that it is.
If code marked as 'unsafe' does something which actually violates one of the assumptions on which the Rust compiler works, then the safety of the whole program is jeopardised.
(There was actually a bit of bike-shedding surrounding possibly renaming the 'unsafe' keyword to better reflect these facts, which included suggestions like 'trustme' and 'yolo', but a satisfactory term was not found)
You are correct. When I refer to the safe sublanguage, I mean the language without any unsafe blocks. Since so much of Rust is defined in libraries, precisely what is allowed in those unsafe blocks is a fairly important detail; they can violate semantic guarantees that "pure" safe Rust provides. In this case, the Rust core team made the deliberate choice (which they have discussed on occasion) not to consider deadlocks unsafe behavior, because they were not able to get it to work to their satisfaction. As another example, there are fairly subtle implications for concurrency around the fact that Rust provides (using unsafe code) a swap function for mutable references. Swap cannot be written in the pure safe sublanguage. Yet another example: in pure, safe Rust, memory leaks are not possible (except via premature abort). This is not true of Rc, hence Rust does not guarantee freedom from memory leaks.
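As a concrete illustration of the Rc caveat: a reference cycle built entirely out of safe code keeps both nodes alive forever. The `Node` type here is just a made-up example.

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b)); // a -> b -> a: a cycle

    assert_eq!(Rc::strong_count(&a), 2);
    // When `a` and `b` go out of scope, each node is still kept alive by the
    // other's reference, so neither destructor ever runs: the memory leaks,
    // without any unsafe code in sight.
}
```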
FWIW, transitively-safe Rust is quite uninteresting as an actual programming language, since you can essentially only use some arithmetic and simple user-defined data types. There's no IO and no core data structures like Vec: `unsafe` is designed to be used as a building block to create safe interfaces and the distinction between safe things implemented in the language itself and safe things implemented in libraries (especially the standard library) is fuzzy... and even then, the compiler can be regarded as one huge `unsafe` block.
Of course, what you say is perfectly true, the Rust language itself has no threads or concurrency, and hence is automatically dead-lock free.
Yes, it's somewhat banally true, but it's an important subset of the language because the guarantees of transitively-safe Rust represent a baseline for what you can ultimately guarantee in the language. Work like Patina (ftp://ftp.cs.washington.edu/tr/2015/03/UW-CSE-15-03-02.pdf) is useful for that reason. Transitively-safe Rust may also independently fulfill a useful role in the language; e.g., it is a candidate for the subset acceptable in constant expressions.
I guess what I was trying to get at is that 'race conditions' can fall under the same kinds of errors as logic errors. This is the way that people usually attack the phrase "if it compiles, it works."
I'm not sure what you mean when you say that race conditions can be the same kind of error as logic errors.
I would define a race condition as "an error due to nondeterministic concurrency". If you ban nondeterminism, for example, race conditions are impossible. In practice, nobody wants to ban nondeterminism, so races creep in. There might be smarter ways to eliminate races (ways of only allowing benign nondeterminism), though. So I'm not sure that practical languages without race conditions are an unachievable goal, just a difficult one. Give it another 20 years and we'll see where we're at. :)
I think I'm just being imprecise with my terms, since this is just a comment. I've seen people describe race conditions in practice as "most of the time, message A comes in, then B, but I didn't write it correctly, so in certain cases, B comes in, and then A." Which I would casually describe as racy, but as you point out, would be more accurately described as a pure logic error.
Deadlocks can be statically prevented by ensuring that all code paths acquire locks in the same order. However, Rust's type system is not capable of enforcing that by itself (except maybe in some very limited cases). Some other substructural type systems (e.g. http://en.wikipedia.org/wiki/Substructural_type_system#Order...) might be up to the task, but I'm far from an expert in this area.
That said, the ergonomics of the mutex design and the ownership system would probably eliminate most of the race conditions I've seen. Lock ordering can be effectively established by nesting mutexes: e.g. `Mutex<(X, Mutex<Y>)>` allows you to establish that any locking of Y has already locked X.
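A small sketch of that nesting idea, using the standard library's Mutex (the X and Y types are placeholders): because the inner mutex is only reachable through the outer guard, you simply can't lock Y without already holding X.

```rust
use std::sync::Mutex;

struct X(u32);
struct Y(u32);

fn main() {
    let nested = Mutex::new((X(1), Mutex::new(Y(2))));

    let outer = nested.lock().unwrap();  // must take the X lock first...
    let inner = outer.1.lock().unwrap(); // ...because Y is only reachable through it
    println!("x = {}, y = {}", (outer.0).0, inner.0);
}
```

As the reply below notes, this forces a strictly nested acquisition order, which is exactly the limitation being discussed.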
Yeah, I've thought about this approach in the past. One problem is that it requires you to acquire the same locks in the same order. That is, if you had Mutex<(X, Mutex<(Y, Mutex<Z>)>)>, you couldn't acquire X, then Z, without first acquiring Y, but it would be perfectly safe to do so.
(Although, come to think of it, you can probably work around that with some cleverness; e.g., you could provide the option to access either the data protected by the Mutex or the next Mutex in the sequence through &mut; whichever you chose, you couldn't access the other until you were through with the children. You still have to keep the mutexes structurally in a linked list, though, which is obviously not ideal for all situations. And you still need to deal with the fact that the reference to the first Mutex must be an &-reference, hence can be replicated indefinitely, which allows you to subvert the ordering guarantees; but this was already a problem, and I think there are some clever ways around this using rank-2 lifetimes).
Rust's concurrency primitives are as powerful as its semantic concepts are expressive - both stunningly so for a language that compiles to this level.
Yesterday I wrote a macro to build simple actors[1], without a care for robustness (this was just a rough draft and I didn't expect it to even work so quickly - amazingly it did!). Now I'm looking at how to fill it out and make it generally useful and get this: I didn't even try to handle failure yet, but it already responsibly joins the thread and deallocates its resources as soon as the actor can no longer receive messages. Automatically! As a natural consequence of Rust's memory model and type system. Despite the fact that Rust knows nothing about concurrency at the language level.
I'd just like to say that I found the tone of this article so much better than all of the other rust guides I have looked at. It managed to be interesting without being annoyingly cute or overly dry or explaining basic concepts which any programmer will know. It makes me want to go back and have another look at the language because it presents the ideas and upsides in such a pleasant manner. Well done.
> the same tools that make Rust safe also help you tackle concurrency head-on.
The tools that make Rust safe, in this instance, being its ownership type system. Why on earth should that be the case? Why would ownership types make concurrency easier? The post gives plenty of in-depth answers to this question, but to my mind it misses the bigger picture.
The big-picture reason why ownership types and concurrency match so well is the Curry-Howard correspondence.
Ownership types are known as linear[1] types in the PL theory literature. Linear types are so-called because they correspond to linear logic, a logic sometimes described as "the logic of resources" rather than "a logic of truth". In linear logic, instead of reasoning about eternal truths like "1 + 1 = 2", you reason about resources, like "I have an apple and a banana". If you also happen to know that "from an apple, I can make applesauce", then linear logic lets you reason that you can obtain the state "I have applesauce, and a banana" (but no longer an apple).
However! Linear logic has another interpretation, a Curry-Howard interpretation. You can also view it as a type system for a concurrent language based on message-passing, where your types describe the protocols message-channels obey. "I have an apple and a banana" corresponds to a channel which will first produce an apple, then a banana, then close. This is known as the session-types interpretation[2].
At first glance it may seem that this interpretation of linear logic and Rust's interpretation are unconnected. I doubt it; in logic, everything connects to everything else. It's no coincidence, for example, that ownership (linearity) is what you need in order to send mutable values across a channel without copying. There is a deeper structure waiting here to be discovered, and I for one am deeply excited about it.
[1] In Rust's case, technically they are affine types. An affine type is a linear type whose values can be freely dropped, i.e. deallocated.
[2] Session types have been around for a long time, but only fairly recently was the connection to linear logic discovered.
For anyone interested in session types, we are currently working on implementing a library for them, in Rust[0]. However, as you point out, Rust's types are affine, not linear, so we're also working on a compiler plugin for Rust that lets you track protected types and make sure they aren't dropped, letting you sort of emulate linear types[1].
Most of this is still in a pretty early state (neither actually compiles at the moment), and there are some fundamental problems with the plugin (it's impossible afaik for compiler plugins to examine external crates), but someone might find it interesting.
Eventually, our goal is to use these tools to solve some concurrency problems in Servo.
"First, the `linear` attribute as described here does not create true `linear` types: when unwinding past a `linear` type, the `linear` attribute will be ignored, and a `Finalize` trait could be invoked. Supporting unwinding means that Rust's linear types would in effect still be affine."
Ownership is doable in a dynamic language, provided you are willing to tolerate runtime failures: C++'s std::unique_ptr is enforced at runtime. Similarly, you can emulate dynamic borrowing using something like Rust's RefCell, which panics when you attempt to mutably borrow a value twice (though it can be quite difficult to reason about from a usability perspective).
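A tiny demonstration of that dynamically-checked borrowing, using try_borrow_mut to surface the conflict as a Result instead of a panic:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(0u32);

    let first = cell.borrow_mut();
    // A second mutable borrow is rejected at runtime; the plain `borrow_mut`
    // call would panic here instead of returning an Err.
    assert!(cell.try_borrow_mut().is_err());
    drop(first);

    // Once the first borrow is released, mutable access works again.
    *cell.borrow_mut() += 1;
    assert_eq!(*cell.borrow(), 1);
}
```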
But if you're building on top of F# you have a static type system at your disposal. Usually, languages that incorporate linear types and first-class garbage collection treat linear types as a separate kind, and allow them to interact with standard types only in limited ways. See, for example, https://github.com/idris-lang/Idris-dev/wiki/Uniqueness-Type.... I believe http://protz.github.io/mezzo/ is also exploring this design space.
My lang will have static types, so that info is useful.
It will start as an interpreter, but probably the challenge is the ergonomics of it: how to make it "easier" to use... probably limit it to only handle resources? (i.e. files, handles, etc., like IDisposable in .NET)
(Search for { contra godel, direct logic }.) Can't find it, but he quotes a really interesting exchange between Wittgenstein and Turing (who was attending his class) where W, in his inimitable style, just waves away paradoxes and says "so what?"
Can someone please write a Clojure-y Lisp that compiles to Rust, without immutability (by default) or concurrency primitives, since thread safety is guaranteed by the Rust compiler?
I love Clojure, but this would be a completely different beast, a crazed DRAGON probably.
Immutability sure makes this stuff easy in Erlang.
EriPascal, an ancestor of Erlang, used runtime ownership for passing mutable binaries around. Once you send the binary in a message it is a runtime error to use it again afterwards.
I think it will do just fine as an industrial language, as long as the senior architect is the Idris expert. His job will be to write the DSL for the engineers to code in. Idris already has hooks into C and Java, and who knows what else by then. It won't work well if you expect to replace your Java brigade with an Idris brigade. Why would you need one anyway? I wonder if I'm being naive.
I am slightly confused about why the MutexGuard type is required in the locking example. Why can locking a lock not just return &mut? The API as written does so eventually anyway, through access(). I guess the API as written allows you to call access() multiple times, but I do not see how this is a useful property if the mutex stays locked for the entire scope of the MutexGuard anyway.
The key is that the MutexGuard is responsible for unlocking (and it does so automatically on destruction). That way, you're tying the scope for the &mut reference to the scope in which the lock is actually held.
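A rough sketch of that shape (not the blog post's actual code) built on top of the standard library's Mutex: the &mut handed out by the hypothetical `access` method borrows from the guard, so it can only be used while the lock is held.

```rust
use std::sync::{Mutex, MutexGuard};

struct Guard<'m, T>(MutexGuard<'m, T>);

impl<'m, T> Guard<'m, T> {
    fn access(&mut self) -> &mut T {
        // The returned &mut T borrows from `self`, so its lifetime is bounded
        // by the guard's; once the guard is dropped the lock is released and
        // the reference can no longer exist.
        &mut *self.0
    }
}

fn main() {
    let mutex = Mutex::new(vec![1, 2, 3]);
    {
        let mut guard = Guard(mutex.lock().unwrap());
        let data = guard.access();
        data.push(4);
    } // guard dropped here: unlocks automatically
    assert_eq!(mutex.lock().unwrap().len(), 4);
}
```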
Similarly, I don't quite understand what in the type signatures precisely ties MutexGuard to the result of access.
Like, in the "Disaster averted" example in the mutex section, how is it detecting that "error: `guard` does not live long enough"? I see that access returns a borrowed mutable T. But how does the compiler know that that's borrowed from guard instead of from a hypothetical second or third parameter, or from some other (global?) state altogether?
Edit: Or maybe if a function returns a borrow, then the borrow cannot outlive the scope of the function?
This says that the incoming borrow and the returned borrows have identical scopes (officially called "lifetimes" in Rust; 'a here is a lifetime variable.) It's a very typical pattern: if you return some borrowed data, it generally comes from a borrow you were given, and the shorthand allows you to leave this implicit in cases where it's clear.
EDIT FOR CLARITY: in particular, you're saying that if you take a "sublease" from something you've already borrowed, that sublease is valid for no longer than the original lease. So for example if you borrow a field out of a borrowed structure, the access to the field can last no longer than you had access to the original structure. In this case, we're saying that access to the data lasts no longer than the lock is held.
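Spelled out with the lifetime written explicitly, the signature being discussed looks roughly like this (the `LockGuard` type is hypothetical, just to show the annotation): the returned borrow shares the lifetime 'a with the borrow of the guard, which is what produces the "`guard` does not live long enough" error if you try to keep the reference around after the guard is gone.

```rust
struct LockGuard<'m, T> {
    data: &'m mut T,
}

impl<'m, T> LockGuard<'m, T> {
    // Explicit form: the incoming borrow of the guard and the returned borrow
    // of the data share the same lifetime 'a (the "sublease" can't outlive the lease).
    fn access<'a>(&'a mut self) -> &'a mut T {
        &mut *self.data
    }
}

fn main() {
    let mut value = 10;
    {
        let mut guard = LockGuard { data: &mut value };
        let reference = guard.access();
        *reference += 1;
        // Holding `reference` past the point where `guard` is dropped would be
        // rejected with "`guard` does not live long enough".
    }
    assert_eq!(value, 11);
}
```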
I take it there's no way of tying the lifetime of the MutexGuard to the lifetime of the .access return, so that you don't have to have this intermediate step?
The version presented in the blog post is simplified. In actual Rust, you don't need to write .access, as it is elided using the Deref trait. So in practice there isn't an intermediate step.
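For comparison, this is what the real std::sync::Mutex API looks like in use: the guard implements Deref and DerefMut, so the explicit `.access()` step disappears.

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0u32);

    {
        let mut guard = counter.lock().unwrap();
        *guard += 1; // DerefMut: the guard is used as if it were the data itself
    } // lock released when the guard goes out of scope

    assert_eq!(*counter.lock().unwrap(), 1);
}
```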
What is interesting is not so much that Rust has shared-memory concurrency (BTW, isn't lthread a M:N threading library? Everything in the article is 1:1) but that it can make this statically safe. Perhaps I am missing something, but I see no evidence from your repository that lthread enforces any of the static safety guarantees discussed in the article, which is not surprising as many of them fundamentally depend on lifetimes and borrowing.
std::unique_ptr is a significant improvement over prior options, but it can neither ensure correct usage of a mutex (in the sense that it can't cause a data race) nor preserve data race safety for references into another thread's stack. Both of these rely on Rust's ability to limit reference lifetimes, which C++11 doesn't really have (C++14 has an extremely limited form of it in rvalue references, but it's not sufficient to replicate what's in this article). unique_ptr also can't really statically guarantee uniqueness (it's done at runtime instead), meaning it doesn't quite work with channels; and in combination with other C++ features (such as references or shared_ptr) you can also see data races through it, since it doesn't prevent mutation; thus, it isn't able to guarantee safety for use with channels either.
unique_ptr transfers ownership. No two threads can be working on the same object unless you go out of your way to cheat unique_ptr's behavior.
Static or runtime guarantees don't buy me much. Regarding mutations, const and copy/move constructors give you what you want. Generally, all you need to do is have one thread create an std::unique_ptr<Class> object and pass it to the channel for the other thread to pick it up.
Regardless of whether you feel that these static guarantees buy you anything, C++ does not have them and Rust does; the two solutions are not equivalent. Const references only guarantee that you are not mutating through the reference, not that someone else isn't; if you are writing a library that accepts one, you cannot prevent misuse by users. This is an important difference from Rust. In any case, you still haven't explained how unique_ptr helps with the other two things I mentioned (ensuring that data protected by a mutex cannot be accessed without acquiring the lock, and safely sharing and/or mutating data on another thread's stack). The blog post explains how Rust does it; C++ simply does not have the necessary types.
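To make the contrast concrete, here is a minimal sketch of the channel case in Rust: sending the value moves it, so the compiler statically rejects any later use by the sender (uncomment the marked line to see the error).

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    let data = vec![1, 2, 3];
    tx.send(data).unwrap(); // ownership of `data` moves into the channel

    // println!("{:?}", data); // error[E0382]: borrow of moved value: `data`

    let received = thread::spawn(move || rx.recv().unwrap()).join().unwrap();
    assert_eq!(received, vec![1, 2, 3]);
}
```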