
Rust's ownership model, and the borrow checker that enforces it, are a major breakthrough in language design. It's so simple, yet it solves so many problems.

This is what Go should have done. Then Go's "share by communicating, not by sharing" would be real, not PR. Go code often passes references over channels, which results in shared memory between two threads with no locking. In Rust, when you do that, you pass ownership of the referenced object. The sender can no longer access it, and there's no possibility of a race condition. A successor to Go with Rust ownership semantics has real potential. Go has a simpler type system than Rust, and seems to be well matched to the back-end parts of web-based systems.




When Go was first announced in 2009, I thought "this is a nice idea, but I really wish it had generics and non-nullable references". Then Rust was announced a few months later, and I thought "wow, this is what Go should have been."

However, in the intervening time, Rust has diverged further from Go. It no longer has green threads, nor does it emphasize message-passing concurrency. In addition, Go was ready to use much sooner; by having a much simpler type system and a less ambitious concurrency story, Go has been ready for use, and building up a library ecosystem, for many years now, while Rust is only just now hitting stability to the point where you can write code and not have it break if you don't constantly keep it up to date with the latest nightly.

So, I think that they are both valid strategies. There has been lots of good code written in languages which don't provide as many safety guarantees or as strong a type system as Rust has; it's not an absolute requirement for all code to be written with such guarantees.

By staying simpler and not trying to solve as many problems, Go has been ready for production use for quite a lot longer, and I think that there are many things for which it is still simpler and easier to use.

That said, I do prefer Rust now that it's actually hitting a stable and usable state. I wish it had been ready sooner, but that's the price you pay for needing a lot of time on iterating on how to make these stricter semantics actually usable in practice.


Well, let's also not forget that a big factor in Go's success is Google.

Google has much deeper pockets than Mozilla to build a language and infrastructure.


This meme really should die.

It has never been true so there's nothing to forget.

The number of full-time Rust people paid by Mozilla is roughly the same as the number of Go people paid by Google.

Mozilla is a wealthy company, and while Google is even wealthier, that doesn't mean it spends all its money on Go. Both V8 and Dart are staffed with more people, and Dart is not taking off the way Go is.

People are trying to rationalize away the explosive popularity of Go with all sorts of explanations, "it's all because of Google" being one of them.

The simple explanation is: people use Go because it's a good language. That's the big factor in Go's success.


> Go because it's a good language.

Go fits a niche and is good enough, for now. But Go as a language is seriously flawed, and its flaws will become more and more obvious as the language gets more popular. People can certainly understand that Go made the choice of "minimalism". But minimalism doesn't mean the language shouldn't have features people really need. The infamous "You don't need that in Go" line will not fly much longer, especially given how the Go team and community try to answer Go's shortcomings. The biggest insult to intelligence is the "go generate" feature. It's just embarrassing.

Go for me is a missed opportunity. I still like it, but I keep a close eye on alternatives that don't have their core library riddled with "interface{}"-style types.

Because there is a need for something simpler and safer than C, that is easily accessible to "script kiddies", that is fast, that has easy concurrency, that compiles to machine code, that has type safety, and that isn't tied to Windows, so that people can produce executables that are easily distributed, or concurrent servers.

Whatever language succeeds in that niche will be the biggest language of the next 10 to 20 years. Mark my words.


> that has type safety

Actually, there is no need for that. Type safety has nothing to do with software being more reliable or more secure. There is, however, a need for language creators who understand that a large standard library, static binaries, and cross-compilation are super important in a fast-moving world where many different operating systems live together on many different architectures.


> Type safety has nothing to do with software being more reliable or more secure.

It is true that type safety doesn't necessarily mean that software is more reliable and secure (it depends on the type system, the semantics of the language, and a host of other things), but saying it has "nothing" to do with it is incorrect. In particular, preventing undefined behavior certainly has an effect on security, whether or not it's through a static type system. As pcwalton has mentioned on several occasions, something like 50% of critical CVEs in Firefox exploited attack vectors that would have been prevented by Rust's type system.


This is where I disagree and believe that such way of thinking about bugs cannot take us anywhere.

Bugs have nothing to do with the formal properties of the language. They are the result of people's thinking processes. And the type system may just as well be the thing someone was wrestling with while making his next CVE-worthy mistake.


Factually, Rust's type system guarantees, if consistently upheld, would have eliminated those bugs. Empirically, we know that in typesafe languages like Java, these types of vulnerabilities are far less frequent. These are facts: you cannot "disagree" with them.

Even if we assume that people write buggy code at about the same rate in C++ and Rust, as long as the vast majority of Rust code is not inside an unsafe block, Rust still reaps this benefit, because bugs in safe Rust shouldn't cause memory unsafety.

Finally, yes, bugs do have something to do with formal properties of a language. Really. They do. You can prove that software matches its specification, or fulfills certain properties. People have. seL4 is a thing. Relying on a static type checker means you only need to trust the static type checker to be correct in order to enforce the appropriate properties, not all code written in the language, and "the Rust type checker" is a much smaller kernel to trust, and will be more widely exercised and heavily scrutinized, than "all other Rust code ever written." This is literally the entire point of static type systems.

I apologize, but this is going to be my last response to you, because I don't think we can have a productive conversation about this.


> Factually, Rust's type system guarantees, if consistently upheld, would have eliminated those bugs.

You are missing my point. It doesn't guarantee not to cause other bugs. And saying that something eliminates some type of bugs is completely misleading, because this is not what matters. What matters is to not cause other bugs while also eliminating some bugs. Which is impossible to guarantee with formal methods. And which is why the current academic approach to languages cannot bring us anything, until the whole system changes.

Psychological approach to programming language design is what has to happen. Anything else is broken.


It's impossible to eliminate all bugs, no matter what you do. Psychological design will eliminate some bugs, but not all. A static type system will eliminate some bugs, but not all. The best you can do is try to eliminate as many bugs as possible, by doing the multiple things that give the best returns, given the bugs you're targeting.


> Actually, there is no need for that. Type safety has nothing to do with software being more reliable or more secure.

If you said this to me in an interview I would politely end the interview there and you would not be hired. A high school student with no programming knowledge at least can be taught. If you're confident in what you just said, you're both ignorant and unteachable.

I really can think of no more scathing indictment of Go than this is the thinking of a typical Go user.


Let's make a small amendment to your simple statement.

>The simple explanation is: people use Javascript because it's a good language. That's the big factor in Javascript's success.

>The simple explanation is: people use PHP because it's a good language. That's the big factor in PHP's success.

Doesn't feel right, does it?

Google and corporate sponsors are incredibly important in building the ecosystem of a programming language. Traction may be the single most defining characteristic of language success, and one can certainly buy traction with money and influence.


But it's true. Google's association is a big factor in Go's success. It's not the only reason, nor likely the biggest now, but it was important at least at the beginning. The fact is that every week we hear about new programming languages. Almost none of them end up being widely used.

Having someone like Google, Mozilla, Apple, Microsoft etc behind it makes a big difference.


It's not so much that Google promotes Go. It's that Google uses Go internally for production code. Thus, the compiler, and the library modules used for web-type services, are thoroughly exercised, and used by in-house people who can insist that bugs be fixed quickly. It took Go far fewer years to achieve stable libraries than, say, Python.


The biggest factor probably was that Rob Pike had basically been prototyping the language for 20 years.

(But google's name certainly helped.)


Not to take anything away from your comment on this being a language that has been brewing for years, but people really should give Robert Griesemer more credit for his influence on the language design.

I know he's the least publicly visible of the three, but IIRC godoc and gofmt are mostly his babies. After getting used to those, I think I miss them more in other languages than I miss the concurrency.


Well said.

Rob Pike is explicit about early Go development; everything that went into Go was agreed upon by Robert, Ken, and Rob. If all three didn't agree it didn't go in.


But it's not as if they threw a massive amount of manpower or budget at it. I think Go succeeded more by association (to Google) and the fact that it hits a sweet spot for many people - it produces native code, but does not have the complexity of C++ or the unsafety of C.

I have written a fair bit of Go code. But I will switch to Rust without a second thought once the ecosystem takes off. The lack of simple things like generics or algebraic data types in Go is jarring.


> It's so simple,

I think that in hindsight, this may be true, but it took a _very_ long time to figure it out. It took _years_ of iteration before we got to where we are today, and the previous work in this space, which used regions but not borrowing, didn't go further because of the ergonomic issues.

In retrospect, hard things often seem obvious, but that's because the work was already done!


Exactly! I came to Rust thinking I knew all I needed due to trying to solve the same things by hand in the CLR. Like I figured I had a pretty good idea what I needed to allow F# to have fast memory management.

And while the core idea is correct, holy shit, there are so many details that the Rust team has had to deal with and iterate over and reduce until it's nice. I suppose this is why the CLR and everyone else just punt on the whole issue.


I'm pretty sure Cyclone used both regions and borrowing in ~2005. But it was a research project and not meant for wide use.


I thought Cyclone only used regions, but I haven't read the paper in a while. (and dates from 2002 IIRC)

It certainly had a big influence on Rust, we're fans of the work.


Cyclone definitely had borrowing. See section 3.1 of this paper: https://www.cs.umd.edu/~mwh/papers/ismm.pdf. Generally speaking, every attempt to use substructural types for resource management in a practical language has allowed for temporarily treating a restricted resource as unrestricted.


It looks like you're right.

I had an undergrad senior thesis in 2006 that futzed around with regions and borrowing, and I know I read some papers on their interactions, but I'm not immediately finding anything by chasing references.

edit: Of course, I don't mean to diminish the accomplishments of Rust, which has turned these moving parts into a working language. In particular, Rust makes it easy to ignore these issues most of the time.


Absolutely no diminishment taken.

To put it in startup terms: other people working on your idea only validates it, it's not a threat. Execution is ultimately what matters. I prefer letting academia blaze the intellectual trail, and then have a number of people implement it after. :)


"Should have done" is a silly phrase to use here -- Rust is attempting to advance the state of the art, Go was attempting to use only very-well-understood ideas to make a very-well-understood, simple language.

Perhaps some day we will see a successor language that follows the philosophy of Go using the new stuff we understand thanks to Rust.


> Go was attempting to use only very-well-understood ideas to make a very-well-understood, simple language.

Well-understood like code generators (go generate)? Were generics not well-understood enough, so they decided to go with something that's well-understood to be a poor solution to the problem?


Actually, now that you mention it, I think generics are poorly understood in general. C++, Java, and C# have major differences in their approach to generics, but they have similar syntax, so most people gloss over the differences. I think doing generics well requires incorporating higher-order types, higher-kinded types, and other concepts which don't exist at all in the wild west of C++ templates. This clumsy implementation leads to monstrosities like std::allocator_traits<std::allocator<T>>::rebind_traits<U>, which would be completely unnecessary if C++ had HKT. Java and C# make other trade-offs but also lack HKT and suffer for it.


>I think generics are poorly understood in general

Perhaps by programmers (and the designers/maintainers of certain mainstream languages...), but there are some very strong models out there; we've had well-behaved parametric polymorphism à la Hindley-Milner for decades and Haskell's typeclasses are in my opinion a very well-thought-out approach to ad-hoc polymorphism (and which the more ambitious C++0x version of Concepts in some ways resembled).

C++'s templates are like the untyped lambda calculus of the type-level world; they're perfectly capable of doing most anything you'd want from them, but using them effectively requires a lot of boilerplate and discipline (and as for std::allocator_traits et al., I think that's more an issue of library design than the design of templates).

I've never found subtype polymorphism particularly convincing beyond the case where independent single classes implement meaningful interfaces, and that's basically like a member function–oriented version of typeclasses.

Now, all this said, I haven't actually used Go, so I don't know how well the rest of the language would interact with these ML/Haskell-style systems, but I imagine at least basic parametric polymorphism would be relatively unobtrusive.


Concerning std::allocator_traits, it reflects a deeper limitation of the language rather than just a library design flaw. If you had HKT, you could pass std::allocator to a template. You can't do that, you can only pass std::allocator<T> for some concrete T. Equivalently, in Haskell, you can pass IO as a type parameter, so you don't see the same kludges.

I definitely agree with the sentiment about subtype polymorphism.


You can actually pass std::allocator to a template:

    template <class X> struct foo {};
    template <template <class> class F> struct bar { F<int> f; };
    
    bar<foo> b;
This is painful to do with a lot of the STL, like vector, because those templates have a lot of default template arguments and there's no implicit currying.

I haven't investigated it but variadic template template arguments might help there, though.


>I haven't investigated it but variadic template template arguments might help there, though.

They do:

    #include <iostream>
    #include <vector>
    
    template <template <typename...> class Container, typename T>
    Container<T> fin (T n) {
        Container<T> set;
        for (T i = 0; i < n; ++i)
            set.push_back (i);
        return set;
    }
    
    int main () {
        for (int i : fin<std::vector> (10))
            std::cout << i << std::endl;
    }


You know, as Go gets more popular, people will take a critical look at the language and its shortcomings. That's inevitable.

If the only answer to the issues they raise is "you don't need that in Go", "use go generate", or "Go isn't for you", Go is going to get a lot of bad rap that will stop the adoption of the language.

If the Go team thinks the language doesn't need generics, they are either out of touch (that's their right, they don't owe us anything) or arrogant. Either way, Go will not be the successful language it could have been.

One should never have to resort to interface{} to code anything in Go. Developers want type safety, not to write type assertions everywhere. The fact that (1) Go has generic built-ins (maps/slices/arrays) but you can't create your own, and (2) the core lib is full of these interface{} functions, proves the language has a big problem.


Rust has something that Go does not: Servo.

If Rust had been missing an important feature like generics, the Servo people would have been telling the compiler people "you're being silly, of course we need generics!". And because Servo is the official test-case project for Rust, the compiler people would have listened.

Go has nothing like this. There's no formal effort to ensure practicality.


Surely some internal Google projects were developed alongside Go in the language.


It's not about whether someone is using the language, but about whether the language designers decide to listen to a user above themselves.

It's hard to design a language. Everybody is always trying to give their conflicting opinions. Many language designers react by turning inwards and ignoring everybody else.


Generics are maybe poorly understood by the average programmer, but what I meant with my rhetorical question was that there is a lot of literature and there are a lot of people who understand generics very well. Maybe the creators of Go don't understand generics, but then again, maybe if you don't understand generics you shouldn't be developing a high-profile statically-typed programming language.

The Go team might not understand generics, but that doesn't mean generics aren't well-understood.

This is my core problem with Go: they clearly have no idea what a decent programming language would look like, and they're getting undeserved attention because of their previous (admittedly impressive) work.


I don't think the Go team has any problem understanding anything. I think they have different subjective goals for a programming language than many critics, which leads to a feature set which includes some things critics would leave out and excludes some things critics would prefer (like generics.)

I think Go has gotten fairly decent traction because there are roles for which real programmers find it reasonably well suited compared to existing alternatives, and I think that, not the previous work of the creators, is what drives attention to it.

OTOH, I think Go's period in the sun is going to be a lot shorter than, e.g., Rust's, unless we see a Go 2.x that addresses some of the "not in the Go 1.x timeline" issues fairly soon.


I do agree that the problem relates to language goals, but their poor choices of goals to pursue do demonstrate a lack of understanding. Their stated goals are frequently solutions that don't really address good use cases. Take for example fast compilation time: the stated use case for that goal was compiling Google's massive C++ code base, which took too long to reasonably compile on a dev machine because of template expansions. Refusing to add generics is an idiotic solution to this problem:

1. Generics aren't templates, and are in fact a solution to this problem. They don't expand at compile time.

2. The long compilation times are indicative of deeper problems: an overcomplicated code base that should be refactored down, and a lack of modularity which would allow partial compilation (only compile the changed modules).

3. Removing generics creates new problems: increased bugs and duplicated code. It's trading compile time for development time, which has historically rarely been a good trade.

4. Even if this is a legitimate concern for Google (which I still don't think it is), it's not a legitimate concern for 99.9% of projects out there. How many projects actually have millions of lines of code that can't be factored down and modularized? If this is really Go's target, then they should admit it's a highly specialized language and not a general-purpose programming language, and that most people shouldn't be using it.

Calling these goals "subjective" as if this justifies them is just a smarter-sounding way of saying "That's just, like, your opinion, man." Opinions can be wrong, and the opinions of the Go team are wrong.


Code generators like RATFOR, lex, yacc, and the C preprocessor? I think it’s fair to say that those are well understood. Generics like Ada Generics, C++ templates, and Java Generics? I think it’s fair to say that we are only beginning to understand the benefits and drawbacks of that approach to code generation, which is why every time I recompile the Java application I’m currently working on, I get erroneous warnings telling me that my varargs are causing heap pollution in my generic static methods. (Which I can suppress, but instead I’m planning to delete those methods.)

Even OCaml, whose approach to parametric polymorphism is worlds simpler than any of the monstrosities mentioned above, just changed its approach by adding GADTs, which I still don’t understand fully despite the best efforts of generous HN commenters.

So I think that, given that Golang/Issue9 decided not to include inheritance, linear types, exceptions, or overloading (even of operators!), in large part because they aren’t well-understood, it’s entirely unsurprising that it decided not to include generics, either.


> Code generators like RATFOR, lex, yacc, and the C preprocessor? I think it’s fair to say that those are well understood.

Yes, that is exactly what I said in the post you are responding to.

> I think it’s fair to say that we are only beginning to understand the benefits and drawbacks of that approach to code generation, which is why every time I recompile the Java application I’m currently working on, I get erroneous warnings telling my that my varargs are causing heap pollution in my generic static methods. (Which I can suppress, but instead I’m planning to delete those methods.)

Just because you don't understand something, doesn't mean that nobody does?


I guess you didn't understand the problem I was describing. I wasn't saying that the problem is that I don't understand parametric polymorphism, or for that matter its realization via code generation, although I’m sure e.g. Philip Wadler understands those things a whole heck of a lot better than I do. The problem was that the Java language designers didn't understand generics, and accidentally implemented varargs in 1.5 in a way that produced a completely unnecessary and avoidable collision with the parametric polymorphism system (designed by Wadler) that they also introduced in 1.5. And there are a variety of similar problems in various parametric-polymorphism systems, which leads me to think that the problem is not merely that some loser at Sun didn't understand these things (and didn't bother to ask Wadler, who surely could have warned him off the particular stupidity I mentioned) but rather that there is a general lack of understanding of parametric polymorphism.


> Go was attempting to use only very-well-understood ideas to make a very-well-understood, simple language.

Many of the ideas in Go were not well understood and had only been demonstrated in research languages prior to Go. Its concurrency model and associated primitives, and the automatic interface system, were in roughly the same state then as Rust's new ideas are now.

That said, the ideas in Rust were not as well understood 5 years ago, and I doubt the Go creators could have incorporated them well at the time. It took Rust 5 years of experimentation to come to what it is now (and it's still got a number of additions to go!).


Go's concurrency primitives are rooted in decades-old research (CSP).

Its interface system is an implementation of structural typing, which may be even older.

You could maybe argue that neither had "industrial-strength" implementations, but that seems like a stretch.


> Many of the ideas in go were not well understood and had only been demonstrated in research languages prior to go. Its concurrency model and associated primitives, and the automatic interface system were in roughly the same state then as Rust's new ideas are now.

To Go's authors, the CSP model was very well understood through three different implementations: Alef, Limbo, and Plan 9's libthread.


> Alef, Limbo, libthread.

Also Newsqueak.


Maybe not well understood by programmers but in the programming language world Go's ideas have been around for over 3 decades.


First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.

> Go code often passes references over channels, which results in shared memory between two threads with no locking

Having a complex type system would cut into compile times (fast compilation is an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).

As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.

Also, I would think it's pretty obvious that once you've sent something over a channel you shouldn't try to write to it anymore[1]. Do you have an example of the code you're referring to?

[0] well, technically "at most once", since some files can be skipped entirely.

[1] I'd need to think about this, but I don't think it'd be difficult to detect this statically[2] at compile-time and enforce that stack-allocated rvalues sent into channels are never used again in the same scope. It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.

[2] Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to go along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.


> (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).

Same in Rust. Thanks to the module system, the Rust compiler never rereads a file more than once.

> [1] I'd need to think about this, but I don't think it'd be difficult to detect this statically[2] at compile-time and enforce that stack-allocated rvalues into channels are never used again in the same scope. It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.

That trivially fails to solve the problem, due to aliasing.

> [2] Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.

Rust's grammar is also context-free, as far as I know.

Your proposed static analysis is not reliable. The "more complex" type system in Rust exists precisely so that we can do more reliable static analysis.


> Rust's grammar is also context-free, as far as I know.

I believe bytestrings are context-sensitive :( I don't remember the exact details here, so I could be wrong. But other than that...


Technically, byte-strings (and nested comments) mean the lexer is not regular, but the grammar is still context-free. But anyway, parsing is not typically the bottleneck these days.


Oh, that's just lexing though.



> Thanks to the module system, the Rust compiler never rereads a file more than once.

Are you sure about this? When you use a generic function accepting and/or returning values of type T, I guess the compiler has to generate a version of the function for each instantiation of T. But the compiler cannot know all the possible uses of the generic function without having walked through the whole code first, which implies at least two passes. How does it work?


The compiler reads the source code into an internal AST and manipulates that, never touching the external files again.


Ok, you're right, but we're playing with words here :) I understand that the file containing the source code of the generic function is read only once, but I guess the AST of the generic function is processed many times, at least once for each instantiation of the function? And I'd venture to guess that the biggest cost is in processing the AST and compiling it, not in reading/parsing the source?


Generating specialized code from a generic AST is in no way analogous to the exponential-time explosion that is the C++ header system, which is what Go is referencing when it says that it has been designed to read each source file only once. All languages with proper module systems have this property (which is to say, basically all languages that aren't C or C++).


Yes, designing a proper module system and getting rid of the header system is the best way to improve compilation time. But that was not my point. There are other factors impacting compilation time. Look at Scala, for example: it has a proper module system, but compilation is still relatively slow, even if better than C++. This is why I'm interested in learning how Rust compiles generic functions, and what the impact is on compilation time (because the compiler has to generate a version of the function for each possible T), on binary size, and on pressure on the CPU cache (because having many versions of the function makes it harder to keep all of them in the cache at the same time).


The Rust compiler actually has built-in diagnostics to let you profile the time that each phase of compilation takes. I'm on mobile, but I believe it's `rustc -Z time-passes`, and whenever I use it on my own code, 95% of the time consists of LLVM performing codegen. Nearly every other compiler pass takes approximately 0% of the compilation time (coherence checking being the notable exception, for reasons I have yet to discern).


> the exponential-time explosion that is the C++ header system

No I don't think so.


It's not playing with words: the parser only runs once, and processing/reprocessing the AST is much faster than interpreting the original source multiple times (e.g. it can be indexed cheaply).


> > (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).

> Same in Rust. Thanks to the module system, the Rust compiler never rereads a file more than once.

As I said right at the beginning of my post, I'm not trying to compare Rust and Go directly, because I don't think that's meaningful. I'm explaining why these particular features would be difficult to incorporate into Go. Note that I never said that Rust reads a file more than once, or that Rust's grammar is not context-free. In fact, the word "Rust" doesn't appear anywhere in my comment at all except in that very first paragraph.

I love talking about PLT and would otherwise be interested in having a discussion about static analysis and hearing why you think it would not solve the problem, but I have to say, it's both frustrating and discouraging to post an in-depth response and then get downvoted twice, with the only reply being one which very clearly ignores the very first line of my entire response.


Bystander here, neither upvoted nor downvoted you, but I think one reason for the skepticism is that people have heard the "It shouldn't be hard to implement X in language Y..." refrain many times before, and until somebody actually does implement X in Y, it means nothing. "It shouldn't be hard to make Python at least as fast as V8." "It shouldn't be hard to implement static typing on top of Python 3 function annotations." "It shouldn't be hard to add lambdas to Java [well, they finally did with Java 8...which we still can't use on Android]".

If it's actually not that hard, go implement a checker for Go that does check that statically-allocated rvalues are never used again, and post it here. You'll probably shoot to the top of Hacker News, it'd be a nice open-source project for the resume, and it'd provide a very useful tool for the Go community.


> I would otherwise be interested in having a discussion about static analysis and hearing why you think it would not solve the problem

Stack allocation isn't in the semantics of Go, so that's a pretty weird thing to use as the basis of a static analysis. It's also not sound because of interfaces or closures. You would want something more like "fully by-value data with no pointers in it, no interfaces, no closures".


Not to mention no arrays or slices! This burned me pretty hard during my first week with Go.


You outlined why they shouldn't be compared because they are different and outlined differences. The response (from one of the rust devs, if I'm correct) explained how some of your purported differences weren't actually different. The response wasn't ignoring your first line, it was explaining how portions of your evidence backing up that first line were factually incorrect.


It's impossible for your comment not to be an implicit comparison of the two. No amount of disclaiming the intention to compare can prevent that.


> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.

It's perfectly reasonable to compare any two programming languages, especially where their use cases overlap.

> As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.

Absolutely sure is better than reasonable sure.


> They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.

But everyone keep trying to though, that's kind of interesting.

The reason, I am guessing, is that Go billed itself as a "systems programming language". It did; go find the original announcement video (or was it a press release). That is what it said.

Later, after the idea of replacing C didn't quite materialize, they clarified that by "systems programming language" they actually meant something else. Now, if I or some other anonymous user had said that, ok, fine, people get confused, don't know enough, etc. But I don't believe Rob Pike doesn't know what a systems programming language is.

Anyway, I am not saying one way or another but just illustrating that everyone doing the comparison is not completely insane.

Now the way I see Go is more like a Python++ or Ruby++. All those nice concise languages, + concurrency, + speed, + some type safety, + easy static binary deployment. So I agree with you that Rust and Go should not be compared.


My perspective is that Go seems to be trying to aim at network aware services where Java might have been used before.

Internally what I've seen @ Google is it seems to be used here and there for services where Google teams have typically used C++ (but other companies might use Java.)

And yes, some places where Python has been used, Go makes a decent replacement.

Not my cup of tea, but I can see its niche. It's not the same niche as Rust. At least not right now.


> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.

Almost all languages are different and thus have different design goals. That doesn't mean it should be off-limits to compare them.


Yeah, but instead of having an interesting discussion about Rust, much of the thread devolves into a pissing contest.


> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases.

Then why does everyone insist on comparing Rust and C++? Objectively Rust is much closer to Go than to C++.

That's just dumb. Rust and Go were, in fact, contemporarily designed to similar constraints and with similar goals. That they made significantly different design choices is an interesting point that should be discussed and not swept under the rug just to win internet points.


At the language level, Rust is substantially more similar to C++ than it is to Go. C++ and Rust both have many properties (lack of GC, pervasive stack allocation [even for closures], move semantics, overhead-free C FFI compatibility) that many other languages lack, and the Rust developers actively work to match C++ on features. None of this is true for Go; Rust's similarities with Go are shared by many other languages as well, many of which are much more widely used than either (hence would probably represent a more informative comparison; e.g. I think posts contrasting Rust and Java would be quite useful, but I have seen very few of them). As such, Go and Rust comparisons tend not to be very illuminating.


> At the language level, Rust is substantially more similar to C++ than it is to Go. C++ and Rust both have many properties (lack of GC, pervasive stack allocation [even for closures], move semantics, overhead-free C FFI compatibility) that many other languages lack, and the Rust developers actively work to match C++ on features.

I think this is absolutely true, for what it's worth.


Rust and Java? Not the usefulness you're looking for, but this tool popped up on HN today: http://rosetta.alhur.es/compare/Rust/Java/#


The Rust on Rosetta code is exceedingly old. There's a community project to update the examples, but they're waiting until 1.0 to move upstream, so as not to cause them too much churn.


Rust is very similar to modern C++, just without backwards compatibility considerations, and with the (fantastic, IMO) additions of compiler-enforced ownership rules and generics instead of templates, which makes certain type checking work better.


What objective criteria are you using by which Rust is much closer to Go? Go being garbage collected and Rust/C++ not being seems like a much larger gap than anything between Rust and C++.


A managed heap; emphasis on the use of functional constructs; rejection of lots of previously-thought-to-be-essential OOP features; built-in concurrency based on a rejection of the traditional primitives; general de-emphasis of metaprogramming constructs like templates and macros; broad design emphasis on preventing common programming mistakes.

You're saying, I think, that the Go implementation and runtime look more similar to Java's, and that C and Rust are clearly in the same family (in the sense of being only loosely coupled to a runtime environment). That's the way a language implementer might look at the question, I guess. It's certainly not the only one.


> A managed heap

I'm not sure what you mean by this, but when I hear "managed heap" I think of heap memory managed by a garbage collector. This is not a feature of Rust.

> emphasis on the use of functional constructs

Rob Pike wrote (unavoidably, because of the lack of generics) crippled versions of map and reduce for Go and declared that the almighty for loop was superior. I don't think an emphasis on the use of functional constructs follows from this.

> rejection of lots of previously-thought-to-be-essential OOP features

Yep, this is an important similarity between Go and Rust. Rust brings traits, which are (extremely) similar to Haskell's typeclasses, into the mix as well.
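For readers unfamiliar with the comparison, here is a minimal sketch of a Rust trait used the way Haskell uses a typeclass (the `Area` trait and shape types are invented for this example):

```rust
// A trait plays the role of a Haskell typeclass: an interface that
// types opt into explicitly with `impl`, rather than via inheritance.
trait Area {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }
struct Square { side: f64 }

impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

impl Area for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

// Static dispatch: monomorphized per type, like a typeclass constraint.
fn total_area<T: Area>(shape: &T) -> f64 {
    shape.area()
}

fn main() {
    assert_eq!(total_area(&Square { side: 3.0 }), 9.0);
    assert_eq!(total_area(&Circle { radius: 1.0 }), std::f64::consts::PI);
}
```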

> built-in concurrency based on a rejection of the traditional primitives

Sure, but the difference between the languages is crucial here: Go builds safe(ish) concurrency primitives into the language; Rust does not, but provides language features powerful enough to build memory-safe concurrency primitives in the standard library. And, perhaps surprisingly, mutexen and the like are actually encouraged precisely because Rust can make them safer.

> general de-emphasis of metaprogramming constructs like templates and macros

I don't agree that Rust de-emphasizes these. Generic programming is strongly encouraged, and indeed Rust's generics are implemented very similarly to C++ templates. Rust also strongly encourages the use of macros for cleaning up repetitive blocks of code, both inside of and outside of function bodies. And very powerful metaprogramming is on the way (already available in nightlies) in the form of compiler plugins, which allow arbitrary code execution at compile time.
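A trivial illustration of the macro point, using nothing beyond the standard `macro_rules!` facility (the macro name here is invented):

```rust
// A declarative macro that removes repetitive boilerplate:
// it expands to one `fn` per name/value pair at compile time.
macro_rules! constant_fns {
    ($($name:ident => $value:expr),*) => {
        $(fn $name() -> i32 { $value })*
    };
}

constant_fns!(answer => 42, zero => 0);

fn main() {
    assert_eq!(answer(), 42);
    assert_eq!(zero(), 0);
}
```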

> broad design emphasis on preventing common programming mistakes.

I think this is a feature, or at least an intended one, of every higher-level programming language :-)


On the rejection-of-OOP thing; I agree that seems like the biggest similarity between Go and Rust, but even then, the major replacements for those features are completely different – Rust's traits are more like typeclasses and C++ templates (to some extent) than they are to Go's interfaces (though "trait objects" are like interfaces, but used less often). It also seems like Rust will eventually add some form of more traditional OOP features like inheritance, because the servo project would like such things to implement the DOM (and I think others are also interested for different reasons), which I doubt Go will ever do.



> It also seems like Rust will eventually add some form of more traditional OOP features like inheritance, because the servo project would like such things to implement the DOM

There's been a great community and core team effort to design small, orthogonal language features (or extensions to existing features) that can be used to regain the performance + code reuse benefits one gets from using inheritance, without all the associated warts. The DOM/Servo problem is a tough one and it's going to be very interesting to see if Rust can solve it without resorting to the blunt instrument of inheritance.

Here's an example from last year of such an attempt, in RFC form: https://github.com/rust-lang/rfcs/pull/250


Yeah I've followed the evolution of that discussion with interest (or I did until a few months ago, so I'm likely out of date), and came to the, perhaps incorrect, conclusions that it is likely there will be some solution added at some point, and it is likely to share some trade-offs with inheritance.


> Having a complex type system would cut into compile times

At least in Rust, type checking takes basically no time compared to things like optimization passes.


> Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.

Oh give this lame argument a rest already. They're both supposedly general-purpose programming languages. Rust shoots for a little lower level than Go, but they can certainly be compared, and the comparisons are valid.

> Having a complex type system would cut into compile times (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).

I'm not sure how this is a defense of Go: yes, that's a primary design goal; it's also a terrible design goal. Trading an adequate type system for a short-term gain in compile time is an amateurish mistake.

Compilation time can be mitigated on even the largest projects by only rebuilding changed modules. If you're getting to the point that this isn't working, then maybe it's time to start working on reducing the size of your code.

I've worked on projects that were 500,000 lines of code and compile time was sometimes an issue, but not nearly as large an issue as bugs which would have been caught by an adequate type system (large C#/Java codebases pre-generics).

> As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.

We've tried this approach before, many, many times, and the result has been high-profile bugs.

The fact is, these "idioms and best practices" will not be followed perfectly on projects of any reasonable size if they are not enforced by code. Why would you not enforce them with code? And if you're designing a language, why not design the language in such a way as to allow code enforcing best practices? Like, a type system?

> Also, I would think it's pretty obvious that once you've sent something over a channel you shouldn't try and write to it anymore[1].

Yes, just like it's pretty obvious that when you free memory you shouldn't write to it any more. We've never had any issues with that, now have we?

> It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.

I didn't know about `go vet` and I wish I didn't.

So, to avoid a second pass over the code during compilation, you add a second pass over the code during `go vet` that has fewer capabilities. Good thinking. I'll add that to the list of other stupid ideas that were tried decades ago and didn't work, but are finding new life in Go.

> Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.

Yes, adding syntax for static type checking will definitely make static analysis like type checking harder. Are you fucking kidding me?


> The fact is, these "idioms and best practices" will not be followed perfectly on projects of any reasonable size if they are not enforced by code. Why would you not enforce them with code?

A compiler should not be an obstacle to the programmer. We've tried enforcing things before, many times. Those things got abandoned.


Sigh. We build machines to automate the repetitive, to eliminate the daily drudgery and to repeat steps with perfection (that we would repeat with perfection ourselves if only we were as focused as a machine). So why do we keep finding ourselves arguing that a lazy compiler, which offloads the work of a machine onto a dev team, is an acceptable compromise?


Meta-comment: I believe the difference in opinion here (which seems to recur, over and over, and has for decades) is because the job title of "software engineer" actually encompasses many different job duties. For some engineers, their job is to "make it work"; they do not care about the thousand cases where their code is buggy, they care about the one case where it solves a customer's problem that couldn't previously be solved. For other engineers, their job is to "make it work right"; they do not care about getting the software to work in the first place (which, at their organization, was probably solved years ago by someone who's now cashed out and sitting on a beach), they care about fixing all the cases where it doesn't work right, where the accumulated complexity of customer demands has led to bugs. The first is in charge of taking the software from zero to one; the second is in charge of taking the software from one to infinity.

For the former group, error checking just gets in their way. Their job is not to make the software perfect, it's only to make it satisfy one person's need, to replace something that previously wasn't computerized with something that was. Oftentimes, it's not even clear what that "something" is - it's pointless to write something that perfectly conforms to the spec if the spec is wrong. So they like languages like Python, Lisp, Ruby, Smalltalk, things that are highly dynamic and let you explore a design space quickly without getting in your way. These languages give you tools to do things; they don't give you tools to prevent you from doing things.

The second group works in larger teams, with larger requirements and greater complexity, and a significant part of their job description is dealing with bugs. If a significant part of the job description is dealing with bugs, it makes sense to use machines to automate checking for them. And so they like languages like Rust, C++, Haskell, Ocaml, occasionally Go or Java.

The two groups do very little work in common (indeed, most of the time they can't stand to work in the opposing culture), but they come together on programming message boards, which don't distinguish between the two roles, and hence we get endless debates.


My point was: tools that prevent you from doing things should not do that without explicit permission. Because thinking is hard and any interruption by a tool or a compiler will impose unnecessary cognitive load and will make it even harder, which may lead to a logical mistake. It is much better to deal with the compiler after all the thinking is done, not during.


I'm pretty sure you've never used a language with a good type system then.

You describe a system where you have to keep everything a program is doing that's relevant in your head at once, and when you're forced out of that state, it's catastrophic. You seem to be assuming that's the only way to get productive work done while programming. I happen to know it's not.

If a language has a sufficiently good type system, it's possible to use the compiler as a mental force multiplier. You no longer need to track everything in your head. You just keep track of minimal local concerns and write a first pass. The compiler tells you how that fails to work with surrounding subsystems, and you examine each point of interaction and make it work. There is no time when you need the entire system in your head. The compiler keeps track of the system as a whole, ensuring that each individual part fits together correctly. The end result is confidence in your changes without having to understand everything at once.

So why cram everything into your brain at once? Human brains are notoriously fallible. The more work you outsource to the compiler, the less work your brain has to do and the more effectively that work gets done.


Yes, but "tools that prevent you from doing things you would prefer not to have done in the first place (while still granting you permission to override this when desired)" would be a fairer assessment of what a strict compiler is.

We all agree that a null dereference is a bad thing at runtime. I see no advantage for me as a programmer to be allowed to introduce null dereferences into my code as a side effect of "getting things to work" if then when the code runs it doesn't work right. This increases my cognitive load as a programmer, it does not decrease it.

I would argue that you don't think about the compiler any more when using a language like Haskell than you do when using Python. But you do get more assurances about your program after ghc produces a binary than after Python has finished creating a .pyc -- and that is a win for the programmer.


Agreed. But every production language that I'm aware of has an out that allows you to escape its type system, with the exception of languages whose type systems are intended to uphold strong security properties and verification languages that feature decidable logics. I can't remember for sure, but I think even Coq--which is eminently not a language designed for everyday programming--may diverge if you explicitly opt out (though I could be wrong about that).

The questions, to my mind, are

1. How easy is it to opt out?

2. How often do you have to opt out?

3. How easy is it to write well-typed expressions?

4. What guarantees does a program being well-typed provide?

For example, you almost never have to opt out of the type system in a dynamic language, but the static type system is very basic. In a language like Rust, you opt out semi-frequently (unsafe isn't common but it's certainly used more often than, say, JNI), and it can be hard to type some valid programs, but opting out is simple and the type system provides very strong guarantees. In a language like C, you never have to opt out of the type system, and the annotation burden for a well-typed program is minimal, but the type system is unsound--being well-typed guarantees essentially nothing in C.

All languages fall somewhere along this spectrum, including Go. It's just a question of what tradeoffs you're willing to make.


I think it's worth clarifying that Rust's `unsafe` doesn't opt-out of the core type system per se, it allows the use of a few additional features that the compiler doesn't/can't check. I think this distinction is important because, as you say, `unsafe` isn't very uncommon and so it is nice that one still benefits from the main guarantees of Rust by default inside `unsafe`. :)


Sophisticated type systems have not been abandoned by any stretch of the imagination.

We tried static code generators before, as well as linters and static code analysis on untyped code, and those have pretty well been proven to be ineffective. All of which are supposedly "new innovations" in Go. So if you want to defend Go that's not an approach you can really take.


Compile time + Go toolchain add-ons (generate ..) ..


My understanding is that most languages are context free in their syntax, but that type checking and namespace "stuff" are (almost?) always context-sensitive.


On a somewhat interesting note: transferable objects work similarly in web workers. There have been complaints, of course, that this makes it impossible to take advantage of shared memory in cases where it provides clear performance benefits.


Wouldn't the concept of borrowing avoid that issue in Rust though?


Actually, atomics and locks (which are fully supported) are what avoid that issue in Rust. You can't borrow across multiple threads.


You can now with scoped threads, provided that you don't require mutation.


Actually, it's possible to share a &mut into a scoped thread too, and perform mutation through it. It's safe because the &mut only ever exists in one place at a time.



