The crazy thing is that sum types (Rust enums) and pattern matching have been around for at least 30 years. I'm simply not interested in learning any new language that doesn't have sum types; they allow you to write incredibly expressive and terse code.
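To make that concrete, here's a minimal sketch (the `Shape` type and names are made up, not from any particular codebase): each variant carries its own data, and a `match` has to handle every variant or the compiler complains.

```rust
// A sum type: a value is exactly one of these variants, each with its own data.
enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
}

// Pattern matching destructures the variant and must be exhaustive:
// add a new variant and this function stops compiling until it's handled.
fn area(shape: Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
    }
}

fn main() {
    println!("{}", area(Shape::Circle { radius: 1.0 }));
}
```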
For all the complaints about the breakneck pace of change in Rust, it's a very "boring" language: with the single exception of the borrow checker, all of its features already exist in other languages.
I know this won't be popular to say on HN, but I think Rust has the same problem Perl does. The language has so many systems to learn, and such a tough syntax, that it looks unreadable to people just starting out.
I mean, I could see learning Rust being really hard if you only know something like Python or JS. But the only "system", as you say, that is present in Rust without something analogous in C++ is the borrow checker, and lifetimes still exist in C and C++; the compiler just doesn't check them for you. Rust is significantly simpler and easier to learn than C++.
C++ has a much shorter time to first non-"hello world" program than Rust. C++ has a lot of features, but few of them are mandatory for general development. With Rust you have a pretty steep hill to climb before your first non-trivial program compiles.
C++ and Rust, IMO, have a very similar feature set; Rust just puts it up front as a proper part of the language. Those C++ features are pretty much mandatory for general development, and likewise you will find them in most open source and production projects. Programming without them is just one of the many ways C++ gives you enough rope to hang yourself.
Yes, you could program C++ without even knowing what std::unique_ptr is (and I talk to many college grads with C++ on their resume who don't know what unique_ptr is, or that C++ has more than one type of pointer). But Rust won't let you freely dereference raw pointers outside an `unsafe` block (the language itself enforces it), whereas in C++ you will be told "make sure you have read Google's 10,000-word style guide before committing any code".
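To spell out what "won't let you" means (a minimal sketch, not from the comment): safe Rust will happily let you create a raw pointer, but dereferencing it only compiles inside an `unsafe` block, so the opt-out is visible in the code rather than buried in a style guide.

```rust
fn main() {
    let x = 42_i32;
    let p: *const i32 = &x; // creating a raw pointer is fine in safe code

    // println!("{}", *p);  // error[E0133]: dereference of raw pointer is unsafe

    // The dereference has to be opted into explicitly:
    let value = unsafe { *p };
    println!("{}", value);
}
```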
I believe https://news.ycombinator.com/item?id=23715759 works as a response to your point. In my eyes syntax is the least interesting part of any language; semantics are way more important, quite a bit of the syntax ends up being derived from them, and the rest boils down to aesthetics. The syntactic complexity Rust has is there because it is encoding a lot of information, modulo things like "braces vs whitespace blocks" and "<> vs []" which, again, come down purely to style. Also, having a verbose grammar is useful for tools like the compiler and IDEs, because having lots of landmarks in your code aids error recovery and gleaning intent from not-yet-valid code.
It's not any particular feature that makes a language a mess; it's the interaction between the features. It's a bit like mixing paint: it's very easy to end up with greyish poop.
Go was designed by very experienced programmers who understood the cost of abstraction and complexity well.
They didn't do an absolutely perfect job. It's probably true that Go would be a better language with a simplified generics implementation, enums, and maybe a bit more. That they erred on the side of simplicity shows how they were thinking. It's an excellent example of less is more.
Most programmers never gain the wisdom and/or confidence to keep things boringly simple. Everyone likes to use cool flashy things because it makes what can be a boring job more interesting.
But if your goal is productivity, and the fun comes from what you accomplish, then the code can be relatively mundane and still be very fun to write.
Precisely, and this is one area where Go fails completely. The features don't interact well at all!
Tuple returns are everywhere, but there are no tools to operate on them without manually splitting the halves, conditionally checking whether one of them exists, and returning something different for each possibility. Cue the noise of subtly different variants of `if res, err := f(); err != nil` in every function.
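For contrast, here's a minimal sketch (function and file names made up) of what the same shape of code looks like when the language gives you a sum type for errors and an operator to propagate them:

```rust
use std::error::Error;
use std::fs;

// Each fallible step returns a Result; `?` either unwraps the success value
// or returns early with the error, so there's no per-call `if err != nil` block.
fn port_from_file(path: &str) -> Result<u16, Box<dyn Error>> {
    let text = fs::read_to_string(path)?; // io::Error converted into Box<dyn Error>
    let port = text.trim().parse::<u16>()?; // ParseIntError converted the same way
    Ok(port)
}

fn main() {
    match port_from_file("port.txt") {
        Ok(port) => println!("listening on {}", port),
        Err(e) => eprintln!("config error: {}", e),
    }
}
```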
Imports were just paths to repositories. Everything was assumed to just pull from the tip of the branch, and this was considered to be just fine because nobody should ever break backwards compatibility. They've spent years trying to dig themselves out from under this one.
Everything should have a default zero value. Including pointers. So now we're back to manual `nil` checking for anything that might receive a nil. But thanks to the magic of interfaces, if a function returns an interface value wrapping a nil pointer, that value will fail an `== nil` check even though the pointer inside it is nil! This is completely bonkers.
Go has implicit implementation of interfaces, which makes exhaustive checking in type switches impossible. So you type-switch and hope nobody adds a new interface implementation. You helpfully get strong typing everywhere except the places where you're most likely to actually mess something up.
Go genuinely feels like a language where multiple people each had their pet idea of some feature to add, but nobody ever came together to work on how to actually make those features work in concert with one-another. That anyone could feel the opposite is absolutely incomprehensible to me.
Given that I am involved in the Rust project I'm very likely biased, but because I've focused on the learnability of the language (diagnostics and ergonomics) I have a bit of context on this subject.
When designing a language there are intrinsic design constraints (the things the project wants to focus on, be they features of the language or of its associated tooling, like generics or compilation speed) and extrinsic ones (external impositions, like running on certain platforms or interfacing with existing technologies: shipping a statically linked binary on Linux, debugging with gdb, calling C libs without runtime translation). All languages have (or should have) the objective of being easy to learn, pick up, and use long term. It might just not be the top priority.
For the sake of argument, take Python, where expressiveness at runtime and clean syntax are prioritized over speed; Go, where fast compilation and multithreaded microservices are prioritized over more complex language features; and Rust, where fast binaries and expressiveness are prioritized over ergonomics (when push comes to shove this is the case, otherwise you would never need to call `.clone()` or add `&` to arguments when calling a method). In each, you can see how these objectives permeate every decision throughout the language.
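To illustrate that last parenthetical (a made-up sketch, not from the comment): ownership makes the cost of sharing a value explicit at the call site, which is exactly the ergonomics tax being described.

```rust
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let name = String::from("rustacean");

    let loud = shout(&name); // explicit borrow: no copy, `name` is still usable
    let len = consume(name.clone()); // explicit clone: pay for the copy to keep `name`

    println!("{} -> {} ({} bytes)", name, loud, len);
}
```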
When it comes to Rust in particular, I feel it is still a boring language despite the appearance of having too many features, precisely because of how those features interact and fit together naturally. It is not the best fit for every use case, and it is one of the projects out there embracing the fact that it can't be as easy to learn as we'd like (without sacrificing some of the constraints that make it interesting as a systems language). But we can rely on the compiler being a necessary part of the developer toolchain: we can make it understand the user's intent when they do things that make sense from an extrapolated misunderstanding of the language, and help them write the "correct" code instead. This has the added benefit that reading the code is easier, because you have to "guess" much less about what it is doing. Remember that if the code can confuse a parser, it will also confuse humans. On the opposite end of the spectrum you have JavaScript, whose grammar has a lot of optional or redundant ways of doing the same thing (think semicolon insertion), which makes reading and debugging code harder. That is a reasonable approach in a case like the web, less so in a compiled language that can evolve independently from the end users' platform.