Rust is mostly safety (graydon2.dreamwidth.org)
487 points by awalGarg on Dec 29, 2016 | 458 comments



I'm a lowly ancient Java programmer and I think Rust is far, far more than safety.

In my opinion Rust is about doing things right. It may have been about safety at first but I think it is more than that given the work of the community.

Yes, I know there's a right tool for every job and it's impossible to cover all use cases, but IMO Rust is striving for iPhone-like ubiquity.

I have never seen a more disciplined and balanced community approach to creating a programming language. Everything seems to be carefully thought out and iterated on. There is a lot to be said for this (although ironically I suppose one could call that approach safe)!

A PL is more than the language. It is its works, community, and mindshare.

If Rust were only concerned with safety, I don't think so much work would be put into making it consumable for everyone, with continuous improvements to compiler error messages, easier syntax, and improved documentation.

Rust is one of the first languages in a long time that makes you think differently.

If it is just safety... safety is one overloaded word.


I think this comment and the OP are both correct. The point of Rust is safety, in the sense that memory safety should be the default for all programming languages, and has been the default for all non-systems languages since the 90s. The only holdouts have been because people used to claim that memory safety wasn't possible without making a language unusably slow, which Rust has disproven. In all my years of teaching Rust, I can't count how many times I've told people that you can have a memory-safe systems language without garbage collection and have them look me in the eye and say, "what, no, that's supposed to be impossible", and I still get a kick out of it every time.

The importance of Rust is that it's raised the baseline for low-level languages in the modern age. If any future systems languages emerge that don't feature memory safety, it will have to be a deliberate choice that must be defended rather than just an implicit assumption of how the world works.


I really want to love Rust, and while I understand the Rust borrow checker in theory, actually using it in practice has been a major headache. I tried Rust on a simple terminal-based project, and after a week of feeling that I was getting nowhere I switched to Go and had a proof of concept in several hours.

With that said, can you recommend a good source to really understand best practices and patterns for ownership and borrowing? I feel that's the biggest hurdle to using Rust (at least in my case).


Alas, I think a week is too short to give Rust. Go will get you more pay-off in the short term, but as you internalise the rules of Rust you'll really start to reap the benefits. Unfortunately being an experienced user, I'm not aware of a single online source for learning this stuff - I mainly teach people directly in person or on IRC.


Agreed. It took me about two months to really understand Rust, and I was coming from a background of languages like C++ and Scala. It pays off, though, for what I want to do with computers.

The rise of "it demos well so we should use it" is in a lot of ways troubling. The inflection point for productivity doesn't need to be "five minutes in" to be worthwhile if you're doing something that is, itself, worthwhile.


> Unfortunately being an experienced user, I'm not aware of a single online source for learning this stuff - I mainly teach people directly in person or on IRC.

How about http://rust-lang.github.io/book/ ?


Mind sharing the source code? I can whip out a Rust version in the same amount of time. I have been using Rust for two years. I never worry about the borrow checker as it never gives me problems.


If you know Swift/ObjC pretty well, with its ARC reference-counting memory management, do you pick up the borrow checker faster?


AFAIK Swift doesn't really have references or anything to enforce unique ownership. If you've ever used a language with pointers before, including the simple pointers that Go has, then that's a good start. If you further understand how escape analysis works in Go (or the various escaping annotations in Swift), then imagine a language where every variable must never escape, and where this is enforced by the compiler.

Mostly I think that the fear of the borrow checker has become more meme than truth at this point. The difficulty with Rust is that it combines things found in different languages, so no matter who you are you probably have to learn something: a strong type system, unique ownership, and pointers; and if you're coming from a language like Python or JavaScript you may know none of this! But that's what we're here for (that's where I came from!), and we like to help. :)
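For readers coming from Python or JavaScript, here's a minimal sketch of the two borrow rules in practice (the function names and numbers are made up for illustration):

```rust
// Shared borrow (&): the function may read but not modify,
// and any number of shared borrows can coexist.
fn total(scores: &[i32]) -> i32 {
    scores.iter().sum()
}

// Mutable borrow (&mut): exclusive access for the duration of the call.
fn add_bonus(scores: &mut Vec<i32>, bonus: i32) {
    for s in scores.iter_mut() {
        *s += bonus;
    }
}

fn main() {
    let mut scores = vec![10, 20, 30];

    let before = total(&scores); // shared borrow ends when the call returns
    add_bonus(&mut scores, 5);   // exclusive borrow, also scoped to the call
    assert_eq!((before, total(&scores)), (60, 75));

    // What the compiler rejects: holding a shared borrow across a
    // mutation. Uncommenting these lines makes the build fail, because
    // `push` may reallocate the Vec and invalidate `first`.
    // let first = &scores[0];
    // scores.push(40);
    // println!("{}", first);
}
```

The key point is that all of this is checked at compile time; there is no reference-count or GC work happening at runtime.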


Seeing that people keep testifying to the fact that the borrow checker gives them problems, I wouldn't say it's just a meme. It's easy and tempting to imagine problem points disappearing over time, but realistically, some portion of people will always run into it, because it's unique and fundamentally a bit complex. It's better to spread the message that running into borrowing problems is a common, real, yet temporary hangup that can be overcome.


I would say that this is a common message, to a degree. What I commonly see people expressing about Rust is that it took them a while to internalize the borrow checker, possibly a few weeks, and where before that point it was painful to work with to some degree, afterwards it was a boon as it helped them more quickly spot problems and forced them to consider the problem more closely before committing to code that might need to be scrapped.

As someone who has yet to do more than do minimal dabbling in the language, this is a very positive message. It expresses that there is some work to learning this so don't be put off if you experience it, it's normal, and that the work required to learn it pays off in the end.

That's probably a more appropriate message than that it's not hard. I don't think it's appropriate to express to people that learning pointers in C and C++ is "easy". It's not "hard", but it's not necessarily easy for some people. It requires a specific mental model, and depending on how they learned to program, it may be more or less easy for them to wrap their head around. Afterwards, it's easy and makes sense. I assume the borrow checker follows a similar learning hurdle. That doesn't mean we should forget what it's like before we've learned it though (and at this point, there's probably a lot of people in the midst of learning rust that haven't quite fully internalized the borrow checker).


I'm inclined to agree. It took me a week or two of daily usage to finally "get used" to the borrow checker. It's a bit of a paradigm shift, to be sure, but it's not insurmountable. In fact, I think learning Rust has made even my C code better, because now I'm in the habit of thinking thoroughly about ownership and the like, which is something that I did to an extent before (because you have to to write robust software without a GC), but it was never explicit and I never would've been able to articulate the rules like I can now.

That said, I still occasionally have problems where I feel like I'm doing something "dirty" or "hacky" just to satisfy the borrow checker. It's easy to program yourself into a corner and then find yourself calling `clone()` (the situation, I've been told, has gotten much better in recent releases with improvements to the borrow checker, but alas I haven't had a chance to play much with Rust in nearly a year).

Another thing that I still find difficult is dealing with multiple lifetimes in structs, to the point that I usually just say "to hell with it" and wrap everything in an `Rc<T>`. And sometimes there's simply no safe way (afaict) to do some mutation that I want to do without risking a panic at runtime (typically involving a mutable borrow and a recursive call), which leads to a deep re-thinking of some algorithm I'm trying to implement. That's not Rust's fault, though—it's a real, theoretical problem that arises in the face of mutation. In time, I'm sure there will be well-understood patterns for handling such cases.
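To illustrate the `Rc<T>` escape hatch mentioned above, here's a sketch (the `Config`/`Node` structs are made-up examples, not from any real codebase):

```rust
use std::rc::Rc;

struct Config {
    verbose: bool,
}

// Without Rc this struct would need a lifetime parameter, e.g.
// `struct Node<'a> { config: &'a Config }`, and that parameter
// infects every type that contains a Node. Rc trades a small
// runtime cost (a reference count) for lifetime-free types.
struct Node {
    config: Rc<Config>,
}

fn main() {
    let config = Rc::new(Config { verbose: true });

    // Cloning an Rc copies a pointer and bumps the count; the
    // Config itself is never duplicated.
    let a = Node { config: Rc::clone(&config) };
    let b = Node { config: Rc::clone(&config) };

    assert!(a.config.verbose && b.config.verbose);
    assert_eq!(Rc::strong_count(&config), 3);
}
```

(Rc is single-threaded; Arc is the thread-safe equivalent.)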


I absolutely agree.

If you look at Rust from a systems programmer perspective and compare it with the systems languages OP lists then, yes, safety is THE most radical feature.

But Rust can compete on so many more levels: web services and user-facing applications, for example. Languages competing in that space usually bring memory safety, so it's kind of a non-issue. Safety enables Rust to be a viable choice for these tasks, but it needs more than that to be on par with the other languages. And Rust's got plenty of things going for it, so there's nothing wrong with setting aside the safety card (since that is expected anyway) and painting Rust as a language that is actually fun to work with.


> But Rust can compete on so many more levels

How? There are languages with more expressive, higher-level type systems (Haskell/OCaml, presumably). There are languages with much more mature libraries, ecosystems, and tooling (C#/Java). There are languages with both (F#/Scala).

What is it that makes Rust a good applications programming language? You said it yourself: GC doesn't really matter that much in this space, and GC-based languages are just more elegant, not to mention the tooling is way more mature. The runtime also doesn't matter that much, and with the recent changes to .NET it can be avoided anyway.

This Rust fanboyism is turning into the new Node hype: "use Node for everything, Node is web-scale fast because it uses event-loop IO instead of thread-based IO"; "use Rust for everything because the type safety is literally the best and it's the only language with a strong type system out there". I get it, it's new and shiny, I like it too, and it has a strong argument to make in systems programming (the designers are doing a good job of making trade-offs that let you retain low-level control while still having memory safety), but those trade-offs are just that, and they come at the expense of higher-level stuff. Higher-level languages don't need to make them because they don't pretend to be systems programming languages.


Yeah, on any given dimension, even safety, you'll find languages that are stronger than Rust. It excels IMO because of the balance it finds between type system, safety, functional vs imperative code, etc. It puts all these together in one package that I feel like I can use for actual work. I don't know of any other solid contenders here, except maybe Swift.

I have to agree that the "Rust all the things!" game is getting old.


When I write in Rust, I have the feeling "that's how programming should work". I don't know how to express it in a more scientific way. Error handling, pattern matching, the Result type: it's all how it should work. I know some other languages have similar features, but Rust also has race-condition protection (a very important thing), a good package manager out of the box, a testing tool (cargo test), a very smart compiler, and great performance. Just all the best.
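For instance, the Result-plus-pattern-matching style praised here looks roughly like this in its minimal form (the function is a made-up example):

```rust
// Errors are ordinary values: the caller must acknowledge the Err
// case before it can touch the Ok value.
fn parse_port(s: &str) -> Result<u16, String> {
    match s.parse::<u16>() {
        Ok(0) => Err("port 0 is reserved".to_string()),
        Ok(n) => Ok(n),
        Err(e) => Err(format!("not a valid port: {}", e)),
    }
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("0").is_err());
    assert!(parse_port("not-a-port").is_err());
    assert!(parse_port("70000").is_err()); // overflows u16
}
```

There are no exceptions to forget about: the error path is spelled out in the type signature.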


Lots of good programming languages out there today honestly. Rust is great. I recommend everyone learns themselves a language for each need.

I've got Clojure/Clojurescript, Rust, Python, HTML5 JavaScript, F#/C# and Java.

Clojure/Clojurescript is used for long running processes and web development. So backend/frontend stuff. I also use it for fun experiments, the REPL is great at that.

Rust is used when performance matters, when I want the simplicity of running a native executable with no heavy dependency, and when I need to write low level components.

Python is my goto scripting language for quick scripts, hacks, messing around. It also serves my scientific computing needs, data mining, visualization, etc. Also, this is the best language I've found for doing coding interviews or practice. So easy to whiteboard python.

HTML5 JavaScript is used for most web development, for customizing my editor Atom.io, and other things. With Metro apps and other OSes moving towards an HTML5/JavaScript stack, it's also quite good for some user apps.

F# is good for simply knowing an ML language and when I want more functional flair in the .Net world.

C# and Java are used to pay the bills, as they are easiest to find jobs for.

There would be valid alternatives for most of those categories, but I highly recommend everyone invest in knowing one language for each one.


I get the opposite feeling. When I write in Rust, I have the feeling "that's how programming shouldn't work". The syntax is awkward and you are expected to spend weeks to get anything done. Ultimately, it seems to come down to Rust fans falling for the sunk-cost fallacy: I spent weeks learning this, so it must be worth it.


When I was learning C back in 1990, it took me a long time to get comfortable with it, and that's after years of programming in assembly language (8-bit, 16-bit, 32-bit). It took me some time to stop writing assembly in C and start writing C in C, if that makes sense.

I haven't switched to Rust yet (I still deal with C/C++ at work on "legacy" code, and I'm having too much fun with Lua for my own stuff) but I don't expect to pick it up "quickly" and I'm sure I'll be trying to write C in Rust for some time. It comes with the territory.


Just because some people have a different opinion than you doesn't mean it is due to fallacious reasoning. Ultimately, it seems to come down to Rust haters falling for the sunk-cost fallacy: I spent years writing bad code in bad languages, so I don't want to give that up to do things better.


Please be more patient :) Flamewars will definitely not help us. I'm nobody to criticize the way you communicate, but for the sake of the Rust community's reputation, please let's avoid "Rust vs. X" wars.


Please be more patient :) Replying to posts you didn't read will not help us, definitely.


A few weeks is not enough time to get it when you don't like the ideas of the language initially. I tried to read one dynamic language (I won't name it, to avoid the fury of its fans) and realized very quickly that it's not my language. The only difference: I've never been thinking about a "sunk-cost fallacy". There are languages I can use and there are others, and this diversity is necessary for healthy evolution.


I spent weeks learning other languages, just as I have Rust, but I still prefer to write code in Rust.


> This Rust fanboyism is turning in to the new node hype "use node for everything node is web scale fast because it uses an event loop io instead of thread based io"

If what you are suggesting is true, isn't the biggest problem by far the fanboyism posing as knowledge? Wouldn't complaining about Rust be like living in the age of alchemy and complaining about someone's particular potion? Isn't the epistemological squishiness of the entire field the biggest problem by far?


No, that's cache invalidation. And naming things.


IMO the biggest plus for Rust is actually Cargo. Building, versioning, and sharing modular code is essentially copy/paste in C/C++, and compared to that, Cargo is light-years ahead.

I actually wish Rust would accept its systems niche even more, move the stdlib to crates, and make no_std the default mode. Personally, I see no reason to market Rust for web apps or GUI stuff; it can't/won't compete with Rails/Qt there for years to come, if ever.


I couldn't agree more with this. Cargo (and the general desire to think and work hard on ease-of-use) is a huge part of what makes Rust a pleasure. It alone would be enough for me to steer someone with experience in neither to Rust over C++ for lots and lots of use cases.


I haven't looked at rust, and this sentiment is exactly what has kept me away. I watched perl, java, python, ruby, node and c++ (boost) fall into the trap of "we know better than the end-user/developers/sysadmins/os vendor, so let's reinvent dpkg poorly".

Why should cargo be any different? It is solving a problem I don't have (debian, ubuntu, openbsd, and freaking illumos all have acceptable package management), and creating a massive new problem (there is a whole thread below this one talking about rust dll hell between nightly and stable, and the thread links to other HN articles!). From my perspective all this work is wasted just because some developers somewhere use an OS that doesn't support apt or ports.

Sorry this is so ranty, but I really want to know if anyone has had luck using rust with their native package manager.


TL;DR: I think language-centric package managers do a better job at versioning packages per-project. Here's an anecdote to explain what I mean.

-----

Let's say I want to build a piece of software that depends on some software library written in C at version 1.0.1. It's distributed through my system package manager, so I sudo apt-get install libfoo.

~~ some time later ~~

Now let's say I want to build a different piece of software that also depends on foo, but at version 1.2.4. I notice that libfoo is already installed on my system, but the build fails. After a quick sudo apt-get install --only-upgrade libfoo, this piece of software now builds.

~~ Even later ~~

When I revisit the first project to rebuild it, the build fails, because this project hasn't been updated to use the newer version yet.

I'm fairly inexperienced with system package managers, but this is the wall I always hit. How should I proceed?
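For what it's worth, this is exactly the situation per-project manifests are meant to avoid: with a Cargo-style tool, each project records its own requirement, and both library versions are cached side by side (the library name and versions here just mirror the hypothetical libfoo above):

```toml
# project-one/Cargo.toml
[dependencies]
foo = "=1.0.1"   # exact pin: this project hasn't been updated yet

# project-two/Cargo.toml
[dependencies]
foo = "1.2.4"    # semver range (>=1.2.4, <2.0.0 under Cargo's default caret rule)
```

Each project's Cargo.lock then pins the exact resolution, so rebuilding one project never changes what the other builds against.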


I'm arguing that you should just have one package manager, so in my world the only way for this to happen is if both the packages you're installing are tarballs that no one has bothered to port to your system. If the language-specific package manager did not exist, then there would be a better chance the package would exist for your OS already.

Anyway, Debian/Ubuntu has multiple fallbacks for this situation:

a. ppa's

b. parallel versions for libraries that break API compatibility (libfoo-1.0...deb, libfoo-1.2...deb that can coexist).

c. install non-current libfoo to ~/lib, and point one package at it (not really debian-specific)

d. debootstrap (install a chroot as a last resort -- this is better than "versioning packages per-project" from an upgrade / security update point of view, but worse from a usability perspective -- you need to manage chroots / dockers / etc).

I suspect the per-project versioning system is doing b or d under the hood. b is clearly preferable, but hard to get right, so you get stuff like python virtual environments, which do d, and are a reliability nightmare (I have 10 machines. The network is down. All but one of my scripts run on all but one of the machines...)

A long time ago, I decided that I don't have time for either of the following two things:

- libraries that frequently break API compatibility

- application developers that choose to use libraries with unstable APIs that also choose not to keep their stuff up to date.

This has saved me immeasurable time, as long as I stick to languages with strong system package manager support.

Usually, when I hit issues like the one you describe, it is in my own software, so I just break the dependency on libfoo the third time it happens.

When I absolutely have to deal with one package that conflicts with current (== shipped by os vendor), I usually do the ~/lib thing. autotools support something like ./configure --with-foo=~/lib. So does cmake, and every hand-written makefile I've seen.

[edit: whitespace]


> talking about rust dll hell

No, what that thread is talking about is that somebody wrote a library to exercise unstable features in the nightly branch of the Rust compiler, and that inspired somebody else to write a sky-is-falling blogpost claiming that nightly Rust was out of control and presented a dozen incorrect facts in support of that claim, so now we have to bother refuting the idea that nightly Rust is somehow a threat to the language.

As for the package manager criticism, the overlooked point is that OS package managers serve a different audience than language package managers. The former are optimized for end-users, and the latter are optimized for developers. The idea that they can be unified successfully is yet unproven, and making a programming language is already a hard enough task that attempting to solve that problem is just a distraction.


From the thread, I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree--people are talking about when packages will land in stable, but I'd expect that to all be automated by test infra, and too trivial for developers to work around to warrant a forum thread.

Anyway, it sounds like I stepped on a FUD landmine. Sorry.

It sounds like you work in this space. From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s, and it supports many languages, some of which don't seem to have popular language-specific package managers.

What's missing? Is it just cross-platform support? I can't imagine anything I'd want beyond apt-get build-dep and apt-get source.


> Anyway, it sounds like I stepped on a FUD landmine.

That's the problem with FUD, it gets everywhere and takes forever to clean up. :)

> I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree

Let's be clear: stable is a strict subset of nightly. And I mean strict. All stable code runs on nightly, and if it didn't, that would mean that we broke backwards compatibility somehow. And even if you're on the nightly compiler, you have to be very explicit if you want to use unstable features (they're all opt-in).

Furthermore, there's no ironclad reason that any given library must be on nightly, in that boring old every-language-is-Turing-complete way; people use unstable features because they either make their code faster or make their APIs nicer to use. You can "backport" them by removing those optimizations or changing your API, and though that seems harsh, note that people tend to clamor for stable solutions to their problems, so if you don't want to do it then somebody else will fork your library, do it, and steal your users.

There are strong incentives to being on stable: since stable code works on both nightly and stable releases, stable libraries have strictly larger markets and therefore mindshare/userbase; and since stable code doesn't break, library maintainers have much less work to do. At the same time, the Rust developers actively monitor the community to find the places where relatively large numbers of people are biting the bullet and accepting a nightly lib for speed or ergonomics, and they then actively prioritize those unstable features (which is why deriving will be stable in the February release, getting Serde and Diesel exclusively onto stable; together those represent the clear plurality of reasons-to-be-on-nightly in the wild).

> What's missing?

I've already typed enough, but yes, cross-platform support is a colossal reason for developers favoring language-specific package managers. Another is rapid iteration: it's way, way easier to push a new version of a lib to Rubygems.org than it is to upstream it into Debian. Another is recency: if you want to use the most recent version of a given package rather than whatever Debian's got in stock, then you have to throw away a lot of the niceties of the system package manager anyway. But these are all things users don't want; they don't want to be bleeding-edge, they don't want code that hasn't seen any vetting, and they really don't care if the code they're running isn't portable to other operating systems.


> From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s

I think a more accurate assessment would be that both Red Hat and Debian extended their repositories to cover enough packages that developers often opt for the easy solution and use distribution packages instead of language-package-manager-provided ones, and there are some additional benefits if you are mainly targeting the same platform (and, to some degree, distribution) that you are developing on.

Unfortunately, you then have to deal with the fact that some modules or libraries invariably get used by code parts of the distribution itself, making their upgrade problematic (APIs change, behavior changes, etc). This becomes problematic when using or targeting a platform or distribution that provides long term support, when you could conceivably have to deal with 5+ year old libraries and modules that are in use. This necessitates multiple versions of packages for a module or library to support different versions sometimes, but that's a pain for package managers, so they tend to only do that for very popular items.

For a real, concrete example of how bad this can get, consider Perl. Perl 5.10 was included in RHEL/CentOS 5, released in early 2007. CentOS 5 doesn't go end-of-life until March 2017 (10 years, and that's prior to extended support). Perl is used by some distribution tools, so upgrading it for the system in general is problematic and needs to be handled specially if all provided packages are expected to work (a lot of things include small Perl scripts, since just about every distro includes Perl).

This creates a situation where new Perl language features can't be used on these systems, because the older Perl doesn't support them. That means module authors don't use the new features if they hope to have their module usable on these production systems. Authoring modules is a pain because you have to program as if your language hasn't changed in the last decade if you want to actually reach all your users. Some subset of module authors decide they don't care; they'll just modernize and ignore those older systems. The package managers notice that newer versions of these modules don't work on the older systems, so core package refreshes (and third-party repositories that package the modules) don't include the updates. Possibly not the security fixes either, if it's a third-party repository and they don't have the resources to backport a fix. If the module you need isn't super popular, you might be SOL with a prepackaged solution.

You know the solution enterprise clients take for this? Either create their own local package manager repo and package their own modules, and add that to their system package manager, or deploy every application with all included dependencies so it's guaranteed to be self sufficient. The former makes rolling out changes and system management easier, but the latter provides a more stable application and developer experience. Neither is perfect.

Being bundled with the system is good for exposure, but can be fairly detrimental for trying to keep your user base up to date. It's much less of a problem for a compiled language, but still exhibits to a lesser degree in library API change.

Which is all just a really long-winded way of saying the problem was never really solved, and definitely not in the '90s. What you have is that the problem was largely reduced by the increasing irrelevancy of Perl (which, I believe, was greatly accelerated by this). Besides Python, none of the other dynamic languages (which of course are more susceptible to this) have ever reached the ubiquity Perl did in core distributions. Python learned somewhat from Perl with regard to this (while suffering it at the same time), but also has its own situation (2->3) which largely overshadows this, so it's mostly unnoticed.

I'm of the opinion that the problem can't really be solved without very close interaction between the project and the distribution, such as .Net and Microsoft. But that comes to the detriment of other distributions, and still isn't the easiest to pull off. In the end, we'll always have a pull between what's easiest for the sysadmins/user and what's easiest for the "I want to deploy this thing elsewhere" developers.


Cargo isn't competing against nor replacing distribution package managers. Cargo is a build tool, not a package manager. You're free to package Rust software the same way you do non-Rust software for specific distributions. They are entirely different unrelated things with no overlap. Cargo solves a lot of problems that we've been facing for a long time. We have the Internet now, so let's use it to speed up development.


apt & ports don't follow you into other OSes. Language-specific package managers do, without requiring entanglement with the OS or admin permissions. All you need is sockets and a filesystem.

I think the language-specific ones will win for developer-oriented library management for platform-agnostic language environments.


Apt and/or ports follow you to every OS kernel I can think of (Linux, MacOS, Windows, *BSD, the Solarises), though the packaging doesn't always (for instance, the OpenBSD guys have a high bar, but that's a feature).

My theory is that each language community thinks it will save them time to have one package manager to rule them all instead of just packaging everything up for 4-5 different targets.

The bad thing about this is that it transfers the burden of dealing with yet another package manager to the (hopefully) tens or hundreds of thousands of developers that consume the libraries, so now we've wasted developer-centuries reading docs and learning the new package manager.

Next, the whole platform agnostic thing falls apart the second someone tries to write a GUI or interface with low-level OS APIs (like async I/O), and the package manager ends up growing a bunch of OS-specific warts/bugs so you end up losing on both ends.

Finally, most package manager developers don't seem to realize they need to handle dependencies and anti-dependencies (which leads you to NP-Complete land fast), or that they're building mission-critical infrastructure that needs to have bullet proof security. This gets back to that "reinvent dpkg poorly" comment I made above.

In my own work I do my best to minimize dependencies. When that doesn't work out, I just pick a target LTS release of an OS, and either use that on bare metal or in a VM.

Also, I wait for languages to be baked enough to have reasonable OS-level package manager support. (I'm typing this on a devuan box, so maybe I'm an outlier.)


Funny, I've found Cargo to be one of the major negatives of Rust.

Is there anyone out there saying "builds only when connected to the internet so it can blindly download unauthenticated software ... SIGN ME UP!"


> In my opinion Rust is about doing things right.

On the other hand there is a quite dark cloud on the horizon with the stable vs nightly split. You can't run infrastructure on nightly builds; or add nightly builds to distributions.


There are very few libraries that are nightly-only in Rust. Clippy is a big one, but clippy is a tool, not a library, so it's no big deal (we're working on making it not require a nightly compiler).

Rocket is a recent one. I talked with the owner of Rocket and one of their goals was to help push the boundaries of Rust by playing with the nightly features. With that sort of meta-goal using nightly is sort of a prerequisite. Meh.

You can use almost all of the code generation libs on stable via a build script. Tiny bit more annoying, but if it's a dependency nobody cares. A common pattern is to use nightly for local development (so you get clippy and nicer autocompletion) and make the library still work on stable via syntex so when used as a dependency it just works.
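The feature-flag half of that pattern looks roughly like this (the feature name and functions are illustrative; the real serde/syntex setup involved a build script as well):

```rust
// In Cargo.toml the crate would declare:  [features]  nightly = []
// Consumers on the nightly compiler opt in with `--features nightly`;
// everyone else gets the stable fallback below.

#[cfg(feature = "nightly")]
fn greeting() -> &'static str {
    "built with the nightly-only fast path"
}

#[cfg(not(feature = "nightly"))]
fn greeting() -> &'static str {
    "built with the stable fallback"
}

fn main() {
    // Compiled without the feature, the stable path is selected.
    assert_eq!(greeting(), "built with the stable fallback");
}
```

The same crate thus compiles on both channels, and users choose the nightly extras explicitly rather than being forced onto them.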

The most used part of the code generation stuff will stabilize in 1.15 so it's mostly not even a problem.


I'm sorry your comment has gotten the response it has.

The looming dark cloud of stable vs. nightly only looks like a dark cloud to those outside the Rust community.

The article that made its way up Hacker News awhile ago (https://news.ycombinator.com/item?id=13251729) got pretty much no traction whatsoever in the Rust community.


I have found the split has only gotten better with time. It used to be that most package maintainers assumed you were using nightly/beta. The last holdouts I see are diesel and serde, which have instructions for using nightly. Even then, they realize no one wants to ship code on nightly, so they provide directions for making a build using stable Rust. Once the procedural macros stuff is stabilized they can stop.

I have been extremely pleased with the rust community and the rust maintainers.

And no I was not paid by them to say this... :)


From my impression, they'll be working with the stable compiler in a few weeks.


That article had so many inaccuracies and/or was hyperbolic. Most of the points there are plain wrong.

http://xion.io/post/programming/rust-nightly-vs-stable.html#...


I think the Rust community didn't take the discussion very far, because it felt like the comparison to Python 2 vs 3 was out of proportion. The single most important difference is that stable Rust code is always compatible with nightly, and comparisons that gloss over that difference feel frustrating. (Other folks have raised more detailed objections too, like the macro features that are about to land in stable.)


I don't see a split here. I know lots of cool ideas have been posted to Hacker News lately, that showcase something possible in future versions of Rust. But I seriously hope people keep these libraries as showcases, not production-quality tools. I'd say this nightly/stable split is a bit out of proportion.

In 2017 we'll get Tokio and lots of Tokio-ready libraries. Some of them already work and compile with stable Rust. And maybe in the end of the year we can take a proper look what we can do with Rocket, or Diesel...


Tokio runs entirely on stable, though 1.0 won't happen until some language changes land.

Diesel does work on stable today, but its nightly features will be on stable in five weeks with 1.15.


> Tokio runs entirely on stable, though 1.0 won't happen until some language changes land.

Are you talking about impl Trait or some other language changes?


impl Trait is what I'm thinking of, yeah.


There's little evidence for a "dark cloud" on the horizon because of Rust nightlies, besides people complaining of dark clouds. I'd suggest providing evidence of problems with nightlies and filing bugs about things that need to be stabilized.


As I noted in https://news.ycombinator.com/item?id=13277477, there are really only two major Rust libraries that are easier to use on nightly Rust (serde and diesel), and both can already be used on stable using a `build.rs` file (which takes about 10 minutes to set up–see the docs for the projects). You'll be able to get rid of `build.rs` in about 5 weeks when Macros 1.1 lands.

That said, if you want to play with nightly Rust, it's pretty trivial. Rustup https://www.rustup.rs/ makes it easy to install multiple Rust toolchains and switch between them, in a fashion similar to rvm and rbenv.


How so? Most big projects have both stable releases and nightly builds that contain unfinished features. Why is it particularly troublesome for Rust?


I am not sure it's really a split when you can build stable on nightly. Nightly is great for experimentation.


Although others don't see the 'dark cloud', as a long time user of Rust it has definitely put me off pushing for it too hard at work. That said, Diesel and Serde are now the only big libs that require nightly, but this is due to them relying on syntax extensions which are going to be stabilized in the next release. So if I were to start a new microservice using those libs it would be ready to go on stable in a few weeks time, which is super exciting. The only other thing would be async IO. Tokio is making big strides in that direction - not sure what the time-frame is on being able to use it on a server nicely though.


> add nightly builds to distributions

rustup run nightly cargo build --release

You were saying? In any case, there's no such thing as a stable vs nightly split. There are pretty much zero libraries that require nightly, and the few that have a nightly option for optional features will no longer require nightly after the macros update lands.


I think Rust is mostly about safety in the same way that skydiving is mostly about safety. Having safety features that you know you can rely on allows you to take risks that you normally wouldn't in order to accomplish some really awesome things.

(I guess in this analogy C is a parachute that you have to open manually, while Rust is a parachute that always opens at exactly the right altitude, but isn't any heavier than a normal parachute.)


Rust safety is ultimately a productivity boost.

For example, if I have a big string, I may create a hashmap where both keys and values are references to some portions to that original string.

Then, I may pass this hashmap to another function that will transform this hashmap into structs that contain reused portions of those string references.

Rust compiler will make sure that the original string is not destroyed or moved in any way in memory while this is happening.

While it is certainly possible to do this in C and C++, the development cost there is way higher. In C++, one would be sensible to stick to a slower version that copies and allocates data, unless he/she is really sure that the need for performance justifies the code complication.

Meanwhile, juggling the references this way in Rust is common and almost sloppy: the hashmap contains references to strings because it was probably created from an iterator that iterated over references. The developer might not even need to notice it unless a string reference in the result might be kept around longer than the original. The compiler will point out the type mismatch, the developer will then create a new string from the reference, and then he/she will move on.

There is a similar story when such code needs to be refactored. The word "sloppy" still works here, but in a good way: offloading "being very, very careful" stuff to compiler is deliciously fun.


I wrote about this concept a bit in http://manishearth.github.io/blog/2015/05/03/where-rust-real..., with an example of a situation where in Rust I was able to make things work but in C++ I'd be totally terrified and use a shared_ptr or something.

I like to say that Rust lets you toe the line perfectly, and dance near it as much as you want. C++ does not, since you're afraid you may accidentally cross it.


Is that a joke? Just doing a cursory reading of your blog post was terrifying to me and I could hardly understand why things are so complicated. It almost make it seem like programming is some 'puzzle' (I feel same way about boost library).


What did you find complicated or terrifying there?

The fact that the code was complicated was sort of the point; the generic deriving code in the Rust compiler is in general a very tricky and confusing area and pretty complicated to deal with. In C++ I would have been very careful around code like this and not introduced new pointers. In Rust, I could do this without being afraid of memory issues.

You're not supposed to understand the actual code in the blog post; there's heaps of context and explanations of compiler internals I didn't want to do. I tried to make it clear what task I wanted to do, and how the compiler helped me carry it out; the code is just there to give a better idea of what was going on. If you read closely I do mention that a lot of things should be ignored, there.

The post also walks through the explanations of why the lifetimes work out that way, but that's more for the readers' edification -- I didn't need to actually figure anything out whilst working on it; the explanations are something I thought about later.


I'd argue that in order to properly evaluate your statements regarding C++ (and Rust, for that matter) it actually is important to understand the code and what you're trying to do. Given that there generally is more than one way to solve a problem it isn't unreasonable to think you found a solution that doesn't map (well) to C++ and from there generalize (perhaps incorrectly) that therefore it's impossible in C++ (with similar performance).


Well, in this case there aren't multiple solutions -- my solution was literally the problem statement. I wanted to make a particular array accessible later in the pipeline in a specific API. That was the problem statement (the reason behind this problem was that the plugin API needed to be able to expose this array). This itself was pretty simple. The array arose from a particularly entangled bit of code. Possible ways to persist it are with a regular or smart pointer/slice to the vector, in both Rust and C++.

This isn't an instance of the XY problem.


Few recent posts that I can see are about the compiler internals, design of a tracing GC and a crypto problem. These things might just be complicated anyway :)


C++ users would say that shared_ptr is the idiomatic way to handle this scenario, and it's no more unsafe than Rust is in general use.


Right, but there's a runtime cost associated with it, unlike the borrowed pointer in the rust version. Using shared_ptr is not "toeing the line", it's stepping back from the line in fear you will cross it.


A lot of C libraries and applications are more like a parachute that was folded by someone who saw a two minute video about parachute folding a couple years ago.


Have you done much skydiving? I used to go three days a week, for a couple years, between 4-10 jumps a day at a place that had world class experts. My experience is that only beginning skydivers are constantly preaching safety. They go around (vocally) judging everything they see, and I think they do it because it alleviates their own fear. Instructors would teach safety, but really only to their own students.

I think the safety aspect of Rust appeals to a lot of beginning programmers. They can feel safer looking down their nose at us dangerous C or C++ programmers.

> Rust is a parachute that always opens at exactly the right altitude

This isn't a good metaphor. Frequently it's safer to pull higher, and on some occasions, you're safer opening lower than you had planned... I think a canopy that always opened at the prescribed height would cause many unnecessary deaths. That doesn't say anything about Rust, one way or the other.


> I think the safety aspect of Rust appeals to a lot of beginning programmers.

Maybe, but it also appeals a lot to many of us experienced programmers who know how hard things can bite us. It's not so much that we can't get things right. It's that it's really expensive to revisit old assumptions when circumstances change, and it's phenomenal to be able to document more of these in a machine-checked way.


Please don't get me wrong - I would take safe over non-safe if everything else were equal. It's just that Rust made many other choices that are worse for me than what's in C++. Also, I think it would be very painful trying to explain some of Rust's features to my coworkers (who are generally very smart, but generally not interested in clever programming languages).

> It's that it's really expensive to revisit old assumptions when circumstances change, [...]

That's very dependent on the type of work you do. Over the last 23 years, my job has been to write many small programs to solve new problems. It's not expensive for me because I've aggressively avoided making monolithic baselines. I have medium sized libraries that I drag from project to project, but I can fix or rewrite parts of those as needed without breaking the old projects.


> That's very dependent on the type of work you do.

True, if your code never gets big or old, you can keep all of it in mind and write correct code without too much worry. Though in my experience, it really doesn't need to be very old or very big before tooling starts paying big dividends.

> I have medium sized libraries that I drag from project to project

I'd wonder in particular about those libraries. Certainly you know more about your context. But I expect that there are both contexts where it wouldn't be helpful, and also contexts where it would be substantially helpful but authors don't know what they're missing. I don't have a way of distinguishing the two here.


I think it's a misconception to classify type-safety and memory-safety techniques as 'clever', they should be seen as the bread-and-butter of day-to-day coding. To put it another way, Rust's memory safety is no more clever than C++'s smart pointers, the only difference is what people mistakenly believe about the two.


> I think it's a misconception to classify type-safety and memory-safety techniques as 'clever'

I didn't call Rust's type-safety or memory-safety clever. The clever stuff is lifetime specifications, a multitude of string types, traits as indications to the compiler for move vs copy, Box, Ref, Cell, RefCell, RefMut, UnsafeCell, arbitrary restrictions on generics, needing to use unsafe code to create basic data structures, and many other things.

If I tried to advocate Rust in my office, many of my coworkers would simply say, "I didn't have to do that in Fortran, and Fortran runs just as fast. Why are you wasting my time?!"


Almost everything you mentioned as 'clever' is trying to achieve either type-safety or memory-safety. To your coworkers I would reply: would you like the compiler to handle error-prone memory deallocations? Or do you want to keep doing it manually and wait till runtime to find potential mistakes?


I don't believe those clever things are necessary for safety or performance. I think many of them are incidental and caused by a lack of taste or just a disregard for the value of simplicity. Rust deserves credit for its good ideas, but these aren't those, and I believe there will be other high performance (non-GC) languages that are more accessible to non Computer Scientists [1].

> To your coworkers I would reply: would you like the compiler to handle error-prone memory deallocations? Or do you want to keep doing it manually and wait till runtime to find potential mistakes?

They don't really care about memory deallocations - the program will finish soon anyways, and the operating system will cleanup the mess. Sorry, they've already excused you from the office and have gotten back to getting their work done.

Btw, modern C++ programmers don't worry about memory deallocations either. You should find a better bogeyman.

[1] http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan... (yes, most people disregard benchmarks, but you need someway to discuss performance)


> I don't believe those clever things are necessary for safety or performance. I think many of them are incidental and caused by a lack of taste or just a disregard for the value of simplicity.

Well, I just re-read your list of 'clever' features, and can't really see how any of them is incidental, or in fact how some of them are worse than the exact same features in Swift, which you mentioned.

> ... I believe there will be other high performance (non-GC) languages that are more accessible to non Computer Scientists....

Not sure what to make of this comparison, given that Rust beat Swift in the majority of the benchmark tasks. Also, you have to look at the quality of the compilers themselves. Rust is universally acknowledged to be a high-quality compiler, while Swift (especially together with Xcode) are often bemoaned as buggy and crashy.

> They don't really care about memory deallocations ... the operating system will cleanup the mess.

Well, I have to say they are an extremely lucky bunch. Most systems programmers don't have the luxury of writing script-sized programs which use the OS as their garbage collector.

> Btw, modern C++ programmers don't worry about memory deallocations either. You should find a better bogeyman.

I was specifically replying to your Fortran example, but for the sake of argument, to C++ programmers I'd ask, 'Would you like to do high-performance concurrency with statically guaranteed no data races?'


> a multitude of string types

There are two string types in Rust, `String` (growable and heap-allocated) and `&str` (a reference to string data). Anything else is just a shim for FFI.


> There are two string types in Rust, `String` [...] Anything else is just a shim for FFI.

I guess I don't have to worry about the non-Scotsman strings then... You've heard the criticisms about Rust's strings before, and I'm unlikely to tell you anything you don't know.


I think it's especially hasty to criticize Rust's string types in the context of C++, given the standardization of string_view in C++ as an analogue of Rust's &str :P


To me, Rust's &str seems a lot more like const char* (with a size tacked on for bounds checking). But you're the expert, so if I did agree they were the same, then C++ adopting it in the STL is practically proof it's a mistake in Rust.

You never addressed my other "too clever" items in Rust. Does that mean, other than strings, we agree?


> Does that mean, other than strings, we agree?

Not necessarily. :P Features exist, and I'm not about to dictate where others draw the cleverness line.


No, I haven't done any skydiving. After writing my comment, I suspected my analogy might not hold up if I knew more about skydiving. I guess just imagine an abstract form of skydiving where you just need to have fun in the sky and then open your chute at the right altitude, and you'd like to wait as long as you can before opening it.

I still think the point is valid, though, even if the analogy isn't.


Fair enough. I'm sorry about calling you out on the metaphor. Instead I'll call you out on the point itself :-)

> Having safety features that you know you can rely on allows you to take risks that you normally wouldn't in order to accomplish some really awesome things.

Unless you rush to publish a public facing version of your code, I can't see why you'd be afraid to take risks in any language. What's so scary about a buffer overflow on your home workstation running data from a source that's never even seen your program? It will just segfault, which is no worse than a Rust panic. If I could exploit your new code, it means I've already gotten so far into your workstation or server that I could just run my own code. Where does the fear come from?


I think it's more like: since you know the compiler won't let you write a buffer overflow, use-after-free, data race, etc., you no longer have to waste time worrying about whether your code might contain such problems, which frees up more mental bandwidth for other concerns. But unlike other languages, you still have confidence that your code will compile to equally performant machine code (e.g. no GC overhead).

The scary thing isn't causing a segfault on your local machine. The scary thing is writing code that could segfault but doesn't do in testing until after you've deployed it publicly. If your compiler rejects code that can segfault, this is no longer a concern. (Or replace segfault with a buffer overflow that leaks your private keys or something equally bad.)

I guess the analogy would be that you can have more fun cavorting across the sky if you knew with 100% confidence that your parachute automatically would deploy itself at the appropriate time (and not a moment sooner).


> I think it's more like: since you know the compiler won't let you write a buffer overflow, use-after-free, data race, etc., you no longer have to waste time worrying about whether your code might contain such problems, which frees up more mental bandwidth for other concerns.

I've spent a lot of time figuring out how to do completely mundane things in Rust. At this point, buffer overflows and use-after-frees are not my biggest concerns in C++.

> The scary thing isn't causing a segfault on your local machine. The scary thing is writing code that could segfault but doesn't do in testing until after you've deployed it publicly. If your compiler rejects code that can segfault, this is no longer a concern.

If your testing didn't catch the problem (which I can fully understand), a panic at runtime is not much different than a segfault.

> (Or replace segfault with a buffer overflow that leaks your private keys or something equally bad.)

I firmly believe the OpenSSL team would've used unsafe blocks in Rust to disable the performance overhead of bounds checking. That whole exploit was caused by sloppy optimization, and Rust is not immune to that.


> I've spent a lot of time figuring out how to do completely mundane things in Rust. At this point, buffer overflows and use-after-frees are not my biggest concerns in C++.

I could visit a country with a completely different set of laws regarding driving and a different road marking system. After a few days of driving, I might also feel like I've spent a lot of time trying to figure out how to navigate the rules of the road rather than actually getting to my destination when compared to driving in my native land. I would also be unable to accurately ascertain whether one system was better than the other, because of inadequate experience with the new system. It would be a mistake to assume that I could become proficient enough in such a complex system in such a short period of time as to ascertain whether one was better than the other.

To put it another way, I don't feel like avoiding bicyclists is my biggest problem when driving, but having a dedicated bike lane at all times would probably be a good idea anyways. Sure, maybe you've never hit a cyclist, and never will. That doesn't mean it doesn't happen enough that we shouldn't do something about it, because it does.

> If your testing didn't catch the problem (which I can fully understand), a panic at runtime is not much different than a segfault.

No, a segfault at runtime is something that is possibly exploitable. A panic is not.

> I firmly believe the OpenSSL team would've used unsafe blocks in Rust to disable the performance overhead of bounds checking.

Even if they did, that would still reduce the portion of the code that needs to be audited to those blocks. Effort could be made to reduce the size and scope of those blocks. There is something to be said for having the ability to categorize and enforce different safety levels in your codebase, when the alternative is no categorization or enforcement.


> I could visit a country with a completely different set of laws regarding driving [...]

Arguments by metaphor aren't my thing. It's very likely I would become more proficient at Rust if I programmed in it more. It's also very likely the poster above would worry less about memory errors if s/he programmed in C or C++ more. Yes, Rust is safer in some ways, but I still can't understand where all the fear of other languages comes from.

> having a dedicated bike lane at all times would probably be a good idea anyways.

I used to live in a city with a lot of dedicated bike lanes. I commuted to work on a particularly long stretch that was very popular for cycling. The majority of the cyclists refused to ride in the lane. It turns out that cars naturally blow the dust and small pebbles out of the main road way, but bikes don't do that in the bike lane. Cars also smooth out the pavement in their tire tracks. The result was a road that's five feet narrower for cars (speed limit 45 mph) with bicyclists in it (not moving 45 mph), a generally unused bike lane, lots of uncomfortable passing, and a lot of indignation from cyclists who claimed an equal right of way despite having a separate lane designated for them.

> Sure, maybe you've never hit a cyclist, and never will. That doesn't mean it doesn't happen enough that we shouldn't do something about it, because it does.

The city I live in now has many bike paths, completely separate from major roads. It's also a different climate, so there are fewer pebbles, and they have street sweepers clean the road after snow season to remove the sand. There really doesn't seem to be much interaction between the cyclists and the cars. So should I choose a programming language with bike lanes on major roads or separate paths through the parkways? :-)

> No, a segfault at runtime is something that is possibly exploitable. A Panic is not.

Anything is possible, but it's very unlikely. I will write a program and intentionally put a buffer overflow in it. Can you send me some data that will exploit it?

Here's a metaphor that also isn't one: I'm not afraid of terrorists despite some high profile events in the last 20 years. I certainly wouldn't optimize my life around avoiding terrorist attacks because the empirical evidence shows me the probability is very low.


> It's very likely that I would become more proficient at Rust if I programmed in it more. It's also very likely that the poster above would worry less about memory errors if s/he programmed in C or C++ more.

> The city I live in now has many bike paths, completely separate from major roads.

Which wasn't the point of that at all. It was to point out that your assessment of how much time is wasted working around problems in each case is irrelevant given your vastly different experience levels. There are plenty of people here with quite a bit of C and C++ experience that have weighed in about this, not just the person above who you assess as not having much experience in C or C++.

A bike path is a dedicated bike lane, just not necessarily parallel to the road. You're taking the metaphor too literally for it to be useful. A metaphor is as useful as you allow it to be. Metaphors can be extremely useful for pointing out somewhat parallel situations where people may find their beliefs differ. When that happens, it allows the people involved to examine what is different about the situations that leads to a different opinion, if anything. Sometimes we fall prey to our cognitive biases, and a metaphor can be a shortcut out of that bias, if it exists and you allow it to be that shortcut. Driving it into irrelevancy by focusing on minutiae is a useful rhetorical trick, but doesn't actually advance the conversation, and at the extreme end, if done purposefully, is not acting in good faith.

> Anything is possible, but it's very unlikely. I will write a program and intentionally put a buffer overflow in it. Can you send me some data that will exploit it?

Depending on the segfault? I could. It would take me a lot of work, because it's been nearly 15 years since I paid much attention to that, but I have done it before.

> Here's a metaphor that also isn't one: I'm not afraid of terrorists despite some high profile events in the last 20 years. I certainly wouldn't optimize my life around avoiding terrorist attacks because the empirical evidence shows me the probability is very low.

No, you don't optimize your life around them, but you might also support checking of identities on international flights to prevent access to your nation from known terrorists.

Here's the thing. It's not about you. At any point in time, some percentage of C and C++ programmers are neophytes who may not be as proficient as you at avoiding the pitfalls possible in those languages. Take the average amount of time it takes someone to become proficient in C or C++, divide it by the average career length of a programmer of those languages, and you have a rough estimate of the fraction of C and C++ programmers who are inadequately prepared for the job they're assigned. I think reducing this has such a large impact that it is of vast benefit to society at large (given the botnets we are currently seeing), and would total billions of dollars.


"not acting in good faith"

An accurate diagnosis, I think. You'll never get anywhere with people like that ... or where you get is not anywhere you want to be. In this case, you have someone arguing against Rust because a) his coworkers don't bother to free memory because their programs will finish soon and b) because he doesn't care whether toy programs that he writes for his home computer are subject to buffer overflow exploits.

And on top of that was missing the point of your analogies that, if not willful, was certainly convenient. To use another one: some people are like quicksand.


> > The city I live in now has many bike paths, completely separate from major roads.

> Which wasn't the point of that at all. It was to point out that you[r] assessment of how much time is wasted working around problems in each case is irrelevant given your vastly different experience levels.

That was almost my exact point, and it's odd you're repeating it back to me. I guess I could've laid it out more plainly.

> [Metaphors] Driving it into irrelevancy through focusing on minutiae is a useful rhetorical trick. [...] at the extreme end if done purposefully is not acting in good faith

Using a metaphor is a rhetorical trick. If you want to explain something to a non-technical audience, maybe analogies "get the hay down to the horses" so they can have at least a limited understanding. However, we both seem to understand programming languages so talking about roads obfuscates the discussion, leaving me to wonder whether there really is a parallel between the two topics. I know more about programming languages than I do about bike paths.

> Depending on the segfault? I could. It would take me a lot of work, because it's been nearly 15 years since I paid much attention to that, but I have done it before.

Even if I offer to run malicious data, it sounds to me like a low probability event - probably lower than my being in an airplane crash or shot by a cop. It's not something I should fear today. Over the last 25 years, I've had lots of segfaults, but I think I've done the most damage by accidentally overwriting files. I'm a little afraid of that.

> No, you don't optimize your life around them, but you might also support checking of identities on international flights to prevent access to your nation from known terrorists.

No, I definitely would not. It's very easy to get into this country, and an organized (dangerous) group would have no more difficulty than the drug dealers do smuggling cocaine. There is no benefit to harassing millions of citizens if you can't actually stop the problem.

> Here's the thing. It's not about you.

Are you suggesting the only people allowed to share their experiences in a thread like this are new programmers and the people pushing their language? I was new once, and I survived lots and lots of segfaults. Don't you think neophytes should hear that? They're definitely getting a large dose of doom and gloom about the bad old days.

> Some percentage of C and C++ programmers are neophytes. [...] I think that reducing this has such a large impact, that this is of vast benefit to society at large (given the botnets we are currently seeing), and would total billions of dollars.

In one of your other comments, you indicated you haven't tried Rust yet. You should - you sound interested. It definitely has its nice parts. However, I don't think you will find the safety features to be a big productivity gain, and you will have to use unsafe code to accomplish tasks from a freshman level computer science book. Think about that - you can't cleanly use the safe subset of Rust to teach computer science to beginners... (you could do it with a lot of compromises)


> Using a metaphor is a rhetorical trick.

Rhetorical tricks can be used to deepen the conversation, or to dismiss points out of hand. The first is useful to the discussion, the second is useful for winning, but at the detriment to the discussion.

> However, we both seem to understand programming languages so talking about roads obfuscates the discussion

I provided an example where it may provide value even if two people are experts in the area being discussed. Metaphors can help explain someone's underlying reasoning and motivation in a way that is hard to express technically. People talk past each other enough in discussions by slightly misinterpreting what is trying to be expressed that I find metaphors a valuable tool. I find many disagreements in text are rooted in people assuming a comment is countering a point of theirs or of someone they agree with, and interpreting it in that light, when often they are saying very close to the same thing. Thus I believe expressing a point in multiple ways, even if it's through metaphor, has merit.

> Even if I offer to run malicious data, it sounds to me like a low probability event - probably lower than my being in an airplane crash or shot by a cop.

First, airplane crashes are extremely rare. Second, being shot by a cop is rare too, depending on your vocation and behavior. Third, remote code execution exploits are not rare, given the relatively small amount of public-facing software compared to all airplane flights and all police interactions.[1] Were you to author or contribute to any non-trivially sized C or C++ project that was publicly available, I would put better than even money on there being a findable exploit in it. There's a vast difference between how much software is written and how much is public-facing, but that doesn't mean things that were originally private don't sometimes make their way public years later, for example internal libraries that a company open-sources or even just includes in another project that ends up being public-facing.

> No, I definitely would not. It's very easy to get into this country, and an organized (dangerous) group would have no more difficulty than the drug dealers do smuggling cocaine. There is no benefit to harassing millions of citizens if you can't actually stop the problem.

So, again, just because you can render a metaphor in more detail to make it irrelevant in context doesn't mean that's appropriate. So, in more generic terms, "do you support keeping known detrimental people out of a defined area to facilitate the usefulness of that area"? If you can do so, and it's not too cumbersome on those that are not detrimental, then depending on the problems caused by the people in question, at some point it becomes worth it. There are parallels that can be drawn here, if you're willing to entertain the thought. It appears you aren't.

> Are you suggesting the only people allowed to share their experiences in a thread like this are new programmers and the people pushing their language?

No, I'm expressing that a single person's ability to avoid negative behavior has little bearing in an argument regarding community norms and herd behavior, which is what I'm getting at. Whether you are a perfect programmer who never makes a single mistake in any language doesn't matter when discussing the merits of enforced safety in general, as in this discussion regarding C and C++. What does matter is whether other programmers in general do, and what percentage of them, which you've also made a point of expressing. I think that is worth discussing, because I think we either disagree on the proportion of those programmers that can code with adequate safety, or on some other facet of them that results in them yielding far more problematic code every year than you think they are producing.

> In one of your other comments, you indicated you haven't tried Rust yet.

I've tried it. I haven't done more than dabble, though, while playing with futures-rs. I understand the borrow checker is cumbersome at my level of understanding, and I fought with it. I don't think I have sufficient experience to make a personal assessment of the language, and especially not of how it feels to write in comparison to C or C++, because I strive to avoid using those languages.

> However, I don't think you will find the safety features to be a big productivity gain, and you will have to use unsafe code to accomplish tasks from a freshman level computer science book.

I believe having the ability to define safe and unsafe portions of code is in itself laudable and useful. Allowing me to categorize possibly problematic portions of code is a benefit. In any case, I could essentially write the entire program in an unsafe block and have a C/C++-alike with a different syntax. I'm not sure how "unsafe" can be presented as a downside, when it's strictly a way to enforce separation of a feature that C and C++ don't have.

> Think about that - you can't cleanly use the safe subset of Rust to teach computer science to beginners... (you could do it with a lot of compromises)

What, you can't use that explicit separation of what is known safe and known unsafe to point out computational problems and ways they can be solved? I find that hard to believe. Unless you think unsafe is Rust but "lesser, not really". It isn't. It's part of the language. It exists as a concession that sometimes things are needed that can't be proven safe by the compiler, but you may be able to prove to yourself it is.

1: https://www.exploit-db.com/remote/


You seem like a forthright person, but with or without metaphors, we're still talking past each other.

My point about remote exploits, airplane crashes, and cops is not about me. Yes, public facing software needs to be careful, but (fun metaphor) that's like saying prostitutes should use condoms. Web servers, browsers, firewalls, and the like are built specifically to communicate with untrusted entities. That's some of the most promiscuous software out there, and yes it gets exploited. But most people don't need to use condoms with their wives, and nobody is going to exploit software a newbie wrote and runs on his home computer. Safety should not be the fundamental criteria for a newbie programmer to choose a language and learn how to write fibonacci or hello world. When they're ready to write nginx, then they should be careful.

My point about the questionable productivity gain and safety was a reply to your estimate of the billions of dollars lost. If you're not more productive, and you aren't really safe, then you aren't going to save those billions.

> What, you can't use that explicit separation of what is known safe and known unsafe to point out computational problems and ways they can be solved? I find that hard to believe.

I didn't say anything like that. We're talking past each other.

> Unless you think unsafe is Rust but "lesser, not really". It isn't. It's part of the language.

(Metaphor time again) I've got a really safe bicycle. When the safety is on, children can't get hurt while riding it. If you care about the safety of the world's children, they should use my new safer bicycle. Oh, but you can't pedal it on paths I don't provide unless you disable the safety. Is my bike really that safe?

> 1: https://www.exploit-db.com/remote/

I have no idea how many people compiled and ran a program today. It's probably millions. Bayes's theorem might be a useful way to normalize that long list you linked. I don't see a single program from a home programmer on that list.


"I didn't say anything like that."

No one said that you said anything like that. Of course you didn't. But what you said necessarily implied that.

"We're talking past each other."

No, you willfully ignored and misrepresented all his points.

"Oh, but you can't pedal it on paths I don't provide unless you disable the safety."

That's a grossly dishonest misrepresentation of the situation with Rust.


> Unless you think unsafe is Rust but "lesser, not really". ... It's part of the language ... sometimes things are needed that can't be proven safe by the compiler, but you may be able to prove to yourself it is.

This. Unsafe is to the borrow checker as 'Any' is to the typechecker.


Perhaps a better analogy would be the way that much of modern medicine is enabled by access to antibiotics. Without antibiotics, the risk of post-operation death by infection would be so high as to rule out many of the procedures that we now consider safe and routine.


I would prefer a surgeon who washed his hands over one who didn't but gave me antibiotics. I've had stitches a few times, but only one real surgery. I never got antibiotics for any of those. Maybe we could skip the analogies? I don't think they help the discussion.


They help in a discussion with someone intellectually honest.


> I think the safety aspect of Rust appeals to a lot of beginning programmers.

Is that a bad thing? All programmers start as beginners, and if C is too painful to begin with then they'll learn via an easier language, and then comfortably spend their whole careers using those easier languages. If we want to expand the field of systems programmers organically, then we need to make tools that don't punish beginning programmers.

> They can feel safer looking down their nose at us dangerous C or C++ programmers.

What makes you feel like anyone's looking down their noses at you? Every language in history has been made to address the perceived flaws of some prior language. Safety is a crucial selling point for a huge chunk of people, and C and C++ have failed to appeal to this market. Just because safety isn't a priority for you doesn't mean that the people for whom it is a priority are suddenly pretentious.


> > I think the safety aspect of Rust appeals to a lot of beginning programmers.

> Is that a bad thing?

The appeal to beginners is fine, maybe even a good thing, but the condescending comments from beginners is a lot like listening to a teenager who thinks they know everything.

> What makes you feel like anyone's looking down their noses at you?

There's no shortage of obnoxious comments from beginning Rust users here and on Reddit. If you can't see them, it might be because you're aligned with that point of view.

A recent one implied the whole world is going to end because of Heartbleed-like exploits. Don't they realize that despite the occasional high profile exploits, the world is generally running just fine? Don't they realize that the OpenSSL developers would've probably used pools of dirty memory to avoid allocation costs and unsafe blocks to avoid bounds checking had they developed that code in Rust? They got bit by sloppy optimization, and Rust isn't immune to that. I really wish people weren't so afraid of everything that achieving safety is their primary goal.

> Just because safety isn't a priority for you doesn't mean that the people for whom it is a priority are suddenly pretentious.

It's not pretentious if you make your own decision for your own project. It's not even pretentious to spread the good word and say how much you like Rust. It is very pretentious and condescending when you say something like in Graydon's article: """When someone says they "don't have safety problems" in C++, I am astonished: a statement that must be made in ignorance, if not outright negligence."""

Are you going to stand by that sentence? You probably should, because the newbies will love you for it, and it might help increase adoption of your language. It really shouldn't matter if you alienate a few of us old-timers who really don't have safety problems in C++.

To be clear, I like Rust. I've been following it for years, and I'm disappointed that it's not an adequate replacement for C++ (which I really don't like).


"the condescending comments from beginners"

You like to make stuff up.

"If you can't see them, it might be because you're aligned with that point of view."

Or it might not. It might be that you're just being abusive and dishonest.


> There's no shortage of obnoxious comments from beginning Rust users here and on Reddit. If you can't see them, it might be because you're aligned with that point of view.

Can you give me an example of a comment in this thread that you find to be from a pretentious beginner? Alternatively, if you're calling the author of this article a beginner, I can assure you that he isn't.


The guy's a troll.


> To be clear, I like Rust. I've been following it for years, and I'm disappointed that it's not an adequate replacement for C++

Just out of curiosity, what is it about Rust that means it's an inadequate replacement for C++?


There are many things you could dismiss as style issues, but here is one relating to performance. Rust does not (yet) have integer generics. If I use Eigen (the C++ library), I can declare a matrix of size 6x9 and have the allocation live (cheaply) on the stack. I do this kind of thing frequently (not always 6x9), and in Rust I would pay for heap-allocated matrices. The cost in performance can be huge. Maybe this will get fixed in the near future.


Humans are prone to error (fine), therefore you are prone to error (condescension, not fine). Post-Aristotelian logic?

I'm not completely serious; it's more complex than this.


"Is that a bad thing?"

Regardless, it's a complete misrepresentation of Rust, which is all that zero has to offer.


I'll probably be writing a slightly longer response post to this later, but for now... EDIT: here it is: http://words.steveklabnik.com/fire-mario-not-fire-flowers

I think the core of it is this:

> Safety in the systems space is Rust's raison d'être. Especially safe concurrency (or as Aaron put it, fearless concurrency). I do not know how else to put it.

But you just did! That is, I think "fearless concurrency" is a better pitch for Rust than "memory safety." The former is "Hey, you know that thing that's really hard for you? Rust makes it easy." The latter is, as Dave[1] says, "eat your vegetables."

I'm not advocating that Rust lose its focus on safety from an implementation perspective. What I am saying is that the abstract notion of "safety" isn't compelling to a lot of people. So, if we want to make the industry more safe by bringing Rust to them, we have to find a way to make Rust compelling to those people.

1: https://thefeedbackloop.xyz/safety-is-rusts-fireflower/


I would argue that if the Rust project would have just one mission statement, it wouldn't be "create a safe systems programming language". It would be "move towards a world where safe systems programming is the norm".

What's the difference? Both statements have the premise that Rust is – and ought to be – a safe systems programming language. However, the latter captures not only the REAL goal, but also the nuances and tensions: while safety is indispensable, we must do something else too, for the programming community to accept the safe tools we are trying to promote. That means ergonomics, that means performance, that means ease of use, that means wide availability – and that might also mean advocacy of visions of a better world, which is what this blog post of Graydon's does.


I really, really like this. Thank you. Well put.


It's important to get the nuance of this statement right. Consider, for instance, the PSF's mission statement:

> The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers.

"Facilitate the growth" does not imply that dominance of a certain practice (whether that is use of Python, or adoption of its core principles) is a goal. So you have an incentive for the PSF to say "our international community is growing at a sufficient rate" and focus instead on "advancing the language," which may or may not be aligned with adoption. In such a framework, it becomes easy to justify backwards-incompatible major releases that, regardless of whether their opinion is justified, many users consider to be user-hostile. Framing Rust towards a larger mission that implies user adoption of core principles as a fundamental goal seems much cleaner in that regard, and could conceivably avoid similar pitfalls.


As someone watching from the sidelines... maybe it's the cynic in me, but I think you're giving too much credit to the "but I don't have safety problems" folk. Judging by how defensive they get, it's just a post-hoc rationalization to why they won't invest time learning it. That's fine but, IMHO, not a marketing problem and just good old resistance to change. There's an endless stream of excuses to choose from, no matter how you pitch Rust to them.

I'm all up for a marketing shift though. I agree Rust is much more than safety and I think graydon2 missed your point there. The fireflower simile is excellent. Plus I've seen lots of people confused about Rust, including here on HN. Those are the people to whom marketing failed and should be targeted better.


Or we were severely disappointed when we realized Rust doesn't really give you that safety.


No, that's not it.


> I think "fearless concurrency" is a better pitch...

I would go a step further, "fearless programming". Though I would hesitate on 'easy'.

Rust gives you fearlessness in all the things, but it does mean learning a new style and discovering new solutions to old problems. To fully understand 'Send' vs. 'Sync', for example, means really grokking the Rust type system. Once you get the type system, fully utilizing it with the expressive generics becomes unlocked, and at that point you've transcended from Rust dabbler to fully fearless Rust user.

Once this world of development is unlocked to you it is mind-blowing, but it is a journey to get there, and not everyone will have the heart to make it. It comes in stages, is wonderfully rewarding, and will make you a better programmer in your other favorite languages, but I think we should be careful with statements like 'Rust makes it easy'.

It does make hard things easy, but only after you've fully embraced Rust. This feels more accurate: "Hey, you know that thing that's really hard for you? Rust makes it fun."


Making software free of data races and memory safe (assuming you don't use any unsafe code...) is still a long way from being free of serious defects.

Rust is cool enough that it doesn't need to be promoted with excess hype.


I think the problem is that Graydon is better at technology than marketing. Getting language adoption is largely a marketing problem combining what those in control of the language push and how those in the community pull. Rust's success comes from doing that dynamic correctly.

If it was just safe and not C/C++, it would be another Ada, Modula-2, D, etc. It's important to market all the key benefits of it in a way that lets potential users know it will help them solve problems faster and with less trouble down the road.


I was surprised to see Ada in the list of unsafe languages, since it always was sold to me as being designed for safety. A bit of searching leads me to believe that Ada is better about memory even though it mostly uses types for safety, and better enforcement of bounds on array access should solve overflow issues regardless. Am I missing something?


Ada still requires manual heap management, although it can be mostly automated.

So you might occasionally see the unsafe package being used to deallocate memory, even though there are better ways to do it, e.g. controlled types.

The other point is that Rust prevents data races via the type system, while you can deadlock Ada tasks if the monitors aren't properly implemented.


> while you can deadlock Ada tasks if the monitors aren't properly implemented.

It's not clear to me if you're suggesting otherwise, but you can definitely deadlock Rust as well (although it's true that Rust statically prevents data races).


Ah, that was my understanding.


The other thing he semi got wrong is on safe concurrency. Ada has Ravenscar and Eiffel has SCOOP. Ravenscar didn't need a GC since it was for real-time while Eiffel has one. Before them, there was Concurrent Pascal. The author would be right if he said Rust had much better approach to safe concurrency in terms of expressiveness and performance.

Ada side is producing a new model for parallel and concurrent programming called ParaSail:

http://www.embedded.com/design/programming-languages-and-too...


Any idea on what the status of ParaSail is? Seems to have been pretty quiet lately :(


Have no idea. Contacting Taft et al about that and some other things... especially adding Rust's dynamic or concurrency safety to Ada... is on my backlog for now.


I think it's important to continue to do research in this area. I use Rust now because it is a great language with a strong community, tooling, and momentum, in spite of its flaws and blind spots, but I see it more as a stepping stone to even better, safer, more expressive systems languages. Rust has challenged our preconceptions on what is possible - I think we may be able to push it even further. My preference is to try to move more towards Idris and the lambda calculus, but it would be interesting to see what an Ada-spin on it would look like.


Well there is this famous Ada failure:

http://www.math.umn.edu/~arnold/disasters/ariane.html

Although that was not a failure of language safety, but an overflow issue. The software was designed successfully for another rocket, and was reused for a rocket that didn't match the original specification.


It's a specification issue, not a programming issue.


I disagree. The programmer is the expert in types, so it is the programmer's duty to ensure that the possible values stored in a variable of a given type are compatible with the type selected. Particularly when it comes to these sorts of critical applications.

Programmers blindly following specs put together by people who have no claim to expertise in these matters, without questioning the assumptions behind the spec, is the cause of all manner of disasters. And "but that's what the spec said to do" is all too common an excuse when the problem is with subtle runtime behavior issues that fall squarely in the programmer's domain of responsibility.


The software component in question was implemented according to its specification, and never failed in the environment for which it was developed, the Ariane 4.

The decision to re-use the component as-is in the Ariane 5 without sufficiently investigating the consequences of the higher horizontal velocities that it is subject to compared to the Ariane 4 cannot so obviously be blamed on the programmer that implemented it years before in a different context.


Thanks for the extra context, and alternate interpretation. You seem like you might know this story better than the writer of the referenced article, but you and the author seem to be making contradictory causal claims. I hold to my conclusion if the author's story is taken as authoritative. If yours is more authoritative, then it sounds like your conclusion is better.

I kind of took the story as an allegory when writing my comment. The article is quite vague about the details of the situation. And for all I know, it IS programmers that write specs for this European Space Agency unmanned rocket project. But the way the story is told aligns with a more universal experience of programmers blaming specs for the failings of programs, even when they should have recognized that the program was misspecified before implementing it. I ran with that interpretation because it was illustrative of something important, but it is not particularly surprising to me that the details are being questioned. The article was never a rock solid account.


I believe you have three choices with Ada:

1) No manual memory management, everything is static and you have memory safety

2) Garbage collection, you have memory safety

3) Manual memory management, you lose memory safety

Rust provides memory safety in the case of manual memory management.


GC was dropped from the Ada2005 standard, because no Ada83 compiler ever implemented it.

Ada provides more ways to automate memory management, though.

Controlled types are Ada's version of RAII, used for arenas and smart pointers.

Also in Ada you can dynamically allocate everywhere, so a common pattern is to use a subroutine parameter to do stack allocation. If it fails by throwing an exception, call the subroutine again with a smaller size.


Rust is about letting the compiler slap you for your mistakes in the privacy of your own Xterm, instead of letting Jenkins do it 10 minutes later, in front of all your co-workers.


Those slaps represent a bunch of tests that were being reinvented for each program that have now been factored out and up into the compiler.


Maybe it will change in the future. Currently the slaps seem so hard that developers are still smarting and not producing code for production.


We've been in production with Rust almost six months already. Couldn't be happier with the language. Works like a charm with our consumers.


Can I ask which company you're with? Are you already listed on https://www.rust-lang.org/friends.html ?


Appears to be 360dialog.


Rust/C++/Clojure/Scala in services and Python as a general glue language for ops.


I should add our company there soon...


Rust is being used in a number of places in production for a wide variety of things: https://www.rust-lang.org/en-US/friends.html


I know and I would also try Rust if I can make some small but useful things at work. But I mostly deal with various combinations of XML, SOAP, HTTP, LDAP etc. Rust does not have anything over Java, which I use currently, in my usecases.


It's perfectly reasonable to say "Rust isn't appropriate for my use case". Your comment higher up was more along the lines of "Rust isn't appropriate for anyone" which is far less reasonable.


If you're going to put words in his mouth, you should make them much stronger words. It's not a valid argument either way, but it'll seem more dramatic. (He didn't say either of your quotes...)


I downvoted you initially, but changed to an upvote to hopefully ungrey your comment.

The use of quotation marks on the Internet (especially on Internet discussion forums) has become non-standard, and I can see how it could be confusing. I think on HN that we tend to use italics or email-style

> block-quoting

to indicate direct quotations of posts or user comments.

Quotation marks on forums like HN tend to be used either to mark dialogue (things spoken out loud) or to mark paraphrased or "hypothetical" thoughts. This is different from the use of quotation marks in formal English writing, as described by Wikipedia [1]. Here, the quotation marks are used to separate the "paraphrased thought" from the rest of the sentence.

I'm actually finding it hard to describe exactly how quotation marks are used this way on the Internet; it's something I've just developed a "feel" for.

There's more discussion of this phenomenon here. http://metatalk.metafilter.com/23184/Should-we-keep-quotatio...

[1]: https://en.wikipedia.org/wiki/Quotation_marks_in_English


Sorry, next time I'll say something like "If you're going to misrepresent his intention" so as not to confuse a quoted sentence. And I won't use the word "say", because clearly nobody says anything in a text forum. /s

I find it very obnoxious when people exaggerate what someone else said so as to make it easier to contradict. I gather you don't have any problem with that? Yet you do have a problem with people calling it out as bad behavior? Are you sure you know why you're policing anything?


He didn't misrepresent anything. You're the one doing all the misrepresenting, exaggerating, and being obnoxious.


I have to deal with a pre-REST, pre-SOAP XML API, which I would love, love, to be able to handle in Rust. But until Serde-XML is able to deserialize more of that stuff and preferably handle XSDs, I'm stuck too.


Maybe you could call out to a C XML library in the interim?


It takes time to get used to writing Rust, dealing with the borrow checker, and fulfilling the proper trait bounds (Sync, Send, Sized, etc). I don't run into nearly as many compiler complaints now that I've written my fair share of Rust code. I feel like I'm quite productive in Rust, actually.


What exactly is being figuratively slapped? And what exactly do you propose the compiler should do when the code is obviously wrong?


It's the combination of safe and performant that attracts me.

If you look at Rust from C then the point is safety, but if you look at it from the other direction, e.g. from F#, then what attracts you is that you get the same safety guarantees (and perhaps a few more) but without the GC and heap overhead.


> e.g from F# then what attracts you is that you will get the same safety guarantees (and perhaps a few more)

As a big rust fan, I wouldn't go that far. You can offload a lot more work to the type system in a language like F# or Haskell. Rust is very safe from e.g. a memory perspective (excepting unsafe operations), but there are additional levels of assurance you can get by aggressively forcing the type system to catch logic errors for you that you can't really do in Rust.

As for performance, I agree, although a more accurate description would be that it's much easier to get C-level performance with Rust code while you have to put in some more effort to get it in any high-level functional language.


Bingo. I've written high perf F# for a DB indexing and searching. The entire time I was wishing for allocation-free inlined closures and stack allocation. And for a few places, I'd really like an easy way to do asm or get really top notch codegen (integer decoding). Rust seems a lot like a fast ML. Not quite as concise as I'd like but worth it for the perf without being ugly mentally.

And the perf will surpass C in some situations due to abstraction. One popular open-source platform spends 30% of its CPU time on malloc and strcpy because tracking ownership was so difficult and it wasn't obvious it'd be a hotspot. In Rust that would be a non-issue from the beginning.


You may be interested in MLton, which is an ML compiler that achieves high levels of optimisation by aggressively specialising your code: generic functions get specialised to the type and higher order functions get inlined to eliminate closure allocation.


In case you missed that there's a big disillusioned C++ crowd out there.

Just hear the pain: https://news.ycombinator.com/item?id=13276351

And some of them are watching you with great interest.


And there's a tired security crowd watching Rust with great hope; C++ and C have created innumerable security holes at the expense of "convenience". Cryptographic libraries, codec libraries, image conversion libraries, OS kernels, sandboxes, virtual machines, browsers, (the list is endless) have all suffered glaring security holes from the lack of memory hygiene afforded by C and C++.

Any time your code takes in untrusted input, it should not be written in an unsafe language.


Exactly. Which is why I've been so critical, in Rust discussions, of the excessive use of "unsafe". The reply is usually something equivalent to "it's not unsafe the way I do it". Sometimes the claimed performance gain isn't there. I had a link yesterday to a forum post where someone was complaining that using an unsafe vector access function didn't speed up their program. Optimizer 1, programmer 0.

(Early in my career, I spent four years doing maintenance programming for a mainframe OS. Every time a machine crashed, taking a few hundred users off line for several minutes, I got a crash dump, which I had to analyze and fix. Most of the errors were pointer problems in assembly code. When Pascal came out, I thought we were past that. Then came C. I had hope for SafeMesa, but nobody outside PARC used it. I had hope for Modula I/II/III, but DEC went under. I had hope for Ada, but it was considered a complex language back then. Rust finally offers a way out of this hole. Don't fuck up this chance.)


I am still skeptical that "excessive use of unsafe" is actually a thing happening in Rust. Almost all the unsafe I see is for doing FFI (either for interfacing with a library or OS primitives). There's a bunch of it for implementing datastructures and stuff, and extremely little unsafe being used "for performance". Off the top of my head nom and regex do this in a few places, and that's about it. Grepping through my cargo cache dir seems to support my assertion; most of the crates there are FFI (vast majority is FFI) or abstractions like parking_lot/crossbeam/petgraph.

I agree that we should avoid unsafe as much as possible and be sure that unsafe blocks are justifiable (with stringent criteria for justification). I don't think this is currently a problem in the community, though.

It's good to be wary though :)


You keep making that claim without backup. Two days ago I posted links to extensive use of "unsafe" in matrix libraries. (Some of that code was clearly transliterated from C. Raw pointers all over the place.) That's entirely for performance; all that code could be safe, at some performance penalty.

I'd suggest using only safe code for whatever matrix/math library gets some traction, and then beating on the optimizer people to optimize out more checks.


I just gave you backup; I grepped my whole .cargo cache dir (both the one used by servo and my global one). You have also made your claim without backup -- you have repeatedly claimed that this is an endemic problem in Rust, with only individual crates (most of them obscure ones) to back it up, and I only usually make my claim in response to claims like yours -- the burden of proof is on you. Anyway, I do provide some more concrete data below, so this isn't something we should argue about.

Matrices fall under the abstraction umbrella IMO. This is precisely what unsafe code is for. However, I totally agree that we should be fixing this in the optimizer, with some caveats. I'm surprised it doesn't get optimized already, for stack-allocated matrices. I'm wary of adding overly specific optimizations, because an optimization is as unsafe as an unsafe block anyway; it just exists at a different point of the pipeline. If there's a general optimization that can make it work I'm all for it (for known-size matrices there should be, I think), but if you have a specific optimization for the use case, IMO it's just better to use unsafe code.

The raw pointers thing is a problem, but bad crates exist. They don't get used.

I recently did start going auditing my cargo cache dir to look for bad usages of unsafe, especially looking for unchecked indexing, since your recent comments -- I wanted to be sure. This is what I have so far: https://gist.github.com/Manishearth/6a9367a7d8772e095629e821...

That's a list of only the crates containing unsafe code in my global cargo cache (this contains most, but not all, of the crates used by servo -- my servo builds use a separate cargo cache for obsolete reasons, but most of those deps make it into the global cache too whenever I work on a servo dep out of tree)

I've removed dupe crates from the list. I have around 600 total crates in my cache dir, these are just the ones containing unsafe code.

Around 70 of these crates use unsafe for FFI. Around 30 are abstractions like crossbeam and rayon and graphs.

I was surprised at the number of crates using unchecked indexing and unchecked utf8. I suspected it would be less than 10, but it's more like 20. Still, not too bad. It's usually one or two instances of this per crate. That's quite manageable IMO. Though you may want to be stricter about this and consider those numbers to be problematic, which I understand.

I bet you're right that many of these crates can have the unchecked indexing or other unsafe code removed (or, the perf penalty is not important anyway). I probably should look into this at some point. Thanks for bringing this to my attention!


I looked at a few.

"itoa" is clearly premature optimization. That uses an old hack appropriate to machines where integer divide was really expensive, like an Arduino-class CPU. It's unlikely to help much on anything with a modern divide unit.

"httpparse", "idna", "serde-json", and "inflate" should be made 100% safe - they all take external input, are used in web-facing programs, and are classic attack vectors.

Not much use of number-crunching libraries; that reflects what you do.

I'll look at some more later. How to deal effectively with incoming UTF-8, especially bad UTF-8, may need some thinking.


I maintain two of the crates you called out so here is a bit more detail on the use cases:

"itoa" is code that is copied directly from the Rust core library. Every character of unsafe code is identical to what literally everybody who uses Rust is already running (including people using no_std). Anybody who has printed an integer in Rust has run the same unsafe code. It is some of the most widely used code in Rust. If I had rewritten any of it, even using entirely safe code, it would be astronomically more likely to be wrong than copying the existing code from Rust. The readme contains a link to the exact commit and block of code from which it is copied.

As for premature optimization, nope it was driven by a very standard (across many languages) set of benchmarks: https://github.com/serde-rs/json-benchmark

"serde_json" uses an unsafe assumption that a slice of bytes is valid UTF-8 in two places. This is either for performance or for maintainability, depending on how you look at it. Performance is the more obvious reason but in fact we could get all the same speed just by duplicating most of the code in the crate. We support deserializing JSON from bytes or from a UTF-8 string, and we support serializing JSON to bytes or to a UTF-8 string. Currently these both go through the same code path (dealing with bytes) with an unchecked conversion in two important spots to handle the UTF-8 string case. One of those cases takes advantage of the assumption that if the user gave us a &str, they are guaranteeing it is valid UTF-8. The other case is taking advantage of the knowledge that JSON output generated by us is valid UTF-8 (which is checked along the way as it is produced).

Here again, both of those uses are driven by the benchmarks in the repo above and account for a substantial performance improvement over a checked conversion.
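
In isolation, the kind of cast in question looks like this (a standalone sketch, not the actual serde_json code):

```rust
use std::str;

fn main() {
    let bytes = b"hello".to_vec();

    // Checked conversion: validates every byte is UTF-8 (an O(n) scan).
    let checked: &str = str::from_utf8(&bytes).expect("valid UTF-8");

    // Unchecked conversion: skips the scan entirely. Sound only when the
    // caller already guarantees validity, e.g. the bytes came from a &str.
    let unchecked: &str = unsafe { str::from_utf8_unchecked(&bytes) };

    assert_eq!(checked, unchecked);
    println!("{}", checked);
}
```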


"serde_json" uses an unsafe assumption that a slice of bytes is valid UTF-8 in two places. This is either for performance or for maintainability, depending on how you look at it. Performance is the more obvious reason but in fact we could get all the same speed just by duplicating most of the code in the crate.

Could that be done safely with a generic, instantiated for both types?


Yes, that is what we already do. The two unsafe UTF-8 casts are the two critical spots at opposite edges of the generic abstraction where the instantiation corresponding to UTF-8 string needs to take advantage of the knowledge that something is guaranteed to be valid UTF-8.

What we have is as close as possible to what you suggested.

As I mentioned, we could get rid of the unsafe code in other ways without sacrificing performance. Ultimately it is up to me as a maintainer of serde_json to judge the likelihood and severity of certain types of bugs and make tradeoffs appropriately. There are security-critical bugs we could implement using only safe code, for example if you give us JSON that says {"user": "Animats"} and we deserialize it as {"user": "admin"}. My judgment is that using 100% safe code would increase the likelihood of other types of bugs (not related to UTF-8ness) and the current tradeoff is what makes the most sense for the library.

From another point of view, performance and safety are synonyms in this case, not opposites. If we use 0.1% unsafe code and perform faster than the fastest 100% unsafe C/C++ library (which is what the benchmarks show for many use cases) then people will be inclined to use our 0.1% unsafe library. If we give up unsafe but sacrifice performance, people will be inclined to use the 100% unsafe C/C++ alternatives.


Yeah, this was basically my conclusion too.

I'm somewhat okay with the parsing ones using unsafe if we can be very sure that the unsafe code actually has a performance impact, and be very careful about it. Some of them already do this, but not all.


There's also the tired sysadmin crowd who are tired of rebooting thousands of hosts for kernel, shell, libc, etc. patches. And tired of patching web, mail, dns, etc. servers. I'm sure there are really smart C and/or C++ developers out there that never make mistakes, but I've spent a large part of my career patching/upgrading really smart people's code.

For me, safety is the killer feature in Rust. It's also exciting because it brings systems level programming to a new generation of programmers without all the risk.


> Any time your code takes in untrusted input, it should not be written in an unsafe language.

So basically just about all programs, all of the time?

https://www.owasp.org/index.php/Don't_trust_user_input


I agree, but people seem to feel that their code should somehow be exempt from such advice, and so sacrifice safety for performance. This leads to today's sorry state of affairs.


The problem is that safety doesn't sell. If you're getting a new IoT heat lamp you look at the price and not the firmware's code. To your surprise, the first hacker coming along toasts your cat.


Rust may ultimately be the better solution for many or most cases, but right now SaferCPlusPlus[1] may be the more expedient solution for existing C/C++ code bases.

> Any time your code takes in untrusted input, it should not be written in an unsafe language.

Not just that, but my theory is that untrusted input should only be stored in data types specifically designed for untrusted input [2], and should undergo safety/sanity checks during conversion to more high-performance types. For example, a general rule might be that untrusted integer inputs may only be converted to (high-performance) native integers if their value is less than the square root of the max integer value.

[1] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus

[2] https://github.com/duneroadrunner/SaferCPlusPlus#quarantined...
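
A sketch of that rule (hypothetical `vet_untrusted` helper, in Rust for concreteness; the square-root bound means products of two vetted values can't overflow):

```rust
// Release an untrusted integer as a native one only after a range
// check against the square root of i64::MAX (about 3_037_000_499).
fn vet_untrusted(raw: i64) -> Option<i64> {
    let limit = (i64::max_value() as f64).sqrt() as i64;
    if raw >= 0 && raw <= limit {
        Some(raw)
    } else {
        None
    }
}

fn main() {
    assert_eq!(vet_untrusted(12345), Some(12345));
    assert_eq!(vet_untrusted(i64::max_value()), None);
    assert_eq!(vet_untrusted(-1), None);
    println!("{:?}", vet_untrusted(12345));
}
```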


I agree with the premise of the article.

However, I feel that Steve Klabnik is trying to dispel myths about Rust not being anything "but" safety, to shape how other Rust developers talk about Rust, not denying that Rust's central purpose is around being a safe language.

This is because there is a lot of miscommunication about Rust. A lot of people who aren't immediately sold on the language walk away thinking it's slow (it's not), it's complicated (not really), and not production ready (it actually is). And that's because Rust developers don't know how to talk about Rust. I am guilty, for one.

Since Steve is such a huge part of RustLang development, it's his duty to direct the conscious effort to promote the language.

No reason to get into a debate over click-baity titles. :)


The issue with safety is that nothing is really safe. Once you have some level of safety in your programming language, you realize that there are still a lot of other sources of hazard (hardware errors, programming logic errors etc.)

So I guess, it would be better to say that Rust is about decreasing unsafetyness or whatever the correct word for that is.

edit: since I see posts about Go, this is evidently another approach toward decreasing unsafetyness by providing fewer and easier to understand primitives so that the programming logic is harder to write wrong. It might come at a moderate cost for some applications.


> The issue with safety is that nothing is really safe.

There is a trade-off between safety and expressiveness. Clearly you can always shoot yourself in the foot if your language is expressive enough (like any Turing-complete language).

But I think that is beside the point here. This is about eliminating whole classes of errors.

A good type system (e.g Rust's, Haskell's..) can eliminate all type errors from your programs.

A good memory model can eliminate all unsafe memory problems.

There are also languages that can eliminate all data races from your programs.

All these advances in PL theory make it easier and safer to deal with hard problems like concurrency, memory management etc. and thus allow us to focus on what our programs can actually do.


> A good type system (e.g Rust's, Haskell's..) can eliminate all type errors from your programs.

It will eliminate errors related to the use of a given programming language. It will not necessarily avoid systemic errors. The programming language is only one part of the problem. Safety is a wider issue than just the use of a programming language.

Especially since the systems we use are often dynamic with changing requirements.


> It will eliminate errors related to the use of a given programming language. It will not necessarily avoid systemic errors.

Like I said: It will eliminate a specific class of errors, namely all type errors. Your program will literally not compile if there are any type errors.

> The programming language is only one part of the problem. Safety is a wider issue than just the use of a programming language.

Sure, I don't disagree with that statement. But it's important to recognize that eliminating whole classes of errors is extremely valuable and allows us to focus on the important things.


Every type system eliminates all its own type errors by definition. Even the trivial system with one type eliminates all its own type errors (vacuously, since there are zero of them).

There is no universal set of errors called type errors. What are type errors depend on your type system. A good type system allows more errors to be encoded as type errors so you can catch them at compile time, but it doesn't mean anything to say that a language like Rust or Haskell eliminates all type errors. There are certainly type systems which could catch more errors.


Sure, not all type systems are created equal. And there are indeed type systems that can catch more errors than Haskell's (although that usually comes at the price of losing type inference).

But I read OP's point as "Well, you can never catch all programming errors with PL_feature_X, so why even bother." And my point is simply that a lot of PL features make formerly hard things easy and thus allow you to go faster and focus on more interesting things.


You completely misunderstood what he was trying to tell you.


There was nothing interesting in what he was trying to say.

Yes, no programming language perfectly eliminates all classes of unsafety. But that's no reason to let the perfect be the enemy of the good! "The issue with safety is..." no issue at all. Being safe in a bunch of problem domains (Rust) is still strictly better than being safe in very few if any of them (C).


So you did it on purpose. That just makes you a bad actor in the conversation.


I am not whoever you imagine you're responding to (rkrzr, I guess)


No, that's you.


More to the point, Rust and other statically typed languages* eliminate type errors at compile-time.

In e.g. Python, the following code:

    foo = Bar()
    foo.baz()
will compile without complaint but, supposing that baz() is not a member of class Bar, will cause errors when the code is actually run. In statically typed languages this will be caught by the compiler and treated as an error; type errors are simply not allowed in compiled programs.

This distinction is significant, as Python and other dynamically-typed languages require comprehensive test suites for any non-trivial software written in them, shifting the burden of ensuring type safety to the programmer. Testing systems for statically typed languages don't need to concern themselves with type safety. Dynamic typing also carries performance penalties at run-time (checking type safety for e.g. every member access).

* Some statically-typed languages (e.g. C++) allow for very specific subversions of type safety at run-time, but it's usually clear to the programmer when they are doing something dangerous.


"but its usually clear to the programmer when they are doing something dangerous"

Um, no. Every pointer is fundamentally unsafe, and a lot of C++ code is written with pointers through and through.


"Every type system eliminates all its own type errors by definition."

Nonsense. C/C++ will happily produce a warning, if you specified -Wall, that you violated a type constraint and then go ahead and compile your program. To suggest that such violations are part of C's type system is to pedantically and willfully miss the point.


> A good type system (e.g Rust's, Haskell's..) can eliminate all type errors from your programs.

Do you consider silently trimming a value during a mandatory explicit type conversion a type error?


No, because if you explicitly converted to another type, that's what you wanted.


Sometimes you want it to be lossy, but not most of the time, and yet there is no choice. I had a bug caused by that, that's why I remember it. Silent explicit type conversions are essentially unsafe.


What do you mean by "silent explicit type conversion"? If you said "silent type conversion" I'd read that as "implicit type conversion". But you said "explicit", which means you've got it very clearly in your code that you're doing a type conversion (to a smaller type), so what's silent about that?


Silent in terms of compiler not complaining.


Why would the compiler complain? Doing a narrowing type conversion is a perfectly legitimate thing to do. So when you ask the compiler to do it, it should.


It forces you to explicitly say that you know you're forcing a value into a narrower type; I think the fact that this might mean loss of information is understood, by definition. What would you like it to do?


I would like it to tell me if I accidentally converted to a narrower type. This is a problem because I don't necessarily see the type I'm converting from, due to type inference, or I'm simply too far from the context where that type is declared and have to make assumptions to keep going. These assumptions of course fail sometimes and cause bugs. Same problem with precision, by the way. I'm not sure how exactly compilers should fix this; the easiest fix seems to be having different operators to explicitly allow lossy conversions when necessary. But the bigger deal would be to treat numbers as sets of possible values, with solvers or whatever to warn about mistakes where you use unhandled values in the code somewhere.


Rust actually has this, this works:

    fn main() {
        let x = 5i32;
        let y: i64 = x.into();
        println!("{}", y);
    }
this doesn't work:

    fn main() {
        let x = 5i32;
        let y: i16 = x.into();
        println!("{}", y);
    }
you have to write:

    fn main() {
        let x = 5i32;
        let y = x as i16;
        println!("{}", y);
    }
where `as` has the potential of being lossy!
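
And when lossiness needs to be surfaced as an error rather than permitted, a checked narrowing can be written by hand (a sketch; the standard library later gained `TryFrom` for exactly this):

```rust
// Hand-rolled checked narrowing, for contrast with `as`, which
// silently truncates. Returns None when the value doesn't fit in i16.
fn checked_i16(x: i32) -> Option<i16> {
    if x >= i16::min_value() as i32 && x <= i16::max_value() as i32 {
        Some(x as i16)
    } else {
        None
    }
}

fn main() {
    assert_eq!(checked_i16(5), Some(5));
    assert_eq!(checked_i16(70_000), None); // `70_000 as i16` would wrap
    println!("{:?}", checked_i16(70_000)); // prints "None"
}
```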


I agree that this is where we should be headed; it seems to me that Liquid Haskell, which was submitted to HN recently[1], could actually do what you need, since it uses an SMT solver to check for preconditions.

The casting function could specify the valid input values, and force the programmer to handle the rest of the cases when the input type is wider.

[1] https://news.ycombinator.com/item?id=13125328


> Silent explicit type conversions are essentially unsafe.

This is a contradiction, something cannot be both silent and explicit.


Sounds like an implicit conversion.


In Rust, lossy conversions only occur if you explicitly write `var as type`, and even that syntax is limited to certain types; e.g. you can't coerce an integer to a function. In order to do something crazy like that, you'd need to call the unsafe `mem::transmute` function. The language cannot be much safer in this regard, short of disallowing any sort of type conversions.


"Clearly you can always shoot yourself in the foot if your language is expressive enough (like any Turing-complete language)."

This is a common misunderstanding of being Turing complete. A program running in an interpreter isn't unsafe just because the programming language and the interpreter are Turing complete. Being Turing complete doesn't mean being able to gain root access, overwrite the boot block, scribble on the hard drive, etc.


> A good type system (e.g Rust's, Haskell's..) can eliminate all type errors from your programs.

It depends what you call a "type error". Is calling `car` on a `nil` instead of a `cons` a type error?


In Common Lisp, 'nil is of type 'null, which is a subtype of 'list, which is a union of the types 'null and 'cons, so it wouldn't be an error. Other lisps might choose to do it differently.


It is in Rust, although I'm sure you can come up with something where the type system won't save you.
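
For example, the "car of nil" case surfaces at compile time because the head of a slice is an `Option`:

```rust
fn main() {
    let full = [1, 2, 3];
    let empty: [i32; 0] = [];

    assert_eq!(full.first(), Some(&1));
    assert_eq!(empty.first(), None);

    // `.first()` can't be dereferenced directly; the type system forces
    // a match/if-let, so the nil case is a visible decision, not a crash.
    if let Some(head) = full.first() {
        println!("{}", head);
    }
}
```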


True, they messed up in their PR a bit with bold claims about safety. It definitely would be better to be careful with the words they use.

Like this "safe concurrency" claim sounds really fearless to me, even though I know they mean some guarantees towards thread safety and all that, not actual safe concurrency.


The docs are very clear about what safety means, but agreed that the subtleties can get lost in advertising. https://doc.rust-lang.org/book/unsafe.html#what-does-safe-me...


Yes, for instance, it's easy to create concurrent programs that are semantically wrong (in other words inadequate for use) albeit correct in terms of "types" because the coder made an erroneous assumption about determinism somewhere. The type systems that we see nowadays do not help with that.


> Rust is about decreasing unsafetyness or whatever the correct word for that is.

I think the word you're looking for is "safety". Safety is inherently relative and mostly about risk management. There's no such thing as 100% safe by definition.


"unsafety" rather ? ;) decreasing non-safety is not the same as increasing safety. One starts with the assumption that things are safe. The other does not.


Yeah, I'm a bit worried that Rust is raising the floor, but maybe lowering/hardening the ceiling when it comes to code safety. I mean, if you consider static (compile-time) versus dynamic (run-time) safety, Rust leans heavily toward the former, and presumably gains a performance benefit because of it. But Rust acknowledges that it is not practical to achieve memory safety completely statically and so provides dynamically checked data types as well (vectors, RefCell, etc.).

As you consider higher (application) level notions of safety, it generally becomes less practical to achieve that safety statically (at compile-time), so you'd want your language or your framework or whatever to facilitate the implementation and performance optimization of dynamic (run-time) safety. At the moment I'm thinking about automatic injection of run-time asserts (of application level invariants) at appropriate places in the code. (At the start and maybe at the end of public member functions for example.)

If you subscribe to this idea, then it sort of follows that Rust's borrow checker may be "in the wrong place". That is, rather than forcing you to write code that is memory safe in a particular statically verifiable way, Rust could have instead enforced memory safety by injecting run-time checks into the code and optimizing them out when it recognizes code that appeases the borrow checker. (Of course the optimizer could report what run-time checks it was not able to optimize out, if you wanted to self-impose static verification.)

(Statically optimized) dynamic safety is more scalable than statically verified safety. As a "systems language", Rust may be less concerned with higher/application level safety. But I think this might be a little short-sighted. The definition of "system" is expanding, and the proportion of "higher level" safety concerns along with it.


> If you subscribe to this idea, then it sort of follows that Rust's borrow checker may be "in the wrong place". That is, rather than forcing you to write code that is memory safe in a particular statically verifiable way, Rust could have instead enforced memory safety by injecting run-time checks into the code and optimizing them out when it recognizes code that appeases the borrow checker.

That kind of lack of transparency about what in the hell your code is doing at runtime is really inappropriate in a systems language.

It's an interesting idea, and it would be neat to play with in a language that wanted to restrict itself to more business-logic level safety concerns, but it would absolutely come at the cost of not being appropriate for systems-level tasks.


Hmm. Is it less transparent than vectors which use implicit run-time bounds-checks? Don't RefCells use implicit run-time checks? (Btw I don't know Rust very well, so feel free to correct me.) And what about the question mark operator for dealing with exceptions/errors? Isn't there a lot going on under the hood there?

But yeah, I can understand the sentiment of wanting to minimize that kind of thing in a lot of cases. Perhaps, like C and C++, Rust might consider bifurcating into a "high transparency" language, and a "high productivity" superset of the language. In that case, would all of the existing Rust language make it into the "high transparency" subset?

Like I said, the problem is that no one's defining what a "system" is. Haven't they written a browser rendering engine in Rust? Is that a "system"? Is there any part of the browser that does not qualify as a system?


RefCell does use run-time checks; that's its entire reason for existing. They're "implicit" in the sense that they're inside of the functions, but that's the job of calling the function, so.
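
A standalone sketch of that run-time borrow tracking:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);

    // Each borrow is checked at run time rather than compile time.
    {
        let a = cell.borrow();
        let b = cell.borrow(); // a second shared borrow is fine
        assert_eq!(*a + *b, 10);
    } // both borrows dropped here

    // try_borrow_mut surfaces the run-time check as a Result
    // instead of a panic.
    let held = cell.borrow();
    assert!(cell.try_borrow_mut().is_err()); // would alias `held`
    drop(held);
    assert!(cell.try_borrow_mut().is_ok());
    println!("done");
}
```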

Question mark is roughly six lines of code, it's a match statement on Result, which has two cases.
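
Sketched out (the real desugaring also passes the error through `From::from` for conversion):

```rust
use std::num::ParseIntError;

// Roughly what `?` expands to: a match that early-returns the Err case.
fn parse_doubled(s: &str) -> Result<i64, ParseIntError> {
    let n: i64 = match s.parse() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok(n * 2)
}

// The same function written with `?`.
fn parse_doubled_q(s: &str) -> Result<i64, ParseIntError> {
    let n: i64 = s.parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_doubled("21"), Ok(42));
    assert_eq!(parse_doubled_q("21"), Ok(42));
    assert!(parse_doubled_q("oops").is_err());
    println!("{:?}", parse_doubled_q("21")); // prints "Ok(42)"
}
```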


So do you agree that Rust should remain a "high transparency" language? Do you have an opinion on a "high productivity"/"application level safety supporting" superset of the language? Rust seems to be creeping out of the system space and into the application space. Will/should Rust go out of its way to support application level programming? (By, among other things, facilitating enforcement of application level invariants?)

edit: Or is that just looking too far ahead?


To me, this is a library concern. Rust the language should remain high transparency, but that doesn't mean that programming in it should force you to always deal with every single thing if you don't want to. Look at the recent addition of ?, for example: you can get very succinct code if you don't want to deal with the particulars, but you still have access if you want to. I think good Rust libraries will end up like that.


Exactly. There is always the danger of self-hypnosis, by repeating 'memory safety means safety, full stop' too often.


In the context of Rust, "safety" usually means "memory and data-race safety".


Yes! Rust adds a way to manage it, in a two-tier system. There is `unsafe` marked code blocks and code without. The trusted code base has to be in the part marked `unsafe`.

It's simple (only the two tiers), but it is another tool for abstraction and managing complexity.


When the compiler doesn't let you write race conditions or unintended variable mutation, it's a huge thing, not just "decreasing unsafetyness". Although I hope Rust will also get rid of array bounds errors.
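
For example, shared-state mutation across threads only compiles once it's provably race-free (a minimal sketch; replacing the Mutex with a bare Arc<i32> makes this fail to compile):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The compiler requires the shared counter to be wrapped so that
    // the absence of data races can be proven.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 4);
    println!("{}", *counter.lock().unwrap()); // prints "4"
}
```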


If you're a C++ programmer, Rust is mostly about memory safety.

If you're a Java programmer, Rust is mostly about tighter resource usage.

If you're a Python programmer, Rust is mostly about type safety and speed.


I'm a polyglot programmer and for me Rust is mostly about the awesome abstractions and the great community.


> If you're a Javascript programmer, Rust is mostly about the awesome abstractions and the great community.

;)


If you are a c++ programmer, rust is also a lot about developer ergonomics. Syntax is nicer, build system is built in, less UB to think about, package management comes built in, and finally getting rid of those damn header files is such a relief.


I do not mean to pick on C++: the same problems plague C, Ada, Alef, Pascal, Mesa, PL/I, Algol, Forth, Fortran ... show me a language with manual memory management and threading, and I will show you an engineering tragedy waiting to happen.

I think if programming is to make progress as a field, then we need to develop a methodology for figuring out how to quantify the cost-benefit trade-offs around "engineering tragedies waiting to happen." The fact that we have all of these endless debates that resemble arguments about religion shows that we are missing some key processes and pieces of knowledge as a field. Instead of developing those, we still get enamored of nifty ideas. That's because we can't gather data and have productive discussions around costs.

There are significant emergent costs encountered when "programming in the large." A lot of these seem to be anti-synergistic with powerful language features and "nifty ideas." How do we quantify this? There are significant institutional risks encountered when maintaining applications over time spans longer than several years. There are hard to quantify costs associated with frequent short delays and lags in tools. There are difficult to quantify costs associated with the fragility of development environment setups. In my experience most of the cost of software development is embodied in these myriad "nickel and dime" packets, and that much of the religious-war arguing about programming languages is actually about those costs.

(For the record, I think Rust has a bunch of nifty ideas. I think they're going down the right track.)


> A lot of these seem to be anti-synergistic with powerful language features and "nifty ideas."

I think this is a pretty big myth that only applies to some of these language features.


> I think this is a pretty big myth that only applies to some of these language features.

If you admit that it applies to some language features, then it's not a myth by definition.

In Smalltalk, #doesNotUnderstand: handlers and proxies and unfortunate "clever" use of message sends synthesized within custom primitives could result in outsized costs. (It's where method_missing comes from in Ruby.) It's not that you couldn't do powerful and useful things with those facilities. It's that large projects that were around for years tended to accumulate "clever" hacks from bright young developers with a little too much hubris. Often, those costs would be incurred years after the code was written.

Yes, it only applies to some language features. But it clearly does apply to some of them. I don't think it's easy to come by quantified costs for these. Doesn't this strike you as a problem for our field?


The original Rust author make great points about safety. I think this new thrust on marketing emerges from Rust Roadmap 2017 which puts Rust usage in industry as one of the major goal. Currently Rust is about Go's age but nowhere close in usage. As the roadmap says "Production use measures our design success; it's the ultimate reality check." I agree with that.


> Currently Rust is about Go's age but nowhere close in usage.

Rust 1.0 was released in 2015, so it's merely one and a half years old, while Go 1.0 was released in March 2012.

If you count Rust's pre-1.0 inception period, you should also count Ken Thompson's and Rob Pike's work on Plan 9, which doesn't make any more sense …

Fun fact: Go's first commit is 44 years old [1] ;)

[1] https://github.com/golang/go/commit/7d7c6a97f815e9279d08cfae...


It is not my intention to show Rust in a bad light. I roughly mean to say that both languages have had about 6-7 years of engineering effort by now, but usage differs by an order of magnitude or so.

I agree that they had very different priorities in the beginning and that this changes with time. My goal was merely to point out that the core Rust people at Mozilla and elsewhere now recognize industry usage as an area of high importance in the coming months/years, in contrast to the purely technical concerns of the past.


I think you're not looking at this correctly. Swift also had a very fast-paced release cycle, like Go.

Rust took a different path: until the 1.0 release the developers basically said, use at your own risk; we reserve the right to change anything and everything and break it all. This freed them from trying to keep the language backward compatible.

After the 1.0 release, there have been nearly no breaking changes introduced to the language, and they have signaled that they want to keep this stability going into the future. This is a big difference from Go, which decided to go for an earlier public release and now is much more constrained in how it can change (if they don't want to break all the stuff built on it out there).

So it's not fair to include the whole 6-7 years as a development cycle; the pre-1.0 years are better thought of as a research period, one that laid the groundwork for the safety-in-everything that is the basis of Rust now.


I said 6-7 years of engineering effort, not development cycle; the two don't always line up. I am not blaming them for taking long to get things right. If the authors think they need more time, then of course they need more time. Right now they really want to have broader industry usage, and this can't be any clearer than when they say:

"Production use measures our design success; it's the ultimate reality check. Rust takes a unique stance on a number of tradeoffs, which we believe to position it well for writing fast and reliable software. The real test of those beliefs is people using Rust to build large, production systems, on which they're betting time and money."


Well, Go is Limbo with some Oberon-2 touches.


I thought it was supposed to be Oberon-2 with some Limbo, C, etc. touches. That's part of how it became my slam dunk against C in another discussion. ;)


If you read the Inferno programming guide, you will see how much the languages resemble themselves.

Major differences are that Limbo uses a VM based runtime with dynamic loading and Abstract Data Types.

But your approach is also good, still Oberon had some issues that were eventually improved in Oberon-2 that Go lacks.

On the other hand Oberon-07 is even more bare bones than Go.


I only glanced at Limbo. I'll have to take a more detailed look at it. Inferno was definitely interesting. It was even ported to Android to replace Java runtime by people at Sandia.


Most languages resemble themselves very strongly.


You lost me there, regarding Go vs Limbo.


Dumb joke. "resemble each other" is a more unambiguous way to say what you meant.


It might be Go's age overall from initial inception, but a typical point at which people start paying attention is the 1.0 release. In Go this was March 2012; for Rust it was May 2015, over three years later.


To be honest, Rust took so long to stop making breaking changes and stabilize, I sort of tuned out on it -- it was never at a point it made sense to start using.

Has Rust actually settled down on some stability?


A little over 18 months ago, 1.0 was released. We've had very strong compatibility guarantees since then.


While that is true, in Go, almost no libraries are written to use non-stable features. This is not the case in Rust.


There are really only two popular Rust libraries that use unstable features:

1. serde, the best serialization/deserialization library. This works on stable now using the `serde_codegen` crate and a custom `build.rs` script. This will Just Work on stable with no extra setup once Macros 1.1 lands, theoretically in about 5 weeks. But I'm using it on stable now in a half-dozen projects, thanks to a `build.rs` script.

2. Diesel, the high-level ORM library for Rust. This works on stable using a `build.rs` script, and 90% of it will work on stable without the `build.rs` script once Macros 1.1 ships.

There are a few other experimental libraries like Rocket (which looks very slick) that only work on nightly Rust. But I don't think any of them are particularly popular yet.

Personally, I maintain something like two dozen Rust crates and applications, and only two use nightly Rust. Both need Macros 1.1, which should be on stable in about 5 weeks.


serde works on stable without serde_codegen or custom build scripts, too. It works like a regular library in fact, just with fewer features (no code generation). A custom data structure might not be able to use the default code generation for its impls anyway.


This is true of diesel as well: http://diesel.rs/guides/getting-started/


Right. Go and Rust have completely different development models, so that wouldn't make any sense.

To recap, in Rust, to make additions to the language:

1. Small additions mean make a PR.

2. Big additions mean make an RFC, then a PR if accepted.

3. These PRs go behind a feature flag that lets us track the feature, and only allows it on nightly.

4. People who desire the new feature try it out. (This is what you refer to.)

5. If any issues are found, they're fixed, or, in the worst case, the feature is removed.

6. The feature is made available to stable users, the feature flag is removed, and backwards incompatible changes can no longer happen.

What would be un-healthy is if everyone had to rely on nightly for everything. At the moment, most people use stable Rust. And of the people that use nightly, the largest single feature will be stable as of the next stable release for Rust. But some people are always going to be using nightly features, and that's a good thing: it means that by the time it hits stable, it's already undergone some degree of vetting.


Any idea when custom allocator will become stable? That's the only thing holding us to nightly.


I literally had a conversation about this yesterday. We need someone to champion the work. If that's you, we should get in touch.


email sent


Excellent! It might take me a day or two; I have some stuff to look up in order to give you a proper response.


> Most people use stable Rust.

However, many popular or important libraries like Serde or Rocket require the use of nightly. I recall the article a very short while ago on the front page that noted how Rust has effectively diverged into two languages, stable and nightly.


This article has been unanimously qualified as FUD by the Rust user community. Nightly is for innovation and experimentation, not for real use:

- Rocket is an amazing example of innovation, but it's currently just an experiment.

- Serde has worked on stable for a long time, but they experimented with a more ergonomic version on nightly. They iterated upon it with the Rust developers to create a new and well-thought-out stable API which will land in the next version of Rust (Macros 1.1, in February).

Rocket is likely to follow Serde's path during 2017 and will eventually work on stable in 2018. Building a great language takes time ;)


Serde does not require nightly, though it is nicer to use on nightly. That's the "will be stable as of the next stable release for Rust" I alluded to above.

Rocket just came out; I think it's an extremely interesting library, but https://crates.io/crates/rocket shows that it's been downloaded 618 times. It hasn't exactly taken the world by storm yet. I think it shows great potential though! But it's not a good case of showing that the Rust ecosystem largely depends on nightly.

The article you refer to contained a number of factual errors.


Over the last year of intermittently dossing around with Rust I encountered the need to use nightly regularly, with various crates refusing to compile (and usually I stopped my experimentation there as my barrier to entry was anything harder than pacman -S rust cargo). Yet I did not know until just now that Serde is moving off a dependency on nightly and it is encouraging that this is a trend with similar libraries; I stand corrected.


Serde creator here. The core library 'serde' never actually required nightly. It was just the helper library 'serde_derive', which can automatically implement the serde traits, that needed nightly. dtolnay has been doing a wonderful job getting us off nightly. We can't wait to finally get rid of using it.


If you happen to remember which libraries those were, I'd be interested in hearing about them!

> my barrier to entry was anything harder than pacman -S rust cargo

Today, that also wouldn't be the case: "rustup update nightly" and "rustup run nightly cargo build" or whatever, with extra stuff to automatically switch per directory.


We've been using Rust on production since the summer and we only use the stable compiler.

These "popular or important libraries" are nice showcases of what you might be able to do with Rust in the future. But relying on them right now and using them in production is not really a good idea.


That article posted half a week ago [1] claiming Rust developers and libraries rely on nightly was FUD. It's terrible that this is being repeated, because it is simply not true.

[1] https://news.ycombinator.com/item?id=13251729


Er, there are no unstable features in Go... the language is deliberately frozen.


What constitutes a strong compatibility guarantee?

How many breaking changes happened in the past 18 months?



> Currently Rust is about Go's age but nowhere close in usage.

However, Go has been suitable for production projects for several years longer than Rust.

Rust sits in a very useful niche not served by other languages, and in steady state will probably be more popular than Go.

Go has a very well designed ecosystem. I like it, use it, and am very impressed with everything about it I have seen. However, I don't see use cases that Go serves vastly better than other available alternatives.


Go has one very significant advantage over Rust: Simplicity. Reading someone else's Go code is such a breath of fresh air compared to reading someone else's C++ code (or god forbid, Haskell). Having a standard formatting further enhances this. I feel like this aspect of Go is not given enough weight. It's a huge benefit to large software projects and large software companies. I also suspect that this may be why the Go team has been so conservative re: generics: If they aren't implemented carefully, they could unacceptably complicate the language.

Rust is a powerful language, but with that power comes the ability to write "fancy" code -- esoteric, unreadable, and unmaintainable.


In my experience working with new and experienced Go developers, that simplicity is pretty superficial.

Take a look at SliceTricks[0]. These are all pretty simple operations in other languages that the Go authors think users should be forced to write manually so they understand the costs. Reading these in code reviews is definitely not a breath of fresh air.

I also see Go users develop bad habits that will burn them in other languages, like returning a reference to a "stack" allocated variable. In the same vein, if you care about performance you have to internalize the escape analysis rules, which most new users won't know about.

There are tons of footguns[1] in Golang as well. I constantly see people get burned by issues at runtime. I truly believe that the Golang benefits are short-sighted. You get some upfront development gains that you end up paying for with less reliable software. I personally would rather pay that cost upfront, with the compiler telling me when something is wrong.

[0] https://github.com/golang/go/wiki/SliceTricks [1] http://devs.cloudimmunity.com/gotchas-and-common-mistakes-in...


Go is just the current incarnation of the so-called New Jersey approach.

> I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing.


> re: generics: If they aren't implemented carefully, they could unacceptably complicate the language.

Can you explain why?

For example, simple generics that could be implemented with regex-style text substitution don't seem too complicated? Am I missing something?

For example (in Java), an ArrayList<String> could be turned into a StringArrayList by a basic internal pre-processor.
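As a hedged sketch of what such textual monomorphization would emit, here is a hand-specialized container in Go standing in for the hypothetical generated `StringArrayList` (the `StringStack` type and its methods are made up for illustration):

```go
package main

import "fmt"

// StringStack is what a textual ArrayList<String> -> StringArrayList
// style expansion would produce: one specialized container per
// element type, with no type parameters anywhere in the language.
type StringStack struct{ items []string }

// Push appends a value to the top of the stack.
func (s *StringStack) Push(v string) { s.items = append(s.items, v) }

// Pop removes and returns the top value; ok is false when empty.
func (s *StringStack) Pop() (string, bool) {
	if len(s.items) == 0 {
		return "", false
	}
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v, true
}

func main() {
	var s StringStack
	s.Push("a")
	s.Push("b")
	v, _ := s.Pop()
	fmt.Println(v) // b
}
```

The catch the replies hint at: error messages, generic functions, and dynamic linking all fall outside what a textual pre-processor can handle.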


How do you handle error messages? Can you make functions, methods or their parameters generic? Can you dynamically link generics?


> Currently Rust is about Go's age but nowhere close in usage.

Citation? I see a lot of people talking about both, but not very many public projects in either. Rust at least has a "killer app" on the way in the form of Servo, whereas I haven't heard of any user-facing programs in Go.


FWIW, Go and Rust are 16 and 43, respectively on the TIOBE index:

http://www.tiobe.com/tiobe-index/

A Github search turned up 2,658 Go repositories with more than 100 stars:

https://github.com/search?l=go&q=stars%3A%3E100&type=Reposit...

compared to 348 Rust repositories:

https://github.com/search?l=rust&q=stars%3A%3E100&type=Repos...

Notably, Docker has more stars than Go itself.

Edit: you may also be interested in IEEE Spectrum's interactive list of the top programming languages:

http://spectrum.ieee.org/static/interactive-the-top-programm...

With the default parameters, Go and Rust are 10 and 26, respectively.


It's also worth noting that you're a bit less likely to see rust users because the kind of software that wants rust instead of golang tends to be the kind that gets written by a large company with deep pockets and a preference for closed source repositories.


If only Go was useful in building closed source products. Google must not have gotten the memo about Rust yet.


Contrary to popular legend, Google is not particularly strong on Go use, and it's not "THE official language".

It wasn't officially commissioned or officially adopted by Google to solve Google's coding problems as some believe.

It was merely initiated by a small team in Google, as their proposal for solving Google-scale coding problems. And has never been mandatory for new Google projects etc.

Go is ONE of the allowed languages, from what I know, but tons of stuff is written in Java, C++ and Python with no intentions of switching.

All those years, only a few use examples (basically trivial with respect to Google's needs) have come out of Google-land: a proxy/balancer for MySQL used in YouTube, Google Downloads caching, etc.


Why would they replace perfectly working code with something in Go? Java is 15 years older than Go, so obviously a lot more code would be in it. The more interesting case is Dart: despite having 'official' Google support, industry-wide usage is rather tepid compared to Go.


Some Googler claimed that Google currently has single-digit MLOC of Go, and that it's growing. But they have way more C++ and Java code.


Go: https://github.com/golang/go/wiki/GoUsers

Rust: https://www.rust-lang.org/en-US/friends.html

Without doing a precise count, Go usage looks roughly 10 times that of Rust.


Docker and Kubernetes are relatively popular, and both are written in Go.


Go is the language in the LXC, Docker, containerisation space.


Which is something I never understood. Since you are mainly wrapping OS APIs, why not pick a higher-level language?


For all the benefits of higher-level languages. Remember that the designers of C had to agree on every feature that went into Go: a language they co-designed for better programming experience & results. Wirth-style languages also compile ultra-fast for quick iterations.


My question is really why not choose something really higher level? Go is like stuck in the middle. If I need very low level, I'd choose Rust. If I need high level, I'd choose Python. Go is kinda filling a weird niche between the two and I rarely find a case where I feel like it's the best choice.


What language you have in mind?


Anything that is robust, dynamic, with a big ecosystem. Python, Ruby, what have you. You don't need raw speed since the OS API does most of the work, and the ease of programming would make it more productive. In the end, they added Python anyway for stuff like "compose", so I'm missing the point of using Go for this. The Go code is not even network bound, and the ability to go multicore easily is not a big advantage here since you spawn a new process anyway, so really, why?


Go has much lower memory usage, high throughput, std ssl/tls/http libraries, high performance GC and produce fully static binaries. I do not think Python/Ruby can provide required performance for containers, cluster scheduling, orchestration etc. Even Java with much higher perf compared to Python/Ruby is not suitable because of high memory usage.


> lower memory usage, high throughput [...] high performance GC

Irrelevant, as most resources will be consumed by the underlying OS API.

> std ssl/tls/http libraries

So does Python and Ruby. But even if it didn't, you provide a package anyway.

> produce fully static binaries

You can do that with Python and Nuitka. But there is no need for it, since Docker is provided as an msi/deb/whatever that takes care of distribution.

> high memory usage.

On your xGB RAM server, the memory usage of your container runtime is the least of your problems. Your DB will dwarf it, your app will dwarf it. Anything you put in your containers will take 100 times more memory.


OCaml is my first thought in that space.


OCaml is a good choice, and Anil Madhavapeddy used it for his unikernel ideas. But OCaml doesn't have strong backing any more. Jane Street alone isn't enough.


InfluxDB and Prometheus are written in Go. Similarly, etcd.


Docker is go, and afaik has always been.

https://github.com/docker/docker


Rust code is also going into Firefox, slowly for now, but it will speed up over 2017.


Part of the issue stopping me from jumping in is that it feels like the language is still changing in ways large enough to make it difficult to learn. That may not be true anymore, but it seems like it would take a lot of work to keep up with the current 'best practices'.


It's true that idioms are still developing; there are tools like clippy to help you learn them, though.


> As the roadmap says "Production use measures our design success; it's the ultimate reality check." I agree with that.

I don't. It's a measure of the overall success. Design is but a small part of that. Community, outreach, corporate backing… play a huge part in the success of a language.

Go is a wonderful example: doesn't even provide parametric polymorphism (generics), and they got away with that! Feels like Google backup matters more than the core language here. Either that, or someone please explain why omitting generics today is not a big mistake. Feels like dynamic scope all over again.


Think about who is picking up which language and how many programmers with what background there are. Go offers solutions for certain problems that many programmers have: performance and ease of deployment are the biggest ones for people coming from scripting languages, and there are a lot of them. Having easy-to-pick-up syntax doesn't hurt either. But generics are not as valuable for them at the beginning. And Go's syntax has like a dozen critical problems either way; might as well add generics in 2.0 together with all the fixes.


Google backing really leans more towards Dart than Go. Dart has many more Google contributors vs. non-Google ones. The Dart Dev Summit seems to be totally sponsored by Google, and it even had free entry, whereas Gophercon is independent of Google.

Dart has generics, though, but not much usage in industry.


> Go is a wonderful example: doesn't even provide parametric polymorphism (generics), and they got away with that!

Only because one can opt out of the type system and use a catch-all "interface{}". Actually, even the std lib does that... a lot.
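A minimal sketch of that escape hatch (the `sumInts` helper is made up for illustration): a `[]interface{}` accepts anything at compile time, and type mismatches only surface at runtime via assertions:

```go
package main

import "fmt"

// sumInts sums only the ints in a []interface{}. Non-int elements
// compile fine; they are silently skipped here, but forgetting the
// ok-check would panic at runtime instead of failing at compile time.
func sumInts(xs []interface{}) int {
	sum := 0
	for _, x := range xs {
		if n, ok := x.(int); ok { // runtime type assertion
			sum += n
		}
	}
	return sum
}

func main() {
	mixed := []interface{}{1, "two", 3} // types erased into interface{}
	fmt.Println(sumInts(mixed))         // 4
}
```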


> As the roadmap says "Production use measures our design success; it's the ultimate reality check." I agree with that.

C (and later C++) became popular because Unix was successful. Safe systems programming and safe browsers are nice to have but not completely safe if the underlying OS is unsafe (Windows in particular). Rust's "killer app" would be a safe OS. The first attempt (Redox) is already there.


That's what I'm getting from the safe features emphasis and the fiercely C/C++ competitive slant argued by the evangelists in this thread.

I was looking through the pointer safety portion of the safety brochure: https://doc.rust-lang.org/book/unsafe.html#what-does-safe-me... and wondering how to approach unwrapping an ethernet packet on the wire in the easy unsigned pointer-specific way it can be done in C and came across a pcap implementation in Rust (sample source): https://github.com/ebfull/pcap/blob/master/src/raw.rs Are you serious? Rust is not for me, thanks.


> countless lives lost

I have no doubt that people have had their lives ruined, or even died, as the result of flaws in system programming, but is anyone actually tracking this? Is it "countless?"


No. Nobody is counting. That's why the OP said countless.

"What have you done? That vase was priceless!"

"Oh good, I was afraid it was expensive."


While that's a funny definition of the word, technically speaking the word "countless" does mean "too many to count".


I think it was humor :)


TDD: "If you're not doing this, you're not a real engineer."

Rust: "If you're not doing this, you're a murderer."


I'd really like to see an organization when evaluating tools or practices, do an actual risk assessment even just once. Even a hand-wavy conversation where we make 0th-order number-of-zeros estimates on any of the quantities involved in making such a decision.

I'm a huge fan of thorough designs, engineering formalities and quantifying application performance. But if we're building some fast-food ordering app that is expected to increase sales by 10 percent as soon as it's done, and there's no risk from quality or availability or performance of the application (other than the 10 percent extra sales; we expect people to walk in / drive thru like normal if it under-performs on those metrics), then it's obvious the business value is in launching a minimal implementation as soon as possible using tools that optimize for productivity, not safety or performance.

Too often I see the dogmatic pure-functional TDD line-of-business app with continuous-deployment infrastructure that could break for two weeks at a time without impacting anyone, or the business-critical application of code copy-pasted from Google searches that loses money for every hour it's offline but has no software development lifecycle or monitoring or alerting or anything.


Yep. That may not be the intention but it certainly comes off that way.


Well, if no one is tracking it, there is no count and that would thus be countless.

But in all seriousness, I am curious about this as well.


It's not "countless", it's "uncountable", because we couldn't agree on what a software-caused death is.

In my CS program we had an ethics class that included stories of bad X-ray machine software that overdosed people. Bad, bad, bad. I don't think many people died as a direct result, but 10 years later there was probably a spike in cancer incidence. Did software kill people? Well yeah....kinda.

In airplane systems, there have been a number of cases where bad alerting / warning systems basically either misled the pilot or lulled them into a false sense of security prior to events that caused crashes. Did software kill people? Uhm, yes, I think, sort of, but not directly?

It's only when we get fully sentient AI that arms itself and decides to clean up the human pestilence that we'll be able to draw a straight line there. :)


Errors in the software of the radiation therapy machine Therac-25 directly led to the death or serious injury of six patients. Three patients died within weeks or months. http://radonc.wdfiles.com/local--files/radiation-accident-th...

Errors in the software of a MIM-104 Patriot resulted in failure to locate and intercept an incoming missile and the death of 28 soldiers. http://www.gao.gov/products/IMTEC-92-26

Errors in the software of a Chinook helicopter may have led to a crash that killed 29 people. http://www.publications.parliament.uk/pa/ld200102/ldselect/l...

Errors in the software of Toyota's throttle control system may have led to the death of 89 people. https://betterembsw.blogspot.ch/2014/09/a-case-study-of-toyo...

More: http://www.baselinemag.com/c/a/Projects-Processes/Eight-Fata...


There was no error in the Toyota throttle control system. It was a pedal error with floor mats, and a separate pedal design error for other models [1], combined with operator error. If you are interested, I highly recommend Malcolm Gladwell's podcast Revisionist History, which did an episode on this[2]. Long story short, your brakes can easily stop your car even at full open throttle, and in not much more space than braking without any throttle. Unfortunately, a foible of human behavior seems to lead us to get flustered and often not do the right thing in situations like these.

1: https://en.wikipedia.org/wiki/Sudden_unintended_acceleration...

2: http://revisionisthistory.com/episodes/08-blame-game


I agree that the pedal design, human error and floor mat problems are much more likely causes. I don't claim it was a software error for sure. My understanding is that indeed no specific software error was identified, but it also was never ruled out for sure. Neither of the two links seem to contain anything to that effect either.


To my memory, the podcast lays out a fairly comprehensive argument that it was just human error rather than mechanical problems (as well as noting that the car computers in all these cases show the brake wasn't pushed), and backs it up with decades of research showing that this is a common problem, so that's about as close to definitive as you can get in this situation IMHO.


> Errors in the software of the radiation therapy machine Therac-25…

Caused by unsigned 8-bit integer overflow combined with (presumably) treating 0 as falsy. May or may not have been solved by using Rust (but probably wouldn't have because it sounds like a logic error in "clever" code).

> Errors in the software of a MIM-104 Patriot…

Caused by loss of precision in floating-point calculations. Would not have been solved by using Rust.
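That failure mode is easy to reproduce in any language with 32-bit floats. A hedged sketch (the `tick` helper is mine), accumulating 0.1-second ticks the way the Patriot's clock did:

```go
package main

import "fmt"

// tick accumulates n ticks of 0.1s in a 32-bit float. Since 0.1 has no
// exact binary representation and rounding error compounds as the sum
// grows, the accumulated clock drifts ever further from the true time.
func tick(n int) float32 {
	var t float32
	for i := 0; i < n; i++ {
		t += 0.1
	}
	return t
}

func main() {
	// After a million ticks the result is visibly far from the exact 100000.0.
	fmt.Println(tick(1000000))
}
```

No type system catches this; it takes either fixed-point/decimal arithmetic or a numerically aware summation, which is why a memory-safe language alone wouldn't have helped.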

The other two were more likely human or mechanical than (unspecified) software errors.


Integer overflow is defined to panic in debug builds, and to either do that or wrap with two's complement overflow in release builds. The current implementation overflows. However, zero is not false.


I assume that treating 0 as false was intentional in this case (it probably was written in PDP-11 ASM) and a direct translation of the Therac code therefore would be

  if counter > 0 {
      deploy_radiation_shield();
  }


Yeah, you'd certainly have to see the actual code to be sure, I bet you're right.


How many people have died from terrible UIs in apps that are commonly used while driving? Obviously, personal responsibility is primary here, but I don't think the app designers and builders should be absolved of all guilt.

I was just thinking about this the other day when using Spotify in the car. Spotify's UI is pretty good, but after they lost all my saved songs, I've had to resort to only using playlists, which require 3x as many clicks to add a song.


Three deaths from the Therac-25: https://en.wikipedia.org/wiki/Therac-25.


Yes, as we move forward we should strive for more safety, but 'countless lives lost' looks like blaming software for that.

I think realistically we need to ask: would it be a better situation if no systems program were allowed to exist for lack of safety? Because I think humanity in general is better off with unsafe software existing than with no software existing at all unless it met rigorous safety standards.


It was a dumb statement, unless he meant (given the context of software maintenance) lives "wasted" toiling away on C or C++ monoliths. There are indeed countless lives and billions of dollars spent on that which might be reduced, or the situation improved, by better tooling.


Even with safety solved, errors in software will still happen. The software errors cited are much more about bad engineering, something that can't be solved by safety alone.


> Modula-3, Eiffel, Sather

Nice to see these languages on the Rust team's radar, especially Sather.

Just shows how you guys have researched prior work, congratulations.


It is funny he doesn't mention OCaml (because of his work with it). Yes, OCaml has a GC, but from what I recall it is a rather low-overhead GC in terms of latency (I need to find the source, but I remember it being impressive compared to others). I believe there were some attempts to incorporate some real-time libraries, but it was a long time ago.

Depending on what you define as system programming OCaml also has the whole bare metal Mirage OS [1] thing as well.

And of course OCaml is a pretty darn safe language (albeit its threading/concurrency libraries are fragmented, so the safety is more type safety than concurrency safety).

[1]: https://mirage.io/


Yeah, Caml Light (OCaml's predecessor) was my introduction to the ML family of languages.

All the languages in my snippet also have GCs, but are less known than OCaml, which is maybe why he omitted it.


To be clear, Graydon doesn't work on Rust anymore, and hasn't in years. His knowledge of the space is absolutely impressive, though. :)


The only inaccuracy I find is his claim about safe concurrency in other languages. It existed in Concurrent Pascal (used in the Solo OS), Ravenscar Ada (widely deployed), and Eiffel SCOOP (widely deployed). Rust isn't the first doing this. It's just apparently the best at it in the systems space. The Ada camp is making a comeback, though, with ParaSail, which looks interesting:

http://www.embedded.com/design/other/4375616/ParaSail--Less-...


ParaSail development looks stalled since its main designer joined AdaCore.

Just my impression, anyone feel free to correct me.


Yes, I've not seen a Napier reference in years. Really impressed to have seen that.


The author states: "A few valiant attempts at bringing GC into systems programming -- Modula-3, Eiffel, Sather, D, Go -- have typically cut themselves off from too many tasks due to tracing GC overhead and runtime-system incompatibility, and still failed to provide a safe concurrency model."

Nim follows a different approach.

Details: http://nim-lang.org/docs/manual.html#threads

Benchmark (with Rust): https://github.com/costajob/app-servers


I think Rust is not about safety, but about reusability. Do you like to take on a dependency on someone's code when it is in C? The answer is: roll your own code. Rust means the end of that.

Rust means software that can be written once and used "forever". Thus it enables true open source. In comparison, C/C++ pay mere lip service, by also giving you, along with the code, lots of reasons to worry.

This is the real innovation behind Rust.


To be fair to C, I think C's answer here is dynamically linked shared libraries. They have lots of problems, but still they're very widely used. Even Rust programs dynamically link libc by default.


I think it's a bit funny that in an industry that (supposedly) prides itself on "meritocracy", there are many people that refuse to use (or learn) performant memory-safe languages, when memory-safe code is always better than memory-unsafe code (in terms of resource usage, reduction of bugs, etc, etc.).


It's a shame there is no mention of ATS, which also attempts safe systems programming using an advanced type system.


What about bare-metal options? Is there any development effort in that direction?

Most of the C that I do these days is Arm Cortex-Mx work. Realtime cooperative multi-tasking using an RTOS on the bare metal. It seems like Rust would be a great option for that kind of work if the low-level ecosystem were complete enough.


There's a lot of interesting stuff, including http://www.tockos.org/.

Some osdev stuff still needs nightly, but there's been really great movement on tooling, with stuff like xargo, and https://github.com/rust-embedded/ is a group of people trying to work towards improving things generally by identifying common needs, etc.


https://github.com/hackndev/zinc is one effort in that vein. I'm not sure what else exists though.


That actually looks very good. They are supporting STM32F4, which is one part I use a lot. I see they are using the GNU linker (and presumably binutils) which makes total sense. No reason to reinvent all of that, and that tool chain is robust.


I completely agree. This is what I wrote on Reddit in response to Klabnik's post:

Rust can make such an important contribution to such an important slice of the software world, that I really fear that trying to make a better pitch and get as many adopters as quickly as possible might create a community that would pull Rust in directions that would make it less useful, not more.

Current C/C++ developers really do need more safety. They don't need a more pleasant language. Non C/C++ developers don't really need a language with no GC. Now, by "don't need" I absolutely don't mean "won't benefit from". But one of the things we can learn from James Gosling about language design is, don't focus on features that are useful; don't even focus on features that are very useful; focus on features that are absolutely indispensable... and compromise on all the rest. The people behind Java were mostly Lispers, but they came to the conclusion that what the industry really, really needs, is garbage collection and good dynamic linking and that those have a bigger impact than clever language design, so they put all that in the VM and wrapped it in a language that they made as familiar and as non-threatening as possible, which even meant adopting features from C/C++ that they knew were wrong (fall-through in switch/case, automatic numeric widening), all so they could lower the language adoption cost, and sell people the really revolutionary stuff in the VM. Gosling said, "we sold them a wolf in sheep's clothing". I would recommend watching the first ~25 minutes of this talk[1] to anyone who's interested in marketing and maintaining a programming language.

If Rust would only win over 10% of C/C++ programmers who today understand the need for safety, say, in the next 5-10 years, that would make it the highest-impact, most important language of the past two decades. In that area of the software world change is very, very slow, and you must be patient, but that's where Rust could make the biggest difference because that's where its safety is indispensable. A few articles on Rust in some ancient trade journals that you thought nobody reads because those who do aren't on Twitter and aren't in your circle may do you more good than a vigorous discussion on Reddit or the front page of HN. Even the organizational structure in organizations that need Rust looks very different from the one in companies that are better represented on Reddit/HN, so you may need to market to a different kind of people. So please, be patient and focus your marketing on those that really need Rust, not on those outside that group you think you can win over most quickly because they move at a faster pace.

[1]: https://www.youtube.com/watch?v=Dq2WQuWVrgQ


"Current C/C++ developers really do need more safety. They don't need a more pleasant language. "

The number of C++ developers griping in HN threads about language-level problems affecting their work suggests otherwise. It's not as if they wanted C++ to be designed that way. It was just the only one with big companies' support that had zero-cost abstractions for programming in the large and great compatibility with C libraries. Many would love a better language. Matter of fact, they tell us in Rust threads here.

" Non C/C++ developers don't really need a language with no GC."

They might benefit if the app is aiming for max performance (esp. HPC), memory efficiency (e.g. embedded), or minimal latency (e.g. real-time). From there, they basically choose among C, Objective-C, C++, or Fortran. Rust has features superior to those in terms of safety & abstractions. On the HPC side, Julia is already showing the kind of takeup a language can get if it's faster than Python/NumPy but not as fast as C or Fortran.

"if Rust would only win over 10% of C/C++ programmers who today understand the need for safety, say, in the next 5-10 years, that would make it the highest-impact, most important language of the past two decades."

I agree. Let's hope it happens. With that on the demand side, we'd also see tools like Frama-C, Saturn, Astree, and so on start popping up for the unsafe part of Rust. On top of a shitload of libraries doing things the safer way, since the community encourages it. It's really the network effects that matter in programming languages. I think Rust might have good network effects if it takes off.


> Many would love a better language. Matter of fact, they tell us in Rust threads here.

Absolutely, but my emphasis was on the difference between "love" or "want", and "need". Switching a programming language is extremely expensive, doubly so in systems programming. Organizations won't pay the price for something that doesn't make a big impact. Unlike many other new languages, Rust does have the potential to make a big, bottom-line impact, but that impact is almost entirely due to safety.

> They might benefit if the app is aiming for max performance (esp HPC), memory efficiency (eg embedded), or minimal latency

I don't think max performance is an issue at all, nor even minimal latency where you need it (there are realtime GCs, and GC languages do support arena GC allocation in embedded and realtime settings, like realtime Java), but definitely it's a big gain in memory efficiency.

But again, I said they could benefit. It's just that the main focus shouldn't be on those that would benefit, but on those for whom the language is indispensable, or tremendously beneficial.

> I think Rust might have good, network effects if it takes off.

Absolutely, it's just that if you want Rust to really have an impact rather than just be popular, you need that network effect to be in the right network, and for systems programming, that network is not well represented on HN and Reddit. While these venues are good for marketing, and while the message will get to some of the most important crowd, it may have a negative effect as it would attract many that don't really need Rust, and if they end up making up most of the community, they may slowly push the language in directions that may make it less appealing to those who really need it.


"I don't think max performance is an issue at all"

I'll consider believing that when you remove both the performance-enhancing aspects and their marketing from your company's products that appear to target enterprise space of people using GC's and concurrency. Rust's safety w/out performance hit affects those two, specific areas. I think they might be interested.

Not to mention businesses doing analysis they want to happen faster, game developers, real-time groups wanting no runtime (or close to it), HPC that definitely cares about max performance, and small (or cloud) companies like those switching from Python to Go specifically due to lower costs from higher performance. Maxing performance is always a benefit if you can tell them it saves time or money but comes with the tools essentially free. That's actually how a lot of better JVM's (esp AOT) were sold.

"(there are realtime GCs, and GC languages do support arena GC allocation in embedded and realtime settings, like realtime Java)"

They're not the default in these GC languages. We both know about them, but most people don't. I've been telling Java & C# developers about those things for years, with not one ever having heard of them before. Recently I've been explaining to people who think an OS can't be written in Go that both OSes written in GC'd languages and real-time, concurrent GCs exist. So, in their minds, there's horrid C/C++ with no safety, all these languages with GCs that have GC issues, and now a safe language with no GC. It's a perception thing that gives Rust an advantage over the tech you described.

" but on those for whom the language is indispensable, or tremendously beneficial."

I'll cede you that. This would almost solely be aimed at the C, C++, Objective-C, and Fortran crowds. People stuck with caveman tools and unsafety.

"and for systems programming, that network is not well represented on HN and Reddit."

Definitely true.

"end up making up most of the community, they may slowly push the language in directions that may make it less appealing to those who really need it."

This is a real risk they should consider more. The best route would probably be to send people to the sites, conferences, and companies heavily into C and C++ for many use-cases. Get all this feedback they're getting from them at least as much as the others if not more. That might inform the language design in a way that addresses the risk you're bringing up.


> Rust's safety w/out performance hit affects those two, specific areas. I think they might be interested.

:) Let me put it this way: if for some reason Rust ever takes a serious market share from Java (or other similar languages) in the enterprise space, I will surely be well into my retirement by then, so my interest isn't financial. I was a C++ developer for many years, working on very large soft (and some hard) realtime systems, and I was a Java holdout, but once most of the defense industry switched, virtually nobody (including us) ever complained about a performance decrease. So even if the claim that GCs adversely affect performance in large, complex programs (where RAM overhead isn't an issue) were true, the number of organizations that build software of that kind and where this performance would matter more than in defense, is very, very small (not to mention that most of the current performance deficiencies in Java are not related to GC). If that's the kind of user that would be a significant portion of Rust's userbase, then Rust is in trouble. Companies that sell ultra-low latency GCs for a fraction of the cost it would take for enterprise shops to adopt a language like Rust, are, well not exactly on their way to the Fortune 500. The systems software space -- drivers, kernels, filesystems, and the entire embedded space -- is at least one, if not two, orders of magnitude bigger.


I wasn't saying you were shilling to dodge competition so much as better performance is in your marketing material like many others. ;) I agree that a lot of the space they should be targeting won't have a humongous difference between a well-tuned GC and a typical, native implementation. You're right on that.

"Companies that sell ultra-low latency GCs for a fraction of the cost it would take for enterprise shops to adopt a language like Rust, are, well not exactly on their way to the Fortune 500"

That's a good point. Switching costs would be huge in many of these organizations. They'll make better inroads with something within their existing stack (typical) or doing new projects in the improved language (also typical). They're not rewriting all that stuff, though.


With Rust you don't need to rewrite, though. Just integrate it a little at a time.


Hopefully. With the legacy codebases or preferences, that means hiring people that really know C or C++, training them for Rust, and maintaining two codebases in one. It becomes trickier. Hopefully the incremental option works well for Rust but it didn't with most languages & platforms.


The reason why Java lacks a primitive unsigned type is that Gosling asked around at Sun about unsigned arithmetic and almost everyone got it wrong.


Java definitely made the best decision here. Sad that we still have to live with unsigned types and the promotion rules.


Perhaps they made the right decision with int, short and long, but there's one case where they absolutely made the wrong decision: byte.

Having signed bytes is extremely obnoxious. Doing any sort of bit-flipping/shifting requires you to add a ton of "& 0xFF" operations that would be completely unnecessary if byte was unsigned. The main use case for bytes is to represent data, not integers, and bytes shouldn't have been treated that way.


The right choices are bounds checking or autoboxing. I very rarely want arithmetic modulo n (and when I do, I want to choose n), but never do I ever want signed overflow quietly giving wrong answers.


Can you explain why?


I disagree. Popularity is everything. Grab the developers you can get first. Once you have a popular language with ready-to-be-hired developers big hardware companies will be far more willing to entertain using the language.

When choosing languages people look at community, libraries, existing in-house expertise and availability of programmers. All these require popularity.


We are doing some work like that, but not a ton yet. You need connections to do so, in my experience, and that's tough.


Doesn't Mozilla have good connections? It's not like you're a small, unknown organization.


Rust and Mozilla are two different things, even though a team at Mozilla (myself included) heavily contributes to Rust.

Mozilla is a big place, so that means, in order to take that route, I'd have to figure out how to make connections within Mozilla to know the people who'd know those people. You're right that this might be helpful, but it's not something that happens overnight, that's all I'm saying.


That's fine. It's a slow-moving industry anyway. I think it's better to get the right people slowly than the wrong people quickly, and by the right people I mean those who really need Rust, and to be more specific, experienced systems programmers that have worked on large, complex projects (the more complex the software, the bigger the impact the language can have) and know what's really important and what's less so. If those people help direct the features, it would make it easier to get more of the "right" people. OTOH, if, say, Haskell people (I chose that example randomly, but I think they tend to be language enthusiasts and drawn to every new typed language, whether they absolutely need it or not) are those early adopters who help direct the language, the language may become less appealing to those who really need it.


This is a little off topic, but when I looked at his post I thought, "Wait .. is that LiveJournal?" .. and yes, it is apparently. Or at least a fork of it called Dreamwidth.

Interesting to see forks of older OSS Perl web apps still in use today.


Caveat lector: I'm one of the two founders of Dreamwidth.

Yes! Dreamwidth is a fork of the old LiveJournal code that is still being developed by a small team of folks who would rather see a small site run in an open, transparent fashion than be beholden to large corporate interests and the whims of monetization.

We're a very happy little family and it's always exciting to see Dreamwidth links end up on HN. :)


I haven't used Rust, but generally speaking wouldn't you say that safety in some sense includes the other nice features? For instance, if it were safe but not fast (like, say, a GC language) it wouldn't be useful. So it has to be safe and fast (which it sounds to me like it is). Okay, so what if it were safe, fast, but a real hassle to use? Well that's not very useful either. So it has to be safe, fast, and usable. Just like Moxie's approach to security: focus on usability, so people actually use the damn thing. And it sounds like all the other nice features make Rust more usable.


"Our engineering discipline has this dirty secret, but it is not so secret anymore: every day the world stumbles forward on creaky, malfunctioning, vulnerable, error-prone systems software and every day the toll in human misery increases. Billions of dollars, countless lives lost."

Billions of dollars and countless lives lost? I'm not saying that buffer overruns aren't a thing but this seems like marketing claims without substance. Yes, I read through the examples below, still think he's overstating it.


> [Go] failed to provide a safe concurrency model.

What did he mean by this?


Go doesn't protect you against race conditions, it merely offers some concurrency tools. There is nothing to declare ownership of objects in memory. So the compiler doesn't (can't) complain if you share memory and access it simultaneously. At best, there are runtime checks. Rust does offer compiler protection against that.

Edit: "merely", relatively to Rust :) on an absolute scale, still way better than C for concurrency.


As a total beginner (learning programming by myself for 2 or 3 years), I always ask myself how often "little" things like race conditions break something in production. Sure, some applications need to be safe-super-safe. But is it worth switching over from Go to Rust as a beginner, given that Go is the less safe language? I know that there is no ultimate language. But I always wonder whether I am missing a point, since I never really had problems like that occur. I also wrote something little in Clojure (I wanted to write something in a Lisp, and Clojure presented itself as somehow modern). But many people told me how bad Clojure is, because it is not type-safe. So I switched to Go. And I have to say, it is really nice to be forced to use the right types as input. But will I have the same experience when I switch over to Rust and think to myself: "Whoa yes, I never thought about that aspect, but it really helps me as a beginner who tends to write sloppy programs!"?


> how often "little" things like race conditions break something in production

Often. Way too often. And it always breaks at 3 in the morning when you are hiking somewhere with no internet connection.

Besides, race conditions tend to break in the worst possible ways. They either silently corrupt data, or make your system stop working but stay active, so the OS does not know it must launch a new instance.


Think about effort in vs results gained. Example:

You can drive nails with a knife by holding the nail and hitting it with the side of the knife. It takes work to do right, and you'll occasionally cut yourself. Some people will even lose a hand.

You can use a hammer. You will get the nails in smoother with less rework required. You occasionally hurt yourself, but not as badly as with a knife. You can also smash the nerves to the point where you don't feel anything.

You can use a nail gun with a safety switch. It's ultra-fast and can't hurt you as long as you're aiming where the nails go. It costs a bit more. It can hurt you if you turn the safety off or aim it at yourself. It takes less mental effort than using knives or hammers safely. Less tiring, too. Some people even think they're fun.

That's C, C++, and memory-safe languages respectively. Rust's advantage of no GC is like a nail gun that costs the same as a hammer. It has the benefits but not the primary disadvantage. If you're a craftsman, your time and energy are valuable. Use the best tools you can for whatever work you're doing. That is, whatever gets it done quickly, done right, and easy to fix later with smaller problems.


>As a total beginner (learning programming by myself since 2 or 3 years), i am always asking myself, how often "little" things like race conditions break something in production

Rookie (and shoot-yourself-in-the-foot-at-2-am) mistake coming up:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup

        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                fmt.Printf("i= %d\n", i)
                wg.Done()
            }()
        }
        wg.Wait()
    }

https://play.golang.org/p/XDzFq_XK_1

And what about this code:

    package main

    import "fmt"

    func main() {
        var intArray []int
        for i := 0; i < 1000; i++ {
            intArray = append(intArray, i)
            fmt.Printf("i= %d\n", i)
        }
        fmt.Println(intArray)
    }

https://play.golang.org/p/UuI4uESZ_f

If you don't care if intArray is in the proper order, you might do something like this:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        var intArray []int

        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func(i int) {
                intArray = append(intArray, i)
                fmt.Printf("i= %d\n", i)
                wg.Done()
            }(i)
        }
        wg.Wait()
        fmt.Println(intArray)
    }

What's wrong?

On my multi-core computer (though not on playground), len(intArray) could be as low as 500!

Why?

Because intArray = append(intArray, i) isn't atomic.

Go, which is so pedantic about "stupid" mistakes (like importing something, changing your mind, and not removing the import), didn't catch this. Not an error and no warning.

But the worst part is that when you loop until 10, it works. You can unit test it, integrate test it, and have it break at any point in time.


You can also check out some CMU public calendars and read their lecture notes about safe standards in programming or get introduced to contracts just so you get better at eyeballing your own libraries to look for obvious errors https://www.cs.cmu.edu/~rjsimmon/15122-s16/schedule.html

They also have distributed programming course lecture notes that are open to the public with golang specific info on how to deal with race conditions.

You'll have the same experience in Rust if you research why they've done certain things, but can quickly get lost on the mailing list unless you've read books like EoPL http://www.eopl3.com/


> i am always asking myself, how often "little" things like race conditions break something in production

It can basically corrupt the whole program.

I bet half the Go apps out there have data races. People who boast about Go's simplicity can't even see the elephant in the room. A simplistic type system doesn't fix unsafe concurrency. You'd think that safe concurrency would be an important design goal for a highly concurrent language; well, apparently it isn't.


Speaking as an experienced programmer: all the time, in subtle ways, sometimes in ways you don't notice for a long time. Sometimes benign, sometimes your data has been being corrupted for months before you notice.

My standard advice on concurrency is, quite simply, don't. It always looks way simpler than it actually is. If you're doing something that _requires_ it, get all the help you can. Type-level enforcement sounds _excellent_ to me.


Yep. I deal with a platform that has some sort of race and accesses a destroyed object. It's incredibly tricky to find it, and customers only hit it at the worst time, with certain traffic loads. It's a subtle bug that's lasted years and is in a codepath used millions of times a day.


If you want to avoid data corruption in concurrent software, there's always Erlang as an option.


I haven't looked at Erlang too closely, but is it the case that concurrency in Erlang software is basically actors only? Of course in this space you can also use Akka, and even Rust and C++ have actor libraries available.

But there are definitely use cases where you don't want to use actors, where you might want to compose several futures together without the overhead of actor mailboxes.


> it so that the concurrency in Erlang software is basically actors only?

Erlang's concurrency is "don't communicate by sharing, share by communicating" enforced at the language level: an Erlang system is composed of processes which each have their own heap (and stack) and an incoming "mailbox", an Erlang process can only interact with the world by calling BIFs (built-in "native" functions, Erlang syscalls if you will) or sending messages to other processes, and messages can only contain immutable data structures (mutation happens only at the process level).

Of course one of the sources for this design is that Erlang comes from a world where 1 = 0 (if you don't have redundancy you don't have a system) thus two processes may live on different nodes (erlang VMs) on different physical machines and shouldn't behave any differently than if they were on the same node.


So basically it's like Scala/Akka, except in Erlang you can send functions which is a nice feature. One thing that has kept me from using it in production is its dynamic typing. Once you've been spoiled with a good type system, it's quite hard to go back to dynamic typing.


Akka is modeled on Erlang but the JVM has very different characteristics. The Erlang VM is optimized for immutable languages and message passing. GC is very different (almost a non-issue) because of the process model.

And pattern matching in Erlang is a joy every developer should experience. I've not been impressed with my limited exposure to Scala, which also bears the burden of trying to be a kitchen sink language.


In fairness, Go has an extremely good race detector (run/test with -race to enable it), and makes no guarantees about memory or correctness if you write code with races in it.


Last time I checked, the race detector could only find a certain percentage of data races – the ones that are easier to trigger.

The crux with such systems is you tend to rely on them...


It becomes a problem when you're building applications with complex interactions, and is even worse with large codebases that are tough to debug and reason about.


Data races definitely happen in the real world and they are awful to track down, because they only happen n% of the time. All of my least favorite bug-fix experiences have been data races.


I don't know why you are downvoted since your question is legit.

I'll answer given my own experience :

At my former company, we had a websocket-based service that allowed symmetric communication between the clients. We had around 10% connection failures, and we thought it was caused by websockets' well-known incompatibilities with some network stacks, so we had a fallback to ajax polling.

Several months later, someone touched this part of the code and found a data race. After we fixed it, the failure rate dropped to 5%.

Is a 5% failure rate big? It depends on your business. In our case it wasn't too bad because we had a backup plan. But if every customer has a 5% chance of application breakage each time they connect, you may have trouble building a big user base.


These are some of the bugs I hate the worst. When you have a plausible explanation for why it's slow, or fails sometimes, and it isn't under your control (or is way out of your purview), it's far too easy to stop looking and not find your own bugs that exacerbate the problem. Because if you don't have a bug, and it is all that external problem, you just wasted all that time looking. Finding that bug later is especially painful, as you realize a little more time initially may have saved so many problems later. :/


I've had to deal with several race conditions. They're easy to miss and difficult to debug once you have a system that starts scaling up to millions of transactions per day.


Go has data races, which can break memory safety in some limited cases: https://research.swtch.com/gorace


I have been eager to use Rust as a webserver, but so far there is no mature framework that I think I can use. I had a look at a few, like mio and the Iron framework, etc. There is no mature websocket implementation or an HTTP package mature enough to be used in production. I am looking forward to building an ultra-efficient PubSub server that supports HTTP poll and websockets. Hope my dream comes true :)


Expect lots of movement in this space, with tokio coming out with an initial release soon!


I think that safety is often doing small things clearly. When you read about thread-safe computing, you end up with many rules that FP makes impossible to break. So even if it's mostly safety, it encompasses a larger area in disguise.


Even if Rust adds increasingly more "unsafe" features in order to appeal to new developer groups, I agree that it should remain a "100% safe by default language", and they should continuously try to improve the performance of the safe code, rather than get lazy and say developers can just use the unsafe syntax if they want 3x the performance. This would only lead more and more developers to increase the usage of unsafe code. It would be even worse if Rust would allow unsafe code by default for any future feature.


There is no serious proposal to make anything unsafe by default, and I wasn't proposing we do such a thing. The change I'm talking about is how we talk about Rust, not making changes to the language at all.


How is Rust better than and different from D?

    https://dlang.org/
Does anyone concerned about security use D?


There are a lot of reasons. There's a terrific in-depth discussion of D on the Rust subreddit from a few weeks ago. https://www.reddit.com/r/rust/comments/5h0s2n/what_made_rust...


As someone looking at this influx of discussion from the point of view of a curious bystander, I can't help but be annoyed by two persistent misconceptions that keep being perpetuated in many statements of this kind.

1) Memory safety is or should be a top priority for all software everywhere. The OP goes so far as to state: "When someone says they "don't have safety problems" in C++, I am astonished: a statement that must be made in ignorance, if not outright negligence."

This is borderline offensive nonsense. There are plenty of areas in software design where memory safety is either a peripheral concern or wholly irrelevant - numerical simulations (where crashes are preferable to recoverable errors and performance is the chief concern), games and other examples abound. It's perfectly true that memory safety issues have plagued security software, low-level system utilities and other software, it's true that Rust offers a promising approach to tackle many of these issues at compile time and that this is an important and likely underappreciated advantage for many use cases. There's no need to resort to blatant hyperbole and accusations of negligence against those who find C++ and other languages perfectly adequate for their needs and don't see memory safety as the overriding priority everywhere. Resorting to such tactics isn't just a bad PR move, it actively prevents people from noticing the very real and interesting technical properties that Rust has that have little to do with memory safety.

2) Rust is just as fast or faster than C++.

Rust is certainly much closer to C++ in performance than to most higher level interpreted languages for most usecases and is often (perhaps even usually) fast enough. Leave it at that. From the point of view of high performance programming, Rust isn't anywhere close to C++ for CPU-bound numerical work. For instance, it does not do tail call optimizations, has no support for explicit vectorization (I understand that's forthcoming), no equivalent to -ffast-math (thereby limiting automatic vectorization, use of FMA instructions in all but the most trivial cases, etc.), no support for custom allocators and so on. I'm also not sure if it's possible to do the equivalent of an OpenMP parallel-for on an array without extra runtime overhead (compared to C/C++) without resorting to unsafe code, perhaps someone can correct me if it's doable.

Over the past week or so, motivated largely by a number of more insightful comments here on HN from the Rust userbase, I've tried out Rust for the first time, and found it to be quite an interesting language. The traits system facilitates simple, modular design and makes it easy to do static dispatch without resorting to CRTP-like syntactic drudgery. The algebraic/variant types open up design patterns I hadn't seriously considered before in the context of performance-sensitive code (variant types feature in other languages, but are usually expensive or limited in other ways). The tooling is genuinely excellent (albeit very opinionated) and easily comparable to the best alternatives in other languages. I'm not yet sure if I have an immediate use for Rust in my own projects (due to the performance issues listed above and easier, higher-level alternatives in cases where performance is irrelevant), but I will be closely following the development of Rust and it's definitely on my shortlist of languages to return to in the future.

However, I would have never discovered any of this had I not objected to the usual "memory/thread safety" story in a previous HN discussion and received a number of insightful comments in return. I think focusing on the safety rationale alone and reiterating the two hyperbolized misconceptions I listed above does a real disservice to the growth of a very promising language. I think Steve Klabnik's blog post to which the OP responds is a real step in the right direction and I hope the community takes it seriously. Personally, I know a few programmers who've entirely ignored Rust due to the existing perception ("it's about memory safety and nothing else") and in the future I'll suggest Rust as worthy of a serious look as an interesting alternative to the prevailing C++-style designs. I'm certainly glad I tried it.


Apparently, the parent post is now too old to edit, so I'll post a correction here. In light of helpful input, I was wrong about the following:

1) Tail call optimizations are fine, the documentation is just a bit ambiguous on the matter.

2) Explicit vectorization is available in nightly.

3) The equivalent of -ffast-math is technically available, though very inconvenient to use. There may be workarounds, I'm not sure.

These points, coupled with the ability to do performant threading (in theory, even with the syntax I'd prefer), go a long way to alleviating some of my performance concerns. Well written (nightly) Rust may be closer to C++ in numerical performance than I initially thought. I'd like for some of these things to be much more convenient to use than they are currently, but the opportunities are there.


Memory safety is important for everything because it is a prerequisite for any other form of correctness. There's no guarantee that a violation of memory safety will result in a crash; silent memory corruption is just as possible, resulting in a bad numerical computation or a broken game. It may be that the risk of an actual problematic memory safety violation in numerics/gaming is small enough not to worry about, but it is still something to consider.

The rayon library offers similar functionality to OpenMP, including a parallel map/reduce (etc) over a vector, all in safe code for the user.

I believe operations that allow -ffast-math-style optimisations were recently added to the floating-point types, allowing one to specify individual places where reassociation (etc.) is OK. This obviously isn't as automatic as -ffast-math, but usually one has only a few small kernels where such things are relevant anyway.

Lastly, two smaller points:

- C++ doesn't do tail call optimisation just as much as Rust doesn't do it. Compilers for both can (and do) perform TCO, the languages just don't guarantee it.

- C++ doesn't do explicit vectorization either, not in the standard. If you're willing to move into vendor extensions then nightly Rust seems somewhat equivalent and does allow for explicit SIMD.


> Memory safety is important for everything because it is a prerequisite for any other form of correctness.

That's obviously true, but not what I (or the OP) was talking about. My point was that in many applications memory safety doesn't rank highly as a separate concern, in addition to computing correct output from expected input. Because of this, describing Rust as a language that solely focuses on memory safety isn't very interesting to large groups of developers that work on such applications.

> The rayon library offers similar functionality to OpenMP, including a parallel map/reduce (etc) over a vector, all in safe code for the user.

I did look at rayon when I skimmed through the available ecosystem. The runtime cost of that approach wasn't obvious to me from the documentation, and it doesn't quite let me keep the loop-based control flow usually employed for numerical calculations (because of the need to refer to different indices within the loop, etc.), but it's certainly a viable approach. Not a direct replacement for OpenMP loops, though.

On the subject of -ffast-math, I did not encounter these recent additions you mention in the language documentation, but I'll take another look on the issue tracker and elsewhere. Thanks for the information.

On tail call optimisations, I don't believe your statement is entirely correct. It's true that C++ compilers don't guarantee TCO (although, empirically they're very good at it), but Rust doesn't seem to be able to do it at all. It's explicitly stated in the language documentation and there's a recent issue on the subject [1].

And on explicit vectorization - I'll take a look at the latest nightly Rust and edit my post accordingly if explicit SIMD is already usable. Glad to hear it.

FWIW, I think Rust has made great progress considering the age of the language, and I'm glad to see SIMD and other improvements being implemented. My objection was simply against the hyperbolic assertion that Rust has already attained full parity or even superiority over C++ in performance.

By the way, is there a way to turn off runtime bounds checking for vectors? That's another common performance sink in numeric computing.

[1] https://github.com/rust-lang/rust/issues/217


> My point was that in many applications memory safety doesn't rank highly as a separate concern, in addition to computing correct output from expected input

Indeed, I was trying to cover that in my comment. I agree that it isn't an explicit selling point to such people, but I think that it should be:

- numerics/scientific computing/machine learning are slowly taking over the world. It is bad to have random/occasional heisenbugs in systems that influence decisions from the personal to the international.

- games are very, very often touching the network these days, and thus are at risk of being exploited by a malicious attacker.

Of course, people in those domains aren't necessarily thinking in those terms/have deadlines to hit/are happy with their current tooling.

> The runtime cost of that approach wasn't obvious to me from the documentation

It is low. I've even heard rumours that the core primitive has the lowest overhead of all similar data-parallel constructs, including, say, Cilk. This, combined with aggressive use of "expression templates" (with a special mention to Rust's easily inlinable closures), means I'd be surprised if rayon was noticeably slower than OpenMP for the straightforward map/associative-reduce situations. More exotic transformations are more dubious, given rayon has had far fewer person-hours put into it.

> it doesn't quite let me keep the loop-based control flow usually employed for numerical calculations (because of the need to refer to different indices within the loop, etc.), but it's certainly a viable approach

I'm not sure loop-based control flow is actually necessary, since (I believe) one can, say, parallelise over an enumerated iterator (e.g. slice.iter().enumerate()), which contains the indices. One can then .map() and read from the appropriate indices as required.
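A minimal sequential sketch of that shape (the smoothing kernel here is entirely made up for illustration; with rayon, `iter()` becomes `par_iter()` and the body stays the same):

```rust
// Index-dependent work expressed via enumerate() instead of a manual
// loop counter; with rayon one would write input.par_iter().enumerate().
fn smooth(input: &[f64]) -> Vec<f64> {
    input
        .iter()
        .enumerate()
        .map(|(i, &x)| {
            // Read neighbouring indices, clamping at the edges.
            let left = if i > 0 { input[i - 1] } else { x };
            let right = if i + 1 < input.len() { input[i + 1] } else { x };
            (left + x + right) / 3.0
        })
        .collect()
}
```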

> On the subject of -ffast-math, I did not encounter these recent additions you mention in the language documentation, but I'll take another look on the issue tracker and elsewhere. Thanks for the information.

To shortcut your search: https://doc.rust-lang.org/std/?search=fast . (I apologise that I didn't link it earlier, I was on a phone.)

> On tail call optimisations, I don't believe your statement is entirely correct. It's true that C++ compilers don't guarantee TCO (although, empirically they're very good at it), but Rust doesn't seem to be able to do it at all. It's explicitly stated in the language documentation and there's a recent issue on the subject [1].

I guarantee that rustc can do TCO. I've spent a lot of time digging around in its output. The compiler uses LLVM as a back-end, exactly the same as clang, and things like function calls look the same in C++ and in Rust.

That issue is 5 years old, and closed, and is (implicitly) about teaching Rust to have a way to guarantee TCO; see the mention of 'be' vs. 'ret': they're keywords, 'be' theoretically being used like `be foo(1, 2)` and meaning "this call must be TCO'd" (i.e. my stack frame must become foo's stack frame).

Lastly, if you're talking about the documentation being [0], I think you're misreading it, in particular it says:

> Tail-call optimization may be done in limited circumstances, but is not guaranteed

That said, it is understandable that you misread it, given that "Not generally" is technically correct ("rustc cannot do TCO in complete generality", i.e. there exists at least one tail call which won't be optimised) in a way that is confusing in normal English. I personally think it would be better if it started with "Yes, but it is not guaranteed" rather than "No".

[0]: https://www.rust-lang.org/en-US/faq.html#does-rust-do-tail-c...

> By the way, is there a way to turn off runtime bounds checking for vectors? That's another common performance sink in numeric computing.

Yes, the get_unchecked and get_unchecked_mut methods. This takes the same approach as the -ffast-math equivalents: disable when required, rather than sacrifice reliability across a whole program. That said, Rust's iterators (which also power rayon) are more idiomatic than manual indexing, when they work, and generally avoid unnecessary bounds checks more reliably.
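A small sketch of what that opt-in looks like (a toy summation, purely for illustration):

```rust
// Summing with and without bounds checks. The unsafe variant is the
// opt-in escape hatch, so safety is given up only where it's measured
// to matter.
fn sum_checked(v: &[f64]) -> f64 {
    let mut total = 0.0;
    for i in 0..v.len() {
        total += v[i]; // bounds-checked index (often elided by the optimizer)
    }
    total
}

fn sum_unchecked(v: &[f64]) -> f64 {
    let mut total = 0.0;
    for i in 0..v.len() {
        // Sound here because i < v.len() by construction of the loop.
        total += unsafe { *v.get_unchecked(i) };
    }
    total
}
```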


Thanks for this great comment thread, Huon. :) I didn't know we had fastmath stuff now!


Thanks for the insightful post. I stand corrected on the TCO issue, I had indeed misread the documentation, and I've posted a correction to my original post accordingly. The bounds checking issue is also resolved to my satisfaction.

> I'm not sure loop-based control flow is actually necessary, since (I believe) one can, say, parallelise over an enumerated iterator (e.g. slice.iter().enumerate()), which contains the indices. One can then .map() and read from the appropriate indices as required.

The loop-based control flow is, strictly speaking, never necessary, as the two formulations are mathematically equivalent. It's just that for many algorithms the loop-based approach is more intuitive (to many people) and readable, and has less boilerplate. Your simple_parallel library looks syntactically closer to what I'd like, though I'm not sure if it's still being maintained.

> To shortcut your search: https://doc.rust-lang.org/std/?search=fast . (I apologise that I didn't link it earlier, I was on a phone.)

Thanks. I'm glad the option is there, but the current implementation looks quite tedious to use in long numeric expressions, and would greatly sacrifice readability. Ideally, I'd like something along the lines of fastmath { <expression> } blocks or function / loop level annotations. Is something like that possible with Rust's metaprogramming, perhaps?

> - numerics/scientific computing/machine learning are slowly taking over the world. It is bad to have random/occasional heisenbugs in systems that influence decisions from the personal to the international.

That's theoretically true, but (in my opinion) practically irrelevant. The thing about numerical kernels is that input is constrained by the mathematics involved and the output is rigorously verifiable. In practice, I can't imagine a realistic case where a memory safety error would not be caught by the usual verification tests that any numeric code of any importance is routinely subject to. That's why programmers in this domain don't normally think about memory safety as a separate issue at all, it's just a small and not particularly remarkable part of the normal correctness testing. A guarantee of memory safety still doesn't free you from having to do all those tests anyway. Obviously, this is much different for systems software that takes inherently unpredictable user input that's difficult to sanitize.

> - games are very, very often touching the network these days, and thus are at risk of being exploited by a malicious attacker.

The argument here is, I think, much stronger than for numerical software. Nevertheless, even for online games (that are still only a subset of computer games), network data is comparatively easy to sanitize and memory safety issues wouldn't typically lead to exploitable attacks. I've never heard of a game used as a vector for a serious attack in any context, but at least it's somewhat conceivable in theory.


Does Rust have template metaprogramming? And does it look more clean and organized than boost's C++ implementation?


Rust has three main forms of metaprogramming: generics, which are kind of like templates, but more like concepts (in C++ terms), macros, and compiler plugins.


I don't think compiler plugins count yet, since they're currently not stable.


That's an important point, yes, thank you. Their RFC was accepted two weeks ago. We'll see how long they take to make it into stable.


worth noting that Rust generics aren't (by design!) as powerful as C++ templates. macros and compiler plugins OTOH...


Wait, Rust has macros that expand to code and mess up debugging, confuse tooling and everything else just like C++? Isn't that exactly what the language should have avoided?


Macros in Rust are very different than macros in C++. They're similar to syntax-rules in Scheme: https://doc.rust-lang.org/book/macros.html


They do expand to code. They shouldn't mess up debugging or confuse tooling. They're much more similar to Lisp-style macros than the C preprocessor.
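A toy example of the difference: because a Rust macro argument is matched as a whole expression rather than as raw text, the classic C preprocessor precedence bug can't occur:

```rust
// In C, `#define DOUBLE(x) x + x` breaks under `DOUBLE(y) * 2`, because
// the textual expansion is `y + y * 2`. A Rust macro invocation is
// parsed as a single expression node, so the result stays grouped.
macro_rules! double {
    ($x:expr) => {
        $x + $x
    };
}

fn demo() -> i32 {
    double!(3) * 2 // expands as (3 + 3) * 2, not 3 + 3 * 2
}
```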


Oh, ok, the word macro is heavily overloaded and it seems I misinterpreted the Rust use of macros. But oh boy it costs lots of rep to ask a question.


Trust me, every C and Lisp programmer in the world is upset that both languages use "macro" to mean such wildly different things. :)


C++ macros don't expand to actual semantic "code", in terms of an AST or even lexed terms. They blindly expand spans of textual characters, which I agree no post-C language should ever do again.

Languages as old as Lisp do AST-level macro expansion, with the actual programming language itself computing the expansion. Any code template blocks in the macro body are processed and validated in the native syntax of the language so that nesting and such aren't mungable.


C and C++ macros are worse than textual.

They work with abstract token sequences.

This, per se, is not inherently worse, except that, oops: undefined behavior is worked into the spec. For instance, if you have a macro argument X which holds the ( token and another one Y which holds 123, and you paste these together using X ## Y, you get undefined behavior: two tokens are pasted to form something which is not a single, valid token.

A purely textual preprocessor wouldn't have a UB issue of this type.


Rust is great, however the safety aspect gets in the way sometimes

The right granularity for error handling is important, as well as making it easy to handle (abort? providing a default value? doing something else?)

It's not that it is not important, but code usability is important as well, lest it go the way of C++ hell (though I don't think it can get that bad, there are some warts - like "methods" and traits)


> The right granularity for error handling is important, as well as making it easy to handle (abort? providing a default value? doing something else?)

Option<T> and Result<T,E> achieve just about the best level of granularity I could imagine for error handling.

Suppose you're trying to fetch a value from a map (use case for Option<T>), or read some value over some fallible I/O stream (use case for Result<T,E>).

Want to abort if the value is missing or an error occurred? Just call unwrap() and ready yourself for (completely safe!) crash reports. unwrap() is just a way to dynamically assert certain invariants.

Want a default value in those cases? Just call or() or or_else() on the Option<T> or Result<T,E> value you have.

Want to do something else? Use Rust's pattern matching features and branch depending on whether the value was obtained, on conditionals relating to the value (or error), and more!
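Concretely, with a map lookup (the keys and default values here are invented for illustration):

```rust
use std::collections::HashMap;

fn lookup_styles(config: &HashMap<String, u32>) -> (u32, u32, String) {
    // 1. Abort if missing: unwrap() panics with a (memory-safe!) crash.
    let required = *config.get("timeout").unwrap();

    // 2. Fall back to a default: the unwrap_or()/or_else() family.
    let with_default = config.get("retries").copied().unwrap_or(3);

    // 3. Branch explicitly with pattern matching.
    let described = match config.get("timeout") {
        Some(v) => format!("timeout = {}", v),
        None => "no timeout set".to_string(),
    };

    (required, with_default, described)
}
```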


I can see Option and Result irking people who don't want to embrace a more functional (programming) mindset. They can be kind of a pain to deal with procedurally without the and_then, or_else, map and such. The new '?' construction will definitely help with this.

The appeal of exceptions, to me, has always been that they allow you to largely separate your error handling from your procedural logic since the catch block can usually go at the end of the method/function and can usually be written later. This means you can write and test code and then come back and handle error conditions after you've gotten the success path working. Option/Result force you to at least acknowledge the possibility of an error at the point when you're writing your logic. Whether that's ?/try!, .unwrap(), a match or writing in a more functional manner, error handling can't literally be an afterthought the way it is with unchecked exceptions.

I'm not arguing that this is a bad thing, and it may be a push in the right direction for many programmers. But it's still a push and many don't enjoy being pushed.


At least I'm not the only one disappointed in Rust's error handling strategy. I still firmly believe that exceptions are the best error handling strategy we've concocted so far and that Rust (and Go) represent a big step backwards in language design in this respect. It's still hard to get over Rust's default abort-on-OOM behavior that arises from the awkwardness that would arise from surfacing the possibility of allocation failure from every part of stdlib.

> the catch block

"The" catch block? For each function? Some people misunderstand exceptions and think that every function needs a catch block that cleans up resources allocated in that function. I hope you're not one of these people.

Idiomatic exceptional code has very few catch blocks. That's part of the appeal.


My criticism might be due to a bit of impedance mismatch, that's for sure

(Or maybe I'm just missing exceptions - because, as an example: I want to read a file and what I care about is getting a fd, anything before that "doesn't matter" and I don't want to care about every step)

On the subject, this page is good for those curious about it https://doc.rust-lang.org/book/error-handling.html


That sounds like a good use case for the ? operator in Rust 1.13+.
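A sketch of how that reads (using string parsing as a stand-in for the file-opening steps): each `?` propagates the error to the caller, so the intermediate steps read like the happy path.

```rust
// Any Err short-circuits out of the function, so only the final caller
// decides how to handle it -- similar in feel to exceptions, but
// visible in the function signature.
fn sum_of_two(a: &str, b: &str) -> Result<i32, std::num::ParseIntError> {
    let x: i32 = a.parse()?; // returns early on parse failure
    let y: i32 = b.parse()?;
    Ok(x + y)
}
```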


What about stack overflows? I heard that rust no longer protects against those for benchmark reasons.


That's incorrect. On some platforms, stack probes are not yet implemented because the patch to LLVM hasn't been merged, and we need it to do this properly. Someone is working on getting that through right now.


And stack probes cover only one corner case of stack overflow. Rust protects against stack overflow in most cases on all platforms (and in every case on Windows), and is intended and designed to protect against all stack overflow, but it has a bug due to missing features in LLVM that nobody has taken the time to fix.


What is "some platforms"? There is plenty of stuff in github that seems to indicate "only windows".


There is a reason Rust has debug and release builds, in addition to what other commenters have said. In debug builds there are safeguards against almost everything, at a performance cost; the way you are supposed to use it is to test your debug builds thoroughly, and then you can trust the release builds to be safe because your code doesn't change, the compiler just doesn't insert as many guards.


In general, this is less true than in other languages though; for example, assert! is still in release builds in Rust. If you want that semantic, you need debug_assert!


Safety stopped here: "curl https://sh.rustup.rs -sSf | sh" So did my interest.


It's convenient for the majority of people. There are regular downloads available (https://www.rust-lang.org/en-US/other-installers.html) and you can probably also get Rust through your system package manager.


"I trust your binaries but I don't trust your shell script for installing those binaries."


This is effectively what most package managers do too. If you're going to be running code from a third party, you need to be able to trust that command, or there's really nothing you can do, as some equivalent of it is going to run.


There's nothing stopping you from downloading the script and peeking at it yourself. There's also nothing stopping you from using the many other installation methods available.


io::stdin().read_line(&mut guess).expect("failed to read line");

My eyes have seen the glory of RUST, it's really javascript, right?


[flagged]


This comment crossed into incivility. Please remain civil when commenting here.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html

We detached this subthread from https://news.ycombinator.com/item?id=13282431 and marked it off-topic.


I think you have an inconsistent threshold for civility. The post I replied to had its fair share of snarkiness, insincerity, exaggerations, and patronizing.


I don't read it that way, but even if you're right, you need to follow the rules regardless.


Not a very civil tone. Just go ahead and ban my IP or something... I'm not particularly interested in appeasing you.


re: https://news.ycombinator.com/item?id=13282559

> Couldn't find one thing that could be simpler, eh? I think you just said you wouldn't admit you're wrong under any circumstances.

I'll happily acknowledge being wrong. Please point out how any of the features you mentioned could be simplified or removed without compromising on safety.

> You can call that 6 to 3 if you want, but it seems pretty neck and neck....

In any case, this is the problem with benchmarks--even with hard numbers we won't agree on who's winning because everything is up for argument.

> Swift and Rust use the same code generator. Rust is riding on LLVM's coat tails.

The code generator backend (LLVM) is not the same thing as the compiler frontend (lexer, parser, typechecker, AST tree shaker, etc.). I am specifically referring to the Swift compiler frontend. If you don't believe me, here's a dedicated repo for tracking Swift compiler crashes: https://github.com/practicalswift/swift-compiler-crashes

> Most programmers don't have the discipline to keep their programs simple. So yes, I'm lucky to be in a place where lines of code is considered a cost instead of a benefit.

I think we're talking about different things here. I'm visualising long-running software that drives automation of your systems. You're talking about quick jobs that do some task and then exit.

> ... we ... tend to use multiple processes instead of multiple threads. This scales very easily across a cluster of computers, something Rust won't do for you.

I mean, which language does scale multiple processes across clusters easily for you? I'm very curious to find out.

> Go ahead and have the last word if you like - I won't reply. I've clearly made you defensive, and I don't think this discussion is likely to turn friendly again.

If you won't engage in discussion, then how am I the defensive one?

    ¯\_(ツ)_/¯


[flagged]


You've repeatedly become uncivil in this thread. That's against the rules here. We ban accounts that do this, so please post civilly and substantively, or not at all.

We detached this comment from https://news.ycombinator.com/item?id=13282695 and marked it off-topic.


[flagged]


When have they broken stuff post 1.0?


Since the parent is content to just argue, in the interest of transparency, we have made one or two small fixes so far.

Specifically, I'm thinking of https://github.com/rust-lang/rfcs/pull/1214 , which was a soundness fix that went through warnings in 1.4 and became an error in 1.7.

Code like this:

  struct Foo<'a, X> {
      f: fn(&'a X)
  }
Would compile on 1.4, but not 1.7. The fix is:

  struct Foo<'a, X: 'a> {
The compiler would tell you exactly what to replace, and that's all it took to upgrade. This was worth fixing a soundness hole. We originally thought that it would become an error in 1.5, but we monitored the ecosystem to see how many packages upgraded, and it wasn't until 1.7 that we were comfortable making the change. This meant that there was no effective break, even though code technically broke.

Those are the circumstances in which this might happen: very important, yet trivial to fix. We've only done it once or twice, and the vast majority of code written in the real world that built on 1.0 in practice builds on 1.14 today.


  struct Foo<'a, X: 'a> {
take a look at the example. Just look at it! Are you redirecting from stdin and then redirecting to stdout? Or is this supposed to be C++ templates? 'a, X: a' is shell escaping, or what? The syntax is insane, and even if Rust lived up to all of your (plural) claims, how could I ever bring myself to program in something as hideous as that?

Some more syntax insanity which is Rust:

  let path = Path::new("../word.lst");
let's map this to your target demographic again (me):

I first learned of "let" in BASIC. Path::new - so we're doing C++ now? C++ BASIC. And then there's "new", which means I'm instantiating an object. Do you seriously expect system programmers to start programming in an object oriented language, when it's clear to us that it's a horrible way to reason about data? (Hint: go back to the '50's and '60's, you'll find this wonderful paradigm called functional programming, which has a lovely sideeffect of being both stateless and reentrant. And then you'll discover this wonderful language called LISP.)

  Err(why) => panic!("failed to open {}: {}", path.display(),
So what is this now, =>, SmallTalk? And exception handling! I. HATE. EXCEPTION. HANDLING. Why? Because if I'm using exception handling, it means that I was too lazy to do proper error checking and correction in the algorithm of my program! That is unacceptable for system programs, and frankly, for any type of a program.

  Error::description(&why)),
Then all of a sudden, we're back to C++. C++ is an epic, colossal failure, precisely because of its complexity: it introduced more problems than it solved.

So Rust is a mish-mash of all these different syntaxes, and maps to zero in the C programmer's model and domain. When you (plural) embarked upon this project, didn't you know that if you want to replace something with something else, you have to map to your target audience's prior experience and knowledge?

And I still haven't addressed your claim (which you (plural) completely ignored too), that your language and the compiler, and the algorithms for memory management and safety have no flaws. That's the implication of a safe language, that the programmer's implementation of memory management is flawless! Where does Rust come from? Ah yes, from the Mozilla Firefox team. And do you know how badly Firefox runs on my Solaris 10 system? It crashes all the time, it's slow, and the latest version I have (45.5.1 ESR), the audio started cracking and popping. There is no way I'm going to trust the caliber of programmers who don't care about my platform and about code quality so as to release something like that (and that's not the first time).

We are not friends right now: your product is bad, and I don't trust you. And you (plural) are very aggressively pushing for replacing something simple which works (C) with your insane programming language. Between Rust and ANSI Common LISP, the choice is clear for me: anything that I can't implement in C, shell, or AWK, LISP is going to be my destination. Functional programming. Machine code when I'm done in the REPL. Metaprogramming. Stateless. Perfect. I'd just as soon program in Ada again, rather than Rust.

By the way, I watched your talk on Ruby and Rust[1]. After watching the amount of insanity you had to go through to print one line on stdout, I wanted to cut my veins and throw myself out of the window: I could have printed half of encyclopaedia Britannica in shell or AWK by that time. But that wasn't the worst part. The worst part was that you saw absolutely nothing wrong with all of that insanity, in fact you found it "cool".

[1] https://www.youtube.com/watch?v=Ms3EifxZopg


Quite a few comments you've posted recently have crossed into incivility. You can't do this on HN. If you keep doing it, we will ban your account.

Please take extra care to be civil when disagreeing on HN. Snark, acerbic overstatement, and personal rudeness are all unwelcome here.

We've had to warn you about this more than once before. You've also posted some good comments, so I'm inclined to give you another chance, but if you want to keep posting here, please fix this and make sure it stays fixed.


Considering the amount of censorship ("snark", "acerbic overstatement", "personal rudeness", and repeated threats that "we will ban your account"), I am not at all convinced I want to keep posting here: "Hacker News" seems to have degraded into a club where people who stroke each others' egos get praised and rewarded, and where any criticism is labeled and severely reprimanded, even when it is warranted. I also do not appreciate being dictated to regarding the style in which I am to express myself. Lastly, the cultural bias in the criteria for what constitutes "personal rudeness" is, from my point of view, insensitive in the extreme.


Even once is one time too many.


No programming language, even C or C++ or Java, lives up to that standard.


They are very -very- close to that standard.


Totally. But OP is being absolutist about it. See my comment below.


ANSI Common LISP lives up to that standard. POSIX AWK lives up to that standard. C's versioning lives up to that standard. ksh93 lives up to that standard. All of those are backward compatible, and can churn through older versions of their own syntax with no problem.

But that's not the point, and you know it: the point is you guys were hacking like crazy, without any engineering. That's why the syntax of Rust is insane, and why the language is incredibly complex, and expressions ugly, even for the simplest of things.

And while we're at it: is there a formal specification for Rust, that would say, allow me to implement my own, standards-compliant compiler?


> C's versioning lives up to that standard.

C has introduced breaking changes into newer versions of the standard. I don't know as much about ANSI Common Lisp or AWK.

By "formal" spec, that depends; do you mean "a spec", or "a spec proven with formal methods"? The latter is undergoing work at various universities. The former doesn't exist yet, but is a goal of next year, and we've already taken some steps towards having it exist.

I'm not going to bother with the rest.


Out of interest, what breaking changes have been introduced to C in C99 and C11 (I’m assuming we’re using C89 as a baseline) beyond changes in corner cases in tokenization due to the introduction of “//” comments (and perhaps the removal of gets)?


The removal of gets is what I was thinking of.

FWIW, I think that's the right thing to do. My point is just that the OP has completely unrealistic expectations of how actual programming languages work. What actually matters is, how much pain do you feel when upgrading to a new version of the language? The removal of gets violates the parent's "even once is one time too many", even though in practice, it's a complete non-issue.


The removal of gets is what I was thinking of.

gets(3C) is a standard C library function; it has nothing whatsoever to do with C the language. Proof:

  Standard C Library Functions                             gets(3C)

  NAME
       gets, fgets - get a string from a stream

  SYNOPSIS
       #include <stdio.h>

       char *gets(char *s);

       char *fgets(char *s, int n, FILE *stream);
 
Rust is targeted at people who write system code in C (me). But how the hell are you going to appeal to my demographic group, when you don't distinguish between libc and C the programming language?

What actually matters is, how much pain do you feel when upgrading to a new version of the language?

And this is where you (plural) err: take the illumos source code, for example. Take the GNU/Linux source code! No, take the OpenBSD or FreeBSD source code. How many lines of C code does any of those code bases have? You expect us to rip it all out and replace it with a different language. What is our cost of replacement?

That's the burning issue! And even that's ignoring the fact that the syntax of Rust and the concepts like borrowing are insane, and map to nothing in the experience of your target demographic, the system programmers.


> ANSI Common LISP lives up to that standard.

The interface to environment objects in Common Lisp the Language 2nd Edition did not make it into the ANSI standard. If you're going to fail Rust for making changes pre-1.0, then you should fail ANSI Common Lisp too.


> ANSI Common LISP lives up to that standard.

There is only one version of the ANSI Common Lisp standard, so it is trivially compatible with itself.

> C's versioning lives up to that standard.

Then how do you explain this astonishing work of art?

https://kristerw.blogspot.com/2016/07/code-behaving-differen...


Easy: you tell the compiler which version of C you're compiling, and it churns right through it without a peep. Versioned formats, cornerstone of every good engineering implementation.


Why is this guy downvoted constantly? Whenever a lively discussion comes up, HN manages to stifle it.


As someone looking in that hasn't voted one way or the other on this thread, I would say it's because there's been no substantiation of the claims being made, even after it was specifically asked for.

HN does not look kindly on people that make definitive claims without evidence, and especially not on those that refuse to provide evidence when then requested. There are less inflammatory ways to make the statements presented here, and ways to do it with evidence. That wasn't done.

I have seen the "breaking changes" aspect discussed multiple times in the past, and Steve has even addressed it in a prior discussion (linked to by him from this discussion)[1] and in this thread itself farther up[2]. If you respond to him there with any questions, I'm sure he would be happy to answer or point you to relevant discussions.

1: https://news.ycombinator.com/item?id=13267399

2: https://news.ycombinator.com/item?id=13278367


You didn't answer my question :)


I did so answer your question, you just don't like my answer.

If you expect me to go look at every single thing that they changed, I'm not doing that. This isn't a pissing contest.


I asked you to provide a single example as proof and you have not done so. There is no pissing contest in asking you to provide a tiny bit of proof for your statements.


My claims are based on Steve Klabnik's own article on the subject, "four years with Rust":

http://words.steveklabnik.com/four-years-with-rust

I guess you somehow missed it, which is surprising considering just how extremely aggressive Rust people are at pushing the language. And frankly, I'm getting sick and tired of having to repeat and quote every single thing over and over and over again. The Anglo-Saxon system of god-like deference to sources, which themselves might be utterly wrong, is fundamentally flawed.


The daily post about Rust and Go is getting tiresome... every single day we've got one.


I read them all, you don't have to - just scroll by. I for example don't like the 'daily' snake oil salesman type of posts that pop up here, so I just skip them - easy.


> "Safety in the systems space is Rust's raison d'être."

I think this quote points to the REAL underlying issue here.

Rust is a language primarily built for systems programming. It has many strengths to celebrate, and brings curated best practices as well as its own novel features to systems programming.

However, most programmers in 2016 aren't "systems programmers" anymore. At the very least, most programmers who actively talk-up new technologies on web forums are not systems programmers. The majority (or at least the majority of the vocal and socially engaged) are web developers, mobile developers, CRUD apps and microservices, etc.

As interesting as Rust may be in the systems space, it doesn't bring much compelling new hype to the table for web stuff.

You have yet-another-concurrency-approach? That's great, but most web developers rely on an app server or low-level library for that, and seldom have to think about concurrency up at the level of their own code.

You have an approach for memory safety without a garbage collector? That's great, but most web developers have never even had to think much about garbage collection. Java, Go, etc... the garbage collection performance of all these languages is on a level that makes this a moot point 99.999% of the time.

You have a seamless FFI for integrating with C code? That's great, but after 20 years of web development I can count on one hand the number of times I've seen a project do this. And those examples were Perl-based CGI apps way back in the day.

Rust people seem almost dumbfounded that everyone hasn't jumped all over their language yet. And from a systems programmer perspective, memory safety without garbage collection does sound amazing. But you guys really need to understand that Hacker News and Reddit hype is driven by web developers, and that community isn't even sure whether or not type safety is a worthwhile feature! So really, it's amazing that you've managed to draw as much hype as you have. It's not about the mainstream popularity of your language, it's about the mainstream popularity of your field.


The way the industry is moving, it seems that:

(1) Web apps are increasingly being split into a separation between presentation-only frontends + multiple "purely API-oriented" backends;

(2) There is a trend towards static typing in webapps.

We're currently using Go (plus some legacy Ruby apps we haven't rewritten yet) to implement microservice backends that serve APIs, and the frontend is all client-side JavaScript using React. (We also use Node.js to do server-side rendering of static HTML for Googlebot, but for a typical visitor, all the magic happens in the browser.)

For us, Go works well, and the compilation and static typing are a much-appreciated safety net compared to the anything-goes world of Ruby, not to mention much better performance and memory usage (one app went from being multiple Unicorn processes consuming 2GB RAM in total, down to a single process using ~60MB and a fraction of the CPU usage).

But this isn't "systems programming" at all, and yet for me, Rust is very much on the table as a possible next language. I'm not sure whether its complexity is too large a hindrance yet. Go is already a challenge for junior programmers who are used to dynamically typed languages, and Rust much more so. I'd love to be able to hire all seniors, but all the hot startups are taking them. (Though in that sense, Rust may even serve as a good carrot.)

As for static typing on the web side, TypeScript — which is essentially static typing for JS — is also most definitely in our future. Over the last year or so, Microsoft has made it easier to work with a mix of legacy JS and TS, so you no longer have to convert the entire codebase to migrate, which is great.

I don't think every company is going to move away from their classical Rails or PHP stack, of course, but there's definitely a trend, and I can imagine Rust becoming a popular alternative to the other statically-typed languages, including Scala and Java.


I've written web services in Rust, and I'm quite happy with it. Iron is a pleasure to use, and the stuff I write is incredibly performant. There's also something amazing about compiling a project to a single binary and deploying.

Rust is only about a year old. Once the library support grows, it's going to be a giant in just about every space I can think of.


The memory safety of Rust is oversold.

1. Several smaller languages have also introduced type systems that try to solve the memory safety problem, yet none of them became famous. There are many factors beyond safety that determine whether one language out of the many gets massively adopted.

2. In many cases, manual memory management is not hard. A lot of great software has been built with manual memory management. I admit the quest for better memory management is always worthwhile for systems work, but see #3 below.

3. A linear/affine type system[1] is not a panacea. "Used exactly once" is just one small case; forcing everything into this pattern produces a lot of boilerplate. And constraints and verification can be applied to a system at many levels and in many aspects. Is it truly worthwhile to push all of it into the type system?

4. The memory safety of Rust comes at a price: it has added many complexities and burdens to the language itself. Who would like to read the following function declaration? (Just borrowed as an example.)

fn foo<'a, 'b>(x: &'a str, y: &'b str) -> &'a str

5. So, finally, the question arises: does the current form of Rust's memory safety deserve to be the hope of the next industry language? I'm afraid...

[1] https://en.wikipedia.org/wiki/Substructural_type_system


> Who would like to read the following function declaration? (Just borrowed as an example.)

> fn foo<'a, 'b>(x: &'a str, y: &'b str) -> &'a str

Anybody who cares about the validity of pointers would like to read that. It very explicitly tells you what must be true of the arguments for them to be valid and for how long the result will be valid.

In general, you seem to be over-valuing conciseness. If you think conciseness is more important than memory safety, feel free to not use Rust. But frankly, it is trivial to make a language more concise. That's not a hard problem whatsoever.



