Microsoft: Rust Is the Industry’s ‘Best Chance’ at Safe Systems Programming (thenewstack.io)
523 points by adamnemecek on June 13, 2020 | 440 comments



A lot of 'anti-rust' sentiment here.

Personally, I think the article is somewhat correct. Well vetted and tested C (or C++) is great, but even the guys who are looking for these things get it wrong sometimes.

Rust isn't perfect, and I think the negative reaction is people thinking that evangelists consider it perfect. But I certainly wouldn't call it perfect, and I'm rather fond of Rust.

That said, I still think it's the "best chance". That doesn't mean it's the only possible thing that could make headway toward safer systems, but it does have the best chance.

Maybe you can make D memory safe, but when I was getting into D development I couldn't even get a compiler working.

Maybe Ada is your favourite, but I don't know anyone writing Ada and integrating it with a C codebase.

--

Frankly, it comes down to the fact that Rust was designed specifically for replacing components of a large C++ codebase; that's why it will be adopted. If you can write a driver that works with Linux that's written in Rust, and you don't have as many memory safety issues or concurrency problems (and there's no overhead), then that's reasonable.

Even if Rust never gets supported by the kernel's buildsystem officially or if the code can not be upstreamed.


"Anti-rust sentiment"?

I'm a Rust user, a fan of the language, and I believe it's a great step forward for systems programming.

However, it's kind of ridiculous how on any thread about Rust, anyone who brings up criticism, no matter how valid, gets downvoted into oblivion. Just look at what happened to the comments critical of Rust on this post.

The Rust community really needs to take care not to become an echo chamber.


I'm one of the leads of the Rust language and community, and we absolutely welcome constructive criticism in many different forms. That's part of how we improve the language. I regularly upvote comments that point out things Rust needs to improve, as well as linking them from appropriate github issues or Zulip discussions.

The whole "RESF" thing is not in any way welcomed by Rust community and governance folks. It's not funny, it's harmful, and anyone posting it is doing a disservice to the language they purport to support.

To anyone reading this: please don't downvote constructive criticism of Rust. Save your downvotes for the comments that wish Rust wasn't as welcoming to everyone. ;)


> constructive criticism in many different forms

The thing about constructive criticism, when it comes to something like programming language design, is that its usefulness varies inversely with the importance of its aim.

Core features of the language are highly important (obviously) but criticizing them is not very useful since they're highly unlikely to change. Thus, only the most peripheral and superficial parts (as well as new additions) are subject to valid, constructive criticism.

People who have criticism for (or are otherwise skeptical of) the core parts of Rust are not being constructive, then, and so these fights can break out.


I've seen (and participated in) useful, productive discussions about core aspects of Rust; occasionally such things can even be improved. It takes a lot of care to have a productive discussion on such things, but it can absolutely happen.


Constructive doesn't mean actionable; not sure where that assumption comes from. It may be that only future languages can reap the rewards of a discussion on core features that are unlikely to change in Rust, but that's still potentially valid constructive criticism.


What does RESF stand for? I know of "Rust Evangelism Task Force" and I am trying to match against "Rewrite Everything ..." or similar, but I am lost...


I had to Google to find out... Rust Evangelism Strike Force https://twitter.com/rustevangelism


Same thing, "Rust Evangelism Strike Force".

I'm fairly certain it's a meme.


Rust as a language is great and at this point I'd much prefer to use it over C or C++ any day of the week if the application and context permit. However, I don't think I'd ever want to participate in the Rust community or really even have any contact with it. Everything I've seen of Rust's community has been deeply toxic and hostile to anyone who falls short of worshipping the language and aggressively promoting and defending it at every chance.


I'm really confused by your comment, because the experience that I have had, and that of the coworkers I've surveyed, in the Rust community has been uniformly amazing: 1) the technical discussions are always respectful, 2) the in-person conferences have clear codes of conduct and the speaker panels represent the best of the engineering and human community, and 3) the community actively encourages people asking newbie questions and goes out of its way to be supportive and non-judgemental when replying.

The only counterexample that I can think of is the way the actix maintainer's behaviour around unsafe code was handled (brigading bands of Reddit evangelists flaming him for his own flaming). However, I chalk that up to an anomaly caused by Reddit's toxic community, not something representative of the Rust community.


I wonder which community you’re talking about. My experience is rather positive. The infamous Rust evangelization task force is just a bunch of fanbois to put in your killfile.


I’d be curious what your exposure to the Rust community has been and where?

Within the Rust reddit or forums, they specifically discourage blind evangelism, and there are often posts critical of the language that get quite a lot of traction.

Outside the main rust community, there are definitely zealots but they are usually not accepted by the larger community, and are usually called out.

I’m not saying it’s perfect, nor that there aren’t zealots who go overboard. However, it’s probably one of the least toxic of the programming communities I frequent (Swift, C++, Python, C#, Go, etc.)


I think this is a combination of two things, and is true of many new programming platforms (languages, etc.).

1. The tech hasn't made full contact with reality yet. For instance, there isn't a diverse set of large projects written in Rust yet, so it's a little easier to imagine that it's perfect.

2. We should acknowledge that technology choices have a big impact on one's career. Sure, we are all software engineers that could work in a lot of environments; but we are simply more productive in the ones where we have some experience/depth. If someone is learning rust today, in many respects that's a bet, and they (subconsciously or consciously) have a vested interest in rust's success. More experienced people will usually have a more balanced risk portfolio, spread between proven tech and a few up-and-coming technologies. But for people earlier in their career, it might be a pretty major bet for them.


They’re also super weirdly political. As a not-especially-political guy I’m super turned off by all the “activism” I see coming out of the community, even where it’s totally irrelevant. I remember seeing a whole thread about how the rust mascot crab is canonically non-gender-binary or something. Wtf?


If discussing the gender of the crab is politics, then surely assuming the crab is one gender or another is also politics. The idea that you can avoid politics in anything is a lie told by people whose beliefs conform to mainstream society. What you're really asking for is everyone to agree with the mainstream view.

It's kinda like the old military policy of "don't ask don't tell" - the policy wasn't actually don't ask don't tell - the actual policy was "pretend to be straight". Guys didn't get in trouble for talking about their girlfriend.

Beyond that, as someone that doesn't care about politics, I just skip the thread. There are dozens I don't care to read about. It's just another one. No big deal.


> If discussing the gender of the crab is politics, then surely assuming the crab is one gender or another is also politics.

A) sane and professional communities just don’t talk about irrelevances like this at all, which is my point

B) if some fringe political belief rejects some standard bread-and-butter assumption, it’s not really useful to describe running with the default assumption as “political”

“If abolishing private property is politics, then surely locking your door is politics.”


If communities want to go down that road, fine, but then they need to get out of this weird limbo where some views are allowed and some aren't. That means that someone must be just as free to say, "I believe the rust crab should have a normal gender because non-gender-binary is not a thing." without getting banned or cancelled. Right now it's reverted to a very don't-ask-don't-tell-like environment where one can either speak in support of such things or not say anything at all, effectively pretending to agree just as in your military example servicemen had to pretend to be straight.


In my experience many people (unfortunately) generally feel free to disparage non-gender-binary concepts on internet communities


Maybe in other places on the internet, sure, but I was referring mostly to the code-of-conduct-gated development communities that pretend not to discuss politics while allowing specific viewpoints above others.


I can see a difference between light-hearted speculation about an imaginary crab's gender and saying that non-gender-binary people don't exist. Can you think of any reason why one of these might be allowed, while the other might not be?


I guess I can't, because they're both political statements. Politics is either allowed or not. I assume, however, you have some reason in mind, by how you phrased the question.


Interesting - well, honestly, not sure how anything wouldn't be political if discussing a crab is politics.

Not that that's necessarily wrong - I'm definitely sympathetic to the argument that the personal is political. But that argument would justify more technical communities taking political stances, not fewer.


Discussing a crab is not politics, but attributing to it a non-existent gender is intentionally contrary to most people's social beliefs (especially outside of the SV bubble) and is thus political. Calling it a normal boy or girl isn't.

If, however, some community members wish to do that, they open the door for others to speak their minds on the topic, including stating that such an action ought not to be taken because no such gender exists. Contrast this with how most "codes of conduct" are written: they would permit community members to speak in favor of such a decision but not against it, as doing so would be "hateful", "bigoted", or "exclusionary".


Don't think of it as a community being political. It's not like people pick a programming language and then suddenly one day decide to be political, in part due to their programming-language choices.

Think of it instead as "people who are political" looking for a community (not necessarily because of that community's politics, but often for orthogonal reasons, e.g. a language being relatively new and so not yet used by anything they dislike) and then those people being political about their programming wherever they land. If some percentage of programmers are like that, then just statistically, some language is going to end up with a lot of them, whether its maintainers wish to cultivate the community image as being such or not.


My first reaction was: why is a non-gender-binary crab mascot considered "political"? I'm assuming this came up incidentally in the context of its name and pronouns?

By this implied definition of political, more or less any human social interaction is political in the broader view.


It’s one example out of many where rust users bring a political topic (and yes, post-gender queer theory stuff is political) into a discussion where there’s no reason to do so except to leverage the discussion as a platform for boosting your political belief. It’s very unprofessional, very counterproductive, and a huge turn off for people who don’t have the same fringe politics.


You're right that it's primarily a social issue, but people will always try to use the state to enforce their social beliefs, so they become political. See New York City's ordinance that can cause those who won't use a made-up "pronoun" to be fined [0], undoubtedly a use of force by the state to enforce a certain set of beliefs.

[0]: https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...


I have to say my experience with the community has been really good, whether it be the Reddit channels, Discord channels, or conversations with people putting Rust-related stuff on YouTube.

Maybe you had a particularly bad experience after someone mistook your behaviour for trolling - not that you would have been... I just haven't witnessed any bad behaviour in the community yet, either towards myself or others.

If anything, I have appreciated how much support the informed members give to relative newbies like myself. Not only advice, but listening without ridicule when the speaker is not that knowledgeable on the topic of conversation.

@jasondclinton, I agree with you regarding the bad experience the actix project owner had over his use of unsafe code. It was sad to see that happen to an open source contributor (or anyone).


That is how the Mozilla community works and the kind of people they attract.


This was my opinion too. It might be a generalisation, yet it is the impression that I, and it would seem others, have!


Then don't come near the Clojure community either, that's the most cultist one I have encountered.


That has a lot to do with Rich Hickey and Cognitect encouraging cultishness.


Would you care to elaborate? I'm very fond of the language but have found people to be generally pleasant. The only criticism I have is that it's a pretty small community, so finding fellow Clojurists tends to be hard.


(Not the person you responded to.) My direct interactions with the Clojure community have been very pleasant and helpful, and I have zero complaints there. But when searching around for common questions related to Clojure, I often run into answers along the lines of "Clojure doesn't have X because our language is so superior we don't need it", which does sound cultish.

For example, I was looking for GraphQL ClojureScript libs a few days ago, and found a thread where the first answers were essentially:

- "If you can’t find a framework that does it, chances are it’s so easy to do in Clojure/ClojureScript that nobody bothers to put an abstraction there"

- "Datomic has been doing what GraphQL offers plus much more since like 2013"

Both answers are a bit cultish. The first suggests a lack of enlightenment. The second suggests that salvation can be found by embracing the cult even further and locking yourself into a proprietary database invented by Clojure's author and used almost exclusively by Clojurists.

(Edit: This thread was from 2017. Such answers may have become more rare since then.)


The second was probably me. Sorry about that. My words reflect me as an individual who is sometimes socially challenged, not the wider Clojure community.


Ouch—I didn't link to the thread in my previous comment specifically because I didn't want to point fingers at people who were trying to help back in 2017, even if I disagree with their answers, but here you are anyway! For what it's worth, I'd rather have more people in the Clojure community than fewer, even if I don't always share the same viewpoints.


At least they don't bring politics in it.


I think the downvoting is pretty ubiquitous across HN nowadays. This will also happen with any criticism of Apple, for example. Most of HN is an echo chamber, as evidenced by the often off-topic posts and discussions that reach the front page. I still come here for insight, but I rarely have in-depth discussions here because HN and the crowd simply no longer cater to that kind of discourse for the most part. This is probably due to its increased popularity.


Where do you go instead?


I just started having the conversations with my peers and colleagues instead. The net benefit of getting upvoted here is minimal, as are the social/networking benefits. In-person convos really can be much more rewarding and fruitful, because you can keep them focused, rather than posting on HN where some person who takes offence can come along and derail and kill the conversation. I'm honestly more jaded about online interaction than most, though.


If you ever need a penpal, hit me up.



> anyone who brings up criticism, no matter how valid, gets downvoted into oblivion.

When I made this comment there was nothing I could call "valid" criticism.

There are valid criticisms though; lack of dynamic linking, and immaturity (most crates only building on the nightly channel, with no easy way to know whether a version builds on stable), are large weaknesses I can think of.

I know these and I'm not a full time rust programmer.


Dynamic linking is quite possible. You have to use the C ABI however when interfacing with code that's not part of the build.
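
For anyone unfamiliar, a minimal sketch of what that looks like (crate built as a cdylib; the function name here is made up):

    // Exposed with an unmangled name over the stable C ABI, so a C
    // caller can declare: uint32_t mylib_add(uint32_t, uint32_t);
    #[no_mangle]
    pub extern "C" fn mylib_add(a: u32, b: u32) -> u32 {
        a.wrapping_add(b)
    }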


We do this _all the time_ as we write almost all of our software in Rust, but lots of it is for use with/in embedded systems developed in C.

It's true that you can mate up to C by exporting the C ABI everywhere. This isn't a free lunch though. There are annoying and/or weird side-effects you sometimes have to deal with when leaning heavily into doing this.


> This isn't a free lunch though. There are annoying and/or weird side-effects you sometimes have to deal with when leaning heavily into doing this.

as a person not super-experienced in Rust, I'm curious what kind of side-effects? Binary size bloat?


That works in theory but often not in practice. The problem is that many C libraries keep their API backwards compatible but not their ABI. For example, a function might become a macro or the size of a struct might change.

It is possible to keep the ABI backwards compatible by practicing good C hygiene, but many library authors don't know how. Especially when it comes to Windows, which is a platform most C library authors are not intimately familiar with.


Dynamic linking requires a backwards-compatible ABI essentially by definition. If the ABI changes, that involves a bumped SONAME and version number hence binary objects should never end up being linked in incompatible ways.


Few library authors understand that. In recent years libmq, libpcre, and libssh have all introduced ABI breaking changes without bumping the SONAME. Unless you have very detailed knowledge of how dynamic linking works on all supported platforms it is very hard to know what constitutes an ABI break.

The reason it mostly works on Linux is because header files are available. For example, if a function in library A that program B depends on is changed to a macro then all that is required is for B to be recompiled. Which B almost always is because most users get their software from package archives that are recompiled when dependencies are updated.

On Windows there is no such thing as an SONAME, and Microsoft has essentially given up on ABI compatibility. If you write C or C++ in Visual Studio you must ship it with the language runtime DLLs for that version of Visual Studio. Microsoft doesn't guarantee ABI compatibility with any other version of the runtime.


It looks like the Windows equivalent is called a "side-by-side manifest". It has to be opted into from both the shared object (.DLL on Windows) side and the loading application, though. On Linux, it just works - it is idiomatic to include the major version in the SONAME.


libc usually does pretty well, FWIW.


> The Rust community really needs to take care not to become an echo chamber.

Exactly. The language has been proven to be stable; however, a valid concern is the unsoundness risk a project takes on when it begins to depend on lots of third-party crates.

The moment one imports a crate or third-party dependency into a project, unless the dependency explicitly forbids unsafe{}, all bets are off on the 'safety guarantees', especially when that crate is used in multiple projects.
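
(As an aside, a crate can make that explicit about its own code with the real, compiler-enforced `#![forbid(unsafe_code)]` attribute, though it says nothing about the crate's own dependencies. A minimal sketch:)

    // At the crate root: any `unsafe` block or function anywhere in
    // this crate is now a hard compile error.
    #![forbid(unsafe_code)]

    pub fn length(xs: &[u8]) -> usize {
        xs.len() // fine: no unsafe needed
    }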


"All bets are off" is an insane way to characterize this. IF a project is 99.99% safe code and has one unsafe dependency that is a massive difference from a project that is 100% unsafe. The surface area of the project that may be vulnerable is going to be relatively tiny.

As for an echo chamber, yeah the Rust community downvotes way too much. I find myself upvoting just to counter it.

But the downvoted comments here are legitimately bad. Someone being pedantic about the 70%, and being wrong, and another person calling this propaganda.


Rust doesn't check for numerical overflow and underflow unless you run in debug mode, which almost no one does because it is about two times slower than release mode. I.e., Rust doesn't live up to the safety standards many programmers coming from dynamic languages are used to, where overflow and underflow are checked.

Many people in the Rust community appear to be unaware of this, which is actually a big deal. Many of the bugs with the most catastrophic consequences are caused by overflow and underflow. Consider, for example, a gas pedal indicator kept as an unsigned integer that underflows, causing the indicator to jump to its maximum value.

If safety was my primary concern, I wouldn't choose C++ but I wouldn't choose Rust either.


You can pass "-C overflow-checks=on" to rustc to generate an optimized build with over/underflow checks enabled, even in a release build. Also, note that over/underflow behavior is defined to always be two's compliment wrapping when overflow checks are disabled.
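
The standard library also has explicit operations that behave identically in every profile; a small sketch:

    fn main() {
        let max = u8::MAX; // 255

        // `max + max` would panic with overflow checks on, and wrap
        // silently with them off. The explicit forms never vary:
        assert_eq!(max.checked_add(max), None);        // Option on overflow
        assert_eq!(max.wrapping_add(max), 254);        // two's-complement wrap
        assert_eq!(max.saturating_add(max), u8::MAX);  // clamp at the bound
    }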


> Rust doesn't check for numerical overflow and underflow unless you run in debug mode, which almost no one does because it is about two times slower than release mode.

Totally, I wish Rust could express over/underflows better!

> Rust doesn't live up to the safety standards many programmers coming from dynamic languages are used to, where overflow and underflow are checked.

Yeah, that's possible. Though it won't lead to unsafety, it is definitely a gotcha.

> Many people in the Rust community appear to be unaware of this, which is actually a big deal.

Maybe, not sure.

> If safety was my primary concern, I wouldn't choose C++ but I wouldn't choose Rust either.

Really depends on the goals and constraints, of course. I'd be really concerned about using rust where an integer overflow could kill someone. I'd be really happy to use rust where a program crashing is totally acceptable, but exploitability isn't.


Rust is memory safe, not infallible. Catching more runtime errors is something Rust can improve on; Erlang is one example of a strategy.


> "All bets are off" is an insane way to characterize this.

Well, it depends. If the unsafe{} code is also unsound, then "all bets are off" is quite correct because you can't assume memory safety or lack of UB, even in your own code. In practice, you can use cargo-geiger to warn about crates that internally rely on unsafe code, and cargo-audit will warn about known security issues (including but not limited to unsoundness and memory-unsafety).


> then "all bets are off" is quite correct because you can't assume memory safety or lack of UB, even in your own code.

The problem with "All bets are off" is it's not meaningful. What does that mean? It's so vague.

Lots of unsafe code is safe. Lots of UB is not exploitable. I can link to a vulnerable crate and not be vulnerable.

Can there be an exploitable bug in rust code? Yeah, sure. But saying "all bets are off" gives the impression of "why even bother if you link to unsafe code", which is absurd. Yeah, we should strive for more safe code, audit unsafe code, be very cautious about using it, but "all bets are off" come on.


> Lots of unsafe code is safe.

This seems like a weird statement to make to me when defending Rust simultaneously. The whole selling point of Rust is that it's supposed to let you write provably safe code. When "unsafe" is used, what you're writing can no longer be proven as safe by the compiler. Maybe you've convinced yourself it's actually safe via some other argument, but people have been convincing themselves that their code is memory safe for decades and they're often wrong. More insidiously, relying on anything in Rust that uses "unsafe" makes it seem like the compiler is proving your own code is safe, but that is no longer true.

With that said, I don't really disagree with what I think your main point is. But seeing unsafe code in Rust is similar to seeing unsafePerformIO in Haskell libraries. Both of these are much more common than the languages' respective evangelists would have you believe, and at some point it actually does start to call into question whether the languages are successful at their basic design goals. (Spoiler: I believe that they are successful more than they are not, and even in the face of the proliferation of "unsafe" code in both languages, they're still worth using.)


> The whole selling point of Rust is that it's supposed to let you write provably safe code.

I disagree. A significant purpose of rust is to encapsulate unsafe code, and provide safe abstractions. Lots of unsafe code is safe - most of it, even, I'd bet.


> Lots of unsafe code is safe

I mean this with the greatest possible respect, but you replied by simply restating verbatim the quote that I originally responded to, so I'll just point you to my previous post.


`unsafe` the keyword vs unsafe the concept. Lots of `unsafe` code is safe when used through a safe wrapper. Verifying that is the duty of the programmer. The compiler can't prove it, but quite often the programmer can.

IMO it's good practice to provide a comment with such proof for every `unsafe` block.

So `unsafe` does not imply unsound, but unsound does imply `unsafe`.
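
A toy example of the pattern (illustrative, not from any real crate):

    /// Safe wrapper: callers cannot misuse the unchecked access.
    fn first(xs: &[u32]) -> Option<u32> {
        if xs.is_empty() {
            return None;
        }
        // SAFETY: the emptiness check above guarantees index 0 is in
        // bounds, so `get_unchecked(0)` cannot read past the slice.
        Some(unsafe { *xs.get_unchecked(0) })
    }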


This is what I was trying to say. Unsafe doesn't imply exploitability, at all.


Sorry if I misunderstood your post then.


The compiler has very limited "proof" capabilities even in principle, especially wrt. unsafe{} code. From a principled POV it would be nice to be able to provide a formal, compiler-checked proof that some block of code that has to rely on "unsafe" operations doesn't actually violate the language safety guarantees and can be wrapped safely. But this will have to depend on a formal model of "unsafe" itself which is still lacking. And it wouldn't even help for things like FFI, which involves calls into outside code for which the semantics are not formally known.


Well, there's two interpretations of "all bets are off". One is the runtime one, which is if you have a bug in unsafe code, then it is possible that all the other code in your program is safe but you're still totally out of luck because it can invalidate every guarantee. However, from an auditing perspective, the only code you need to look at for memory safety issues is that particular snippet, so you're in a much better position from that perspective.


"All bets are off" is not an engineering way to look at the problem imho. You alter your risk tolerance with the benefit gained. It's just trade-offs and the ability to do that is good.

I'm not going to argue you should use rust or not use rust but I think the state space is not binary and people reading will benefit from that.


This is true, but it's also something the Rust community discusses all the time, along with what they can do about it. It seems like a huge leap to the conclusion that the Rust community is an echo chamber because some middlebrow dismissals got downvoted on Hacker News.


Even legitimate criticisms get downvoted or countered with dishonest marketing speech like 'borrow checker doesn't matter'.


IMO crates are the biggest danger to Rust. It may very easily turn into the next NPM.


I think an even more problematic corollary is the desire to put nothing in the standard library, which causes the proliferation of crates that are the "standard" way of doing things but don't really have the guarantees/benefits that come with being part of "the" standard library.


Yeah, I agree with this.

I see it kind of like Go's lack of generics, in a strange way. With Go, no one was explicitly "anti-generics", it's just the core team couldn't find a design they liked, and as time went on it became sort of a meme. "Go is opposed to generics." "No, here's that blog post by Russ Cox about finding a way to implement them." Etc, etc.

Similarly, the rust core team was afraid to settle on designs for certain libraries, pointing out things like python's various standard library HTTP modules, and at least "for now" shunting a lot of functionality out into the ecosystem. When you have limited manpower, this makes sense, but I think a lot of people have since rationalized that decision as philosophically the right thing to do, not just a way of prioritizing development work.

In other words, I feel like it's a case of a smart tactical decision growing into an identity.


The saddest example I’ve come across so far is the lack of strftime, or any way to even print a clock time / date time in std. Even JavaScript has that. In a language supposedly way past 1.0, one has to rely on 0.x third-party libraries to do even the most basic things.

Disclosure: only dabbled in Rust a bit, currently maintain two crates of some traction.
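
For what it's worth, the usual workaround is one of those 0.x crates, e.g. chrono (0.4.x at the time of writing); a minimal sketch:

    use chrono::Utc;

    fn main() {
        // strftime-style formatting, which std cannot do at all:
        println!("{}", Utc::now().format("%Y-%m-%d %H:%M:%S"));
    }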


The rand and regex crates are two other examples. I would also prefer to have bindgen in the standard library.

I’d love to use Rust in the OS I work on, but integrating our build system with Cargo and dealing with external crates is a huge obstacle.


> I would also prefer to have bindgen in the standard library.

That's a challenging one. I'd like to have native support for C integration as well, but we'll have to balance that with the stability of the corresponding clang interfaces, and the requirement to ship extra libraries that the Rust toolchain doesn't currently ship.

Working on it, though.

> The rand and regex crates are two other examples.

rand I'd agree with, though it would need paring down to not have as many dependencies. For regex, there are multiple tradeoffs to be made; for instance, the most popular regex crate is fast and does most of what people want but doesn't have backreferences or lookahead/lookbehind assertions.
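
To make that tradeoff concrete: the regex crate guarantees linear-time matching precisely because it omits those features. A quick sketch of its API:

    use regex::Regex;

    fn main() {
        let re = Regex::new(r"^\d{4}-\d{2}-\d{2}$").unwrap();
        assert!(re.is_match("2020-06-13"));
        // Backreferences such as r"(\w+)\s+\1" are rejected by design.
    }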


Question: is bindgen std quality yet? It seems there are still largely unsolved problems, e.g. https://github.com/rust-lang/rust-bindgen/issues/1549 which is causing hundreds of warnings in a -sys crate of mine and for which I have no recourse.


> the requirement to ship extra libraries that the Rust toolchain doesn't currently ship.

Ah, that's something I did not consider...


I think the issue there is that version numbers are arbitrary, and telling when a library is "1.0" is effectively an undecidable problem. There are lots of 0.x crates which are stable and feel feature-complete but aren't specifically versioned as 1.0.

I would actually prefer it if the Rust std did not include functionality that's not essential; for example, sockets don't seem essential, and the standard library does not implement them in a particularly complete and well-thought-out manner either, IMHO.
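
(For context, the sockets in question are std::net; a minimal sketch of what std does ship - blocking, synchronous TCP/UDP only, with anything fancier left to third-party crates:)

    use std::io::Write;
    use std::net::TcpStream;

    fn main() -> std::io::Result<()> {
        let mut stream = TcpStream::connect("127.0.0.1:8080")?;
        stream.write_all(b"ping")?;
        Ok(())
    }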


It would indeed be nice to have some sort of middle ground, where a selection of outstanding-quality crates can be officially "endorsed" by the Rust development team, but the endorsement can be withdrawn freely if something better comes along. C++ has something like this via Boost, and Haskell has their Haskell Platform, so it's not a new idea.


Actually, I am looking for more than this, because I think Rust already has a number of libraries that function fairly similarly to Boost. What I really want is the language committing to a library that is available without a third-party dependency, basically under a "linking exception" to be extremely close to "use however you want", supported by the people developing the language, designed alongside the language to remain ergonomic, with support that is tied to the language, … Personally, I consider this to be a very valuable feature of a programming language. Having a "pretty good" solution available "for free" is immensely valuable compared to having to go look online for every little thing, evaluate the landscape, audit it, ensure the licensing matches, consider whether it will remain supported, and try to work around its idiosyncrasies.


The Rust teams do maintain some crates outside of the standard library.


I think the proposal is specifically to bless a set of crates (that may include those developed by Rust teams, but probably also others such as regex).

Being in std implies some kind of quality standard, but it also implies no backwards incompatible changes. A set of libraries blessed by a trusted group could give you the quality standard without constraining backwards compatibility.


Regex is one of those crates.


No, they don’t. If the Rust team maintains a crate officially and they will always do so (for reasonable values of "always"), then it is part of the standard library. Otherwise, it isn’t.

It is as simple as that. When serious, important projects depend on a standard library, it is because they want those guarantees. Having some members of the Rust team work on some crates does not give anyone those guarantees.

Which is why all those crates must be in std, or the standard library must be declared to be a set of crates (including core, std, and others).


I don’t see how that follows. Rust could easily have a (small) standard library that is guaranteed to be present everywhere and any of a number of optional libraries that are maintained by the same team that builds the compiler and the standard library.

That’s what Java spent years getting (https://www.oracle.com/corporate/features/understanding-java...), and I think that’s the way to go for any language. It prevents the standard library from bloating, yet provides quality libraries that can be trusted. It also allows some implementations to be standard-conforming without providing all the bells and whistles. That can be useful on small platforms, and allows for ‘heavier’ optional libraries (for example an SVG renderer, PDF parser, etc.)


... I don’t see how that possibly follows. Not everything the project does is the standard library. The compiler, rustup, all kinds of tools.


> [crates may turn into the next NPM]

Can't cargo refer to arbitrary git repos or even just plain directories? I don't see how it's any different from C or C++ in that case.
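
It can; both are documented Cargo features. A sketch of a Cargo.toml (the crate names and URL here are placeholders):

    [dependencies]
    # Straight from a git repository, optionally pinned to a tag/rev:
    somelib = { git = "https://github.com/example/somelib", tag = "v0.3.1" }
    # Or from a plain local directory:
    locallib = { path = "../locallib" }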


It can, but then so can Ruby's Bundler and Elixir's Hex, and neither of them has the same issues as npm. I think npm's issues arise from a culture of making lots and lots of tiny dependencies, probably spawned by JS's anemic standard library.

At work, we have a number of Phoenix web apps with JS frontends. Looking at one randomly, there's... checking... 48 elixir dependencies and... checking... 911!! JS ones. I have to spend literally hours every week keeping those stupid JS dependencies up to date across our apps. The churn is enormous.

I think the fear is that as long as rust keeps its standard library small, a culture may develop to have more and more in the crate ecosystem, which proves to be a huge maintenance burden, as those are freer to update in breaking ways, and cross-depend on each other.


I think NPM's obsession with small libraries comes from the lack of a decent optimizing compiler that can remove/not include unused code, combined with a deployment scenario (the frontend) where code size matters.


It can, yes.


Except npm has an order of magnitude more users.


What part of the Rust community are we talking about here? The parts I've interacted with (mostly a local meetup and the Rust discord) have been great and don't feel like an echo chamber.

Do you just mean reddit and Hacker News posters? Because I find calling that part of the community pretty absurd; that's just lowest-common-denominator internet yelling.


What would you have happen instead?

IMHO, on an ideal platform for debate, the audience's experience of the debate should itself resemble the statistical distribution of opinions held by the debaters.

I.e., if 80% of people think X while 20% of people think Y, then comments saying Y should not make up 90% of the debate by any metric (number, word-count, above-the-fold representation, etc.) If Y did take a majority share by any metric, that'd be a misrepresentation of the debate, that is likely to cause the audience to come away with a misunderstanding of the issue.

I don't know how close we are to an "ideal platform for debate" on HN (probably not very)—but I feel like a certain degree of people reflexively downvoting contrarian opinions (no matter the subject) probably brings us closer to, not further from, that point. Contrarian opinions should not be presented as if they were orthodox opinions. They should be given some representation—I'm not advocating for censorship—but they should only be highlighted in any algorithmic way to the exact degree that they're recognized as something some appreciable group of people really believes.


I think this is wrongheaded, for the simple reason that whether an idea is contrarian or not is irrelevant to its merit as an idea. Misrepresentation is only possible through sophistry, not through volume, or even placement. Therefore, it makes sense to downvote only things that are 'sophist', in the sense that they contain no kernel of communication, but rather, just obscure or muddy - not things that are contrarian. Indeed, to do so is in itself an attempt to cloud and thwart communication, as you're manipulating placement and representation to make arguments less or more present, independent of their actual content.


This presumes that people have infinite time and motivation both to debate, and to read the ensuing infinite debate.

Rhetoric works to win debates because debates are time-limited (and the audience's attention is both time- and attention-limited), and so you can win just by e.g. preventing your opponent from ever getting a word in edge-wise, or forcing them to answer for trivial side-issues and therefore never have time to say what they really want to say; etc.

What I'm railing against here is the https://en.wikipedia.org/wiki/False_balance: the rhetorical rearrangement of non-equal evidence into a debate presented to the audience as if each side had equal support behind it.

And, make no mistake, that sort of rhetorical rearrangement is what discussion forums with comment voting are doing when the community rearranges the comments with upvotes/downvotes: they're curating which comments appear first (and thus which comments appear at all, to those without interest in reading the whole page.) They're changing the impression of aggregate weight-of-evidence someone gets by skimming the debate.

But my claim is that this is not a bad thing, when it is done in the goal of presenting the debate accurately. If one side of a debate has little evidence, but more voices, those voices should be rhetorically damped down to compensate, until the force of their combined voice matches the volume of the evidence on their side. That is the proper goal of editorial curation in e.g. news journalism: to let authoritative primary sources speak loudly, secondary sources quietly, and kooks and rumormongers not-at-all. And it should, IMHO, be the proper goal of a community's editorial curation of a comments section, as well.


I agree that rhetorical damping is necessary - I just think that it should be done on how much a given post contributes to a discussion, not by how much it contributes to the majority opinion's side of a discussion.

I think Carl Schmitt, for instance, should have been put in prison. His ideas are horrible. I still think they are worth reading.


I think this will get better as Rust becomes more mainstream. At the moment, I would guess that most developers who use rust do so by choice. By contrast, many developers who use C++, Java, JS, or even Go (see docker/kubernetes ecosystem) have little say in the matter. It's not that people don't like these languages, it's that they've been successful enough that some people, who aren't crazy about them, are writing code using them.


It seems to me some people are genuinely excited too much about it and some people are trying to show off that a 'hard' language is easy for them.


> The Rust community really needs to take care not to become an echo chamber.

This is a natural consequence of the restrictive rules imposed on the rust community.


> the restrictive rules imposed on the rust community.

Can you be specific?


Not to speak for GP, but perhaps they mean https://www.rust-lang.org/policies/code-of-conduct ?

> Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.

> Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.

I am not "anti-CoC" but I do see how the particular wording of this one could be interpreted to silence pretty much any technical discussion that someone doesn't like.


I think many in the rust community would feel the opposite. Technical conversations, especially tough ones, are far easier when you come in with some enforced civility. More people can contribute their differing technical opinions, not fewer.


I work with autistic people.

Not as a doctor, but as a software engineer: I teach programming to autistic people who work in the field and are hired because they are autistic, not despite it.

This particular company looks for them, trains them and put them to work on software projects, just like any other software house would do.

One thing they are often bad at is "enforced civility", not because they are uncivil people, but because their thought process is different from ours, and forcing them to adhere to some rigid form of presenting opinions that has no use other than enforcing a rule for its own sake bores them in the best of cases, makes them angry in others, and makes them uncomfortable in general.

You shouldn't decide how people in a community interact with the community, you should value their contributions and just that.

Civility can be enforced of course, but post-facto, after further investigation.

Doing it preemptively in the COC sounds bad to me.

But I could be wrong.

BTW I have interacted with the Rust community and have been downvoted every time I've said something along the lines of "maybe it's not the silver bullet"

So my experience says fewer people can contribute.


I don't understand why an autistic person would be incapable of expressing a technical issue in a way that isn't sexist/racist, etc, which is basically 99% of compliance with a CoC.

Downvoting has nothing to do with the CoC.


Autistics are not neurotypical

The Rust COC is so vague that anything can be flagged as "non welcome"

Nobody said racist or sexist and I don't understand where it's coming from

If they don't want sexism or racism, they should write sexism and racism

An autistic person will react differently from a neurotypical one when confronted about manners, in a way that the COC forbids

Communities should be goal-based; tribes are based on rules and enforcing them


> Nobody said racist or sexist and I don't understand where it's coming from

> We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.

"Non welcome" doesn't appear in the CoC. The most vague part, I would say, is:

> Please be kind and courteous. There’s no need to be mean or rude.

The bar is extremely low here. No one is saying "you have to say please and thank you or you're banned"; it's more like "don't personally insult people to an extreme". That could be more explicit, since some people may have a harder time interpreting it - which seems like valuable feedback.

> An autistic person will react differently from a neurotypical one when confronted about manners, in a way that the COC forbids

I am quite sure that many members of the rust community are far from neurotypical and they do fine. Autistic people are capable of technical conversations in which they are not attacking people.


> on any thread about Rust, anyone who brings up criticism, no matter how valid, gets downvoted into oblivion. Just look at what happened to the comments critical of Rust on this post.

That's just not true.


> Well vetted and tested C (or C++) is great

The review process and standards maintenance must be very strict.

Otherwise, as someone said, "A C compiler is a mechanism to turn coffee into security advisories".


There are actually lots of Rust supporters on HN (and in whatever other places you'll find people who actually know what Rust is). This extra hype has made people sensitive, for possibly two reasons: one is too much coverage (rewrite everything in Rust), and the other one is the fear of realizing that Rust is going to take over some day.

Yes, I'm a Rust moon-boy ;)


> extra hype made people more sensitive..

Sure, because you people talk about it literally everywhere.

> other one is the fear of realizing that Rust is going to take over some day.

It is not. It is a good systems language, but not so much for application domains where other languages get shit done with half the cognitive load. Just because the Rust community likes to promote Rust for everything and engages in dishonest marketing that the borrow checker is not a problem doesn't make Rust the optimal language for application domains where high guarantees and low resource usage aren't mandatory.


Or we've been around long enough to hear the same bullshit (a technical term) enough times that we are allergic to how different it is this time.

I've still to hear why Rust is better than Ada and I caught the tail end of that madness.


I know very little about Ada, but googling it leads me to understand that freeing memory in Ada is considered unsafe. If that's correct, safely freeing memory (generally in a destructor, much like C++) might be one advantage of Rust.
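
A small sketch of Rust's deterministic freeing via destructors (the type here is illustrative):

    struct Buffer {
        data: Vec<u8>,
    }

    impl Drop for Buffer {
        // Runs exactly once, automatically, when the value goes out of
        // scope; double-free and use-after-free are compile-time errors.
        fn drop(&mut self) {
            println!("releasing {} bytes", self.data.len());
        }
    }

    fn main() {
        let _b = Buffer { data: vec![0; 1024] };
    } // `_b` dropped here; its memory is freed without an explicit call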


I think a good insight about Rust and other static languages is that they move a lot of testing into the language itself. I once saw an extension of the testing pyramid that added 'static analysis', and I think that's a good way to think about it.

With more static analysis baked into the language itself, though, comes the trade-off of more restrictions and longer compile times.
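
A toy example of a bug class that moves from the test suite into the compiler - this fails to compile rather than failing at runtime:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // shared borrow of `v`
        v.push(4);         // error[E0502]: cannot borrow `v` as mutable
                           // because it is also borrowed as immutable
        println!("{}", first);
    }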


If I might ask, when was the last time you gave D a shot? Available documentation has improved recently, though I think there's a big need for improved tooling docs; in my case, it took forever for me to figure out how packages work with `dub`.

However, if you can get past the startup barrier, D is a real pleasure to work with.


> Even if Rust never gets supported by the kernel's buildsystem officially or if the code can not be upstreamed.

That is not how the (Linux) kernel likes to work…


What do you mean? I have Rust working in Kbuild; I haven't published the patches. For the Linux Plumbers Conference later this year, I've submitted a proposal to discuss the barriers to adoption within the kernel proper. Some very senior kernel devs are supportive.


You have it working, but if you're writing drivers and stuff the kernel might change out from under it and you'll have to continuously keep it up-to-date if it's not in-tree, right?


Yes; but what I'm talking about (Kbuild) is in-tree support for Rust. Bindings are regenerated when the kernel is built.

As far as "how the Linux kernel likes to work" either it's in tree, or it doesn't matter. (Not my opinion per se, IDC, but look at upstream's collective treatment of ZFS and other out of tree drivers in general; they remove APIs they know external modules are using). I do generally find out of tree drivers to be a PITA to build outside of the kernel, and sometimes find curious things in their build scripts. Updating NVIDIA drivers seems to be...not great.


> A lot of 'anti-rust' sentiment here.

"here" meaning where? Can you point to it, because there's not "a lot" on this thread.


Rust makes sense for replacing C++ user space programs. I don’t see how it’s relevant to the Linux kernel at this point.


Due to its strong and simple C bindings, there's no reason new kernel modules can't default to being written in Rust, and allow Rust to spread from the "outside in" to the heart of the kernel. I don't advocate wide-scale rewriting of the kernel, but I do think default-to-rust for new kernel components and modules is rational and improves the system.
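
For illustration, the general shape of those bindings - this is the ordinary userspace FFI mechanism, not actual kernel bindings:

    use std::os::raw::c_int;

    // Declare an existing C function; `abs` comes from libc.
    extern "C" {
        fn abs(x: c_int) -> c_int;
    }

    fn main() {
        // SAFETY: `abs` has no preconditions beyond a valid c_int.
        let v = unsafe { abs(-3) };
        assert_eq!(v, 3);
    }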


A mixed-language codebase will always be more complex than a single language one. So it's not just a matter of Rust integrating well-enough with the rest of the kernel, but it has to offer enough improvement for the cost to be worthwhile.

So to evaluate whether it's worth introducing rust, you'd ideally want to find a self-contained part of the kernel that is responsible for many CVEs and you'd want to show a substantial reduction of CVEs with the rewrite of just that component. Not impossible, but also not so easy.


The kernel is required to be compilable with very old versions of GCC. So unless all users/companies agree on lifting that restriction, there is not much to do. rustc is neither stable nor old enough.


So it's currently GCC 4.6+, with patches queued to bump that to 4.8+. Also, you can use Clang. So I'm not sure what rustc being used for some parts of the kernel has to do with that restriction?


Via LWN, looks like the GCC 4.8+ patches got merged into the kernel's 5.8 merge window recently, with a footnote suggesting this be bumped to GCC 4.9+.

https://lwn.net/Articles/822527/

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


What about a Rust to C translator? It would produce "safe" C right?


You would also need to comprehensively annotate the Linux C headers in order to provide the information that Rust expects (e.g. for each ptr instance whether it's a "raw" pointer, a shared/exclusive reference or a piece of "owned" data, whether it can be NULL etc. etc.). This might actually get accepted upstream though. Having the kernel-build itself depend on Rust could be far more problematic, given e.g. the variety of hardware that Linux must support.
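
Roughly, the distinctions in question as they show up in Rust signatures (the names here are illustrative, not real kernel APIs):

    use std::os::raw::c_char;

    extern "C" {
        // Raw pointer: may be null or dangling; Rust assumes nothing.
        fn log_msg(msg: *const c_char);
    }

    // Shared reference: guaranteed non-null and valid for the call.
    fn read_config(cfg: &[u8]) -> usize {
        cfg.len()
    }

    // Owned value: the callee takes responsibility for freeing it.
    fn consume(buf: Vec<u8>) {
        drop(buf);
    }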


I would think such a thing would be more like "compiling" rust to C. As in the C code would have no means to maintain most of the safety guarantees, rather they would be verified at compile time, and the produced C would be meant to be used unchanged. So as long as nobody ever edits the C manually, it would still presumably be safe.


That's what I really meant. Write it in Rust but then output C instead of ASM. The parent is right though. A component/driver would need to use the C headers from the Rust side.


Not only that, but Rust depends on LLVM.


Last time I looked at this comment it was flagged [dead]; I'm glad someone was able to remedy that as I was unable to upvote.

I think this is a good criticism, but might not be true forever and is certainly something to keep in mind.

As I said in the GP, Rust will not be upstreamed into the kernel- this is one of the reasons why.


If a comment is dead that you think is worthwhile, you can follow the permalink to it and vouch for it. I imagine that feature is gated behind some point threshold, but my guess is whatever that threshold is you're past it.


Oh, nice, thanks for that. :)


It doesn't have to, but yes, it currently pretty much requires LLVM.


https://lwn.net/Articles/797828/

Perhaps the way to do it is to rewrite common bits that get CVEs, just so it doesn't happen again.


The output of the translator would be correct, but messy, and someone would have to go back and clean it up. Doing that work would almost certainly introduce bugs. How many, and whether the cost outweighs the benefit, is the question.


There is a lot of work being done on a translator now [0]. However, the goal is to translate C to equivalent Rust and then allow developers to intervene to migrate to safe Rust.

[0]: https://c2rust.com

EDIT: Oops, read the parent comment backwards.


That's C to Rust, not Rust to C. (mrustc provides something like the latter, but it doesn't actually support full Rust.)


We just saw news that Firefox is reusing Chrome's regex C++ code, so from my point of view Firefox is barely pushing for Rust components, yet Rust fans expect the kernel to push harder. IMO, when Firefox and all its components/dependencies are Rustified, then I think we can ask the kernel to attempt the same.


One component choosing to use a mature implementation in another browser instead of a rewrite from scratch in rust means very little.

We can look at actual statistics instead of just one random anecdote. For instance, according to [1], there are 2 million lines of Rust code in Firefox, 6 million lines of C++, and 3 million lines of C. Very roughly speaking, that's 20% of the low-level-language lines being in Rust.

Firefox only started requiring Rust to build about 3 years ago; this is a pretty breakneck pace of replacing C/C++ with Rust.

[1] https://www.openhub.net/p/firefox/analyses/latest/languages_...


I personally would also check whether the libraries Firefox uses are Rust. I mean, browsers these days are huge: you have image, audio, video, WebGL, PDF, and a ton more features, so you need all the dependencies to also be memory safe, especially the ones that handle untrusted input.


Firefox's JS engine is in C++ even though Rust has been available for years now; they've said more than once that they have no plans to port SpiderMonkey to Rust.

The most recent information I've found is from a Spidermonkey Dev here on HN who said they might even port the Regex engine to Rust

That didn't happen either

https://news.ycombinator.com/item?id=18988024

Something tells me that there's more to this than meets the eye.


It seems unlikely to me that there is any great conspiracy at play here... they found a good solution to a problem, and they implemented it. SpiderMonkey is a highly optimized, carefully examined code base; it's not easy to replace wholesale, nor particularly low-hanging fruit to be replaced with Rust.

Mozilla is working on replacing the wasm compiler, and eventually the compiler for "Ion MIR" (an intermediate language that JS is compiled to), with Cranelift, a Rust implementation of a compiler backend. See [1] for pretty pictures of this.

It's been possible to use this backend to some extent since 2018 [2]. The best place I was able to find in the last 5 minutes to describe the current state of the integration is the doc comment at the top of this file [3].

In other words it looks like spidermonkey is not not being ported, it's just not being entirely ported yet.

[1] https://github.com/bytecodealliance/wasmtime/blob/master/cra...

[2] https://old.reddit.com/r/rust/comments/9mvnrk/in_firefox_nig...

[3] https://github.com/mozilla/gecko-dev/blob/master/js/src/wasm...

Edit: Another component of SpiderMonkey that is being rewritten in Rust is the JS parser; that project is called SmooshMonkey. You can read about it in the SpiderMonkey newsletters [4] [5] [6], and there's a nice comment describing it at the top of this reddit thread [7]

[4] https://mozilla-spidermonkey.github.io/blog/2020/01/10/newsl...

[5] https://mozilla-spidermonkey.github.io/blog/2020/03/12/newsl...

[6] https://mozilla-spidermonkey.github.io/blog/2020/05/11/newsl...

[7] https://old.reddit.com/r/rust/comments/h0ddpi/smooshmonkey_n...


If Firefox wrote a new regex module and didn't use Rust, that would contradict the poster's advice, but them using an existing mature bit of code doesn't seem to prove much.


What Firefox did makes sense, but it does not look good for the idea that you can just rewrite in Rust function by function. A regex engine seems to me like an easy thing to TDD, so if rewriting in Rust were that easy, I would expect Firefox to have already rewritten the most dangerous parts in Rust (I assume regex is used with untrusted input, but I am clueless so I could be wrong). IMO the giant army of Rust fans should put their energy into working on Firefox, or donate to have Firefox rewritten before 2030, because the story that is heavily promoted is that you can rewrite things in Rust super easily, function by function, etc., but the reality is different.


Firefox didn't keep the C++ regex implementation because rewriting pieces of code with well-defined interfaces is hard. It was for a few reasons: 1) at the time they made the decision, there was no existing Rust regex library that matched the way JavaScript regexes work; 2) they didn't want to write their own or rewrite existing code, because regex libraries are complicated beasts; 3) the C++ code is mature and well tested, so the primary benefit of a Rust implementation would be more speed (because it's easier to safely reason about shared data), and they didn't consider optimizing the regex engine a priority.


It makes sense; I am still a bit disappointed (though that is subjective). Hopefully the RIR crowd gets more rational, reads your explanation, and sees that it is not so easy.


> A regex engine seems to me a thing that is easy to TDD

Why go to all that effort when the thing you need is already available? I don't think there are that many people who argue that everything, even the stuff that works fine and doesn't have issues, should be rewritten just because Rust exists.


C code can be rewritten function-by-function, even programmatically via c2rust. C++ is a bit messier, and even its FFI interoperability story with Rust is not quite complete.


Systems programming in any language would benefit immensely from better hardware accelerated bounds checking.

While Intel/AMD have put incredible effort into hyperthreading, virtualization, predictive loading, etc, page faults have more or less stayed the same since the NX bit was introduced.

The world would be a lot safer if we had hardware features like cacheline faults, poison words, bounded MOV instructions, hardware ASLR, and auto-encrypted cachelines.

Per-process hardware encrypted memory would also mitigate spectre and friends as reading another process's memory would become meaningless.


> Systems programming in any language would benefit immensely from better hardware accelerated bounds checking.

[Mostly discussed deeper in the thread already but felt it was worth bringing up directly as a reply to this]

This is an active area of development, though there's still plenty of work to do. The Cambridge University Computer Lab have been doing research in this area in the form of CHERI: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/. This gives you hardware capabilities, which are effectively pointers with bounds given out by the OS. Want to access something? You need the appropriate capability to do so.

ARM announced MTE last year, which whilst far less capable than CHERI begins to give you something (though it's targeted for bug hunting effectively rather than providing any actual security/safety properties): https://community.arm.com/developer/ip-products/processors/b...

ARM are also building a hardware CHERI implementation, Morello: https://developer.arm.com/architectures/cpu-architecture/a-p.... There are already CHERI implementations running BSD on FPGAs; this will take it to the next level with real silicon using modern ARM processors (maybe you could run Android on it, for instance?).

I think Intel have been looking along similar lines but I'm far less in touch with what they're up to so I don't have links.


> Systems programming in any language would benefit immensely from better hardware accelerated bounds checking. ... The world would be a lot safer if we had hardware features like cacheline faults, poison words, bounded MOV instructions, hardware ASLR, and auto-encrypted cachelines.

This isn't really true. The main performance cost of these safety checks (and largely of others, such as overflow checking) is the inability to optimize, because the compiler needs to preserve partial results/states in case a fault occurs. The checks themselves are trivial.


Partial results like, say, a hardware exception firing instead of forcing the compiler to reason about it?

I want efence on steroids.


To the maximum extent, bounds checking should be elided via greater compiler knowledge of what exactly is happening. This would leave arbitrary bounds checks limited to user input, in which case the vast majority of the performance penalty goes away right? "Bounds" are a higher-order programming language concept that I suggest may not have a place in hardware.


Compilers already do this. It would still be nice to have cheaper checks; compilers are not omniscient.


Yeah, for sure.

I think I was having trouble envisioning what exactly a "cheaper" check could look like in hardware. Bounds checking is basically: read the length, subtract your index, and take a conditional branch (potentially with a hint that it would succeed).
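For reference, a sketch of what the software check looks like in Rust (the indexed access below compiles to roughly that length-compare-and-branch):

    // Safe indexing: the compiler emits a compare against the slice
    // length and a branch to a panic handler before the actual load.
    fn get(xs: &[u32], i: usize) -> u32 {
        xs[i] // panics (safely) if i >= xs.len()
    }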

To do this properly in hardware I suppose you'd need a list of memory regions that are "live" and default the rest to "dead", though how many do you support? What does updating the list look like? Page tables are pretty slow to update, and those don't change too often. Array tables would be pretty gnarly, and impose a further penalty on context switching as they'd have to be thread-local and app-local.

I wonder if this is a case akin to spinlocks. Sure, I'd love a lock that doesn't busy-wait, but there's not really a cleaner solution -- in hardware or otherwise.

Maybe I'm just not seeing something obvious, though!


You might find an almost-practical (not shipping yet, but should very soon) example helpful: https://community.arm.com/developer/ip-products/processors/b...


That's brilliant, and I retract my statement completely. Thanks for sharing!


In addition to ARM MTE, see also Cheri: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/


> The world would be a lot safer if we had hardware features like cacheline faults, poison words, bounded MOV instructions, hardware ASLR, and auto-encrypted cachelines.

Sure, the world would be a lot safer if we used microkernels, too, but the tech world has been obsessed with performance over all other characteristics for decades.


My first full time professional job (1976) involved writing assembly language for the TI 960, a process control minicomputer with a 64K data address space and a different 64K instruction space.

At first, I just thought it was odd to make the hardware more complex by having two address spaces. However, this prevented a common cause of difficult to find bugs in asm programming, and I came to appreciate that hardware could make programming safer.

Hardware architecture, programming languages, and compilers advanced rapidly, but safety always seemed to take a backseat and was left up to the programmers. I’m glad to see developments like Rust and I look forward to using it for a real project soon.


You have it on CHERI, ARM and SPARC; it was Intel that screwed up.

So make use of Solaris SPARC, iOS or Android 11 (ARM MTE is a requirement).


You mean that Intel should invest in something like MPX?


MPX is dead, and was so expensive that 100% software bounds checking was cheaper...


That was my point. :-) It's not clear that hardware bounds checking acceleration is actually a meaningful win.


Sure it is, Intel screwed up.

Solaris SPARC, iOS and now Android 11, all make use of some kind of hardware validation in memory accesses.


MTE today, Morello as the experiment for fully safe C tomorrow: https://developer.arm.com/architectures/cpu-architecture/a-p...


Any help is welcome. Microsoft is also having a go with Checked C.

The main problem is forcing developers to actually use them, I guess that is why Google has decided to make MTE a requirement for Android 11 on ARM devices.


When people talk about Rust's advantage in safety, is it just the compile-time checks about Ownership? Could not the C compiler be given the same thing, with a flag like "strict mode"?

Sorry if my question is really stupid. My experience has been with scripting languages, not systems programming. But I read the chapter in the Rust Book about Ownership, and I think I get it. In a sense, is not the compiler simply inserting function calls to free() in the right places (and warning you if it can't)? Couldn't this be added to the C compiler? In fact I'm not sure what the difference is from a linter.

The article has the phrase "C/C++ Can’t Be Fixed", so it anticipates my question. But its answer is that programmers either won't remember or won't bother to set their C to "strict mode", run the linter, or whatever, because "static analysis comes with too much overhead: It needs to be wired into the build system. . . . If it’s not on by default it won't help." But if this is the only argument, it seems weak. I agree that it is more effort, but isn't it much more effort to switch out your entire language and ecosystem?


> is it just the compile-time checks about Ownership?

Sometimes. There's a lot more to safety than memory safety, but that's the easiest to define.

Other aspects might be safety against null values, safety against poor error handling, etc. Those are much harder to define.

> Could not the C compiler be given the same thing, with a flag like "strict mode"?

Not really, no. There's a lot of research into proving C code safe, and automation of it, and it's not really feasible for general C codebases. There are tons of static analysis tools that try to get part of the way there, but they can be slow, miss things, and have many false positives.

It would require, at minimum, some seriously ugly annotations. There are papers that demonstrate such annotation-based approaches with C. One that I read ended up looking quite a lot like Rust, at which point you wonder why you wouldn't just write Rust.

Not a stupid question at all, by the way.

> Couldn't this be added to the C compiler?

Rust is able to insert those frees because it has lots of information about memory usage at compile time. C doesn't, therefore C can't insert those frees in the general case.
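As a minimal sketch of what that looks like from Rust's side (my example, not from the article):

    fn example() {
        let s = String::from("hello"); // heap allocation happens here
        println!("{}", s);
        // `s` is owned by this scope and never moved out, so the
        // compiler inserts the equivalent of free() at the brace below.
    } // drop(s) runs here automatically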

Other issues (cultural ones, that is), like "people just disable it", are fair but not the biggest issue imo. But to further emphasize that point, there are many runtime mitigation techniques (ASLR, stack cookies), and they get disabled all the time because (and this is the extra sad part) they legitimately find bugs and crash a program that otherwise may have appeared to work, and the fastest "fix" is disabling them.

Re: Effort, totally, yes. But the effort isn't really split. There are tons of people working on safer C/C++ through tools like fuzzing, static analysis, and runtime mitigations.


> you wonder why you wouldn't just write rust

Because it is another language.

If the researchers behind the borrow checker had implemented it on a safe variation of C/C++, then it would have been much easier to push for it in many companies. Something like the Python 2/3 split is better than another completely different language.

Don’t get me wrong, Rust brings improvement in other areas too, which is fine, but when we are talking about many-million-LOCs, you don’t care that much.

I am still waiting for someone to bring lifetime annotations to C and C++. Those I could use right away. Rust, not so much.


But adding rust-like lifetime annotations to a many-million-LOC code base would result in totally bricking the code with 99.99999% certainty.

I tried to port an (Earley) parser from C to Rust. It wasn't even a port, but more of a rewrite with similar function structure. The first part worked well, even though I had to resort to an arena (a special type of allocator that can guarantee lifetimes at the expense of freeing memory later than strictly necessary). Then I tried to bring in shared parse trees. It was a nightmare. It simply could not be done with the same efficiency as in C. I had to resort to using array indices instead of pointers, or use ref counting where it wasn't needed. This had quite some impact on the calling functions.

And that was a few hundred lines of plain, non-tricky C.
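For the curious, the index-based workaround looks something like this (a sketch, not my actual parser code):

    // Nodes live in one Vec owned by the tree; "pointers" are indices.
    // Sharing a subtree is just copying a usize, which sidesteps the
    // borrow checker, but also gives up its protection for these links.
    struct Tree {
        nodes: Vec<Node>,
    }

    struct Node {
        symbol: u32,
        children: Vec<usize>, // indices into Tree::nodes
    }

    impl Tree {
        fn add(&mut self, symbol: u32, children: Vec<usize>) -> usize {
            self.nodes.push(Node { symbol, children });
            self.nodes.len() - 1
        }
    }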


If your C/C++ is sane, then your code was already written with lifetimes in mind and enforcing them is not a big deal.

There will be places, no doubt, that you cannot do it, but if you can easily cover 80% of the code, that is way better than nothing and then you can work on redesigning the 20% (perhaps even rewriting it in another language since it has to be changed, for instance to Rust). Same argument as Rust uses for unsafe blocks.

There have been specialized annotations for things like mutexes for a while in several projects which help static analyzers prove things too, so it is not new.


Doesn't unique_ptr in C++ provide that?


unique_ptr allows for use-after-move, so it's not perfect.
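For comparison, the same mistake in Rust is a compile-time error rather than a runtime hazard (minimal sketch):

    fn consume(b: Box<i32>) {
        drop(b);
    }

    fn main() {
        let b = Box::new(1);
        consume(b); // ownership moves into `consume` here
        // println!("{}", b);
        // ^ uncommenting this fails to compile:
        //   error[E0382]: borrow of moved value: `b`
    }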


The issue is that C/C++ can't be fixed backwards-compatibly. You would need to add lifetime annotations: existing code wouldn't compile. You could make a Rust-style language that is closer to C, but it wouldn't be any easier to use than Rust.

And at that point, why not just use Rust? It doesn't require you to switch out the entire ecosystem; you can happily link Rust, C, and C++ together in one binary. In fact, as an end-user it's sometimes easier to use a popular C library from Rust, because someone will have wired it into Cargo (Rust's build system) and it will "just work": you don't have to mess around with working out what kind of build system the library is using and integrating it with whatever you're using.


> And at that point why not just use Rust

See my sibling comment. Using Rust implies many more changes than just moving code to an incompatible-but-close version of C and C++ that adds lifetime annotations and unsafe blocks. Compare it with the Python 2/3 split vs. rewriting all your Python 2 code to, say, Ruby.

Then there are the OOB issues too, like using another build system, a single compiler based on LLVM, etc.


Given the degree of changes required in some codebases, I think some teams would rather treat it as a rewrite job.

Even moving large enterprise codebases from python 2 to python 3 is taking so long they had to push that deadline back years. Given the distance you'd have to cross to push an older C++ codebase to something with similar guarantees as Rust, I don't know if any company would voluntarily cross that gap.


> adds lifetime annotations and unsafe blocks

Worth noting that the vast majority of lifetime annotations in Rust are “elided”, i.e. inferred by the compiler.

I’d say that a successful “safe C” compiler would accept any existing C code that never invokes undefined behaviour, without a runtime performance penalty.


> I’d say that a successful “safe C” compiler would accept any existing C code that never invokes undefined behaviour, without a runtime performance penalty.

That's unfortunately what's not possible. The C source code doesn't contain enough information to prove safety (a lot of real world C code depends on invariants that happen to be true at runtime, but aren't provably true at compile time)


Yes, it probably is impossible. On the other hand, C programmers are asked to write code which doesn't invoke undefined behaviour otherwise their code can break in unpredictable ways, so either we just don't have sufficiently good checking algorithms yet or it is impossible for them too.


What you described is just a destructor. C++ already has those. If you think that destructors are all there is to Rust then you are missing the entire point.

Rust has an affine type system [0] and this is something you either have or you don't. There is no way to retrofit it without rewriting all C code in existence.

When you look at linear types, it becomes pretty obvious how restrictive they are, but this restriction is also what allows them to make guarantees about memory safety. With linear types, every variable must be used exactly once. This means that no matter how many functions you pass a variable through, eventually you have to call the destructor to destroy the variable.

Rust doesn't have linear types because they are too restrictive. Instead it has affine types, where variables are used at most once, and if they are not used the destructor is called automatically. However, even that is too restrictive. The definition of "use" in Rust is a borrow, and the affine type system rules only apply to mutable borrows. You can have an arbitrary number of read-only borrows. The end result is a language where it's not possible, or much harder, to implement data structures that do not follow these rules, such as doubly linked lists. You will have to use completely different data structures in a variant of C that has an affine type system. This will probably result in an affine-only ecosystem that is completely disjoint from the rest of C. The Linux kernel will definitely never be rewritten in affine C.
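To make the doubly-linked-list point concrete, the usual safe workaround has to bring in shared ownership and interior mutability (a sketch):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // Each node has two neighbours referring to it, which plain
    // ownership can't express: safe code falls back on Rc for shared
    // ownership and Weak for the back-link to avoid a refcount cycle.
    struct Node<T> {
        value: T,
        next: Option<Rc<RefCell<Node<T>>>>,
        prev: Option<Weak<RefCell<Node<T>>>>,
    }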

[0] https://en.wikipedia.org/wiki/Substructural_type_system


"strict mode" is not a subset - you need stronger type information than what C provides to get Rust level checking - if you wanted to add that to C and also make it expressive enough to be usable you'd eventually end up with something that's closer to Rust than C.


This exists, and is called static analysis. There are tools that can analyze code paths and build a theoretical model of lifetimes throughout a program.

Mind you, I don't know the formal theory enough to say whether you can build a static analysis tool that is actually capable of doing what Rust's lifetimes accomplish in a provably complete manner. Maybe someone else can chime in here.

However, there are other disadvantages to this approach. Not having it in the language means everyone working on the program has to be using the exact same static analysis tools, and they have to be using it on the entire program, including libraries. Rust ensures that the lifetimes are enforced consistently, since there's only one possible engine enforcing them.

There are some very impressive code analysis tools used in the industry (with C and C++, in particular), but as far as I know, they are all commercial and fairly expensive. As far as I know, open source tools like Valgrind are not powerful enough.


Even if such a tool existed, it would be along the lines of a tool that prevents your C code from containing Turing-complete constructs so that it can be formally proven. It will complain about perfectly valid code. Are you willing to use a tool with false positives? I've seen many C programmers who are highly confident in their abilities. If someone told them to use a tool that marks their obviously correct code as "potentially wrong", they would just stop using that tool the same way they refused to use Rust.


That is exactly what is expected from developers who need to certify their code via MISRA, AUTOSAR and other safety-critical certification processes.


Rust is definitely great for systems programming, and the speed is fantastic too.

Take a look at some of the system utility tools on crates.io, such as ripgrep (like grep), bat (like cat), fd-find (like find), lolcate-rs (like locate), procs (like ps).


All of those are MBs in size, compared to KBs for their C counterparts. Something needs to budge in Rust land for small utilities to be viable.


That's kind of unfair since operating systems ship with large libraries that C can dynamically link to, but rust has no such advantage.


Rust binaries effectively statically link a lot of code via monomorphizing generics. I may well be wrong but I don't think you could get Rust binaries down to C sizes just by separating all the dynamically linkable parts.
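As an illustration of the monomorphization point (my own sketch): a generic function is duplicated in the binary once per concrete type it's used with.

    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut max = items[0];
        for &item in &items[1..] {
            if item > max {
                max = item;
            }
        }
        max
    }

    fn main() {
        // The binary ends up containing two independent machine-code
        // copies: largest::<i32> and largest::<f64>.
        println!("{}", largest(&[1, 5, 3]));
        println!("{}", largest(&[1.0, 5.0, 3.0]));
    }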


That is Rust's own fault for not making their standard library ABI stable beyond a single release (and making releases so frequent). If they kept the ABI stable for just a year and gave library authors a way to do the same, distros might choose to dynamically link the stdlib, serde, clap, and other commonly used crates.



> All of those are MBs in size, compared to KBs for their C counterparts.

What difference does that make in the real world?


Sometimes a binary optimized for size is faster than one optimized for speed. (Due to cache residency)

Embedded systems frequently have flash size limitations in the megabytes or even kilobytes.


An order of magnitude difference... my ARM boards have 16 MB SPI flash. No way I am fitting more than a single rust binary on there.


No way are you fitting a normal operating system on there.

You can certainly compile rust to fit on there without any trouble though, you just have to try... the same way you have to try to compile any language in that sort of environment really. You can get to about 30kb while still keeping the standard library [1], you can get something with a fully featured web server in a few hundred kb [2].

[1] https://github.com/johnthagen/min-sized-rust

[2] https://jamesmunns.com/blog/tinyrocket/


Rust has partial-but-improving embedded support. I don't think you'd ever run any of these cmdline tools on a 16MB embedded system.

And that's fine. You wouldn't run most C cmdline tools on an embedded system either.


Wouldn't this be a no_std situation then? I do agree with the other poster that binary sizes only matter in specific circumstances though.


> Rust excels in creating correct programs. Correctness means, roughly, that a program is checked by the compiler for unsafe operations, resulting in fewer runtime errors.

This is a fairly low bar for correctness, and not one I would really agree with :/


It's an extremely low bar, but one that hasn't been met by some popular languages.


I find that monadic error handling and null safety are as much of a game changer. They get discussed less, and are obviously not unique to Rust, but coming from Java I see what I’d guesstimate to be an order of magnitude difference in unhandled error cases.

The same logic can be applied in Java or C++ or probably even C, but it also comes down to the widespread, idiomatic approaches of the language. The Rust ecosystem is not necessarily coherent around the details of errors, but error handling is generally rigorous. The benefits of that can’t be overstated. The compiler is a huge help in getting it right in a way that Java checked exceptions never were.
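For anyone who hasn't seen it, the rigor comes from errors being ordinary values you have to do something with; a minimal sketch (the file name and types are just illustrative):

    // The compiler warns if a Result is silently dropped, and `?`
    // makes propagation a single character.
    fn read_port(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
        let text = std::fs::read_to_string(path)?; // propagate I/O errors
        Ok(text.trim().parse()?)                   // propagate parse errors
    }

    fn main() {
        match read_port("port.txt") {
            Ok(p) => println!("port {}", p),
            Err(e) => eprintln!("couldn't read port: {}", e),
        }
    }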

I’m sure that there are other languages that are showing similar wins, though. I haven’t written much Swift, but I imagine that the experience might be similar?


Calling just “memory safety” correctness is a low bar and misses all sorts of other correctnesses (type safety, progress, matching-the-spec, etc.). However, it is a fundamental property: without memory safety, no other correctness can be guaranteed.


So is it a property more important for inexperienced programmers?


You think only inexperienced programmers cause memory errors?

I am no fan of rust and actually hate its evangelism. But this is ridiculous.




Launching the missiles is memory-safe!

Deleting the production database is a thornier philosophical question, but Rust's position is that it is.


Yeah, there's a couple of issues, one of which you mentioned :) Others are the fact that "runtime errors" are not necessarily bad; the goal is to prevent exploitable runtime errors. Crashing when you detect an out-of-bounds access is a bug and a runtime error, but one that is safe from a security perspective and as such something that Rust will do readily when necessary.


The bar here is C.


Whenever I think about Rust vs C I think of Curl, where they've been working on a C program for decades that is used by an incredible amount of software, and they're still fixing memory leaks in it as recently as November 2019: https://curl.haxx.se/changes.html#7_67_0


Rust considers memory leaks safe. However, I do find it easier to avoid memory leaks in Rust than when I write C, due to its stronger scope-based dropping.


You can leak memory in Rust too.
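Right, and both of these are 100% safe Rust (leaking sits explicitly outside the safety guarantees):

    use std::{cell::RefCell, rc::Rc};

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        // Explicit leak: the allocation is never freed, no unsafe{} needed.
        std::mem::forget(Box::new([0u8; 1024]));

        // Accidental leak: a reference-counting cycle.
        let a = Rc::new(Node { next: RefCell::new(None) });
        *a.next.borrow_mut() = Some(Rc::clone(&a)); // a now points to itself
    } // a's refcount never reaches zero, so the Node is never dropped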


I think that bar is better met by some term synonymous with "this is better than C". When dealing with "correctness", I generally would like the program to do what I intended.


I agree, and sincerely, I'm a Rust programmer and "memory safety" is definitely not why I use Rust. There are lots of memory safe languages but I feel more productive and get more satisfaction writing Rust than, say, Python, Lua or C++.

In Rust I feel like, almost every time, the program works the first time it compiles successfully, even without writing tests. Probably a combination of very well designed APIs, builtin RAII, very explicit syntax, compiler strictness and clear docs. I also cannot forget to handle errors (even if I just decide to propagate or crash).

This is what "correctness" means to me, but it isn't exactly enforced by the compiler itself.


The bar is set by the worst programmer not the worst language.


The language works for the programmer. Those with a higher focus on safety (like rust) make the worst programmer better.


Combined with the large number of vulnerabilities in mature, well-designed programs implemented by skilled programmers. The mythical programmer that can use C/C++ without bugs doesn't exist.


The mythical programmer that can use Rust without bugs doesn’t exist either. I deal with higher level languages than rust and we’re capable of writing just as dangerous stuff through implementation errors.


While bugs like logic errors clearly can exist in all languages, those that do things like memory management/enforce memory safety for the programmer eliminate whole classes of bugs.

No language/tool/etc will stop all programmer mistakes, but they can definitely stop some.


The fact that Microsoft keeps saying it wants to move away from C/C++ is great IMHO.

> Microsoft c++ continues to be written and will continue to be written for a while

I interpret this as saying that they cannot instantaneously convert all their existing code to Rust, and it seems fair enough. However, it seems a little odd to start new projects in C/C++ when you are publicly saying it's the wrong thing to do.

In fact, just a little over a month ago they announced a new QUIC implementation written in C/C++ (MsQuic).

Is this to be expected since Microsoft is such a big company? Or maybe that lib is not important and is not going to be used in, say, Edge?

Moreover, isn't networking code the worst kind of code to write in C/C++ (I am thinking of heartbleed for example)? I would really love some explanation.


The more I learn about Microsoft the more this joke makes sense:

https://en.wikipedia.org/wiki/Manu_Cornet#/media/File:%22Org...


Which is why they are the main drivers of the C++ Core Guidelines, and have contributed the initial effort of a C++ lifetime analysis to clang, while making it available in all Visual Studio editions, including Community.

They are quite clear that a constrained version of C++ still gets to play in a secure world.

And while Azure Sphere still gets to be written in C (due to Linux kernel), it has a security layer (Pollonium) that takes care of bad code.

Finally, since they failed to move people into the UWP programming model, sandboxing has been coming into Win32 as well, to the point that on Windows 10X, everything is sandboxed, including Win32 apps.


What role (if any) does GCC play in Rust? I understand that Rust relies on LLVM. If the industry continues to move more and more to Rust then will GCC begin to lose relevance? Or will the GCC developers add a Rust ~~backend~~ frontend? Has there been any movement toward that anyway?

There are many gaps in my understanding of Rust with regard to LLVM and GCC.


> There are many gaps in my understanding of Rust with regard to LLVM and GCC.

The rust compiler generates some intermediate code (LLVM IR) that's fed into LLVM, which optimizes and turns it into machine code. You can check out what this looks like in the rust playground. Getting GCC to compile rust would be a matter of getting the rust compiler to generate GCC IR.

Cranelift (built with Rust) is an alternative to LLVM with its own IR. The Cranelift backend for rustc should help future projects that want to do something similar, I guess.


"will GCC begin to lose relevance"

Systems programming is not web where they change the framework and 'script flavour every six months.

Systems programs live for decades, and hence their critical infrastructure like GCC will as well. C will be with us at least 3 decades more. Longer if the Linux kernel still uses it.



Wouldn’t you need to add a RUST frontend?


ah right


I’m not sure about GCC, but a significant amount of work has gone into a cranelift based backend for rust (https://github.com/bjorn3/rustc_codegen_cranelift).


There was a Rust frontend for GCC[0]. It will need a lot of work, though.

[0]: https://gcc.gnu.org/wiki/RustFrontEnd


> then will GCC begin to lose relevance?

That has already begun. For example, all the popular browsers (Chrome and all the derivatives, FF, Safari) have moved away from GCC.


Just starting to get into Rust; I've got a nice REST boilerplate that uses sqlx (compile-time verification of SQL) and some embedded stuff on a Pi working with Rust (PWM, serial).

My question is where to find jobs with Rust. I’m really enjoying it, but my resume is heavy python/react atm.



Whilst it is true that it is Amazon’s _own_ first SDK, one already exists - Rusoto - https://github.com/rusoto/rusoto (disclaimer: I made some contributions earlier in the year to help out with the transition to being async-await-friendly).

I’d be (negatively) surprised if AWS started from scratch instead of from Rusoto. That was the approach taken for the original Go SDK, which was adopted from Coda Hale’s work.


I have used Rusoto to build a small tool to list all ECS container images of all clusters of different AWS accounts. It helped me at my company to have a quick overview of application versions deployed in different environments (1 account per environment).

I migrated it recently to async/await and the code was much easier to read.

Could have done it in Python's boto3 or any other SDK. Rust does not give a performance benefit since it is mostly waiting for AWS answers but that was an opportunity for me to just write some Rust code.


I recall many/most of the contributors to Rusoto being AWS employees.


Where are you living? Are you OK with working for a big company?

Edit: for people looking for rust work in the Bay Area, you can message me. Email is username @gmail


I’m in Austin/Houston area. But for the right opportunity I wouldn’t mind Bay Area for awhile.


You might not have to relocate, send me an email if interested.


Cloudflare uses Rust and has job openings in Austin.


[flagged]


In what sense?

Just because I don’t work there anymore doesn’t mean that I won’t recommend others give it a shot if they think they might enjoy it.


Must have missed something? I thought you recently joined Cloudflare.


I was there for a bit over a year, but left a few weeks ago.


Start a business and do your projects in Rust. Most clients just care about the outcome. That's what I do, and I have a lot of fun trying new stuff with every new project.


Those clients may have trouble taking those projects in house or bringing them to another 3rd party developer!

As an advisor on these kinds of projects, I usually advise clients to be very conservative about the tech used by 3rd party developers or agencies.


the strategy of course is to serve them as long as possible ;-)

No, seriously: you are right. However, that has never been a problem. Mostly, weird language choices also attract pretty capable developers.


Currently doing that; not a big fan of 40 hours for someone else.


of course you should try to keep IP and resell if possible. there is no proper endgame in services. still better than being a slave employee


If you use 'Safe Rust', then the safety guarantees hold true. However, the risks increase as soon as you start using unsafe{} or import a crate that also has unsafe{} code, which is why I like using cargo-geiger on my projects to see the unsafe{} uses in my code and dependencies [0]. Use unsafe{} at your own risk. [1]

[0] https://github.com/rust-secure-code/cargo-geiger

[1] https://arxiv.org/abs/2003.03296


The difference is that rust is safe by default. When doing something routine/trivial, you are typically operating with safe code.


Unsafe is not a problem in my experience. The same way a kernel might have some part written in asm (for FreeBSD it's like 1%, I believe), your Rust code will have 1% unsafe code. Might be more depending on what you are doing. However, the same way few bugs happen in the asm code of a kernel, your unsafe code will not have many bugs.


Actually, a lot of bugs happen in the assembly part of the kernel. Processors are hard ;)


Do you have a source for that claim?


A couple of examples off the top of my head: all the recent speculative execution stuff, OpenBSD's buggy SMAP implementation, some recent bugs in ARM's PAN.


You specifically responded to a claim that there were relatively few assembler bugs due to it being a relatively small body of code. So in that context, I would like you to show that the handful you've highlighted is "a lot" against the backdrop of bugs in non-assembly code.

My semi-informed (but not quantified) opinion is that you are broadly wrong and that there are not a lot of assembler bugs relative to C/C++ bugs.


OK, that's how many, 10? How many are not asm bugs? It's not many considering the total number of bugs.


Relative to what?


Relative to zero.


No, it's relative to the total amount of bugs.


In the context I was replying to, it's not. My response is to the claim that writing code in assembly has "few" bugs.


No, it's pretty clear from the full context of the comment you are replying to that it is expressing a relative measure[1]:

> Unsafe is not a problem in my experience. The same way a kernel might have some part written in asm (for freebsd it's like 1% I believe) your rust code will have 1% of Unsafe code. Might be more depending on what you are doing. However the same way few bugs happen in the asm code of a kernel, your unsafe will not have many bugs.

[1]: https://news.ycombinator.com/item?id=23510862


Right, comparatively.


Proven asm code like in TAOCP is safer than any other language. For example you know exactly what the FPU will do. Good luck with any compiler.

I know that is not what people mean with "unsafe", but it is getting tiresome.


TAOCP was written in different times, namely one where microcode was not a widespread thing. As it stands, your asm "proof" relies on having no µcode update. CPUs are no longer easy to understand like 8086s.


What is the exact aspect of "systems programming" that precludes the use of higher-level languages which are safe by default? Unless we are writing device drivers or OS kernels, why are we using these low-level languages for mission-critical applications?

Even parts of managed languages which are typically frowned upon from a "systems programming" perspective, such as GC, can be viewed as a positive when trying to iterate. GC can be treated like a backstop which helps to keep your application from exploding while you work on memory management, and then you can incrementally address heap alloc (aka latency) concerns with an application that actually runs (for a little while at least). Trying to iterate on memory pressure and related concerns with a C/C++ application is nothing short of nightmare fuel from my perspective, but I spend most of my time in higher-level languages these days.

I think it is also very hard to make a performance argument in 2020. Especially when you are comparing low level against something like the newest C#/.NET Core stack. Just take a look at the most recent TechEmpower plaintext benchmark (round 19). How many new business "systems" are being "programmed" that aren't ultimately just web servers passing HTML/JSON around, executing business logic, and reading/writing bits of data to some database?


>I think it is also very hard to make a performance argument in 2020.

I think it is very EASY to make a performance argument in 2020. In every domain, from my phone, to my desktop PC, to our huge AWS servers at work, I encounter software that is slow. We struggle with our C/C++ database at work running on top of a C/C++ operating system. We can't buy a bigger server, and it isn't fast enough to handle the load we've been getting recently due to covid. You think I should be happy to incur GC overhead in both the OS and the database? No way! That would cost us a lot of money, and require a more complex architecture (which also costs money).

On my desktop and phone I encounter applications that are annoyingly slow to start up, to compile, all kinds of things. The last thing I want is JVM or .NET runtime overhead occurring even at the level of my operating system, device drivers, etc., adding to that. CPU cache space is precious; the low level things that everything else runs upon should be as efficient as possible.


I work in embedded software for network switches and almost everything is written in C. And most of the code is pretty new.

C++ backfired because integration was ugly (the SDKs are designed for C usage) and a lot of C developers don't understand C++ well.

Go is used a bit, but again, training people to use it correctly is hard. And using the C SDK from Go is an obvious NO.

A lot of people there think that Rust would be in a similar situation to C++: nice to have, but with too many low-level dependencies (the hardware SDK) and high-level ones (developers proficient in the language) to expect it to happen. Also, calling the SDK would turn everything unsafe, so we would be like in C, but with a language that nobody understands.

So, there are technical reasons, but also human resources and economic ones.


Ultimately I think the switch would pay off for most companies in the medium to long term. There are valid reasons C is still the primary embedded development language, but many are going away. One is that Rust only uses LLVM, which doesn't support many targets. Still, there are so many low-level data races and challenges with embedded development that having a Rust environment would be a huge improvement, IMHO. The C / FreeRTOS project I’ve been using lately has random data races in the LwIP library which reset the device. It’s almost impossible for me to debug, so I work around it by adding some delays here and there. C++ wouldn’t help resolve data races, so it's just an extra cost. Also, C++ exceptions on embedded are tricky, so Rust error handling has closer semantics to C error handling.

A complete Rust system would likely help solve most of these issues and drastically reduce these bugs, increasing development speed. Unfortunately the lack of native SDK in Rust is annoying, but some groups are making progress on STM and no-lib Rust. It’d be great if STM makers adopted their sdk and made an official one!

Currently, since Rust doesn’t support my chip, I’m using Nim to generate C. It doesn’t solve all the data races noted above, but having generics and easy data structures like tables and vectors is fantastic, and I'm not second-guessing the code all the time. A deterministic GC covers my memory usage fine.


I work on network switches too, but we take the C SDK source, reverse engineer it, and write a bunch of C++ software that mostly does the same exact thing. It's already insane, I would not be surprised if some Rust components were added.


Yep, embedded is a whole other area than software that runs on a server host or desktop/laptop workstation. That shouldn't stop us though from improving safety where the restrictions of embedded development don't apply.


For some systems programming problems, the issue is real-time constraints. You need something to happen on a deadline, and adding GC prevents you from making that guarantee. Or, even if you don't have hard real-time constraints, sometimes you need very low latency in order for your controller to work properly.

Also, those high level languages require an OS. And many systems programs run on bare metal or a very minimal OS.

Edit: Also, the abstractions that higher level programming languages provide are often not really needed for low-level programming. In some ways, low-level programming is easier than application programming, since the actual programming required is often fairly straightforward and constrained by the application.


^^^ Yes. I see LOB apps and web services being (re-)written in Rust from very high-level langs at great cognitive cost but little runtime benefit. Rust by all means should be treated as a better C(++) for applications where it is necessary -- but it rarely is. And most Rust code I see is simple and context/request-bounded; basically it amounts to an overly-complicated version of creating a private heap for the duration of a request and then throwing it away, as making Rust happy with even mildly interesting data structures that are expected to be long-running is a chore.

C# (per your example) is not orders of magnitude away from C++ perf like Python is, but is within a very small constant factor, and often even faster for poorly-written C++, as GC is more efficient than calling copy c'tors where they needn't be called, for example.

And if you introduce features such as Span<> (and friends) and ref returns into your C# where you need them, you now have safe GC-free/reduced code with perf that rivals Rust in the sections of code where you need it, with full GC as a backup.


Well-written C# definitely has a performance ceiling that greatly reduces what needs C++ or Rust, but idiomatic C#, what most people actually write and what is taught and encouraged, is pretty slow (lots of LINQ, arrays of pointers to objects on the heap that are iterated over with iterators instead of directly, calling virtual functions that are not devirtualized, etc.).

One thing I like about Rust is that the idiomatic way is usually very close to the fastest way. There are exceptions of course. But you tend to get steered to efficient solutions.
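E.g. an idiomatic chain like this (a sketch) typically compiles down to the same tight loop you'd write by hand, with no allocation or virtual dispatch:

    // The whole chain is monomorphized and usually inlined flat:
    // no boxed iterators, no virtual calls.
    fn sum_even_squares(xs: &[i64]) -> i64 {
        xs.iter()
            .filter(|&&x| x % 2 == 0)
            .map(|&x| x * x)
            .sum()
    }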


Agree with all. But if 80% of your code can be performant with "idiomatic" code, 15% needing a bit more attention (such as not making use of certain abstractions, like link-to-objects, etc.), with the other 5% requiring diligent attention (Span, etc.), then you're making gradual perf gains as you wish, all within the same language, runtime, and memory pool. This is much better than a rewrite and dealing with FFI, or just as likely, full-on serialization between your old code and Rust code.

With that said, there isn't much of an excuse for non-0-cost enumerators, etc. -- given that refactoring tools switch between foreach, linq, indexing or ptr/spanning, you'd think the compiler-chain or jitter could do the same. Roslyn is a better back-end for meta-programming, but it's too bad MS denies us the benefits of surfacing meta-programming to the language or at least tool level, or we'd have tools that could easily convert linq to highly optimized code.

You could also have compiler-enforced "levels" of C# code that had restricted features/semantics, such as "alloc-free", etc. - right now the choice is just "default" or "unsafe".


I think having a mode where LOH allocations are treated as a runtime exception (i.e. with a helpful exception message pointing to the offending object type) could help to guide developers down this path really well. Something like:

"LOH allocation detected for object 'MyNamespace.MyObject'. Attempted allocation bytes: 90K (maximum 85,000 bytes)."

Then one level up from that restriction would be the totally alloc-free approach you mention which could potentially mean the GC could be put into a special idle/framework-only mode. For alloc-free, I believe you could enforce this at compile time, as opposed to run-time for the LOH condition.


Alloc-free would be nice if the CLR/CLI supported it, but I understand there are too many major architectural changes required, e.g. to allow class object instances to live on the stack and limit their lifetime to prevent an invalid object reference. Even today you can’t `stackalloc` an array unless you’re using `unsafe`.


I’m going to disagree with you that idiomatic C# is somehow unnecessarily “slow”.

The performance overhead of a virtual vs non-virtual call on the CLR is negligible - same as with C++. For proof, look at Direct3D which is performance-critical, yet Direct3D’s API is based on COM which is all about vtable-calls.

Linq itself is also very decent. I won’t argue that Linq is perfect (e.g. suboptimal list allocations when the total output length can be known in-advance), but its hardly “slow” - certainly not enough to impact any production workloads - you’d only observe a difference between hand-written loops and a Linq expression in a contrived synthetic benchmark.


> Direct3D’s API is based on COM which is all about vtable-calls.

It isn’t. It looks like COM, but it isn’t.


Did things change with DX12?

Just asking because this article is quite clear about how DirectX is all about COM: https://docs.microsoft.com/en-us/windows/win32/prog-dx-with-...

> The Microsoft Component Object Model (COM) is an object-oriented programming model used by several technologies, including the bulk of the DirectX API surface. For that reason, you (as a DirectX developer) inevitably use COM when you program DirectX.


DirectX yes, Direct3D no.

I am not sure about D3D9, but 10 and 11 and 12 are not COM.


We are talking about device drivers and OS kernels, as well as mission-critical realtime code that can't tolerate random latency injections (garbage collection). It's important to have a safe alternative to C and C++ in these contexts.


Naive question - what makes Swift less suitable than Rust for this?


Swift doesn’t make low-level control of memory and other low-level details easy.

Long-term: I feel that Swift was meant as Apple’s answer to C# and as a way to retain XCode users as Objective-C became increasingly unfashionable over the past decade and Apple lost employees who could properly maintain it. Swift won’t go away, but I suspect that Apple doesn’t want to have to pay to maintain Swift while the rest of the industry gets to use it for free and ports it to platforms that Apple won’t see any revenue from: i.e. Apple doesn’t want to be like Sun/Oracle and have other companies (Google) use their work (Java) for a competing platform (Android’s Dalvik).

So, I’m wary of adopting Swift: while Apple does currently support it on non-Darwin platforms, I fully expect Apple to lose interest in maintaining first-class support for Windows and Linux at some point, just like how OpenStep was repurposed as Mac-only when that became the rational decision for them.

Maintaining a thriving developer ecosystem and platform requires a degree of trust, transparency and a history of commitment to a product, and Apple (as a company) is the antithesis of transparency.


Apple does write software for other platforms [1][2]. If they continue their shift to services I only expect that to increase.

However I think you are correct in that they won't want to put much effort into it. Their support for developers is already somewhat half-hearted if you compare it to Microsoft. Using XCode over the last few months has made me appreciate how good Visual Studio is. Apple's developer documentation has me missing Microsoft's.

[1] https://play.google.com/store/apps/developer?id=Apple+Inc.&h... [2] https://support.apple.com/en-gb/HT210384


Isn't Swift limited to just a few platforms?

It seems that it's not available both for the largest desktop platform, Windows, and for the largest non-desktop platform, Linux on ARM (https://swift.org/blog/swift-linux-port/ says "Currently x86_64 is the only supported architecture on Linux.") i.e. all the IoT devices which need code in a "systems language" and Android phones.


Swift doesn’t have sufficient control over memory (where things get allocated and using or avoiding the reference counted garbage collector).


There do exist real-time GCs, though. And there is no fundamental reason OSs couldn't be written in high-level, managed memory languages. Some of the oldest OSs were.


> GC can be treated like a backstop which helps to keep your application from exploding while you work on memory management, and then you can incrementally address heap alloc (aka latency) concerns with an application that actually runs (for a little while at least).

This is not a good approach in general. GC is only really needed in applications when working with data that's graph-like and potentially involves cycles of object references. A software project should be properly designed at the outset so that GC can be used in a "pluggable" way where it's strictly needed. Rust will probably enable this kind of use in the future, but other languages such as Go can already be used to comparable effect, seeing as they're generally limited to smaller pieces of code ("microservices") running in their own separate address space.


There are already a couple of GC libraries in Rust. I wouldn't say Go is strictly "pluggable GC" since... well, you can't get rid of the GC. You can try to keep it from doing meaningful work, but that's quite different from "RAII-managed memory with a fallback to a GC library that targets my particular use-case".

Fully agree with you that GC should be abstracted away and used for where it shines: cycles and graphs!


Is there any pluggable GC library in Rust that is both correct and stable?


The only one I've heard of is Boehm GC. Is it not correct or stable? I do not have any experience or special knowledge of it myself.


Boehm is conservative, unfortunately.


There was some consideration for a formal library based rust GC back in 2018 such that one could write GC<CyclicDataType> and get the benefits of a GC such as

- Not leaking memory
- Not invoking unsafe
- Not angering the rust compiler

None of those efforts stabilized to my knowledge, and the libraries developed for this task all have experimental/research tags associated to them.

For better or worse, all the guides for using Rust strongly encourage avoidance of primitive cyclic structures such as doubly linked lists, graphs, and other structures, in favor of statically allocated fixed-size variants. Like golang's preference for map/list as the only primitive structures, this is a constraint that one can get used to for most problems. But it doesn't feel great that there are basic algorithms and data structures that can't be idiomatically coded in the language.


There’s a new GC library released the other day https://blog.typingtheory.com/shredder-garbage-collection-as...

Still extremely new, of course, but has promise!


You can already use reference counting when needed. Pretty close to what you are envisioning.
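Right; a minimal sketch of what that looks like:

    use std::rc::Rc;

    fn main() {
        let shared = Rc::new(vec![1, 2, 3]);
        let other = Rc::clone(&shared); // bumps a refcount, no deep copy
        assert_eq!(Rc::strong_count(&shared), 2);
        drop(other);
        // The Vec is freed when the last Rc goes out of scope.
    }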


Interoperability with other languages can be one reason, for example writing a library that can be reused in different languages. The sad truth is that exposing a C ABI is still the most widely adopted approach for that. And that is quite hard with many higher level languages, like Java/C#/Python/JavaScript etc.


> What is the exact aspect of "systems programming" that precludes the use of higher-level languages which are safe by default? Unless we are writing device drivers or OS kernels, why are we using these low-level languages for mission-critical applications?

IMHO "systems" here means running on an OS, not network systems or internet apps.

So none, but you want to write libraries that are easily linked to all manner of 3rd party languages and can squeeze out the last drop of OS performance. Your language with automatic garbage collection cannot do that. If all you end up doing is wrapping stuff with C or C++ headers, then better to use C or C++ in the first place.


The biggest problem is that nobody has actually made a programming language with a GC that is worth using for system-level programming. In theory nothing prevents the existence of such a language, but it simply doesn't exist.

For example, stop-the-world pauses are the primary reason why languages with a GC are unsuitable for systems programming. You can write one heavily optimized critical section that runs in real time, and then be interrupted by some unrelated, unoptimized, low-priority code that allocated memory and triggered a GC stop-the-world pause. Basically, average performance in a GC language is dictated by the slowest-performing section of the code base. A trivial model that avoids this problem would be a language where each thread has its own garbage collector; if a slow code path triggers a collection cycle, it will not interrupt other threads.

The second problem is that using a library written in a managed language requires initialization of the runtime. One could resolve this by switching away from C FFI to shared-memory IPC, with a separate process running in the background.


It really sounds like what you are describing here is Nim. In fact, Nim even does per-thread heaps. It also implements a soft real-time garbage collector which for most use cases avoids GC pauses.

So I would say there is definitely at least one GC language that is worth using for systems level programming.


> I think it is also very hard to make a performance argument in 2020. Especially when you are comparing low level against something like the newest C#/.NET Core stack.

I agreed until this. I don't consider anything not AOT-compiled with an optimizing compiler to be fast.

JITs may work well for numeric benchmarks with one tight loop, but there is extra memory and energy overhead.


I heard the Rust API changes between versions such that it’s a bit of a pain upgrading systems. Is this true?

Also, I’ve read traits can be used as invariants for a pseudo-dependent type system. Has anyone had experience with this?

I’m considering learning it instead of C++ for some systems work.


> I heard the Rust API changes between versions such that it’s a bit of a pain upgrading systems. Is this true?

No, this isn't true. https://blog.rust-lang.org/2018/07/27/what-is-rust-2018.html... was the last change, and it was an optional and automated migration.


For your first point, you might be thinking of ABI and not API?

Rust doesn’t have a stable ABI, but the API is non-breaking as long as you aren’t on nightly AND opting into unstable features.

If you stay on stable Rust, you’ll not really have breaking changes between versions.

Regarding ABI, that means that if you compile Rust into a library with one version, you can’t reuse that compiled library with a newer version of the compiler, unless you go through a stable ABI like exposing a C interface.

However, if you recompile your code, it’ll work. You just need to keep everything compiled with the same compiler version.
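For completeness, going through a C interface looks like this (a sketch; the crate type and names are illustrative):

    // In Cargo.toml: crate-type = ["cdylib"] produces a C-compatible
    // shared library.

    // Unmangled name + C calling convention, so any language with C
    // FFI (including Rust built by a different compiler version) can
    // link against it.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }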


If your systems work doesn't require stable compilers or rely too much on external C++ APIs, then you will be mostly fine.


It’s mostly to trace process performance, network load, stuff like that.


There is no reason to assume it is the "best chance". There are a lot of other chances too, some around for more than two decades (e.g. Ada/SPARK). Even C++ itself has not yet exhausted its possibilities. And just as one can write secure code in C++, it will turn out in time that even with Rust not everything is as secure as the propaganda would have you believe.

I would be interested to know if this is really the official opinion of Microsoft, or if someone just wants to make a name for himself.


Having a minimal runtime is, I believe, a requirement to make any serious headway into a lot of C/C++ use cases. The places where a minimal runtime is not important already saw big shifts to garbage-collected languages ~20 years ago.

You mentioned Ada, which does have a minimal runtime. It might be interesting if the community and ecosystem made a strong push into general-purpose programming, like rust has. I would give it another try.

Rust seems to handle heap allocations more fluidly (or at least closer to what I expect), but perhaps that wouldn't be a deal-breaker if I knew more about progamming in Ada. Rust also has better C integration -- it can match C structure layouts and operate on them seamlessly with rust semantics. But basically, it seems plausible that Ada could work with the right ecosystem around it.

Regardless, Ada has had decades to make the leap into general-purpose programming, and it hasn't done so yet. Rust is viable today for a lot of important use cases, and is not far off for a lot of others.


I don’t understand your argument. Ada, C, C++, D, Fortran... and other old languages can be used right away for general-purpose applications.

Swift, Zig, Rust, Go, etc. are popular because there has been a resurgence of new languages in recent years and a lot of new, young developers are helping develop libraries for them.

It is simply not as fashionable to program in old languages so some new developers don’t.


The article's thesis: "Microsoft has deemed C++ no longer acceptable...needs to move...best choice on the market today is rust".

If the goal is moving away from C/C++ to something else, then C and C++ are out of the running. You can claim that they should still be considered, but they've been around a long time and people have tried very hard to make them safer. There's been some success, but there's not a lot of evidence that real C/C++ codebases will become dramatically safer in the next 5 years. It looks like some people are looking pretty hard at alternatives.

Ada and fortran are both older than C++, so it is not clear that there ever was or ever will be a lot of net movement from C++ to either of those unless something exciting happens. It would be cool if Ada did get more appealing for C++ migrations, but I just don't see it happening.

That leaves D and Rust. Apparently a lot of people think rust has the mix of excitement, community, and technology to make it happen. We don't know for sure, but it seems like a reasonable bet to make if you are trying to dislodge C++ due to frustrations about memory safety.


I think the fact that Rust is very popular in the polls but still hardly ever used rather indicates that people primarily want to escape their current dissatisfaction, but ultimately aren't serious about switching to another technology, whether Ada or the other options already available. Besides, C++ is also evolving, and formal verification has been possible for many years. There are also proven ways to avoid memory problems. Rust would have to prove itself first. Maybe it solves one problem better, but ten others worse.


The thesis of the article was "best chance". So if not rust, then what? C++ and Ada have lots of great possibilities, but those possibilities aren't making it into practice (for whatever reason) despite a lot of time and effort.

Apparently, the author thinks that rust might make that leap sooner than Ada/C++. It seems like a reasonable guess, and with some backing by major players, it might well happen.

Sure, there's a lot of uncertainty while we are waiting to see the results from a diverse set of large projects. I have used rust enough to be annoyed by it, but none of the annoyances seemed like a major blocker to rust's future.


> but those possibilities aren't making it into practice (for whatever reason) despite a lot of time and effort.

What do you mean? Ada is used routinely in some industries.


By "possibilities", I mean the possibility as an alternative to (or major safety improvement on) C++, which is what the article is talking about: "Microsoft has deemed C++ no longer acceptable...needs to move...best choice on the market today is Rust".

Ada is a possibility, but it's just not there and does not show signs of being there soon. There's probably not a net movement from C++ to Ada today, and there is no reason to expect differently over the next 5 years.

Rust is not a sure bet either. Lack of major real systems is a big question mark. But basically the article's thesis makes sense to me.


> I think the fact that Rust is very popular in the polls, but still hardly ever used

In what sort of polls is Rust's popularity overrepresented (when compared to industry usage)? A lot of people on Internet forums like Rust, but a lot of people on Internet forums like a lot of languages, especially new ones.

And what is your definition of "hardly ever used"? I can think of plenty of programs I use frequently that make use of Rust (e.g. Firefox). Compare the use of Rust to, for example, Racket or Haskell or Julia (all fine languages). I've definitely interacted with programs written in Rust far more than any of those three languages. So at least Rust is seeing some niche usage.

Not trying to come to Rust's defense, just trying to understand this point.



"Hardly ever used" sounds like it might be the result of your personal filter bubble?


see https://insights.stackoverflow.com/survey/2019

Only a small fraction of "Rust fans" are actually using it.


I wonder what percentage of folks (not exclusive to Rust lovers) are stuck with a language at work or in major side projects because of momentum, existing code bases, etc.


I don't think it's just a filter bubble. Look at any programming jobs site. Count how many listings you see for Rust versus any language you'd consider commonly used.


We were not specifically talking about present availability of jobs. I am personally aware of many projects being done in Rust and from my experience, it seems to be growing still.


It might also indicate that a lot of the projects that Rust would be a great fit for tend to live for a long, long, time. That slows down market penetration, regardless of how much better it is.


I wonder if maybe Ada/Spark could achieve a resurgence with a Reason-style syntax modernization. A lot of people are repelled by the word-heaviness of Algol/Pascal-style BEGIN/END.


Momentum counts, and Ada/Spark have little mindshare right now. If next to nobody ever adopted them in the last two decades unless mandated by requirements, what chance do they have for the near future? For whatever reason, Rust is surging in popularity and is currently the most promising bet for memory safety to become commonplace in the system programming circles.


Rust is the new utopia; at the moment there is only the (well-founded) assumption that you write more secure software with it. Only time will tell whether this is actually the case, and whether the same number of security problems will show up in Rust applications in ten years' time. But as is well known, hope is nourished by promises of salvation.


The problem with this attitude is:

* The claims aren't baseless. We've seen for decades already that memory safe languages have fewer memory safety issues than languages that are not memory safe.

* Asking for hard proof is silly in software. There's tons of research into programming languages and their impact on software projects, across a number of areas, and it's all fairly useless - it's extremely expensive to do right, and impossible to objectively measure.

Common sense isn't propaganda.


> that memory safe languages have fewer memory safety issues than languages

That's what I would assume ;-)

At the end of the day it's also a question of performance. For embedded, realtime, or mobile core technologies there will always be a need to take shortcuts.

> Asking for hard proof is silly in software.

Not only in software. That's what the scientific method is for. That needs time. Currently there is a debate whether the switch from Ada to C++ for the F-35 is the cause of all the observed problems and budget overruns. There is still not enough data to know. But there was a decade-long track record for Ada in avionics. And C++ also had to establish itself for twenty years before anyone dared to develop an F-35 with it.


> Asking for hard proof is silly in software.

You can however prove that the language solves a particular attack ;)


Do you have a reference where this has been done with Ada/Spark?


There's a second story to Rust's appeal: not just systems people hoping to gain safety rails previously only enjoyed by high-level programmers, but also applications programmers hoping to gain some low-level capability without losing safety rails. The former is far more important, but the latter is driving much of the public love for Rust (there simply aren't that many systems programmers out there).


People seem to want a language that gives you similar memory handling to C++, which Ada doesn't really do. In any case, developers don't seem to be too keen on adopting Ada.

> And just as one can write secure code in C++, it will turn out in time that even with Rust not everything is as secure as propaganda would have you believe.

I think the point is more that Rust is memory safe by default, and you have to opt out of it. In C++ on the other hand, memory safety basically falls to the developer, with plenty of crazy pitfalls to watch out for in the language. It's not that it can't be done, it's just harder to do it right. Rust makes the easy, default thing safe, and the unsafe thing requires extra effort.
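
A minimal sketch of that default, using nothing beyond the standard library:

    fn main() {
        let v = vec![10, 20, 30];

        // The default: bounds-checked, memory-safe indexing.
        let first = v[0];

        // Opting out of the check requires an explicit unsafe block.
        let second = unsafe { *v.get_unchecked(1) };

        println!("{} {}", first, second);
    }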


> In any case, developers don't seem to be too keen on adopting Ada.

Yes, unfortunately; the trend seems to be going in the opposite direction. But anyway: if I really wanted to switch to something new and spend the effort to invest in new tools and education, I would rather choose a technology with a decently long track record demonstrating that it actually fixes the problems that made me leave my original technology. Rust will take one or two decades to build that; Ada/Spark has already been through it.


If everyone adopted this mindset, there'd be no new programming languages because no one would learn them until they'd been around for 20 years. Sometimes it makes sense for companies to take a risk on a new technology.


If it's your company and your money, then of course you can decide yourself whether you want to take that risk; most likely not immediately in a multi-million dollar project though.


To be fair, rust is a good systems programming language, for scenarios where correctness is important and runtime footprint must be small. I have no problems with it promoted as a good systems programming language. It seems much more modern than Ada or something and may lend itself better to formal verification.

What I hate is the fanboys' propaganda that Rust is great for every purpose, CRUD stuff included, as if using the borrow checker were a moral responsibility. There is a lot of dishonest marketing suggesting that Rust has no cognitive overhead compared to other languages. My impression is that, in the best case, this is selection bias among the people who successfully learn and use Rust, or in the worst case, people trying to show off that they learned a 'hard' language and that it is 'easy' for them.

Secondly, the Rust community is very political and polarising, which many people don't like. It can almost seem full of people more interested in politics than engineering, although they are a vocal minority.


IMO Rust's cognitive cost is paid upfront. Other popular languages shove that cognitive cost as technical debt into the future.

An example from a while back: needing to rewrite your whole original Ruby code base in Scala. Or having to write your own PHP compiler.


Can you elaborate on the "propaganda" comment?


Rust users have a reputation for being eager to promote their language. There's a running joke of a "Rust Evangelism Strike Force" that roams the internet looking for places to "Rewrite it in Rust" and this is, in my experience, at least partially true.


The rust evangelism strike force seems to be matched only by the rust criticism strike force - the rub of it is that once these things get started they seem to be pretty much self-sustaining, with every Rust discussion immediately attracting two large groups, each of which are only there to argue with the other. Overall it makes everyone look bad, but certainly has the impact of making it more difficult to find reasoned discussion around Rust's advantages and shortcomings.

There is a similar phenomenon around Microsoft, with nearly every news item about them attracting two such camps, so this news item happens to find itself at a rather unhappy intersection!

For what it's worth, as someone who's working on a first project in Rust right now, the actual Rust documentation and direct Rust community seem to be friendly and helpful. It's only in more 'neutral' discussion forums like here where most things about Rust devolve into trench warfare. I also feel that, subjectively, the core Rust communities (e.g. the rust-lang forums) are pretty honest about identifying places where Rust or its tooling currently fall short (plenty of "I don't know that there's a good way to do this right now.").


They attract people with promises based on unwarranted denigration of other technologies. They should present appropriate, independent studies instead of just claiming things. In the article itself you can read that even Microsoft, with its billions of lines of C++ code, will certainly never switch to Rust. And for C++23, something similar is planned to what is not even fully thought through in Rust today.


> And in C++ 23 something similar to what is not even thought through in Rust today is also planned.

Source? The closest thing in C++ to what Rust offers today is the C++ Core Guidelines https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines but these are best-effort in nature and do not have a goal of comprehensively ensuring soundness or memory-safety.

And I'm not sure why you're implying that the Rust featureset "isn't even thought through" when the Rust Belt project has been doing a lot of work on modeling it in a principled and clearly-understandable way, and this has already contributed usefully to the design of newer Rust features.


There are a lot of mechanisms in C++ to avoid the issues Rust wants to solve with the borrow checker altogether. And there are proposals going in the same direction as the borrow checker. But in the end, human error can somehow circumvent all these security mechanisms, no matter how much hoopla surrounds them today.



As mentioned by your parent, matching Rust’s abilities here is a non-goal for this project. Still very glad to see it!


> And in C++ 23 something similar to what is not even thought through in Rust today is also planned.

I'll believe it when I see it. The C++ committee consistently fails to deliver on safety features.



[flagged]


That's my real name, sorry if it's latin sounding. If you follow the developments of the standard commitee, you know what I'm talking about.


> If you follow the developments of the standard commitee, you know what I'm talking about.

I don't, or at least not closely enough apparently; can you tell the rest of us what you are talking about?


Here is an example from a leading exponent of the committee, ironically also from Microsoft: https://github.com/isocpp/CppCoreGuidelines/blob/master/docs...


Ah, well, if it's your real name, my rule absolutely does not apply - apologies for making assumptions. Your response is quite appropriate.


> but I just have low tolerance for people with Latin-sounding screen names

Did someone scare you as a child?


Since it's the chap(??)'s real name I am going to be coy myself and refrain from further comment. I was wrong to comment that way, and I'm leaving the post up only to remind myself not to be a dick.


Noted. Very gracious.

FWIW, Rochus is a quite common name in Süd Tirol on the Italian-Austrian border. Well, I've met a few.


Don't worry, I can take a lot ;-)


Are there good examples of people replacing C++ code with Ada and Spark?


Unfortunately not that many; I also don't believe that all the excited folks will really switch to Rust. Maybe this one is interesting for you: https://blogs.nvidia.com/blog/2019/02/05/adacore-secure-auto...


Ada doesn't have a borrow checker.


Why should it need one?


To support freeing heap allocations in a memory-safe way without relying on a GC.
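
A minimal sketch of what that buys you, using only the standard library:

    fn main() {
        let s = String::from("hello"); // heap allocation, owned by `s`

        drop(s); // freed deterministically here; no GC involved

        // println!("{}", s); // uncommenting this is a compile-time
        //                    // error: use of moved value `s`
    }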




Looks like that is effectively a borrow checker. So, the reason to need one still seems valid.


You asked for a BC in Ada, but this is SPARK, which we use for formal verification of Ada code, so there is no need to also have it in Ada. Besides, you can do much more in SPARK than just borrow checking, and even Ada has much more powerful type features than most other languages, which in very many cases makes manual memory management or pointer handling unnecessary (e.g. you can declare many more things by value on the stack, even variable-length data structures).


> You asked for a BC in Ada, but this is SPARK, which we use for formal verification of Ada code, so there is no need to also have it in Ada;

Sorry, I didn't. I was just pointing out why you'd want a borrow checker. I think you might be referring to another poster.


Oops, my bad.


Is Rust a good choice for applications programming? Does it support GUIs (dialogs, etc.)?



I am finding Rust to be a most excellent language for implementing GUI. There are several active projects working on this (linked in sibling comments), and I think progress is encouraging.


I think it's a language that lets you write your own GUI toolkit. All the existing GUI solutions are kinda bad.


https://areweguiyet.com/

TL;DR it has some support; however most libraries are just C bindings, or else they are very domain-specific.

You can, but it is not as smooth as using other languages.


If they want to use Rust, and have more faith in it than their own Verona[1], then they should discontinue Verona dev and let those developers go back to Pony[2]. I imagine having their core team working on Verona full-time is considerably delaying Pony development.

[1] - https://github.com/microsoft/verona

[2] - https://github.com/ponylang/ponyc

(edit: formatting)


I would go as far as saying it's Microsoft's best chance at safe systems programming, but no further.

I love Rust, but I'd refrain from such sweeping statements until other options evolve and get the opportunity to be time tested.


Has anyone worked on Rust for microcontrollers? STM32 and Arduino?



Surprised this is coming from the company that helped build F* (https://fstar-lang.org/)


The second danger to Rust would be MS pulling Embrace, Extend, Extinguish with it. Don't think so? What are they on now, DX12? Still trying to own the graphics API.


I think 'best chance' at safe programming is the advances in chip design. I expect PMMU to become more sophisticated and solve more of the memory safety problems in hardware than in software. Then, the programming model will evolve and safety features will become more language agnostic and broadly available. Then, I fully expect compiler+runtime+os to evolve to take advantage of these hardware features for software written in any language.


I'm a big fan of Rust; the only issue I have is that the compile time is painfully slow. Hopefully it can be improved in the future.


A lot of tooling is built around C/C++. In particular, Microsoft binaries are now almost completely built on the Microsoft Visual C++ compiler which produces MSVC binaries, whereas Rust relies on LLVM.

Maybe providing or funding a different back-end would be a worthwhile endeavor for Microsoft.


Great. Now rewrite all the slow C# apps in Windows. :)


Does anyone have a link to a good Rust vs. D comparison?


I'm disappointed they didn't double down on what I feel is the best general-purpose programming language, C# (and the related F#).

It has a great toolset, generates good code, and with a good microkernel architecture (which Windows 10 is, mostly) 90% of "systems" programming would be fine in a garbage-collected language.


It's a shame it's such an ugly language then


If only it had dynamic linking...

I know, I know, efficient, separate compilation and parametric polymorphism are hard to get together, but a systems language should not follow the same compilation model as, e.g., Haskell.


You can use dynamic linking, it's just restricted to the C ABI. But this has its benefits in that if your dylibs are written properly, they'll be usable from C and C-compatible languages as well. "Wrapping" code can be statically linked and provide higher-level features.
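
A hedged sketch of setting that up: the crate opts into the C-compatible library type in its manifest, and then exports functions with #[no_mangle] and extern "C" as in any C-ABI library.

    # Cargo.toml
    [lib]
    crate-type = ["cdylib"]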


When is Microsoft going to release first-class Rust support in Visual Studio and Windows, including debugger and official Rust bindings for their most commonly used APIs like Win32, Direct3D, etc.?


I don't think this is exactly what you want, but you might be interested in it:

https://github.com/microsoft/winrt-rs


Not really, but thanks! Hopefully they will start adding more bindings...


VS Code is the best Rust editor.


Yes, but an editor is just one of the pieces.


You shouldn't trust someone who misleads with: "Now, 70% of the CVEs originating at Microsoft are memory safety issues, Levick said. “There is no real trend it’s just staying the exact same,” he said. “Despite massive efforts on our part to fix this issue it still seems to be a common thing.”"

There is a trend, and the trend has been a MASSIVE reduction in memory errors. The percentage of security issues that are memory errors says nothing of the totals, which is what matters.

Microsoft could go through 2021 with only one CVE and we'd see articles like this saying "100% of security vulnerabilities are caused by memory errors, we must switch to Rust!"


Here is the data from Microsoft Security Response Center on this [1]. Slide 5 shows twice as many CVEs as 4-5 years prior, and three times as many as 6-8 years prior. Further, slide 10 shows 70% of CVEs are memory safety issues, and that ratio has been constant since 2006. It follows that the absolute number of memory-safety CVEs is double what it was 4-5 years prior and triple what it was 6-8 years prior, which is inconsistent with a massive reduction.

The magnitudes here are in the many hundreds of CVEs per year from Microsoft (and growing), not "one CVE in 2021". 70% there is not a negligible number.

[1] https://github.com/microsoft/MSRC-Security-Research/blob/mas...


Now divide the number of CVEs in each year by the LoC maintained. You're again looking at completely the wrong number. Line for line, C/C++ written today at MSFT/Goog has fewer memory errors than 10 years ago, and even less exploitation of memory errors. Anyone who lived through the rise and fall of Internet Explorer intuitively knows this.


Dividing by the LoC is a mistaken way of looking at this data. Regardless of the amount of code in a browser or in Windows, an attacker may only need ONE exploitable bug to cause mischief. If the amount of code in Windows grows by a factor X from year to year, the CVEs per LoC better be shrinking by at least factor X (this is where Rust comes in) or else the system is getting less secure. Thus the absolute number is the relevant metric, and indeed is the number reported by Microsoft Security Response Center.


> the CVEs per LoC better be shrinking by at least factor X (this is where Rust comes in) or else the system is getting less secure

Do you really think windows 95 was more secure than win 10? Or that IE6 was more secure than the latest IE? The newer versions are way more secure, it's not even close. Your data is giving you incorrect conclusions, because you're combining and cutting the data in ways that don't make sense.


> There is a trend, and the trend has been a MASSIVE reduction in memory errors. The percentage of security issues that are memory errors says nothing of the totals, which is what matters.

Except that CVEs seem to be increasing over time across the industry, not falling. There is no overall reduction in the numbers.


Indeed, software quality is in decline across the industry.

I think this is more of a "make an idiot-proof system and the world will make a better idiot" type of situation.


How about designing the web around accepting arbitrary code from remote computers and executing it in the browser VM?


I agree. While Rust is clearly a step forward in programming languages (and we need a step forward), modern tools built to deal with the issues common in C and C++ like coverage-guided fuzzers, ASan, Valgrind, etc. have made huge improvements in finding memory safety vulnerabilities and other issues. Mitigations have also improved, making it more difficult to exploit memory safety issues.

While I think that unmanaged (i.e. no runtime or interpreter) memory-safe programming languages are a good idea, they're not magic and we shouldn't ignore the other major security tooling improvements recently.


I should note that fuzzers/address sanitizer/Valgrind are very good tools, but they are not perfect. Projects that use them heavily still see issues due to memory-safety bugs. (And yes, memory-safe programming languages are not a magic bullet to fix all bugs; they just help with memory safety…)


> There is a trend, and the trend has been a MASSIVE reduction in memory errors.

Citation?


[flagged]


Because Microsoft (and Google, who shares this opinion on memory safety in c++) are the groups that develop asan and similar tools, and who have some of the best practices around code review, testing, and CI.

Yet they still say Rust is better in the long run. Why do you think that is?


Valgrind was not developed by Google and probably accounts for the main reduction. Asan has seen a way more recent takeup in OSS.

Microsoft and Google are not monoliths, and this particular submission is from a "cloud developer advocate", so I attach exactly zero importance to it as far as C/C++ are concerned.


> , so I attach exactly zero importance to it as far as C/C++ are concerned.

He's repeating statistics that members of the C++ committee will agree with, so I'm not sure what's controversial.

> Valgrind was not developed by Google and probably accounts for the main reduction. Asan has seen a way more recent takeup in OSS.

You miss the point. Despite these things, and despite aggressive compiler flags and everything else, the majority of bugs are memory safety issues, whether you look at Windows, Chrome, or the Linux kernel itself (KASAN). That seems fairly conclusive.

And yet here you are arguing, what exactly? That the person making the statement isn't technical enough for you, so it's all lies?

As a bit of an aside,

> Valgrind was not developed by Google and probably accounts for the main reduction

I never said it was. However, one of the most active maintainers works at Mozilla, whose opinion on the safety of C++ is also probably in line with Google's and Microsoft's, given their relationship with Rust.


I think this has been discussed here each time someone cites that infamous 70% statistic in a Rust thread:

This is about CVEs. CVEs are about exploitable vulnerabilities, and most of useful software in that area is in C/C++.

In the OSS project I'm familiar with, most critical issues are not memory safety issues. Most are logic bugs.

I'm not familiar with the safety testing in Windows, which for a start does not support Valgrind or the sanitizers.

Neither am I familiar with the feature oriented culture of Chrome.

If anything, I'd be interested in the numbers of the internal ad-critical C++ code in Google or a HFT bank.


> This is about CVEs. CVEs are about exploitable vulnerabilities, and most of useful software in that area is in C/C++.

I'm not sure what you're getting at here. The claim isn't that 70% of CVEs are memory vulns, but that 70% of CVEs in C/C++ are memory vulns. So how much or how little C/C++ is used is irrelevant.

> If anything, I'd be interested in the numbers of the internal ad-critical C++ code in Google or a HFT bank.

Do you think that Google wouldn't be pushing as hard as they are for improvements in this space if they thought things were fine and dandy?


> the trend has been a MASSIVE reduction in memory errors

What is the evidence for this? I'm not familiar.


PL is still a fast moving field, so claims that any language is a “best chance” for something are likely to have a short shelf life.


It's sort of ironic that today I tried to install Rust through rustup-init.exe, on Windows, and it didn't work because of its dependency on Microsoft Visual C++ Build Tools 2019. Rustup keeps saying "Install the C++ build tools before proceeding", though it did install from the rustup provided link.

Developer installs continue to be heavily invested in dangling pointers (called paths) and other unsafe practices, C/C++ style. I find that baffling.


The context here is that this is the only way to get link.exe; if there were any easier way to do it, we would!


I think I found the reason: after installing 76MB of Build Tools (the default suggested by the MS installer) I have a "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools" folder.. but no link.exe in it.

So I installed the full package, 4.6GB and now it works.

I guess Microsoft isn't being helpful, despite Rust being their best chance ;)


Why do you need link.exe? Why don’t you use LLVM’s linker?


It’s sort of definitional, the MSVC target uses the MSVC linker by default. Some platforms, like wasm, do use lld by default, and you can override the linker if you choose.

MSVC is the default because it’s the platform default, just as the Linux targets default to glibc rather than MUSL, for example.
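
For reference, a rough sketch of overriding the linker per target (the target triple and linker value here are just illustrative):

    # .cargo/config.toml
    [target.x86_64-pc-windows-msvc]
    linker = "rust-lld"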


One can argue that using the default linker is a good choice, and that is a good argument, but I don’t see how that is definitional.

Btw, wasn’t Chrome and others switching soon to LLD?


I mean “The MSVC target uses the MSVC linker, so that’s why you need the MSVC linker.”

Yes, they are. Not sure how that’s relevant, though.


Best chance at creating an even more dystopian future where authoritarian governments and corporations control exactly what "your" computing devices can do and you can't "jailbreak" out of?

Yes, I know Rust is not a panacea, but I feel like the whole paranoia around safety/security is just leading us towards that sort of society, technology included. It seems that there was somewhat of a balance before, which allowed for some sort of "civil disobedience" (jailbreaking, rooting, cracking, etc.), and now that freedom is slowly disappearing.


This is the strangest critique of writing more secure code I've seen.

Yes, more secure code does reduce the chance of jailbreaks, but insecure code is overwhelmingly used more by criminals and state actors to try and break into your computer.

Secure Code helps all computer users, the 'little guy' most of all.

And you can always vote with your wallet (or, you know, vote) to keep your digital liberties. Blaming a better tool for eroding your liberties seems ... strange.


State actors and high tier criminal orgs will always have a way into your digital life.

In a way, making code more secure is just creating an asymmetry where regular joe can't break into a device but only state actors and other major players can.

I can sort of understand GP's sentiment. If someone really wants to hack me, they will. Or they'll find some other vector. The prevalence of memory safe code doesn't really impact my own security as an individual. But it does impact my ability to jailbreak and own my devices.

There was a similar discussion around the intel Trusted Computing concept. It was not a platform that ultimately benefited the user, it was a platform for user-hostile forces to 'trust' that the user isn't interfering with the code they want to execute on the user's device.

Slowly but surely we're moving towards a world where user devices are merely thin clients over corporate and government owned systems, and don't really belong to the user at all.


A good hacker can always break in. In fact the most common and most effective hacking doesn't involve technology at all: social engineering. All this sort of walled garden thinking does is make it just a little bit more challenging for people to learn more about the tools they've purchased and own outright, and how to best make use of them for their own needs.

Today, when macOS gives me warnings and asks permission for inane things, like the terminal needing permission to look into ~/Downloads or ~/Documents, I and everyone else with some experience in technology just roll our eyes at these requests and move on. This would be like my new toothbrush asking me 'are you sure you want to brush your teeth in this bathroom?' However, to someone not familiar with technology and Apple's security logic of recent years, these requests might seem terrifying, and could scare off bright minds who might have contributed to the field of computer science had their intentions not been questioned at every turn by their operating system. You learn by breaking things, after all.


My first thought: oh no. M$ likes Rust and will probably contribute and demolish it. Just look at the Github system status history since M$ took over. Or Azure, Windows 10, ...


This is magical thinking, switching language won't solve the problem.

Complexity is the problem, hard-to-read code is the problem, over-engineering is the problem.


> This is magical thinking, switching language won't solve the problem.

It won't solve the problem, but it can be an important step towards a solution.

> Complexity is the problem, hard-to-read code is the problem, over-engineering is the problem.

I actually disagree. Many memory safety issues are not the result of hard-to-read or over-engineered code, but are instead due to careless memory management. Safe Rust code (no matter how hard-to-read or over-engineered) can guarantee memory safety.


If it was that easy to fix, this problem would have been fixed a long time ago, by many other "safe" languages.

My bet is that Rust won't help nearly as much as people tend to think.


> If it was that easy to fix, this problem would have been fixed a long time ago, by many other "safe" languages.

There are many other memory safe languages. Those usually have a GC though.

> My bet is that Rust won't help nearly as much as people tend to think.

Why not? Memory safety issues are common and safe Rust code eliminates them.
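
A classic example, as a minimal sketch: iterator invalidation, which is undefined behavior in C++ but a compile error in safe Rust.

    fn main() {
        let mut v = vec![1, 2, 3];
        for x in &v {
            println!("{}", x);
            // v.push(*x); // rejected at compile time: cannot borrow
            //             // `v` as mutable while the loop holds a
            //             // shared borrow of it
        }
    }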


I think it's systems thinking. With 40,000 developers, Microsoft is too large to fix the problems you mentioned. They need a broad-stroke change to reduce vulnerabilities. The article says they've tried many things and believe moving to Rust might work.


Yeah, MS clearly has serious maintenance issues with their codebase, for obvious reasons.

They tried many things in the past, including C# and .NET, I'm not sure if that really helped or made things worse.


I don't see why people take "Rust has affine types" and "affine types help us have safe systems programming" and then think "Rust is the only safe systems programming language."


Because people like you, who suggest that there are alternatives, fail to mention them.

Which other languages close to C, without a run-time, and without a garbage collector, give you proven memory and thread safety?

Not C, not C++, not D, not Nim, not Go, not Swift, not Ada, ...
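
To illustrate the thread-safety half, a minimal sketch using only the standard library: shared mutable state doesn't compile unless it goes through a thread-safe wrapper such as Arc<Mutex<_>>.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        // Trying to share a plain `&mut i32` across these threads
        // instead would be rejected at compile time.
        println!("{}", *counter.lock().unwrap());
    }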


Without a garbage collector? A garbage collector is a gift, not a restriction. A garbage collector is a fallback for when other mechanisms fail, and a helpful tool for prototyping.

A garbage collector is a function that frees memory through tracing. It doesn't stop you from freeing memory another way. D's GC isn't a Java or JavaScript or Go GC, it doesn't have overhead-inducing write barriers.


If my software could afford a garbage collector, I would write it in Python or a JVM language, not in D.

My software that can afford a garbage collector typically doesn't need a high level of safety either. If it fails, I restart it, and if it gets compromised, so be it.

> D's GC isn't a Java or JavaScript or Go GC, it doesn't have overhead-inducing write barriers.

I wouldn't use D for any application. If my application requires high-performance and/or correctness, I'd use Rust or C++.

D buys you memory safety with a garbage collector, but it doesn't buy you thread safety, so from the safety POV it is not enough. It also isn't free: if you disable it, you can't use most libraries, and if you enable it, you pay for a big runtime. Also, the performance that D does buy you comes at the cost of being a language at the same level of complexity as C++.

Rust gets you much better safety than D, no run-time by default and all libraries support this, better meta-programming than D, etc. all at a lower complexity cost than D.

Sure, D is better at prototyping than Rust, but Python and many other languages are much better at prototyping than D. If I need to write a prototype, I'll use Python. And if I need to make my prototype fast, I'll just write that part in Rust and call it from Python. Gives me the best of both worlds.
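
A rough sketch of that Rust-from-Python workflow, assuming the PyO3 crate (the module and function names are made up, and the exact module-definition API varies by PyO3 version):

    use pyo3::prelude::*;

    // A hot path worth moving out of Python.
    #[pyfunction]
    fn fib(n: u64) -> u64 {
        (0..n).fold((0u64, 1u64), |(a, b), _| (b, a + b)).0
    }

    // Defines the Python module `fastpart`.
    #[pymodule]
    fn fastpart(_py: Python, m: &PyModule) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(fib, m)?)?;
        Ok(())
    }

From Python it is then just `import fastpart; fastpart.fib(30)`.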

Picking D would mean picking something that's not bad but not great at anything either.


C and Ada variations have done so for many years, way before Rust was a thing.

Ada and D are also in the process of adding memory safety to the base language too.

The actual question is why nobody cared that much until now to put it in mainstream languages. The answer is, as you know, browsers.


> C and Ada variations have done so for many years, way before Rust was a thing.

Not giving any concrete examples is just proving my point.

Which C variations are both thread and memory safe? Which Ada variations are both thread and memory safe ?

AFAIK, such variations do not exist, and people claiming that they do on the internet and then failing to provide a simple link when requested multiple times by others just seems to confirm that.


What C variations have given safe system programming for many years? I'm not sure I know of any.


MISRA C is a subset of C / set of rules that is used in the auto industry, and supported by many tools.


> Rule 20.4 (required): Dynamic heap memory allocation shall not be used

So anyway, what subset of C allows the use of dynamic heap memory allocation and enforces safety?


> MISRA C is a subset of C / set of rules that is used in the auto industry, and supported by many tools.

That's true, and I know you are not claiming this, but MISRA C is neither memory nor thread safe, which is what we are talking about here.

Writing MISRA C code with data-races is trivial, and no "linter" for it finds those.


What are some other memory-safe languages that occupy the same position in the trade-off space as Rust?


D with its borrow checker. D has a GC- yes- but that merely gives you the option of having tracing delete your memory. You can manually free GC memory and have it freed through RAII.


Um, how is that better than Rust, which has better integration of, and soundness for, its borrow checker?


"best chance" not "only"

FWIW, I suspect C with proper tools is fine, just fine.

Also Ada is still a thing.



