
I am not a Rust or C++ fanboy, and I am happy that we are moving towards a future where there are fewer memory bugs, but… are we really?

- https://www.infoq.com/news/2021/11/rudra-rust-safety/

- https://cve.report/vendor/rust-lang

And you can find online similar links and research on the topic.

All it takes is using that specific piece of code at the wrong time, and that's it: the system you once believed to be safe is no longer safe. And you know what? You don't even know what a sanitizer is, because they told you that Rust is a memory safe language and you don't need anything other than the Rust compiler - this is how I find 99% of today's articles about Rust.

Sure, since Rust is used less than C/C++ today, it will obviously show fewer CVEs.

The question is … once it’s in the wild (and by this I mean “major” libs or serious shit done in Rust) how can you prevent such bugs without a runtime sanitizer?

Beware that such CVEs can affect the Rust standard library as well, not just an unknown library.

Again, there is no way of escaping bad programming and bad practices, no matter which language you use. Someone might argue that in Python or Java you can still do bad stuff; however, the likelihood is way lower, especially because most of your Java libraries will very likely be written in Java - unless you know it's using JNI under the hood etc.




> they told you that Rust is a memory safe language and you don't need anything other than the Rust compiler - this is how I find 99% of today's articles about Rust.

It is - as long as you don't use unsafe. Which is very rare, so we've made huge progress here already. Validation for cases where unsafe is necessary is needed and welcomed, but doesn't change the fact that 99% safe Rust is much better than 100% unsafe C/C++.
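
To make "safe" concrete, here's the standard toy example: a use-after-free that a C compiler would happily accept doesn't even compile in safe Rust:

    fn main() {
        let s = String::from("hello");
        let r = &s;         // r borrows s
        drop(s);            // error[E0505]: cannot move out of `s` because it is borrowed
        println!("{}", r);  // r would be dangling here
    }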

> Again, there is no way of escaping bad programming and bad practices, no matter which language you use.

Escape entirely maybe not. Eliminate most of it - definitely.


> It is - as long as you don't use unsafe.

That's what I tried to show in the previous post: it's just not true, because if you rely on even one unsafe function from the standard library that's affected by a CVE, you're just as fucked as if it were C, C++ or another language. The only difference is that you don't know what's happening under the hood and you feel "safe" because that's how they sold the language to you. Until you get hacked and "hey, we didn't know...".

You just use that "unsafe zip/unzip" function you find in the docs, and maybe with the wrong input (a weird filename or so) it happens to create undefined behavior that opens the door to vulnerabilities.

I really believe that Rust, Swift and lots of new[er] programming languages improve in terms of memory safety (at least at the high level); however, we need to admit that they are being sold as something they are not -> memory safe. When I read memory safe it means that it cannot happen at all, and therefore I don't have to think about that kind of stuff when I write a program.

It'd be more honest to say: it's memory safe until:

- you use unsafe

- the libs you rely on use unsafe (and go figure once you start to pull a lib that pulls another lib etc.)

- the standard library functions you call use unsafe

It's an important remark, because the next generations of programmers will build programs based on a false assumption - that they don't have to worry about certain types of errors, while it's just not true.

EDIT: maybe my definition of memory safe is just too "strict", not sure. Memory safe to me means -> it just can't happen. That's it.


"Memory safe to me means -> it just can't happen. That's it."

I'd suggest rethinking this definition. You are always at the mercy of the lower-levels of your system, both in the runtime and the compiler. By this definition, nothing is memory-safe, since it is possible for your code in a "memory safe" language to encounter memory-safety issues in its runtime or the operating system.

Memory-safe means the bug won't happen in YOUR CODE. If your safe Rust code calls a library that uses unsafe incorrectly, yes you can encounter a memory-safety issue. But the memory safety bug is in the library, not your code. It is even possible to encounter a memory-safety issue in Rust if you never use unsafe, but in that case the memory safety bug is in the Rust compiler, not your code.
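
A contrived sketch of what that looks like (made-up function, not a real crate): the library exposes a "safe" signature with a broken unsafe block inside, and your 100% safe code is the one that hits undefined behavior:

    // hypothetical third-party crate: the signature looks safe,
    // but the unsafe block inside skips the bounds check
    pub fn first_byte(data: &[u8]) -> u8 {
        unsafe { *data.get_unchecked(0) } // UB if `data` is empty
    }

    // your code: no `unsafe` anywhere, yet it triggers undefined
    // behavior, because the memory safety bug lives in the library
    fn main() {
        let empty: Vec<u8> = Vec::new();
        println!("{}", first_byte(&empty));
    }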


> By this definition, nothing is memory-safe, since it is possible for your code in a "memory safe" language to encounter memory-safety issues in its runtime

I am trying not to be extreme here, and of course I agree we objectively can't guarantee 100% safety, especially for what's happening outside our code. However, importing a library happens within our code; it's in our executable, even if we didn't write 90% of what's inside.

As far as I know, the JVM checks at runtime for overflows, dangling pointers, etc. So in theory there I only have to worry about using a good operating system that doesn't do funny things, because I know that the runtime has my back (at some cost, of course).

> Memory-safe means the bug won't happen in YOUR CODE.

So what happens when Rust replaces a major component in some important framework or library and your team chooses to use it because "it's written in Rust therefore it's safe to use", but actually it's vulnerable to some weird CVE because of some unsafe calls under the hood? The only way to find it is by using some sanitizer at runtime, or to exclude the dependency, but hey can you really rewrite that major library in a safe[r] manner?


>As far as I know, the JVM checks at runtime for overflows, dangling pointers, etc. so ... I only have to worry about using a good operating system...

No, you have to worry about your Java runtime too. The JRE has hundreds of thousands of lines of C++ and can have memory safety issues:

https://www.cvedetails.com/vulnerability-list/vendor_id-5/op...

> So what happens when Rust replaces a major component in some important framework or library ... but actually it's vulnerable to some weird CVE because of some unsafe calls under the hood?

You have a memory safety issue. Shit happens. If you are using Java, your runtime or OS can have a memory safety bug. If you are using Rust, the compiler, a library you consume, or the OS can have a memory safety bug.

The purpose of Rust is to reduce the surface area where such bugs can hide. In Rust, that surface area is the compiler and unsafe code. Well-written Rust minimizes unsafe code and audits it carefully, precisely because it is understood that this is where memory safety issues are most likely to hide.
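
As a toy sketch of what "minimize and audit" means in practice (made-up function again): the unsafe block is one line, its precondition is checked right above it, and the SAFETY comment is the part reviewers actually audit:

    pub fn first_byte(data: &[u8]) -> Option<u8> {
        if data.is_empty() {
            return None;
        }
        // SAFETY: we just checked that `data` is non-empty,
        // so index 0 is in bounds.
        Some(unsafe { *data.get_unchecked(0) })
    }

(In real code you'd just write `data.first().copied()`, of course; the point is only where the audit burden ends up.)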

The goal of Rust is to enable these low level components, like the JVM, to be written in a memory-safe language too.


> the JVM checks at runtime for overflows

It doesn't. Overflows wrap around in Java.

> dangling pointers

Only to managed memory. You may still allocate something through Unsafe and you're on your own, just as in Rust. And some popular Java libraries do use Unsafe / FFI under the hood.


I think it is just as (un)true that Rust programmers don't need to worry about those errors, as it is that Java/Python/JS/whatever programmers don't need to worry about them.

And this is really the only meaning for "memory safe" that can apply to anything that runs on real world hardware and operating systems. It's always conditional on the correctness of the compiler checks, the runtime, and in Rust the unsafe blocks.

An important factor here is that, because this scheme makes the trusted/human-verified part of the codebase a bit more extensible than usual, a lot of things that would typically live in the compiler or runtime get moved into libraries, just as a matter of flexibility and architecture.

So as long as people understand that these guarantees are conditional on the correctness of that trusted code, "memory safe" is a pretty reasonable name for what's going on. And I don't think people really expect to be immune to bugs in their compiler or runtime just because the language is "safe"!


> And I don't think people really expect to be immune to bugs in their compiler or runtime just because the language is "safe"!

I would really ask you to rethink what people think and expect. Maybe you work for a company with lots of people who understand this kind of thing. I am more used to people who just go "oh, this lib does this, let me use it".

Everywhere I read only that Rust is memory safe and you don't have to worry about anything (just don't use unsafe). As I mentioned before, your code might be implicitly vulnerable because of other libs, and if it's maybe not an issue today, it might be in 5 years, when you have tons of libs out there rewritten in Rust and "hey, they need to use unsafe code, and I can't exclude them".

For me it's important to have a proper definition and I am not happy about all the marketing around it. I still believe it provides great improvements for the code you own, so you can't mess up pointers, use-after-free, and weird things like that.


I am a bit confounded by this comment- if people are pulling libraries with no concern for the maintenance or security implications, that's going to be an issue regardless of memory safety!

The lowest common denominator of every team in the world simply cannot be our target audience for every technical term we use. There is a minimum level of background people need to learn before they can be effective, and I don't believe we can get around that just by using maximally-pedantic language everywhere all the time.

If you are actually encountering misleading Rust materials, I certainly support any efforts to clarify them, but in my experience people are actually pretty good about that already. TFA here has an entire section on `unsafe` in Rust, for instance.

It is helpful to have some way to refer to this approach to language design, and "memory safe" (like "type safe") has a long history with a precise definition. But perhaps you have some alternative "forum thread friendly" term in mind that you would prefer over "memory safe"?


> When I read memory safe it means that it cannot happen at all, and therefore I don't have to think about that kind of stuff when I write a program.

I've never needed to use unsafe in my Rust applications, which means that my code is 100% memory safe (by the Rust definition, of course). I expect the libraries I am using to provide safe abstractions over unsafe, if there is any in there. Sure, there have been bugs (including in the Rust stdlib), but the scale is so small that this problem has effectively disappeared compared to languages that aren't safe, and all I need to do is maybe update some dependency once a year. It's that rare, so I will call it a safe and solved problem. We can move on to the next one (and there are many).

> It'd be more honest to say: it's memory safe until:

Which is what most Rust introductions I've seen will say. I don't think anyone misunderstood this part.


Did you read this bit?

> It could appear that these results undermine the belief that Rust safety model represents an improvement over other languages, e.g. C++, but this would not be correct, say the researchers behind Rudra, who still consider Rust safety a supreme improvement.

They found ~100 security issues in 45k packages. That's clearly better than C++.


I didn’t say it’s not an improvement. I just wonder who you will blame once you have a hacker exploiting that exact CVE which you didn’t know about, because they sold you Rust as a memory safe language, so you didn’t take the time to run any sanitizer or similar.

Does Rust provide a way to check if you’re using unsafe code? What if I want to disable that? If I need to make mission critical software, I need to be aware of what I am deploying. If, on the other hand, we want Rust to be the new JS for backend, then so be it: we improved over C++, well done.


> I didn’t say it’s not an improvement.

You… almost literally did?

> I am happy that we are moving towards a future where there are fewer memory bugs, but… are we really?


> You… almost literally did?

I don't understand this comment, to be honest. What does it add to the conversation?


Why shouldn’t this kind of inconsistency be pointed out?

I’m happy to believe GP didn’t say exactly what they intended to or simply misremembered what exactly they said previously. It’s certainly something I’ve done before in an online conversation. And when it’s happened to me I’ve appreciated having it pointed out explicitly so I could clarify what my thoughts actually were. Often it’s because I misstated my opinion without realizing.

Either that’s the case here (what I’m choosing to believe) and it gives GP an opportunity to explain further or GP is engaging in this discussion in bad faith. In either case, I don’t see the downside.


You might be right, it's probably just easier to ask :)

So, when I wrote the message I was really physically and mentally tired, and I wrote it from my phone, which unfortunately doesn't always help me write 100% consistent sentences.

In the end I just used the wrong wording. As I mentioned in another comment, the improvement is clear and it's the new way forward. In my opinion it simply doesn't solve all memory safety issues, and while with C and C++ you unfortunately know what you get, with Rust or Swift you might get a false sense of 100% safety where there is none.


In my experience, almost no Rust programmers believe that it's a silver bullet that results in 100% safety. Certainly orders of magnitude fewer than people seem to believe exist.

What it does do is drastically cut down the number of places you have to deeply audit.


Rust would have prevented about half of the CVEs in C code (I've seen a few different studies with somewhat different results; half is close enough for discussion). The other half is on you to write good code.

Note that the half Rust would prevent tends to be less impactful: still a CVE, but the exploit is less impactful to end users.


If we leave it to the programmer, what are we improving?

We made a big improvement, but why can’t we disable “unsafe”?

That would leave absolutely no margin for such errors.


Rust has a culture where people don't use `unsafe` unless absolutely necessary. That is generally good enough in my experience.

If you want to go further, you can disable unsafe in a crate by adding #![forbid(unsafe_code)].
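
Concretely, at the crate root (lib.rs or main.rs):

    #![forbid(unsafe_code)]

    fn main() {
        // any `unsafe { ... }` block anywhere in this crate is now a
        // hard compile error, and unlike `deny`, `forbid` can't be
        // overridden further down with an `allow`
        println!("no unsafe allowed in this crate");
    }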

And if you need more control than that, there's probably tooling out there that will help depending on what exactly you need.

https://github.com/rustsec/rustsec/tree/main/cargo-audit

https://github.com/rust-secure-code/cargo-geiger

https://github.com/crev-dev/cargo-crev


> If you want to go further, you can disable unsafe in a crate by adding #![forbid(unsafe_code)].

Cool, so it's possible to exclude dependencies which include unsafe stuff! That's awesome.

See, this is the kind of stuff I was looking for.

From the perspective of a team writing new Rust code:

1) Don't allow unsafe (you can have an easy code search for this)

2) Forbid crates that use unsafe

Finally: how do you catch unsafe in the standard library?


> 1) Don't allow unsafe (you can have an easy code search for this) 2) Forbid crates that use unsafe

This can be accomplished with cargo vet (https://mozilla.github.io/cargo-vet/how-it-works.html?highli...)


> We made a big improvement, but why can’t we disable “unsafe”?

Because there are things which literally can not be safe, and rust will not let you get away with pretending.


> Because there are things which literally can not be safe, and rust will not let you get away with pretending.

Can you post maybe a good article explaining this?


There are times when you need unsafe. Hopefully those are rare, but most rust projects will need it. (Remember, Rust is aiming at systems programming: some domains never need unsafe, but others will.)

I'm a C++ guy interested in Rust; my understanding is that unsafe lets me design custom high-performance interfaces that do weird pointer tricks (which is needed to interface with the C and C++98 code I work with), and it is on me to bounds check. Then the users of my interface don't use unsafe, because I did all the nasty parts and their code is easy to write well.
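
Roughly the pattern I have in mind (the C function and all the names here are made up, just a sketch): the raw FFI call is unsafe, the wrapper does the checking once, and callers stay in safe Rust:

    use std::os::raw::{c_int, c_uchar};

    extern "C" {
        // hypothetical legacy C API: fills `buf` with up to `len` bytes,
        // returns bytes written or a negative error code
        fn legacy_read(buf: *mut c_uchar, len: c_int) -> c_int;
    }

    pub fn read_into(buf: &mut [u8]) -> Result<usize, i32> {
        let len = c_int::try_from(buf.len()).map_err(|_| -1)?;
        // SAFETY: `buf` is a valid, writable slice of exactly `len` bytes,
        // and `legacy_read` is documented to write at most `len` bytes.
        let n = unsafe { legacy_read(buf.as_mut_ptr(), len) };
        if n < 0 { Err(n) } else { Ok(n as usize) }
    }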


> most rust projects will need it.

If you mean "writing unsafe in your codebase yourself," this isn't borne out by the numbers.

If you mean "depending on unsafe somewhere in your dependencies", then 100% of Rust code needs unsafe, just like any other language. Hardware and many operating systems' APIs aren't built in a way that can guarantee safety, and you need to interact with them to do anything.


I'm biased here because I work in embedded, where I can't see any way to write code that directly interacts with hardware without unsafe. There are other worlds out there that I don't know much about.
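
For example (made-up register address, just to show the shape of it): a write to a memory-mapped register has to go through a raw pointer, so there's an unavoidable `unsafe` at the bottom no matter how safe everything above it is:

    const GPIO_OUT: *mut u32 = 0x4000_0000 as *mut u32; // hypothetical MMIO address

    fn set_pins(mask: u32) {
        // SAFETY: on this (hypothetical) chip, 0x4000_0000 is the GPIO output
        // register, and a volatile write to it is always valid.
        unsafe { core::ptr::write_volatile(GPIO_OUT, mask) }
    }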


Unsafe is necessary for Rust to meet its goal of being a systems language. If Rust didn't provide unsafe, then Rust programmers would be forced to write this code in C, which would be just as unsafe, but a worse developer experience.


Yes.

They found 250 bugs after analyzing the entirety of Rust and all of its packages.

There are probably more memory safety bugs in the python standard library and top 50k packages.


The Rust project itself also has an extremely aggressive CVE policy (which many find too aggressive): if unsoundness is found in an API, it gets a CVE, no matter how convoluted or unlikely the triggering situation is, even with absolutely no indication of any code in the wild coming close to the unsoundness.

Essentially, by the Rust standard, more or less every line of the C++ standard would be a CVE.

Hell, the Rust project releases CVEs for things other languages literally just shrug about, e.g. https://blog.rust-lang.org/2022/01/20/cve-2022-21658.html

The C++ people just go "lol nothing in std::filesystem is safe we don't give a shit". The spec pretty much says it's UB to have other programs interact with the filesystem: http://eel.is/c++draft/fs.race.behavior#1.sentence-2


Many CVEs in the Rust ecosystem aren’t filed by the Rust project itself. Anyone can file a CVE; it does not require the involvement of upstream.

That said, things that happen in the language or the standard library, like the one you linked, are often (but not always!) filed by the project.


Every single time you (or anyone else) points to the number of CVEs as a metric, you're contributing to the computing world being less safe because you're incentivizing people to file fewer CVEs.

https://en.wikipedia.org/wiki/Goodhart's_law



