That's what I tried to show in the previous post: it's just not true, because if you rely on even one standard library function that uses unsafe and is affected by a CVE, you're just as fucked as you would be in C, C++ or any other language. The only difference is that you don't know what's happening under the hood, and you feel "safe" because that's how they sold the language to you. Until you get hacked and "hey, we didn't know...".
You just use that "unsafe zip/unzip" function you find in the docs, and maybe with the wrong input (a weird filename or so) it happens to trigger undefined behavior that opens the door to vulnerabilities.
I really believe that Rust, Swift and lots of new[er] programming languages improve memory safety (at least at the high level); however, we need to admit that they are being sold as something they are not -> memory safe. When I read "memory safe", it means it cannot happen at all, and therefore I don't have to think about that kind of stuff when I write a program.
It'd be more honest to say: it's memory safe until:
- you use unsafe
- the libs you rely on use unsafe (and go figure once you start to pull a lib that pulls another lib etc.)
- the standard library functions you call use unsafe
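To make those last two bullets concrete, here's a toy sketch (my own made-up example, not from any real crate) of why a caller can't see any of this: the `unsafe` block hides behind a completely safe public signature.

```rust
// Toy example: the public signature is entirely safe, so a caller has
// no way to tell that an `unsafe` block runs under the hood.
fn ascii_upper(s: &str) -> String {
    let bytes: Vec<u8> = s.bytes().map(|b| b.to_ascii_uppercase()).collect();
    // SAFETY: to_ascii_uppercase only rewrites the bytes a-z and leaves
    // everything else (including UTF-8 continuation bytes) untouched,
    // so `bytes` is still valid UTF-8. If this reasoning were wrong,
    // every safe caller would silently get undefined behavior.
    unsafe { String::from_utf8_unchecked(bytes) }
}

fn main() {
    assert_eq!(ascii_upper("héllo"), "HéLLO");
    assert_eq!(ascii_upper("abc"), "ABC");
}
```

This particular wrapper happens to be sound, but nothing in its signature tells you that; you're trusting the comment inside the library.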
It's an important remark, because the next generations of programmers will build programs based on a false assumption - that they don't have to worry about certain types of errors, while it's just not true.
EDIT: maybe my definition of memory safe is just too "strict", not sure. Memory safe to me means -> it just can't happen. That's it.
"Memory safe to me means -> it just can't happen. That's it."
I'd suggest rethinking this definition. You are always at the mercy of the lower levels of your system, both in the runtime and the compiler. By this definition, nothing is memory-safe, since it is possible for your code in a "memory safe" language to encounter memory-safety issues in its runtime or the operating system.
Memory-safe means the bug won't happen in YOUR CODE. If your safe Rust code calls a library that uses unsafe incorrectly, yes you can encounter a memory-safety issue. But the memory safety bug is in the library, not your code. It is even possible to encounter a memory-safety issue in Rust if you never use unsafe, but in that case the memory safety bug is in the Rust compiler, not your code.
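A tiny made-up illustration of that split (hypothetical library code, not a real crate): the caller below contains zero unsafe, yet there is a real memory-safety bug, and it lives entirely in the library's unsafe block.

```rust
// Hypothetical "library" code with a bug in its unsafe block: there is
// no emptiness check, so for an empty slice `v.len() - 1` underflows
// and `get_unchecked` reads far out of bounds (undefined behavior).
pub fn last_elem(v: &[u32]) -> u32 {
    unsafe { *v.get_unchecked(v.len() - 1) }
}

fn main() {
    // The caller is 100% safe Rust and works fine on this input...
    let v = vec![1, 2, 3];
    assert_eq!(last_elem(&v), 3);
    // ...but `last_elem(&[])` would be undefined behavior, and that
    // bug is attributable to the library's unsafe block, not the caller.
}
```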
> By this definition, nothing is memory-safe, since it is possible for your code in a "memory safe" language to encounter memory-safety issues in its runtime
I am trying not to be extreme here, and of course I agree we objectively can't guarantee 100% safety, especially for what happens outside our code. However, importing a library happens within our code; it ends up in our executable, even if you didn't write 90% of what's inside.
As far as I know, the JVM checks at runtime for things like out-of-bounds accesses, and garbage collection rules out dangling pointers. So in theory there I only have to worry about using a good operating system that doesn't do funny things, because I know that the runtime has my back (at some cost, of course).
> Memory-safe means the bug won't happen in YOUR CODE.
So what happens when Rust replaces a major component in some important framework or library, and your team chooses to use it because "it's written in Rust, therefore it's safe to use", but actually it's vulnerable to some weird CVE because of some unsafe calls under the hood? The only way to find it is by running some sanitizer, or by excluding the dependency, but hey, can you really rewrite that major library in a safe[r] manner?
> So what happens when Rust replaces a major component in some important framework or library ... but actually it's vulnerable to some weird CVE because of some unsafe calls under the hood?
You have a memory safety issue. Shit happens. If you are using Java, your runtime or OS can have a memory safety bug. If you are using Rust, the compiler, a library you consume, or the OS can have a memory safety bug.
The purpose of Rust is to reduce the surface area where such bugs can hide. In Rust, that surface area is the compiler and unsafe code. Well-written Rust minimizes unsafe code and audits it carefully, precisely because it is understood that this is where memory safety issues are most likely to hide.
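A sketch of what that discipline looks like in practice (illustrative names, my own example): the unsafe surface is a single line, its precondition is spelled out in a SAFETY comment, and the public signature stays total so no input can violate it.

```rust
// Illustrative: confine unsafe to one audited line behind a safe,
// total signature.
fn middle(v: &[i64]) -> Option<i64> {
    if v.is_empty() {
        return None;
    }
    let i = v.len() / 2;
    // SAFETY: `v` is non-empty and i = len/2 < len, so the index is in
    // bounds. This comment is exactly what an audit has to verify.
    Some(unsafe { *v.get_unchecked(i) })
}

fn main() {
    assert_eq!(middle(&[1, 2, 3]), Some(2));
    assert_eq!(middle(&[]), None);
}
```

The point is that a reviewer only has to check the few lines inside the `unsafe` block against the stated invariant, instead of re-auditing the whole codebase.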
The goal of Rust is to enable these low level components, like the JVM, to be written in a memory-safe language too.
That only applies to managed memory. You may still allocate through Unsafe, and then you're on your own, just as in Rust. And some popular Java libraries do use Unsafe / FFI under the hood.
I think it is just as (un)true that Rust programmers don't need to worry about those errors, as it is that Java/Python/JS/whatever programmers don't need to worry about them.
And this is really the only meaning for "memory safe" that can apply to anything that runs on real world hardware and operating systems. It's always conditional on the correctness of the compiler checks, the runtime, and in Rust the unsafe blocks.
An important factor here is that, because this scheme makes the trusted/human-verified part of the codebase a bit more extensible than usual, a lot of things that would typically live in the compiler or runtime get moved into libraries, just as a matter of flexibility and architecture.
So as long as people understand that these guarantees are conditional on the correctness of that trusted code, "memory safe" is a pretty reasonable name for what's going on. And I don't think people really expect to be immune to bugs in their compiler or runtime just because the language is "safe"!
> And I don't think people really expect to be immune to bugs in their compiler or runtime just because the language is "safe"!
I would really ask you to rethink what people think and expect. Maybe you work for a company with lots of people who understand these kinds of things. I am more used to people who just go "oh, this lib does this, let me use it".
Everywhere I read that Rust is memory safe and you don't have to worry about anything (just don't use unsafe). As I mentioned before, your code might be implicitly vulnerable because of other libs, and even if it's not an issue today, it might be in 5 years, when you have tons of libs out there rewritten in Rust and "hey, they need to use unsafe code, and I can't exclude them".
For me it's important to have a proper definition, and I am not happy about all the marketing around it. I still believe it provides great improvements for the code you own, so you can't mess up pointers, use-after-free, and weird things like that.
I am a bit confounded by this comment: if people are pulling in libraries with no concern for the maintenance or security implications, that's going to be an issue regardless of memory safety!
The lowest common denominator of every team in the world simply cannot be our target audience for every technical term we use. There is a minimum level of background people need to learn before they can be effective, and I don't believe we can get around that just by using maximally-pedantic language everywhere all the time.
If you are actually encountering misleading Rust materials, I certainly support any efforts to clarify them, but in my experience people are actually pretty good about that already. TFA here has an entire section on `unsafe` in Rust, for instance.
It is helpful to have some way to refer to this approach to language design, and "memory safe" (like "type safe") has a long history with a precise definition. But perhaps you have some alternative "forum thread friendly" term in mind that you would prefer over "memory safe"?
> When I read memory safe it means that it cannot happen at all, and therefore I don't have to think about that kind of stuff when I write a program.
I've never needed to use unsafe in my Rust applications, which means that my code is 100% memory safe (by Rust's definition, of course). I expect the libraries I use to provide safe abstractions over unsafe, if there is any in there. Sure, there have been bugs (including in the Rust stdlib), but at such a small scale that these problems effectively disappeared compared to languages that aren't safe; all I need to do is maybe update some dependency once a year. It's that rare, so I will call it a safe and solved problem. We can move on to the next one (and there are many).
> It'd be more honest to say: it's memory safe until:
Which is what most Rust introductions I've seen will say. I don't think anyone has misunderstood this part.