
> You have some misconceptions about C and undefined behavior.

The discussion is about RAM and Rust, not C. And the particular use of uninitialized data in C that corresponds to the linked article (as a target buffer for a read call) is clearly not undefined behavior.

This is a classic HN tangent, basically. You're making the discussion worse and not better.

The question is "Why can't Rust act like C when faced with empty buffers?", and the answer has nothing to do with undefined behavior or unsafe. They just got it wrong.



Okay, I think I see. In the linked article, due to the behavior of `read`, uninitialized memory is never read. The same would be true in equivalent C code.

However, in C, the programmer doesn't need to prove to the compiler that uninitialized memory is never read; they are just expected to prevent it from happening. In this case, it's clear to the programmer that there's no undefined behavior.

In Rust though, the compiler must be able to statically verify no undefined behavior can occur (except due to unsafe sections). It's not possible to statically verify this in either the Rust or C case, because not enough information is encoded into the type signature of `read`. The article discusses a couple of ways that information might be encoded so that Rust can be more like C, and discusses their trade-offs. C explicitly sidesteps this by placing the responsibility entirely on the programmer.
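As a rough sketch of what "encoding that information" could look like (this is hypothetical, not the article's exact proposal and not anything in std today), the contract could live in the signature itself:

    use std::mem::MaybeUninit;

    // Hypothetical trait, not part of std: the returned slice is exactly the
    // prefix the reader initialized, so "how many bytes are now valid" is
    // carried by the types rather than only by the docs for a usize return.
    trait ReadIntoUninit {
        fn read_into_uninit<'a>(
            &mut self,
            buf: &'a mut [MaybeUninit<u8>],
        ) -> std::io::Result<&'a mut [u8]>;
    }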

So to directly answer your question "Why can't Rust act like C when faced with empty buffers?", it's because the Rust compiler cannot yet statically verify there's no undefined behavior in this case, even though there is in fact no undefined behavior, and one of the primary design goals of Rust is to statically prevent undefined behavior.

And to what's perhaps the initial question, this is discussed using the term "safety" simply because Rust defines things which can't be statically verified to not invoke undefined behavior as "unsafe". Perhaps a better term would be "not yet statically provable as safe", but it's a bit of a mouthful.


> it's because the Rust compiler cannot yet statically verify there's no undefined behavior in this case

Uh... yes it can. It's a memory write to the uninitialized region. Writes are not undefined, nor unsafe, and never have been. They aren't in C, they aren't in hardware. Writes are fine.

The bug here is API design, not verification constraints.


The issue isn't writes to uninitialized memory, it's reads from uninitialized memory. The compiler doesn't know how much of the buffer `read` writes. The docs say it returns an unsigned integer giving the number of bytes written, so the programmer knows a later read from `buffer[0..num_bytes_written]` is valid, but the compiler doesn't know what the number returned from `read` represents. From the compiler's point of view, the whole buffer must be initialized, regardless of what `read` actually does, for reads from it to be valid. That means it has to be initialized before it's passed to `read`; otherwise the compiler can't prove that the elements later read from the buffer are initialized.
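Concretely, that's why safe Rust code today ends up doing something like this, paying for the up-front zeroing that is presumably what the article wants to avoid (a minimal sketch, with an arbitrary buffer size):

    use std::io::Read;

    // Today's pattern: initialize (zero) the whole buffer before the call,
    // then trust the returned count for how much of it to keep.
    fn read_chunk(src: &mut impl Read) -> std::io::Result<Vec<u8>> {
        let mut buf = vec![0u8; 4096]; // wasted work if `read` fills less
        let n = src.read(&mut buf)?;
        buf.truncate(n);               // only buf[..n] was written by `read`
        Ok(buf)
    }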


I'm basically going to give up and declare victory. You're saying more or less exactly what I said above and that you took issue with, which is that the fundamental problem here is one of API design (Rust's kinda sucks) and not the safety of the underlying primitive being abstracted, which has never been at issue. And certainly nothing about undefined behavior, given that it's 100% well defined.


The write is fine, but the subsequent read isn't (unless you know that the write happened, which the Rust compiler doesn't).
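Roughly, in Rust terms (a minimal illustration, not tied to the article's code):

    use std::mem::MaybeUninit;

    // Writing to uninitialized memory is perfectly defined; the hazard is
    // reading it before anything has been written there.
    fn write_then_read() -> u8 {
        let mut slot = MaybeUninit::<u8>::uninit();
        slot.write(42);               // plain write: safe, well defined
        unsafe { slot.assume_init() } // sound only because we just wrote it
    }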



