New Linux glibc flaw lets attackers get root on major distros (bleepingcomputer.com)
166 points by PaulHoule 11 months ago | 99 comments



Duplicate, more discussion here: https://news.ycombinator.com/item?id=39194093


I have not found Rust mentioned there. Am I right in saying that using Rust would reduce the security attack surface in Linux? I see it is mentioned in this thread now, and there are different views about it.


Memory safety isn't exclusive to Rust. Language designers and implementers solved these problems years ago with safer languages (better type systems, removing raw pointers) or safer runtimes (the JVM, .NET, etc.). Rust is notable because it brings a lot of prior art together in one place, and the compiler prevents most problems at compile time. As long as programmers don't use unsafe Rust, it can produce safer code without runtime overhead. There are a lot of languages safer than C. Not all are suitable for writing system libraries.


Mind you, there is a runtime overhead.

If we look at an access beyond a slice's boundary:

https://github.com/rust-lang/rust/blob/ea37e8091fe87ae0a7e20...

This bounds check is what enables Rust code to fail with a panic vs continuing (which is what triggers a lot of bugs).

Post about the impact on performance: https://blog.readyset.io/bounds-checks/
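
To make the trade-off concrete, here is a minimal sketch (plain std Rust, nothing else assumed) of what that bounds check buys you: indexing past the end panics immediately instead of silently reading adjacent memory, and the checked accessor turns the failure into a value:

  fn main() {
      let v = vec![10, 20, 30];
      let i = 3; // one past the end

      // v[i] would hit the bounds check and panic:
      // "index out of bounds: the len is 3 but the index is 3"

      // The checked form surfaces the failure as an Option instead:
      match v.get(i) {
          Some(x) => println!("got {x}"),
          None => println!("index {i} is out of bounds"),
      }
  }

Each such check is a compare-and-branch, which is the runtime overhead the linked post measures.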


The obvious ways to do this in Rust are safe. On the other hand, if you write some Rust whose only job is to call this glibc function, then Rust didn't magically save you, because the function itself is broken. Does that help?


Since it is a buffer overflow, yes: if glibc were written in Rust, I think it wouldn't have happened. I believe I saw a libc reimplementation project in Rust, but that's a big effort and not something I would expect to be a drop-in replacement for a long time. That said, I think we should try to avoid turning every vulnerability discussion into one about the merits of rewriting it in Rust.


FWIW C can do it too to some extent, here's postfix: https://github.com/vdukhovni/postfix/blob/master/postfix/src...


Yes, but only if the relevant code can be written in Safe Rust (that is, without using unsafe). This is not always a given, but surprisingly, a huge amount of code can be written in Safe Rust and therefore won't have buffer overflows.


This makes me think the musl people are right. Glibc is quite a large attack surface.


Fwiw, the analogous logic in musl libc[1] sets log_ident to the empty string if openlog() was called with a NULL ident[2]. It also formats twice into buf[1024] without checking that the first format's length doesn't exceed the stack buffer, but that's likely always safe given how short the output of the first format is.

[1]: https://git.musl-libc.org/cgit/musl/tree/src/misc/syslog.c#n...

[2]: https://git.musl-libc.org/cgit/musl/tree/src/misc/syslog.c#n...


More relevant, it caps `log_ident` to a very short length. If my math is correct, the max length of the first snprintf is 72 bytes (actually less with real-world valid PIDs), and the buffer is 1024.

As usual, MUSL is secure at the cost of giving up usefulness. A 32-byte buffer might seem like a lot, but there are dozens of programs with a name longer than 31 bytes on my system. `dbus-update-activation-environment` is the most interesting (most others are gcc/binutils), and I know dbus in general uses syslog, though I haven't traced this particular entry point.


Yeah. You have to math out the various fields, but I agree it appears to end up an order of magnitude smaller than 1024. It's just not particularly explicit.

> As usual, MUSL is secure at the cost of giving up usefulness.

I would say 'Musl is deliberately simple at the cost of usefulness'. The simplicity lends itself to correctness and security.

Btw, since Musl does not implicitly construct log_ident from argv[0], program name is sort of irrelevant unless that program explicitly openlog()s with argv[0]. dbus-update-activation-environment does not appear to openlog or use syslog at all: https://gitlab.freedesktop.org/dbus/dbus/-/blob/master/tools... Instead it prints error messages to stderr.


Truncated to 31 bytes would be dbus-update-activation-environm. Is it ambiguous?


Interesting. The first snprintf() could return a negative value (at least per the glibc snprintf man page), and then the second vsnprintf will overwrite... something.

Maybe their implementation can't return a negative value, or only does so if the format string (constant, here) is invalid? I wouldn't count on that, especially not on it staying true forever. That's pretty sloppy code.


I believe the first snprintf only returns a negative value if the format string is broken. Musl has decided to avoid that by not breaking the format string. If that changes, returning -1 is the least of their problems.

You can see the implementation here: https://git.musl-libc.org/cgit/musl/tree/src/stdio/vsnprintf... and https://git.musl-libc.org/cgit/musl/tree/src/stdio/vfprintf.... .


Probably the person you're replying to is just confused because before it was standardized some snprintf() implementations returned -1 on overflow. If you were trying to be portable and defensive you'd need to check for either error return.

Not really a concern inside musl because those implementations are probably long gone and because it's calling its own snprintf() anyway.


I'm not confused. I know that some snprintf implementations return, or returned, -1 on overflow. I assume that musl's doesn't, because it's a fine library.

> If you were trying to be portable and defensive you'd need to check for either error return.

Including defensive against future changes.

I'm sure thousands of bugs are being written every day because people don't check return values that "can't happen", because they know the code they call. Then 10-20 years later, someone changes that code they depend on, without violating the contract.

I encounter these kinds of bugs all the time. There's a simple way to avoid them: Check the damn return values, even if just with an assert.

The extra annoying ones are the ones with a comment saying "Can't happen" that then does happen. The person who wrote that could have spent about the same number of characters simply handling the "can't happen" case.

We can't get away from Hyrum's Law, but we sure can try to minimize its impact.
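
In Rust terms, the same discipline looks like this (a minimal sketch; the thread's example is C's snprintf, but the principle carries over): make the "can't happen" branch explicit rather than silently discarding the result.

  use std::fmt::Write;

  fn format_prefix(ident: &str, pid: u32) -> String {
      let mut buf = String::new();
      // Writing to a String "can't fail", but the Result is still
      // checked: if that contract ever changes, this fails loudly
      // at the call site instead of corrupting state downstream.
      write!(buf, "{ident}[{pid}]: ").expect("write! to a String failed");
      buf
  }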


> I believe the first snprintf only returns a negative value if the format string is broken.

I encounter bugs all the time that come from the fact that the caller knows the code they call, so they ignore the documented promises.

Let's all at least try to not invoke Hyrum's Law, even for code in the same project.


In a sane world, most distributions would have long switched to musl and libressl, from glibc and openssl.


Distributions can't switch to libressl until all the software they package also supports it. libressl is incompatible with openssl, so programs compiled against one cannot run against the other, and both cannot be loaded into the same address space. That's why the few distributions that did switch to libressl eventually switched back.


>the few distributions that did switch to libressl eventually switched back

Note that libressl supposedly improved openssl compatibility dramatically after that.

Also note it would have been a very different scenario if Debian or Fedora had actually made the switch, as they should have, rather than a few smaller distributions.

Instead, incompetence was rewarded, with plenty of money thrown openssl's way. In hindsight, that has proven a bad decision: openssl's code quality is still poor, and libressl is still a world better.


Debian has a humongous package library, the biggest of any distro if I'm not mistaken. They need full drop-in compatibility between libressl and openssl if they hope to replace the latter any time soon for all those packages.


The (current) compatibility is closer to drop-in than not. Most of these packages have maintainers and an upstream. It is just a matter of getting it done.

There exist distros using libressl and musl, proving feasibility.


s/address space/link-map namespace/

See `dlmopen(3)`. Admittedly, multiple link-map namespaces are weird to use.


I wonder. su called PAM to authenticate the user, and PAM called syslog to log the authentication failure. Even if syslog were in another library, it would still be called; you would just have more libraries.


… this C code is just a case exemplar of the problems with C.

We have `l` and `1` being used in the same passage. Single-letter variables are bad to start with, but `l` is particularly cursed: if you happen to be using a font where `l` and `1` are homoglyphs, good luck reading that passage. And that's sadly the default in a number of OSes/environments, including, indeed, the site this code is hosted on. One might presume `l` stands for length… but there's also `vl`, `bufsize`, and `sizeof bufs`. Then there are `bufs` and `buf`, although the former, `bufs`, is a singular buffer, leading one to wonder why the variable name is pluralized. I'm pretty sure it's because it's stack-allocated (as opposed to the heap-alloc'd `buf`), and thus `bufs` is "buf[fer], s[tack]". This is just nuts, and it should be no surprise there are multiple CVEs.

The OG code had CVEs. The commit that "introduced" this (which I almost feel is wrong to say) attempted to use `bufsize` for the buffer size, which certainly feels more right than the OG, but alas, the variable is never meaningfully initialized. This CVE is in the patch that fixed a CVE! And in the original CVE's bug report, the ominous words "Doesn't seem too serious" were uttered.

I'm sure it's fixed now, though, right?

syslog is just a bad logging interface, to boot.


Even Comic Sans differentiates between l and 1. I wonder how many fonts confuse them.


There is zero reason to have setuid binaries. Additionally, having users escalate a program to root, with no restrictions on what root can do, is even worse. setuid is a total hack, and it is embarrassing how many Linux distros still rely on it.


How are users supposed to change their password?


While using a setuid binary to edit the password/group "databases" is the historical default, there's no real technical reason why it must be that way. The passwd program could communicate with the database service via a socket. Likewise the NSS and PAM stuff could communicate with the same service via a socket. No reason for it to be lots of in-process loadable modules in this day and age.
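
A minimal sketch of that design, in Rust for concreteness; the socket path, wire format, and daemon are all hypothetical:

  use std::io::{Read, Write};
  use std::os::unix::net::UnixStream;

  // Instead of a setuid passwd binary editing the database directly,
  // an unprivileged client asks a privileged daemon over a socket.
  // The daemon can identify the caller (e.g. via SO_PEERCRED) and
  // apply policy before touching anything.
  fn request_password_change(user: &str, new_hash: &str) -> std::io::Result<String> {
      let mut stream = UnixStream::connect("/run/accountsd.sock")?; // hypothetical path
      writeln!(stream, "CHPASS {user} {new_hash}")?;
      let mut reply = String::new();
      stream.read_to_string(&mut reply)?;
      Ok(reply)
  }

The attack surface then shrinks to the daemon's request parser, which runs in its own process rather than in the caller's address space.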


> "Although [$reasons], we decided to not pursue this avenue of exploitation, because:"

I see lots of ground for further research.

(from https://www.qualys.com/2024/01/30/cve-2023-6246/syslog.txt, the article from the duplicate HN thread)


Meh, local privilege escalation.

I think by now everyone has accepted that the Unix/Linux account system is insecure by design and exists just to prevent accidental damage.

There are ways to restrict it but the default configuration simply exposes too much of an attack surface. I still give separate accounts to some services as defense in depth, but it mostly exists to slow down untargeted attacks.


What's the preferred method of limiting permissions then? Virtualization? Containerization? AppArmor?


For normal services namespaces are enough (make sure to set no_new_privs, one of the best Linux features). Run it with the bare minimum of mounts required, no shared /tmp, etc. For all its faults, systemd actually gets this right, by allowing to easily harden services.

Note that this exploit relies on being able to run as root (typically through setuid). If you don't fully trust a service, don't let it ever talk to code running as root in the first place. No opening sockets in /tmp, no listing processes in /proc, no dbus shenanigans, no sudo or su. One of the issues here is that some programs required setuid for bad reasons (IIRC, historically ping was setuid to be able to send ICMP packets). From a quick check (find -type f -perm -4000), most of these problems have been eliminated, via Linux capabilities or otherwise.
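
For the no_new_privs part specifically, here is a minimal sketch of setting it in-process with the `libc` crate (assumed as a dependency); systemd's NoNewPrivileges= option achieves the same from a unit file:

  fn drop_future_privileges() -> std::io::Result<()> {
      // After this, execve() can never grant privileges the process
      // doesn't already have: setuid bits and file capabilities on
      // executed binaries are ignored, which defangs setuid-based
      // escalation like the one discussed here.
      let rc = unsafe {
          libc::prctl(libc::PR_SET_NO_NEW_PRIVS, 1u64, 0u64, 0u64, 0u64)
      };
      if rc != 0 {
          return Err(std::io::Error::last_os_error());
      }
      Ok(())
  }

The flag is inherited by children and cannot be unset, so a supervisor can set it before exec'ing the service.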

These tactics successfully saved me from log4shell.


Depends on what you want to achieve. AppArmor/SELinux prevent access to files and directories. Virtualization and containerization try to build a jail. You can combine the solutions, e.g. a container running a distro with SELinux, like any from the Red Hat ecosystem.

They have all had vulnerabilities. My preferred method is to not install stuff I don't need and to fix any dangerous configuration for the programs I do need. I prefer Podman over Docker because of rootless mode, for example.


Containerization doesn't protect at all against privilege escalation. And AppArmor is a very partial improvement.

The way to protect against this is with an external supervisor. But then you have to care about privilege escalation attacks against the supervisor. Hopefully that one is much simpler than Linux, so it has far fewer vulnerabilities.


I use containerization, but it’s not perfect. Also on the front page today: https://news.ycombinator.com/item?id=39250975


On Linux, Chromium uses setuid or user namespaces to restrict the access of sandboxed components and seccomp-bpf to reduce the kernel attack surface.

Check out the Chromium docs on this topic: https://chromium.googlesource.com/chromium/src/+/HEAD/docs/l...


extra laptops/servers...? meh


at this point I think it's beyond clear that setuid-root binaries shouldn't be written in C


You are conflating C with glibc


If your setuid program links to glibc and uses the logging there, then you are incorporating glibc into your setuid program. You could also not do that.


Given the Unix philosophy of providing important functionality only in the C runtime library and having all software dynamically link against it, wouldn't the most crucial step be to replace glibc with an implementation that isn't written in C?


UNIX philosophy has nothing to do with dynamic linking, in fact during the first decade of its existence only static linking was available.


I mean, I don't disagree, but it probably would not have helped in this case.

This is based on a buffer overflow in glibc itself. It is my understanding that Rust still calls into libc for many low-level system functions. And if you call C code from Rust, well, it's still C code with C code bugs.

Ultimately, if you want to get rid of those, you'd have to rewrite the libc in Rust.


I very much doubt a Rust program is going to pull in glibc's syslog()

in this case it was in a PAM module

and dynamic modules loaded into setuid binaries (pam/nss/...) are also super-high risk and should be memory safe too


> Ultimately, if you want to get rid of those, you'd have to rewrite the libc in Rust.

No, you can eliminate these bugs by getting better at writing code. The language is independent from the quality of any given code.

Rust is capable of introducing these bugs, but the language is more explicit when you do. One language isn't inherently more secure than another. They each fill a niche: some trade usability for correctness, some trade correctness for ease of use.


> you can eliminate these bugs by getting better at writing code.

No, the data doesn't show this. This is the "anger" response to being confronted with the reality of the inherent insecurity of C and C++. https://www.usenix.org/conference/enigma2021/presentation/ga...


Alex isn't that convincing. And what data? Are you counting his opinion as data?

Actually asking: Are there any studies that show high quality code is just as memory unsafe as low quality code? Are there any studies that show excluding memory overrun bugs, rust code has a lower risk of being exploitable? What about a memory unsafe language like C written to the MISRA C standards?

I'll happily admit I'm just an old dude angry on the Internet... doesn't mean I'm wrong :)


In the presentation there are links to multiple companies observing a consistent 60-70% of high severity security issues being attributed to memory safety issues. These are companies that are organized, well resourced, etc.

Rust eliminates these bugs and it results in fewer security issues in Rust code than C/C++. See https://security.googleblog.com/2022/12/memory-safe-language...

Additionally, even in brilliant soloist projects like curl, Linux, etc., you still get plenty of memory safety CVEs. The best still don't get it right.

> Are there any studies that show excluding memory overrun bugs...

Why exclude memory safety from the argument? That's the whole point -- to fix those significant fraction of vulnerabilities.


> Why exclude memory safety from the argument? That's the whole point -- to fix those significant fraction of vulnerabilities.

Because while it's a significant fraction of the vulnerabilities found (key word: found, because they're easier to find), it's not a significant fraction of the vulnerabilities leading to exploitation.

My original argument was that security issues are preventable by increasing skill. You tried to claim that's merely an instinctual reaction of anger. My refutation is that, yes, Rust eliminates some memory safety issues, when used correctly and when you limit yourself to a smaller subset of the language.

The two arguments: (1) isn't that also true for MISRA C, and (2) does programming in Rust lead to a lower defect count on any metric other than memory safety?

Because if exploitation isn't due to memory safety, fixing that class of bug doesn't improve security.

And I'll actually make a third argument: does a large Rust project have a lower defect density than a solo project written by an expert, e.g. curl?


>researching what tactics are effective at convincing developers to reconsider C and C++

So he says nothing about how to write better code, only about how to convince developers to switch languages? It's known how to write better code at fairly low effort, and this is done in several popular projects, mostly networking ones for obvious reasons.


Just curious whether Rust would have averted these issues?


There's a kind of naive thing going on on HN where people see a bug and ask whether, or assert that, Rust would fix it. The problem is they are homing in on just the bugs Rust addresses. This is fine; I've had this conversation at work (not involving Rust, even) for two decades, including discussions about Java.

The problem is that Rust solves _a_ class of issues without even beginning to address other kinds of problems, and so this sort of thinking is evolving toward a mindset that "oh it's rust so no security issues", which is completely wrong.


Perhaps an inoculation against this is to respond "Yes, but so would Ada".


Ada is such a lost opportunity honestly. As powerful as C, performant and clean.


Unfortunately, it did not come with an OS, wasn't freely available, and was quite costly.

Remember that until AT&T was allowed to take commercial advantage of UNIX, its source code was available for a symbolic price, and it came with a C compiler toolchain, at least until Sun started the trend among UNIX vendors of splitting UNIX development tools into an additional purchase.

For the UNIX vendors that had Ada compilers, it was an additional purchase on top of the UNIX SDK (which already had C and C++), so unless there was a hard requirement to use Ada, no one bothered to pay extra.


Indeed. Ada killed itself, partially because it was one of the DoD approved languages and so the vendors got on board with raping and pillaging everything they could, killing the whole thing in the process.


I keep using "mostly safe languages" as a kind of abstract term; Rust comes up even in scenarios where other alternatives would be a viable option.


I don't think so. Rust solves a class of security issues, but that class is 2/3 of all security issues! That's overwhelmingly more secure.

And it's not like it has no effect on the final third either. The stronger type system and ownership system make non-memory related bugs less likely too.

Your comment makes it sound as if you have a better solution. But Rust is the best one we know so far, so this kind of naysaying does more harm than good.


The naysaying is claiming these problems can't be fixed in C; that kind of naysaying does more harm than good. Rust merely uses an idea that was known before C even existed.


Do you realize how many security issues exist in applications written in memory-safe Java/PHP/JS/Python? Memory corruption is but the tip of the iceberg.


You are right that there are software bugs that aren't memory bugs.

Now as GP was saying, Rust inherently mitigates a great deal of generally serious software vulnerabilities, which is great, even if it doesn't mitigate anything else.


Memory corruption is literally 2/3 of the iceberg. Look it up.


The answer is probably yes. The issue is caused by a heap-memory overflow. In Rust, if you're storing an array on the heap, there will be bounds checks on the array accesses. If you're using a Vec and appending to it, it grows automatically (and indexing is still bounds-checked).
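
A minimal sketch of those two safe paths (plain std; the function names are just for illustration):

  fn append_log_line(buf: &mut Vec<u8>, line: &[u8]) {
      // Appending reallocates as needed; there is no fixed-size
      // buffer to overflow.
      buf.extend_from_slice(line);
  }

  fn read_byte(buf: &[u8], i: usize) -> Option<u8> {
      // Indexing as buf[i] would panic past the end; .get() turns
      // the bounds check's failure into a value instead.
      buf.get(i).copied()
  }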


There was a similar bug in the past in Rust: Integer overflow causing a buffer overflow: https://github.com/rust-lang/rust/pull/54399/commits/8ac88d3...

So the answer is: Not necessarily. I agree though that the memory safety features in Rust would help reduce the risk. On the other hand, one could also write safer C by abstracting away buffer management. The world is not black and white.


That function was using an unsafe block to disable the bounds check. This kind of bug would likely be impossible in "safe" Rust (code that doesn't use unsafe).
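
To illustrate what "using an unsafe block to disable the bounds check" looks like (a generic sketch, not the code from that PR):

  fn sum_first(v: &[u32], n: usize) -> u32 {
      let mut total = 0;
      for i in 0..n {
          // SAFETY: the caller must guarantee n <= v.len(). If that
          // guarantee is broken upstream (say, by an integer-overflow
          // bug when computing n), this is an out-of-bounds read,
          // exactly as in C.
          total += unsafe { *v.get_unchecked(i) };
      }
      total
  }

Writing `v[i]` instead keeps the check and turns the same upstream bug into a panic.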


In reality, people use unsafe to optimize and then introduce bugs. You could also use safe abstractions to avoid bounds violations in C.


That isn't the use-case for unsafe. If you see a codebase doing this then you should stop paying attention to or using that codebase.

Unsafe is for creating things that are beyond the understanding of the borrow checker. For example, using unsafe is mandatory for creating a mutex because the safety rules are upheld by the data structure itself. Unsafe is not "C mode" as most would assume - it is more unsafe than C because if you don't uphold the memory model things will break.

Rust's strict aliasing and mutability rules provide ample opportunity for compiler optimizations and zero cost abstractions. Turning to unsafe is generally a sign of incompetence/arrogance.


The linked glibc code was likewise being fancy trying to use a stack buffer to avoid a heap allocation. Idiomatic C would have worked just fine too. The point is that Rust is also (though not as severely) subject to developers playing tricks and hurting themselves.


The number of lines added, let alone unsafe lines, is really startling for such a conceptually simple operation.


It's actually questionable. The vulnerable code is described really well in the Qualys report: https://www.qualys.com/2024/01/30/cve-2023-6246/syslog.txt

Basically: the code was an optimization trying to avoid a heap allocation by using a stack buffer instead, but its fallback for "it doesn't fit in 1k of stack memory" wasn't tested and didn't work right.

Rust struggles with alloca-style code too, to the extent that users who want to do this kind of trickery usually resort to unsafe. There's a link elsewhere in this topic to a Rust library vulnerability that looks similar.

The upshot is that if the code had been written idiomatically and just used the heap like it was supposed to, it probably would have worked fine. The glibc authors got fancy, loaded a footgun, and it went off, something that Rust is equally capable of.


> Rust struggles with alloca-style code too, to the extent that users who want to do this kind of trickery usually resort to unsafe.

Sort of, but not really. I've done this sort of "if it fits in a small stack buffer, use that, else fall back to heap" in code too. It's possible in safe-ish Rust:

  use smallstr::SmallString; // from the `smallstr` crate
  use std::fmt::Write;

  let mut s = SmallString::<[u8; 1024]>::new();
  write!(s, "{} frobs for {} widgets.", var_a, var_b)
      .expect("writes to a String are infallible");
Safe-ish, because there is an unsafe {} hidden behind the safe interface exposed by `SmallString`. (And that crate has had vulnerabilities against it.)

But that's sort of the point: we're here looking at logging code, and the logging code itself shouldn't be doing this sort of unsafe trickery; it should be using some interface/utility that handles it, and then there's only a single spot that needs safety analysis. (Here, that would be the smallstr crate.) While I don't think C outright prevents you from building an equivalent, certainly how people tend to approach C does, and we see that here. The pattern is unsafe at the instantiation site, not at the definition, leading to far more unsafe code. (Not to mention that in C we have no unsafe {} blocks, so one must assume all such code is unsafe; but even if we consider only the laughable and impossible-to-grep "just the actually unsafe parts", it's far, far more.)

In the C at hand, the second attempt fails due to a variable never truly getting initialized (in the sense of having a meaningful value), and the version prior to that is just laughably unsafe, allocating a 1-byte buffer but passing a size far larger.

(The second attempt (i.e., time of bug) also uses the cursed variable name `l`, and the site this code is hosted on uses a font where `l` and `1` are homoglyphs.)


That seems pretty unpersuasive. It seems like you're agreeing that rust is subject to the same bug, are even aware of situations where it's happened, and are hiding behind a switcheroo where you're blaming not the language but the fact that C didn't use a standard gadget for the trick.

But (1) that says nothing about the language, you could totally implement something like smallstr in C, (2) smallstr isn't very standard! I literally had to look it up, and importantly (3) this is glibc, not app code, and you absolutely can't be pulling random libraries into the core standard library in any case.

Basically this seems like excusemaking. Rust in fact has the same problem, because it's a hard problem, and not subject to clean abstraction through the tooling. And it would be good for everyone to admit that fact rather than try to prestidigitize an explanation.


> (1) that says nothing about the language

It does. Again, the area requiring audit is substantially smaller: just smallstr. And not just because we've extracted it there, but because we statically know that unsafe behaviors are limited to there. (By way of unsafe {} blocks.) I.e., we only have to audit the unsafe code in Rust, vs. all of the code in C.

> (2) smallstr isn't very standard!

Well, the same could be said of C? Rust's stdlib, like C's, is somewhat purposefully kept small. It could be that someday it'd get added, but there is enough variation in implementation here that I'm not sure it would.

But pulling in a package in Rust is far easier than it is in C.

It might not be "standard", but I do think there's a set of crates in Rust that are what I'd call "well-known". Like Boost in C++ or requests in Python.

> (3) this is glibc, not app code, and you absolutely can't be pulling random libraries into the core standard library in any case.

That seems NIH. I see no reason glibc couldn't pull in static code, if it does the thing that needs doing, and correctly so.

> Basically this seems like excusemaking. Rust in fact has the same problem, because it's a hard problem, and not subject to clean abstraction through the tooling. And it would be good for everyone to admit that fact rather than try to prestidigitize an explanation.

No, Rust provides better tooling to solve the problem. (Even if it must still be solved, and even if you had to manually reimplement smallstr yourself.)


You're just going back to saying "Rust is better than C!" which is true, but uninteresting. The question at hand was whether this bug would have happened in a memory-safe language. And in fact it could have, because it's implementing an optimization Rust can only emulate using unsafe.

This is what drives me bananas about the rust community. The religion around memory safety persists even in circumstances where it doesn't exist. That's a cult, not an engineering effort.


Yes, glibc needs a leftpad incident


It would be quite something to rewrite glibc in Rust! Of course, it's the C/C++ runtime.


Yeah it would. There are a few attempts, such as C-gull (https://github.com/sunfishcode/c-ward/tree/main/c-gull#readm...).

> c-gull is a libc implementation. It is an implementation of the ABI described by the libc crate.

> Currently it only supports --linux-gnu ABIs, though other ABIs could be added in the future. And currently this mostly focused on features needed by Rust programs, so it doesn't have all the C-idiomatic things like qsort yet, but they could be added in the future.


As a C++ advocate in C vs C++ flamewars, I already find it quite ironic that GCC, clang, LLVM's libc, and the Windows Universal C Runtime are all written in C++.

All those C users hatting on C++, while being forced to use tooling that has dropped C for C++.


> All those C users hatting on C++

Such haberdashery!

Anyway, something that I appreciate about the Rust community is their dedication to rewriting anything and everything in Rust. I'd love it if the Lisp community also had some of that drive. Seeing a library which uses a Python script to generate documentation is so disheartening, they don't need that horrendous, awful garbage, they could just do it in Lisp.


I can't quite guess what you meant (balderdash[1]?), but I'm pretty sure you didn't mean a goods and wares shop[0]

[0]: https://www.merriam-webster.com/dictionary/haberdashery [1]: https://www.merriam-webster.com/dictionary/balderdash


Because 'hating' was misspelled as 'hatting', I did mean to refer to a provider of men's wear such as hats.


You're not exactly "forced" to use the C stdlib though.

That's one nice thing about C, that the C stdlib is so bare bones that it could just as well not exist and not much would be lost (much harder to ignore the stdlib in C++, at least for some "modern C++" features).

Besides: MUSL is written in plain C ;P


Yeah, but that isn't C any longer, it is a flavour of a language that largely looks like C.

C is the complete printout from ISO/IEC 9899.


Indeed, it is a shame. All bloated. It would be nice to have a small and lean toolchain again.


It would be quite amusing, but there's no reason in principle that it couldn't be done.

The user-facing APIs would be just as dangerous as ever, but the internal implementation-detail stuff - like this "__vsyslog_internal" - could presumably be made safer.
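
A minimal sketch of that split (hypothetical names; only the C boundary needs unsafe):

  use std::ffi::CStr;
  use std::os::raw::c_char;

  // Exported with the C ABI so C callers (and the dynamic linker)
  // see an ordinary libc-style symbol.
  #[no_mangle]
  pub extern "C" fn internal_ident_len(ident: *const c_char) -> usize {
      if ident.is_null() {
          return 0;
      }
      // SAFETY: non-null, and we trust the caller to pass a
      // NUL-terminated string, just as C's syslog must.
      let ident = unsafe { CStr::from_ptr(ident) };
      // From here on it's safe Rust with checked slices.
      ident.to_bytes().len()
  }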


It's not so unusual to write the C stdlib in a different language.

E.g. Zig is getting a libc written in Zig:

https://github.com/ziglang/zig/issues/514

Rust would work too of course. MUSL is probably the only popular C stdlib actually written in C.


I think their point is that the c in glibc stands for C, as in the language. A glibc rewrite in Rust wouldn't be glibc anymore!


Well, the Common Language Runtime that powers .Net languages isn't written with .Net, and the Java runtime environment isn't written in Java.

I know these are a lot more involved than the functions provided by the glibc, but the point is that libc means Library for C, not Library in C.

And of course on Unixes all software is expected to link to the glibc for basic functionality, no matter whether it's written in C or not. So the name is only historically accurate.


> Well, the Common Language Runtime that powers .Net languages isn't written with .Net, and the Java runtime environment isn't written in Java.

Actually, a large part of it is, and in Java's case there are bootstrapped implementations; OpenJDK isn't the only one.

Java is like C and C++, where multiple implementations are available for a given specification.

https://docs.oracle.com/javase/specs/


Yes, but even more so. Linux "expects" programs to link to libc, but many other Unix-like OSes require programs, in any language, to link to libc. The kernel syscall interface is not considered a stable API there.

Windows has a similar structure, but the required kernel interface library is not the same one as the C runtime, so it’s less confusing language-wise.


It's a funny thought, but I don't think there's any reason it shouldn't work; IIRC Redox OS is providing a libc written in Rust for compatibility.


The redox impl is here: https://github.com/redox-os/relibc

Apparently it also supports linux!


Yeah, unless you bypassed the bounds checking with unsafe {} for performance reasons. (This is an unlikely place to do so.)


Rust disables bounds checking in release builds, so no, you are wrong


Rust does not disable bounds checks in release builds.


It does, as well as overflow checks. You can re-enable everything to be safer with a custom build profile, but you'll lose at benchmarks.


Overflow checks turn into two's-complement wrapping, but that's only considered acceptable because bounds checks are not turned off.

https://play.rust-lang.org/?version=stable&mode=release&edit...

EDIT: I wonder if you're thinking of how bounds checks are sometimes optimized away? It is true that this happens in release mode more than in debug mode, but those are only the checks that can be proven redundant; if there is any doubt, they will not be optimized out. Semantically, they are still there.
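
A quick way to see both behaviors for yourself (plain std; run with and without --release):

  fn main() {
      // Integer overflow: plain `x + 1` here would panic in debug
      // builds and wrap to 0 in release builds (the overflow-checks
      // profile setting controls this).
      let x: u8 = u8::MAX;
      let y = x.checked_add(1); // explicit: None instead of a silent wrap
      println!("{y:?}");

      // Bounds checks are NOT disabled in release builds: this
      // panics under both `cargo run` and `cargo run --release`.
      let v = vec![1, 2, 3];
      let i = v.len(); // one past the end, computed at runtime
      println!("{}", v[i]);
  }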



