You're OK with the core utils being replaced by a rewrite for no obvious reason, and with said rewrite being so broken that an allegedly stable distribution actually can't properly update?
I have to say I'm rather more worried by the apparent lack of testing that their auto-update mechanism is actually updating anything (given how long it took them to notice that symptom) than by their replacing some software with a not-yet-quite-complete rewrite in their less stable, non-LTS edition.
There is no such thing as a less stable non-LTS edition. That's the stable version. The LTS version is just a stable version that keeps getting updates for longer. Non-LTS absolutely shouldn't mean unstable.
1. It literally remains stable for less time. Nine months instead of 5+ years, up to 12 if you pay them.
2. They apparently have a history of testing changes in it.
3. They appear to only sell things like Livepatch and extended support for LTS editions, and products you pay for are implicitly more stable than products you don't pay for.
Historically, they've also kept things out of an LTS release that could have gone in, making people wait for the next non-LTS release, because those things were too new or experimental. If something is good, it'll be in the next LTS; if not, it can be removed from the next non-LTS without too much impact.
Or to use Ubuntu's own terminology: "Interim releases will introduce new capabilities from Canonical and upstream open source projects, they serve as a proving ground for these new capabilities." They also call LTS 'enterprise grade' while interims are merely production-quality. Personally I see these as different levels of stability.
> It literally remains stable for less time. Nine months instead of 5+ years, up to 12 if you pay them.
Isn't "stability" in this context a direct reference to feature set which stays stable? When a version is designated stable it stays stable. You're talking about support which can be longer or shorter regardless of feature set.
When they stop adding features, it's stable. Every old xx.04 and xx.10 version of Ubuntu is stable even today, no more features getting added to 12.10. When they stop offering support, it's unsupported. 14.04 LTS became unsupported last year but not less stable.
These are orthogonal. You can offer long-term support for any possible feature combination (if you have the resources), and you can be stable with no support. In reality it's easier to freeze a feature set and support that snapshot for a long time than to chase a moving target.
I can see where you're coming from, but I think I'd prefer to describe practically all stable software as living in an unstable equilibrium in the usable region of state-space. When the stabilizing force of security patches, certificate updates, adaptations to new hardware requirements, and so on disappears, the software falls out of the usable region into the (I suppose) stable equilibrium of unusable software. And this fall happens quite rapidly in the case of a Linux distribution.
Applying the word "stable" to things in the unusable region of state space seems technically, but only technically, correct.
Not meant as a jab at Ubuntu, but I don't think people choose Ubuntu for engineering rigor. If you want something dull, predictable, and known for rigor, OpenBSD, illumos, FreeBSD, etc. seem like more likely choices.
Or Debian and Red Hat, which have the added bonus of being "boring technology."
If you have a problem with them, 20 other people have had that same problem before you did, two of them have posted on Stack Overflow, and one wrote a blog post.
OpenBSD and Illumos may be cool, but you really need to know what you're doing to use them.
For me, it's been more about the online help: you're most likely to find an Ubuntu-centric answer when you have issues. Of course you also have to consider the date of a Q&A and the version in question. Since permanently switching my desktop a few years ago, I've mostly used Pop!_OS, because I like most of their UI changes, including COSMIC, despite a handful of now mostly corrected issues... They tend to push features and kernel versions ahead of Ubuntu LTS.
That said, the underlying structure is still Ubuntu-centered. I also like Ubuntu Server, even though I don't use snaps, mostly because the install pre-configures most of the initial changes I make to Debian anyway. Sudo is configured, you get an option to import your public key and preconfigure passwordless SSH, etc. I mostly install ufw and Docker, and almost everything I run goes under Docker in practice.
Officially you are right: they release it as a stable OS after a few weeks of betas.
Unofficially, any serious user knows to stick to LTS for any production environment. In my experience, LTS is by far the most common version I encounter in the wild and in customer deployments.
In fact I don't think I ever saw someone using a non-LTS version.
Canonical certainly has these stats? Or someone operating an update mirror could infer them? I'd be curious what the real-world usage of different Ubuntu versions actually is.
The obvious reason is to have fewer bugs in the long run. A temporary increase during the transition is expected and not ideal, but after that there should be fewer of them.
> The obvious reason is to have fewer bugs in the long run.
The highest sounds are hardest to hear. Going forward is a way to retreat. Great talent shows itself late in life. Even a perfect program still has bugs.
> Sudo has released a security update to address a critical vulnerability (CVE-2025-32463) in its command-line utility. This vulnerability allows an attacker to leverage sudo's -R (--chroot) option to run arbitrary commands as root, even if they are not listed in the sudoers file.
People start making sudo more secure by replacing it with sudo-rs
One great way you can make things more secure is by reducing attack surface. sudo is huge and old, and has tons of functionality that almost no one uses (like --chroot). A from-scratch rewrite with a focus on the 5% of features that 99% of users use means less code to test and audit. Also a newer codebase that hasn't grown and mutated over the course of 35 years is going to be a lot more focused and easier to reason about.
> People start making sudo more secure by replacing it with sudo-rs
I would have much preferred it if Ubuntu went with run0 as the default instead of trying to rewrite sudo in Rust. I like Rust, but the approach seems wrong from the beginning to me. The vast majority of sudo use cases are covered by run0 in a much simpler way, and many of the sudo bugs come from the complex configurations it supports (not to mention a poorly configured sudo, which is also a security hazard and quite easy to create). Let people who need sudo install and configure it for themselves, but make something simple the default, especially for a beginner distro like Ubuntu.
I don't think run0 uses the same configuration syntax as sudo, so it's a no-go from the start.
sudo-rs can be a drop-in replacement for sudo for at least 95-99% of deployments, without any config changes necessary.
The rewrite in Rust is important because it largely prevents the appearance of new memory bugs, which might happen inadvertently if, say, while fixing a logic bug in one of sudo's more complex usages (and thus a less traversed code path), the maintainer introduced a memory bug.
This resistance, IMHO, is moot anyways since the sudo maintainer himself is in support of sudo-rs and actually helped the project in a consultancy capacity (as opposed to directly contributing code).
> I don't think run0 uses the same configuration syntax as sudo, so it's a no-go from the start.
This is ubuntu, purportedly targeting ease of use, good defaults, and new Linux users. How many Linux newbies are running with custom sudo configurations? By definition, basically none, and of those who do, it's only for passwordless sudo, which I assume can be trivially recreated in run0. For advanced or enterprise users, it is not difficult to install sudo manually or port their configuration over to run0.
> This resistance, IMHO, is moot anyways since the sudo maintainer himself is in support of sudo-rs and actually helped the project in a consultancy capacity (as opposed to directly contributing code).
I'm not categorically against sudo-rs, but use the tool for the job. If all you need is a simple way to get root privilege, sudo is overkill.
run0 can f off along with the rest of the systemd abominations. sudo worked for decades perfectly well and didn't call for any replacement. run0, like much of the systemd projects and rust rewrites, is a solution in search of a problem.
Yes, if you ignore all the bugs resulting from features that almost nobody uses.
> along with the rest of the systemd abominations
Not too interested in engaging systemd debates. I have enjoyed using systems with and without systemd, and while I understand the arguments against feature creep, I think you'd be throwing the banana out with the peel to overlook the idea behind run0.
For a security-sensitive piece of software like sudo, reducing complexity is one of the best ways to prevent logic bugs (which, as you mentioned in the sibling, is what the above bug was). If run0 can remove a bunch of unused features that are increasing complexity without any benefit, that's a win to me. Or if you don't like systemd, doas from OpenBSD is probably right up your alley, with a similar(ish) philosophy.
I have to reluctantly agree with you on the merit.
However, run0 has the property of being a systemd project, which makes it a no-go from inception. And sudo-rs has the similar property of being a virtue-signaling project rather than a real one. Hence, sudo stays.
> For anyone who wants to read more about Lennart's reasoning
I'm not sure LP is a high-quality source. He has a reputation that makes me want to listen to everyone but him.
> I'm not sure LP is a high-quality source. He has a reputation that makes me want to listen to everyone but him.
Based on his reputation, I would agree, but after reading a lot of his own words via blog posts, comments in GitHub issues, etc., I wonder how he gained that reputation. He has solid reasoning behind many of his ideas even if you disagree with them, and his comments seem pretty respectful and focused on the technical aspects. Maybe things were different in the past, or maybe some segments of the community just never forgave him for the early buggy systemd implementations, or maybe I just happened to only read things he wrote when he wasn't having a bad day. Who knows.
Yeah, but he's coming from a viewpoint that is incompatible with the community at large. "Whatever you did in the past was wrong, fuck you and your opinion, that's how it's gonna be".
The "old" version didn't have a test for the feature... the "new" version started with the tests for the "old" version... it was an easy thing to miss as a result.
As other threads have mentioned, a more advanced argument parser with detection of parsed but unused arguments could have caught this. Of course, there are already complaints about the increase in size for the Rust versions of uutils, mostly offset by a merged binary with separate symlinks. It's a mixed bag.
But I'm sure you'll be reverting back to XFree86 now.
How can you be sure that something is "backwards compatible"?
By running tests. And as it happens, the original coreutils did not have a test for this particular edge case.
Now that a divergence of behavior has been observed, all parties -- the coreutils devs and the uutils devs -- have agreed that this is an unacceptable regression and created new test cases to prevent the same misbehavior from happening again.
A lot of database companies go to great lengths to be bug-for-bug compatible with Postgres. This does happen. It takes some effort, though, which does not appear to have been applied in the case of this rewrite.
Backwards compatible means it's a drop-in replacement.
> How can you be sure that something is "backwards compatible"?
You compare the outputs from the same inputs.
> the original coreutils did not have a test for this particular edge case
So? 'man date' shows this argument option. Just because there was no existing unit test for it doesn't mean it's ok to break it. It would have taken literally 10 seconds to compare the outputs from the old and new program.
I have no opinion whatsoever on the rewrite. It might be the best thing since sliced bread for all I know. I have trouble with integrators recklessly shipping untested dependencies however.
The sliced bread might not be the best quality, but it is rather consistent and much less crummy when making yourself toast or just butter and jam. No danger of a kid cutting themselves while making their own sandwich, either.
Middle class people who think of cooking for themselves as a hobby maybe lose the ability to understand labor-saving technical advances. People who cook as a duty think of cutting bread as more work, which it quite obviously is.
If cooking is a hobby for you, you're seeking labor. Maybe that makes the obvious unintelligible. If you're poor and have a bunch of hungry kids waiting, you don't want the cutting board covering up half your counter space while you're carefully trying not to screw up eight slices of bread before something on the stove burns.
Combined with the toaster, it made sandwiches easy; it was taken away for a bit in WWII and then came back. It was one of those advancements that "stuck".
Knew what absolute disaster of a video this was going to be before clicking. Highly recommend watching Colin's videos, this one included, for the sheer level of "this is clearly a bad idea, let's do it" that he gives off and the things learned along the way.
Aside from there being no crisis, what exactly does rewriting a set of utilities with nearly no bug reports for years, and for which no new features are needed, accomplish? Aside from new bugs, that is.
There surely would be a more beneficial undertaking somewhere else.
If then you’d argue that they may do as they please with their time, fair, but then let’s not pretend this rewrite has any objective value aside from scratching personal itches and learning how cat and co are implemented.
This effort has produced new bug reports and test cases for upstream, clarifying their desired behavior. That's one positive side effect that helps everyone.
I recommend that you look into the bug trackers of the original tools. There were a lot of bug reports that came from reimplementing these tools. It's also not a replacement - at all. You and distro managers can choose not to use them.
In my experience the crisis comes more from an influx of people who want to change everything without having read, or caring about, the specifications and portability. There is, however, a lack of people who like to clean up the mess left behind.
My expectation would be that every bug in a Rust replacement is going to receive the brightest spotlight that can be found.
If the rest of coreutils is bug-free, cast the first stone.
I do not think reimplementing stuff in Rust is a bad thing. Why? Because reimplementing stuff is a very good way to thoroughly check the original. It is always good to have as many eyeballs on the code as possible.
Replacing battle-tested software with an untested rewrite is always a bad idea, even if the rewrite is written in a trendy language. Key word being untested.
I'm still shocked by the number of people who seem to believe that the borrow checker is some kind of magic. I can assure you that the core utils have all already gone through static analysers that do more checks than the Rust compiler.
> I can assure you that the core utils have all already gone through static analysers that do more checks than the Rust compiler.
Some checks are pretty much impossible to do statically for C programs because of the lack of object lifetime annotations, so no, this statement can't be right.
It is true that the borrow checker doesn't prevent ALL bugs though.
Furthermore, the "bug" in this case is due to an unimplemented feature causing a flag to be silently ignored... It's not exactly something that any static analyser (or runtime ones for that matter) can prevent, unless an explicit assert/todo is added to the codepath.
Well, you can annotate C code to do a lot more than lifetime annotations today. The tooling for C analysis is best in class.
And even without annotations, you can prove safe a lot of constructs by being conservative in your analysis especially if there is no concurrency involved.
Note that I wasn't commenting on this specific issue. It's more about my general fatigue with people implying that rewrites in Rust are always better or should always be done. I like Rust, but the trendiness surrounding it is annoying.
You can do a lot of things. Yes, there are formally verified programs and libraries written in C. But most C programs are not formally verified, including the GNU coreutils (although they are battle-tested). It's just that the effort involved is higher, and the learning curve for verifying C code correctly is staggering. Rust provides a pretty good degree of verification out of the box, for free.
Like any trendy language, you've got some people exaggerating the powers of the borrow checker, but I believe Rust has generally brought about a lot of good outcomes. If you're writing a new piece of systems software, Rust is pretty much a no-brainer. You could argue for a language like Zig (or Go, if you're fine with a GC and a bit more boilerplate), but that puts even more of a spotlight on the fact that C is just not a viable choice for most new programs anymore.
The rewrites-in-Rust are more controversial, and they are criticized just as much as they are hyped here on HN, but I think many of them brought a lot of good to the table. It's not (just?) because the C versions were insecure, but mostly because a lot of these new Rust tools replaced C programs that had become quite stagnant. Think of ripgrep, exa/eza, sd, nushell, delta and difft, dua/dust, the various top clones. And these are just command-line utilities. Rewriting something in Rust is not an inherently bad idea if what you are replacing clearly needs a modern makeover, or if the component is security-critical and the code you are replacing has a history of security issues.
I was always more skeptical about the coreutils rewrite project because the only practical advantage they can bring to the table is more theoretical safety. But I'm not convinced it's enough. The Rust versions are guaranteed to not have memory or concurrency related bugs (unless someone used unverified unsafe code or someone did something very silly like allocating a huge array and creating their own Von Neumann Architecture emulator just to prove you can write unsafe code in Rust). That's great, but they are also more likely to have compatibility bugs with the original tools. The value proposition here is quite mixed.
On the other hand, I think that if Ubuntu and other distros persist in trying to integrate these tools the long-term result will be good. We will get a more maintainable codebase for coreutils in the future.
> It is true that the borrow checker doesn't prevent ALL bugs though.
True, but "prevents all bugs" is pretty much what the "Rust is better" debate gets digested into. So you end up with rewrites that introduce errors any programmer in any language can make, and since you're doing a full rewrite, that WILL happen no matter what you do.
If that's acceptable fine, otherwise not. But you cannot hide from it.
But that's hardly relevant to coreutils, is it? Do these utilities even manage memory?
These are command line utilities meant to be a human porcelain for libc. And last I checked, libc was C.
Ideally these should be developed in tandem, and so should the kernel. This is not the case in Linux for historical reasons, but some of the other utilities such as iputils and netfilter are. The kernel should be Rust long before these porcelain parts are.
~70% of bugs are in NEW code, at companies that have mottos like "move fast and break things". The same study found that old C and C++ codebases tend to have these bugs once in a blue moon, and that other bug classes are more prevalent.
Only if you don't use unsafe though. If you look at almost any real-world project written in Rust, you'll find tons and tons of `unsafe` in its dependency tree.
Congratulations to you on being the 10000th? person [0] to miss the point of unsafe/safe.
1. Unsafe doesn't mean the code is actually unsafe. It only tells you that the compiler cannot itself guarantee its safety.
2. The unsafe marker tells code reviewers to give that specific section of code more scrutiny. Clippy also has an option that requires the programmer to add a comment explaining why the unsafe code is actually safe in context.
3. And IF a bug does occur, it minimizes the amount of code you need to audit.
>You do coverage testing, which would have found the missing date -r path.
The original coreutils test suite didn't cover the -r path. The same bug would not have been statically discovered in most programming languages, except perhaps highly functional ones like Haskell.
>You do proper code review, which would have found the missing date -r path.
And in an ideal world there would be no bugs at all. This is pointless -- we all know that we need to do a proper code review, but humans make errors.
Then any replacement project should start with implementing a better test suite in order to know what you're doing. That has been the case with many other utilities such as ntp.
And it should most certainly not be possible to declare options and leave them as empty placeholders. That should be flagged just like an unused variable is flagged. That is a problem with whatever option library they chose to use.
That alone should disqualify it from being a replacement yet. We're talking about a stable operating system used in production here. This is simply wrong on so many levels.
If you read my comment with a little care, you may realize that I said nothing about replacing the software, I only spoke about writing it. In my own Distro I wouldn't replace coreutils with a Rust rewrite either at this point.
On the borrow checker: it doesn't prevent logic errors, as is commonly understood. Those errors are what careful use of Rust's type system could potentially prevent in many cases, but you can write Rust successfully without leveraging it. The Rust compiler is an impressive problem-avoidance tool, but there are classes of problems even it can't prevent.
Let us not fall into the trap of thinking that just because the Rust compiler fails to prevent all issues, we should therefore abandon it. We shouldn't forget our shared history of mistakes rustc would have prevented (excerpt):
- CVE-2025-5278, sort: heap buffer under-read in begfield(), Out-of-bounds heap read in traditional key syntax
- CVE-2024-0684, split: heap overflow in line_bytes_split(), Out-of-bounds heap write due to unchecked buffer handling
- CVE-2015-4042, sort: integer overflow in keycompare_mb(), Overflow leading to potential out-of-bounds and DoS
If we were civil engineers with multiple bridge collapses in our past, and we finally developed a tool that reliably prevents an especially dangerous and common type of bridge collapse, we would be in the wrong profession if we scoffed at using that tool. Whether it is Rust or some C checker isn't really the point here. The point is building stable, secure, and performant software others can rely on.
Any new method to achieve this more reliably has to be tested. Ideally in an environment where harm is low.
It is worth noting that all three CVEs could have been prevented by simple bounds checking at runtime. Preventing them does not require the borrow checker or any other fancy Rust features.
It is also worth noting that theoreticals don't help such discussions either.
Yes, C programmers can do many more checks. The reality on the ground is -- they do not.
Forcing checks by the compiler seems to be the only historically proven method of making programmers pay more attention.
If you can go out there and make _all_ C code utilize best-in-class static checkers, by all means, go and do so. The world would be a much better place.
If your only criterion is "remove the buffer under- and overflows", yes. IMO Rust helps with a few more things. Its strong static typing allows for certain gymnastics that make invalid states unrepresentable. Though in fairness, that is sometimes taken too far and makes the code hard to maintain.
> I can assure you that the core utils have all already went through static analysers doing more checks than the Rust compiler.
I'd be very interested in reading more about this. Could you please explain what are these checks and how they are qualitatively and quantitatively better than the default rustc checks?
Please note that this is about checks on the codebase itself - not system/integration tests which are of course already applicable against alternative implementations.
From my point of view, no, it shouldn’t be routine. Lifetime annotations and borrow checking are pretty far from the sweet spot of easy to deploy, useful and get out of the way when it comes to static analysis.
Honestly, Ada is a far better choice than Rust when it comes to the safety/usability ratio and you can even add Spark on top which Rust has no equivalent of. But somehow we are saddled with the oversold fashion machine even when it makes no sense. I mean, look at the discussion. It’s full of people who don’t even know what static analysis is but come explaining to me that I am a C zealot stuck in the past and ignorant of the magnificence of our lord and saviour Rust. I don’t even use C.
I don't care that people rewrite stuff because they want to be cool. If they maintain it, it's their responsibility and their time. I do care about distributions replacing things with untested things to sound cool, however. That's shoddy work.
As an OCaml programmer, I also deeply disagree that pure functions should be the norm. They have their place but are no panacea either. That's the Haskell hype instead of the Rust hype this time.
Claiming without evidence that something is battle-tested while also claiming the competition is "trendy" does not help any argument you might be attempting to make.
I am trying to read your comments charitably but I am mostly seeing generalizations which makes it difficult to extract useful info from your commentary.
We can start by dropping dismissive language like "trendy" and "magic", "fashion" and "Rust kids". We can also continue by saying that "believing the borrow checker is some kind of magic" is not an interesting thing to say as it does not present any facts or advance any discussion.
What you "assure" us of is also inconsequential.
One fact remains: there is a multitude of CVEs that, had the program been written in Rust, would not have happened. I don't think anyone serious ever claimed that Rust prevents logic bugs. People are simply saying: "Can we just have fewer possible bugs by virtue of the program compiling, please?" -- and Rust gives them that.
What's your objection to that? And let us leave aside your seeming personal annoyance of whatever imaginary fandom you might be seeing. Let us stick to technical facts.
The objections I see against Rust and Rust rewrites remind me a lot of the objections I saw against Linux and Linux users by Windows users, and against macOS and macOS users by Linux users. Dismissive language and denigrating comments without any technical backing; assertions of self-superiority. "It's a toy", "it's not mature", "it's a worse version of blah blah", "my thing does stuff their thing doesn't do and that's important, but their thing does stuff my thing doesn't do and that's irrelevant".
Honestly it's at the point where I see someone complaining about a Rust rewrite and I just go ahead and assume that they're mouthing off about something because they think it's trendy and they think it's cool to hate things people like. I hate being prejudicial about comments but I don't have the energy to spend trying to figure out if someone is debating in good faith or not when it seems to so rarely be the case.
My impression is exactly the same. For multiple years now I keep seeing grandiose claims about "Rust fandom" and all I ever see in those threads are... the C people who complain about that Rust fandom that I cannot for the life of me find in a 300+ comments thread.
It's really weird, at one point I started asking myself if many comments are just hidden from me.
Then I just shrugged it off and concluded that it's plain old human bias and "mine is good, yours is bad" tribe mentality and figured it's indeed not worth my time and energy to do further analysis on tribal instinctive behaviour that's been well-explained in literature for like a century at this point.
I have no super strong feelings for or against Rust, by the way. I have used it to crushing success exactly where it shines and for that it got my approval. But I also work a lot with Elixir and I would rarely try to make a web app with Rust; multiple PLs have the frameworks that make this much better and faster and more pleasant to do.
But it does make me wonder: what stake do these people have in the whole thing? Why do they keep mouthing off about some imaginary zealots that are nowhere to be found?
I define somebody as a zealot by their expression: fanaticism, generalizations, editorial practices like misconstruing with the goal of tearing down a straw man, and so on.
If you show me Rust advocates with comments like these I would be happy to agree that there are in fact Rust zealots in this thread.
Generally, they don't. Zealotry is not specific to Rust, but you've reminded me of some moments in the 2020's edition of Programming Language Holy Wars™.
Like, one zealot stabbing at another HN commenter saying "Biased people like yourself don't belong in tech", because the other person simply did not like the Rust community. Or another zealot trying to start a cancel campaign on HN against a vocal anti-Rust person. Yet another vigorously denied the existence of Rust supremacism, while simultaneously raging on Twitter about Microsoft not choosing Rust for the Typescript compiler.
IMO, the sad part is watching zealots forget. Reality becomes a story in their head; much kinder, much softer to who they are. In their heads, they are an unbiased and objective person, whereas a "zealot" is just a bad word for a bad, faraway person. Evidence can't change that view because the zealot refuses to look & see; they want to talk. Hence, they fail the mirror test of self-awareness.
Well, most of them fail. The ones who don't forget & don't deny their zealotry, I have more respect for.
I fully stand behind my "Biased people like yourself don't belong in tech" statement from back then. If you follow the thread you'll see that this person mostly just wanted to hate. I tried to reason with them and they refused to participate.
I, or anybody else, owe them no grace beyond a certain point.
Where do you draw the line when confronted with people who already dislike you because they put you in a camp you don't even belong to but you still tried to reason with them to make them see nuance?
Skewing reality to match your bias makes for boring discussions. But again, I stand behind what I said then. And I refuse to be called a zealot. I don't even use Rust as actively; I use the right tool for the job and Rust was that on multiple projects.
If you're not interested in the context then please don't make hasty conclusions and misrepresent history. If you want to continue that old discussion here, I'm open to it.
EDIT: I would also love it if people just gave up the "zealot" label altogether. It's one of the ways to brand people and make them easier to hate or insult. I don't remember ever calling any opponent from the 'other side' a C/C++ zealot, for what it's worth. And again, if people want to actually discuss, I am all for it. But this is not what I have witnessed, historically.
Yes, I also think that bugs in a Rust replacement will receive more attention than other bugs. Why?
- the cult-like evangelism from the Rust community that everything written in Rust would be better
- the general notion that rewriting tools should bring clear and tangible benefits. Rewriting something mostly because the new language is safer will provoke irritation and frustration among affected end users when the end product turns out to introduce new issues
Somebody linked a comment from an Ubuntu maintainer where they said they want more resilient tools.
If licensing were the only concern, then I'd think they wouldn't have switched the programming language?
And yeah, obviously using Rust will not eliminate all CVEs. It does eliminate buffer overflows and underflows though. Not a small thing.
Also I would not uncritically accept the code of the previous coreutils as good. It got the job done (and has memory safety problems here and there). But is it really good? We can't know for sure.
C is a bad language in many respects, and Rust greatly improves on the situation. Replacing code written in C with code written in Rust is good in and of itself, even if there are some costs associated with the transition.
I also don't think that Rust itself is the only possible good language to use to write software - someone might invent a language in the future that is even better than Rust, and maybe at some point it will make sense to port rust-coreutils to something written in that yet-undesigned language. It would be good to design software and software deployment ecosystems in such a way that it is simply possible to do rewrites like this, rather than rely so much on the emergent behavior of one C source code collection + build process for correctness that people are afraid to change it. Indeed I would argue that one of the flaws of C, a reason to want to avoid having any code written in it at all, is precisely that the C language and build ecosystem make it unnecessarily difficult to do a rewrite.
> C is a bad language in many respects, and Rust greatly improves on the situation. Replacing code written in C with code written in Rust is good in and of itself
That's empty dogma.
C's issue is that C compilers provide very little in terms of safety analysis by default. That doesn't magically turn Rust into a panacea. I will take proven C or even statically analysed C above what the borrow checker adds to Rust any day of the week.
I like the semantic niceties Rust adds when doing new development but that doesn't in any way justify all rewrites as improvement by default.
> C's issue is that C compilers provide very little in terms of safety analysis by default.
Yes this is precisely a respect in which C is bad. Another respect is that C allows omitting curly braces after an if-statement, which makes bugs like https://www.codecentric.de/en/knowledge-hub/blog/curly-brace... possible. Rust does not allow this. This is not an exhaustive list of ways in which Rust is better than C.
> I will take proven C or even statically analysed C above what the borrow checker adds to Rust any day of the week.
Was coreutils using proven or statically analyzed C? If not, why not?
> This is not an exhaustive list of ways in which Rust is better than C.
Which is why your first and only example is a bug from over a decade ago, caused by an indentation error that C compilers can trivially detect as well.
Can detect, but how many are forced? Have you tried using Gentoo with "-Wall -Werror" everywhere?
You have some theoretical guardrails that aren't widely used in practice, and often can't even be used. If they could just be introduced like that, they'd likely have been added to the standard in the first place.
The fact that the previous commenter can even ask whether someone has analyzed or proven coreutils shows how little this "can detect" really guarantees.
In the end, your "can trivially detect" is fairly useless compared to Rust enforcing these guarantees for everyone, all the time.
That seems to come from carrying the meaning of errors and warnings from other languages over to C. In other languages, an error means there might be some mistake, and a warning is a minor nitpick. For C, a warning is a stern warning. It is the compiler saying: "this is horribly broken, and I am compiling it to something totally different from what you thought. This will never work, and you should fix it, but I will still do my job and produce the code, because you are the boss." An error is more akin to the compiler not even knowing what that syntax could mean.
Honestly, this is because I like C. I want control.
This is a silly thing to point to, and the very article you linked to argues that the lack of curly braces is not the actual problem in that situation.
In any case, both gcc and clang will give a warning about code like that[1] with just "-Wall" (gcc since 2016 and clang since 2020). Complaining about this in 2025 smells of cargo cult programming, much like people who still use Yoda conditions[2] in C and C++.
C does have problems that make it hard to write safe code with it, but this is not one of them.
It seems like you're trying to fix a social problem (programmers don't care about doing a good job) with a technical solution (change the programming languages). This simply doesn't work.
People who write C code ignoring warnings are the same people who in Rust will resort to writing unsafe with raw pointers as soon as they hit the first borrow check error. If you can't force them to care about C warnings, how are you going to force them to care about Rust safety?
I've seen this happen; it's not seen at large because the vast majority of people writing Rust code in public do it because they want to, not because they're forced.
I think it works, and quite well even. Defaults matter, a lot, and Rust and its stdlib do a phenomenal job at choosing really good ones, compared to many other languages. Cargo's defaults maybe not so much, but oh well.
In C, sloppy programmers will generally create crashy and insecure code, which can then be fixed and hardened later.
In Rust, sloppy programmers will generally create slower and bloated code, which can then be optimized and minimized later. That's still bad, but for many people it seems like a better trade-off for a starting point.
Inexperienced people who don't know better will make safe, bloated code in Rust.
Experienced people who simply ignore C warnings because they're "confident they know better" (as the other poster said) will write unsafe Rust code regardless of all the care in the world put in choosing sensible defaults or adding a borrow checker to the language. They will use `unsafe` and call it a day -- I've seen it happen more than once.
To fix this you have to change the process being used to write software -- you need to make sure people can't simply (for example) ignore C warnings or use Rust's `unsafe` at will.
This dogma is statistically verifiable. We could also replace them with Go counterparts.
> I will take proven C or even statically analysed C
This just means you don't understand static analysis as well as you think you do. A rejection of invalid programs by a strict compiler will always net more safety by default than a completely optional step after the fact.
> Replacing code written in C with code written in Rust is good in and of itself, even if there are some costs associated with the transition.
No it isn't. In fact, "Replacing code written in <X> with code written in <Y> is good in and of itself" is a falsehood, for any pair of <X> and <Y>. That kind of unqualified assertion is what the deluded say to themselves, or propagandists (usually <Y> hype merchants) say out loud.
Furthermore, "designing for a future rewrite" is absolute madness. There is already a lot of YAGNI waste work going on. It's fine to design software to be modular, reusable, easily comprehensible, and so on, but designing it so its future rewrite will be easier - WTF? You haven't even built the first version yet, and you're already putting work into designing the second version.
Fashions are fickle. You can't even know what will be popular in the future. Don't try to anticipate it and design for it now.
> Furthermore, "designing for a future rewrite" is absolute madness. There is already a lot of YAGNI waste work going on. It's fine to design software to be modular, reusable, easily comprehensible, and so on, but designing it so its future rewrite will be easier - WTF? You haven't even built the first version yet, and you're already putting work into designing the second version.
If software is in fact designed to be modular, reusable, and easily comprehensible, then it should be pretty easy to rewrite it in another language later. The fact that many people are arguing that programmers should not even attempt to rewrite C coreutils, for fear of breaking some poorly understood emergent behavior of the software, is evidence that C coreutils is not in fact modular, reusable, and easily comprehensible. This is true regardless of whether the Rust rewrite (or a rewrite in another language) actually happens.
> C coreutils is not in fact modular, reusable, and easily comprehensible
It's not. I never said it was. Nor are my bank's systems; I don't want them to fuck them up either. My bank's job is not to rewrite their codebase in shinier, newer languages that look nice on their staff's CVs, their job is to continue to provide reliable banking services. The simplest, cheapest way for them to do that is to not rewrite their software at all.
What I was addressing was two different approaches to "design[ing] software [...] in such a way that it is simply possible to do rewrites"
* One way is evergreen: think about modularity, reusability, and good documentation in the code you're writing today. That will help with any mooted future rewrite.
* The other way, which you implied, is to imagine what the future rewrite might look like, and design for that now. That way lies madness.
I'm 50 and prefer Rust... though tbh I haven't worked much with C or Rust. I just never liked C, preferring to stick to higher-level languages, even C# over it. I do like Rust, though, even if I sometimes feel like I'm pulling my hair out trying to grok ownership symbols. Most Rust I understand by looking at it... I cannot say the same of C.
I don't think there is one programming language that is best suited for all types of programs. I think that Rust is probably the best language currently in use for specifically implementing Unix coreutils, but I don't think that this implies that (say) Zig or Odin or Go or Haskell would necessarily be terrible choices (although I really would pick Rust rather than any of those).
But my point was that there's no reason to think that the specific package of design decisions that Rust made as a language is the best possible one; and there's no reason why people shouldn't continue to create new programming languages including ones intended to be good at writing basic PC OS utils, and it's certainly possible that one such language might turn out to do enough things better than Rust does that a rewrite is justified.
I mean, all good then.