
Agreed. I think that announcement was unprofessional.

This was a unilateral decision affecting others' hard work, and the author didn't give them the opportunity to provide feedback on the change.

It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.

This is breaking support for multiple ports to rewrite some feature for a tiny security benefit. And doing so on an unacceptably short timeline. Introducing breakage like this is unacceptable.

There's no clear cost-benefit analysis done for this change. Canonical or debian should work on porting the rust toolchain (ideally with tier 1 support) to every architecture they release for, and actually put the horse before the cart.

I love and use Rust. It is my favorite language and I use it in several of my OSS projects, but I'm tired of this "rewrite it in rust" evangelism and the reputational damage it does to the Rust community.





> I love and use Rust. It is my favorite language and I use it in several of my OSS projects, but I'm tired of this "rewrite it in rust" evangelism and the reputational damage it does to the Rust community.

Thanks for this.

I know, intellectually, that there are sane/pragmatic people who appreciate Rust.

But often the vibe I’ve gotten is the evangelism, the clear “I’ve found a tribe to be part of and it makes me feel special”.

So it helps when the reasonable signal breaks through the noisy minority.


>I know, intellectually, that there are sane/pragmatic people who appreciate Rust.

For the most part that is almost everyone who works on Rust and writes Rust. The whole coreutils saga was pretty much entirely caused by Canonical: the coreutils rewrite project was originally a hobby project, iirc, and NOT ready for prod.

For the most part the coreutils rewrite is going well, all things considered. Bugs are fixed quickly, and performance will probably exceed the original implementation in some cases, since concurrency in Rust is a cakewalk.
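
As a minimal sketch of what "concurrency is a cakewalk" means in practice (purely illustrative, not code from uutils): scoped threads let you fan work out over borrowed data, and the borrow checker rules out data races at compile time.

    use std::thread;

    // Count lines in several chunks of input in parallel. The borrow checker
    // guarantees the spawned threads cannot outlive the data they borrow and
    // cannot mutate a chunk that another thread is reading.
    fn count_lines_parallel(chunks: &[&str]) -> usize {
        thread::scope(|s| {
            let handles: Vec<_> = chunks
                .iter()
                .map(|chunk| s.spawn(move || chunk.lines().count()))
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).sum()
        })
    }

    fn main() {
        let chunks = ["a\nb\nc", "d\ne", "f"];
        println!("{}", count_lines_parallel(&chunks)); // prints 6
    }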

The whole "rewrite it in Rust" idea largely stemmed from the notion that if you have a program in C and a program in Rust, then the program in Rust is "automatically" better, which is often the case. The exception is very large, battle-tested projects with custom tooling in place to ensure the issues that make C/C++ a nightmare are somewhat reduced. Rust ships with the borrow checker by default, so logically it's like for like.

In the real world that is not always the case: there are still plenty of opportunities for straight-up logic bugs and crashes (see the Cloudflare saga) that are entirely due to bad programming practices.

Rust is the nail and the hammer, but you can still hit your finger if you don't know how to swing it properly.
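
A sketch of "hitting your finger" in entirely safe Rust (illustrative only, not from any project discussed here): no memory unsafety anywhere, yet a guaranteed crash on unexpected input.

    fn main() {
        // Pretend this value came from a config file or a network request.
        let input = "not-a-number";

        // Borrow-checker approved, memory safe, and still panics at runtime:
        // unwrap() turns an unexpected Err into an immediate crash.
        let port: u16 = input.parse().unwrap();

        println!("listening on port {port}");
    }

The fix is ordinary error handling (match or the ? operator), which Rust makes convenient but cannot force on you.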

FYI, for the purpose of disclosing bias, I am one of the few "Rust first" developers. I learned the language in 2021, and it was the first "real" programming language I learned how to use effectively. My attempts to dive into other languages have been short-lived and incredibly frustrating, because Rust is a first-class example of how to make a systems programming language.


It really makes me upset that we are throwing away decades of battle tested code just because some people are excited about the language du jour. Between the systemd folks and the rust folks, it may be time for me to move to *BSD instead of Linux. Unfortunately, I'm very tied to Docker.

That “battle-tested code” is often still an enduring and ongoing source of bugs. Maintainers have to deal with the burden of working in a 20+ year-old code base with design and architecture choices that probably weren’t even a great idea back then.

Very few people are forcing “rewrite in rust” down anyone’s throats. Sometimes it’s the maintainers themselves who are trying to be forward-thinking and undertake a rewrite (e.g., fish shell), sometimes people are taking existing projects and porting them just to scratch an itch and it’s others’ decisions to start shipping it (e.g., coreutils). I genuinely fail to see the problem with either approach.

C’s long reign is coming to an end. Some projects and tools are going to want to be ahead of the curve, some projects are going to be behind the curve. There is no perfect rate at which this happens, but “it’s battle-tested” is not a reason to keep a project on C indefinitely. If you don’t think {pet project you care about} should be in C in 50 years, there will be a moment where people rewrite it. It will be immature and not as feature-complete right out the gate. There will be new bugs. Maybe it happens today, maybe it’s 40 years from now. But the “it’s battle tested, what’s the rush” argument can and will be used reflexively against both of those timelines.


As long as LLVM (C++, but still) is not rewritten in Rust [0], I don't buy it. C is like JavaScript: it's not perfect, it's everywhere, and you cannot replace it without a lot of effort and bugfix/regression tests.

Take SQLite for example (25 years old [3]): there are already two rewrites in Rust, [1] and [2], and each one has its own bugs.

And as an end user I'm more inclined to trust the battle-tested original for my prod than its copies. As long as I don't have proof that the rewrite is at least as good as the original, I'll stay with the original. Simple equals more maintainable. That's also why the SQLite maintainers won't rewrite it in any other language [4].

The trade-off of Rust is "you can lose features and have unexpected bugs like any other language, but don't worry, they will be memory-safe bugs".

I'm not saying rust is bad and you should not rewrite anything in it, but IMHO rust programmers tend to overestimate the quality of the features they deliver [5] or something along these lines.

Memory safe != good product

[0] https://rustc-dev-guide.rust-lang.org/overview.html
[1] https://github.com/epilys/rsqlite3
[2] https://github.com/tursodatabase/turso/
[3] https://sqlite.org/chronology.html
[4] https://www.sqlite.org/whyc.html
[5] https://www.phoronix.com/news/Ubuntu-25.10-Broken-Upgrade


systemd has been the de facto standard for over a decade now and is very stable. I have found that even most people who complained about the initial transition are very welcoming of its benefits now.

Depends a bit on how you define systemd. Just found out that the systemd developers don't understand DNS (or IPv6). Interesting problems result from that.

> Just found out that the systemd developers don't understand DNS (or IPv6).

Just according to Github, systemd has over 2,300 contributors. Which ones are you referring to?

And more to the point, what is this supposed to mean? Did you encounter a bug or something? DNS on Linux is sort of famously a tire fire, see for example https://tailscale.com/blog/sisyphean-dns-client-linux ... IPv6 networking is also famously difficult on Linux, with many users still refusing to even leave it enabled, frustratingly for those of us who care about IPv6.


Systemd-resolved invents DNS records (not really something you want to see; it makes debugging DNS issues a nightmare). But worse, it populates those DNS records with IPv6 link-local addresses, which really have no place in DNS.

Then, after a nice debugging session into why your application behaves so strangely when all the data in DNS is correct, you find that this issue has been reported before and was rejected as won't-fix, works-as-intended.


Hm, but systemd-resolved mainly doesn't provide DNS services, it provides _name resolution_. Names can be resolved using more sources than just DNS, some of which do support link-locals properly, so it's normal for getaddrinfo() or the other standard name resolution functions to return addresses that aren't in DNS.

i.e. it's not inventing DNS records, because the things returned by getaddrinfo() aren't (exclusively) DNS records.

The debug tool for this is `getent ahosts`. `dig` is certainly useful, but it makes direct DNS queries rather than going via the system's name resolution setup, so it can't tell you what your programs are seeing.
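
To make the distinction concrete, here's a minimal sketch (the hostname is made up): Rust's standard library, like most language runtimes, resolves names via the platform resolver (getaddrinfo() on Linux), so it sees whatever the NSS configuration provides, the same path `getent ahosts` exercises, not what a raw DNS query would return.

    use std::net::ToSocketAddrs;

    fn main() -> std::io::Result<()> {
        // Goes through getaddrinfo(), i.e. the NSS-configured resolution path,
        // not a direct DNS query like the one `dig` performs.
        for addr in ("printer.local", 80).to_socket_addrs()? {
            println!("{addr}");
        }
        Ok(())
    }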


systemd-resolved responds on port 53. It inserts itself in /etc/resolv.conf as the DNS resolver that is to be used by DNS stub resolvers.

It can do whatever it likes as long as it follows DNS RFCs when replying to DNS requests.

Redefining recursive DNS resolution as general 'name resolution' is indeed exactly the kind of horror I expect from the systemd project. If systemd-resolved wants to do general name resolution, then just take a different transport protocol (dbus for example) and leave DNS alone.


It's not from systemd though. glibc's NSS stuff has been around since... 1996?, and it had support for lookups over NIS in the same year, so getaddrinfo() (or rather gethostbyname(), since this predates getaddrinfo()!) has never been just DNS.

systemd-resolved normally does use a separate protocol, specifically an NSS plugin (see /etc/nsswitch.conf). The DNS server part is mostly only there as a fallback/compatibility hack for software that tries to implement its own name resolution by reading /etc/hosts and /etc/resolv.conf and doing DNS queries.
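
For reference, the split is visible in the `hosts:` line of /etc/nsswitch.conf. On a distribution using systemd-resolved it looks roughly like this (ordering varies by distro; treat this as an illustrative sketch, not any particular distro's default):

    # /etc/nsswitch.conf (hosts line only)
    # "resolve" is systemd-resolved's NSS plugin; "dns" is the classic glibc
    # stub resolver, consulted only if resolved is unavailable.
    hosts: files myhostname resolve [!UNAVAIL=return] dns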

I suppose "the DNS compatibility hack should follow DNS RFCs" is a reasonable argument... but applications normally go via the NSS plugin anyway, not via that fallback, so it probably wouldn't have helped you much.


I'm not sure what you are talking about. Our software has a stub resolver that is not the one in glibc. It directly issues DNS requests without going through /etc/nsswitch.conf.

It would have been fine if it were getaddrinfo (and it was done properly), because getaddrinfo gives back a socket address, and that can carry the scope ID for the IPv6 link-local address. In DNS there is no scope ID, so it will never work in Linux (it would work on Windows, but that's a different story).
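
A small sketch of that distinction (hypothetical values): a socket address has a field for the scope ID that a link-local address needs, while a DNS AAAA record has nowhere to put it.

    use std::net::{Ipv6Addr, SocketAddrV6};

    fn main() {
        // A link-local (fe80::/10) address is only meaningful together with an
        // interface, identified by the scope ID. getaddrinfo() can fill this in
        // (sin6_scope_id); an AAAA record cannot carry it.
        let link_local: Ipv6Addr = "fe80::1".parse().unwrap();
        let addr = SocketAddrV6::new(link_local, 443, 0, 2); // 2 = hypothetical interface index
        println!("{} on interface {}", addr.ip(), addr.scope_id());
    }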


If you don't like those additional name resolution methods, then turn them off. Resolved gives you full control over that, usually on a per-interface basis.

If you don't like that systemd is broken, then you can turn it off. Yes, that's why people are avoiding systemd. Not so much that the software has bugs, but the attitude of the community.

It's not broken - it's a tradeoff. systemd-resolved is an optional component of systemd. It's not a part of the core. If you don't like the choices it took, you can use another resolver - there are plenty.

I don't think many people are avoiding systemd now - but those who do tend to do it because it non-optionally replaces so much of the system. OP is pointing out that's not the case of systemd-resolved.


It's not a trade-off. Use of /etc/resolv.conf and port 53 is defined by historical use and by a large number of IETF RFCs.

When you violate those, it is broken.

That's why systemd has such a bad reputation. Systemd almost always breaks existing use in unexpected ways. And in the case of DNS, it is a clearly defined protocol, which systemd-resolved breaks. Which you claim is a 'tradeoff'.

When a project ships an optional component that is broken, it is still a broken component.

The sad thing about systemd (including systemd-resolved) is that it is default on Linux distributions. So if you write software then you are forced to deal with it, because quite a few users will have it without being aware of the issues.


That's the main problem with systemd: replacing services that don't need replacing and doing a bad job of it. Its DNS resolver is particularly infamous for its problems.

Even worse, the license change (GPL -> MIT) means the Rust replacements will be less beneficial to the community than the originals.

Rust has no specific license requirements on code written in it. People choose whatever license they prefer.

True, but you might want to look into the licenses people are actually choosing for Rust versions of coreutils/uutils and who's promoting them.

Sure, those authors chose that license because they did not particularly care about the politics of licenses and went with the most common one in the Rust ecosystem, which is MIT/Apache 2.

If folks want more Rust projects under licenses they prefer, they should start those projects.


I released my most recent Rust project under the GPLv3. The first issue was someone asking me to relicense it under MIT. I politely declined.

I bring this up because no matter what you choose, someone will wish it was otherwise.


> If folks want more Rust projects under licenses they prefer, they should start those projects.

100% true, but it also hides a powerful fact: our choices aren't limited to doing it ourselves. Listening to others and discussing how to do things as a group is the essence of a community seeking long-term stability and fairness. It's how we got to the special place we are now.

Not everyone can or should start their own open source project. Maybe they're already doing another one. Maybe they don't know how to code. The viewpoint of others/users/customers is valid and should not only be listened to but asked for.


I agree that throwing away battle-tested code is wasteful and often not required. Most people are not of the mindset of just throwing things away, but there is a drive to make things better. There are some absolute monoliths, such as the Linux kernel, that will likely never break free of their C shackles, and that's completely okay and acceptable to me.

Well, what's the alternative?

It is basic knowledge that memory safety bugs are a significant source of vulnerabilities, and by now it is well established that the first developer who can write C without introducing memory safety bugs hasn't been born yet. In other words: if you care about security at all, continuing with the status quo isn't an option.

The C ecosystem has tried to solve the problem with a variety of additional tooling. This has helped a bit, but didn't solve the underlying problem. The C community has demonstrated that it is both unwilling and unable to evolve C into a memory-safe language. This means that writing additional C code is a Really Bad Idea.

Software has to be maintained. Decade-old battle-tested codebases aren't static: they will inevitably require changes, and making changes means writing additional code. This means that your battle-tested C codebase will inevitably see changes, which means it will inevitably see the introduction of new memory safety bugs.

Google's position is that we should simply stop writing new code in C: you avoid the high cost and real risk of a rewrite, and you also stop the never-ending flow of memory safety bugs. This approach works well for large and modular projects, but doing the same in coreutils is a completely different story.

Replacing battle-tested code with fresh code has genuine risks, there's no way around that. The real question is: are we willing to accept those short-term risks for long-term benefits?

And mind you, none of this is Rust-specific. If your application doesn't need the benefits of C, rewriting it in Python or Typescript or C# might make even more sense than rewriting it in Rust. The main argument isn't "Rust is good", but "C is terrible".


But the result of the battle test is the reason to throw the crippled veteran away!

I agree with everything you've said here, except that the reality of speaking with a "Rust first" developer is making me feel suddenly ancient. But that aside, the memory safety parts are a huge benefit, but far from the only one. Option and Result types are delightful. Exhaustive matching expressions that won't compile if you add a new variant that's not handled are huge. Types that make it impossible to accidentally pass a PngImage into a function expecting a str, even though they might both be defined as contiguous series of bytes down deep, make lots of bugs impossible. A compiler that gives you freaking amazing error messages that tell you exactly what you did wrong and how you can fix it sets the standard, in my experience. And things like "cargo clippy", which tell you how you could improve your code, even if it's already working, to make it more efficient or more idiomatic, are icing on the cake.
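
A tiny sketch of the exhaustiveness point (illustrative types, not from any real codebase):

    enum ImageFormat {
        Png,
        Jpeg,
    }

    fn extension(format: &ImageFormat) -> &'static str {
        // Exhaustive match: add a Webp variant later and this function stops
        // compiling until the new case is handled, so the "forgot to update
        // this switch" class of bug cannot ship.
        match format {
            ImageFormat::Png => "png",
            ImageFormat::Jpeg => "jpg",
        }
    }

    fn main() {
        for format in [ImageFormat::Png, ImageFormat::Jpeg] {
            println!(".{}", extension(&format));
        }
    }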

People so often get hung up on Rust's memory safety features, and dismiss it as though that's all it brings to the table. Far from it! Even if Rust were unsafe by default, I'd still rather use it than, say, C or C++ to develop large, robust apps, because it has a long list of features that make it easy to write correct code, and really freaking challenging to write blatantly incorrect code.

Frankly, I envy you, except that I don't envy what it's going to be like when you have to hack on a non-Rust code base that lacks a lot of these features. "What do you mean, int overflow. Those are both constants! How come it didn't let me know I couldn't add them together?"


Most of us sane people tend to be more quiet unfortunately.

I enjoy rust, but I enjoy not breaking things for users and making lives harder for other devs even more.


Much of the drive to rewrite software in Rust is a reaction to the decades-long dependence on C and C++. Many people out there sit in the burning room like the dog in that meme, saying "this is fine". Most of them don't have to deal at all directly with the consequences involved.

Rust is the first language for a long time with a chance at improving this situation. A lot of the pushback against evangelism is from people who simply want to keep the status quo, because it's what they know. They have no concept of the systemic consequences.

I'd rather see over-the-top evangelism than the lack of it, because the latter implies that things aren't going to change very fast.


> I'd rather see over-the-top evangelism than the lack of it, because the latter implies that things aren't going to change very fast.

No new technology should be an excuse to engage in unprofessional conduct.

When you propose changes to software, you listen to feedback, provide analysis of the benefits and detriments, and make an informed decision.

Rust isn't special, and isn't a pass to cause endless heartache for end users and developers because your code is in a "safer" language.

New rust code should be held to the same standards as new C and C++ code that causes breakage.

Evangelism isn't useful here, let the tool speak for itself.


If you were right, then people should not be using Rust or C/C++. They should be using SPARK/Ada. The SPARK programming language, a subset of Ada, was used for the development of safety-critical software in the Eurofighter Typhoon, a British and European fighter jet. The software for mission computers and other systems was developed by BAE Systems using the GNAT Pro environment from AdaCore, which supports both Ada and SPARK. It's not just choosing the PL, but the whole environment including the managers.

This is an interesting read on software projects and failure: https://spectrum.ieee.org/it-management-software-failures


Nvidia evaluated Rust and then chose SPARK/Ada for root of trust for GPU market segmentation licensing, which protects 50% profit margin and $4T market cap.

"Nvidia Security Team: “What if we just stopped using C?”, 170 comments (2022), https://news.ycombinator.com/item?id=42998383


> If you were right, then people should not be using Rust or C/C++. They should be using SPARK/Ada.

Not all code needs that level of assurance. But almost all code can benefit from better memory safety than C or C++ can reliably provide.

Re what people "should" be using, that's why I chose my words carefully and wrote, "Rust is the first language for a long time with a chance at improving this situation."

Part of the chance I'm referring to is the widespread industry interest. Despite the reaction of curmudgeons on HN, all the hype around Rust is a good thing for wider adoption.

We're always going to have people resistant to change. They're always going to use any excuse to complain, including "too much hype!" It's meaningless noise.


You can’t change things faster than persuading the people that maintain the things. Over-the-top evangelism doesn’t work well for persuasion.

On the other hand, the presence of an alternative is the persuasion.

It's very easy to justify for yourself why you aren't addressing the hard problems in your codebase. Combine that with a captive audience, and you end up with everyone running the same steaming heap of technical debt and being unhappy about it.

But the second an alternative starts to get off the ground there's suddenly a reason to address those big issues: people are leaving, and it is clear that complacency is no longer an option. Either evolve, or accept that you'll perish.


That was probably a mischaracterization on my part. I wouldn't consider rewriting almost everything useful that's currently in C or C++ to be over the top. That would be a net good.

Posts that say "I rewrote X in Rust!" shouldn't actually be controversial. Every time you see one, you should think to yourself wow, the software world is moving towards being more stable and reliable, that's great!


But it is nonsense. Every time someone rewrites something (in Rust or anything else), I instead worry about what breaks again, what important feature is lost for the next decade, how much working knowledge is lost, what muscle memory is now useless, what documentation is outdated, etc.

I also doubt Rust brings as many advantages in terms of stability as people claim. The C code I rely on in my daily work basically never fails (e.g. I can't remember "vim" ever crashing on me in the 30 years I've used it). That this is all rotten C code that needs to be rewritten is just nonsense. IMHO it would be far more useful to invest in proper maintenance and incremental improvements.


Regarding VIM - it's not as risky as something that's exposed over a network, but it's had plenty of CVEs, and skimming them shows many if not most are related to memory safety. See:

https://www.cvedetails.com/vulnerability-list/vendor_id-8218...


You want the computing infrastructure to remain essentially as it was in the 1970s. I don't.

Me neither, I just do not want to take steps backwards because people are rewriting stuff for stupid reasons. But your argument is just the old "you do not want to adapt" shaming attempt, which I think has no intellectual substance anyway.

Except that in many such cases, like apt here, any compiled language with GC/RC would do.

This is the kind of UNIX stuff that we would even write in Perl or Tcl back in the day.


Sometimes good things are ruined by people around. I think Rust is fine, although I doubt its constraints are universally true and sensible in all scenarios.

This is also not an endorsement of C/C++.


> It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.

The problem is that those ports aren't supported and see basically zero use. Without continuous maintainer effort to keep software running on those platforms, subtle platform-specific bugs will creep in. Sometimes it's the application's fault, but just as often the blame will lie with the port itself.

The side-effect of ports being unsupported is that build failures or test failures - if they are even run at all - aren't considered blockers. Eventually their failure becomes normal, so their status will just be disregarded as noise: you can't rely on them to pass when your PR is bug-free, so you can't rely on their failure to indicate a genuine issue.


> Canonical or debian should work on porting the rust toolchain (ideally with tier 1 support) to every architecture they release for

This will be an impediment for new architectures in the future. Instead of just "builds with gcc" we would need to wait for Rust support.


> Instead of just "builds with gcc" we would need to wait for Rust support.

There's always rustc_codegen_gcc (gcc backend for rustc) and gccrs (Rust frontend for gcc). They aren't quite production-ready yet, but there's a decent chance they're good enough for the handful of hobbyists wanting to run the latest applications on historical hardware.

As to adding new architectures: it just shifts the task from "write gcc backend" to "write llvm backend". I doubt it'll make much of a difference in practice.


> to rewrite some feature for a tiny security benefit

For what it's worth, the zero->one introduction of a new language into a big codebase always comes with a lot of build changes, downstream impact, debate, etc. It's good for that first feature to be some relatively trivial thing, so that it doesn't make the changes any bigger than they have to be, and so that it can be delayed or reverted as needed without causing extra trouble. Once everything lands, then you can add whatever bigger features you like without disrupting things.

No comment on the rest of the thread...


> Canonical or debian should work on porting the rust toolchain (ideally with tier 1 support) to every architecture they release for, and actually put the horse before the cart.

They already have a Rust toolchain for every system Debian releases for.

The only architectures they're arguing about are non-official Debian ports for "Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4)", two of which are so obscure I've never even heard of them, and one of the others is most famous for powering retro video game systems like the Sega Genesis.


> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit. And doing so on an unacceptably short timeline. Introducing breakage like this is unacceptable.

Normally I'd agree, but the ports in question are really quite old and obscure. I don't think anything would have changed with an even longer timeline.

I think the best move would have been to announce deprecation of those ports separately. As it was announced, people who will never be impacted by their deprecation are upset because the deprecation was tied to something else (Rust) that is a hot topic.

If the deprecation of those ports was announced separately I doubt it would have even been news. Instead we’ve got this situation where people are angry that Rust took something away from someone.


Those ports were never official, and so aren't being deprecated. Nothing changes about Debian's support policies with this change.

EDIT: okay so I was slightly too strong: some of them were official as of 2011, but haven't been since then. The main point that this isn't deprecating any supported ports is still accurate.


That’s helpful info, but I don’t think it will change any of the minds that are angry about what they see as Rust taking something away from someone.

It’s the way the two actions were linked that caused the controversy.


> It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.

Imo this is true for going from one to a handful, but less true when going from a handful to more. Afaict there are 6 official ports and 12 unofficial ports (from https://www.debian.org/ports/).


It really comes down to which architectures you're porting to. The two biggest issues are big endian vs little endian, and memory consistency models. Little endian is the clear winner for actively-developed architectures, but there are still plenty of vintage big endian architectures to target, and it looks like IBM mainframes at least are still exclusively big endian.

For memory consistency, Alpha historically had value as the weakest and most likely to expose bugs. But nobody really wants to implement hardware like that anymore; almost everything falls somewhere on the spectrum of behavior bounded by x86 (strict) and Arm (weaker), and newer languages (e.g. C++11) mean newer code can be explicit about its expectations rather than ambiguous or implicit.
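
For illustration, this is the kind of explicitness meant here (Rust shares the C++11-style memory model; a minimal sketch): on x86 the Release/Acquire pair costs essentially nothing, while on weakly ordered hardware it inserts the barriers the code would otherwise be silently relying on.

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        // The Release store pairs with the Acquire load, so once the reader
        // sees READY == true it is guaranteed to also see DATA == 42, even on
        // weakly ordered architectures (Arm, or historically Alpha).
        let writer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            READY.store(true, Ordering::Release);
        });
        let reader = thread::spawn(|| {
            while !READY.load(Ordering::Acquire) {}
            assert_eq!(DATA.load(Ordering::Relaxed), 42);
        });
        writer.join().unwrap();
        reader.join().unwrap();
    }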


> and the author didn't provide them the opportunity to provide feedback on the change.

This is wrong: the author wrote a mail about _intended_ changes to the right Debian mailing list _half a year_ before shipping them. That is _exactly_ how giving people an opportunity to give feedback before making a change works...

Sure, they made it clear they don't want discussions to be sidetracked by topics about things Debian doesn't officially support. That is not nice, but it is understandable; I have seen way too much time wasted on discussions being derailed.

The only problem here is people overthinking things and/or having issues with very direct language IMHO.

> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit

It's not breaking anything that's supported.

The only things breaking are unsupported ones, and they only see niche use anyway.

Nearly all projects have very limited capacity and have to draw boundaries, and the most basic boundary is that unsupported means unsupported. This doesn't mean you don't keep unsupported use cases in mind / try to avoid accidentally breaking them, but it does mean they don't majorly influence your decisions.

> And doing so on an unacceptably short timeline

Half a year for a change which only breaks unsupported things isn't "unacceptably short"; it's actually pretty long. If this weren't OSS you could be happy about one month, and most likely less. People complain about how few resources OSS projects have, but the scary truth is most commercial projects have even fewer resources and must ship by a deadline. Hence why it's very common for them to be far worse when it comes to code quality, technical debt, incorrectly handled niche error cases, etc.

> to every architecture they release for

The Rust toolchain has support for every architecture _they_ release for; it only breaks architectures that niche, unofficial 3rd-party ports support. Which is sad, sure, but unsupported is in the end unsupported.

> cost-benefit analysis done for this change.

Who says it wasn't done at all? People have done so over and over on the internet for all kinds of Linux distributions. But either way, you wouldn't include that in a mail announcing an intent to change (as you don't want discussions to be sidetracked). Also, the benefits are pretty clear:

- Using Sequoia for PGP seems to be the main driving force behind this decision. That project exists because of repeatedly running into issues (including security issues) with the existing PGP tooling. It happens to use Rust, but if there were no Rust it would still exist, just using a different language.

- Some file format parsing is in a pretty bad state, to the point where you will most likely rewrite it to fix it/make it robust. When doing so anyway, using Rust is preferable.

- And long term: due to the clear, proven(1) benefits of using Rust for _new_ projects/code, increasingly more of them use it. By not "allowing" Rust to be required, Debian bars itself from using any such project (like e.g. Sequoia, which seems to be the main driver behind this change).

> this "rewrite it in rust" evangilism

which isn't part of this discussion at all.

The main driving force seems to be to use Sequoia, not because Sequoia is in Rust but because Sequoia is very well made and well tested.

Similarly, Sequoia isn't a "let's rewrite everything in Rust" project; rather, the state of PGP tooling is so painful for certain use cases (not all), in ways you can't fix by trying to contribute upstream, that some people needed new tooling, and Rust happened to be the choice for implementing it.


I fully agree, and as far as command-line utility applications are concerned I see no benefit in using Rust's borrow checker.

At most, if a rewrite were to happen, it would make much more sense in a compiled language with automatic resource management.


Command line utilities often handle not-fully-trusted data, and are often called from something besides an interactive terminal.

Take for example git: do you fully trust the content of every repository you clone? Sure, you'll of course compile and run it in a container, but how prepared are you for the possibility of the clone process itself resulting in arbitrary code execution?

The same applies to the other side of the git interaction: if you're hosting a git forge, it is basically a certainty that whatever application you use will call out to git behind the scenes. Your git forge is connected to the internet, so anyone can send data to it, so git will be processing attacker-controlled data.

There are dozens of similar scenarios involving tools like ffmpeg, gzip, wget, or imagemagick. The main power of command line utilities is their composability: you can't assume it'll only ever be used in isolation with trusted data!


None of that requires a borrow checker.

Any memory safe compiled managed language will do.


That's definitely true!

Some people might complain about the startup cost of a language like Java, though: there are plenty of scripts around which are calling command-line utilities in a very tight loop. Not every memory-safe language is suitable for every command-line utility.


Java is not the only option, and even then, GraalVM and OpenJ9 exist; long gone are the days people had to pay for something like Excelsior JET.

I totally agree. In reality, today, if you want to produce auditable high-integrity, high-assurance, mission-critical software, you should be looking at SPARK/Ada and even F* (fstar). SPARK has legacy real-world apps and a great ecosystem for this type of software. F* is being used on embedded and in other real-world apps where formal verification is necessary or highly advantageous. Whether I like Rust or not should not be the defining factor. AdaCore has a verified Rust compiler, but the tooling around it does not compare to that around SPARK/Ada. I've heard younger people complain about PLs being verbose, boring, or not their thing, and unless you're a diehard SPARK/Ada person, you probably feel that way about it too. But sometimes the tool doesn't have to be sexy or the latest thing to be the right thing to use. Name one Rust real-world app older than 5 years that is in this category.

> Name one Rust realworld app older than 5 years that is in this category.

Your "older than 5 years" requirement isn't really fair, is it? Rust itself had its first stable release barely 10 years ago, and mainstream adoption has only started happening in the last 5 years. You'll have trouble finding any "real-world" Rust apps older than 5 years!

As to your actual question: The users of Ferrocene[0] would be a good start. It's Rust but certified for ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 - clearly someone is interested in writing mission-critical software in Rust!

[0]: https://ferrocene.dev/


The point was: how would you justify choosing Rust based on any real-world proof? Maybe it will be ready in a few years, but even then it is far from achieving what you already have in SPARK, along with its proven legacy. I am very familiar with this, and I still chose SPARK/Ada instead of Rust. SPARK is already certified for all of this. And aerospace, railway, and other high-integrity app industries are already familiar with the output of the SPARK tools, so there's less friction and time in auditing them for certification. Aside from AdaCore, who collaborated with Ferrocene to get a compiler certified, I don't see much traction to change our decision. We are creating show control software for cyber-physical systems with potentially dire consequences, so we did a very in-depth study in Q1 2025, and Rust came up short.

> I love and use Rust. It is my favorite language and I use it in several of my OSS projects, but I'm tired of this "rewrite it in rust" evangelism and the reputational damage it does to the Rust community.

This right here.

As a side note, I was reading one of Cloudflare's docs on how it implemented its firewall rules, and it's so utterly disappointing how the document suddenly stops being informative and starts to read like a parody of the whole cargo cult around Rust. Rust this, Rust that, and there I was trying to read up on how Cloudflare actually supports firewall rules. The way they focus on a specific and frankly irrelevant implementation detail conveys the idea that things are run by amateurs who are charmed by a shiny toy.



