Progress toward a GCC-based Rust compiler (lwn.net)



The claims in the article feel kinda weak as to the motivation.

> Cohen's EuroRust talk highlighted that one of the major reasons gccrs is being developed is to be able to take advantage of GCC's security plugins. There is a wide range of existing GCC plugins that can aid in debugging, static analysis, or hardening; these work on the GCC intermediate representation

> One more reason for gccrs to exist is Rust for Linux, the initiative to add Rust support to the Linux kernel. Cohen said the Linux kernel is a key motivator for the project because there are a lot of kernel people who would prefer the kernel to be compiled only by the GNU toolchain.

That explains why you’d want GCC as the backend, but not why you need a duplicate front end. I think it’s a bad idea to have multiple front ends; Rust should learn from the mistakes of C++, which, even with a standards body, has to deal with a mess of switches, differing levels of language support across compilers making cross-platform development harder, platform-specific language bugs, etc.

> A lot of care is being put into gccrs not becoming a "superset" of Rust, as Cohen put it. The project wants to make sure that it does not create a special "GNU Rust" language, but is trying instead to replicate the output of rustc — bugs, quirks, and all. Both the Rust and GCC test suites are being used to accomplish this.

In other words, I’d love the gccrs folks to explain why their approach is better than rustc_codegen_gcc, considering the latter achieves the same goal with far less effort and risk.


Having another Rust implementation allows for an "audit" to help validate the Rust spec and get rid of any unspecified behavior. It would also give users options. If I hit a compiler bug in MSVC, I can file a report, switch to GCC and keep working on my project until the bug is fixed. With Rust, that's not currently possible.


Your sentiment is a commonly expressed one, but not usually by people who have adopted Rust. It’s usually Clang/MSVC/GCC users who have decided this is the optimal flow and want to replicate it in all future codebases they work in, regardless of language.

In reality if you hit a compiler bug in Rust or Go (or any other language with one main impl like Python, Ruby…) you would file a report and do one of two things - downgrade the compiler (if it’s an option) or write the code a different way. Compiler bugs in these languages are rare enough that this approach works well.

That said, for people who are really keen on a spec, there is one being worked on (https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...).

But this GCCRS effort doesn’t get you any closer to your ideal C/C++ style workflow because they are committing to matching the semantics of the main compiler exactly. Bugs and all. And that’s the way it should be.

The Rust ecosystem becomes worse if I have to install a different toolchain and learn a different build system with a different compiler for every new project I interact with. And after all that extra effort it turns out there are subtle differences between implementations. My developer experience is just worse at that point. If I wanted to code like this, I would code in C++ but I don’t.


> Your sentiment is a commonly expressed one, but not usually by people who have adopted Rust.

Perhaps because it’s currently not possible in Rust, so users don’t really see the advantages yet.

Multiple implementations are always a good thing. They bring in diversity of thought and allow for some competition in the implementation. Having multiple slightly different implementations also gives more clarity to a spec.

This isn’t the kind of stuff that users tend to think about until it’s already there.


I agree there is value to multiple implementations. We wouldn't be where we are today with open compilers and efficient machine code if it weren't for the competing implementations of LLVM and GCC.

However, I can't see enough benefit to multiple language _frontends_ to outweigh the many downsides.

Not only do you have to contend with the duplication of effort of implementing two or more language frontend projects, you also create ecosystem splits when the implementations inevitably diverge in behavior.

The biggest example of multiple frontends, C++, is not one language. C++ is Clang-C++, GNU-C++, MSVC-C++, and so on. All of these implementations have subtle incompatibilities in how they handle identical code, or they have disjoint feature matrices, or they have bugs, and so on. Multiple frontends are IMO one of the _worst_ parts of C++ because they make writing truly portable code almost impossible. How many new C++ features are unusable in practice because one of the main 3 toolchains hasn't implemented them yet?

In Rust, on the other hand, everything is portable by default. No #ifdef CLANG, no wrangling with subtly incompatible toolchains. If I write code, I can trust it will just work on any architecture Rust can build for, ignoring implementation bugs and explicitly platform-specific code.
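To make that concrete, here is a minimal sketch of what explicitly platform-specific Rust looks like (the function and strings are made up for illustration): conditional compilation is opt-in per item via cfg attributes, and everything outside them is portable by default.

  // Platform-specific code is marked explicitly with #[cfg]; the
  // compiler selects exactly one definition per target, and there is
  // no vendor-specific "#ifdef CLANG" equivalent to wrangle with.
  #[cfg(target_os = "macos")]
  fn platform_name() -> &'static str {
      "macOS"
  }

  #[cfg(not(target_os = "macos"))]
  fn platform_name() -> &'static str {
      "not macOS"
  }

  fn main() {
      println!("running on {}", platform_name());
  }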

I have a hobby game engine I work on in my spare time that was developed exclusively on x86-64 for years and I ported it to run on Apple Silicon ARM in two afternoons. Rust just _worked_, all the time was spent on my C/C++ dependencies and new code needed for MacOS support.

I simply don't see how the nebulous, difficult-to-quantify benefits of competing implementations can outweigh the practical advantages of truly portable code that comes from a single implementation: the ecosystem benefits of a uniform language, and the benefit to my sanity of being able to trust that my code will compile everywhere I need it to.


Writing truly portable C++ is not all that hard. In the worst case you need a handful of ifdefs, even if you write systems code.


#ifdefs are a sign of ported code, not portable code.


If you need a few hundred lines of ifdef in a codebase with millions of lines it is portable in the sense that you are able to port it with reasonable effort.


That's nice, you're welcome to continue writing your ifdefs. Nothing you or anyone else has said has built a compelling case for alternate frontends. You're saying "it's not much effort". Sure, but I prefer 0 effort.

What we're saying is that everyone actually using Rust today is fairly happy with how the language is. That's why Rust has had an 80%+ approval rating among Rust developers since its creation and C++ is at 49% (https://survey.stackoverflow.co/2023/#section-admired-and-de...).

This suggestion of alternate frontends still has no good technical reason driving it, just people pattern matching on their past experience. If it happens, all it will do is lower Rust's approval to C++'s level.


If only alternative frontends were the only reason that C++ developers were unhappy with the language...


Only #if you ignore the hundred other issues with building portable C++ code:

* Libraries

* ABIs

* Build system(s)

* Platform compiler support...

In compiled/interpreted land, most if not all of these are solved problems. Even for libraries, usually dependencies can be reliably downloaded and recompiled (if they are not compiled on the fly).

I don't really care if you can create a "portable header" library when the ecosystem is sharded and "portable" only holds true for some subset of C++ compilers/platforms.


That seems orthogonal to having multiple frontends. I have in fact worked on several systems that compiled under three compilers and ran on multiple architectures and/or operating systems and it was not a hot mess of ifdef spaghetti.


It's orthogonal to ifdef spaghetti so long as that's not in support of the (general) spaghetti.

I'm not specifically commenting on ifdefs, because ifdefs are only a small part of the issue (caused by multiple compiler frontends).


It's bad enough already when there's a single implementation, but people can't agree which version increment has left the newfangled stage.


That is dumb and counterproductive. Your entire scenario presupposes unlimited labor capacity and funding (monetary or otherwise), so “diversity is better!”

In reality, the people wasting their time to produce a redundant ground-up rewrite of existing functionality could have spent their time and effort on something else that actually moves the world forward.


I think the only thing I’m assuming here is that there are people who want to do the work. Perhaps because it’s fun to write a compiler (it is!), or because they don’t view llvm as the pinnacle of compiler technology, or maybe their employer sees value in it.

And apparently there are people who want to do the work, because we have work on a couple of implementations.

You can choose to use your one true implementation, no one is stopping you. But it’s kind of weird to assume that people working on another one are detracting from yours. That’s not usually the way open source software works.


That's only true if you are not distributing your code. If you want to kill portability just involve MSVC in anything.


All I hear is "fragmented ecosystem".


Have you had your ears checked lately?


Nah, they’re right.


Monopoly is never the optimal flow.

Sincerely,

- Rust adopter


Monopoly rings hollow when it comes to a community open source project.

Monopolies are bad when they stifle progress, but rust's written under a free software license. At any point, anyone can take the project, fork it into a new one, and carry on the progress under a new flag.

Unlike traditional monopolies, the rust project's priorities aren't extracting profits or holding power, but rather in improving the language, and they're open to anyone (well, anyone who's tolerant of others) joining and helping.

The argument here is that we will end up with a better outcome if we work together as a community, rather than if we fragment just for some perceived ideal of multiple implementations.

While you're criticizing the rust monopoly, you might as well go and also yell at the group of friends voluntarily picking up trash off the sidewalk for free since they're working together as one monopoly of friends instead of creating two competing cleaning groups.


> Monopoly rings hollow when it comes to a community open source project.

Last I heard, `open source` was used to refer to code, not projects.

Every project has its "decision makers" (no matter the structure or titles) that have the power to dictate the direction and vision. You can literally see it every day in many projects, not trying to single out Rust or anything. The power is not distributed evenly, not even within the elite inner circle.

There's no `together as a community` when it's followed by a clause like `we need to follow the one direction (X) has set`.

What power does `the community` exactly have? To open a PR and get denied? To fork? Can the community elect leaders and change course? You said it yourself: anyone who wants their voice heard has to climb up the ladder and join the inner circle. You're describing a `power elite`, not a `community`. Even forks don't work in real life; what usually happens is the project dies slowly and everyone moves to the next thing.

I'm not dismissing the structure itself, it's a good pragmatic approach, it's just that you're twisting and polishing it. In short:

  You: Everyone is welcome. No need to fragment. Rust is run by community.

  Rust's governance: We rule. We also pick any new members. Community doesn't have much say about anything.
So pursuing efforts like the one in the post is exactly how the community is empowered, and what might enable Rust to survive any challenges it faces in the future. People doing their own things is how the open source ecosystem began and continues to flourish. So can you just welcome their effort and let them be?


Reminds me of the iron law of oligarchy, which is basically true of any group.


> rust's written under a free software license

the way you are using "free" is undefined behavior. "Free" as in speech software, the F in FOSS, is associated with copyleft FSF and GNU, and not like BSD, MIT, and Apache, the open source in fOSs, which is what Rust is.


The English language is full of ambiguity if you choose to be pedantic, yet somehow most people manage to communicate.

The fact that you knew I was referring to rust's license means my English was in fact not undefined, but exactly as clearly defined as one can hope English to be, i.e. we both thought it meant the same thing.

In my opinion, in context, "free software license" is also a clear and common phrase, not undefined behavior.

Essentially, I think everyone has agreed that "free software", unless you add more qualifiers, will refer to the FSF definition of free software https://www.gnu.org/philosophy/free-sw.html

The FSFs definition includes both the licenses rust is licensed under: https://www.gnu.org/licenses/license-list.html#Expat

The FSF would happily call BSD, MIT, and Apache "Free Software Licenses", and in fact do so.


The FSF recognizes BSD, MIT and Apache as Free Software (but not Copyleft) licenses.


no, they recognize those licenses as compatible with the GPL, but not as free software

https://www.fsf.org/

"Free software means that the users have the freedom to run, edit, contribute to, and share the software."

those licenses allow derivatives which users cannot edit and contribute to, thus they don't guarantee the 4 freedoms of free software.


That's literally contrary to what FSF says.

Please read: https://www.gnu.org/licenses/license-list.html .

For example:

  Apache License, Version 2.0 
  This is a free software license, compatible with version 3 of the GNU GPL.
Or:

  Expat License [this is the MIT license]
  This is a lax, permissive non-copyleft free software license, compatible with the GNU GPL.

The fact that a license allows derivative works to take away the 4 freedoms doesn't make the license not a free software license. An end user receiving permissively licensed free software can make full use of the 4 freedoms.


I don't have a lot of experience in C or C++ but I wonder if this ever works in practice for a non-trivial codebase? I'd be really surprised if, without diligently committing to maintaining compatibility with the two compilers, it was easy to up sticks and move between them.


Many complex C++ codebases have full parallel CI pipelines for GCC and LLVM. It encourages good code hygiene and occasionally identifies bugs in the compiler toolchain.

If you are using intrinsics or other architecture-specific features, there is a similar practice of requiring full CI pipelines for at least two CPU architectures. Again, it occasionally finds interesting bugs.

For systems I work on we usually have 4 CI pipelines for the combo of GCC/LLVM and ARM/x86 for these purposes. It costs a bit more but is generally worth it from a code quality perspective.


Adding a CI pipeline running compiles on MSVC was one of the big shakeouts of undefined behaviour and bugs in our C++ codebase. While making that compiler happy is annoying if you come from Unixy land, it did force us to shed quite a few bad habits, and we even found several bugs in that endeavor.

(And then we could ship the product on Windows, so that was nice.)


Our codebase works on all three, we compile on MSVC for Windows, GCC for Linux, and Clang for Mac.

It isn't easy and honestly the idea of "just don't use MSVC for a while" is strange to me. Sure you can compile with any of them but almost certainly you are going to stick to one for a given use case.

"This release is on a different compiler" isn't something you do because of a bug. Instead your roll back a version or avoid using the unsupported feature until a fix is released.

The reason is that, as much as they are supposed to do the same thing, the reality is that bugs are bugs. For example, if you invoke undefined behavior you will generally get consistent results with a given compiler, but all bets are off if you swap. Similarly, it is hard not to rely on implementation-defined behavior without building your own standard library which specifically defines that behavior across compilers.


>I'd be really surprised if, without diligently committing to maintaining compatibility with the two compilers, it was easy to up sticks and move between them

Many places deliberately compile with multiple compilers as part of the build/test pipeline, to benefit from more compiler warnings, diagnostics etc.


It's pretty easy to move between gcc/clang/icc for most codebases. Though there are some useful features still that are gcc only. (And probably some that are clang only, though I pretty much only use gcc...)


It works but you have to keep it at least "semi-active". Some shops have CI services setup to cross compile etc. already. Mainly I've seen this not as a tool to maintain a "backup" but as a way to shake out bugs and keep non-portable stuff from creeping into the codebase.

You could probably do most of it with conservative linting and some in-house knowledge of portability issues, non-standard compiler extensions, etc.

It's typically a lot easier to do this for different compiler, same target, than different targets.


In a presentation a few years ago Valve Software said that getting the Source engine working on Linux and Mac helped them find some issues in their Windows code (I don't remember the exact timeline though - I assume they had an Xbox 360 build pipeline at the same time, but maybe not yet a PlayStation 3 build pipeline).


It's easy to write C or C++ code which works with both gcc and clang. Generally if there's a problem it's a standards compliance issue in your code so it's good to know and fix.

MSVC is a bit more quirky... At times its standards compliance has been patchy but I think it's ok right now.


There are lots of libraries that need to compile on a variety of platforms (ie different versions of LLVM for Android and MacOS, MSVC for Windows and GCC for some embedded targets not well supported by LLVM).


It is quite common. In most places I have worked we did GCC and clang builds. Some did GCC/clang/msvc/ICC.

And of course plenty of OSS libraries support many compilers even beyond the main three.


>Having another Rust implementation allows for an "audit" to help validate the Rust spec and get rid of any unspecified behavior. It would also give users options. If I hit a compiler bug in MSVC, I can file a report, switch to GCC and keep working on my project until the bug is fixed. With Rust, that's not currently possible.

Theory is cool, but in practice the other compiler has its own quirks too.

How about having one compiler used by all developers, which increases the chances of bugs getting caught and fixed faster, while you just use a workaround in the meantime?


Rust currently has no formal specification, so what would you use as an arbiter?

Also, the article says otherwise:

> The project wants to make sure that it does not create a special "GNU Rust" language, but is trying instead to replicate the output of rustc — bugs, quirks, and all


You use the same process as for deciding if changes to rustc are compliant or not; the judgement of the language and compiler teams.

And running into these kinds of questions, both in a single project that evolves over time (rustc) and a separate project, help to feed information back into what you need for a formal specification; which is something that is being planned out. Having a second implementation can help find these areas where you might need to be narrower or broader in your specification, in order to clarify enough for it to be implementable or broaden it enough to accommodate reasonable implementation differences.

They are not trying to diverge in language implementation, but there will always be simple compiler bugs, which may crop up in one implementation but not another. For instance, some LLVM optimization pass may miscompile some code; gccrs wouldn't necessarily be trying to recreate that exact bug. I think that "bugs, quirks, and all" really means that they aren't trying to fix major limitations of rustc, such as introducing a whole different borrow checker model which might allow some programs that current rustc does not. They're trying to fairly faithfully recreate the language that rustc implements, even if some aspects might be considered sub-optimal, but they aren't going to re-implement every ICE and miscompilation, those are places where there could be differences.


I agree with you that it has benefits; what I’m wondering is whether more bugs (though not the same ones) would be fixed by contributing directly to rustc, compared to the massive effort of building a new compiler.


It's about finding those bugs in the first place. Working on a different implementation is one way of doing that.


That works until you have a popular project that needs to support all the compilers. Then you need to work around n sets of bugs and incompatibilities.


The C++ spec regularly has all sorts of errata and misdesigns. Not sure I buy this argument.


Not sure about that, as the article has this:

> The project wants to ... replicate the output of rustc — bugs, quirks, and all.


I hit a compiler bug in rust and downgraded it to an earlier stable version. That’s usually enough.


> Rust should learn from the mistakes of C++

They are. You quoted them doing it, the care taken not to become a superset. The problem with C/C++ stemmed from compiler vendors competing with each other to be "better" than their peers.

Multiple front ends of an implementation of a language usually shake out a bunch of bugs and misimplementations. That's the primary benefit of having multiple front ends, IMO.


>The problem with C/C++ stemmed from compiler vendors competing with each other to be "better" than their peers.

Yea but this is Linux and OSS. NIHisms are fucking rampant everywhere.

I give it a year before gccrs announces a new "gnurs" mode with language extensions.


I'd be surprised about this, mostly because I can't imagine anyone using gcc-specific Rust extensions. Not only does rustc have an extraordinary amount of traction from being basically the only choice up until this point, but Rust also doesn't really have the reputation of moving slowly to adopt new features; if anything, there's been as much skepticism about new features added over the years as excitement. I can honestly imagine more people adopting a Rust implementation with a commitment _not_ to add new features that are added to rustc than one that adds its own separate features.

Even as someone who probably will never use gccrs, I think having more than one implementation of Rust is a good thing for the language (for all the usual reasons that get cited). In the long term, I'd love for Rust to eventually be able to specify its semantics in a way that isn't tied to rustc, which is a moving target and gets murky when you consider implementation-specific bugs.


> Rust doesn't really have the reputation of moving slowly to adopt new features

There are some exceptions to this. Although there are possibly good reasons for such features moving slowly, a competing compiler could popularize them before they are stabilized in rustc. One example is generators[1] but there's a longer list in The Unstable Book.[2]

[1] https://github.com/rust-lang/rust/issues/43122

[2] https://doc.rust-lang.org/stable/unstable-book/language-feat...
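For a rough idea of what [1] covers, this is approximately how the unstable feature looked on older 2023 nightlies (my sketch; the pre-stabilization syntax keeps changing, and the feature has since been renamed to coroutines):

  // Nightly-only: generators sit behind feature gates and are driven
  // manually through the unstable Generator trait.
  #![feature(generators, generator_trait)]

  use std::ops::{Generator, GeneratorState};
  use std::pin::Pin;

  fn main() {
      let mut gen = || {
          yield 1;
          yield 2;
      };
      // resume() runs the generator until the next yield point.
      while let GeneratorState::Yielded(v) = Pin::new(&mut gen).resume(()) {
          println!("yielded {v}");
      }
  }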


Prevalent use of nightly Rust purely for specific unstable features would do the same. It's been a while since there has been a "must have" feature that kept people confined to nightly, though.


I feel like having a distinct implementation that is unstable is more likely.

Stabilizing things from rustc's nightly is a recipe for disaster: if rustc then builds something new, you are in a bad place.


NIH is usually "using product X is almost as hard (or expensive) as building it".

Linux tends to shun unfree software which is a very different take. Buy or build is roughly money vs time (aka money). Software freedom is not the same thing.

Also why does software freedom lead to divergences? Certainly GCC partook in the arms race around C++ but recently they have been pretty good about aiming for full standard support as the goal. (The exception being proposed features but that is unavoidable when C++ prefers battle tested proposals)


Actually, one of the problems we are having with compilers catching up with ISO is that since C++11, features aren't battle tested; rather, they are designed on paper or in some special compiler branch, and eventually adopted by everyone.


Honestly, depending on the feature, that might be for the best. If all three compilers had distinct, on-by-default divergent settings, that would be terrible.

My point is outside of instances where everyone agrees something has to change you need a compiler branch at the minimum. This means the compiler you changed will get that feature first.

Given it takes years to implement the full standard this leads to divergence between the compilers in standards compliance.

Honestly all totally workable but makes talking about "standard C++" hard.


There are more than three compilers, and many of the changes, if done at all in an existing compiler, are a mix of private branches and ongoing unstable features; hardly battle tested the way the first standards were, when existing practice is what came into the standard.


I worry that once gccrs is marginally used, it implicitly gives permission for Microsoft to create some MSVRust, and then we really do descend into standards-body hell.


I don’t understand why this question keeps being asked.

Is Rust holy in some sense, such that it’s not allowed to rewrite the frontend?

It’s still a PITA to bootstrap Rust on a new architecture and there is also no working Rust compiler for architectures not supported by LLVM.

I know that there is rustc_codegen_gcc, but that one has the same bootstrap problems as the original Rust compiler and also requires architecture support to be landed in various Rust parts, which Rust maintainers have been hesitant about (been there, done that).

So, I am absolutely glad that there will be an out of the box usable Rust compiler in the near future which will finally allow me to build rustified libraries again on architectures such as alpha without much pain.


With GCC itself being written in C++, what is better about GCC's own bootstrap story except that it's been part of operating system distributions for longer? I am not aware of anyone using clang or MSVC to bootstrap GCC on Linux distributions for example, they use GCC's own lineage traceable back to C. It has the same bootstrap problem, and it isn't a problem in practice because vendors distribute binary toolchains. Even BSD and Gentoo start out with an initial set of binaries before rebuilding the world.

The architecture support angle is a better one for why a GCC backend is helpful, but that will also be addressed by rustc_codegen_gcc so it isn't a relevant argument here. And by the looks of it, that will be a usable solution much sooner than gccrs, and it will remain usable with a much lower long-term resource commitment. This is an argument more in favor of rustc_codegen_gcc than gccrs.

The strongest argument I can see about bootstrapping issues here is that Rust moves much faster and Rust's own compiler code requires a relatively recent Rust version, so in the extremely specific circumstance where binaries are not available and cross compilation is also not possible, bootstrapping will take more steps than it would for a gccrs frontend limited by C++'s slower rate of evolution. I don't see this being anywhere near as much of a problem as gccrs trying to catch up to & keep pace with Rust's frontend evolution. The former we can solve with the same build farms and cross compiles we need anyway, the latter requires continuous ongoing human effort with all of the disadvantages of C++.


Based on that logic, why did the LLVM community develop Clang, Clang++, libc++, etc. instead of continuing with DragonEgg? There already were GCC, G++, and libstdc++, as well as the EDG C++ front-end.

GCC, Clang, MSVC, and other compilers complement each other, serve different purposes, and serve different markets. They also ensure that the language is robust and conforms to a specification, not whatever quirks a single implementation happens to provide. And multiple implementations avoids the dangers of relying on a single implementation, which could have future problems with security, governance, etc.

The GNU Toolchain Project, the LLVM Project, and the Rust project have all experienced issues, and it's good to not rely on a single point of failure. Redundancy and anti-fragility are your friends.


LLVM saw growth for a number of reasons, but none of them were because it was actually beneficial for the C++ ecosystem:

* A C++ codebase. At the time GCC was written in C which slowed development (it’s now a C++ codebase adopting the lessons LLVM provided)

* It had a friendlier license than GCC which switched to GPLv3 and thus Google & Apple moved their compiler teams to work on LLVM over time.

* Libc++ is a combination of friendlier license + avoiding the hot garbage that was (maybe still is?) libstdc++ (e.g. there were incompatible design decisions in libstdc++ that inhibited implementing the C++ spec like SSO). There were also build time improvements if I recall correctly.

* LLVM provided a fresher architecture which made it more convenient as a research platform (indeed most compiler academics target LLVM rather than GCC for new research ideas).

Basically, the reason LLVM was invested in instead of DragonEgg was a mixture of license & the GCC community being quite difficult to work with causing huge amounts of investments by industry and academia into LLVM. Once those projects took off, even after GCC fixed their community issues they still had the license problem and the LLVM community was strongly independent.

Compilers don’t typically generate security issues so that’s not a problem. There are questions of governance but due to the permissive license that Rust uses governance problems can be rectified by forking without building a new compiler from the ground up (e.g. what happened with NodeJS until the governance issues were resolved and the fork reabsorbed).

It’s funny you mention the different C++ compilers, considering that Clang is well on its way to becoming the dominant compiler. It’s already aiming to be a full drop-in replacement for MSVC, and it’s fully capable of replacing GCC on Linux (unless you’re on a rarer platform where GCC has a bit richer history in the embedded space). I think over the long term GCC is likely to die, and it’s entirely possible that MSVC will abandon their in-house compiler and instead use Clang (same as they did abandoning IE & adopting Blink). It will be interesting to see if ICC starts porting their optimizations to LLVM and making them freely available; I can’t imagine ICC licenses really bring in enough money to justify things.


I think I read somewhere that recent ICC is LLVM based, just not freely available because of course the LLVM licence doesn’t require that. Can’t remember the source though, so take it with a pinch of salt.



is it llvm based or clang based?

edit: it is both!


> it’s now a C++ codebase adopting the lessons LLVM provided

LLVM isn't the first compiler written in C++, and it isn't the first compiler-building framework either.

The most famous one, certainly.


> It’s funny you mention the different C++ compilers consider that Clang is well on its way of becoming the dominant compiler.

I'd be interested to know what metric you're basing this on.

Outside the Apple ecosystem, LLVM seems to have made few inroads that I can see. No major Linux distribution that I'm aware of uses LLVM, and Windows is still dominated by MSVC.


> GCC has a bit richer history in the embedded space

Yeah as someone who works in the embedded space, I do wish that LLVM had more microcontroller support. Maybe one day! It was weirdly hard to get up and running with the STM32L4 and LLVM, whereas GCC was easy as, just for a recent example. Apparently I can pay ARM for a specific LLVM implementation?


I doubt very much that using C slowed development. To my recollection, it was more that for years the FSF pushed for a GCC architecture that resisted plugins, to avoid proprietary extensions. LLVM showed the advantages of a more extensible architecture, and they followed suit to remain competitive.


> Rust should learn from the mistakes of C++

Rust should learn from the mistakes of C++ and C, which are one of the longest lasting, biggest impact, widely deployed languages of all time?

It's confusing when people think language standards are bad, and instead of saying this code is C99 or C++11, they like saying "this code works with the Rustc binary / source code with the SHA256 hash e49d560cd008344edf745b8052ef714b07595808898c835f17f962a10012f964".


C and C++ are widely used despite their language, compiler, and build system fragmentation. Each platform/compiler combo needs ifdefs and workarounds that have been done for so long they’re considered a normal thing (or people say MSVC and others don’t count, and C is just GCC+POSIX).

There’s value in multiple implementations ensuring code isn’t bug-compatible, but at the same time in C and C++ there’s plenty of unnecessary differences and unspecified details due to historical reasons, and the narrow scope of their specs.


Yes, that's how the story goes. Languages with specs are widely deployed despite being fragmented and bad, not because people find value in multiple implementations. Must be a coincidence that C, C++, Javascript and C# and Java all fall under this umbrella.


C# has multiple compilers and runtimes? Mono used to be a separate thing, but if I recall correctly Mono has been adopted by MS & a lot has been merged between the two.

JavaScript itself is a very simple language, with most of the complexity living in disparate runtimes, and the existence of multiple runtime implementations is a very real problem requiring complex polyfills that age poorly and are maintained by the community. For what it’s worth, TypeScript has a single implementation and it’s extremely popular in this community.

Java is probably the “best” here, but really there are still only the Sun & OpenJDK implementations, and OpenJDK and Oracle are basically the same if I recall correctly, with the main difference being the inclusion of proprietary “enterprise” components that Oracle can charge money for. There are other implementations of the standard, but they’re much more niche (e.g. Azul Systems). A point against disparate implementations is how Java on Android is now basically a fork & a different language from modern-day Java (although I believe that’s mostly because of the Oracle lawsuit).

Python is widely deployed & CPython remains the version that most people deploy. Forks find it difficult to keep up with the changes (e.g. PyPy for the longest time lagged quite badly although it seems like they’re doing a better job keeping up these days). The forks have significantly less adoption than CPython though.

It seems unlikely that independent Rust front end implementations will benefit its popularity. Having GCC code gen is valuable, but integrating that behind the Rust front-end sounds like a better idea and is way further along. gccrs is targeting a 3-year-old version of Rust and still isn’t complete, while the GCC backend is being used to successfully compile the Linux kernel. My bet is that gccrs will end up closer to gcj, because it is difficult to keep up.


> C# has multiple compilers and runtimes?

Yes. Roslyn, Mono and some Mono-like thing from Unity to compile it into C++.

> Mono used to be a separate thing

Mono is still a thing. The last commit was around 3 months ago.

> multiple runtime implementations is a very real problem requiring complex polyfills

You can target a version of the spec and any implementation that supports that version will run your code. If you go off-spec, that's really on you, and if the implementation has bugs, that's on the implementation.

> TypeScript has a single implementation

esbuild can build typescript code. I use it instead of tsc in my build pipeline, and only use tsc for type-checking.

> [Typescript is] extremely popular in this community

esbuild is extremely popular in the JS/TS community too. The second most-popular TS compiler probably.

> [Java has] only the Sun & OpenJDK implementations

That's not true. There are multiple JDKs and even more JVMs.

> Java on Android is now basically a fork & a different language from modern-day Java

Good thing Java has specs with multiple versions, so you can target a version that is implemented by your target platform and it will run on any implementation that supports that version.

> Python is widely deployed & CPython remains the version that most people deploy.

> The forks have significantly less adoption than CPython though.

That is because Python doesn't have a real spec or standard, at least nothing solid compared to the other languages with specs or standards.

> It seems unlikely that independent Rust front end implementations will benefit its popularity.

It seems unlikely that people working on an open-source project will only have the popularity of another open-source project in mind when they spend their time.


> Yes. Roslyn, Mono and some Mono-like thing from Unity to compile it into C++.

Roslyn is more like the next-gen compiler and will be included in Mono once it’s ready to replace mcs. I view it as closer to polonius, because it’s an evolutionary step to upgrade the previous compiler into a new implementation. It’s still a single reference implementation.

> Mono is still a thing

I think you misunderstood my point. It had started as a fork, but then Microsoft adopted it by buying Xamarin. It’s not totally clear to me if it’s actually a fork at this point or if it’s been merged and shares a lot of code with .NET Core. I could be mistaken, but Mono and .NET Core these days do share quite a bit of code.

> esbuild can build typescript code

Yes, there are plenty of transpilers because the language is easy to desugar into JavaScript (intentionally so - TS stopped accepting any language syntax extensions and follows ES 1:1 now and all the development is in the typing layer). That’s very different from a forked implementation of the type checker which is the real meat of the TS language compiler.

> The second most popular TS compiler probably

It’s a transpiler and not a compiler. If TS had substantial language extensions on top of JS that it was regularly adding, all these forks would be dead in the water.

> That’s not true. There are multiple JDKs and even more JVMs

I meant to say they’re the only ones with any meaningful adoption. All the other JDKs and JVMs are much more niche and often benefit from living in a niche that is often behind on the adoption curve (i.e. still running Java 8 or something or are willing to stay on older Java versions because there’s some key benefit in the other version that is operationally critical).

> Good thing Java has specs with multiple versions, so that you can target a version…

Good for people implementing forks, less good for people living within the ecosystem in terms of having to worry about which version of the compiler to support with their library. For what it’s worth Rust also has language versions but it’s more like an LTS version of the language whereas Java versions come out more frequently & each implementation is on whatever year they wanted to snapshot against.


FYI Mono has been shipping Roslyn as its C# compiler for a few years now. Mono's C# compiler only fully supports up to C# 6 while Roslyn supports C# 12, the latest version.

Mono shares a lot of code with .NET (Core), but that is mostly limited to the standard libraries and compiler. Mono is still its own separate implementation of the CLR (runtime/"JVM") and supports many more platforms than .NET (Core) does today.


>Yes. Roslyn, Mono and some Mono-like thing from Unity to compile it into C++.

Mono has approx. 0.x% market share outside Unity. Also Mono is used by .NET Core for Blazor WASM iirc.

Let's not compare this sane-world scenario with the mess that exists in the C++ world.


> TypeScript has a single implementation

It's probably a matter of time until there's a TypeScript compiler implemented in Rust. But the surface area of the language is pretty big, and I imagine it will always lag behind the official compiler.

> Forks find it difficult to keep up with the changes

That's interesting to think of multiple implementation of a language as "forks" rather than spec-compliant compilers and runtimes. But the problem remains the same, the time and effort necessary to constantly keep up with the reference implementation, the upstream.


There have been plenty of attempts to implement TS in another language or whatnot. They all struggle with keeping up with the pace of change because the team behind TS is quite large. There was an effort to do a fairly straight port into Rust which actually turned out quite well, but then the “why” question comes up: the reason would be better performance, but improving performance requires changing the design, which a transliteration approach can’t give you, & the more of the design you change, the harder it is to keep up with incoming changes, putting you back at square one. I think Rust rewrites of TS compilers (as long as TS is seeing substantial changes, which it has been) will be worse off than PyPy, which is basically a neat party trick without serious adoption.


Java and C# seem to have actually gotten the idea of multiple implementations correct, in the sense that I have never needed to worry about the specific runtime being used as long as I get my language version correct. I have basically never seen a C/C++ program of more than a few hundred lines that doesn’t include something like #ifdef WIN32 …


Well, there's Android... and Unity, which as I recall is stuck on an old version of C# and its own way of doing things. I also had the interesting experience of working with OSGi at work a couple of years ago.


And Meadow, and Capcom, and Godot 3.


Is there more than one Java implementation that is usable? All I can find are obsolete products that never got very far.


Eclipse OpenJ9 (previously IBM J9) is in active development and supports Java 21.


Here are a few Java implementations that I've used recently.

- My Android phone

- OpenJDK on my laptop

- leJOS for a university robotics course to run on Lego robots

- Smart cards, I have a couple in my wallet

Perhaps I'd call the Lego robot stuff obsolete, certainly not the Android userspace or smart cards though.


Your Android phone and the latest Java share very little commonality. It only recently gained support for Java 11, which is 5 years old at this point. The other non-OpenJDK implementations you mentioned are much more niche (I imagine the smart cards run JavaCard, which is still probably an OpenJDK offshoot).


Java 17 LTS is supported from Android 12 onwards.

PTC, Aicas, microEJ, OpenJ9, Azul, and GraalVM are a few alternative JVM implementations.


Again, I’m not claiming that alternative implementations don’t exist, just that they’re not particularly common/popular compared with OpenJDK/Oracle (which is largely the same codebase). Android is the only alternative implementation with serious adoption, and it lags quite heavily.

BTW GraalVM is based on OpenJDK so I don’t really understand your point there. It’s not a ground-up reimplementation of the spec.


GraalVM uses a whole different infrastructure, JIT compiler, and GC, which affect runtime execution and existing tooling.

It doesn't matter how popular they are; they exist because there is a business need, and several people are willing to pay for them, in some cases with big bags of money, because they fulfill needs not available in OpenJDK.


I don't think what you're describing here is accurate re: GraalVM. GraalVM is Hotspot but with C1/C2 not being used, using a generic compiler interface to call out to Graal instead AFAIK.


Only if you are describing the pluggable OpenJDK interfaces for GraalVM, which were removed a couple of releases after being introduced, as almost no one used them.

I have been following GraalVM since it started as the Maxine VM at Sun Research Labs.


Who said anything about the latest Java? Since Java has versioned specs, platform X can support one version, and you can target it based on that information even if other versions come out and get supported by other platforms.

For example C89 and C99 are pretty old, and modern C has a lot that is different from them. But they still get targeted and deployed, and enjoy a decent following. Because even in 2024, you can write a new C89 compiler and people's existing and new C89 code will compile on it if you implement it right.


But as a developer I do want to use (some of) the latest features as soon as they become available. That's why most of my crates have an N-2 stable version policy.


And since Rust does a release every 6 weeks, we’re talking a ~4 month lag. That’s unheard of for C/C++.


It is the sort of thing that people aimed for in the 90s, when there was more velocity in C and C++. If Rust lives long enough, the rate of change will also slow down.


In the 90s if you wanted new features you had to pay for them.


Or do languages get specs because they are widespread and becoming fragmented? That clearly applies to C and C++: both were implemented first, and then a formal spec was written in response to fragmentation.


> It's confusing when people think language standards are bad, and instead of saying this code is C99 or C++11, they like saying "this code works with the Rustc binary / source code with the SHA256 hash e49d560cd008344edf745b8052ef714b07595808898c835f17f962a10012f964".

I don't know if that's totally fair. I remember that it took quite a while for C++ compilers to actually implement all of C++11. So, it was totally normal at the time to change what subset of C++11 we were using to appease whatever version of GCC was in RHEL at the time.


Technically, no compiler ever implemented "all" of C++11... You'd have to implement garbage collection, for example :D


Not to mention "export", which was in C++98 but only ever supported by Comeau C++.


Standards are good. Canonical implementations managed/organized by a single project are also good (Python, Go, Ruby, ...).


Or, y'know, "rustc 1.74.0" like a normal person.


And in the rare instances where you're using in-development features "rust nightly-2023-12-18"

Literally the only reason to specify via a hash* would be if you were using such a bleeding edge feature that it was only merged in the last 48 hours or so and no nightly versions had been cut.

*Or I suppose you don't trust the supply chain, and you either aren't satisfied with or can't create tooling that checks the hash against a lockfile, but then you have the same problem with literally any other compiler for any language.


Indeed. If I'm specifying a hash for anything I'm definitely not leaving things up to the very, very wide range of behaviours covered by the C and C++ standards.


That's beside the point. Adhering to a language standard is much clearer than specifying behavior by a compiler version: behaviour is documented in the former, while in the latter one has to observe the output of a binary (and hope that side effects are understood with their full gravity).


But no one writes code against the standard. We all write code against the reality of the compiler(s) we use. If there's a compiler bug, you either use a different version or a different vendor, or you side-step the code triggering the issue. The spec only tells you that the vendor might fix this in the future.


This is definitely not true. Whenever I have a question about a C++ language feature I typically go here first[0], and then if I’m looking for compiler specific info I go to the applicable compiler docs second. Likewise, for Java I go here[1]. For JavaScript I typically reference Mozilla since those docs are usually well written, but they reference the spec where applicable and I dig deeper if needed[2].

Now, none of these links are the specifications for the languages listed, but they all copiously link to the specification where applicable. In rare situations, I have gone directly to the specification. That’s usually if I’m trying to parse a subset of the language or understand an obscure language feature.

I would argue no one writes code against a compiler. Sure we all validate our code with a compiler, but a compiler does not tell you how the language works or interacts with itself. I write my code and look for answers to my questions in the specification for my respective languages, and I suspect most programmers do as well.

[0]: https://en.cppreference.com/w/

[1]: https://docs.oracle.com/javase/specs/index.html

[2]: https://developer.mozilla.org/en-US/docs/Web/JavaScript


If the compiler you use and the spec of your language disagree, what do you do?

The Project is working on a specification. The Foundation hired someone for it.

A Rust spec done purely on paper ahead of time would be the contents of the accepted RFCs. The final implementation almost never matches what was described, because during the lengthy implementation and stabilization process we encounter a multitude of unknown unknowns. The work of the spec writers will be to go back and document the result of that process.

For what it's worth, the seeming dragging of feet on this is because the people who would be qualified and inclined to do that work were working on other, more pressing matters. If we had had a spec back in, let's say, Rust 1.31, what would that have changed, in practice?


> If the compiler you use and the spec of your language disagree, what do you do?

If the compiler claims to follow the specified version of the spec, and it doesn't, you file a compiler bug.

And then use the subset that it supports, perhaps by using an older spec if it supports that fully. Perhaps looking for alternative compilers that have better/full coverage of a spec.

"Supports the spec minus these differences" is still miles better than "any behaviour can change because the Rust 2.1.0 compiler compiles code that the Rust 2.1.0 compiler compiles".


> If the compiler claims to follow the specified version of the spec, and it doesn't, you file a compiler bug.

> And then use the subset that it supports, perhaps by using an older spec if it supports that fully. Perhaps looking for alternative compilers that have better/full coverage of a spec.

If you encounter rustc behavior that seems unintentional, you can always file a bug in the issue tracker against the compiler or language teams[1]. Humans end up making a determination whether the behavior of the compiler is in line with the RFC that introduced the feature.

> "Supports the spec minus these differences" is still miles better than "any behaviour can change because the Rust 2.1.0 compiler compiles code that the Rust 2.1.0 compiler compiles".

You can look at the Rust Reference[2] for guidance on what the language is supposed to be. It is explicitly not a spec[3], but the project is working on one[4].

1: https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Ais...

2: https://doc.rust-lang.org/reference/introduction.html

3: https://doc.rust-lang.org/reference/introduction.html#what-t...

4: https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...


Can you articulate what a spec for a programming language entails in your mind? I am sure it is a niche opinion/position, but I base any discussion of formal specification of a language on ‘The Definition of ML’. With that as a starting point, I don’t see how the implementation process of a compiler would do anything that could force a change to the spec, once the formal syntax is defined, the static and dynamic semantics of the language laid out, and those semantics proved approximately consistent and possessing ‘type safety’ (for some established meaning of the words). Any divergence or disagreement is a failing of the implementation. I’m genuinely interested in what you expect a specification of Rust to be if, as your comment suggests, you have a different point of view.
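To illustrate what I mean by the static semantics being ‘laid out’: in the Definition-of-ML style, the spec is a set of inference rules over typing judgments. My schematic rendering of the standard application rule, for example:

  % Under context \Gamma, an application e1 e2 is well typed when e1
  % has a function type and e2 has the matching argument type.
  \[
  \frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2
        \qquad \Gamma \vdash e_2 : \tau_1}
       {\Gamma \vdash e_1 \; e_2 : \tau_2}
  \]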


A specification can be prescriptive (stating how things should work) or descriptive (stating the observable behavior of a specific version of software). For example earlier in Rust's post-1.0 life the borrow checker changed behavior a few times, some to fix soundness bugs, some to enable more correct code to work (NLL). An earlier spec would have described different behavior to what Rust is today (but of course, the spec can be updated over time as well).

How should the Rust Specification represent the type inference algorithm that rustc uses? Is an implementation that can figure out types in more cases than described by the specification conformant? This is the "split ecosystem" concern some have with the introduction of multiple front ends: code that works on gccrs but not rustc. There's no evidence that this will be a problem in practice; everyone involved seems to be aligned on that being a bad idea.
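To make the inference question concrete, here is a sketch (my example, not one from the article) where acceptance depends on how far inference reaches:

  // rustc accepts this because the `i32` constraint on `y` flows
  // backwards into the earlier, unannotated `x`. A spec has to decide
  // whether a conformant implementation must resolve exactly this
  // much, no more and no less.
  fn main() {
      let x = Default::default(); // type unknown at this point
      let y: i32 = x;             // pinned down only by this later use
      println!("{y}");
  }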

> Any divergence or disagreement is a failing of the implementation.

Specs can have bugs too, not just software.


I think it was probably a rhetorical question, but with regards to type checking, as with ML, type inference is an elaboration step in the compiler that produces the correct syntax of the formal language defined in the language definition. Specifying the actual implemented algorithm that translates from concrete syntax to abstract syntax (and the accompanying elaboration to explicit type annotations) is a separate component of the specification document that does not exert any control or guidance to the definition of the formal language of the ‘language’ in question.

I think this may be a large point of divergence with regard to my understanding and position on language specifications. I assume when posts have said Rust is getting a spec, that that included a formal, ‘mathematically’ proven language definition of the core language. I am aware that is not what C or C++ or JS includes in a spec (and I don’t know if that is true for Ada), but I was operating under the assumption the Rust’s inspiration from Ocaml and limited functional-esque stylings that the language would follow the more formalized definition style I am used to.


Look at Goals and Scope in https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio... for what the Rust spec will be.


But barely anyone gets to write C++11. You write C++11 for MSVC2022, or C++11 that compiles with LLVM 15+ and GCC 8+, and maybe MSVC if you invest a couple hours of effort into it. That's really not that different from saying you require a minimum compiler version of Rust 1.74.0.


The long-lasting and widespread nature of C and C++ is why their mistakes are the ones most worth learning from.


> the longest lasting, biggest impact, widely deployed languages of all time

Can be true at the same time as:

C and C++ have made mistakes, and have had issues as a result of bad choices.

The latter should be learned from by any language in a position not to make those same mistakes. We call this technological progress, and it's OK.


C and C++ both learn from the mistakes of others too. Of course, as mature languages they have a lot they cannot change. However, when they do propose new features it is common to look at what other languages have done. C++'s thread model is better than Java's because they were able to look at the things Java got wrong (in hindsight; those choices looked good at the time to smart people, so let's not pick on Java for not predicting how modern hardware would evolve. Indeed, it is possible that in a few years hardware will evolve differently again and C++'s thread model will then be just as wrong, despite me today calling it good).


I think people generally specify their MSRVs as version numbers for libraries, and often a pinned toolchain file for applications. I haven't seen anyone use a hash for this, though I'm sure I might have missed something.

I don't think language standards are "bad".


That phrase doesn't mean what you think it does.

"Learn from the mistakes of X" doesn't mean X is bad, it means X made mistakes.


So by that definition you would only want one web browser as there are currently multiple HTML+CSS+JavaScript "front ends". In which case, you end up with a Chrome monopoly where Google gets to decide what the web looks like!


A monopoly where a party whose interests conflict with the users' is the sole arbiter is obviously problematic. A monopoly run by a group of individuals who do not have obvious conflicts of interest, and who work for different organizations to mitigate the chance of non-obvious conflicts of interest, is far more interesting. Rust has the latter.

It's not plausible that a similar entity will take over the HTML/CSS/JS space, because of the amount of money competing with users' interests in the space due to things like advertising, the desire to invade users' privacy, etc. Still, if somehow we could magic up this monopolizing organization without conflicts of interest, that would be far superior to the current state of affairs...


The Rust community is not perfect. Neither is the LLVM community, the GCC community, nor any Open Source community. Consider the recent drama / growing pains that have occurred within the Rust community. Everyone has biases and conflicts of interest. Anyone who doesn't recognize the benefit of alternatives will learn the hard way.

The Rust community rationalization that they don't need / want alternatives for "reason" is self-serving and all about control. I don't care if someone is the BDFL, they aren't right 100% of the time and not always doing things for altruistic reasons. The Rust community has imbued the leadership with some godlike omniscience and altruism because it makes them feel good, not because it's sound policy.


There's a long distance between "perfect" and "conflicts of interest"; the latter bears more resemblance to corruption than to imperfection.

Of course rust leadership is going to make and has made mistakes. I'm confident every BDFL ever has also made and will continue to make mistakes; the same goes for the 'herd' of clang/gcc/msvc/... and the herd of browser makers. The target is not perfection, but merely being better than the alternative.

I think that, in the absence of conflicts of interest, single-source-of-truth models (e.g. what rust/python/java/... does) are likely to do better than 'herd of competing implementations' models (C/C++/javascript) at making a good language. The latter probably does better at working despite conflicts of interest, but that's not a problem with most programming languages, where there is relatively little opportunity (compared to the browser ecosystem) for a powerful corporation to push its interest to the detriment of others.

I think the rust community is quite clear that the rust leadership is flawed, but that's not very interesting without a way to make leadership better. If you can convince people you have that way - you'll get a lot of interest.


A goodly number of people appear to want just that.

I don’t agree, mind you, but someone reading your comment with this mindset will probably be reinforced rather than swayed.


The main motivation is having a GPL licensed, independently developed complete Rust compiler which is not dependent on LLVM.


Yes this is important. Otherwise in the future you'll see proprietary rust distributions for various devices or with specific features


You'll still see those though? A fully GPL implementation doesn't prevent the current implementation from being used in a proprietary rust distribution... unless of course the GPL implementation were to fork from the current implementation... which is the main thing people against the GPL implementation are worried about


I think the GCCRS project takes the rustc implementation as its reference, and targets 100% compatibility with it as a fundamental goal and requirement. There is no schism involved.


C++ and Java, which have many implementations, also have proprietary distributions, see Apple’s fork of Clang, Microsoft’s MSVC, Azul VM, Jamaica VM, ...

And some of them are pretty nice. Life is good.


When you don’t have a GPL implementation, the only good implementation might be closed source fork of the LLVM version, effectively forking Rust into Rust and Rustium. Will life be good then?


This seems like an astoundingly remote possibility.


Given the companies backing LLVM and the surrounding ecosystem, I think it’s an astounding possibility.

Companies love permissive licenses for a reason.


There are arguments for and against but for colour: D in GCC uses a shared frontend, which means it's not hard to maintain the frontend aspects — but that also means the frontend is a monoculture which is not very healthy at the moment.

If someone were willing to pay for it, I would do a new frontend — it's surprisingly little work to get something you can bootstrap with.


Having maintained both C++ and D for different compilers, it's way easier to do it in D since the front-end features are the same (modulo builtins) and the stdlib is 99% the same.


> I think it’s a bad idea to have multiple front ends and Rust should learn from the mistakes of C++ which even with a standards body has to deal with a mess of switches

I do not disagree, yet at the same time, having the same set of switches across different languages is nice, too.


> The project wants to make sure that it does not create a special "GNU Rust" language

That's disappointing. GNU's extensions to the C language are extremely valuable features. I'd like to see what sort of contributions they would make.


They may be valuable C features, but Rust isn't designed by committee, and if it's missing some useful feature, you could probably convince the Rust team to include it, without having to invent GCC-specific extensions.


Hah


Rust needs a language standard:

https://blog.m-ou.se/rust-standard/

https://rust-lang.github.io/rfcs/3355-rust-spec.html

https://github.com/rust-lang/rfcs/pull/3355

There are many organizations and industries that will not adopt Rust until it has a standard.

C, C++, C#, and even JavaScript (ECMAScript) have language standards. Why shouldn't Rust have one too?

C: https://www.iso.org/standard/74528.html

C++: https://isocpp.org/std/the-standard

C#: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

JavaScript / ECMAScript: https://ecma-international.org/publications-and-standards/st...


That RFC is accepted, and this is starting to happen.

Progress has been disappointingly slow, but the project is alive, and has potential to speed up next year.

https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...


> That RFC is accepted, and this is starting to happen.

> Progress has been disappointingly slow,

I don't think there's ever been a more concise summary of Rust.


LOL some people complain that Rust is moving too slow, and others that Rust is moving too fast and adds too many features. Can never make everyone happy...


Depending on the scenario, I don’t hear mutual exclusion there.

It can be moving too slow on important things, and too fast on unimportant or volatile things.


The problem is that what people think is important or unimportant is not universal.


Good projects with bad leadership die every day.

Hopefully Rust has people with vision and drive in places to let it escape its niche status.


I just filled out the Rust survey [1] and may have done both—iirc there were checkboxes for missing features and concerns the language is getting too complicated. It's a hard balance to find.

[1] https://blog.rust-lang.org/2023/12/18/survey-launch.html


Mara’s blog post (your first link) says essentially that Rust does not need a standard since it already has means for adding features and maintaining compatibility.


Mara's blog post also describes the benefits of standardizing Rust.

Since she created the RFC for standardizing Rust (https://github.com/rust-lang/rfcs/pull/3355) and is also on the team that is working on Rust standardization (https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...), I think she was making the point that Rust has good controls in place for adding features while maintaining compatibility, not that "Rust does not need a standard".

If she really believed that Rust does not need a standard, why would she create the RFC and join the team working on the effort?

Rust is a great language. There is no reason why it should not have a standard to better formalize its requirements and behaviors.


Yes, it’s a nuanced blog post. But that also means it doesn’t come out swinging hard for needing a standard, either. It seems like there is as strong an argument to be made that “Rust is a great language. There is no reason why it needs a standard.”

See the Ferrocene compiler which has been qualified for ISO standards. It’s essentially a standard Rust 1.68 compiler with a lot of added documentation. If you need a Rust compiler for safety critical environments, it’s reasonably priced and requires essentially zero changes to the Rust compiler that they didn’t just upstream. Without a standard.

Yes, it would be nice to have a standard for reducing ambiguity. But does the language need a standard? And if so, then for what purpose?


They created a specification for Ferrocene because Rust does not yet have a language standard:

https://spec.ferrocene.dev/

>> But does the language need a standard?

Yes, Rust needs a standard.

>> And if so, then for what purpose?

For the same purpose that all standards have--to formally define it in writing.

Ferrocene's web site (https://ferrous-systems.com/ferrocene/) shows that it meets the ISO 26262 standard (https://en.wikipedia.org/wiki/ISO_26262).

Why does ISO 26262 matter? What purpose does it serve? Couldn't a vehicle manufacturer just say "our vehicles are safe"?

Which would you trust more: a vehicle that is verified to meet ISO 26262 standards, or a vehicle whose manufacturer tells you "it's safe" without formally defining what "safe" means?

I stated it above, but I will re-state it here: Without a language standard, there are many organizations and industries that will not use Rust. Not because Rust is not a fantastic tool for the job, but because laws, regulations, etc. require standardization and qualification of components.

This means that I can use a qualified C compiler and toolchain to write safety-critical code, but I can't use Rust despite the fact that Rust is a better choice and will help to prevent problems. Standards do matter. Rust needs a language standard.


> For the same purpose that all standards have--to formally define it in writing.

This is tautological. It's equivalent to saying "it needs a standard to be written because it needs a written standard."

I mean what use case is there for Rust language users that isn't already met by the Ferrocene project? And the Ferrocene project is not a standard as in "other implementations will be found lacking," but a description of the 1.68 compiler as-is. That is a specification, not a standard. Ferrous Systems did not need Rust to have a standard in order to qualify the compiler for ISO 26262 and IEC 61508.


>> "it needs a standard to be written because it needs a written standard."

Yes. And if the law in your country requires it to be standardized for specific use cases, then a language standard is needed.

>> what use case is there for Rust language users that isn't already met by the Ferrocene project?

Can you legally use Rust for the control software in aircraft? (https://en.wikipedia.org/wiki/DO-178C)

What about the safety systems for railroads? (https://ldra.com/ldra-blog/software-safety-and-security-stan...)

What about the control systems for nuclear reactors? (https://www.nrc.gov/docs/ML1300/ML13007A173.pdf)


And if you need an ISO 26262-qualified Rust compiler, one exists. Hurrah.

Since you edited your post…simply having a standard won’t immediately qualify the language for those industries. There is only a tenuous link between having a standard and qualifying the language for industrial use.


>> simply having a standard won’t immediately qualify the language for those industries. There is only a tenuous link between having a standard and qualifying the language for industrial use.

Not having a language standard disqualifies Rust.

Administrators say that it shows that Rust is "not serious" and "not ready" for critical work.


> Not having a language standard disqualifies Rust.

Then explain how Ferrous Systems qualified a stock Rust compiler.


Ferrocene is not a "stock Rust compiler". The "stock Rust compiler" is not qualified for safety critical work. If Ferrocene did not add value above what is offered by the stock Rust compiler, why would anyone buy Ferrocene?

Ferrocene is qualified for some safety critical work and plans to have more qualifications soon.

Ferrous Systems wrote a blog post about the process: https://ferrous-systems.com/blog/qualifying-rust-without-for...


The blog post you link to says it's an unmodified fork. Here's a Ferrous Systems employee saying as much:

> Ferrocene is upstream rustc but with some extra targets, long term support, and qualifications so you can use them in safety critical contexts. This is what was stopping things like automotive companies from moving to Rust for things like engine control units, etc.

> It basically costs some money for the support and the qualification documents, but they will be all you need to prove qualification to any pertinent regulatory body so that your software can be certified for use in a real vehicle or whatever.

> ...Ferrocene is just unmodified rustc

https://old.reddit.com/r/rust/comments/17qi9v0/its_official_...

Basically the value add was to expand the support and documentation, which was required for qualification.

Again...no "standard" needed.

I think you are conflating standards and specifications. Ferrous fleshed out the specification, the description of the 1.68 compiler as-is. That means Rust 1.68 as-is was good enough for ISO qualification. Without a standard.

A standard is a minimum bar for languages to meet in order to be considered compliant. That's not a problem right now because there is, for all intents and purposes, a single canonical compiler and that is not likely to change.


As you point out, the important bit is the documentation that shows how and why a language is compliant with a safety standard. The spec serves as the basis on which the compliance can be referenced. A given implementation then is shown to satisfy the spec.

So yes, a given vanilla release is compliant, you just can't do anything with that in a way that an audit will allow until you include all the justifications as to how that makes the language safe (defined by whatever safety standard you need).


Spec ≠ Standard

Rust absolutely does not need a standard. Having a standard is a completely outdated way to design software in the internet era. Having a specification is a great idea and is what most people actually mean.


What’s the difference?


A standard is a way of writing and ratifying a specification. A spec is just a formal document describing the requirements for something, but a standard is when you have large coalition of relevant players accepting the spec as a committee. In this context it's also implied that it's an ISO standard which is extremely slow, rigid and formal and is a downgrade in every way from the current system of anyone who wants to voice their opinion just chiming in on an RFC on github.


The Ferrocene spec permits Rust to be used in those industries.


Go has a really nice spec and multiple implementations too.

https://go.dev/ref/spec


Does any other go implementation support the full language? I thought gccgo lagged significantly.


The official GCC releases tend to lag a bit, as they are released on a longer cadence than the standard Go compiler, though the frontend is kept current upstream. The current release is a bit further behind than normal due to the complexities of implementing the generics back end, but it is being worked on (https://groups.google.com/g/golang-dev/c/5ZKcPsDo1fg).


Thanks for the heads up, I was starting to think it would join gcj in the retirement home.


> There are many organizations and industries that will not adopt Rust until it has a standard.

Counterpoint: rust is doing fine without those organizations and industries. Why change what is working well?


>> Counterpoint: rust is doing fine without those organizations and industries. Why change what is working well?

Because Rust is a game changer.

Wouldn't it be better to have Rust used for the code that runs on automobiles and aircraft? Or you would prefer that they keep using (subsets of) C and C++?

Is the security of your Internet-of-Things (IoT) devices good enough or could they be better?

What do you have against Rust being used in more places and for more purposes?


Ferrous Systems has a basically bog standard Rust 1.68 compiler that’s been certified for use in the most safety critical environments: https://ferrous-systems.com/blog/officially-qualified-ferroc...

This happened without a standard.


>> This happened without a standard.

They created a specification for Ferrocene:

https://spec.ferrocene.dev/

While it is technically not a Rust language standard, it serves a similar purpose for Ferrocene.


If rust wants to replace memory-unsafe languages, it needs to cover their use cases.


Which usecases does it not cover?


>>> There are many organizations and industries that will not adopt Rust until it has a standard.

So if you want those orgs/industries to get memory safety via rust, you either need a standard, or to convince them to not require that.


Those are bureaucratic reasons, not anything specifically lacking in Rust.


Is it doing fine without those? It seems like every time someone makes a personal or professional project in C/C++, the Rust community floods the comments section and talks about how it's irresponsible to use C/C++ and how the author should just throw away the whole project and re-write it in Rust.

It happens so often it became a meme at this point.


> It happens so often it became a meme at this point.

No, the meme is kept alive by C++ people who say that this happens without it actually happening. It happened a few times, a long time ago. Since then, it's either an active discussion about languages where for some reason talking about Rust is a problem but every other language is okay, or it's someone who feels attacked by the mere idea that projects which once could only be done in C++ can now also be done in another language, and starts crying about how the Rust community floods every topic.

There's no longer an area where C++ is the only available option. Get over it. The rest of the world did a long time ago.


This is really odd to me. Language design should start with specification. Your compiler is just a reference implementation.


Rust language design does start from specifications, namely RFCs. What is being discussed here is producing a more formal specification than the ones that currently exist.


Yeah. The culture clash here is shockingly dissonant. The people you'd normally expect to be the biggest voices for documentation robustness are...

... suddenly finding themselves in the "Actually, language standards are bad" camp, all because of a tribal opposition to the FSF?

Write the standard. Then argue that gccrust doesn't do it right. Don't refuse to document the language just to hamstring a competitor.

Also, please start with the borrow checker semantics. I don't think I've ever met anyone who could explain exactly what the rules are regarding what it can/can't prove.
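A sketch of the kind of boundary I mean: the first function below is accepted because the checker (NLL) ends the borrow at its last use, while the second is the classic pattern NLL rejects even though it never misuses the reference (the in-progress Polonius analysis is meant to accept it):

    use std::collections::HashMap;

    fn fine(mut v: Vec<i32>) {
        let first = &v[0];
        println!("{first}"); // the borrow of `v` ends at this last use
        v.push(4);           // so this mutation is accepted
    }

    fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
        // Rejected today: the borrow from `map.get` is only returned on
        // one path, but the checker treats it as live for the whole
        // function, conflicting with the `insert` below.
        if let Some(v) = map.get(&0) {
            return v;
        }
        map.insert(0, String::new());
        map.get(&0).unwrap()
    }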


What culture clash? The Rust project has a language reference, is working on expanding it into a more formal spec, and there are efforts like Ferrocene's to qualify the existing compiler for use in safety critical environments.

The argument is not that language standards are bad, it's that a C++-like ISO standard is unnecessary (when the quality documentation can exist in another form) and C++-like implementation fragmentation is bad.

(Have you read the NLL RFC? The Polonius work?)


There's been active work on a Rust specification for a while now. It'll happen.

https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...


I’m surprised at the negative reaction to GCC-RS. If a language doesn’t have multiple implementations, it’s a pretty sad excuse for a language.


> I’m surprised at the negative reaction to GCC-RS. If a language doesn’t have multiple implementations, it’s a pretty sad excuse for a language.

That used to be the common wisdom (especially because of C/C++), but it's a lot more debated these days.

The consensus in the Rust community is that the current situation (one canonical-by-definition compiler, lots of documentation, a minimal spec for safety-critical industries, and specs for some modular sub-parts) gets most of the advantages of multiple implementations without the drawbacks.


I think that the people who are debating it are missing some things. It often happens that unspecified holes in the documentation are exposed, and standards tightened up, only when a second implementation is attempted. Any difference between the two compilers will mean either a bug in the new implementation (most likely), a bug or ambiguity in the documentation, or a bug in the mature implementation (it happens).

And no, the official compiler isn't canonical by definition. If it were, it would mean it has no bugs, and if there's a crash or a wrong result that's what the language is supposed to do.


Everything in engineering is trade offs. A single front end ends up in a stronger position for the community (i.e. the users of the language) because bug reports are easier (only one project to report them to), collaboration is easier/simpler for OSS maintainers (no issues where “crate foo works fine on rustc but doesn’t on gccrs” to triage/maintain), and language features come quicker (no need to synchronize/debate with other implementations). The downsides of underspecification are much smaller by comparison. As far as documentation goes that’s a red herring because gccrs is reusing the single standard library implementation which means that any documentation issues would still exist (I don’t think I’ve even once needed to lookup documentation for the compiler & language docs issues would be shared as well since rustc in this model still remains the canonical ground truth).


At this point, gccrs is quite immature, so you can simply ignore it. If we get to the point where its quality is good enough, this will change, but the likelihood is that any problems found will result in improvements to documentation (if something is underspecified). There could also be optimization bugs in LLVM that aren't present in GCC, so we could find bugs in rustc that aren't in gccrs at some point, but I think that will only be significant if gccrs greatly improves.

For now, anyone who finds that “crate foo works fine on rustc but doesn’t on gccrs” can just report a bug to gccrs.


I’m not saying that gccrs contributors should stop. If they want to invest their time & energy into it, kudos. I happen to think the costs outweigh the benefits, but you’re right that I can just ignore it to no ill effect right now. Where I would urge caution, though, is when the Rust project starts needing to make accommodations to help gccrs (which is something described in the article). It’s possible that some accommodations help Rust anyway / do no harm, but more & more of these accommodations can start to carry a negative cost over time & thus impact Rust as a whole. These costs are obnoxiously hard to quantify, and because people like to get along there’s a general preference to be accommodating. That’s the real danger that gccrs poses, and the intangible benefits you lay out are likely not that significant in the long term compared with the cost to build gccrs / modify Rust to accommodate it. Of course, we’re just arguing over opinions, since it’s so hard to quantify any of this.


>it's a lot more debated these days

By who specifically? I only ever see arguments against standardization and multiple implementations from the Rust community.


sounds like cope for the fact that there is not a good spec (a "Standard", perhaps?) tbh


The problem is in the long run: syntax stability, avoiding feature creep via extensions/attributes, like we actually have with C (c++ is beyond saving due to its absurd and grotesque complexity).

Without that, you won't have real-life alternatives.


> avoiding feature creeps with extensions/attributes,

That seems like a purely semantic argument? Rust is adding features extremely rapidly! You're just saying it's "development" if it's done by one entity but "creep" if it's done by someone else?


Just personally I can see the virtue of multiple/different implementations. But the issue is building on top of gcc. The GNU toolchain is a dumpster fire, and I legitimately don't know how anyone can develop on it.

Not just ideologically, I mean literally - I don't understand how one sets up a development environment for GCC itself. I've had the misfortune of bootstrapping it a handful of times and it's the single worst behaved piece of software I've ever seen.


Bootstrapping GCC is not hard at all:

https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen...

(there are a ton of infrequently-used options in there; if you ignore them what's left is quite simple. replace all the options with `false` and do the dead-code elimination.)

Hint: don't use glibc. The real problem comes from glibc and the way it depends on gcc internals, which depend on libc (i.e. circularly). Bootstrapping GCC on Musl is quite easy.

glibc is the real dumpster fire.


> Cohen listed a few things that gccrs is already useful for. According to him, the Sega Dreamcast homebrew community uses gccrs to create new games for the Dreamcast gaming console, and GCC plugins can already be used to perform static analysis on unsafe Rust code. The Dreamcast community's interest stems from the fact that rustc's LLVM backend does not support the Hitachi SH-4 architecture of the console, whereas GCC does; even in its incomplete state, gccrs is helpful for this embedded use case.

This is delightful.


This is a little misleading. You don't need to have a gcc frontend for this, just a gcc backend


So we'll finally see Rust support for all the architectures that gcc supports that LLVM doesn't, like Alpha, SuperH and VAX, for starters. That'll be nice!


And mips64, which rustc recently dumped support for after their attempt to extort funding/resources from Loongson failed:

https://github.com/rust-lang/compiler-team/issues/648

This is the biggest problem with the LLVM mentality: they use architecture support as a means to extract support (i.e. salaried dev positions) from hardware companies.

GNU may have annoyingly higher standards for merging changes, but once it's in there and supported, they will keep it for the long haul. It's like buying vs renting. It takes a lot more dev hours to get support into GCC, but once it's there, it stays there.


> This is the biggest problem with the LLVM mentality: they use architecture support as a means to extract support (i.e. salaried dev positions) from hardware companies.

I have a hard time seeing this as a bad thing. Hardware companies seem like the most logical people to pay for maintaining support for the architectures they sell.


The pain is inflicted not on the hardware companies or future customers, but on their past customers.

Their customers pay full cost up front. Vendors can pay full dev cost up front too. GCC's model encourages this.


Support wasn't dumped, it was demoted from Tier 2 to Tier 3 to better reflect the level of support that backend effectively already had. As mentioned in that thread, if someone steps up to maintain it, it can be bumped again to Tier 2.

Be aware that GCC doesn't have a similar level of specificity about the state of each platform they support. There are packages in Debian that "compile" but when trying to execute them on "exotic" platforms you encounter bugs immediately. Supporting a platform in a static codebase of a compiler is easy, but in codebases actively being worked on, like GCC and LLVM, keeping things working is not trivial.


I would assume it also means additional configuration options for already-supported architectures.

For example, I recently discovered that with RISC-V, GCC supports the RV32E target but LLVM doesn't.


Are you sure about that?

I was pretty sure that llvm has supported RV32E for years now. https://reviews.llvm.org/D70401?id=395048


Oh, last time I checked clang didn't support it.

In any case, there are a lot of other compiler flags that are exclusive to gcc.


Can't wait to see some PDP-11 machine code from Rust (the last time I checked, freestanding C compiling still worked on GCC).


> A lot of care is being put into gccrs not becoming a "superset" of Rust, as Cohen put it. The project wants to make sure that it does not create a special "GNU Rust" language, but is trying instead to replicate the output of rustc — bugs, quirks, and all.

In my experience, the part in italics ("bugs, quirks, and all") is a significant mistake.

Rust does not have a specification; there is a reference, but it is explicitly not normative. A language undocumented except for a single reference implementation (as is today's fashion) has a long-term weakness. What is the motivation for slavishly trying to maintain compatibility with the bugs and accidental quirks of another implementation? To guarantee that existing code will work in both implementations. That is a sensible goal, but it comes at enormous cost, because it is enforced in the wrong place.

The problem is that sometimes decisions are wrong, and sometimes bugs are written. But when you promise that all implementations will be bug-compatible, you are also signing up to fossilize those bugs whether you want to or not.

A good example of someone who embraced this (to their credit!) is Microsoft: they spend a lot of person-power making sure that old programs continue to run while trying to fix security and reliability bugs. Rust need not and should not sign up for this burden so early in its lifespan. They should learn from history.

If they want the language to evolve they should embrace QA and QC. Famously "you cannot test quality into a product". You need QA: architecture, design, design and code reviews, etc to ensure that things will work properly and when not, that "failure heads in the appropriate direction". Then later in the development cycle QC (test cases) tries to see if you missed. This doesn't just apply to product development -- it applies to language development especially.

The strong standards (e.g. Common Lisp, C++, FORTRAN) embraced this belief. The weak, de facto ones (most notably Python, but plenty of others) can still become popular, but change is difficult. Look at how long the Python 2->3 transition took, and how few Python implementations there are.


If they find a bug, then presumably they’ll report it upstream and both implementations can be changed.


The point is that fixing that bug may break existing, running code that depends on it.


That's why we run crater against all of crates.io: there are changes that should be allowed that we hold off on because of breakage, and fixes we assumed we couldn't land that in reality had no real-world impact. As time goes on, the confidence a "clean" crater run gives us goes down, due to adoption in closed source environments, but it is still invaluable signal. Depending on the bug, we can keep the current behavior around for prior editions but fix it for future ones.


Presumably they'll both agree 100% of the time as well.


Late in the thread, but this may not be a good thing. I already can't use rustc or golang from distros because they can't keep up with the latest versions of these languages that are being released every few months.

I have a system now where, much to my shock, I am getting rid of gcc because it is not needed, but keeping upstream golang and rust for updating existing software. That was, by the way, a nightmare when I updated golang a few months back for a CVE and had like four places where the go-based apps stored their own go environment.


Thanks for posting the lwn.net link, reminded me of renewing my subscription!


It's a good subscription to have. I've gotten far more value from my LWN subscription than I spent, and recommend everyone that does lowish level work get one.


>the Linux kernel is a key motivator for the project because there are a lot of kernel people who would prefer the kernel to be compiled only by the GNU toolchain.

Linux can already be compiled with clang if you want an all LLVM based toolchain. The duplicate effort of developing and maintaining this does not sound worth it to have GNU "purity."


It's not about purity, it's about options. The ClangBuiltLinux community advocated that Linux should not be dependent upon a single compiler. But when Rust came along, many of the same people suddenly decided that a single compiler was okay.


I always understood it as them wanting control over "best practice".

A GNU compiler could add convenient features that the hardcore Rust users don't want other Rust users to be able to have.

Like ideological purity.


I think you may be misunderstanding here. They aren’t trying to keep the kernel GNU exclusive; they merely want the option for a pre GNU toolchain.


What is a pre GNU toolchain?


They presumably mean "pure GNU toolchain", not "pre".


[flagged]


Rust already uses C++ in LLVM so nothing changes there.

This must be the umpteenth rant on C++ I've seen from you. This does not contribute anything. Please stop.


The underlying point is correct though. Rust developers cannot be expected to maintain a C++ implementation of their language. Half of them learned Rust to get away from C++, the other half never knew C++ in the first place.


Most Rust developers can outright ignore gccrs, and those that do want to use it will never need to look at the implementation. Would it be nicer if the GCC frontend was written in Rust? Sure, I guess. But in a quick look none of the GCC frontends are written in anything other than C++, and introducing a new language here would be an entire project on itself.

And since GCC doesn't actually include mature Rust support (yet), this would also mean that building GCC would depend on LLVM.


The GCC Ada frontend is written in Ada (and has been maintained that way in the gcc source tree for well over twenty years now).


Oh right; that seems to be the exception then, because none of the others I checked earlier used anything other than C++.


For what it's worth, rust developers are already maintaining an independent C++ implementation of the language: https://github.com/thepowersgang/mrustc


[flagged]


Since when does "there is an alternative C++ compiler" have anything to do with "the canonical compiler is in Rust"? The word "lie" here appears to have the sole purpose of aggravating the reader; it certainly has no meaning correlated with its usual one.


Well, rustc is the canonical Rust compiler. mrustc mostly just exists to aid in bootstrapping and really not much else.


I draw a distinction between canonical and bootstrapping.

When I said canonical, I meant the reference one which will actually be used to compile "normal" rust code.

I guess this reference canonical rust compiler has the decency to be fully written using rust itself, am I wrong? (I was told it is, then if it is not, I was lied to)


It is trivial to verify for yourself, without the risk of being misled by what any person might tell you: https://github.com/rust-lang/rust/tree/master/compiler


The frontend is written in Rust. The backend is just LLVM which is written in c++.


I did check that, and indeed, we get the worst of both worlds.

Really, this is not the "simpler" C syntax I was expecting to see in the end, it seems we get c++2 or c+++. I have been coding for decades, from assembly to many high level languages. I even looked at what seemed to be early rust syntax, and there... OMFG I cannot even read that code and make sense of it, the syntax seems filthy, absurdly rich. Exactly what it was not supposed to be, as far as I can recall. In my experience, the worst syntax was perl5, but it seems we have a serious new contender here.

No, I am not meaning to be nice and complacent. You have to understand that something looks really bad here, and you have to expect reactions like mine; they are to be expected given the look of this.

I dunno which kind of organized people is financing and pushing that, there is a scent of Big Tech, aka teams of brain damaged people and scammers, worldwide scale.

All I know is I am going to stay away from this, until I get some time to decrypt and understand that f*** syntax, yeah I said DECRYPT RUST SYNTAX you bloody lunatics.

Yeah, we had team c++ and now we have team rust, GREAT!


How clean the syntax looks is one of the least important things about a programming language. I don’t know why you’re so hung up on it. But yes, Rust’s syntax is more complex than that of C.

Nobody has ever advertised Rust as “C with simpler syntax” so I’m not sure why you expected that.


[flagged]


Rust is much less complex than c++. Again, I’m not sure why you’re so focused on syntax. That’s the easiest part of a programming language to learn. The semantics, abstractions, etc. are what take the most effort to learn, and they are MUCH simpler in Rust than in C++.


I was told, simpler than C, syntax wise and coding a _real life_ rust compiler implementation being _SIGNIFICANTLY EASIER_ than a C compiler.

That was a lie, period.

rust is just another toxic computer language, but this one seems to be financed and pushed like any other by powerful organized groups of people. As I said, there is a smell of Big Tech.

And I don't forget the "ofc, the canonical rust compiler is written in rust"... (but the real hard work is actually done by llvm which require a c++14 compiler). What a f** joke.

I wanted to update my judgment on rust, well, I did.


> I was told, simpler than C, syntax wise and coding a _real life_ rust compiler implementation being _SIGNIFICANTLY EASIER_ than a C compiler.

Who told you this? Ignoring Rust, most languages are more complicated to implement than C. Only the original Pascal seems simpler, since it has no preprocessor, its syntax is less complicated, and there are fewer implicit type conversions.

It seems unreasonable to me to hate Rust because its compiler is more complex to implement than a C compiler, and because its syntax is more complex. This is just a consequence of it having more concepts built-in to the language, like ownership, lifetimes, crates (i.e. modules), polymorphism, generic functions, etc. Most programming languages are not as stripped-down as C or Pascal.

> And I don't forget the "ofc, the canonical rust compiler is written in rust"... (but the real hard work is actually done by llvm which require a c++14 compiler). What a f* joke.

The only C compilers that are freely available and produce good code are written in C++. C isn't special either.



