Integrating “safe” languages into OpenBSD? (marc.info)
258 points by dmm on Dec 4, 2017 | 330 comments



I remember I stopped frequenting /r/programming when it became this weird Haskell echo chamber. It was a bubble where a small but vocal faction inside the community seemed to overreact to any criticism. They methodically and tirelessly responded to every comment with an endless litany of "facts" showing how Haskell could do anything from OS kernels to game programming. They touted every industry mention of Haskell use and demanded evidential proof for any claim that Haskell might not be suited for every situation. It reminds me of the Sea Lion comic[1]. I'm not sure if that faction still holds as much sway today.

I get the same feeling on Hacker News about Rust. There is a point where well-meaning evangelism does more harm than good. I sometimes get the feeling people spend more time writing defences of these languages than they spend time writing programs in them.

1. http://wondermark.com/c/2014-09-19-1062sea.png


It's become clear to me that a lot of the Rust fanatics don't actually know how C++ works and that Rust is their first low-level language. The ones I'm talking about know just enough to sound like they know what they're talking about, unless you actually know the thing they're trying, and failing, to accurately describe.

It sounds and seems like a language worthy of more investigation on my part, but I have to agree that the militant, presumptive attacks on any language besides their Chosen One have left a bad taste in my mouth.


There are those of us who have been writing C++ for decades and are sick of the footguns and build systems but have never had a viable alternative before.

I don't think anyone is arguing that we should throw everything out; however, for all my greenfield stuff Rust has been a welcome breath of fresh air. Yeah, there are some people who are a little too over the top, but a large part of the support behind Rust is from those of us who've used it in production and found it to be a fantastic language and ecosystem.


> I don't think anyone is arguing that we should throw everything out; however, for all my greenfield stuff Rust has been a welcome breath of fresh air.

If you can write things from scratch in a clean-room fashion, perhaps.

C++ is very good when you need to integrate a lot of 'low-level' libraries (that are already in C or C++) together in a meaningful way, like, for instance, browsers do.

Like the capacity to 'talk' with FFMPEG, levelDB, linux, etc. in a completely native fashion, basically.

I think in that regard Rust is sort of late to the party, and will find a wall hard to beat down once the adoption rate gets some maturity.

If you are already an experienced C++ dev, compared to modern C++, Rust has very little to offer. Of course it's more modern, and (maybe) more ergonomic?

But little things like a package manager or explicit lifetime management by themselves won't cut it, at least not for people with large codebases already in C++.

When you are younger, programming languages look like that secret ingredient that will turn whatever you do into a magical tool. But experience tells you that nothing beats hard work, and that you should understand the strengths and weaknesses of each language and know when and how to use it.

So for the problem domains where C++ is being used, it will be hard to beat, because a lot of great software is already there and can be integrated in a native fashion with whatever you build on top of it.

And by the way, modern C++ is already secure enough, which makes some of Rust's selling points a little cosmetic right now.

If you throw a language like Swift into the equation... personally, I think they (C++ and Swift) form a lovely and unbeatable couple.


I mean this in the most non-confrontational way: have you actually used/looked at Rust?

The reason I ask is just about every item you laid out I've had the opposite experience.

Talking with C/C++ is trivial with bindgen and clang. Embedding in C++ is likewise straightforward. I mean Firefox is one of the larger C++ codebases out there. The package manager is great and makes dropping in things much less painful.
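
A minimal hand-written sketch of the glue involved (this is roughly the kind of declaration bindgen generates from a header; strlen here is just an example symbol from libc):

  use std::os::raw::c_char;

  // Declare the C symbol; bindgen emits declarations like this from a header.
  extern "C" {
      fn strlen(s: *const c_char) -> usize;
  }

  fn main() {
      let s = std::ffi::CString::new("hello").unwrap();
      // Crossing the FFI boundary is unsafe, but mechanical.
      let n = unsafe { strlen(s.as_ptr()) };
      println!("{}", n); // prints 5
  }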

On the security front I don't think you can compare the two, I can easily think of things like iterator invalidation that C++ just doesn't handle well.
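
To make the iterator invalidation point concrete, a minimal sketch (any recent rustc): the moral equivalent of push_back into a std::vector mid-iteration compiles fine in C++ and is undefined behavior, but doesn't compile at all in Rust:

  fn main() {
      let mut v = vec![1, 2, 3];
      for x in &v {
          // error[E0502]: cannot borrow `v` as mutable because it is
          // also borrowed as immutable (by the iterator)
          v.push(*x);
      }
  }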

The last big project I did in Rust ran on 6 different targets (x86 win32/linux/osx, android armv7/x86, wasm) targeting 3 different rendering stacks (OpenGL, HTML Canvas, Android Views), integrated deeply with a Lua interpreter. Having done cross-platform stuff like this in the past, it was always a build/toolchain nightmare. With Rust everything was straightforward and pretty darn painless.


> I mean this in the most non-confrontational way: have you actually used/looked at Rust?

I've followed Rust since Graydon Hoare's initial vision for it, which I loved when it first came out. The garbage-collected Rust had so much potential, in my point of view.

Then, as I started to follow it, it was changing into its current incarnation, trying to chase C++ and market itself as a better C++.

I've coded in it for a period of 6 months. I like it and I understand its value, but somehow, with the explicit lifetime thing, it started to become even more painful to code in, even compared to C++, where you still have to create a header file and an impl file without a proper module system.

The first Rust vision has now been re-created, even more beautifully, in Swift.

I get what Rust is; I think it has its value and its place. I'm just saying that there will come a time when it will be very hard for Rust to get more adoption, because even if you like it, you just can't walk away from the great ecosystem all coded in C and C++. Interfacing with C is pretty easy for other languages, but you still have to wrap C++ interfaces and use just some of the features, because you lose access to the whole thing (think LLVM, for instance).

So C++ has this network effect, which it achieved by virtue of all this amazing software ecosystem: if you speak its native language, it's much easier to interact with.


> The first Rust vision has now been re-created, even more beautifully, in Swift.

Had you said Poney I'd have agreed, but I don't see how Swift matches the first Rust vision (a safe concurrency model).


I meant Pony[1] obviously

[1]https://www.ponylang.org/


I have used Rust occasionally on weekend projects.

The Rust toolchain still lacks support for mixed-mode debugging.

With C++, I can have a mixed .NET, JS and C++ solution and without problems debug and single step across all languages, regardless of targeting desktop, server or UWP.

Similar applies to Java or Android tooling.

Additionally, if one is into native desktop coding, bindings to Gtk and Qt are still a work in progress, and there isn't anything that would match something like Blend.

Finally, the borrow checker still hinders some programming patterns that are quite common on GUI code, a few of them will be tackled with NLL.
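
To sketch what I mean (no particular GUI framework's API, just the usual workaround): callback-heavy GUI code wants shared mutable state, and the standard escape hatch is Rc<RefCell<T>>, which trades the compile-time borrow check for a run-time one:

  use std::cell::RefCell;
  use std::rc::Rc;

  struct Model { clicks: u32 }

  fn main() {
      let model = Rc::new(RefCell::new(Model { clicks: 0 }));
      let handle = Rc::clone(&model);

      // Imagine this closure stored as a button's on-click handler.
      let on_click = move || handle.borrow_mut().clicks += 1;
      on_click();

      println!("{}", model.borrow().clicks); // prints 1
  }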

Ah, and cargo does not support binary libraries although rustc can handle them, which is an issue for companies selling libraries.

I am aware all these issues will be eventually fixed, it is just as I see the situation currently.


The real problem is when it's not just binding: it's actually using metaprogramming, which is a huge part of why I use C++. There's no equivalent to Blaze or Eigen for linear algebra in Rust, and much of that heavy lifting is handled by templates. (There are some libraries with rudimentary expression-template-like behavior, but nothing as sophisticated.)


That's probably where you and I differ; I don't see metaprogramming as a plus. More often than not it tends to lead to some long compile times (which is why boost never sees the light of day in any codebase I work in).

I'll also counter that having worked in a fairly math heavy 3D graphics space I've never needed them, although our requirements may be different.


This is the sort of thing that bothers me about these kinds of arguments. Sure, you're always going to be able to find certain domains where existing libraries or language features make a certain ecosystem very well suited. And that's fine. No one's saying to discard the best tool for the job for another one that might have some general advantages but won't actually get you where you need to go.

But there are a ton of domains where C++ doesn't have anything special to offer and you could just as reasonably write the code in rust.


Perfect. I absolutely agree. The thing is, I use C++ because it does a lot of things specialized for what I need. I don't work on every problem, and I look forward to the chance to use Rust. That being said, for now, the problems I'm working on are domains that very much benefit from C++ and I will continue to use it.

I totally agree that Rust is a great language and has wide applications. It doesn't fit my current needs, but I don't doubt it meets many people's needs. More power to them -- and it's cool to have a "The new C++ killer language" which can actually live up to its hype, unlike Java and C# and Go and D.


What's wrong with D? I quite liked it when I used it (for a relatively small project). It's not really a C++ replacement (GC), but as a high-level language the only problem I found was the lack of a large library ecosystem - the language itself was quite pleasant to work with (more so than C++ and even Rust, which I also like).


D seems not bad -- I just mean that Rust was the first "C++ killer" which actually could match C and C++ speeds. I also have a lot of respect for Walter Bright and Andrei Alexandrescu.

I'm sure I'd much prefer it over Go, for example.


> And by the way, modern C++ is already secure enough, which makes some of Rust's selling points a little cosmetic right now.

There is not a single large-scale network-facing widely attacked piece of software I'm aware of in C++ that has not fallen to some memory safety problem. Memory safety issues frequently produce RCE.


> And by the way, modern C++ is already secure enough, which makes some of Rust's selling points a little cosmetic right now.

Sorry. In hindsight, and out of context like this, I think I picked harsher words than I should have. I didn't mean to be disrespectful to all the great and valuable work that is going on in the Rust community.

> There is not a single large-scale network-facing widely attacked piece of software I'm aware of in C++ that has not fallen to some memory safety problem. Memory safety issues frequently produce RCE.

Sure, Rust will be better at this. No doubt. But let's not forget that a language like C++ needed to evolve while still working, so how many of those security issues are down to ancient coding practices of the past, in software coded in the 90's or even the 80's? I guess we should at least wait until Rust has millions of tools and billions of lines of code running in production to see what sort of issues might plague Rust code the most. But Rust will need to get there first. So let's not forget that C++ is a language whose roots go back to the 70's, that it is responsible for great tools, and that it is still being used to build big projects, with millions of LOC and many devs working in groups on some very sophisticated pieces of software, and this is no small feat.

What I meant to say earlier, and maybe this wasn't so clear, is that the choice is not one-dimensional; you can't just cherry-pick one point of view when you need to decide what kind of tech fits best for a given scenario. I'm trying to point out that other languages like C++, or even C, can still be good bets when you look at more vectors of influence, not only the security aspect or which language has more FP idioms in it. In real life, it's not as easy as some Rust evangelists imply it is.

C++ is not as good as Rust in the security aspect, there's no doubt about it, but modern C++ is fine when you sum the vectors of which language will fit best for a given scenario. I mean, you can be explicit about ownership and make the API clear about ownership and lifetime. Of course Rust chose to be more pedantic about it. But as a C++ dev I have no issues with lifetime or ownership management just by using smart pointers, std::move, etc. (even in a multi-threaded environment).

But I don't think it's as easy as the Rust community implies to "just use Rust" for a project where C++ may also be a good fit.

Even for new, clean-room projects, I think C++ may be a better fit for certain scenarios, and I'm glad that there's also Rust as a choice. But I think this zealous type of evangelism, sometimes based more on spreading fear than on selling Rust on its own virtues, can hurt more than it seems.

I guess by now Rust doesn't even need to be compared to anything else; thanks to great hackers like you it's already showing its own virtues. I'm just trying to say that by now it has probably reached a peak of users who can use the language in more low-level scenarios, and it should probably aim to get more of the Ruby, Python, PHP, NodeJS crowd, to fill its army.

Because from now on it will be harder for Rust to get more adoption without a big ecosystem backing it up (like Swift has with Apple, Java with Android, and C and C++ have with Unix).


>No doubt. But let's not forget that a language like C++ needed to evolve while still working, so how many of those security issues are down to ancient coding practices of the past, in software coded in the 90's or even the 80's?

But C++ was a mess from the very beginning: many of its problems are inherited from C, and the language itself was an ugly extension of C, not a carefully elaborated new language. So most new features were introduced to repair the breaches of such a messy design.


>modern C++ is already secure enough

Why have nullptr dereferencing? Why have undefined behavior on optional<> dereferencing? Why have dangling refs? Why not check borrows statically? God, we don't even have a decent and safe variant (sum) type in C++. Add to that all the (unnecessary) complexity C++ has (how many whatever-values and initialization forms does it have?). It is unsafe as hell, and lots of it could be made safer trivially in a new language (say, Rust or F*).
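
For anyone who hasn't used one, a minimal sketch of such a variant type in Rust (the names are made up): the compiler forces every alternative to be handled, and there is no way to read the wrong one:

  enum Shape {
      Circle { radius: f64 },
      Rect { w: f64, h: f64 },
  }

  fn area(s: &Shape) -> f64 {
      // Omitting a variant here is a compile error, not run-time UB.
      match s {
          Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
          Shape::Rect { w, h } => w * h,
      }
  }

  fn main() {
      println!("{}", area(&Shape::Rect { w: 2.0, h: 3.0 })); // prints 6
  }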


Just as a sidenote, Rust has no defined behaviour when dereferencing a null pointer. What makes it worse is that a null pointer in an Option<> is handled as None instead of Some, which kind of broke some of my code until I found that particularly nasty bug and worked around it (with offsets).


Rust does not have nulls beyond unsafe blocks, does it?

>What makes it worse is that a null pointer in an Option<> is handled as None instead of Some

What? How could NULL be treated as Some?


NULL is still a valid memory address, which in this case I had to use.

When packing such an address into a *mut or *const within an Option<>, a NULL will be treated like a None when you write, essentially, Some(NULL).


It does not.

I think your parent must have been doing something strange; if you have a *const T, a null one will still be a Some. It's &T, which cannot be null by definition, that uses the null value as None.
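
This is easy to check (a quick sketch; the size assertion is how the niche optimization shows up):

  use std::mem::size_of;
  use std::ptr;

  fn main() {
      // A null *raw pointer* inside an Option is still Some.
      let p: *const u8 = ptr::null();
      let opt: Option<*const u8> = Some(p);
      assert!(opt.is_some());

      // Option<&T> is pointer-sized precisely because &T can never be
      // null, so the all-zero bit pattern is free to encode None.
      assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
  }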


I was actually using *mut/*const and got this bug; I don't think I used &T since I was handling raw memory addresses.


Hm, then I don't know what happened, as I just checked and it's definitely not doing the optimization, as it shouldn't!

If you can reproduce you should file a bug.


It might help to know that I was writing a UEFI application, so basically kernel mode.

Otherwise, I've redesigned the affected subsystem, so I'm not sure I could reproduce the behaviour.


> C++ is very good when you need to integrate a lot of 'low-level' libraries ... like, for instance, browsers do.

That's sorta funny that you use that example, since rust is a Mozilla project, and they've already moved some parts of Firefox over to rust, and plan to move more in the future. Clearly this is not an impediment; reasonable C/C++ interop was a must-have for rust's design.

> If you are already an experienced C++ dev, compared to modern C++, Rust has very little to offer.

That's a weird argument to make. Essentially you're arguing that you should never learn any new languages once you have proficiency in something similar? That would seem to be a bit short sighted.

> ... at least not for people with large codebases already in C++.

Sure. No one's saying "throw out all your C++ code and rewrite everything in rust" (and if anyone is, they can be safely ignored). But that doesn't preclude taking a look at some parts of your code that might benefit from being written in a safer language, or rewriting small tools in rust, or considering it for new projects.

> When you are younger, programming languages look like that secret ingredient that will turn whatever you do into a magical tool. But experience tells you that nothing beats hard work, and that you should understand the strengths and weaknesses of each language and know when and how to use it.

No one's saying languages are magic, just that, by some metrics, some are objectively better. This might be an odd example, but I did Java for many years before switching to Scala a couple years ago. In the past few months I've had to go back to Java, and it's been frustrating that it's so easy to write certain classes of bugs that I just never see or write in Scala. Does that make Scala some magical tool that fixes all my problems and means I don't have to do any work? No, of course not. But it is objectively better than Java by some metrics (and sadly, worse by others).

> modern C++ is already secure enough

That's a pretty bold claim, and I'm certain it's not true. You might have a different idea of what "enough" is than I do, though. I'd believe that the "default" state-of-the-art C++ use is more secure than it was 10 years ago, but that doesn't mean that a language like rust can't offer superior safety guarantees.

And there's something to be said for encouraging safer practices through language features and convention. If you can make certain kinds of errors in C++ programs (I don't see pointer arithmetic going away any time soon, nor people always using vectors and the like and never raw arrays), then you (or someone else) will, regardless of whether there are safer ways that help you avoid those kinds of errors. Given a language with a huge surface area like C++, and disagreements on which parts are the "safe" ones to use, it's inevitable.

> If you throw a language like Swift into the equation... personally, I think they (C++ and Swift) form a lovely and unbeatable couple.

Despite Swift's open source status, I really don't see it making any meaningful foothold outside macOS/iOS.

I don't have a horse in this race. I abandoned C++ years ago, and I've only started learning rust a few weeks ago. I like it, but I have a lot to learn, and I've already found some rough spots that aren't so great. No tool is perfect.


(well I mean there has been Ada* for ages)

*or fortran or pascal


The tool chain there is still quite irritating and “old school”.

At least for me, part of the attraction of Rust is the development tools around the language itself (e.g. cargo).


Yeah, in no particular order, here's what I love about cargo (quick sketch after the list):

  Seamless cross-compile. I've done a ton of cross-compile projects, cargo/rustup is the best by far.

  One-line dependency import. Saves *hours* setting up build chains.

  First-class win32 support. Hard to stress how rare and awesome this is coming from a gamedev background.

  Integrated testing. So nice just to have.

  Integrated benchmarking, ditto above.
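
To make the dependency and testing points concrete, a minimal sketch (the crate name and version are just illustrative):

  // In Cargo.toml, pulling in a dependency is one line:
  //
  //   [dependencies]
  //   rand = "0.3"
  //
  // and tests live next to the code, run with `cargo test`:

  pub fn add(a: i32, b: i32) -> i32 {
      a + b
  }

  #[cfg(test)]
  mod tests {
      #[test]
      fn adds() {
          assert_eq!(super::add(2, 2), 4);
      }
  }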


It sounds and seems like a language worthy of more investigation on my part, but I have to agree that the militant, presumptive attacks on any language besides their Chosen One have left a bad taste in my mouth.

This 100 times over. It's a language that seems novel and strong enough to stand on its own merits - but there's a section of the community that makes me want to stay away.

I wish they'd spend less effort re-writing already-working C tools and more effort writing new, better tools. A drop-in replacement for "ls" isn't that exciting for me.


Pretty sure that's already happening with tools like ripgrep: https://github.com/BurntSushi/ripgrep


On the other hand, part of the complaint in the main article is the lack of basic core utils at a standard that OpenBSD would consider acceptable, or things that could be a base world dependency for them.


But that ties into what I'm saying - trying to 'oxidise' a system written in C that already works.


OpenBSD developers themselves did that, rewriting a lot of things that already worked. But they wanted to improve them security-wise. So your argument doesn't fit with what they did already. Language is tangential here. Their main focus was security. So it's pretty reasonable to ask why things can't be rewritten in Rust to improve security further.

Note, that the answer doesn't say that it can't benefit security, which would be a principal argument for OpenBSD. It brings side reasons as to why it's not easy to do (like integrated tools, developers' availability and so on).


For those who want the OpenBSD devs to abandon their many, many years of experience with C and instead adopt Rust, I think it's a fair question to ask the Rust folks why they haven't just started from scratch and built their own unix-like operating system. This seems to be a more reasonable approach than trying to force an existing operating system to adopt their language.

Oh wait, that's right, none of them has even gotten around to writing a replacement for ls, grep, .....


> I think it's a fair question to ask the Rust folks why they haven't just started from scratch and built their own unix-like operating system.

Some did: https://www.redox-os.org

> Oh wait, that's right, none of them has even gotten around to writing a replacement for ls, grep,

They did that for some utils: https://github.com/redox-os/coreutils

> This seems to be a more reasonable approach than trying to force an existing operating system to adopt their language.

I don't think anyone is forcing anyone to adopt anything here. The question was about integrating languages like Rust into OpenBSD.


> For those who want the OpenBSD devs to abandon their many, many years of experience with C and instead adopt Rust

Who exactly? Can you actually point to anyone making this desire known in earnest?


Or to put it another way, find something new and valuable as your Trojan horse?


> A drop-in replacement for "ls" isn't that exciting for me.

And a good getting-your-feet-wet project for someone else :)


> I wish they'd spend less effort re-writing already-working C tools and more effort writing new, better tools.

Is this really a thing? Folks keep pointing out this bugbear of folks going out and asking projects to rewrite in Rust and ... there aren't that many examples of this.

If anything most of the more famous Rust projects (ripgrep, redox, etc) are clean-slate.


This phenomenon has been documented: http://transitiontech.ca/random/RIIR


https://github.com/uutils/coreutils for instance

Personally, I don't find this a bad idea, as long as it is not the primary thrust. The reason it is needed is that you aren't going to convince everyone to change the default tools they've used for however many years. If you want to make things safe, you have to swap out implementations and keep same/similar interfaces.


Yeah, that's my point, there are scattered examples but it's nobody's primary thrust.


> It's become clear to me that a lot of the Rust fanatics don't actually know how C++ works and that Rust is their first low-level language.

That doesn't change the fact that Rust programs are, when memory safety is concerned, more secure than C++ programs. This is true both in theory and in practice.


>It's become clear to me that a lot of the Rust fanatics don't actually know how C++ works and that Rust is their first low-level language.

Why would not knowing C++ bar people from anything? It's a wildly specific skill, diminishing in importance every day.


There is a sort of social game that seems to emerge at some point, combining the worst of in-group bullshit and logic-puzzling as a substitute for actually attempting to help folks honestly seeking answers.

The game goes like this: start discussing something: say, including rust in OpenBSD.

- Assume it (here, using rust in oBSD) is a great idea.

- For each objection, explain why it is trivial. Your tools are your own reasoning and knowledge of programming, the ability to ignore, minimize or belittle any and all reasons why things are the way they are, assuming away any legacy issues, and remembering that all needs and use-cases in the world exactly match yours.

- When you get bored, just walk away. It isn't your problem, after all, and you were just trying to help. High-fiving your cohort optional.

It ends up simultaneously coming off as arrogant and clueless. Last time I had a small problem and the time to try something new, I started it in rust. After getting a series of jackass responses to a simple question[1], I wrote it in C, because I just don't care enough to put up with twits fluffing their own ego at the expense of newbies.

This problem is hardly unique to rust - unfortunately it happens everywhere nerds gather. But rust has a really bad case of it.

[1] Found the answer a few days later when I thought of a better search term.


Where was this? If it was in an official Rust forum, we don't allow "jackass responses", and I'd like to take care of it.


That's frustrating to hear. I've stuck to the #rust[1] channel and have always received friendly answers whenever I've gotten stuck, and I've had a lot of questions over the past 3 years. :)

[1] https://chat.mibbit.com/?server=irc.mozilla.org&channel=%23r...


> After getting a series of jackass responses to a simple question

Getting those responses where, exactly?

If you had mentioned literally any other language with a FLOSS community around it I wouldn't question that statement. However, Rust proponents so clearly claim to do the opposite-- you've even got Steve below explicitly stating that jackassery isn't allowed in Rust forums.

What's the data to the contrary?


I agree. It makes the community look childish and inexperienced by association.

I think I recall reading somewhere on Rust's website that telling project maintainers to switch to Rust was frowned upon; however, I can't find it right now. Sometimes I feel like it should be written in a 20pt font on the home page. Or maybe written in blinking letters every time you run rustup.

I always feel it's very presumptuous and somewhat disrespectful to second guess the devs of a project, in particular if you're not a big contributor yourself. Pay your dues. These messages are effectively not a whole lot more useful or insightful than saying "Rust rulez, C droolz" or "have you considered indenting with spaces instead of tabs?".


To be clear, you can take it from me: yes, don't go tell people to re-write things in Rust. Agree 100% that it's presumptuous and disrespectful.

At the same time, I don't see it actually happening as much as people say that it happens. Maybe I just don't see those things. If I did, I'd tell them to cut it out.


People have told me for the longest time that Java is a shitty language I shouldn't be using. Most of those people said C++ would be a far better idea without even bothering to show any reason why C++ would be more suited, beyond "Java is slow, huahuaha, it is shitty, huahahaha". So, I have to admit I take a bit of pleasure in seeing those people get challenged on their presumption that C++ is the greatest language on earth.

Language advocacy is far older than Rust. And many language advocates are far more obnoxious than what I read on HN with regard to Rust. I actually have the feeling that the C++ crowd is just a bit sad that the usual defense of "well, yeah, other languages do those things better, but the code they produce is far too slow for our problem, so they're useless!" doesn't cut it anymore.


> People have told me for the longest time that Java is a shitty language I shouldn't be using.

Java is a great language in theory; the problem with Java is that people have evolved to use layers upon layers upon even more layers of abstractions, the tooling is horrendous (try configuring maven vs npm), and especially that complex Java applications all end up being as slow as molasses (e.g. SAP GUI, Lotus Notes, Eclipse, jDownloader, Vuze on the GUI application side).

This leads to people projecting their horrible experiences with the Java ecosystem and applications to the language itself, which is a pity.


> try configuring maven vs npm

vs Cargo (Rust) - http://doc.crates.io/build-script.html (simplicity)

vs elm-package (Elm) - https://github.com/elm-lang/elm-package (automatic semver compliance)

vs Stack (Haskell) - https://docs.haskellstack.org/en/stable/GUIDE/ with stackage-sets (packages tested to work well together) - https://www.stackage.org/


> try configuring maven vs npm

That's kind of a weird comparison since npm is a package manager and maven is a build tool, packager, dependency resolver, test runner, ...

A better (but still lacking) comparison would be "try configuring maven vs. npm+webpack", and then I'd say... oh god, I'll take maven in a heartbeat.


This is typical of enterprise computing, where Java reigns, exactly because it makes it easier to do the "layers upon layers upon even more layers of abstractions" that were being done in C, C++ and Smalltalk before Java came into existence.

Also there are lots of similar architectures in the .NET world with WCF, Web Forms, EF, SharePoint, ...

Anyone that thinks JEE is bad, should spend some weeks trying to do the same with CORBA, DCOM or SOM, in either C or C++.

Same thing will happen to Go, JavaScript, Scala, or whatever language gets adopted into the enterprise cathedral.


How is Maven (or Gradle, but I prefer Maven) bad? I've never used npm, but I quite like Maven's use of XML for defining project dependencies etc. with clear structure.


I have very little experience with Maven, but I find it hard to understand and restricting. Maybe I should take the time to learn the fundamentals properly and maybe I would be enlightened, but it seems to work very differently from other build tools.

Example "Lifecycles" [0]. The documentation says "Maven is based around the central concept of a build lifecycle. What this means is that the process for building and distributing a particular artifact (project) is clearly defined." The first sentence just says "this is important!". I have no idea what that second sentence is supposed to tell me. The documentation goes on without a clear definition what a lifecycle is. I figure it is a kind of command. Afterall, the basic lifecycles are "default" which is "make all", "clean" like "make clean", and "site" like "make docs". I'm not sure, there is no good definition of "lifecycle" in the documentation there. Hard to understand.

Well, there is a predefined set of lifecycles, and each lifecycle has a predefined list of phases that stuff can hook into. For example, with Java the compile phase will turn my java files into class files. The test phase will turn class files into test reports. Recently, I wanted to merge multiple coverage reports [1]. Turns out, this is done in a compile phase, which is reasonable. However, the reports are generated in the test phase, which is after the compile phase. At this point, I'm lost. This corset of phases is restricting me.

[0] https://maven.apache.org/guides/introduction/introduction-to... [1] http://www.eclemma.org/jacoco/trunk/doc/merge-mojo.html


Spot on. As a Java developer, I've bounced around between Maven, Gradle, and ad-hoc shell scripts (and Ant back in the day!), and just cannot understand how so many people still think Maven is any good. It's not the XML - that's a superficial annoyance. It's the elaborate but extremely rigid model of what a project is and what a build is, which makes even the smallest deviation from a completely vanilla project excruciatingly painful.

Gradle has all sorts of failings, and is overly complex in its own way, but at least it gets the meta-model right: provide fundamental tools for building a graph of tasks with dependencies between them, and ship a set of hopefully useful tasks built on top of that.


Because that rigidity excels when you have a Fortune 500 project, distributed across 4 development sites across the globe, with 100 developers of multiple consulting companies developing several modules.


The java tooling is fantastic... when coming from the C++ world.


The Java community is a poor example to make your point, since it's always been vicious and obnoxious in relation to C++. It was partly built on tearing down C++.

The reaction of some parts of the Java community to Hypertable choosing C++ instead of Java was nothing short of embarrassing for instance, but it's not just that, over the years I have seen too many examples to list :)


The C++ world hasn't exactly been kind to Java either. The two languages have the kind of vitriolic contempt for each other that only siblings could have.


I find most of the programming/lang/framework subreddits intolerable. I dip into those waters occasionally, but unsub pretty quick. Too many egos, trolls, asshats, and too much noise. There are notable exceptions, like /r/elixir. But I abandoned /r/python and /r/django months ago, and won't be back. I'm an adult, and prefer to discuss things with those who will show at least a modicum of respect to others.


/r/rust is actually one of the better ones, thankfully.


The flipside of that is - I'm an adult, I want to talk to people who can handle critiques and debate without getting unduly upset. Disagreement != trolling.


That's not a flipside. That's a restatement of my criticism of these subreddits. There is a difference between constructive criticism and/or correction (which I am all for), and belittling, demeaning, insulting, and cursing at people.


There is a difference between constructive criticism and/or correction (which I am all for), and belittling, demeaning, insulting, and cursing at people.

IME a lot of thin skinned people on the internet always view the former as the latter. It's equally as tiresome.


IME a lot of people use "people need to be less thin skinned" as a weak rationalization for dickish behavior that would get them fired from their job and/or ostracized from their real-life social circle. But since it's online and no one knows who you really are, it's somehow "acceptable". No, it isn't, and communities that encourage or allow that type of behavior are incredibly toxic.


[flagged]


Your parent's response wasn't passive aggressive at all, it was a direct statement of their view of the problem with people who think the problem is other people being "thin skinned".


Really not sure how you don't see it. Mimicking my sentence structure, implying that people who talk like me are ostracised by society (!?), and using phrases like "incredibly toxic". I could almost hear the tone of voice reading that sentence.


I assure you that you have no idea what tone of voice I was using. There was absolutely nothing passive-aggressive about my response to you.

I was merely pointing out how you're flat-out wrong, and I am super tired of people who somehow believe that feelings, emotions, and empathy don't matter in communication. There are some people (such as yourself) who seem to believe they don't, but in the end that just ends up being short-sighted, non-inclusive thinking that drastically reduces the number of people who want to associate/work/do business with you. If that's ok with you, then by all means, continue to engage in boorish behavior, and I'm sure in the end you'll reap what you sow.

So... is that "blunt" enough for you?


Ok, I see how the sentence structure mirroring could be read as passive aggressive, but the rest of that is just flatly stated disagreement; there's nothing "passive" about it.


So why is it that every time I point out the glaring problems of Rust, which are mentioned in their docs and contradict their marketing hype, I am downvoted into oblivion? Nobody ever dared to check out the docs and issues. I mean claiming memory safety with an unsafe keyword and refcounting, and calling deadlock-prone concurrency with manually acquired mutexes "fearless concurrency", whilst having no idea about the actual concepts solving those problems.

Pure fanboyism. Better than C, yes, but not better than ATS or Pony; it's much slower and less safe than those.


I just want to say that that comic is absolutely wonderful. Made my morning :)


> It reminds me of the Sea Lion comic[1]

You nearly made me spray coffee all over my precious Microsoft Natural Ergonomic Keyboard. (Which - no joke - would have been the third I would have destroyed this way.)

Thank you for starting my day with a healthy dose of laughter! (I know that comic is trying to make a serious point, but the surreal example it chose is just so hilarious!)


> I sometimes get the feeling people spend more time writing defences of these languages than they spend time writing programs in them.

Programmers love language wars.


In WWII it was famously true that fighter pilots would fiercely argue for the absolute superiority of whatever plane they had been flying for a few months. For example, Thunderbolt pilots hated being moved to Mustangs (etc.) - even though to our eyes the Mustang is obviously a superior plane.

Once your instincts adapt to something intimately (and programming languages require that), that's what feels right. You'll argue vociferously against a change and cite what are objectively true reasons. (The Thunderbolt could dive faster; it was a brick.)

Of course, in the United States Army Air Force, you still had to switch.

The old hands usually remain convinced of the superiority of what they know, and irritated by Young Turks telling them something else, nicely or not. Because the Young Turks can't be right. But they are, often.


> Programmers love language wars

Programming Language Advocates love language wars

Most programmers just like to be aware of ALL the PROs and CONs of the languages and choose the right one for the task at hand.


Hey, better the Rust Evangelism Strikeforce than the C Org, whose members are bound to that billion-year contract maintaining legacy embedded code riddled with buffer overflows and undefined-behavior traps, not to mention broken or brittle build tools. (The billion-year contract is there because that's a lower bound on the expected lifetime of that code base.)


One thing proggit has over HN is that there's much less political discussion. I find the constant political debates and drive-bys here incredibly tiring. Only about one-third of the links on the frontpage here are of any interest to me as a 'hacker'.


I have to agree. There seem to be more and more political/economic discussions here, in which most of the posts seem to be from another planet. The more non-technical the discussions, the more strongly and obnoxiously held the positions seem to be.


The points made against switching to newer languages are valid. It will cost time, it will cost effort, it will take many years of work to convince stubborn maintainers to switch.

But it will not stop _everyone_ from adopting better languages, and their efforts will eventually surpass the older, less secure systems.

When there's feature parity, each and every new exploit will be called out: "this wouldn't have happened in our system". And that is a good thing. There _are_ better alternatives to C.

As for the point about how nobody is working on replacements, that's wrong. There's (partial) replacements for most coreutils written in rust, and a whole kernel has been under active development for years now.


I think this misunderstands the OpenBSD philosophy. The more likely it is "everyone else" disagrees, the more likely it is Theo is choosing the right course for the project.

The system being secure is a secondary benefit to it being comprehensible and coherent to an individual. It's not enough for the output of a magic box to be a better widget, even if the widget is better in every measurable way. The box itself must not be magic.

This isn't "right" or "wrong". It's simply a stance that values an individual's ability to understand their computing device from top to bottom.

Rust is an amazing and wonderful piece of technology, produced by a great investment of energy by brilliant minds. I'm incredibly grateful it exists and look forward to its continued development and adoption.

But I'm not brilliant, and I don't have a lot of energy, and so I'm also grateful there exists an operating system I can understand, and other people who continue to work to make that operating system useful.

Much open source technology is created by people with a financial incentive for others to use it. That's fine. OpenBSD is written by people who want to use it. Its usefulness to others is coincidental. That's fine too.

I'm glad there is so much money funding the development of open and free computing innovations. The world is a much better place for it.

But there isn't a lot of money funding the rest of the unix philosophy besides the "open" part, like the parts about composability and anti-monolithic design. Because those things are not valuable in financial terms, and are potentially even destructive towards the purpose of capturing value at all.

I am grateful there is a small radical free operating system defending our freedom from monoculture.

And I'm similarly glad for their de facto antagonism towards proselytizing of all kinds, even if that proselytizing is done with good intention, and even if that proselytizing turns out to be right.

That's why you need to show them the code. When you show the code, it means you understand. Until you understand, you're not free.

At least, this is my interpretation. I'm not affiliated with the project in any way. Just a fan.


"But I'm not brilliant, and I don't have a lot of energy, and so I'm also grateful there exists an operating system I can understand, and other people who continue to work to make that operating system useful."

Writing correct, efficient C takes more years to master than I've seen people take to learn Rust. Also, the C compiler and many other parts of OpenBSD are black boxes to their developers. Your worries apply equally to their situation unless you've read and understood all their dependencies. On top of that, the OpenBSD people are always rewriting stuff for claimed benefits in maintainability or security. It's just that, when we talk about a safe systems language, one that can be as simple as a Wirth language or as complex as Rust, they suddenly can't justify the effort of even piecemeal replacement.

Then, next week, they'll put piles of effort into a mitigation across their toolchain whose benefits are so probabilistic even they can't tell you what attacks will fail or succeed. It's worth it, though, to improve their security standpoint. Unlike the pain of recoding even one utility in something like Rust. That's where this email draws the line.

Of course, I encourage people to do exactly what he asks every time another BS argument is raised. He's worried about drawbacks of a non-C language? Make something like Cyclone, or get a Wirth language better at selectively turning off safety, compiling to C with a great C FFI. He's worried about compile times? Fix the compiler. He says utilities aren't rewritten in the better language? Rewrite them, showing its advantages, especially against the bug reports in OpenBSD's tracker. Just keep pounding away at the problems until he either runs out of excuses not to import the stuff or makes it extra clear they simply don't like language/method X for arbitrary reasons. Regardless, you get a pile of safe utilities/modules for OpenBSD to do useful things on top of fast, safe tooling. Win, win. :)

Alternatively, contribute those ports to OS's that want to bring in best-of-breed tooling for boosting safety and security. They're usually smaller, less mature, and need all the help they can get.


My impression is that the Rust developers and community are firmly in the practical camp. They will keep plowing at Rust and its ecosystem. They will improve and optimize every nook and cranny they can find.

I wouldn't be too worried about Rust; it will have a niche at least as big as Ruby's, and IMO it will be way more entrenched, since its target domain moves way slower and the barriers to entry are way higher than for scripting languages and web frameworks.


>He's worried about compile times? Fix the compiler.

If it were that simple, the compiler wouldn't be slow in the first place. Rustc and ghc have both been too slow to be usable their entire lives. There doesn't seem to be any reason to believe they can be made fast enough to consider using.


> The system being secure is a secondary benefit to it being comprehensible and coherent to an individual.

This is a really interesting and compelling philosophy to me, but it’s the first time I’ve heard OpenBSD described this way! Why is this not mentioned on the project’s homepage?



Nothing in that list says anything about being "comprehensible and coherent to an individual". I'd agree those are useful and good things, but security is indeed mentioned and emphasized there, and not as being secondary to aesthetic concerns.


As regards understandability and minimising "magic boxes", this means it's pretty much just core OpenBSD (kernel & base system). The moment you add the usual desktop environments or a huge monolith like Firefox, you've lost these benefits. Same with drivers. The more drivers you have, the more byzantine the system becomes. Keeping things minimal and well designed yet useful must be a continuing challenge for OpenBSD!


1) I think Firefox is already pledged. Firefox 57 is in current, built with Rust.

2) Nothing wrong with kernel and base. It provides the kernel, clang, X.org, documentation and the BSD games, among the rest of the Unix tools.

3) The drivers are in base and NOT in module form, unlike the GNU/Linux crap with incompatible vendor releases and binary blobs tied to a version.


Hah. Going by the "anything you compile with a magic box becomes itself a magic box" argument, even the OpenBSD base is a magic box as soon as you run it on an x86 machine.


As for the point about how nobody is working on replacements, that's wrong. There's (partial) replacements for most coreutils written in rust, and a whole kernel has been under active development for years now.

I acknowledge the strong pull to go and re-implement existing standards like POSIX in a new language like Rust.

But, having had to again deal with all the corner cases of signal handling and threads for a Linux application, I encourage any prospective Rust OS developers to pick a different path.

Please, please, let's move beyond POSIX. Give me a sane set of operating system API calls that work in a reasonable way together. And that can compose easily.

So that future developers don't have to deal with file read calls that may (or may not) be interrupted by a signal, and all that.
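
To sketch the kind of wart I mean (in Rust, since that's the thread topic; std surfaces EINTR as io::ErrorKind::Interrupted), every careful caller ends up writing the same retry loop:

  use std::io::{self, Read};

  // Portable POSIX-style code must assume read(2) can fail with EINTR
  // and retry; this boilerplate is exactly the wart being complained about.
  fn read_retrying<R: Read>(r: &mut R, buf: &mut [u8]) -> io::Result<usize> {
      loop {
          match r.read(buf) {
              Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue,
              result => return result,
          }
      }
  }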


I find your post quite funny in the context of the topic..


I totally respect (and have donated to) the OpenBSD project, and ran it for years for my firewall machines. Though these days I'm running containerized Linux firewalls for ease of upgrades. Anyway...

I do understand OBSD's objections to using a new implementation language for their core utilities. And I understand their philosophy for what they're doing and how they're doing it in the general case.

But the fact remains that POSIX, as it has evolved (or not, in some cases) makes writing correct programs harder than necessary. Especially modern multi-threaded programs.

I'd like to see something better and simpler. And maybe written in something other than C.


Were there any successful attempts to design a brand new and complete OS API in the last decade?


UWP, slow and steady, will eventually get there.

ChromeOS, the OS APIs are the web platform.

iOS, almost everything that matters is done via Cocoa

Android, Java reigns. There is very little UNIX exposed to userspace, even less so since Google started clamping down on the NDK with version 7.


Not really, but POSIX itself is almost 30 years old (and primarily standardized interfaces that were considerably older), and a lot of interesting stuff happened ca. 1995-2005. See e.g. BeOS or Inferno.


Yep, I look forward to the day OpenBSD is obsolete. How Theo can be so anal about features like ASLR but then completely disregard the potential of compiler-enforced memory safety is beyond my cognitive power.


Considering the complete lack of viable competition in the space, that day is a long way off.


Why is Redox not viable competition?


Come on, I really like Rust but these types of comments are why people like Theo de Raadt don't take the evangelists seriously. You can't take a project like OpenBSD that has existed for more than 20 years, is being used right now to power all kinds of applications around the world on all kinds of architectures and say that Redox, an experimental x86-64 OS, is viable competition. What with the seven applications written for it listed on Wikipedia, including "A simple web browser with basic image support" and "A simple editor that is similar to notepad". I don't know what the driver situation is like but I don't expect to see a lot of support for anything beyond the most basic devices.

In general I think the "why don't you rewrite your project in $languageoftheday" remarks should be dismissed and ignored. Theo is entirely right when he says that if people think it's a good idea then they should spend less time bothering the devs about it and more time actually coding it.

You think Curl or Emacs should be written in Rust? Then do it. Don't bother the devs about it. If it's really better then people will make the switch.


I love this. I think both of you are right, but your perspectives are for different time frames, with one side having an ideal view about a future time frame, while the other thinking about the now. This is the kind of situation that leads to innovation and how projects like Linux or neovim started. It's a natural way to progress. I hope jackpot51 gets pissed off and takes redox to the point where it proves the naysayers wrong, which will take time and effort, and doesn't serve the now.


I really hope that redox will eventually become a competitor to OpenBSD and other flagship OSs; I'd love to work on a Rust OS. That being said, I still don't think it makes sense to label it a competitor at this stage.

If I decided to start practicing Formula One next week, that wouldn't automatically make me competition for Lewis Hamilton. Redox has some way to go before people can seriously consider using it over OpenBSD for real-world applications.


That is precisely what I am going to do


Are you joking? Are you running large, mission-critical systems in production on Redox already?

I'm sure it will be good enough for that some day, but that day is not now.


Redox is an unproven toy OS compared to some of the more mature and battle-tested alternatives. And don't think battle-tested is a metaphor; some of these have been used to operate battle gear.


I feel like you should add a disclosure about being the creator of Redox.


Ouch, that's not a good look at all.


Because of GNU/Linux.


Maybe I'm misunderstanding but there is a pretty serious effort to rewrite all the gnu coreutils in Rust:

https://github.com/uutils/coreutils


That is sort of addressed by Theo:

>Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.


He has a point though... As we get more CPU/RAM, we as programmers don't even bother to check how many resources we are using. Personally, I don't know whether it's a good thing or a bad thing. Also, in terms of systems languages, I believe only Rust has some potential to truly replace C, although a large part of C usage still takes place in the embedded world, where Rust has yet to be ported to many embedded processors. Go's GC makes it a show-stopper for use in many places; as for Haskell, I would be very interested to see more low-level embedded programming going on in it.


> He has a point though…

Does he? He "stat[es as] fact" that

> There has been no attempt to move the smallest parts of the ecosystem, to provide replacements for base POSIX utilities.

which as xwvvvvwx notes is categorically wrong, then points out that rustc can't compile itself on i386, which is relevant… how?


> then points out that rustc can't compile itself on i386, which is relevant… how?

Remember he is speaking as the leader of an operating system project. As in, a basic part of the project functioning normally is compiling the whole thing from scratch. If something needs cross-compilation to even get started it won't end up in OpenBSD base.

I seem to recall when it supported more architectures they made a public show about how they weren't going to cross compile even for targeting wimpy/sluggish machines, because recompiling the OS was a good stress test for the kernel itself.


And thus they lock themselves into the lowest common denominator as they target smaller systems. This is ridiculous. OpenBSD looks like performance art, literal security theater.


> And thus they lock themselves into the lowest common denominator as they target smaller systems. This is ridiculous. OpenBSD looks like performance art, literal security theater.

Those lowest common denominator systems tend to find bugs not present elsewhere.

And stating that openbsd looks like performance art and security theater seems to indicate you haven't looked at what openbsd has done for security.


OpenBSD can be security theater and still do great things. But at some point sticking to a 90s Unix/C aesthetic looks deeply anachronistic.

Yes, bugs are found at boundaries and interfaces, large or small. Something has to be different for the output to be different.

If someone writes a bug proof TLS implementation while at the bottom of the pool wearing SCUBA gear, it is still security theater.


Do you use it? I have had it on one machine or another since around 2000, and I have by and large been pretty happy with it. Not every personality type will like it, and sometimes they do make odd decisions, but it is pretty well put together.

Another point about the *BSDs, something doesn't have to be in base for you to use it. The base system is supposed to be small and not have a lot of dependencies. You are free to use things in ports and packages or compile them yourself. So this is not the same as never being able to use rust.


> then points out that rustc can't compile itself on i386, which is relevant… how?

I'm actually gobsmacked this is the case. There used to be a saying, only half-joking, that a language that can't host/compile/bootstrap itself is nothing more than a toy. As others have more eloquently pointed out, it shouldn't have to be explained why people who write operating systems and compilers would consider that a no-go.


Eh, 64-bit desktops were becoming common a decade ago. Considering the small minority of developers using a 32-bit machine for development, I don't see that it is worthwhile to spend effort on that.

Note that Rust can (and does) target various 32-bit platforms (ARM and maybe RISC-V, not sure) for cross-compile. Self-hosting on a 32-bit platform is such a minor drawback these days. 64-bit ARM processors are becoming more common these days as well.


There are still plenty of 32-bit machines out there. I personally have maintained and still support an Intel 4004 knockoff used for controlling a big industrial machine, and of course more modern variants of the same machine with an 8008. Some of these couldn't even run DOS, but they're still supported.

32-bit is not even close to being that old, and lots of machines, applications and platforms require it and need maintenance. OpenBSD especially is a system that I, as a Linux fan, recognize for supporting old hardware much better than Linux does.

And it's not only 32-bit x86; there are plenty of other architectures limited to 32-bit, mostly from the ARM sector, but I believe some older MIPS and other even more exotic archs are supported.

If rust can't selfhost on x86 how can we expect it to selfhost on more exotic 32bit hardware that BSD supports?


Oh, I wasn't implying that OBSD or NBSD should adopt Rust as a development language. Not at all.

My point is more that I wouldn't want the Rust developers to spend time on making it able to self-host on 32-bit platforms. I'd prefer they spend their time elsewhere, that's all.


> I'd prefer they spend their time elsewhere, that's all.

And I'm sure the OpenBSD developers would prefer to spend their time on OpenBSD instead of working around Rust's lack of support for a platform that OpenBSD supports. I'm still kind of surprised that when the question "why doesn't OpenBSD switch to Rust?" was answered with "because Rust doesn't self-host on a platform we support", the response has been "well, drop that platform." How about: no. How about: if someone wants Rust to be a viable option, then they have to adjust Rust to be a viable option, not ask other projects to massively constrain their currently working support of a platform.

I hate to sound like the old geezer, but I get the impression that many people here have no clue about software beyond desktop and mobile. It's like they don't even realize that firmware and operating systems have to be written and maintained on older hardware. There is a lot of software out there running on legacy hardware that the world depends upon which you never see.


> I'm actually gobsmacked this is the case.

So gobsmacked you apparently couldn't even begin to attempt answering the question, but felt you just had to go on a rant as irrelevant as you believe it is righteous, huh?

> There used to be a saying, only half-joking, that a language that can't host/compile/bootstrap itself is nothing more than a toy. As others have more eloquently pointed out, it shouldn't have to be explained why people who write operating systems and compilers would consider that a no-go.

Rust has been self-hosted for almost as long as it's existed. The bootstrapping OCaml compiler was left behind back in 2011.


> Rust has been self-hosted for almost as long as it's existed. The bootstrapping OCaml compiler was left behind back in 2011.

Not on 32-bit x86, which is what this whole conversation is about. So if OpenBSD used Rust in base, they would have to drop support for i386.


We haven't even gotten to alpha, hppa, loongson, luna88k, macppc, octeon, sgi, or that backwards beauty of big-endianness, sparc64. But hey, it compiles on amd64! That should be good enough, right?


He's talking specifically about OpenBSD base. Unless you can point to a rust binary in OpenBSD that Theo forgot about, he's not wrong.

i386 is relevant because OpenBSD supports i386.


> He's talking specifically about OpenBSD base.

No, he is very explicitly saying that

> There has been no attempt […] provide replacements for base POSIX utilities.

Which once again is categorically false; a GitHub repository purporting to do exactly that has been provided.

> i386 is relevant because OpenBSD supports i386.

i386 is supported; the issue is compiling the compiler on i386.


> i386 is supported; the issue is compiling the compiler on i386.

which is required for the system to be self-hosting

seriously - what is being said is this:

" oh hey lets throw away the functional and perfectly good entire base set of utilities for this 1/2 complete project on github using a language that doesn't even natively build on all of our supported platforms and wouldn't even remove the need for a C compliler in base, and further complicate the base toolchain, not to mention breaking all kinds of other builds which use shell utilities expecting certain behavior, etc ,etc, etc, because somone thought it would be 'neat' to do this. And whyyyy aren't you taking me serously??? "

every few days (hours?) some noobish person decides to ask some fantasy question about whatever topic of interest they are noobing about on openbsd (and other OS) discussion lists, and then gets whiny when they are called out for being 'green' about life itself. this is another of those cases, and I have no idea why it got crossposted here or upvoted.


Thank you for saving me the trouble of writing that.


> every few days (hours?) some noobish person decides to ask some fantasy question about whatever topic of interest they are noobing about on openbsd (and other OS) discussion lists, and then gets whiny when they are called out for being 'green' about life itself. this is another of those cases, and I have no idea why it got crossposted here or upvoted.

this sort of attitude is astoundingly hostile and toxic for an open source community to hold.


I agree, but understand how tiresome it can get when people who -- understandably -- don't know any better do a drive-by of your project and suggest things, things which have often already been discussed to death, or don't even need to be discussed because anyone knowledgeable about the project would immediately see there's no need for discussion.

Now, that doesn't mean that a hostile rebuff is required or good policy, but random people who actually do not know what they are talking about, and haven't taken the time to learn enough to know what they're talking about, don't really deserve a long, in-depth, drawn out rebuttal or discussion.


> There has been no attempt to move the smallest parts of the ECOSYSTEM to provide replacements for base POSIX utilities.

You deliberately cut out the part which states he's talking about the ecosystem of OpenBSD, in an OpenBSD mailing list. That is an extremely disingenuous and uncharitable cherry-picked interpretation. He was categorically talking about efforts to port OpenBSD utilities to such a language and merge them into the project (i.e. the OpenBSD ecosystem). What you're suggesting he said is just plain FUD.


And if the end user is unable to compile everything, i386 would be only half-supported (or maybe, for the austere OpenBSD maintainers, not supported at all).


I think it is important for us to think more about the resources the programs we write consume: when they are compiling, when they are executing, and when they are just "dormant" on persistent memory. Experience and history show that the bigger a program is, the more resources it needs to function and the more bugs it has, which consequently makes it more difficult to debug and prone to failure.


I run full Windows 10 on a tablet (dual core, 2GB RAM), and it's pretty amazing to me how many websites that have no reason to run slow completely fail on it.

I can only imagine it works fine on dev machines with much faster quad+ cores and 64GB of RAM or whatever.

Just as an aside, it's done a lot to have the tablet be my primary "fiddle-at-home" machine: keeps me really conscious of resource limits, including ones I normally don't think of like screen size. (Most websites render terribly in landscape on a 10" tablet.)


Win10 just doesn't work well on 2GiB. Compressed pages are nice but not enough to prevent swapping. It also doesn't help that MS prevents you from running 32-bit on modern hardware to alleviate some of the memory pressure.

The solution is to install 32-bit Linux on it. Then it won't suck.


Windows works great; the issue is the applications, and indirectly the developers of said applications, some of whom are on HN.

Most applications request hundreds of megabytes if not entire gigabytes; the system will swap to death after you open an app and a browser tab on Facebook.

I remember a friend who bought a 2GB netbook; the thing froze to death whenever he opened just Eclipse. He had to return it.


Or i386 OpenBSD.


[flagged]


We've asked you before not to engage in personal attacks like this. Please stop.

https://news.ycombinator.com/newsguidelines.html


Could you tell me what personal attack that would be?


> You should refrain from posting comments like you did here -- they make the community worse. You should also reconsider how you provide tech support, in general.

So an actively hostile comment in response to an opinionated one is somehow better and not measurably worse? You could just as easily have a discussion on why they think it objectively sucks, and you both may learn something from it.


I actually outlined why I thought they were wrong, and "shut your mouth" is clearly attached to not making subjective statements about other people's belongings (which I absolutely stand by as inappropriate) -- so yes, an angry but on-point comment is better than just throwing out "lolsux" uselessly, or in the case above, based on wrong information.


If you keep posting like that, or like this for that matter, we will ban you. Please clean up your act.

https://news.ycombinator.com/newsguidelines.html


As I've said before, you'll have to ban me then, if you really believe I'm a negative contribution.

I'm not going to stop responding in reasonable, human, and direct ways to comments.

I'm also going to note, yet again, your highly biased enforcement:

Nothing to someone who did insult me, just random dog-piling based on your whims because you happened to notice an emotional outburst.

That's terrible community management.


"Shut your mouth" is indefensible on HN, and pointing at the other person is a low-quality move. Since you don't want to use this site as intended or take responsibility for misbehavior, I've banned the account.


> Firstly, shut your mouth

> You should refrain from posting comments like you did here -- they make the community worse.

I don't even know what to say.


That you disagree with my assessment that angry lead-ins to detailed responses to people insulting your things are appropriate, but feel insulting other people's belongings (while providing incorrect and useless tech advice) is appropriate -- or even constructive?

Because that's what you did say. (:

I (as you might expect) disagree with your assessment of what makes a functional community.


Rust fell into the same trap that killed many, many gamedev companies:

Performance matters. Even more than features.


It depends on what kind of performance. The only performance they are "lacking" seems to be compilation speed (an annoyance, but they're working on it) and compilation memory usage (rarely a problem, considering we also have C++, Java and C# around :) ).


The performance of the compiled code is great, and the performance of the compiler is something they have identified as a major issue to fix, and are working to solve it.


So did the now-dead game companies.

Performance is like money: it's easy to squander and hard to acquire.


> Performance is like money: it's easy to squander and hard to acquire.

In this case, there were several decisions to not care about performance right now in order to emphasize correctness and shipping faster, while making sure there are no technical obstacles to making compilation faster in the future. The main time sink in Rust compilation is that all the abstractions in the Rust code get compiled into the initial bytecode representation passed to the LLVM side, and they are only reduced there, instead of cutting down on the hierarchies on the Rust side. This costs performance in several places -- the creation of all the bytecode, the copying of it, and then LLVM parsing it all in. The upside of doing it this way is that it makes the Rust-specific compiler much simpler and easier to implement, and that the optimizations that remove the towers of abstraction on the LLVM side are extremely well tested.

As Rust matures, optimizations that reduce the complexity of the initial LLVM bytecode can be, and probably will be, done on the Rust side.
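If you want to see the pile of IR in question, rustc will dump it for you (textual LLVM IR; the flag has been around for a while, but treat the exact invocation as a sketch):

  $ rustc -O --emit=llvm-ir hello.rs   # writes hello.ll, the textual LLVM IR
  $ wc -l hello.ll                     # even a trivial program produces plenty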


Really? I'm pretty sure for most game companies, shipping is king and everything else is secondary.

Also, have you seen gameplay footage of in-development titles, particularly older ones before the days of Unity and UE4? Usually they're choppy as hell because the engine is under development at the same time as the game, and devs prioritize getting a golden but slow codepath working first so that the artists have something to go off of, and all the optimizations are shoved in to recover framerate in the months right before release.


So...which game companies?


You are equating two non-comparable things, Rust the language and gamedev companies using an abstract concept (performance). It could be a lede for an opinion piece or an essay, but it isn't an argument.


Is that still true? PUBG is probably the worst-optimized game ever written, barely getting 60fps on a setup that can run Overwatch at 200+ fps. And yet, it's the most-played game on Steam.


As someone who plays PUBG, I think it's popular in spite of its performance, not because performance is irrelevant.


Are you implying that Rust isn't performant?


If compilation exhausts the IA32 address space, then I'd say it's not adequately performant as a whole, regardless of how "efficient" the resulting binaries might be.


Hard disagree. Compilation is like encoding a video— it's a price you pay once, and if you know the resulting binary will be run millions of times, it's totally worthwhile spending a lot of compute and memory upfront to get that binary as fast as possible.


Right, but the problem is that - in the OpenBSD world - it's a price that's paid much more than once. Sure, that binary might be run millions of times, or it might be run only tens or hundreds or thousands of times before an update comes around. And that's just for one platform; OpenBSD supports lots of platforms, both 32-bit and 64-bit, and all of them are expected to be fully usable for (among other things) developing OpenBSD (which includes, you know, actually compiling OpenBSD).

To rephrase that a bit: OpenBSD is designed to be a system where any user on any platform can contribute to OpenBSD using the tools included in OpenBSD's default install. Deviations from that will almost certainly receive a cold reception at best.


What languages do you come from? What is fast compilation for you?

The compilation phase can take hours in C++, and up to a day when compiling huge projects with all the optimization flags.

Live with that for some time and it will quickly prove to you that you were wrong. Compilation time matters.


Fast compilation: less than a second (feels like not waiting at all)

Slow compilation: more than a minute (makes me start browsing HN, miss the end, and thus lose even more time)

To have fast compilation even with big projects is hard. Go, C, and D are usually fast. Scala is usually slow.

I care about development builds primarily. The edit-compile-test loop must be really really fast. Optimization flags are irrelevant, because if performance matters you often must have them enabled for development as well.


This is off topic a bit, but there is a solution for this:

> Slow compilation: more than a minute (makes me start browsing HN, miss the end, and thus lose even more time)

See the thread here: https://askubuntu.com/questions/409611/desktop-notification-...

TL;DR install undistract-me, add 2 lines to bashrc, and you will get a desktop popup when a command that takes longer than 10 seconds to complete is finished.

Fedora does this by default on install and I have found it so handy. Kick off a compile etc, then just browse HN/reddit til I get the popup.
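If memory serves, the two lines are these (paths differ by distro, so treat it as a sketch):

  # in ~/.bashrc
  source /usr/share/undistract-me/long-running.bash
  notify_when_long_running_commands_finish_install
  # optional: raise the 10-second default threshold (variable name from the README)
  export LONG_RUNNING_COMMAND_TIMEOUT=30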


Yes, this helps a little. As a fish user, I had to write more than two lines [0] though. A second monitor is detrimental, because the notification is too far away sometimes.

[0] https://github.com/qznc/dot/blob/master/config/fish/config.f...


I don’t think people are saying compilation doesn’t matter. Certainly, I would consider C++ to be a language at the high-performance end of the spectrum. High-performance languages like C++, Rust, Ada, Haskell, OCaml, and Swift have relatively long compile times, but I would classify them as languages suitable for applications requiring high performance. Go is an interesting exception in that it produces pretty high-performance results without long compile times.

But you do have a point. Things are so much better now than when I started programming 50 years ago. Machines and languages are so much better. Programming is a dream compared to back then.


A fast edit-compile-run cycle makes development a lot more efficient in my experience.


Right, definitely! But in that case, it's really incremental build time that's the important thing. Not that overall/first build time isn't important too, but in general I'd rather see my incremental build go down by 80% than my first build go down by 20%, and I think this is reflected in where the Rust team has historically applied their perf efforts, eg: https://blog.rust-lang.org/2016/09/08/incremental.html

(Appreciating as well that most incremental build gains come from avoiding unnecessary work, so they're as much the domain of the build system as they are of the compiler.)
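(As of late 2017 incremental compilation is still opt-in; if I remember the switch correctly, it's an environment variable on nightly:)

  $ CARGO_INCREMENTAL=1 cargo +nightly build   # reuses work from previous builds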


So, you say Python is totally missing the point and is wrong? Even when encoding videos, performance matters.

Also, the importance of optimizing compilation is no reason not to work on i386. This is a point Rust needs to fix; and not only i386 support, but also other architecture families as hosts.


Python is slow as fuck.

All the heavy API and computation libraries are wrappers around C binaries that are optimized to death.


This seems like a bit of a trite point unless many Rust developers are actually working on i386, though the compiler itself might not work very well on i386.

Not many people are whining about our C compiler toolchains not fitting into our microcontrollers.


Every port being self-hosting is a fundamental project value in OpenBSD. The reason is that they believe every port should be useful and functional, not just a novelty, and requiring that every port be self-hosting is a way to enforce this.

For example, the NetBSD project has a Dreamcast port, but like most of their ports, it is cross-compiled. The last time I tried the port it would crash when put under high load for a while and would kernel panic when you tried to play audio. The NetBSD Dreamcast port is not functional in the sense that OpenBSD would like to enforce, something which is not relevant to NetBSD and in no way denigrates them, but merely serves as an example.


Thanks for the context, that wasn't clear to me.


> This seems like a bit of a trite point unless many Rust developers are actually working on i386, though the compiler itself might not work very well on i386.

Trite point? OpenBSD supports i386[1]; if they pull a Rust compiler into base and start rewriting things in Rust, then they can't support i386. Dropping a supported platform is not a "trite point".

[1]: https://www.openbsd.org/plat.html


At this point i386 is legacy for most of the world. OpenBSD is an ultra-conservative, orthodox project, therefore they will probably support i386 for years into the future - I mean, they supported VAX until 2016.

That is a choice they are entitled to make, the trade-off being it would appear to make most modern technologies a poor fit for adoption in OpenBSD - that's the price they have to pay. It is a problem of their own making.


“i386” is OpenBSD's label for the 32-bit Intel architecture (they don't actually support the 80386). Intel still sells these.


They support 486s though, which is quite rare for 2017.


486s and 586s are still sold for usage as embedded systems. They are well understood and some of them managed to pass certification decades ago.


You can still buy 32-bit Xeons, even. Sometimes srs bsns requires stability, too.

https://ark.intel.com/Search/FeatureFilter?productType=proce...


not to mention, 386 or not, there are still other 'odd' 32-bit platforms which still have huge use for embedded things where openbsd would work great (hello mips/arm32, for starters)


There's a 32-bit x86 processor in every PC with the Intel Management Engine version 11 or later.


OpenBSD is an ultra-conservative, orthodox project,

...which is exactly the kind of a system one needs for production environments.


If OBSD feels it can make use of older arches then so be it; many users will find less intensive jobs for the respective hardware, and it saves it from going to waste/recycling.


Not to mention plenty of people in the "developed" world (let alone "developing") can't afford to buy a new computer, and thus are going to use 32-bit "legacy" desktops and laptops for a very long time.


Unfortunately those sorts of people tend to be less informed about IT, and that eventually leads to their NetBurst, Pentium M, and Atom machines performing even slower than they would if they were maintained properly. They probably waste more power waiting for the machines to accomplish their tasks in an under-maintained state.


So... my main point was that "I have to cross-compile the rust compiler from another arch" is not equivalent to "rustc is not usable on i386". And even if rustc wasn't usable on i386, the binary it produces would be.

So you could have a scenario where the i386 binaries would have to be cross-compiled from a 64-bit machine, but the end result would work just fine on those machines.

I wasn't aware that OpenBSD's objectives included having each arch be able to build itself. Totally reasonable goal. It's harder to do "Dreamcast ports" in those scenarios, though.


How do you even find an i386 processor these days?


AMD still sells Opteron CPUs [0]. You can buy new servers with them [1].

[0] https://www.amd.com/en-us/products/server/opteron

[1] https://www.thinkmate.com/systems/servers/rax/amd#browse


In the OpenBSD world i386 == x86. And it's pretty easy to find an x86 processor nowadays.


… such as? You're merely restating the claim, without providing any proof.

Consumer machines, AFAICT, are all amd64 (or x86_64, if you prefer that name). I understood the original post to mean i386 == x86, and I agree — where do you even find an x86 today (for sale, in a non-niche use case, i.e., "pretty easy")?


OpenBSD has as a goal to run on much more than just standard currently-sold consumer hardware. You can certainly disagree with that goal, but that doesn't make it go away.


Oh I totally get this—but what is this hardware that people are still using OpenBSD with that they haven't upgraded in 30 years? Targeting i386 as opposed to, say, i486 or i686 seems like an exercise in idealistic masochism.


I think you're misunderstanding: "i386" is just shorthand for "32-bit Intel". They're not specifically talking about the 80386 chip. The same problem exists when you consider newer 32-bit chip families in that ISA.


I have at least 3 Pentium machines still lying around, plus one netbook which is 32-bit only. These machines are still widely used, especially where replacement is expensive or even impossible.


Linux and BSD ran for a long time on 32-bit systems. 4GB of memory is an ocean in my mind. Those systems should be able to compile their own programs and tools.

On a related note, we will eventually be running development tools on microcontrollers. Not that little 16-bit parts will run the tools, but that 16-bit parts are going away. In price-sensitive areas this will not happen, but for things with a larger budget, why not run the tools right on the target? If your controller is an RPi, why not use it for development?


They don't mean "x86" (the 32-bit instruction set), but i386 aka the Intel 80386, a processor introduced in 1985: https://en.wikipedia.org/wiki/Intel_80386


https://en.wikipedia.org/wiki/IA-32

It includes 486, Pentium, etc.


From the article you linked:

> the 80386 instruction set, programming model, and binary encodings are still the common denominator for all 32-bit x86 processors, which is termed the i386-architecture, x86, or IA-32, depending on context.


It’s not a trite point for BSD, but if those are the kinds of objections holding back adoption by Linux and BSD, then I think the end result will just be a new Rust-first operating system.


> It’s not a trite point for BSD, but if those are the kinds of objections holding back adoption by Linux and BSD, then I think the end result will just be a new Rust-first operating system.

That would be great! Get back to me when it exists, instead of trying to dissuade an already established project from one of its primary goals and handwaving said goals as "outdated" and "unimportant". Heck, it's open source, so if someone feels like it, they can just fork and go hog wild!


People are still using i386? I'd assume even if they are, it's such a tiny minority that it shouldn't be an excuse to hold everyone else back.


OpenBSD supports a wide variety of hardware platforms, including machines with Alpha, PA-RISC, and SPARC64 processors. On each of them, the base system is able to compile itself.

If rustc cannot even build itself on i386, what kind of support can we expect for other platforms with an even smaller user base?

On a project such as OpenBSD they cannot suddenly drop platforms and only support amd64 as portability is one of their main "selling" points. Furthermore, that would also mean losing the developers who are interested in these alternative platforms and probably chose OpenBSD because of its platform support. Such developers usually contribute not only to platform-specific parts, but also to system utilities and ports.

For the full list of platforms, see https://www.openbsd.org/plat.html


> OpenBSD supports a wide variety of hardware platforms, including machines with Alpha

Can confirm, spent many happy hours hacking on an AlphaStation running OpenBSD!


> On a project such as OpenBSD they cannot suddenly drop platforms and only support amd64 as portability

They… don't need to? That rustc can't compile itself on i386 doesn't mean you can't ship a rustc for i386, it just means you have to cross-compile it.
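For ordinary Rust programs the cross-compile step is a one-liner (rustc itself goes through its own build system, but the principle is the same; Linux target shown purely for illustration):

  $ rustup target add i686-unknown-linux-gnu
  $ cargo build --release --target i686-unknown-linux-gnu   # 32-bit binary from a 64-bit host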


I see it as part of portability that you do not need to use any external system for bootstrapping.

Imagine that, as a developer who compiles base from source, you had to find another system just to compile rustc and then transfer it to your machine. And you would have to do this not just once, but for every compiler bug fix, on top of the overall rapid evolution of Rust. I think many in the OpenBSD community would oppose such an approach, even without further considering other aspects such as security implications.


This is the case IIRC with Android - even though I want to build it for a 32-bit ARM architecture, I can't build it on anything but 64-bit. I guess the vast majority of Android systems cannot even compile Android!

https://source.android.com/setup/requirements


> I see it as part of portability that you do not need to use any external system for bootstrapping.

You always need an external system for bootstrapping; you're not assembling the base C compiler with which you're compiling everything else. At some point you need to obtain a compiler from somewhere else.


...And the point is that the OpenBSD project has a policy of self-hosting. The bootstrap compiler for base (along with everything else needed to build base) must exist in base.


Addressed in the article.

> In OpenBSD there is a strict requirement that base builds base.


One of OpenBSD's sweet spots is turning old hardware into useful, secure, reliable network infrastructure.

i386 might not be as popular as it was for 'normal' OSes, but I wouldn't be surprised if OpenBSD had a lot of people still using it.


OpenBSD is a perfectly serviceable operating system on an old Atom board I have (nice router!), and on the i386-only Core Duo iMacs/MacBooks long abandoned by Apple. These will be perfectly good machines to use for a long time, and it's really great you can get up-to-date support on these systems from this great project. Obsolete means different things to different people.

I'm a big fan of what Rust promises, but the solution is not that OpenBSD changes its policies or drops i386. Rust should become self-hosting on i386.


Theo would say, if you think so then get to writing code instead of commenting about it.

These aren’t just philosophical comments; the implication for him and the OBSD devs is spending massive time developing these things. Dropping a supported platform and taking on a huge investment of effort needs serious justification.


Linux supported i386 until 2012. I think in this context it is more likely to refer to pre-Pentium x86 CPUs, though (I'm not certain of that).

I've also seen i386 used multiple times to refer to any x86 arch.


i386 in this context refers to the 32-bit Intel x86 CPU architecture in general, and to generic PC compatibles specifically. OpenBSD currently runs on 486s and better:

> All CPUs compatible with the Intel 80486 or better,[0]

[0] https://www.openbsd.org/i386.html


There's significant infrastructure in place that uses i386. It used to be fairly popular; I'm sure you can Google it.


Yep:

  $ uname -a
  OpenBSD hostname 5.9 GENERIC.MP#6 i386


> Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.

Is cargo supported on i386 platforms? Also, Rust compiles itself; AFAIK there is no way to compile Rust/Cargo except with a previous version of it. If one of the past builds of Rust was backdoored, every version between then and now is backdoored. The language may be safe, but the environment is only as safe as it has never been compromised. OpenBSD compiles everywhere C code works; Rust/Cargo works where it's supported, and it will take decades to catch up on some architectures.


Okay, there's a lot of misunderstandings here. Theo is right about some things, and wrong about some things. And people are misunderstanding what things he's right about. The things he's wrong about are very minor.

Rust absolutely works on 32-bit platforms, though we often use the i686 target rather than an i386 one. Platform support list is here: https://forge.rust-lang.org/platform-support.html
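(You can also ask a local toolchain directly; the grep pattern is just for illustration:)

  $ rustc --print target-list | grep '^i.86'   # lists the 32-bit x86 targets rustc knows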

Theo is talking about building rustc, not compiling most Rust programs. That's the first distinction that it seems like many people are missing.

However, apparently the compilation process OOMs when building on an i386 box. I don't use those platforms, but I'd believe it. The Rust compiler is large. However, I thought (and looking at our CI, this seems to be true https://travis-ci.org/rust-lang/rust/jobs/311223817) we do compile with an i686-unknown-linux-gnu host (for this build), so I dunno. Maybe it was a fluke, maybe I'm misunderstanding, I'm not sure.

We often provide artifacts via cross-compiling, but this is unacceptable to OpenBSD. That's totally okay. They have good reasons for doing this.


If compilation of rustc runs out of memory, that represents an upper limit on the complexity of actually viable Rust programs, and given how much software is larger than a compiler, it is a discouraging performance level.


Rustc is more than just a Rust program; we compile LLVM from scratch, for example. Is the OOM in the Rust code, or in the LLVM code, or in the final linking, or what? It's not clear.

The compiler is one of the largest Rust programs that exist. Last I checked, it was three quarters of a million lines of Rust, but a quick loc shows 1.5 million lines of Rust, 2.3 million lines of C++, and 900,000 lines of C (again, mostly LLVM and jemalloc).

Servo is also very large, and they don't report having OOMs, though I'm not sure if they build on 32-bit or just cross-compile.


FWIW, I've compiled LLVM/Rust with 2G. I never ran out of RAM compiling Rust, but on the LLVM side GNU ld would run out of memory; using the gold linker fixes that, as does the configuration flag for enabling separate debug info, -DLLVM_USE_SPLIT_DWARF=ON.
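For anyone hitting the same OOM, the relevant part of the LLVM CMake invocation is roughly this (a sketch; the flags are LLVM's documented build options):

  $ cmake ../llvm -DLLVM_USE_LINKER=gold -DLLVM_USE_SPLIT_DWARF=ON
  $ make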


Agree. Moreover, the Rust compiler contains some dark areas which nobody wants to deal with. See https://github.com/rust-lang/rust/issues/38528 for example. Basically it means that the Rust compiler can suddenly take exponential time and space to compile.

That bug bites hard on any code heavy on iterators (an often-praised Rust feature!). It has a reliable reproduction test case, but it's already a year old and was down-prioritized!

Hard to believe anybody uses Rust for a really large project given so little attention to crucial details.


I mean, that thread has a comment less than a day ago, and Niko says:

> I'm going to lower this from P-high to reflect reality. I'm still eager to investigate but haven't had time, and making it P-high is not helping =)

P-high means someone is actively assigned and working on it, so yeah in some sense this is a down-prioritization, but only from "Someone is working on this, so put your work somewhere else" to "this is open to work on"; the de-prioritization may lead to it getting fixed sooner, as Niko is a busy guy.

So, "nobody wants to deal with" feels like a mischaracterization to me here.


Well, yes. The last comment says the issue is still there :) I mean, this bug alone in fact nullifies the entire incremental compilation effort. It's kind of weird.

> The de-prioritization may lead to it getting fixed sooner, as Niko is a busy guy

And Niko put "medium" priority on it a month ago :)


True, but we're talking about 32-bit systems. Chrome stopped being able to compile on 32-bit systems years ago, as a C++ program. I think Firefox is in the same situation.

It does sound kind of bad for Rust, but the competition isn't doing much better :p (weak excuse, I know)


There is a really, really substantial difference in the "required for basic credibility/usage" qualifications of a web browser versus an operating system. Operating system instances can run for decades, executing arbitrary tasks, without needing to be restarted. Web browsers need to be updated incredibly frequently just to remain functional for parts of the internet.

That's not to say either one is "worse" or "better", but comparing the two on an axis like platform support is like comparing tomahawk missiles and tall ships. Totally different requirements and use cases.


Travis is running containers, so even if you use an i686 rustc, you still benefit from a 64-bit kernel, meaning processes still have the full 4GB of address space. On an actual i386 Linux kernel, this would be limited to 3GB. Maybe it's even less on OpenBSD (I don't know, but technically it could be as low as 2GB). That could explain why it works for you and not for them.


Ah, right. Thanks.

I still thought that we generally kept it down to around 2GB of space, but maybe that's wrong.


I'm pretty sure the attack you describe is mentioned in the literature as essentially undefeatable. I really wish I could remember exactly what it was called; the gist is, there has to be a first compiler somewhere. If at any point in the chain the compiler is infected with a self-propagating virus that hides itself in the byte code of the binary, it can ensure that the exploit is in every future version of the compiler.

I may be remembering the details a bit wrong, but it was a good read.


You're looking for "Reflections on Trusting Trust" by Ken Thompson, one of the original co-authors of Unix:

https://dl.acm.org/citation.cfm?id=358210


There is a possible defense: https://www.dwheeler.com/trusting-trust/
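The core of Wheeler's "diverse double-compiling" fits in a few lines. A rough sketch with hypothetical compiler names, assuming deterministic compilation:

  # compile the compiler's own source with two unrelated compilers
  $ trusted-cc compiler.c -o stage1_a
  $ suspect-cc compiler.c -o stage1_b
  # use each stage1 to compile the same source again
  $ ./stage1_a compiler.c -o stage2_a
  $ ./stage1_b compiler.c -o stage2_b
  # a clean suspect compiler yields bit-identical stage2 binaries
  $ cmp stage2_a stage2_b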


... and the hope is that https://github.com/thepowersgang/mrustc will let us do this for Rust.


I think you are referring to “Reflections on Trusting Trust” by Ken Thompson.

https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...


It's not undefeatable. The guy who invented it in the 1970s, Paul Karger, told people the concepts for how to defeat it right afterward. The advice for building systems in a way that catches a lot of subversion was encoded in the first standards for information security. I included most of those methods in my security framework here:

https://pastebin.com/y3PufJ0V

https://en.wikipedia.org/wiki/Trusted_Computer_System_Evalua...

One compiler made to the highest standard in development assurance is CompCert.

http://compcert.inria.fr/

It has specs of everything it does, proofs that it does it in a tool with a minimalist checker, extracts to ML that can be compared against those specs in various ways (e.g. visually), can optionally be compiled with a mini-ML compiler or a Scheme you homebrew yourself, and passed exhaustive testing by a third party with only a few bugs found in specs (not code). There's another using formal specs, called KCC, which could be ported to something bootstrappable like META II or Scheme.

The other requirement from TCSEC was that source be supplied to the customer to build with their own onsite, trusted tools. I even looked into having compilers done in Perl, since it's already widely deployed. David A. Wheeler made the brilliant suggestion of either bash or awk. I have put tools for those and more on rain1's bootstrapping page. rain1 or someone there called the concept "Ubiquitous Implementations." Note we've focused on technical content, not presentation, on that one, being busy folks. Looks rough. :)

http://bootstrapping.miraheze.org/

You also need repo security to protect that source with it either cryptographically sealed and/or sent over secure transport. Link below on repo security from David A. Wheeler. Quite a few forms of transport security now.

https://www.dwheeler.com/essays/scm-security.html

After Thompson wrote on Karger's attack in the 1980s, it took on a life of its own among people who continue to mostly ignore the prior solutions. It's a problem absolutely solved to death, starting with the person who discovered it in the MULTICS Security Evaluation. As far as state-of-the-art, the current path of research is exploring how to integrate solutions for many languages, models, or levels of detail into one picture of a system with no abstraction-gap attacks, with proof of that for all inputs. That's a bit more complex, but just an imperative language to assembly, delivered and bootstrapped? Straightforward but tedious, time-consuming work the first time it's done. :) Also, expensive if you buy CompCert, which is only free for GPL stuff. Two of us have eyeballed CakeML's lowest-level languages as a cheat around that for verified bootstrapping.

http://cakeml.org/

EDIT: Btw, all that is technical discussion and argument. For fun, you might enjoy the story "Coding Machines", which is just about the only coding-related story I've ever started reading and been unable to put down. Probably took an hour to read. It covers the discovery of a Karger-Thompson-style attack along with how people might respond, mentally and in terms of solutions. Some other stuff in that one.

http://www.teamten.com/lawrence/writings/coding-machines/


You want the "trusting trust" keyword.


A talk by Ken Thompson, "trusting trust".


Technically, his Turing award lecture.


And Go: https://github.com/ericlagergren/go-coreutils

Does he really not know this or is he ignoring them to make a point?


It uses remote dependencies. First compilation in a single-run environment will be very slow, and there won't be a second compilation. Go compiler is fast when you add the `-i` flag; without it, it takes a couple of seconds to compile a few hundred lines, and a few more minutes when you have to `go get` packages. Now, github goes down, your build is broken for that time.


Not sure what the "it" is in your first sentence, but allow me to address the rest of your points. "First compilation ... will be very slow": the go compiler is in fact quite fast, orders of magnitude faster than C++ or rust compilers. The fact that there's a noticeable pause when you want to compile thousands of files does not mean that its slow. "There won't be a second compilation": not for each top level tool, but there certainly could be shared packages that don't need to be recompiled. "Go compiler is fast when you add the `-i` flag", ok, do that then. "Now, github goes down, your build is broken": you only need to depend on external github references if you want always to build against the latest version of your referenced code. I can't imagine anyone interested in stability wants this. There are lots of options for vendoring your dependencies in tree.


> "Go compiler is fast when you add the `-i` flag", ok, do that then

Only useful when you have a mutable environment; most build spaces don't have one because it's insecure. So it's useless for big projects with external dependencies: you HAVE TO download them on each and every build.

> your build is broken": you only need to depend on external github references

Go projects use not only GitHub repos; there are gopkg, GitLab, and some others which I don't remember. All of them must be online and work fast; any lag will delay the whole build system, which in many cases is pipelined. I can't imagine anyone interested in stability wants this.


>Not sure what the "it" is in your first sentence, but allow me to address the rest of your points.

The project in linked repository.


go-coreutils is abandoned, and not POSIX compliant. It was meant to be a proof-of-concept, and it kind of was, in a negative way.


What was the problem?


I have no idea. I have been watching 5-6 similar Go projects (some with the same name) whose authors lost interest (some stopped after the first commit), tried to create their own incompatible versions of coreutils, never saw adoption, testing, or support, and they all tanked.


Some would consider Go to not be as safe as Rust, for example. But his argument still holds for POSIX compliance; it is a loveless task, so it gets done at a snail's pace.


This doesn't seem to be more than a toy project [1]. Not to mention, it's GPL-licensed, so it's basically useless for OpenBSD [2].

[1] - https://github.com/ericlagergren/go-coreutils/blob/master/xx...

[2] - https://www.openbsd.org/policy.html C-f GPL


This project is very incomplete (most commands are not implemented) and hasn't had a commit in almost 6 months.


The bootstrap problem is real and getting worse: go needs go, haskell needs haskell, rust needs rust; it's nasty.

There are already "better" and "safer" languages that could easily replace the entire unix userland. Heck, awk can do half of it, but all of that C code is already written...


> go needs go, haskell needs haskell, rust needs rust

And C needs C, or for more recent compilers, C++ needs C++. You just don't notice it because these two languages are already part of the base system.


But a C compiler is part of basically every Unix system. That matters. For example, one of the perks of my job is that I do occasionally get to play with real nice hardware (such as a Cray with a couple thousand nodes at one point). The downside of that is that the owners won't give me root access and because there are lots of other people working with it, stability of the system is more important than getting the newest software installed. Oh, and this also sometimes means restricted internet access.

So, bootstrapping matters to me, and the bootstrapping story for anything involving LLVM isn't very good (I'm actually having some problems with a language other than Rust in this regard).

Interpreters you can generally build fairly easily from source (Lua, Ruby, Python, Tcl, for example). But when it comes to compilers, bootstrapping is generally a much bigger obstacle. There are a few notable exceptions that only require a C compiler, no internet access during their build, and build in an acceptably short time:

* OCaml bootstraps from C via a bytecode interpreter, which is then used to build the native compiler (sketched after this list). On my laptop (using four cores), I can actually do that in under a minute (two minutes if I want flambda).

* Nim compiles to C and hence builds in half a minute on the same machine from C sources.

* LuaJIT builds in a few seconds, assuming a dynamically typed language with a JIT compiler is sufficient for you.
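The OCaml case really is as simple as it sounds; from a source tree it's roughly the classic two commands, with no network access required:

  $ ./configure
  $ make world.opt   # C-built bytecode compiler first, then the native-code compiler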


> There are a few notable exceptions that only require a C compiler, no internet access during their build, and build in an acceptably short time:

most any scheme/lisp compiler as well

requiring internet access and binary bootstrap toolkits for the core piece of an ecosystem (e.g. the language compiler) is decidedly new-school, rooted in overly commercial projects, and for the worse, imho.

ps: lawn.


To be clear, rustc and Rust projects generally do not require the internet to build. Building without network access is a hard requirement of many projects, like Debian or Firefox.


Sorry, I didn't mean to imply that Rust did. For me, it's primarily a problem with some JVM-related tools.


It's all good!


C has a large advantage: you can write a compiler in machine language (not assembly, raw machine language) in a week. Not a good compiler: it will produce horrid, unoptimized code. You just need enough to build a good C compiler. From there you can build a C++ compiler, which in turn can build the C++ compiler you want.

Every time you add a new language feature you increase the time needed to write your bootstrapping compiler. Garbage collection means you will spend a few more months building a garbage collector. (Remember, we are writing in machine language, so most of the abstractions you are used to dealing with, even in assembly, are missing. Thus complexity grows exponentially.)


> large advantage

How is this a large advantage? When was the last time someone actually did this?



Sure, and most people do this as an exercise, whether it's to learn how to write a compiler, or to see how small they can make the compiler, or something like that. No one does it because they really have a strong practical need to.


That so-called advantage is how Niklaus Wirth did all his compilers, and many other language designers besides; it is not unique to C.


I didn't mean to imply it was unique to C, though in hindsight I did. This can be done in many languages.

I believe that Niklaus Wirth created his first self-hosting implementation in a small subset of the language and expanded from there. Anyone thinking about doing this for a non-trivial language should consider following that example.


> go needs go, haskell needs haskell, rust needs rust

That's painful. It's one big point which Nim [1] does better: it compiles to C and bootstraps from C. That makes porting to other platforms much easier.

I don't understand why Haskell and Rust don't provide bootstrapping from C. Did they write the first compilers with Assembler?

> You just don't notice it because these two languages are already part of the base system.

C (not C++) has become the essential foundation on any platform. You will likely not be able to sell your embedded system unless it supports C.

[1] https://nim-lang.org
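For the curious, the whole Nim bootstrap is pleasantly short. Roughly, following their build instructions:

  $ git clone https://github.com/nim-lang/Nim && cd Nim
  $ git clone --depth 1 https://github.com/nim-lang/csources
  $ cd csources && sh build.sh && cd ..        # a nim binary from generated C
  $ bin/nim c koch && ./koch boot -d:release   # nim then rebuilds itself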


Rust's first compilers were written with OCaml.

We don't provide bootstrapping from C because no C-based toolchain has ever existed. However, it may in the future; see my link upthread.


I think bootstrapping with OCaml is OK -- if bootstrapping from source becomes possible at all. A binary-only dependency is awful when Rust is wanted on as-yet-unsupported systems.


It's not a significant issue if you can accept cross-compiling. This is how most platforms get their start.


> This is how most platforms get their start.

The key word here is 'start'. I don't like to arbitrarily constrain tools, but I've always felt that cross-compilers were sort of a last resort, ie, you don't have a native compiler (you're writing the compiler or operating system), or are really pressed for time. Once you've got a native compiler, that should be it: self-hosted from there on.


> I don't understand why Haskell and Rust don't provide bootstrapping from C. Did they write the first compilers with Assembler?

I don't know about Haskell, but the first Rust compiler was written in OCaml. See for instance the comments at https://www.reddit.com/r/rust/comments/6nt2j1/is_there_any_e...


GHC has a special mode (not available in released builds) that compiles Haskell to plain old C, which can then be compiled by any other C compiler. Of course, when compiled in such a manner, Haskell code runs very slowly, but it works.
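If I'm reading the GHC building guide right, that mode is a configure-time switch when building GHC itself (a sketch, not tested):

  $ ./configure --enable-unregisterised   # go via portable C instead of the native codegen
  $ make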


> See for instance the comments ...

Interesting! So, it should basically be possible to bootstrap from source using OCaml and the old source from the Git repository.

It's interesting that OCaml was used to develop Rust, since OCaml is also used for formal verification of SPARK (a dialect of Ada), another language for safety-critical applications. OCaml seems to be a good choice for inventing big new languages.


> That's painful. It's one big point which Nim [1] does better: it compiles to C and bootstraps from C. That makes porting to other platforms much easier.

Unless you are talking about an architecture which doesn't have an LLVM backend, it takes the same effort as Nim to port Rust and Haskell.

> Did they write the first compilers with Assembler?

Rust was written in OCaml first; bootstrapping from C needs a decent amount of effort. I don't know the origins of Haskell.


> Unless you are talking about an architecture which doesn't have an LLVM backend

Nim doesn't depend on LLVM. It also works with GNU C.


Yes, I was talking about Rust and Haskell. How many architectures out there don't have an LLVM backend but only GCC? (Excluding microcontrollers with very little RAM (<1M).)


Haskell was C-bootstrap-able many versions ago, but you need the previous version to build the next from that point forward, so it's quite a dance.

There is also no effort to maintain the "old" version as a strict "bootstrap", since everyone just has binaries of the previous version lying around. :)


GHC is still C-bootstrap-able. That's literally the only reason why the C backend still exists:

> The C code generator is only supported when GHC is built in unregisterised mode, a mode where GHC produces 'portable' C code as output to facilitate porting GHC itself to a new platform.


so GHC can generate C code to bootstrap GHC?


GHC can compile GHC (or Haskell in general) to C, the main use case being bootstrapping GHC without cross-compilation, as the latter has historically been lacking:

> Support for cross-compilation works reasonably well in 7.8.1. Previous versions had various issues which are usually work-aroundable.

(7.8.1 was released in 2014, GHC itself is 25 years old)


> go needs go

No it doesn't. You can use gccgo to bootstrap Go.
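Roughly, per the Go install-from-source docs (package names are distro-specific and will vary):

  $ sudo apt-get install gccgo-5                # a Go toolchain built by GCC, no Go required
  $ sudo update-alternatives --set go /usr/bin/go-5
  $ GOROOT_BOOTSTRAP=/usr ./make.bash           # build the main Go tree with it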


If "safe" language would be real magic pill for safe programming, Ada would have replaced C and C++ in systems programming and safety critical applications long time ago.


I believe Ada suffered from its DoD origins, from that era of giant monolithic project management, and from its verbosity.

Rust is a different beast.

Also, I've seen people doing generic low-level network stacks in Ada (or at least articles about it); seems lovely.


Ada is an extremely well designed language. The verbosity was designed for better safety through readability, and there is science backing that up.


And it should have. The use of C and C++ in safety-critical applications should be classified as gross negligence.


There are no magic pills. There's a lot more to languages than just the languages themselves.


A bit incendiary:

I fail to see what advantage rewriting existing and proven tools with a new language would bring. Shouldn't the main value new tools bring be to enable the writing of new things?

Isn't focusing on existing utils more like a lack of imagination and OCD on optimizing a thing beyond any further value?


So does Theo. Base is where the core of the OS that's used to build everything (including base itself) lives. People are more than welcome to have Rust, Go, Java, Erlang, or whatever in the ports tree. There's no reason those are needed in base because they don't build what's in base now. If parts of base required one of those languages to build, that would be an argument for including that language.

It ends up being a bit chicken and egg, right? Don't put a language into base because base doesn't use it. Packages in base can't use it because the compiler's not in base. So effectively someone needs to recreate enough core tools in base in the new language that it's worthwhile pulling in the language tools themselves. And then one must justify any compiler performance and platform limitations before the PR is approved, too, potentially solving those upstream in the language community.

It's a major undertaking, and people keep asking why someone else doesn't do the work. That's why Theo wants people to show progress on code before asking for the tools to be put into base.


> I fail to see what advantage rewriting existing and proven tools with a new language would bring.

This is not the first rewrite, and won't be the last one. Earlier environments were very memory-constrained, which led to optimizing for memory usage; a rewrite with fewer memory constraints can focus on speed (as mentioned in the GNU coding standards: "For example, Unix utilities were generally optimized to minimize memory use; if you go for speed instead, your program will be very different. [...]").

The current challenge is the "end of Moore's law" leading to an increasing use of multiple cores, instead of faster cores. Developers will have to focus on parallelism instead of raw speed, and new languages can help.

> Shouldn't the main value new tools bring be to enable the writing of new things?

Or writing old things in a new way.

> Isn't focusing on existing utils more like a lack of imagination and OCD on optimizing a thing beyond any further value?

These tools are the base over which your system is built (their GNU version isn't named "coreutils" for nothing). Focusing more effort on them makes sense.


> The current challenge is the "end of Moore's law" leading to an increasing use of multiple cores, instead of faster cores. Developers will have to focus on parallelism instead of raw speed, and new languages can help.

Moore's law is indeed nearing its end, but this also ends the multiple core trend.

If you're betting on multiple CPU cores taking off more than they did already, don't.


> Isn't focusing on existing utils more like a lack of imagination and OCD on optimizing a thing beyond any further value?

OpenSSL is a classic counter-example. Interestingly, there are two different approaches actually happening to fix OpenSSL:

- Taking the existing C code base, throwing a lot away and cleaning up the rest, e.g. LibreSSL (developed by the OpenBSD people)

- Taking the TLS spec and rewriting the library from scratch in a safe language, e.g. ocaml-tls.


> I fail to see what advantage rewriting existing and proven tools with a new language would bring

I think code safety is not an issue here, because the POSIX tools (and Unix/Linux as a whole) have already proven their extreme reliability.

The only real advantage of rewriting the POSIX tools is motivation. A rewrite in a modern language like Rust will likely keep maintenance alive longer than old C code that no one likes to maintain.

It's the same reason why a new display server (Wayland) is wanted instead of X11. There is basically no technical reason to replace the X server which works very well to this day. However, there are not many people anymore who want to maintain the old X code, or to support new graphics cards in X.


There is basically no technical reason to replace the X server which works very well to this day.

There is. The architecture is not made with the modern-day world in mind: an X11 application can read all keystrokes and mouse events, and do screen grabs of other windows. An X11 application can emulate your screen locker to grab your credentials. X11 does not support different scaling on different monitors that are connected to your computer.
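You don't even need special privileges for the keystroke part; any client can do something like this (device ids vary per machine):

  $ xinput list                  # find the keyboard's device id
  $ xinput test <keyboard-id>    # every keystroke, in any window, gets printed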


> an X11 application can read all keystrokes

I like that because I use key desktop macros heavily.

> and do screen grabs of other windows.

I like that since it enables me to make screen shots of any application.

> X11 does not support different scaling on different monitors that are connected to your computer.

I use two monitors with different resolutions, one horizontal, and one in portrait mode. They work perfectly, also with virtual screens. What you possibly mean is that X doesn't support that by default. Nevertheless, it's possible if the graphics vendor provides appropriate drivers.


Made me chuckle to think of writing systemd in Rust.


> I wasn't implying. I was stating a fact. There has been no attempt to move the smallest parts of the ecosystem, to provide replacements for base POSIX utilities.

Apparently Redox developers disagree :) https://github.com/redox-os/coreutils

These are even based on BSD coreutils.


It would be interesting to try to cobble together a Linux distribution with a rust userland, using coreutils (https://github.com/uutils/coreutils) and over time building more and more of the userland/interface in rust.
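Trying the uutils side of that is cheap; assuming a reasonably recent cargo, something like:

  $ git clone https://github.com/uutils/coreutils
  $ cd coreutils && cargo build --release   # builds the utilities as Rust binaries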


why couldn't they pick a different name for this project?

rustutils? coreutils-rust?

name collisions like this are how we end up with millions of people running around thinking that vim is vi while they bash emacs for bloat compared to the 'minimalism' of 'vi'...


i also saw the same eurobsdcon youtube video. i agree it is somewhat hard to imagine replacing long-time unix tools with newer versions that only differ in being implemented in a new language. making sure nothing breaks takes so much time, with not enough reward.

but writing new tools in safer languages makes more sense to me.

yes, you would need the toolchain in base. not easy, but should be possible. i only know go. perhaps it's still too much of a moving target for openbsd.

you could write programs in a new safe language and put them in openbsd ports. the problem is that they then aren't "part of openbsd". do openbsd developers really want to write their next daemon in c? for how long will they stick to c?


Theo is completely ignorant here. I also saw him spout similar inaccuracies in this video: https://youtu.be/fYgG0ds2_UQ?t=2112

Uutils and Redox are setting out to provide POSIX-compatible coreutils, and Redox builds from scratch in less than 30 minutes.


Something that is 'setting out' to provide something doesn't seem production-ready enough to run a production operating system on top of.


You might want to add a disclaimer, or at least a note in your profile, that you're the same jackpot51 who created Redox.


Sorry, I did not mean to be secretive.


Unless I'm mistaken, POSIX compatibility is only a goal of uutils, not Redox OS.


Well, jackpot should know, he's the author of Redox.


> Theo is completely ignorant here.

perhaps, in one sense. Or, alternatively, he is busy not spending all his time chasing the latest 5000 fads and 100000 not-yet-implemented projects, which may or may not ever end up being completed...


Someone should start by writing all the setuid apps in Go. OpenBSD would probably be more open to Go than Rust since Go is a lot like C with a Garbage Collector. They love C.


They already have pledge(2) with C.
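
For reference, a minimal sketch of what that looks like in practice (the "stdio" promise is one of the documented ones; OpenBSD only):

    /*
     * pledge(2) sketch: after setup, the process promises to use only
     * basic I/O; breaking the promise kills the process with SIGABRT.
     */
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        if (pledge("stdio", NULL) == -1)   /* drop everything but basic I/O */
            err(1, "pledge");
        printf("still allowed\n");
        /* From here on, e.g. a socket(2) call would be fatal. */
        return 0;
    }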


Seems most OS developers don’t care about the users, only about their own maintenance and adoption costs.

I WANT my computer and files protected by software written in a safe way, using safe languages. I don’t care about the compile time of grep; I stopped caring about that kind of dick-measuring competition a long time ago.

Give me a safe, open operating system and I’m willing to trade away Posix compatibility and compile speed.


Curious, but this kind of shitty attitude won't work with OpenBSD, nor with any other open-source project that I know of. It is not about what you care about, but about what they care about. As if they owe you something.

That being said, I wonder in what safe language the OS you used to write this is written.


I wouldn't call protecting my computer and data a shitty attitude. OS developers need to realize that security is not something they can compromise on going forward.


I'm looking forward to the world of running Servo on Redox and never having to worry about a buffer overflow again.


I have spent a long time using alternative languages. I'm a compiler geek. However, most of these languages ultimately lead to disappointment. Not because they necessarily fail, but rather because the problem of writing software is not a problem of language or platform. It is a problem of thought and abstraction. So, the big promises that these new languages and platforms make will ultimately only be fulfilled for a subset of applications. The rest of the time, we must still trudge through the process of good design, good testing, and good real-world feedback.

People love to beat up on C, because it does not have feature X or feature Y, and because there is plenty of bad C software to point at. However, it isn't the language that leads to bad software. Any entrenched popular language has plenty of bad software examples, and plenty of security vulnerabilities. The problem isn't the language, but rather, the developer. A good developer who understands and appreciates the problem can write good software in machine code, C, C++, Rust, Java, Haskell, C#, Go, D, or any other language or platform. A good developer who understands secure programming processes can write secure software in any of these languages, given enough time.

What these higher-level languages offer is better abstraction, which can make time-to-market faster. However, in a deeply entrenched system that is already in market, maintaining the current languages and platforms is typically the better play. This is especially true when the source code is as meticulously maintained as it is with OpenBSD. Someone who understands the basics of C can easily read the OpenBSD kernel or userland, and understand everything about how a given program works. Some knowledge of BSD Make, Bourne shell, Korn shell, and Perl may be required to understand the init scripts and build process, but in all, it is a system that is easy to understand, easy to maintain, and with an excellent development process in place to deliver two builds a year. OpenBSD isn't broke, and a new language isn't going to fix it.

That being said, there are tools that the OpenBSD team could use that neither impact the build time nor require adopting the fad language of the week. There are excellent model checkers out there for C, and quite a few proof assistants are gaining the ability to check proofs on C code directly, using Separation Logic and Hoare triples. These tools can complement an entrenched C system by providing an external mechanism of verification that allows designers to formally verify that implementation meets specification. While formally verifying all of OpenBSD would be a tall order, taking hundreds of man-years, formally verifying critical pieces of the kernel and creating simplified contract boundaries for the remaining code would be an excellent complement to the code review process that the OpenBSD team already does. Better still, it is something that could pay off without reinventing OpenBSD to match the current fad, as such verification can be run independently of the standard build process. If I were to make a suggestion to the OpenBSD team, it would be to explore such tools. But, to de Raadt's point, such a suggestion would only be valid if someone did the heavy lifting to put such a tool in place, and if it were of utility to the team. They won't adopt something for its own sake, but they are keen on anything that can practically improve the security of their system.
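
As a taste of what such tooling consumes, here is a sketch of an ACSL contract (the annotation language used by Frama-C, one such tool) on a toy function; max_of is invented for illustration, not OpenBSD code:

    /* The contract states what callers must provide and what the
     * function guarantees; Frama-C's WP plugin can check it. */
    /*@ requires n > 0;
      @ requires \valid_read(a + (0 .. n-1));
      @ assigns \nothing;
      @ ensures \forall integer j; 0 <= j < n ==> \result >= a[j];
      @*/
    int max_of(const int *a, int n) {
        int m = a[0];
        /*@ loop invariant 1 <= i <= n;
          @ loop invariant \forall integer j; 0 <= j < i ==> m >= a[j];
          @ loop assigns i, m;
          @ loop variant n - i;
          @*/
        for (int i = 1; i < n; i++)
            if (a[i] > m)
                m = a[i];
        return m;
    }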


You're missing high-level languages such as Modula-2 or Free Pascal that are safe-by-default while being close to the machine. They compile faster. They're easier to analyze. Someone who understands C could easily port it to one of these languages or even write a translator that converts them to C for easy integration of incrementally-rewritten pieces of the OS. They don't because they like C for non-technical reasons or want to do one piece of tooling work over another.

Another one you might find interesting, from the alternate history where C programmers adopt better tech, is Cyclone, which tried to stick to C as much as possible. Rust's safety scheme took a lot of inspiration from Cyclone.

https://en.wikipedia.org/wiki/Cyclone_(programming_language)

http://trevorjim.com/unfrozen-cyclone/

" There are excellent model checkers out there for C, and quite a few proof assistants are gaining the ability to check proofs on C code directly"

I agree. Further, just using Design-by-Contract with property-based and fuzz testing on those contracts would improve things by itself. There was also an academic who applied Frama-C to one of their smallest, battle-tested components, I think in a string library. Although no coding flaw was found, the exercise did show the documentation was incorrect. Gotta wonder what the benefit might be for less obvious stuff than string operations.
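
A hedged sketch of what that looks like in plain C, with asserts as the contract and a crude random-input loop standing in for the fuzzer (my_strnlen is a made-up example, not the audited library):

    #include <assert.h>
    #include <stdlib.h>

    /* Contract: s must be non-NULL; the result never exceeds max. */
    static size_t my_strnlen(const char *s, size_t max) {
        assert(s != NULL);                 /* precondition */
        size_t i = 0;
        while (i < max && s[i] != '\0')
            i++;
        assert(i <= max);                  /* postcondition */
        return i;
    }

    int main(void) {
        char buf[64];
        for (int iter = 0; iter < 100000; iter++) {
            size_t len = (size_t)(rand() % (int)sizeof(buf));
            for (size_t i = 0; i < len; i++)
                buf[i] = (char)rand();     /* may or may not contain a NUL */
            /* The contract gets exercised on every generated input. */
            assert(my_strnlen(buf, len) <= len);
        }
        return 0;
    }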


Cyclone is an interesting example. Something that is backward-compatible with C and does not incur significant compile-time overhead, like C++ does, is a potential alternative. Unfortunately, Cyclone was abandoned. It only has support for 32-bit architectures, and it is not being actively maintained. One of the biggest reasons why C is so popular is that it works on practically every platform, even those that Cyclone, Modula-2, and Free Pascal do not.

Of course, if OpenBSD eliminates support for a bunch of the platforms it currently supports, such alternative languages become viable. However, the OpenBSD team purposefully maintains these platforms because they expose errors that would otherwise be missed by a scaled-down release. Software errors missed in x86_64 or x86 may be picked up in MIPS or SPARC because of endianness.

An alternative that I think is more tenable is a language that compiles to C. Such a language can piggy-back off of C's incredible market penetration while adding safety features. Barring some extensions to Cyclone, most of Cyclone's features, for instance, could be implemented as a source-source compiler that targets ANSI C99 or ANSI C11.
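
To illustrate the idea, a hedged sketch of the kind of C99 such a front-end might emit for a bounds-checked array read; checked_get is a hypothetical helper invented for this example:

    #include <stdio.h>
    #include <stdlib.h>

    /* The front-end would insert this check at every indexing site. */
    static int checked_get(const int *a, size_t len, size_t i) {
        if (i >= len) {                    /* trap instead of reading OOB */
            fprintf(stderr, "index %zu out of bounds (len %zu)\n", i, len);
            abort();
        }
        return a[i];
    }

    int main(void) {
        int v[3] = {1, 2, 3};
        printf("%d\n", checked_get(v, 3, 2));  /* in bounds: prints 3 */
        printf("%d\n", checked_get(v, 3, 7));  /* out of bounds: aborts */
        return 0;
    }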


That's actually what I said here in my more thorough comment, plus in some of the other replies:

https://lobste.rs/s/4cf21p/re_integrating_safe_languages_int...

I'll be replying to follow-ups on that later tonight or in the morning. I totally agree it should be the default. I'll go further and say it should have C datatypes/sizes, the C calling convention, FFI with little to no annotations required, and compile to C. Each check should be removable per module with a compiler option, a per-module pragma, or an equivalent keyword/operator marked unsafe. It also needs inline ASM. For its own advantages, add macros, safe linking, and a REPL with incremental compilation.

That by itself would be better and faster to develop in than C, with seamless integration with legacy codebases. A side benefit, like in my Brute-Force Assurance method, would be that it benefits from all the tooling the C ecosystem has to offer, especially static analysis and certifying compilers.

What do you think of that?


A couple of minor notes.

OpenBSD does not have the Bourne shell. One gets the C shell, the Korn shell, and the Korn shell in POSIX mode: no Bourne shell, nor Bourne Again shell.

It is interesting to note that in FreeBSD Perl was removed from the base operating system about a decade and a half ago. So there's apparently an argument to be had that knowledge of Perl need not be required. (-:


You are correct on both counts.


The problem "safe" languages have in this context is that they really truly are no better than C, for the simple sake of them throwing out performance and resource considerations from the get-go. Additionally, if you take even just a day to read up on UNIX and C history, you'd quickly find the two are essentially intertwined, and that to usurp C, you'd have to nix UNIX, too.

I've yet to see anyone with serious industry experience come out and say sane things about replacing C. It's just not going to happen. Not in my lifetime, unless someone decides to write a new operating system, and Linux subsequently loses footing.


Bravo Theo! Someone finally stood up to hype!


This moment should be in CS textbooks. Want something?

Wrong way: write an e-mail

Right way: here is my feature-for-feature, bug-for-bug rewrite of grep in Rust


> So rather than bothering to begin, you wrote an email.

This is where Theo appears to me to be quite different to eg Linus. He plays the man, not the ball, with personal attacks.


Come on, man. This is such a low-effort appeal to Theo's public image. If this is something you feel you must describe as a personal attack, it must be hard to get through life.

I find this response quite level-headed, and I like that some of his responses make it to the front page, since they provide insight into why such language changes are more difficult than they seem.


I believe that Linus's attacks are usually well-intentioned, meant to get people in the Linux community to act better. I am not sure what Theo's intent is.


If you want to effect change, do some work, don't write emails.


As can be expected from a project with a slogan of "Shut up and hack"[1].

[1] https://www.openbsd.org/lyrics.html#51b

As the notes indicate, this saying goes back much further than this release song.


Where's the attack there? It's a statement of fact.


If I say "You have a big nose, it makes you ugly" as the first line in my email, it may well be a "statement of fact" and yet still be a personal attack. Your implication that a statement of fact cannot be a personal attack is untrue.

Whether the OP has or has not written Posix utilities in rust is of no relevance to the actual argument whatsoever, it is just a way to have a dig at the OP.


The second part of your example is a statement of opinion, not fact -- so it's not really the same thing at all.


His argument doesn't change if you change it to

> You have a big nose, it makes you ugly to me.

And that is a statement of fact.



