As one of the original creators of sudo (https://en.wikipedia.org/wiki/Sudo) I've witnessed it getting nearly totally rewritten and then incrementally bug-fixed over the last 43 years. It must take the prize for the UNIX command most highly scrutinized for security flaws, flaws which have been identified and fixed.
Thousands of developers and security experts have gone over it. So part of me wonders - how is it possible for a single dev team to totally reimplement it without unknowingly
introducing at least a bug or two? Is there something to this Rust language which magically eliminates all chances of any bug being introduced?
I can only thank you for the work you've done in creating sudo, I think it's an invaluable tool in the general day to day use for so many people. As someone working on sudo-rs, our goal with creating it never was to invalidate any of the work previously done, and we are very much aware that our implementation will not be bug free, especially not at the start.
For me personally, creating this Rust version allowed me to work on something that I would normally not be able to work on, given how I would not rate my confidence in writing relatively safe C code very high. If nothing else, at least we already found a few bugs in the original sudo because of this work. Despite the 43 years of bugfixing, such a piece of software is unlikely to ever be free of bugs, even if just for the changing surroundings.
Other than that, having some alternatives can never hurt, as long as we keep cooperating and trying to learn from each others work (and from each others mistakes).
In a very literal sense, you could write this same code, in unsafe Rust, so one could argue that Rust does not prevent it.
Some may argue that if this program was written in Rust in the first place, "concatenate all command line arguments into one big string for processing" wouldn't be the way you'd go about escaping command line arguments. The issue here is about misplacing a null terminator, Rust strongly prefers a "start + length" style of representing strings instead of null terminators, so you'd never really end up in this situation in Rust in the first place.
I'm sure there's other ways to evaluate the situation as well. Which one you find compelling is up to you.
Yes, buffer overflows are one of the vulnerability classes explicitly addressed by Rust's bounds checker, which is always on, if memory serves. I haven't touched Rust in a year.
You can, but unsafe code is discouraged in general, even given a slight performance cost.
For performance-insensitive, security-critical code, there really shouldn't be any such code in the entire program—and it would be easy to verify that with a presubmit.
In an ideal world, no, there shouldn't. But `unsafe` is not just a performance hack; people do find things that they legitimately need to do that Rust can't statically verify. Probably the most trivial example is interacting with code that is not itself written in Rust.
This does imply that some of the stronger claims about Rust's level of static safety guarantees that float around on the Internet can't really be true unless substantially everything you might want to do has a version that's been completely written in Rust. Whether you feel that means that achieving the desired level of safety implies you've still got to rely on some dynamic analysis tools just to be sure probably depends on how much safety you really want, and how much faith you're willing to place in the skills of the authors of the libraries you use.
And even then, if we really want to go least common denominator, if you're running your program on Windows or a Unix or basically any other OS that isn't Redox, then you've got unsafe code executing every time Rust's own standard library needs to make a syscall to achieve something.
Which I don't say by way of criticizing Rust. It's got to live in the same crappy world we all have to live in, and it's arguably doing a better job of de-crappifying it than any other systems programming language. I'm just trying to illustrate how an unqualified statement along the lines of "there shouldn't be any unsafe code in the entire program" is kind of a self-strawman, precisely because Rust has to live in said crappy world, and I think it might be unsafe to lose sight of that fact.
Look into the crates you use and you’ll find tons of unsafe code, especially around custom data structures doing buffer pointer arithmetic and stuff. If it’s wrapped in a safe interface, you’d never know.
The borrow checker does not prevent out of bounds access of arrays (or vectors or whatever you want to call them).
The borrow checker is intended to protect against "temporal memory unsafety". It can tell you that you are using something that has already been freed, or something that could be freed while you are using it for example.
Bounds checking is a "spatial memory unsafety" problem, it has nothing to do with borrowing and exclusive references.
Bounds checking is a trivial problem: for an array of length N, like a char[N], it catches any attempt to access a value past the end of the array (like index 11 when N=10).
Rust doesn't really do anything special here and protecting against buffer overflows does not require any novel technology.
An implementation of bounds checking is as simple as an "assert(I >= 0 && I < N)", where I is the index and N is the length of the array.
In C this is difficult to do because arrays are just pointers (or decay to them), and pointers do not carry any information about the length. You can keep a separate variable with the length around, but this is apparently too unergonomic, since virtually no C software checks every array access in all parts of the program.
In Rust, it's very rare to use raw pointers to work with "arrays". Instead there is a "slice" type that models the concept of a contiguous sequence of values in memory. The important part is that the slice type is a "fat pointer", and the fat pointer carries the length of the array, so the slice type is able to check every access in all parts of the program.
So all slice accesses are checked by default, and you can't "turn it off". If you really want to bypass bounds checking for some reason, there is an unsafe "get_unchecked" function.
There are some sequence types other than slices, arrays for example (array is a specific type here). Accesses are still checked, but arrays don't have to store the length at runtime because it's encoded in their type.
The Vec type is another one. It is a resizable "array" that stores three things: a pointer to the allocation, the capacity, and the length. That's mostly not relevant here, though; it checks every access like the other types.
An out-of-bounds access is a very easily preventable error. It should not be happening in $CURRENT_YEAR.
Advanced type systems and borrow checking/memory-safe languages DO go a long way, but obviously, no. The best that developers pulling a RiiR can do is follow best practices and learn from past mistakes. We've certainly come a long way in 43 years. Ditching C string handling eliminates a ton of bugs before you even factor in the memory safety. Heck, you have to admit: someone setting out to make a secure sudo replacement could do a lot better nowadays even using just C. The OpenBSD project does a pretty good job demonstrating this imo. If you make a programming language that doesn't have many of the sharp edges OpenBSD code avoids, you can probably give yourself a head start, but clearly it also takes plenty of care and experience, and a programming language can't really grant you that.
I think it's at least worth humoring. It probably shouldn't be shipping as a default any time soon, though...
While I don't negate your experience and I genuinely anticipate that this project is going to rediscover some pain, there's something to be said about the fact that we don't have to replicate the life work of Newton, Leibniz, Maxwell, etc to really "get" classical physics. It fits now into the high school curriculum, and if you pass it, you can be fairly decent at it; with a little additional effort, you can get real freaking good at what took those people their whole freaking lifetimes.
This is because we can stand on those giants' shoulders and have the benefit of hindsight and not have to also repeat each and every of their blunders, and have better technology and learning methodology to boot.
So I presume if you yourself wanted to rewrite sudo from the first principles, you, with all your experience and knowledge already there, would spend a lot less time doing it, and it would be way cleaner and simpler.
So while I'm not dunking on your effort and experience, I'm just pointing out that it's not impossible to take your experience and turn it into something better over a smaller timespan.
"Nothing" is too strong. It does not solve logic bugs, but type systems stronger than C can solve some logic bugs too.
Even something as simple as having some concept of "private" and "public" and some boundaries between them can help. I'm writing some code right now in Go, hardly a super strong type system, but I've still put some basic barriers in place like, you can have a read-only view of the global state, but the only way to write to it is to a per-user view of that state, and the only way to witness the changes to the underlying value is through one of those per-user write handles. This eliminates a large class of logic errors in which one accidentally reads the original global state when you should be using the per-user modified state or vice versa. This is a rewrite of some older code, and this error is so rampant in that code as to be almost invisible and probably in practice unfixable in the original code. (Which was solved in practice by only every dealing with one user at a time, and if there was multiple users, it simply ran the process completely from scratch once per user. It tried to cache its way out of repetition of the most expensive stuff, but, the cache keys had some of the same conceptual underlying problems, so it just wasn't as good as it should be.)
You can't solve everything this way. Rust's stronger type system offers more options, but you can't solve everything with that either. But with good use of types, there are still classes of mistakes you can eliminate, and classes of other mistakes you can inhibit.
(There are some tradeoffs, though; with bad types you can alse mandate incorrect usage. But I think in the case of something like a sudo replacement we can reasonably assume fairly high skill developers and that there will be a lot of high skill oversight, as evidenced by the fact they've already sought out a third-party security review.)
C does have some notion of visibility: put private declarations into the .c file instead of the .h file and declare static linkage. You could have a function that returns a pointer to const for read-only data. Obviously they can cast that away, but other languages have unsafe escape hatches too. C also has static analyzers to help with some classes of bugs.
Cowboy code might be common, but you don't have to do that. If using something C-like, C++ definitely gives you a lot more tools to write safe code (or hang yourself, up to you) though.
C has "some notion" of a lot of things. That doesn't make them particularly usable at scale. C has the worst static typing of a language that can even plausibly call itself statically typed in the modern world.
C++ is an option to obtain the sort of thing I talked about, yeah, but in 2023 you need to use something memory safe for something as important as sudo, and C++ on its own is not. C++ and a great static analysis tool would be the minimum I would consider acceptable, but there is something to be said for things like Rust that build the analysis all the way in to the compiler rather than relying on external tools, and then future Rust external tools can build on that even more solid foundation if even more assurance is needed.
Enums, Option and Result types, absence of null, not to mention that the type system, borrow checker, and static everything by default, rewards encoding application state and state transitions using all these mechanics, such that they can be verified at compile time. I'd say the language does quite a lot to address logic bugs as well as memory safety. It can't protect a determined developer from themselves, but it provides incredibly useful tools to anyone who can work out how to use them.
Even if I don't like the design of Rust's borrow checker, I still appreciate how Enum/Option/Result types and pattern matching can make your code more robust. I really wish I could bring some of them to C++... I frequently use a poor man's version of Result types with a `TRY()` preprocessor macro, but I'm often jealous of what Rust has in its toolbelt.
Generally the same idea, yes. Your parent mentioned a key difference though: "and pattern matching." enums in Rust have much stronger language support.
But there are also differences, for example, errors must be absl::StatusCode, whereas enums in Rust allow for arbitrary error payloads.
Also don't discount ecosystem usage: everyone uses Result in Rust, Abseil isn't used by most things, and std::expected has its own issues (though I can appreciate how tough making those calls is) and only landed in C++23, so it's not as widely used as Result either.
Sibling comment mentioned pattern matching, but didn't point out the important detail that the rustc compiler makes sure all pattern matches are exhaustive.
To use a C analogy: if you add a new definition/variant to an enum, suddenly all match statements over that enum will fail to compile (unless there is a catch-all `_` arm, the equivalent of a default: branch).
This does eliminate a large swath of logic errors, though by no means all.
Static-everything is such a gimmick in my opinion. It sounds great until you try to do something useful with your code. It's almost never the case that people actually want to hard-code stuff in the source code.
Almost always you read configuration files at run-time (like sudo does) and change your behavior depending on run-time information - so you will have run-time errors.
I use Rust, and it does default to static dispatch in many places (for example, it's hard to do traditional OOP-style virtual polymorphism or to keep objects of various types in one container), and that makes it pretty hard for me to write "nice"-looking code.
It usually devolves into a lot of nested if-else and switch (match) instructions.
I haven't run into that so much myself. What I have run into is trying to write C-but-in-Rust, for which the compiler yells at me to please knock it off. It got way easier when I gave up and committed to doing things the Rust way.
Not saying you haven't done that, just sharing my personal experience with it.
One of the habits I struggled to get rid of at first was making lots of things in structs refs for no reason, where that would be typical in C, for C reasons.
This led to a lot of unnecessary friction with the borrow checker while I was still getting to understand it.
Once I started always asking myself "does this really need to be a ref?", things became much easier. And this in turn revealed itself as a useful rule of thumb for keeping coupling between subsystems in check.
Then I was doing some C work and realised I was asking myself the same question about pointers too. And I found it could often reduce cognitive load down the line, because I'd end up in far fewer situations where I'd have to figure out "who should free this, and when?".
That's when I realised this cognitive load of keeping the borrow checker happy was always there in C too. It was just more diffuse, and invisible until it blew up in my face. And when it did, it was in the form of nasty runtime problems, not a helpful compiler error raised before the code was even checked in.
If your program doesn't have a way of reloading its configuration at runtime, then even that first object created by reading the configuration from file can be immutable.
Yep! What I mean, though, is that the loading function itself will need to mutate the object as it reads settings from disk and updates the in-memory data structure. Once that's done, you can pass that around as a read-only object.
This gets said a lot, but I am coming to believe that the case is overstated. For two reasons:
1. Valgrind exists. It's not perfect, but it does arguably do a pretty good job as long as you're writing modern C. The biggest gap I'm aware of is that it can't really help you with global pre-allocated buffers. But I don't think that any language or tool can effectively protect you from information leakage if you're doing that sort of thing, not even Rust.
2. Memory-safe is not the same thing as secure. Programs written in memory-safe languages are rotten with security vulnerabilities, too. Rust's happening to be a memory-safe language that doesn't use garbage collection does not render it immune to this situation. It has some protections around concurrent usage of data that do add additional safety under certain circumstances (assuming you don't switch them off), but I doubt it's a panacea. I worry, though, that the Rust community's tendency to pitch this stuff as a security panacea could breed a culture of complacency that negates the advantages that Rust does bring to the table for systems programming languages. People tend to take unnecessary risks when they believe they're invincible.
The fundamental problem with valgrind is it only looks at what happened, not what could happen. Valgrind is great at making sure you don't have memory safety issues for "normal" inputs, but is basically useless at making sure your code doesn't have memory safety vulnerabilities when fed atypical inputs.
It's true that it doesn't eliminate all bugs in general, but it can completely eliminate buffer overflows for example.
There is no excuse to not at least have bounds checking. This is one of the most basic memory safety problems and it's trivial to prevent.
Just preventing this small issue will prevent a non-trivial fraction of bugs. I don't have sudo's bug list on hand but I wouldn't be surprised if 25% or more are caused by buffer overflows.
So even if it doesn't prevent all logic bugs, it cuts out a pretty big chunk of the bug list.
>assuming you don't switch them off
You can't switch them off.
>Rust community's tendency to pitch this stuff as a security panacea
I’d rather have safety on by default with an opt-out, rather than the inverse that C gives you with -Werror -Wall -Weverything -Wyesireallymeanteverything. Compile it again on two different architectures, compile yet another time with clang-tidy, and then run static analysis with Coverity just to be sure. Run it with Valgrind, ASan, and the thread sanitizer. Sprinkle some fuzz testing on top.
Yet you still don’t get the same level of confidence as a Rust program that may have a small unsafe block in one corner of the code.
>It’s important to understand that unsafe doesn’t turn off the borrow checker or disable any other of Rust’s safety checks: if you use a reference in unsafe code, it will still be checked.
Unsafe Rust basically just lets you dereference raw pointers, mutate static variables, use C-style unions, and do FFI calls, but otherwise it's exactly the same, and the safety checks are not in any way disabled.
The main thing is that pointers let you access whatever memory you want, and borrow checking the pointer value itself doesn't prevent this.
I don't think I would describe this as "switching them off", I would describe it as, "using raw pointers" or something along those lines.
Many vulnerabilities rely on crafting very particular inputs that trigger memory corruption in programs. Unless you happen to have fed that same input to your program when running it under Valgrind then Valgrind is useless for this case.
> It's not perfect, but it does arguably do a pretty good job as long as you're writing modern C.
There's no such thing as modern C. C code that's written neatly and meticulously looks the same today as it did 30 or 40 years ago, except for language changes such as the move from K&R declarations to function prototypes. C the language hasn't changed since 1989 except for minor things like mixed declarations and statements, the introduction of the long long type, restrict pointers, designated initializers, compound literals, and threads being in the standard library.
A lot of the most serious security vulnerabilities are memory safety because e.g. remote code execution is very often along the lines of "LOL, I smash buffer with machine code, it gets executed" and that's a memory safety problem.
For sudo you have potential for some very serious logic bugs, where the program does exactly what the programmer wrote, but what they wrote was not what they intended.
Rust's type safety makes it less vulnerable to these mistakes than some languages, but there is no magic. In C obviously a UID, a PID, a duration, an inode number, a file descriptor, a counter are all just integers. In Rust you could make all those distinct types (the "New type idiom"), and out of the box the Duration and the File Descriptor are in fact provided as distinct types. So, some improvement.
> In C obviously a UID, a PID, a duration, an inode number, a file descriptor, a counter are all just integers. In Rust you could make all those distinct types
For various kinds of IDs you can do that in C, too, by wrapping each integer in a single-member struct (e.g. `struct uid { uid_t value; };`) so that passing the wrong kind becomes a compile error.
It may be not as nice as other languages, but it isn’t bad, either. If you use C++, it can be made a bit nicer, and you could also have such structs that you can calculate with.
You can technically do this but then you have to write wrapper functions for all relevant syscalls or libc functions to unpack the structure and call the actual thing. Lots of work.
Rust aside, one thing to consider is that a reimplementation of an existing piece of software does offer the benefit of being able to test the old version and the new version side by side for consistent behavior. You could have an entire class of test cases that is just "do X with the old version, and then do X with the new version, and just make sure the result is the same." There is also the entire bug history of the old version that can be investigated during reimplementation. If the old version has specific tests for each resolved bug, those can also be run against the new version to ensure it has consistent behavior.
In this case though, it's only a partial reimplementation: "Leaving out less commonly used features so as to reduce attack surface", which would complicate that approach.
Of course it's not. But hubris is possible and does not take years to master. And honestly, statements like "my overnight Rust sudo is better than some poor people's 40 years of work"... well, they don't really help Rust become more popular, but actually contribute to everything Rust becoming even more irritating. Most Rust tools are released with this pathos of "we fix what the oldies couldn't get right with C". Not a great attitude to approach the giants whose shoulders we're standing on, indeed.
Who said anything about "overnight"? This project has been worked on for a year, has a test suite that found bugs in the original sudo, and its developers have generally been respectful of the original work.
I think you might be projecting something here; there's no evidence for your assertions.
I think a better question might be whether it prevents categories of bugs that are more likely to be exploitable than, say, the logic errors that no language could ever prevent?
Also, it sounds like your seasoned eyes would be valuable in reviewing this code.
It can eliminate many bugs, but it certainly wouldn’t eliminate all bugs. During implementation they realized they were not implementing sudo’s (undocumented) feature of failing to run if the sudoers file is world-writable: https://ferrous-systems.com/blog/testing-sudo-rs/.
Of course they did find and fix the bug, but in general Rust isn’t going to protect you from bugs like this that are essentially logic errors.
That is documented. Since the Mercurial web interface isn't very nice to use, I picked a random version: sudo 1.8.6 from 2012 says in the man page "The sudoers file must not be world-writable".
This is also a very common behaviour for security sensitive applications to check config file permissions. Another example I remember are ssh private keys.
I might be too harsh, but it does not inspire trust that they still made this error and still missed the documentation.
I’m not sure why people are downvoting you. I suspect they may be clicking the link and thinking ‘that’s not documentation it’s source code’, not realizing it actually _is_ documentation.
Interesting, the posting I linked to indicated this behavior wasn’t documented. It’s certainly not surprising and as you mentioned, it’s equivalent to openssh requiring specific permissions on private key files.
I might be too harsh, but it is not so trustworthy that they found bugs in the original sudo after less than a year of effort. Those other devs had over 40 years to find them.
On the surface, sudo seems fairly straightforward, so it’s interesting to hear how much work has gone into it! Do you have any interesting facts or anecdotes you’d care to share?
great story! also, TIL that I've been pronouncing `sudo` wrong, I was 100% sure that it was supposed to be like pseudo, but I guess that is a myth :)
It's so great to be able to listen and learn from the people that invented these important building blocks themselves, I feel lucky. Thanks for sharing.
The key is in "on the surface". While the common usage of sudo is fairly straightforward, you, me, and most people use like 5% of it. The trick is in all the side shows.
Makes you wonder then why it does so much, if those rarely used features increase the surface area of possible exploits? This is just a question I’ve had about *nix utilities in general, since sudo is hardly the only tool with obscure flags and features
Because the long tail of features is useful to someone. Mind, I like doas for this reason, but having the more feature rich option available makes sense.
Yes, have a read of the sudoers man page and marvel at the complexity of the configuration, and wonder about your chances of getting it right if you are not well-experienced. This is the config file with the infamous paragraph:
> The sudoers grammar will be described below in Extended Backus-Naur Form (EBNF). Don’t despair if you are unfamiliar with EBNF; it is fairly simple, and the definitions below are annotated.
OpenBSD replaced sudo with their own "doas" command a few years ago; the doas.conf manual page is about 100 lines; sudoers is over 2,000.
> Is there something to this Rust language which magically eliminates all chances of any bug being introduced?
no, although it has features that prevent or reduce the probability of some types of bugs, one example being memory safety bugs. rust can't prevent logic bugs.
the rust reimplementation probably has more bugs than the original, but a theoretically better chance to achieve fewer bugs in the long run.
is rewriting mature linux infrastructure in rust a good idea? many people agree that no, it's probably not a good idea outside of special use cases.
> how is it possible for a single dev team to totally reimplement it without unknowingly introducing at least a bug or two?
As someone with over three decades of C programming experience (so not as much as you), maintaining widely used stuff written in C for decades, that has recently switched from C and C++ as main languages for systems programming to Rust, I'd instead ask this: How is it possible, even given 43 years of working the problem, to create a program in C that does what it's supposed to, and only what it's supposed to?
But also, one of the answers from the article is "Leaving out less commonly used features so as to reduce attack surface". Most security bugs in sudo are in features I don't use.
Rust isn't just memory safe. It's also orders of magnitude harder to accidentally make other mistakes, such as race conditions.
> How is it possible for a single dev team to totally reimplement it without unknowingly introducing at least a bug or two
This is possible if every bug fixed has an associated test. If they use this battery of tests to test their new implementation it should be as good as the original implementation.
I’m sure there is a logic bug or two in the new implementation. Whether you want to take the risk of new logic bugs for the benefit of removing several whole categories of bugs (both known and unknown!) is the question and tradeoff in each case like this. This too requires scrutiny, but I’d be a lot more comfortable running a Rust program with 2 years of scrutiny than any C program with 40.
You gotta start somewhere, right? I mean, it's not like you got it right the first time. Don't everyone go switching over just yet, but people can't scrutinize something that doesn't exist.
Thank you for working to create one of the tools which is obviously on the list of the most valuable and beneficial to computer security. Perhaps only second to netfilter.
And I'm really sorry so many people have decided they're going to imply something is wrong or broken with it for their own clout. Or because they've bought into the lie that no code written in C can be safe or correct.
For what it's worth, I and all the engineers I willingly associate with (read: the ones I respect) have all said the exact same thing. Switching to Rust here, just 'cause, isn't going to meaningfully increase anyone's security. But what are you gonna do. Other than ask people to be honest?
Annoying fanboys aside... again, *thank you*! The computer security world is meaningfully better because of your work, and that's something the RIIR fad will never be able to replace :)
The link you cite says it was worse in the Rust version:
> During the audit, it came to light that the original sudo implementation was also affected by [CLN-001: relative path traversal vulnerability], although with a lower security severity due to their use of the openat function.
Re: I am unsure where you got this. It's the same vulnerability.
> During the audit, it came to light that the original sudo implementation was also affected by this issue, although with a lower security severity due to their use of the openat function.
> Like building an entirely new car company around only making side-impact collisions safer
...which still results in safer cars overall, so I don't see the problem. Especially if those cars are almost completely immune to side-impact collisions and if it's actually not a car company but a technology every manufacturer can use for future products.
You don't see the problem in starting an entirely new car manufacturer from scratch just to fix one safety issue?
> if it's actually not a car company but a technology every manufacturer can use for future products
In that case it's like every single manufacturer changing their engine design in order to have a different wiring harness with a slightly thicker shielding around a single cable. The amount of work and cost involved, and risk to every other part of the process, just to fix one tiny thing, makes no sense. It is an insane amount of work for extremely little benefit.
> it's like every single manufacturer changing their engine design in order to have a different wiring harness with a slightly thicker shielding around a single cable.
Car manufacturers improve their engine designs all the time.
It's also not uncommon that programs get rewritten in order to achieve better results, and recently Rust happens to be a popular choice where both speed and security are important. There's nothing extraordinary about it really.
I'm assuming this project's aim is to replace sudo, in which case hand-waving away "Leaving out less commonly used features" is a bit worrying. What are these features? How uncommonly are they used? In which way will it fail if a configuration uses those features?
One of those left-out features is `sudoedit` (`sudo -e`). I use this a lot when editing files in /etc or any file my user does not have permission to edit. The flag first copies the file to a temporary location with permissions that let my user edit it, then opens my text editor (defined via the $SUDO_EDITOR env var) as _my user_, without any sudo permissions. After I close the editor, the file is copied back with the original permissions, but only if there were any changes.
The cool thing is running the editor via my user, which loads my user's configuration/plugins, instead of the root user's.
This would be a great use case for a capability security model. Essentially what you really want is the sudo command to acquire a temporary capability token to edit that specific file. Then run your editor and pass it the capability. (And revoke the capability when the editor process closes).
It’s a pity this isn’t more straightforward to implement on Linux.
Are there no overwrite/seek bugs in Unix that could be exploited in that case? It seems to me like only using sudo for a cp command would reduce the attack surface.
I’d certainly hope not. A capability based security system in an OS is only secure if it doesn’t have bugs. Just like most security-critical software.
However, I’m not sure that Linux is capable of masquerading as a general purpose capability based operating system. I think it’s missing a bunch of APIs.
It's doable by opening the file in a privileged process (sudo) and passing the file descriptor to a non-privileged process.
Maybe one could make a sudoedit that opens a file in the sudo process and then spawns a non-privileged editor process which inherits the file descriptor and is given the /dev/fd/ path on the command line, so it stays none the wiser about the whole arrangement.
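A minimal sketch of that idea, with `cat` standing in for the unprivileged editor (in a real sudoedit the parent would be root and would drop privileges before spawning): the parent opens the file, explicitly marks the descriptor inheritable, and the child only ever sees a /dev/fd path.

```python
import os
import subprocess
import tempfile

# Stand-in for a root-owned file the child could not open itself.
fd0, path = tempfile.mkstemp()
os.write(fd0, b"root-only contents\n")
os.close(fd0)

# "Privileged" side: open the file, then let the child inherit the descriptor.
fd = os.open(path, os.O_RDWR)
os.set_inheritable(fd, True)

# The child never opens the real path; it is only told about /dev/fd/<n>.
out = subprocess.run(
    ["cat", f"/dev/fd/{fd}"],
    pass_fds=[fd],               # explicitly whitelist the inherited fd
    capture_output=True,
    text=True,
).stdout

os.close(fd)
os.unlink(path)
print(out, end="")
```

Note that `pass_fds` makes the inheritance explicit rather than implicit, which addresses part of the leak concern raised below: every other descriptor is closed before exec.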
Sounds like a bit of a recipe for accidentally handing an unintended privileged fd to the child through inheritance (ignoring the /dev/fd one), such that a compromised unprivileged SUDO_EDITOR value gives you sudo access. Maybe not likely, but I’d really be hesitant about any feature that relies on implicit fd inheritance…
Another option could be to open a UNIX socket in the privileged sudo process, spawn an unprivileged child process 'shim' that connects to that socket, and then the sudo process can pass the file descriptor over the socket. Since the child shim is 'clean', it should have nothing more than stdin/out/err open, plus now this passed FD. Then the shim can spawn the target program and allow it to inherit just the passed FD.
I think the larger issue is that I doubt many (if any?) editors allow opening a file via an inherited file descriptor! I guess some will read stdin (the shim could close stdin and then dup2() it into its place), but then there's no way to save the file back when finished.
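The socket-passing half of that shim can be sketched with Python's `socket.send_fds`/`recv_fds` (available since Python 3.9), which wrap the underlying SCM_RIGHTS ancillary-data mechanism; both ends live in one process here purely to show the plumbing:

```python
import os
import socket
import tempfile

# Stand-in for a file only the privileged side may open.
fd0, path = tempfile.mkstemp()
os.write(fd0, b"secret config\n")
os.close(fd0)

priv, shim = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Privileged side: open the file and ship the raw descriptor as
# SCM_RIGHTS ancillary data (one dummy payload byte is required).
fd = os.open(path, os.O_RDONLY)
socket.send_fds(priv, [b"x"], [fd])
os.close(fd)

# Unprivileged shim: receive a fully working duplicate of the descriptor
# without ever having had permission to open the file itself.
msg, fds, flags, addr = socket.recv_fds(shim, 1, 1)
data = os.read(fds[0], 1024)

os.close(fds[0])
priv.close()
shim.close()
os.unlink(path)
print(data.decode(), end="")
```

Since the receiving end starts "clean", the only privileged descriptor it ever holds is the one deliberately sent over the socket.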
Close all other fds between fork and exec then (you can look at the code of base::LaunchProcess in Chromium for an example). It’s a minuscule amount of code to audit compared to XDG portals. And it’s backwards-compatible with decades of unix programs.
For a more complicated solution: spawn a zygote process early with a unix socket which you’ll use to send the fd later. The zygote drops privileges at start. When it receives the fd, it closes the socket and execs the editor.
I’m not saying it’s not possible to do correctly. But do you not agree that the first approach is hard to do correctly (you can overlook an fd), while the latter is a lot of complexity?
There is the CLOEXEC flag which is the intended way to manage this but it’s not the default and you have to be diligent about setting it which again carries its own set of challenges.
What you’d really want is CLOEXEC implicitly on all fds and having to explicitly opt in for fd inheritance.
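As an aside, Python itself flipped to exactly that polarity in PEP 446: descriptors it creates are non-inheritable (close-on-exec) by default, and inheritance across exec is an explicit opt-in. A small sketch:

```python
import os
import tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)

# Since PEP 446, descriptors created by Python are non-inheritable
# (close-on-exec) by default...
fd = os.open(path, os.O_RDONLY)
inheritable_by_default = os.get_inheritable(fd)

# ...and inheritance across exec must be requested explicitly.
os.set_inheritable(fd, True)
inheritable_after_opt_in = os.get_inheritable(fd)

os.close(fd)
os.unlink(path)

# At the C level the equivalents are open(..., O_CLOEXEC) and
# fcntl(fd, F_SETFD, FD_CLOEXEC).
print(inheritable_by_default, inheritable_after_opt_in)   # → False True
```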
So now any program that's running as your user, even your browser, can edit any file you edit with sudo. It just has to watch for your editor to quit and win a race with sudo to modify the file before sudo reads it.
It is a little less useful if the file is not readable by your user, and once you authenticate, anything within your vim can also silently run other sudo commands, since on most distros sudo remembers the authentication for a while.
Now that I think of it, not sure how sudoedit behaves wrt this cached auth.
That makes your editor run as root, which is a bad idea for many reasons (aside from security, any mistake now has the potential mess with the whole system).
One of the features I use in some (larger) environments, which isn't implemented in sudo-rs or on its roadmap, is LDAP support.
Using regular sudo, this allows you to manage sudo permissions for the entire network from a central LDAP configuration, and even to define rules that are time/host/user/command limited in a central location, with no chance of a simple syntax error wiping out your entire configuration: a malformed rule is simply ignored on its own.
I've lived through the transition to systemd. I'm sure sysadmins will be able to manage this one. And I'm sure great technical documentation exists, this was a PR release, not where I'd look for a list of missing features.
On the flip side, now you get structured logging, efficiently-searchable logs over any of those fields, the ability to easily aggregate logs from multiple machines, the ability to accurately iterate over logs in processes without missing entries, and on and on and on.
Logs as a database is wildly superior to logs as a plain text file, with virtually the only downside being that you need a specific program to tail them.
> the only downside being that you need a specific program to tail them.
The journal can output to text files as well!
But this is exactly it; my only issue is the ideological one that there is no spec for the database. The implementation is the specification, so the only true way of building a reader for the logs is to reimplement the journal. There are attempts to document the layout, but if the documentation and the implementation ever differ, you can't file a bug to get the layout corrected, because by definition the implementation isn't wrong.
I can think of plenty of other applications with bigger issues than that. But I can still want the de facto log format for Linux to have an official spec.
For sure! But that mistake has already been made, and has been in the wild for years, so removing those features (and proposing yourself as a replacement for the original) is now a breaking change.
OpenBSD replaced sudo with doas (with a vastly reduced feature set) several years ago, and without breaking everything. Sure there are use cases where you absolutely need some feature of sudo, but you can always install it.
As annoying as it is to have to update every sudo reference -> doas, it forces you to think about everywhere you're using it, rather than waiting to see what breaks and then trying to fix it.
> As annoying as it is to have to update every sudo reference -> doas, it forces you to think about everywhere you're using it, rather than waiting to see what breaks and then trying to fix it.
In my scripts I never call sudo or doas. Instead, if the script needs to do something as root, I write the whole script so that it expects to itself be run as root.
And then when I want to run my script, I run it as root.
No, it's never better to run whole scripts as root when root is only required for part of it. Unless every expression in your script requires root, the blanket statement holds.
In my experience, and in my own scripts, it is better to explicitly check if you are being run as root, advise against it and exit (with maybe some break glass flags) and invoke sudo when escalated privileges are required.
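A sketch of that pattern (the names and commands here are illustrative, not from any real script): refuse to run the whole thing as root, and build the sudo invocation only for the steps that actually need escalation.

```python
import os
import sys

def ensure_not_root() -> None:
    """Refuse to run the whole script as root (break-glass via env var)."""
    if os.geteuid() == 0 and os.environ.get("ALLOW_ROOT") != "1":
        sys.exit("refusing to run as root; set ALLOW_ROOT=1 to override")

def escalated(argv, euid):
    """Prefix a single command with sudo only when not already root."""
    return list(argv) if euid == 0 else ["sudo", "--"] + list(argv)

# Only this one step runs under sudo; everything else stays unprivileged.
cmd = escalated(["systemctl", "restart", "nginx"], euid=os.geteuid())
print(cmd)
```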
OpenBSD is much more open (pun not intended!) about breaking parts of userspace to push through beneficial changes. After all, they control their own userspace and can fix up most things before they even become an issue. Linux is only the kernel.
They could, but their users really won't like that. They have their workflows that they got used to. In practice it's gonna be GNU Coreutils and Glibc and the other usual suspects. If they bundle something more exotic, it better be for a very good reason. For example musl on Alpine or what Android does.
It's such an entrenched tool that I'm sure a compatible replacement could be useful.
Personally I would appreciate someone to take on the mess that is PAM. It was much too complex from the start and it hasn't become better over the years.
OpenBSD uses BSD_Auth instead of PAM. So you cannot use your YubiKey with doas via PAM on the Linux ports. At least not in the same way, as they do not support caching it seems.
OpenBSD is mainly used where other Unices can be used, and provides widely used software like OpenSSH, OpenBGPD and OpenSMTPD. To say that it's a hobby project strikes me as very ignorant. That said, it is not very easy to convince the developers that a function is missing because it's a pretty opinionated project, and they might not share the user's definition of needed functionality. Thankfully, they're nowhere near ebassi levels of functionality deletion disorder.
> OpenBSD is mainly used where other Unices can be used,
That's a pretty general statement, and I'd say to that not really. It's very much a hobbyist OS. A few people use it at home as firewalls, a few small businesses maybe, but it's mostly hobbyists and developers.
> To say that it's a hobby project strikes me as very ignorant.
I mean, I've been familiar with the project for over 20 years, so I don't think I'm ignorant at all. The developers primarily make the OS for themselves and people with the same ideas and priorities.
> That said, it is not very easy to convince the developers that a function is missing because it's a pretty opinionated project, and they might not share the user's definition of needed functionality.
Right, the devs prioritize their own needs, and can do so because it's a hobbyist OS.
Priorities. If a fundamental security tool's design limits its trustworthiness, that greatly reduces its usefulness, and "breaking" its interface is thus warranted.
There's a gulf between being 99% compatible with something and being a 100% drop-in replacement. If the author of program B wants to replace A, he should go the extra mile and implement every feature of A so as to erase technical excuses for stasis. He needs to put aside his ego, swallow his pride, and implement all the features of A, even the ones he personally dislikes, because the effect of doing otherwise will be that B doesn't replace A. We have to work backwards from our desired outcomes.
Perhaps, but if the goal is security of a critical tool, losing some attack surface (features) if they aren’t widely used is a win. Other projects, like ntpsec[0] have taken this approach with good results. Although, I agree with another commenter in this thread[1] that this effort would have been better directed at something with an inherently small attack surface like doas.
The features they are leaving out were presumably added for a reason. If someone is on a system that is using sudo-rs as a drop-in replacement (not under their control) and they need to use one of those less widely-used features, how secure is the work-around they have to use instead? I'm hoping this factored in to their analysis.
Sometimes reimplementing something and leaving out lesser-used features to "reduce the attack surface" can sound an awful lot like "let them pound sand".
Did you know you can set up an OpenVPN tunnel with no encryption or authentication?
I'm sure for someone out there that's a make-or-break feature, but for the vast majority trying to use OpenVPN it's a massive, insecure footgun. (Hell, how many bugs have protocol negotiation led to in OpenVPN/SSL/etc?)
Compare that to Wireguard that just says... it's encrypted. Full stop. Carry on.
A lot of this tooling and technology was developed in a different era with different priorities. Security, and especially network security, was not such a huge focus 30-40 years ago. Priorities have shifted. The operating environment is a lot more homogeneous (when's the last time you dealt with a layer 2 protocol besides ethernet?), while the risk of poor security has grown immensely.
It's absolutely fair to critically evaluate these features and determine which can be removed to simplify and improve the products for the vast majority of users. If a small fraction of users stay on sudo, but the majority are able to move to a more secure option... that's a win. This is exactly what Wireguard provides versus OpenVPN.
Sure, and those people are free to continue to use the C-based implementation of sudo, implement the features themselves and submit patches to the maintainers of the rust sudo implementation, etc. But, the idea that a feature, once implemented, is sacrosanct and can never be deprecated is insane, especially for a piece of critical security infrastructure.
That’s an interesting criticism. With commercial projects I’ve been involved in, feature deprecation was always based on a business case built on cost vs. future revenue projections, not on a sense of duty to maintain every single feature forever, regardless of whether or not anyone actually used it.
It’s actually mostly just a matter of time and money. They implemented the most commonly used features first, because those are the most important ones. The majority of people could probably swap sudo-rs in for their existing sudo implementation today with nothing lost. Others will need one of those “enterprise features” and so they should either wait or pitch in.
So, you may get a memory-safe su/sudo-rs, but those who distribute it in a binary form won't be obliged to show you the source code it was built from (potentially including some modifications).
I wanted to write this comment too, as I have a serious concern that the effort to displace GPL tools with Rust rewrites in MIT/Apache will one day lead to a proprietary Linux, but the original sudo is not in GPL. It's in an ISC/MIT style license [1].
In what sense? It is completely in the spirit of the GPL to reimplement a GPL tool from scratch with the same behavior and a different license. After all, that's how the free Unixes came about (though admittedly those were BSD licensed typically).
Kinda? Historically there were indeed concerns about reimplementation and copyright. One of the ways that the GNU Project tried to fight claims was to reimplement the tools using dynamically-allocated memory (instead of Unix's traditional fixed-size buffers) to make sure the implementation was sufficiently different. Other ways were making the implementation Posixly correct, adding internationalization or trying to pick different approaches (like using more modern algorithms for sorting). The GNU tools were better, not just direct ports, and that's why it was common to install them in systems like Solaris or HP-UX.
> Kinda? Historically there were indeed concerns about reimplementation and copyright.
Free software projects should welcome multiple implementations and interoperability, because these are the mother's milk of free software. It's frankly incoherent, given values of free software, that a reimplementation of, for example, Unix coreutils (GNU) would find fault with a reimplementation of itself (uutils).
Notwithstanding how philosophically incoherent it is, a desire, now that Linux and free software have some market power, to be a bully back, to grasp for monopoly power, to play AT&T, is really distasteful. What's exciting about free software is not the artifact, Linux or coreutils or sudo, but that anyone can create new and interesting alternatives. That users get to make choices about which implementation to use. The existence of FreeBSD does not make Linux worse. It makes it better! The "solution" to an MIT licensed coreutils is a GNU licensed fork which is 10x better. Instead, we get complaints which amount to a kind of free software entitlement, an endless pissing and moaning about how other people won't do new things your way.
This is a major problem in the way that most normies view GNU and the GPL. In the past, I may not have chosen the GPL for my own projects, but I'd be pleased to contribute to a GPL project. Now, I'd have a hard time contributing to a GPL project, because of just how toxic this attitude (no other license matters but ours) is.
The fact that it is free software shouldn't make things different, since it's a matter of copyright law. It's fine to copy the interface; the implementation, not really. There would certainly be NO problem if a close reimplementation were under the GNU GPL (as a derivative work). But a close reimplementation can't have a different license.
This was a discussion of the philosophical/meta/attitudinal issues, and why this is kinda a bizarre attitude for a free software project to hold.
Legally -- implicit in your comment are unstated assumptions about what is copied from the implementation. Suffice to say -- I disagree that a so-called "clean room" implementation is always required re: software published on the internet, or that the act of simply reading GPL code taints one forever, re: that code. The reason why is the reason why reading a book does not taint one forever and prevent one from writing one's own book -- "You should have used a while loop instead of a for loop here, now you're going to copyright prison." See, and pay close attention to the "Merger Doctrine": https://en.wikipedia.org/wiki/Idea–expression_distinction
It's my view that copyright is a rather weak IP protection, and GPL-type advocates have misconstrued the law of copyright for years, especially "derivative works". Mostly because they've been misinformed about the law by mendacious FOSS leadership cough like the FSF and SFC.
Now I'd agree with you, if there is something especially clever/creative about the GNU implementation that the reimplementation copies, that might be an issue, but again, even then, it seems especially rich for a reimplementation to find fault with another reimplementation.
a reimplementation that copies the interface has no need to copy the license. GPL people just get scared because they know their license can't prevent people from rewriting their own software (of course, such an idea is ridiculous).
the GPL is not a perfect license, there is no such thing.
Arguably writing in Rust will force a similar magnitude difference to those examples. For example, you'll probably replace that modern sorting algorithm with a call to .sort
I'm really not certain this is enough to be a copyright problem.
For instance, GNU and POSIX both publish their specs for the coreutils. If a coder were to take a look at the actual GNU code (which BTW is published for everyone to see), copyright law has a well trodden distinction between the idea and the expression -- that is, ideas are not copyrightable. If the "idea" simply amounts to what would be a more a detailed specification, I'm not sure there is a problem, like ... GNU uses this kernel facility for X. The problem would be vast amounts of "expression", especially "creative expression", directly copied and reimplemented in Rust. If the code is meat and potatoes, not 10xer galaxy brain fare ("I wrote a custom allocator which is suspiciously like the custom allocator implemented by GNU"), there shouldn't be an issue.
Think about what copyright to a play, or a novel, or a screenplay is. Now imagine a comment in the text/source: "This is how Toni Morrison did her characterizations in Beloved". This obviously isn't a copyright violation, unless you're copying the actual expression or a translation of the actual expression found in Beloved.
The GNU ~~coreutils~~ util-linux version of su would be GPL, but su is from Unix V1 (1971), so there are AT&T implementations, BSD implementations, etc.
That only checks that binaries are the same as what is published by distributors.
It doesn't help against a malicious distributor, unless package managers also do a deterministic build themselves and verify that the checksum of the self-built binary matches the checksum published by the distributor.
If I'm not mistaken, many (most?) of the package managers of major Linux distributions actually build a package from source themselves in an automated process.
Otherwise, getting a package to run on different architectures would be a ton of manual labor. At least Debian certainly has that automated.
Besides the huge security risk involved in... just distributing random binaries?
Anyone building from source has the ability to potentially include some modifications. Unless you are able to verify that the published checksum (from whoever builds the code) matches the source code (by reproducing the build), you cannot be sure there aren't any modifications. Published checksums are often meant only for verifying that the binary you got matches the original binary, not for verifying that it was built from specific source code.
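The distinction can be made concrete with a tiny sketch: a hash only ever ties you to the exact bytes that were hashed, so matching a distributor's checksum proves you got their binary, while tying the binary to the source means rebuilding it yourself and comparing your own digest (which is what reproducible builds make possible). The byte strings below are made-up stand-ins for real binaries.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# What the distributor published alongside their binary:
distributed_binary = b"\x7fELF...whatever the distributor shipped..."
published_checksum = sha256(distributed_binary)

# Step 1: this only proves the download matches the distributor's binary.
download_ok = sha256(distributed_binary) == published_checksum

# Step 2: tying it to the source requires rebuilding it yourself; only a
# reproducible build makes the digests coincide bit for bit.
my_rebuilt_binary = b"\x7fELF...whatever the distributor shipped..."
build_reproduced = sha256(my_rebuilt_binary) == published_checksum

print(download_ok, build_reproduced)
```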
Many Debian packages already have reproducible builds, but not everything.
From Debian wiki:
> Reproducible builds of Debian as a whole is still not a reality, though individual reproducible builds of packages are possible and being done. So while we are making very good progress, it is a stretch to say that Debian is reproducible.
But the point is that this has nothing to do with the type of an open source license. GPL doesn't guarantee you that a build is reproducible and MIT doesn't prevent you from having reproducible builds.
For better or worse, the use of the GPL is going away; even the future of the Linux kernel is not guaranteed.
In the realm of IoT FOSS UNIX-like operating systems, all the contenders are using a mix of Apache, MIT and BSD licenses, including the ZephyrOS sponsored by the Linux Foundation.
When the GPL generation is gone from the face of the Earth, it won't take long for UNIX-like OSes to get another steward alternative to the Linux kernel, with a license more appealing to big corps.
Would be interesting to see a Debian derivative that combines this with the Rust implementation of GNU Coreutils.[1] Could be a big win for memory safety and performance.
I looked into this a few years back when I was making my own toy Linux distro, and this is the list of packages provided by a typical GNU system that meet POSIX requirements for a userspace:
* `bash`
* `bc`
* `binutils`
* `bison`
* `Coreutils`
* `Diffutils`
* `file`
* `Findutils`
* `flex`
* `gawk`
* `glibc`
* `grep`
* `tar`
* `gzip`
* `M4`
* `make`
* `man-db`
* `man-pages`
* `procps-ng`
* `psmisc`
* `sed`
That's a reasonable start, but you also need, minimally, something to replace `pciutils` and `IPRoute2`, a bootloader, and an init system. For a closer-to-expected experience, add in `TexInfo`, `XZ`, `ZStd`, and `bzip2`, plus `shadow` if you don't want passwords stored in plaintext.
POSIX doesn't dictate an editor, but you probably want something that can run in a terminal. Usually `cURL` and either `openssl` or `GnuTLS`, plus `bind-utils`, `ldns`, or something equivalent are there for actually using the network, something to replicate `gpg` functionality if you're going to install signed packages, and of course the package manager itself. Cargo is fine for Rust app developers, but can't replace an installer of system packages. You likely need an `ssh` implementation to replace `OpenSSH`.
I'm sure there's more I'm missing, but this is pretty close to what you'd get in a minimal server image.
If you're looking to fully get rid of C and not need a C compiler, though, Linux itself is a hurdle. You don't necessarily need a kernel quite as fully-featured, but you need something that at least implements the POSIX system calls. Just about every Linux distro I'm aware of seems to also provide Python and Perl these days as a whole lot of system utilities and build scripts use them. Presumably, rewriting all of Perl and Python in Rust is not feasible, so you either need some other interpreted scripting language good for system scripting that is written in Rust, or somehow make your shell a superset of POSIX but also much closer to a real programming language.
Don't underestimate the lift of replacing `libc`, either. It's not just the C standard library and interface to system calls. It also provides the linking loader that makes it possible to even run other programs, all of the locales and time zones, the system's name server, profiler, memory dumper. A whole lot of stuff.
A Linux distro is going to need a C compiler to self-host regardless of the userland. If you can live without Linux, there's Redox ( https://redox-os.org/ ).
Seriously, better reimplementations are great. But weren't you shocked to read about all those weird sudo features? I mean, the normal stuff is very weird, subtle, and therefore fragile.
Anyone who uses sudo "deeply" should probably think about whether there are other ways.
I'd love to see a verification / validation parameter/flag/tool that allows the user to dry-run the current sudo configuration and print out the parts unsupported by sudo-rs.
Portability helpers for projects like these enable a frictionless change.
IIRC, all the recent sudo vulns are logic errors, not memory safety. I mean, rewrite away but let's not pretend that there couldn't be some new bug introduced due to a misunderstanding of how something works or just a plain old mistake.
> let's not pretend that there couldn't be some new bug introduced due to a misunderstanding of how something works or just a plain old mistake.
Is anyone doing that? I see a lot of claims of memory safety, but as far as I can see the project isn’t saying other types of bugs are for sure eliminated.
In the same way a new memory bug could be introduced to the original sudo.
Shrinking the attack surface with static checks seems like a better deal in the long run.
doas is 43184 bytes, and does everything I need. That sudo exploit was a wakeup call. If you haven't moved away from it yet, give opendoas (Debian package) a try.
The thing you linked says "default y" to "Allow legacy TIOCSTI usage". So yeah, you can disable it, no, it's not the default.
There's also a related issue with TIOCLINUX and the paste functionality. (That will however be solved in an upcoming kernel version. I wrote the patch for it :-)
By now we've gotten several projects that re-implement a core unix util in a safe language, often also with better performance due to concurrency, better standard primitives, etc.
Have any of these ever been adopted by a distro as the default implementation? Is that something that might happen? I.e. I would never bother to upgrade my sudo command or my grep command, but getting better defaults would be better
Obviously they would have to be perfect drop-in equivalents, which some of these projects don't try to be, but others of them do
This is good work, and I'm not quite sure why people are complaining about it; there clearly won't be any replacement of the traditional C sudo by this unless it's driven by distros and the community making it happen.
Multiple implementations make it much easier to do fuzzing and generate automatic test suites that may be used to improve all the versions of this critical utility.
> Leaving out less commonly used features so as to reduce attack surface
In principle I think this is a great idea, but when it comes to encouraging adoption, I think this might be a mistake. I would love to use this, but I'm not going to install it myself and make sure it stays updated. Realistically, I'm only going to use it if Debian decides to replace their sudo-c package with sudo-rs. And I just don't see Debian (known for being fairly conservative with changes and updates) doing that when sudo-rs doesn't implement sudo-c's full feature set.
I don't know why anyone is enamored with a rewrite of something as critical as sudo in a memory safe language, as if memory-safety somehow magically makes all of the other types of bugs disappear.
No thanks. Keep this far away from all of my systems.
Having sudo itself makes an OS less secure since malware can use it to easily get root. Sure a rust version may be more secure, but even better would be deleting it entirely.
How else would you do things as root or superuser without exposing everything? If your sudo configuration is ALL = ALL (or is a variant of this), then sudo opens everything up for use/abuse. But if you carefully construct the configuration to allow only certain commands, it’s much better than just giving out the root password to users.
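A hedged sketch of what that careful construction looks like in sudoers syntax (the user, host, and command here are made up for illustration):

```
# Allow the 'deploy' user, on webhost01 only, to run exactly one command
# as root, instead of a blanket ALL = (ALL) ALL grant.
deploy  webhost01 = (root) /usr/bin/systemctl restart nginx
```

Everything not matched by such a rule is denied, which is the property a blanket grant throws away.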
1. Don't. Just include the correct configuration as part of the OS.
2. For a configuration file use a group for passing out write access to users on the system.
3. Have a daemon which handles updating configuration files based off some central source of truth which pushes changes to a fleet of servers. Programs could directly communicate with this daemon to get their configuration.
On a server, absolutely.
On a desktop, the user is usually the administrator anyway. UAC exists in Windows land, and the Administrator account is disabled for a reason.
I disagree. Desktop users run a lot of untrusted software. Allowing software to gain Administrator privileges with a single click from the user is a mistake. It doesn't follow the principle of least privilege where the user should only give apps permission for what they need to do and not more.
Why do we need a memory safe re-implementation? Is there no way to wrap a binary and have it blow up if any attempts at memory unsafe accesses are detected?
I remember a couple of years ago a root exploit in Sudo that was the result of failing to check for a sentinel value, thinking “that is a bug that wouldn’t happen in Rust, even though it isn’t related to memory safety!”
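That bug class can be sketched outside Rust too. The following is an illustration of the in-band-sentinel hazard, loosely modeled on the `sudo -u#-1` issue rather than sudo's actual code: when "no change" is encoded as -1 inside the same unsigned integer that carries real uids, crafted input can collide with the sentinel, whereas an out-of-band encoding like Rust's `Option`/enums cannot collide by construction.

```python
import ctypes

UNCHANGED = -1   # in-band sentinel meaning "leave the uid alone"

def to_uid_t(value: int) -> int:
    """Model C's uid_t: truncate to an unsigned 32-bit integer."""
    return ctypes.c_uint32(value).value

# Attacker-supplied "-1" and the sentinel become indistinguishable:
collides = to_uid_t(int("-1")) == to_uid_t(UNCHANGED)

# An out-of-band encoding (None here, playing the role of Rust's
# Option/enum) cannot collide with any real uid value:
def parse_uid(arg: str):
    uid = int(arg)
    return None if uid < 0 else uid

print(collides, parse_uid("-1"), parse_uid("1000"))
```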
Rust enums are sum types, and imho are one of the few unambiguously good language feature ideas. I miss them any time I use a language where they are not built in. F# is another nice language where they are first class and where I first got familiar with them
I wouldn't mind so much if they just called them "sum types" or "tagged unions", or even some other new name. Reusing the existing name "enum" from other languages, but differently from the way all those other languages have used it for 45 gorram years, is freaking maddening.
Swift and Scala also use the enum keyword to define sum types, and their history goes back earlier than Rust's, so now you have multiple languages to yell at!
I don't think so, OCaml consistently calls them "variant types". I don't know who wrote that page, but that wiki didn't even exist before September and it isn't endorsed by ocaml.org, so I suggest you don't consider it authoritative.