Making Rust supply chain attacks harder with Cackle (davidlattimore.github.io)
181 points by djoldman 11 months ago | 83 comments



This could be interesting, but the biggest problem is that “unsafe” is a master permission that implies all other permissions in addition to memory and thread safety weaknesses. With unsafe you can do file I/O, spawn processes, access the network, etc. For example, the nix crate can do all of this without ever touching the std APIs.

Now of course you could disallow unsafe in most crates, but any crates that do use it become much higher-value targets for compromise.

> That said, using unsafe to say perform network access is harder than just using Rust’s std::net APIs, so we’re at least making it harder for a would-be attacker

No, it’s actually quite easy. It’s done every day by C and C++ developers, and in a few hours you can build up some high-level APIs. Heck, you can copy-paste the std or nix implementations, since that’s what they’re doing under the hood anyway.
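
To make that concrete, here is a hedged sketch (assuming Linux and the `libc` crate; address and port are placeholders) of opening a TCP connection without touching std::net - essentially what std does under the hood. The same calls could be declared directly with `extern "C"`, which is what "copy-paste the std or nix implementations" amounts to:

    // Sketch only: connect to 127.0.0.1:8080 via raw libc calls instead of std::net.
    fn raw_connect() -> std::io::Result<i32> {
        unsafe {
            let fd = libc::socket(libc::AF_INET, libc::SOCK_STREAM, 0);
            if fd < 0 {
                return Err(std::io::Error::last_os_error());
            }
            let mut addr: libc::sockaddr_in = std::mem::zeroed();
            addr.sin_family = libc::AF_INET as libc::sa_family_t;
            addr.sin_port = 8080u16.to_be();
            addr.sin_addr.s_addr = u32::from_be_bytes([127, 0, 0, 1]).to_be();
            let rc = libc::connect(
                fd,
                &addr as *const _ as *const libc::sockaddr,
                std::mem::size_of::<libc::sockaddr_in>() as libc::socklen_t,
            );
            if rc < 0 {
                return Err(std::io::Error::last_os_error());
            }
            Ok(fd) // a connected socket, and no std::net anywhere in sight
        }
    }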


You don’t even need unsafe to reproduce unsafe behaviour on Linux. You can just read and write to `/proc/self` and modify memory arbitrarily. If you have `std::fs`, then you have `unsafe`, and then you’ve got everything.
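
A minimal sketch of that (Linux-only; no `unsafe` keyword anywhere, and the variable being overwritten is just an illustration):

    use std::io::{Seek, SeekFrom, Write};

    fn overwrite(addr: u64, bytes: &[u8]) -> std::io::Result<()> {
        // Writing to /proc/self/mem modifies this process's own memory,
        // bypassing the borrow checker, using only safe std::fs APIs.
        let mut mem = std::fs::OpenOptions::new().write(true).open("/proc/self/mem")?;
        mem.seek(SeekFrom::Start(addr))?;
        mem.write_all(bytes)
    }

    fn main() -> std::io::Result<()> {
        let x = 42u32;
        // Taking a raw pointer (without dereferencing it) is safe Rust.
        overwrite(&x as *const u32 as u64, &7u32.to_ne_bytes())?;
        println!("{x}"); // may print 7, even though x was never declared mut
        Ok(())
    }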

In general, sandboxes don’t work well at the language level. You really need to do it at the system level.


Yup.

Even at the language VM level, it doesn't seem to be tenable.

Microsoft tried to go all out on this back in the day with Code Access Security. I remember three things about it:

1. Engineers/sysadmins would easily get frustrated, and just let the app run under full trust

2. Perf issues, since security demands would result in walking up the call stack

3. When they changed things in .NET 4 a lot of web code would break unless you added a magical attribute.

Needless to say, Microsoft more or less gave up on it in .NET Core


Same thing with Java Security Manager which is in the process of being removed with no replacement.

Unfortunately, supply chain security is incompatible with developer convenience, at least without a lot of work to make it bearable.

We will have to suffer through much worse attacks than we see now before people take it seriously (most developers likely never will, but governments will intervene at some point - see the EU's CSA).


IDK what JSM looked like to use, but .NET permissions were in some ways arcane and sneaky.

Back then, you often just ran VS as local admin, supply chain attacks weren't a 'real' thing most of the time, so NBD.

So then you try to deploy your app, and discover the joys of signed assemblies.

And you -make absolutely sure- when you leave, you give instructions to rebuild the whole pipeline if need be.

TBH at least we knew there was the polite illusion of a sandbox...


WDYT of WASM capability system? IMO, it would be a good contender for sandboxing dependencies. However, I haven’t tested it in practice.


Does it have a threading model yet? Otherwise I'm not good enough to even try.


I'd prefer 'sandboxes' at an even lower, function-level, such as with Austral-lang's capabilities: https://news.ycombinator.com/item?id=34168452

There you can more exactly specify what a function may do, instead of relying on blunt categories like "filesystem".


It's true that every layer of the stack needs every layer below it to take security seriously. But that's not an argument against hardening languages, because they're also a part of the stack that upper layers rely upon.


I don't disagree with this. I just want to highlight that to do this sort of sandboxing in any systematic manner, you’ll always need to go down to the system level. Because here we’re trying to sandbox the system using abstractions at the language level, which is bound to run into impedance mismatch. I don’t think it’s the role of the language to restrict which system calls are acceptable.

However, there is some appeal to the syntax introduced by the author if we use it for a proper and portable sandboxing mechanism. Maybe WASI, with capabilities?

To be more specific, there’s no reason for Rust to know that writing to a specific file will allow modifying the program’s memory. It’s also not a security problem from the system’s point of view; it’s just how it works. It really only makes sense for the system to enforce that kind of sandboxing, because it has enough context to enforce things sensibly.


It's still an improvement, and having a significant number of your imports be unable to use any kind of unsafe code or access files or the network greatly decreases the amount of code review you need. If I import a jpeg decoder that works only on buffers I provide and has no unsafe code or equivalents, then I don't have to worry about it exfiltrating any personal data.


I could see a combination of this approach and one of the audit approaches like `cargo crev` working well in the unsafe case:

- Require audit if there is new unsafe code

- Otherwise, rely on cackle to enforce no use of fs/net etc in safe Rust

This could provide the best of both worlds, automating most of the audit burden while still providing strong guarantees.
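
For illustration, a rough sketch of what such a policy might look like in Cackle's `cackle.toml` (field names are from my reading of the Cackle docs and may not match the current version exactly; crate names are placeholders):

    [api.net]
    include = ["std::net"]

    # A crate doing HTTP is allowed to use network APIs, but nothing else.
    [pkg.some-http-client]
    allow_apis = ["net"]

    # A crate needing unsafe gets it explicitly, which is where the manual
    # audit (e.g. via cargo-crev) would be focused.
    [pkg.some-ffi-wrapper]
    allow_unsafe = true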


You wouldn't want the unsafe in your own crate, of course. You'd leave that for the javascript jit, which nobody will think twice about. You definitely need JavaScript because you want to talk to polkit, because of course the user should be allowed to select a color profile. Naturally. And then, oops, maybe a little extra JavaScript gets passed to the interpreter that does some funny stuff.


A semi-automated system that compares old unsafe code to new unsafe code would likely be really helpful here - say, an LLM prompted to investigate whether the new unsafe blocks are a significant departure in scope and documented intent from the old unsafe blocks. Unless the winners of https://www.ioccc.org/ are among your attackers, it's a pretty solid line of defense.


> in a few hours you have built up some high level APIs

I think this is what OP is talking about when he calls using the std::net APIs easier.

Driveby hackers are (probably) not going to spend a few hours re-implementing a network call.


Detecting supply chain attacks at the end-developer level using such tooling is rarely going to pan out. Developers just don't care enough. There is a considerable increase in effort required when such defences are introduced.

Also, there are too many things to keep track of for the authors of such approaches. Eg: How would this tool handle changes in the standard library API?

Languages and repositories need to be funded appropriately to solve a problem that is systemic.

I've played with several methods, built tooling around this problem (github.com/R9295). Hell, my thesis is going to be about supply chain security. Solutions that inform the developer and allow them to make decisions don't work.

I get that there is no absolute defence and security is layered etc. But these approaches feel hacky


I respectfully disagree. I care about supply chain attacks, to the level that I only use reputable, preferably small dependencies. I check their dependencies too. However, if I could limit their interface to a limited set of operations and be warned if a new version changed the APIs it uses, I would be ecstatic. This goes into CI so I don't need to remember to run it; it is fire-and-forget. Kind of like CSP for JS (in the browser).

Not using Rust though (yet?), so can't vouch for Cackle. Great idea though.


> Detecting supply chain attacks at the end-developer level using such tooling is rarely going to pan out.

Even if true, the uptake fraction (e.g. usage %) isn’t the key point; the point is to provide an additional (rather novel, to boot) layer of analysis and therefore security. Remember: security is layered.

> Developers just don't care enough.

Even if narrowly true, this neglects to recognize that developers exist in varied situational contexts. When properly motivated, some can use these kinds of tools.


They are forced to care when their employer does.

Hence why many companies force internal repos in CI/CD, with dependencies only uploaded after legal clearance.


They would care if it was part of their job


While this highlights a real problem and what seems like a reasonable solution -- the more I am exposed to it, the more I just feel like automatic transitive dependency pulls of 3rd party packages, from the Internet, are just... not a good idea for many use cases.

Crates.io, NPM, Maven repository, etc are a wild west. That's fine for OSS developers and hobbyists. I think it's crazy for a business.

Honestly, if I were starting a commercial project from scratch, I'd dispense with it all and "vendor" packages like Google does: they go in your (probably mono) repository as tagged, versioned, maintained-by-you, directories. Perhaps as a git submodule, perhaps as a fork. But not pulled from the Internet.
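
In the Cargo world, `cargo vendor` gets you most of the way there: it copies every dependency into a directory in your repo and prints a snippet along these lines for `.cargo/config.toml`, after which builds never reach out to the Internet:

    [source.crates-io]
    replace-with = "vendored-sources"

    [source.vendored-sources]
    directory = "vendor"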

Yes, you can set up your own mirror or private repository, but this only solves half the problem.

Because the other half is cultural: systems like cargo or npm encourage very deep dependency chains, lots and lots of packages. It becomes exhausting and difficult to track what is doing what and who is maintaining it. It leads to bloat in build times, but also to brittleness and complexity.

Tracking dozens of semvers and dependency trees I think becomes seductive to people as a kind of exciting busywork. But it's not usually solving the core business problem. It very often creates new ones unrelated to the thing you're supposed to be thinking about: shipping a stable, working product that solves customer needs.


>> Maven repository, etc are a wild west.

I'd disagree with this one being characterised along with the others. When I want to publish on Maven Central, I have to:

    1. Prove I own the domain I'm about to upload a package under, e.g. if I claim com.myname - then I'm going to need to prove that to Maven Central by creating a DNS TXT record on com.myname

    2. Sign my release - every jar I publish needs to be signed by my (or my org's) GPG key

    3. On top of the automated mechanical controls, there's an actual human sign off in the loop for the registration process at least
This might still leave attacks like typo-squatting potentially open, but that's not an easy thing to do, and it effectively stops most of the other horrors like replacing an already published artifact with a malicious version, or "brand-jacking" my library's name and pushing up a new malicious release.


Yes, that's fair, Maven did start with a better story than others in terms of authenticity of sources, etc.

Crates.io doesn't even have the concept of an organizational namespace. It's ridiculous.

But Maven still has the broader cultural problem of automatic dep resolution: it's just too easy to go adding deps for every little utility and nifty function or feature or framework.


I wouldn't even call it a solution. If you have a trustworthy dependency that uses, say, net and fs APIs, and that dependency suddenly becomes malicious, the malicious update will still be able to wreak havoc without increasing its API use and triggering any alert. And as another comment has pointed out, if a dependency is allowed to use unsafe it can do pretty much whatever it wants. Ultimately you still have the same choices for each dependency:

- Trust it blindly

- Audit the code (and do that again for each update)

- Write it yourself instead

The last two can be time and resource consuming, so you sometimes have to choose the first option.

Cackle can be a useful tool to (occasionally) raise alarms for when dependencies you trust blindly start using different APIs (so the trust isn't completely blind anymore). But it doesn't really solve the problem.


You could solve this with capabilities: make the main function not only take argv, but also a map of unforgeable tokens for the various sorts of possible “unsafe” actions the user wants the program to be able to do. Add APIs that can restrict these tokens (e.g. take the filesystem access token and generate a “access this directory and its children” token). Any code that wants to do one of these unsafe actions must take a token as a parameter and pass it to the standard library function that actually does the thing. (FFI makes this hard, but just prevent deps from doing that unless the developer opts in and also prevent deps from interacting laterally by requiring each dep to use its own copy of transitive deps).

This sort of capability-based approach to security would make untrusted code relatively safe to execute because the worst it could do without the explicit cooperation of the developer is an infinite loop.
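
A rough sketch of the token idea in ordinary Rust (the unforgeability is only simulated here with a private field; the real proposal needs runtime support so that `main` receives the tokens and dependencies cannot mint them):

    mod caps {
        use std::path::{Path, PathBuf};

        pub struct FsCap { _priv: () }   // constructible only inside this module
        pub struct NetCap { _priv: () }

        // Stand-in for the runtime handing all authority to main() at startup.
        pub fn ambient() -> (FsCap, NetCap) {
            (FsCap { _priv: () }, NetCap { _priv: () })
        }

        pub struct DirCap { root: PathBuf }

        impl FsCap {
            // Derive a narrower capability limited to one directory.
            pub fn restrict_to(&self, root: &Path) -> DirCap {
                DirCap { root: root.to_path_buf() }
            }
        }

        impl DirCap {
            pub fn read(&self, rel: &str) -> std::io::Result<Vec<u8>> {
                // A real implementation would canonicalize and reject `..` etc.
                std::fs::read(self.root.join(rel))
            }
        }
    }

    fn main() {
        let (fs, _net) = caps::ambient();
        let assets = fs.restrict_to(std::path::Path::new("assets"));
        // A dependency only touches the filesystem if we hand it `assets`;
        // without a token, the worst it can do is compute (or loop forever).
        let _logo = assets.read("logo.png");
    }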


Something like the JVM security manager ? https://docs.oracle.com/javase/8/docs/api/java/lang/Security...

I wonder if anyone tried to use it to limit dependency risk in that way.


My impression was that the SecurityManager was ACLs. I’m thinking more of capabilities as found in the E language and various protocols like CapTP. The idea is that there is no “ambient authority” in a program: to be able to interact with the outside world, you need to have a token that the runtime guarantees cannot be created by any program. All the tokens would be passed to the main function at startup and then passed down the call stack explicitly to code that wants these features.

The whole paradigm is to avoid needing to check permissions by making it impossible in principle to do anything you’re not allowed to do.


It's a neat idea, but you'd probably have to build it into the OS from the ground up for it to work. And then a whole ecosystem of development languages and tools built around it. Quite a lot of work to have something anywhere near as functional as what's around today.


I don’t think so; a language by default doesn’t really have any access to the environment (ignoring side channels like Rowhammer attacks) aside from access to memory and the CPU. Ensuring the security properties I’m talking about is mainly a matter of designing the runtime’s OS interfaces from the ground up with a capability model.


The problem with Cackle is probably that 99% of the time the dependency updates are completely reasonable and valid. It’s going to run into the ‘more noise than signal’ problem really quickly.


Good to see more attempts at analyzing dependencies for malware.

Plug: we've been building Packj [1] to detect malicious Python/NPM/Ruby/Rust/Java/PHP packages. It carries out static/dynamic/metadata analysis to look for "suspicious" attributes such as spawning of a shell, use of files, network communication, use of decode+eval, mismatch of GitHub code vs packaged code, and several more.

1. https://github.com/ossillate-inc/packj


Seems trivial to bypass just by using `extern "C" fn` or `#[link_name]` attributes, unless you're scanning for those too.

But you can get around source scanning too by compiling your code, `include_bytes`-ing the executable, mmap'ing it, and then calling the entrypoint.
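
A deliberately tame sketch of the first bypass: a scanner grepping for `std::process` or `libc::` sees neither, because `#[link_name]` rebinds an innocuous-looking identifier to a real libc symbol. The call site still needs `unsafe`, which is the one thing a tool can still key on:

    extern "C" {
        // Looks like internal housekeeping, but links against libc's system(3).
        #[link_name = "system"]
        fn run_housekeeping(cmd: *const std::os::raw::c_char) -> std::os::raw::c_int;
    }

    fn tidy_up() {
        unsafe {
            run_housekeeping(b"echo totally benign\0".as_ptr() as *const _);
        }
    }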


Sure, what you list may or may not have been missed, but they can all be vetted under the same model. It is possible to make things more automatically secure, step by step.


Not really, because the real malicious code can exist out-of-band of the source, and the attack vector is indistinguishable from normal code.


Then you fix that attack vector in the same way. People don't have to solve 100% of the problem immediately; it's still an improvement.


AFAIK the last step is going to require an unsafe block, which is going to be a bright red flag even for people who don't think to grep for include_bytes.


Unsafe is hardly a bigger red flag than mmap'ing an opaque blob with PROT_EXEC and then calling a function in it. But the entire pretense of such an attack is that you're not looking at the malicious code.

Unsafe is not really a red flag, except for people who don't understand what unsafe means. And like you point out, if you see it, you need to audit the code regardless.


These attacks would be very noticeable in a crate that was not already using unsafe. That's a pretty reasonable start, at least - and it gives a little more benefit to reducing the number of unsafe-using crates you pull in.

(For non-rust folks, using mmap requires using unsafe, so you can't map your own machine code in without being explicit about unsafe)


The issue isn't `mmap`, that's just a contrived example to show that scanning source code isn't enough to stop an attacker from adding malicious code to the supply chain.

Forbidding unsafe is a big gun. You need unsafe code for any FFI, so attackers will just look for crates that link to system libraries and add their malicious code there. If you want to do any kind of fast iteration over slices/vectors you need to use unsafe to explicitly elide bounds checks.

I think the real nugget of truth is that the vulnerability is `cargo update` and that tooling needs to look at changes that happen between versions, and alerting to new unsafe or extern bindings is a good way to do that. But I still see a few obvious ways around it - what you really want is a linker shim that can detect when symbols are being relocated into the final executable that shouldn't be there.
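
An example of the kind of legitimate `unsafe` being described (assuming indices are validated elsewhere), which a blanket "no unsafe" policy would also forbid:

    /// Sum selected elements without per-access bounds checks.
    fn sum_at(xs: &[u32], indices: &[usize]) -> u32 {
        let mut total = 0;
        for &i in indices {
            debug_assert!(i < xs.len());
            // SAFETY: callers guarantee every index is in bounds.
            total += unsafe { *xs.get_unchecked(i) };
        }
        total
    }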


Absolutely. Crates that already use unsafe will be a more attractive target. Most of the ways to sneak arbitrary code execution into rust require unsafe.

As I said, it helps. It's not perfect. But there are a lot of crates out there that don't use unsafe, and having a way to focus your attention is useful. Looking at some of my own projects, for example, I notice that I often use crates like "env_logger" with 3.9M downloads. What an absolutely magical target for a supply chain attack -- but it doesn't use unsafe, or net, or tokio, or a lot of other things, and pinning that restriction with Cackle would let me update it with much less worry. I kinda like that.

I'm really not arguing that this is a silver bullet. It's just .. nice.

Edited to add, as a counterpoint: The "log" crate, which is used by env_logger, _does_ use unsafe. Great target for a supply chain attack -- and perhaps reason to nudge the developers to find ways to get rid of it!


> and perhaps reason to nudge the developers to find ways to get rid of it!

Or perhaps nudge users to understand what `unsafe` means and why it's necessary, rather than evangelize at maintainers.


Nah. The log crate primarily uses unsafe to poke at pointers in a probably-ok-but-racy way on platforms that don't support atomics. It would almost certainly be better to use a platform-appropriate mutex for those limited contexts, or at least allow the presumably-faster, presumably-benign race to be hidden behind a feature flag.

I'm quite familiar with what unsafe means. And in the context of supply chain attacks, there's a clear benefit from going from "a few uses of unsafe as a possible performance optimization" to "no unsafe". Obviously, that doesn't work for all crates and we shouldn't demand it. But I made the comment about log after looking at the code to understand its use of unsafe.
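
A sketch of the mutex alternative being suggested (names are illustrative, not the log crate's actual internals):

    use std::sync::Mutex;

    // Const-initialized Mutex (stable since Rust 1.63) instead of racy
    // raw-pointer writes on targets that lack the needed atomics.
    static LOGGER: Mutex<Option<&'static (dyn Fn(&str) + Sync)>> = Mutex::new(None);

    fn set_logger(f: &'static (dyn Fn(&str) + Sync)) {
        *LOGGER.lock().unwrap() = Some(f);
    }

    fn log_line(msg: &str) {
        if let Some(f) = *LOGGER.lock().unwrap() {
            f(msg);
        }
    }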


Except now you only need to audit the unsafe part, which should be (and usually is) kept to a minimum, rather than the entire project.


And even if the crate with FFI isn't compromised, those are the most likely spots for a CVE anyway. OpenSSL and libcurl bindings, for instance. So we should be paying attention to them anyway. I always prefer a pure safe Rust crate for that reason, and because it is easier to deploy as a from-scratch container or standalone binary built against musl. OpenSSL and libcurl have permissive licenses so they are statically linked anyway, and there are no other options of course.


This piece of advice hits me weird:

> For crates where you don’t need or want new features, bug fixes etc, you could consider pinning their versions.

It seems to me that you should _always_ pin your dependency versions. I'd go so far as to say that build tools shouldn't even have an option to automatically pull the latest - "version" should be a required field for all dependencies (and, ideally, that gets decorated with a hash after the first download).

Yes this means you might miss out on security fixes that you'd otherwise get automatically. But having the code that you run & ship change absent your intention just feels like a bizarre default approach.


The notion that one should always pin dependency versions falls apart when you consider how package managers like Cargo handle libraries. Libraries hard-pinning their dependencies causes a world of trouble.

This is exactly why cargo uses a lock file that does pin your dependencies as described for binary crates, but does not use the same mechanism for libraries. https://doc.rust-lang.org/cargo/faq.html#why-do-binaries-hav...


Wouldn't the library dependency hierarchy have been figured out during development in order to get it to a state that can compile? I'm assuming if you're pinning a dependency you already have something that compiles


Let's say crate B depends on crate A with a pinned dependency, and uses one of its types in a public interface.

Crate C depends on them both. It now can't bring in updates to A until B does, and when B updates that's a breaking change, so it better bump its major version.

Take a look at this trick, for example, for foundational crates updating their major version: https://github.com/dtolnay/semver-trick

Now imagine that being an issue every single patch update.
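
To make the scenario concrete (crate names and versions are made up):

    # B's Cargo.toml: hard-pins A and exposes one of A's types in its public API.
    [dependencies]
    a = "=1.2.3"

    # C's Cargo.toml: wants a newer A. Both requirements are in the same
    # semver-compatible (1.x) range, so Cargo must unify them to a single
    # version, and resolution fails until B publishes a release bumping its pin.
    [dependencies]
    a = "1.2.4"
    b = "1.0"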


Yeah I see what you mean, it's not so much about the first time it's deployed but for future updates where you want to update one dependency but another one requires it to stay the way it is


Good point, diamond-pattern dependencies can get hellish. In the past, rather than deal with shadowing etc, I've resorted to choosing versions of my major dependencies that use the same version of their underlying dependency (for example, sync my GRPC lib with other HTTP client libs so they all use the same version of Netty - this is in the Java world).


Quite interesting. That doesn't seem like completely falling apart, but rather an (unintentional?) attempt at the best of both worlds, since once you set the dependencies of your binary, they will be pinned until you manually run `cargo update`.

Granted, the defense against supply chain attacks would fall into the hands of the library's users, unless the library writer aggressively pins dependencies in Cargo.toml (not the default semver behavior), which is problematic. But at least Cargo allows multiple versions of a dependency in a program; in Python, for instance, this is a much more complicated scenario (though still the smallest of the dependency problems there).


This behavior is very intentional, patterned after Bundler.


That guidance has changed and will hit stable in 1.74 (5 weeks).

For the new guidance: https://doc.rust-lang.org/nightly/cargo/faq.html#why-have-ca...

See also https://blog.rust-lang.org/2023/08/29/committing-lockfiles.h...


Both answers have major downsides.

If you pin your dependencies, when something you depend on fixes a security bug you don't get the fix unless you notice the new release and realize you need to apply it.

If you do not pin your dependencies, then your code will randomly break because you are depending on something that changed (Hyrum's law). In the best case, you push to CI and now have build failures to fix that are unrelated to your change. In the worst case everything builds and you ship to customers, who discover the issue (even if you have a great manual test plan that would have discovered it: if it is the last build before release you probably don't run the full plan).

I don't have a good answer here. I'm in the pin everything camp, and can report it is hard to notice all changes upstream. It was just a few years ago we finally threw away a computer with Windows XP with no service packs - it sat in a closet for years untouched just in case we had to rebuild some old software.


In cargo (Rust's primary build tool), there are two ways to pin dependencies:

- `Cargo.lock`: handled implicitly so long as you check it in; as of Rust 1.74 (the next release), `cargo new` will scaffold all projects this way by default.

- `Cargo.toml`: requires using a `=` version requirement. We generally recommend against pinning this way; see https://doc.rust-lang.org/cargo/reference/specifying-depende...
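
Roughly, the difference looks like this in a Cargo.toml (crate versions are placeholders):

    [dependencies]
    # Caret requirement: Cargo.lock records the exact 1.x version actually built.
    serde = "1.0"
    # `=` requirement: hard-pinned in the manifest itself; generally discouraged,
    # especially for libraries, because it forces the whole graph to agree.
    anyhow = "=1.0.75"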


> I'd go so far as to say that build tools shouldn't even have an option to automatically pull the latest - "version" should be a required field for all dependencies

When I'm developing, I frequently grab the latest versions. Until I share, I want to keep up to date.


I'd argue that the right model is to consider what's been updated whenever you do a release. Likely not to the level of diffing source (which isn't realistic for most setups), but at least lets you see what's changing.


In terms of my personal projects, I have been thinking for some time about using Rust the way I usually use C - minimal dependencies, at most two levels of dependencies (my dependencies and their dependencies, but not their dependencies' dependencies). I could afford it, because of what I usually want to work on. Maybe at times even going further, as I sometimes do with my C projects, and forgoing parts or the entirety of the standard library. It would not be a common way to write Rust, but it would be safer than the usual C and would have some ergonomic wins.


Linux distributions largely solve the problem of C dependencies, both getting and vetting them.


If that's what you want, you're already treading a very non-standard path. At that point, you might want to think about changing languages.

Zig might be a better fit. The community tends to like minimal dependencies and bootstrapping. The downside is that it is a relatively new language with all the grief that brings. But, if you're going Rust with minimal dependencies, you're already in "Here Be Dragons" territory.

And, this might especially apply to you as you mentioned C. I see Zig as a better C while Rust is a much better C++.


> If that's what you want, you're already treading a very non-standard path.

Rust makes it quite easy to forgo a large portion of the standard library. rustc and cargo are separate tools, if you don't want cargo's features, you can use rustc as normal.

These paths may not be popular, but that doesn't mean they aren't well supported, and depending on your niche, they may not even be that unpopular. Embedded targets that are small enough that you're not running embedded Linux, for example, rarely use the standard library.


I found this hypothetical story to explain a supply chain attack easy to understand:

https://davidlattimore.github.io/making-supply-chain-attacks...


Great work, but I'm not sure it will be enough. Rust crates are insecure by design and we need to face it. Here are at least 8 methods to backdoor Rust crates: https://kerkour.com/rust-crate-backdoor

Blocking macros might be, in my opinion, one of the best defenses that you can have today.
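
Build scripts are a classic illustration of why crates are "insecure by design": a hypothetical build.rs runs with the full privileges of whoever types `cargo build`, before any of the "real" code is even compiled (kept deliberately harmless here):

    // build.rs - executed on the developer's or CI machine at build time.
    fn main() {
        // Anything std can do is fair game here: env vars, ~/.ssh, CI tokens...
        if let Ok(home) = std::env::var("HOME") {
            let _ = std::fs::read_to_string(format!("{home}/.gitconfig"));
        }
        println!("cargo:rerun-if-changed=build.rs");
    }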


This is a really overwrought alternative to just breaking up `std` and listing which new libraries one wants in the regular [dependencies] section.

https://github.com/rust-lang/rfcs/pull/1133 Yeah definitely I haven't been wanting this for years...


I would love that as a solution as well, but what it leaves out compared to Cackle is abstractions. You can have some packages that abstract lower-level operations into more restricted higher-level ones, which can give you a different surface area for auditing.


Hmm, I want this but for node. Those dependency trees are wild.


The proposed solution uses security policies to block/allow APIs. However, the problem is accurately figuring out such policies, not to mention the painstaking, time-consuming process behind it.


The author is clearly aware of the problems... the TUI for setting up and maintaining the policies looks very efficient to use!

Very impressive stuff


Note that the process described in the post is a better experience than other block/allow interfaces, because you _can_ specify per library what things it can or can't access. You aren't forced to specify everything at the process or user level, where it can be harder to list or specify such things.

There will still be the possibility of confused deputy attacks, but it still looks like a significant improvement to supply chain security.


s/harder/more difficult

hardening supply chain attacks is probably not the goal here


[flagged]


Yes, and I hope you go all the way to the Linux kernel... Scratch that, you'd better break into Intel and audit their microcode personally. Anything less is just being irresponsible about your dependencies...


There are companies that are doing this. Oxide Computer is one of them. Their customers are going to thank them when it turns out the Equation Group has a backdoor in Intel's BMC.


Now I have to trust Oxide Computer - why should I trust them?


Our code is open source, so you can choose to independently verify that if you desire.

We have attempted to minimize binary blobs to a pretty extreme extent. Unfortunately there are a couple of things from vendors that are impossible to remove, but we have made progress on avoiding as many of the avoidable ones as possible. bcantrill did a talk about this (and some other things) https://news.ycombinator.com/item?id=32911048 if you're curious.

That said your overall point that you’re always trusting somebody is absolutely valid.


A lot of your competition has verifiable open source software as well: https://www.dell.com/en-us/blog/enabling-open-embedded-syste...


It's not really that simple, because you can only choose your imports one level deep. All of the options for 'taking responsibility' add a lot of ongoing busy work in pinning versions, reviewing code, watching for CVEs, etc. A lot of it can be avoided with improvements like the article's.


Why can you _only_ choose your imports one level deep?

With rust at least, don't just pull crates off crates.io. Use a locally managed registry and only allow crates you've mirrored to be pulled. If that crate has a dependency then you need to pull that dep as well. It seems like a lot of work (because it is) but this is how you validate supply chains.
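
A hedged sketch of the Cargo side of that (URL is a placeholder; exact index syntax depends on your Cargo version): source replacement in `.cargo/config.toml`, so anything not mirrored internally simply fails to resolve:

    [source.crates-io]
    replace-with = "internal"

    [source.internal]
    registry = "sparse+https://crates.internal.example.com/index/"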

I'm honestly a bit upset that years before Log4shell I never tried to make a software repackaging company. At least within the DoD sphere, being able to source libraries / software from a US vendor is a big plus. I think enough that I could've basically just provided Review as a Service, where I "sell" OSS software as a US company and then use that money to review and develop that OSS software.


If you can't afford to do that work, you can't afford to depend on those libraries. If you can't afford feature X without depending on a library, then you can't afford feature X.


Except that a huge chunk of the Rust language sits in crates on the Internet that nobody is willing to sign and vouch for.

Proc macros pull in a bunch of dependencies that pretty much exist in every single Rust program. Serde might as well be in the Rust language as the Orphan Rule prevents anything else from being anywhere near as effective now that it has momentum. Those entire trees of dependencies should be signed and vouched by Rust itself as they are effectively part of the language. And, yet, they aren't.

Rust has enough adoption at this point that they should probably switch to something like Maven's repository where you have to have domain control and a PGP key to release things. It would also solve the problem of namespace squatters.


What are you suggesting exactly? Should I manually audit the hundreds of thousands of lines of third-party code that my project uses? That's clearly not feasible.


Right, and because you haven't done that you have no idea how insecure your code is. Tough luck if someone breaks your code via some dependency you didn't audit - you get blamed.


Have you ever written any code? What you are suggesting is totally unfeasible in all but the most paranoid applications.


I write code and I write that from experience. I cannot audit everything, and I've been burned more than once for not doing so.




