I wouldn't even call it a solution. If you have a trustworthy dependency that uses, say, the net and fs APIs, and that dependency suddenly becomes malicious, the malicious update will still be able to wreak havoc without broadening its API use and triggering any alert. And as another comment has pointed out, if a dependency is allowed to use unsafe it can do pretty much whatever it wants. Ultimately you still have the same choices for each dependency:
- Trust it blindly
- Audit the code (and do that again for each update)
- Write it yourself instead
The last two can be time- and resource-consuming, so you sometimes have to fall back on the first option.
Cackle can be a useful tool to (occasionally) raise alarms when dependencies you trust blindly start using different APIs (so the trust isn't completely blind anymore). But it doesn't really solve the problem.
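For reference, Cackle's policy lives in a cackle.toml checked into the repo; roughly (keys as I remember them from its README, so details may have drifted), you classify std paths into named API classes and grant them per package:

```toml
[common]
version = 2
# Pull in Cackle's built-in classifications for these std APIs.
import_std = [
    "fs",
    "net",
    "process",
]

# A crate may only use API classes it has been granted.
# "some_dep" is a placeholder name for illustration.
[pkg.some_dep]
allow_apis = [
    "net",
]
allow_unsafe = false
```

The "alarm" is then Cackle's checker failing when some_dep starts calling into fs or process, or adds unsafe, without the config being updated to match.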
You could solve this with capabilities: make the main function take not only argv but also a map of unforgeable tokens for the various sorts of possible “unsafe” actions the user wants the program to be able to perform. Add APIs that can attenuate these tokens (e.g. take the filesystem-access token and derive an “access this directory and its children” token). Any code that wants to perform one of these unsafe actions must take a token as a parameter and pass it to the standard library function that actually does the thing. (FFI makes this hard, but just prevent deps from using it unless the developer opts in, and also prevent deps from interacting laterally by requiring each dep to use its own copy of its transitive deps.)
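A minimal Rust sketch of what that could look like (everything here is hypothetical: FsCap, restrict, and the token-taking read_to_string are invented for illustration and exist in no real std):

```rust
use std::path::{Path, PathBuf};

/// Unforgeable filesystem capability: the field is private and there is
/// no public constructor, so code outside the minting module can only
/// obtain one by being handed one.
pub struct FsCap {
    root: PathBuf,
}

impl FsCap {
    /// Attenuation: derive a strictly narrower token from a broader one.
    pub fn restrict(&self, subdir: &str) -> FsCap {
        FsCap { root: self.root.join(subdir) }
    }
}

/// The stdlib function that actually touches the disk demands a token,
/// so the authority is visible in every signature on the call path.
pub fn read_to_string(cap: &FsCap, rel: &Path) -> std::io::Result<String> {
    std::fs::read_to_string(cap.root.join(rel))
}

fn main() {
    // In the real model the runtime would mint the root tokens and pass
    // them to main before any user code runs; faked here so the sketch runs.
    let fs = FsCap { root: PathBuf::from(".") };

    // Hand the config-parsing dependency access to ./config and nothing else.
    let cfg_cap = fs.restrict("config");
    match read_to_string(&cfg_cap, Path::new("app.toml")) {
        Ok(s) => println!("{s}"),
        Err(e) => eprintln!("read failed: {e}"),
    }
}
```

The unforgeability comes from FsCap having no public constructor: in this model, the only way a dependency gets filesystem access is to be handed a token, possibly attenuated, from above.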
This sort of capability-based approach to security would make untrusted code relatively safe to execute because the worst it could do without the explicit cooperation of the developer is an infinite loop.
My impression was that the SecurityManager was ACLs. I’m thinking more of capabilities as found in the E language and various protocols like CapTP. The idea is that there is no “ambient authority” in a program: to be able to interact with the outside world, you need to have a token that the runtime guarantees cannot be created by any program. All the tokens would be passed to the main function at startup and then passed down the call stack explicitly to code that wants these features.
The whole paradigm is to avoid needing to check permissions by making it impossible in principle to do anything you’re not allowed to do.
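Continuing the hypothetical FsCap idea from the sketch above (redefined here so this stands alone), the contrast with today's ambient-authority style is visible purely in the signatures:

```rust
use std::path::{Path, PathBuf};

// Hypothetical token, same idea as above: private field, no public
// constructor, minted only by the runtime.
pub struct FsCap {
    root: PathBuf,
}

pub fn read_with(cap: &FsCap, rel: &Path) -> std::io::Result<String> {
    std::fs::read_to_string(cap.root.join(rel))
}

// Ambient authority, as in today's Rust: nothing in this signature
// reveals that the function reads the filesystem.
fn sneaky() -> std::io::Result<String> {
    std::fs::read_to_string("/etc/hostname")
}

// Capability style: the authority is part of the type signature, so no
// permission check ever runs at runtime; code that was never handed an
// FsCap cannot even express the call.
fn honest(fs: &FsCap) -> std::io::Result<String> {
    read_with(fs, Path::new("hostname"))
}

fn main() {
    // Minted by the runtime in the real model; faked here to run the sketch.
    let fs = FsCap { root: PathBuf::from("/etc") };
    let _ = sneaky();
    let _ = honest(&fs);
}
```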
It's a neat idea, but you'd probably have to build it into the OS from the ground up for it to work, and then build a whole ecosystem of development languages and tools around it. That's quite a lot of work to get something anywhere near as functional as what's around today.
I don’t think so: by default a language doesn’t really have any access to the environment beyond memory and the CPU (ignoring hardware attacks like Rowhammer). Ensuring the security properties I’m talking about is mainly a matter of designing the runtime’s OS interfaces from the ground up around a capability model.
The problem with Cackle is probably that 99% of the time the dependency updates are completely reasonable and valid. It’s going to run into the ‘more noise than signal’ problem really quickly.