> Say, running an instance of Oracle, or a torrent program that by its nature constantly needs to make network connections and write/read different files?
Yes, those seem relatively simple to pledge (source availability aside); there are a lot of permissions they can drop once they've decided, say, where the database lives or which files they're saving to. It gets even better if you're willing to privsep the torrent program, though that could take some refactoring.
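A minimal sketch of that "drop after deciding" pattern, assuming a hypothetical server whose data directory comes from a config file (the paths and promise strings are illustrative, not lifted from any real program):

```c
#include <err.h>
#include <unistd.h>

int
main(void)
{
	const char *datadir = "/var/db/exampledb";	/* normally read from config */

	/* Startup phase: broad enough to read config anywhere and call unveil(). */
	if (pledge("stdio rpath wpath cpath flock inet unveil", NULL) == -1)
		err(1, "pledge");

	/* ... parse configuration, decide where the data actually lives ... */

	/* Restrict the filesystem view to the chosen data directory. */
	if (unveil(datadir, "rwc") == -1)
		err(1, "unveil");
	if (unveil(NULL, NULL) == -1)
		err(1, "unveil");

	/* Steady state: same promises minus "unveil"; pledges can only shrink. */
	if (pledge("stdio rpath wpath cpath flock inet", NULL) == -1)
		err(1, "pledge");

	/* ... serve requests ... */
	return 0;
}
```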
Note that you can trivially do a looser sandbox around unmodified processes using exec pledges and unveil, even for proprietary code. These kinds of sandboxes need to be permissive, though, since they're not aware of program phases. So they're not nearly as tight as a sandbox written by the developer with knowledge about expected program behavior.
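As a sketch of the mechanism, a wrapper along these lines is one way to do it; the paths and promise strings are my assumptions about what a hypothetical torrent client needs, not a drop-in config:

```c
#include <err.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	if (argc < 2)
		errx(1, "usage: wrap program [args ...]");

	/* Filesystem allow-list, inherited across exec. */
	if (unveil(argv[1], "rx") == -1)		/* the target binary */
		err(1, "unveil");
	if (unveil("/usr/lib", "r") == -1)		/* shared libs, if dynamic */
		err(1, "unveil");
	if (unveil("/usr/libexec", "r") == -1)		/* runtime linker */
		err(1, "unveil");
	if (unveil("/home/user/Downloads", "rwc") == -1)
		err(1, "unveil");
	if (unveil(NULL, NULL) == -1)			/* lock the list */
		err(1, "unveil");

	/*
	 * First string: what the wrapper itself may still do.
	 * Second string: the exec pledges the unmodified target inherits
	 * across execv() for its whole lifetime.
	 */
	if (pledge("stdio exec",
	    "stdio rpath wpath cpath inet dns flock prot_exec") == -1)
		err(1, "pledge");

	execv(argv[1], argv + 1);
	err(1, "execv");
}
```

Because everything the target might legitimately need has to be whitelisted up front (a dynamically linked binary typically also needs its libraries visible and prot_exec for the runtime linker), the path and promise lists end up broad, which is exactly the permissiveness trade-off described above.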
> It gets even better if you're willing to privsep the torrent program, though that could take some refactoring.
Now you're talking about modifying the code substantially, which is out of scope for the thought experiment.
Pledge can't really help with the torrent program since it constantly needs to make new network connections and read and write arbitrary files. Unless, as you say, you substantially modify the code.
If substantially modifying the code is off the table, can you give an example of how pledge can prevent an attacker leveraging an RCE in the torrent program? To what extent would they be restricted? You can't, say, limit execution to only certain files/libraries, or restrict the ability to delete or overwrite files, right?
> Note that you can trivially do a looser sandbox around unmodified processes using exec pledges and unveil, even for proprietary code. These kinds of sandboxes need to be permissive,
Yeah, I wouldn't consider that to be a sandbox. Imposing limitations on a program isn't by itself a sandbox, nor is every instance of doing so sandboxing.
> Pledge can't really help with the torrent program since it needs to make new network connections and write and read arbitrary files constantly. Unless as you say, you substantially modify the code.
Unveil helps with the "arbitrary files" part. There's a reason Linux is cloning that interface with landlock.
How? The torrent program needs read and write access to create whatever files it needs to, which can't be predicted ahead of time.
Imagine a worst case scenario for an RCE in a torrent program, and then what is your best case scenario for pledge and unveil being able to confine an attacker?
Because I'm pretty sure it would be a lot less restrictive than what proper sandboxing can provide.
> There's a reason Linux is cloning that interface with landlock.
Sure, because it has advantages as part of defense in depth. I never said it was useless or without value.
Besides that, from memory landlock actually preceded unveil, having started development in 2016, so I don't know that it's fair to say Linux is cloning anything if they had a solution first.
> How? The torrent program needs read and write access to create whatever files it needs to, which can't be predicted ahead of time.
The same way it was handled in Firefox, for example: unveil the output dir. At least my torrent program doesn't shit files all throughout my file system. Maybe yours does?
I meant arbitrary files within that dir, not counting any other dirs/files it has to read. So basically it's marginally more effective than a chroot, without any real granularity.
Besides, you avoided the hard question:
Imagine a worst case scenario for an RCE in a torrent program, and then what is your best case scenario for pledge and unveil being able to confine an attacker?
Because I'm pretty sure it would be a lot less restrictive than what proper sandboxing can provide.
> Imagine a worst case scenario for an RCE in a torrent program, and then what is your best case scenario for pledge and unveil being able to confine an attacker?
Preventing exfiltration of any data outside of the downloads dir. Preventing execution of new programs. Preventing inspection, tracing, and signaling of existing ones. Preventing mmap of writable executable memory for shellcode. And preventing exploits from pivoting through system interfaces like vulnerable sysctls, large subsystems like drm, and so on.
This much can be done without touching the program code, or even binary, at all, using unveil and exec pledges.
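In code terms (a sketch; the promise names come from pledge(2), but the exact set a given client needs is an assumption), the protection mostly lives in what the exec pledge string leaves out:

```c
#include <err.h>
#include <unistd.h>

/* Called in a wrapper just before execv()ing the unmodified client. */
static void
confine(void)
{
	/* Exfiltration: nothing outside the downloads dir is even visible. */
	if (unveil("/home/user/Downloads", "rwc") == -1)
		err(1, "unveil");
	if (unveil(NULL, NULL) == -1)
		err(1, "unveil");

	/*
	 * Exec pledges for the target.  What is missing does the work:
	 *   no "exec"/"proc"  -> cannot start new programs or signal others
	 *   no "ps"           -> cannot inspect existing processes
	 *   no "prot_exec"    -> cannot map executable memory for shellcode
	 *                        (feasible for a static binary; a dynamically
	 *                        linked one needs it for the runtime linker)
	 *   nothing else      -> most sysctls, drm and similar ioctl-heavy
	 *                        subsystems stay unreachable
	 */
	if (pledge("stdio exec", "stdio rpath wpath cpath inet dns") == -1)
		err(1, "pledge");
}
```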
If you're willing to refactor the code a bit, you can also prevent new sockets from being opened and new addresses from being listened on if the code doing networking is isolated from the code doing disk I/O.
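A rough shape of that split, assuming a hypothetical client refactored into a network half and a disk half talking over a socketpair:

```c
#include <sys/socket.h>

#include <err.h>
#include <unistd.h>

int
main(void)
{
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
		err(1, "socketpair");

	switch (fork()) {
	case -1:
		err(1, "fork");
	case 0:						/* network process */
		close(sv[0]);
		/* Can talk to peers and trackers, cannot touch the filesystem. */
		if (pledge("stdio inet dns", NULL) == -1)
			err(1, "pledge");
		/* ... peer/tracker I/O; pass received pieces over sv[1] ... */
		_exit(0);
	default:					/* disk process */
		close(sv[1]);
		if (unveil("/home/user/Downloads", "rwc") == -1)
			err(1, "unveil");
		if (unveil(NULL, NULL) == -1)
			err(1, "unveil");
		/* Can write downloads, cannot open sockets or listen. */
		if (pledge("stdio rpath wpath cpath", NULL) == -1)
			err(1, "pledge");
		/* ... read pieces from sv[0] and write them out ... */
	}
	return 0;
}
```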
> Preventing exfiltration of any data outside of the downloads dir.
Except for all the data it needs access to. I'm not so sure torrent programs will continue to function correctly if they can't re-read their config file; in my experience most want access to a temp directory, the ability to run a few external applications like rar or zip, and so on. Most torrent programs need access to more than just the directory where downloads end up when complete. To illustrate, the allow-list grows quickly; see the sketch below.
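Something like this (paths hypothetical) before it even starts looking realistic:

```c
#include <err.h>
#include <unistd.h>

static void
unveil_or_die(const char *path, const char *perms)
{
	if (unveil(path, perms) == -1)
		err(1, "unveil %s", path ? path : "(lock)");
}

/* What a real client seems to need beyond just the downloads dir. */
int
main(void)
{
	unveil_or_die("/home/user/Downloads", "rwc");		/* finished files */
	unveil_or_die("/home/user/.config/torrent", "rwc");	/* config, resume data */
	unveil_or_die("/tmp", "rwc");				/* scratch space */
	unveil_or_die("/usr/local/bin/unrar", "rx");		/* external unpackers */
	unveil_or_die(NULL, NULL);				/* lock the list */

	/* And running unrar at all means keeping "proc" and "exec" pledged. */
	return 0;
}
```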
> Preventing execution of new programs.
This gets spicy if the torrent program is written in an interpreted language like python, no?
I honestly don't have much faith in how much unveil/pledge can restrict things in this scenario, but as a result of this discussion I now have an OBSD box again, so I can test and play around with it.
> If you're willing to refactor the code a bit
That's beyond the scope of the question. It's bad enough that there's no mechanism to sandbox binaries where you don't have access to the code; talking about rewriting programs to solve the issue is some Kobayashi Maru nonsense.
Chrome and Firefox have both been successfully pledged and unveiled. What programs more complex than them are you considering?