The safe execution of any untrusted Turing complete code is a pipe dream.
At the very least, you need a clean-sheet CPU design, starting from the ISA, with the basic logic operations formally validated against instruction-level analysis, to have a fighting chance.
But even such chips do get pwned, as shown by key recovery from credit cards in the wild.
> The safe execution of any untrusted Turing complete code is a pipe dream.
I don't think that's true. It's not Turing completeness that's the real problem here. It's that software usually has access to accurate timing information, whether it's via RDTSC, gettimeofday(), or sharing memory with another thread that does things that take a predictable amount of time. If a program has no notion of current time and cannot measure how long something takes, then a lot of those side channel attacks no longer work. (Note that this precludes using styles of threading that have nondeterministic results, but it doesn't preclude using styles of threading that are deterministic, like Haskell's parMap.)
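A minimal sketch, assuming POSIX threads and C11 atomics, of the "shared memory with another thread" timer mentioned above: even if RDTSC and gettimeofday() are taken away, a sibling thread spinning on a shared counter gives an attacker a serviceable clock. The names here are illustrative, not from any particular exploit.

```c
#include <stdint.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_uint_fast64_t ticks;  /* the improvised clock */

/* Spins forever, bumping the shared counter at a roughly steady rate.
   The attacker starts this once with pthread_create(), then reads the
   counter before and after the access they want to time. */
static void *counter_thread(void *unused) {
    (void)unused;
    for (;;) {
        atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
    }
    return NULL;
}

/* "Elapsed time" is just the difference in counter values. */
static uint64_t elapsed_ticks(uint64_t start) {
    return (uint64_t)atomic_load_explicit(&ticks, memory_order_relaxed) - start;
}
```

This is why removing explicit timing APIs alone isn't enough: any nondeterministic shared-memory concurrency can be repurposed as a timer, which is exactly why the parenthetical above restricts threading to deterministic styles.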
I do think maybe we should move away from the model of "let's let people run programs composed of arbitrary instructions on their computers, and build all our security around keeping programs from reading and writing things they shouldn't" to a model of "all programs running on this computer were compiled by a trusted compiler, and our security is based on the compiler disallowing certain unsafe constructs". This is sort of analogous to web browsers running JavaScript in a sandbox, or running eBPF in the Linux kernel.
That's not what the poster is talking about though.
They're talking about a Von Neumann machine with special "blessed" tooling written not to produce behaviors that would let users do nefarious things. They want to reduce the space of possible computations from arbitrary to "only these patterns which are provably safe".
Essentially, they want to hobble the user (malicious or not) and force good behavior by giving them tools that are incompatible with malicious behavior. They want Asimov's Three Laws of Robotics for computation.
The issue being, you run into the halting problem real fast when trying to make that blessed toolset. How does it recognize malicious code, or a bad series of individually benign but collectively malignant opcodes? Remember, side channels like Spectre and Meltdown boil down to timing how long it takes for the computer to say "no", and then checking whether a piece of data is cached that should only have been cached if the conditional preempted by the access violation took one value rather than another.
That is: start timer -> run the conditional (expected to raise an access violation) and its speculative branch load -> access check -> exception -> check whether the result value is in cache -> stop timer -> rinse -> repeat.
Each of those is a benign operation that could be sprinkled in anywhere. Collectively, they are a side channel. You could still make variations of the same setup by tossing junk operations in between the necessary steps to evade this blessed tooling's (assumed) unwavering pattern recognition. I wouldn't actually use a compiler to stop this; you'd use a static analyzer to recognize these combinations. And even then, there are a lot of timer -> thing -> stop timer -> check programs out there that aren't malicious at all.
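A minimal sketch of the timer-wrapped cache probe that sequence describes, assuming an x86 machine with rdtscp and clflush available; the speculative, access-violating part is deliberately elided, since the point is only that every surrounding step is an ordinary, benign operation.

```c
#include <stdint.h>
#include <x86intrin.h>

/* Time one load of *addr in TSC cycles: start timer -> access -> stop timer. */
static uint64_t time_load(volatile const uint8_t *addr) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                        /* the probed access */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

/* One round of the sequence above, with the speculative part elided:
   flush the probe line, let the (expected-to-fault) code run, then check
   whether the line came back hot. threshold separates cache hit from miss. */
static int line_became_cached(volatile uint8_t *probe_line, uint64_t threshold) {
    _mm_clflush((const void *)probe_line);  /* ensure the line starts cold */
    /* ... speculatively executed, access-violating victim code runs here ... */
    return time_load(probe_line) < threshold;
}
```

Nothing in that snippet looks malicious on its own; the same flush/time/compare pattern appears in perfectly legitimate microbenchmarks, which is the whole problem for a pattern-recognizing "blessed" toolchain.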
The answer with computers has been "if it absolutely must remain secret, implement security at a higher level than just the computer". Everyone should know that if you've got access, the computer will do what it's told to do.
The poster's suggestion is a pipe dream; and a dangerously seductive one at that, since anytime you hear from the "Trusted/Secure Computing" crowd, it almost always means someone wants to sacrifice everyone else's computing freedoms so they can write something they can pretend to guarantee will work.
Sorry, the cynicism leaked in a bit at the end there; but I have yet to see a security initiative that does anything but make life miserable for everyone except security people. I'll put up with some unsafe behavior in order to keep the barrier to entry low for the field in general, and accept the cost of more rigid, human-centric processes to make up for the indiscretions of the machine. Keep abstraction leakage in check.
>The safe execution of any untrusted Turing complete code is a pipe dream.
The safe execution of any code requires an operating environment that never trusts the code with more than the least privilege required to complete a task. It has worked in mainframes that way for decades.
The IT zeitgeist these days makes me sad. Things can be better, but almost everyone is pushing in counterproductive directions, or has given up hope.
> The safe execution of any code requires an operating environment that never trusts the code with more than the least privilege required to complete a task. It has worked in mainframes that way for decades.
It has nothing to do with any OS-level security features. We are talking about things happening below the level of what software can see.
You just cannot see any sign of such an attack by looking at any register the OS can see.
Timing attacks are only a subset of side channel attacks, though. One can also imagine thermal attacks -- the amount of power you consume leaks information about what you're doing. And if I share a processor with you, there are various ways I can imagine estimating your power usage. On a processor that has dynamic clocking, the clock speed I'm running at is an indicator of the operations you're doing. Even without dynamic clocking, the probability of an ECC error, for example, is likely to change with temperature.
Eliminating timing vulnerabilities is necessary to allow potentially-hostile workloads to share hardware, but it is not sufficient.
Determining what clock speed you're running at seems like it would also require access to timing information, though, right? RAM errors are an interesting idea for sure, but I think that can and should be shored up at the RAM level. I think a strong sandbox, WebAssembly and the like, should be pretty reasonable for running untrusted code.
A 2013 paper[1] demonstrated exactly that: a thermal side channel detected without any on-CPU timing measurement.
Instead, they inferred CPU temperature from clock frequency drift, observed through changes in network packet markers. A bit contrived, but they made it work quite reliably.
I can still determine timing information by measuring how long it takes to execute a program. The only way to prevent this is to enforce constant-time behavior by delaying the response until a fixed amount of time has elapsed (see constant-time comparison functions in cryptography). That's not feasible for many applications, especially operations on a latency-sensitive critical path.
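For reference, a minimal sketch of the constant-time comparison functions mentioned above: the loop touches every byte and never branches on secret data, so the running time depends only on the length, not on where the first mismatch occurs.

```c
#include <stddef.h>
#include <stdint.h>

/* Returns 1 if the two buffers are equal, 0 otherwise, in time that depends
   only on len, not on the contents of a or b. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= a[i] ^ b[i];   /* accumulate differences without branching */
    }
    return diff == 0;
}
```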
You can run algorithms deterministically in a multi-tenant system. If tenants are only allowed to run deterministic algorithms, those side channels are eliminated. Algorithms with provable time bounds can be run and their output delayed until the known time bound, eliminating timing attacks.
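A minimal sketch, assuming POSIX clock_nanosleep, of the "delay the output until the known time bound" idea: bound_ns stands in for a provable worst-case bound (which this sketch does not derive), and the caller observes the same latency no matter how quickly the work actually finished.

```c
#include <stdint.h>
#include <time.h>

/* Runs work(arg), then sleeps until an absolute deadline bound_ns after the
   start, so completion time is independent of the actual running time.
   Assumes work() always finishes within bound_ns. */
void run_with_fixed_latency(void (*work)(void *), void *arg, uint64_t bound_ns) {
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec  += (time_t)(bound_ns / 1000000000ull);
    deadline.tv_nsec += (long)(bound_ns % 1000000000ull);
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec  += 1;
        deadline.tv_nsec -= 1000000000L;
    }

    work(arg);  /* the deterministic, provably bounded computation */

    /* Sleep until the absolute deadline; error handling omitted for brevity. */
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
}
```

As the sibling comment notes, this trades latency for secrecy: every call costs the worst-case time, which is exactly the cost that makes it unattractive on a latency-sensitive critical path.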
The people operating mainframes have a vastly different mindset and skill set from the average computer/smartphone user. The former can and do dedicate 40h/week and more to studying documentation and tweaking the sandboxes of the stuff they run.
Yes, but it wouldn't include code that exploits those vulnerabilities, by design, and it wouldn't trust any other code, so in effect, it would shield the system from it.