That's not what the poster is talking about though.

They're talking about a Von Neumann machine with special "blessed" tooling, written so it can't produce behaviors that would let users do nefarious things. They want to reduce the space of possible computations from arbitrary to "only these patterns, which are provably safe".

Essentially, they want to hobble the user (malicious or not) and force good behavior by giving them tools that are incompatible with malicious behavior. They want Asimov's Three Laws of Robotics for computation.

The issue being, you run into the halting problem real fast when trying to build that blessed toolset. How does it recognize malicious code, or a series of individually benign but collectively malignant opcodes? Remember, side channels like Spectre and Meltdown boil down to timing how long it takes for the computer to say "no": you then access a piece of data you know should only be cached if the conditional that was preempted by the access violation took one value or another.

That is: start timer -> run conditional (expected to raise an access violation) -> speculative load on the branch -> access check -> exception -> check for the result value in the cache -> stop timer -> rinse -> repeat
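To make that sequence concrete, here's a toy Python simulation of the flush-then-reload timing pattern. Everything in it is an invented stand-in (the "cache" is a set, the "timings" are made-up constants, and the function names are mine), so it's a sketch of the idea, not a real exploit:

```python
# Toy simulation of a cache-timing side channel. Illustrative only:
# the cache, timings, and names below are invented for this sketch.

CACHE_HIT_NS = 10     # pretend a cached load takes 10 "ns"
CACHE_MISS_NS = 200   # pretend an uncached load takes 200 "ns"

def victim(secret_bit, cache):
    """Speculatively touches probe[secret_bit] before the access check
    "raises". The architectural result is discarded, but the cache line
    it warmed stays warm."""
    cache.add(("probe", secret_bit))           # microarchitectural side effect
    raise PermissionError("access violation")  # the computer says "no"

def timed_load(index, cache):
    """Attacker times a load of probe[index]: fast means cached."""
    return CACHE_HIT_NS if ("probe", index) in cache else CACHE_MISS_NS

def recover_bit(secret_bit):
    cache = set()                    # flush: start with a cold cache
    try:
        victim(secret_bit, cache)    # run the faulting conditional
    except PermissionError:
        pass                         # the "no" arrives, but too late
    # reload: whichever probe line loads fast reveals the secret
    t0, t1 = timed_load(0, cache), timed_load(1, cache)
    return 0 if t0 < t1 else 1
```

Note that each piece on its own (a timer read, a load, an exception handler) is perfectly ordinary; only the combination leaks anything.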

Each of those is a benign operation that could be sprinkled in anywhere. Collectively, they form a side channel. You could still build variations of the same setup by tossing junk operations in between the necessary steps, dodging the blessed tooling's (assumed) unwavering pattern recognition. I wouldn't actually use a compiler to stop this; you'd use a static analyzer to recognize these combinations. And even then, there are plenty of start timer -> do thing -> stop timer -> check programs out there that aren't malicious at all.

The answer with computers has been "if it absolutely must remain secret, implement security at a higher level than just the computer". Everyone should know that if you've got access, the computer will do what it's told to do.

The poster's suggestion is a pipe dream, and a dangerously seductive one at that: any time you hear from the "Trusted/Secure Computing" crowd, it almost always means someone wants to sacrifice everyone else's computing freedoms so they can write something they can pretend to guarantee will work.

Sorry, the cynicism leaked in a bit at the end there; but I have yet to see a security initiative that does anything but make life miserable for everyone except security people. I'll put up with some unsafe behavior to keep the barrier to entry low for the field in general, and accept the cost of more rigid, human-centric processes to make up for the indiscretions of the machine. Keep abstraction leakage in check.




Ookay. Not sure how you can glean all that from a statement like the one I was responding to...



