
how is trusting your fab any different than trusting Intel?

> that's not very different from trusting your compiler

if you were paranoid enough to be worrying about CPU backdoors, why would you trust your compiler?




>how is trusting your fab any different than trusting Intel?

You increase the cost of an attack: it's harder to change a processor's behavior by editing the mask than by editing the VHDL. If you were super-paranoid you could source the design to multiple different fabs and run the chips you get back in parallel, with some sort of trap that goes off whenever one processor's results differ from another's.
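A minimal sketch of that trap in C, assuming the two chips' computations are exposed to software as functions. The run_on_chip_* names are hypothetical stand-ins; a real design would compare results in hardware, at the bus level:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-ins for the same computation running on chips
     * sourced from two different fabs. Here they just simulate identical
     * honest behavior; in reality each would drive a separate processor. */
    static uint64_t run_on_chip_a(uint64_t x) { return x * 2654435761ULL; }
    static uint64_t run_on_chip_b(uint64_t x) { return x * 2654435761ULL; }

    /* Run both chips on the same input and trap on any divergence:
     * a mismatch means at least one chip is faulty or backdoored,
     * so refuse to continue. */
    static uint64_t checked_run(uint64_t input)
    {
        uint64_t a = run_on_chip_a(input);
        uint64_t b = run_on_chip_b(input);
        if (a != b) {
            fprintf(stderr, "lockstep mismatch on input %llu\n",
                    (unsigned long long)input);
            abort();
        }
        return a;
    }

    int main(void)
    {
        for (uint64_t i = 0; i < 5; i++)
            printf("%llu -> %llu\n", (unsigned long long)i,
                   (unsigned long long)checked_run(i));
        return 0;
    }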

>if you were paranoid enough to be worrying about CPU backdoors, why would you trust your compiler?

If you don't trust your compiler, why even bother worrying about CPU backdoors when you've got a much easier attack vector open?


> run the chips you get back in parallel

Who's to say you're going to trigger the condition that causes the backdoor? Seems very unlikely. If you have ideas on this, though, I'd be interested.
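To make that concrete: a backdoor can be gated on a single attacker-chosen input, so parallel runs over ordinary workloads will almost never diverge. A toy model in C (the magic constant is invented for illustration):

    #include <stdint.h>

    /* Toy model of a triggered backdoor: behaves correctly on every
     * input except one attacker-chosen 64-bit value. Random testing
     * has a ~2^-64 chance per trial of hitting the trigger. */
    uint64_t backdoored_multiply(uint64_t a, uint64_t b)
    {
        if (a == 0xDEADBEEFCAFEF00DULL)  /* attacker's magic trigger */
            return 0;                    /* e.g., break a crypto check */
        return a * b;                    /* correct behavior otherwise */
    }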

> If you don't trust your compiler, why even bother worrying about CPU backdoors when you've got a much easier attack vector open?

You may not trust your compiler, and therefore do certain things in a VM where, e.g., network access is limited. See [1].

[1] http://qubes-os.org/Home.html


It's hard to change the behavior of the CPU because it's in the middle of the chip and highly speed-optimized. It's much easier to add an extra chunk of silicon that affects one of the peripherals. I'd probably add something in the southbridge to spoof the BIOS flash.


Trusting Intel and trusting your fab are different problems. Intel creates a design and sends it to the fab. Intel has to trust that the fab will not alter the design, but in general that is an extremely difficult attack to carry out: it would most likely require a far more in-depth understanding of the IC design than the fab has. However, Intel can put whatever it wants into an IC, and it would be incredibly difficult for anyone to find it.

Also, trusting your compiler is different from trusting your CPU because one is much, much easier to check than the other. You can build GCC yourself, read the source code, and manually inspect its output. You could even write your own compiler. In general, we can't yet make our own processors or verify their internals.


Looking at the source code is not enough; see Ken Thompson's "Reflections on Trusting Trust": http://cm.bell-labs.com/who/ken/trust.html
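The standard countermeasure to Thompson's attack is David A. Wheeler's "diverse double-compiling": compile the suspect compiler's source with an independent second compiler, use that result to compile the same source again, and compare the output bit-for-bit against the suspect compiler's self-compilation. If the builds are reproducible and the source is honest, they must match. The build steps are ordinary compiler invocations; the final check is just a byte-for-byte comparison, sketched here in C:

    #include <stdio.h>
    #include <stdlib.h>

    /* Byte-for-byte comparison of two binaries, the last step of
     * diverse double-compiling. Identical bytes mean the suspect
     * compiler introduced nothing the independent one didn't. */
    static int files_identical(const char *p1, const char *p2)
    {
        FILE *f1 = fopen(p1, "rb"), *f2 = fopen(p2, "rb");
        if (!f1 || !f2) { perror("fopen"); exit(2); }
        int c1, c2;
        do {
            c1 = fgetc(f1);
            c2 = fgetc(f2);
        } while (c1 == c2 && c1 != EOF);
        fclose(f1);
        fclose(f2);
        return c1 == c2;  /* both hit EOF at the same point */
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
            return 2;
        }
        return files_identical(argv[1], argv[2]) ? 0 : 1;
    }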


So write the compiler in assembler. Or at least a compiler that can be used to compile the source for the "real" compiler.


That's the problem: you would have to trust your assembler. How was the assembler compiled or assembled? And so on. You would have to go all the way back to the first "physical" translation of a program into a computer.


Hmm, indeed.

You'd have to write an assembly program and then a hand-translated binary version that can run directly on the bare metal with no OS. And use it to compile the "real" compiler.
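Not bare metal, but here's a small C sketch of the hand-translation step itself: a two-instruction function encoded to bytes by hand from the x86-64 manual and executed directly, with no assembler involved (assumes Linux/x86-64 and the SysV calling convention):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Hand-translated x86-64 machine code for:
     *     mov eax, edi   ; 89 F8
     *     add eax, esi   ; 01 F0
     *     ret            ; C3
     * i.e. int add(int a, int b) { return a + b; }, encoded by hand
     * from the instruction-set manual. */
    static const unsigned char code[] = { 0x89, 0xF8, 0x01, 0xF0, 0xC3 };

    int main(void)
    {
        /* Place the hand-encoded bytes in executable memory and jump
         * to them. (Hardened systems may forbid W+X mappings.) */
        void *mem = mmap(NULL, sizeof(code),
                         PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }
        memcpy(mem, code, sizeof(code));
        int (*add)(int, int) = (int (*)(int, int))mem;
        printf("2 + 3 = %d\n", add(2, 3));
        return 0;
    }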

I wonder if one could make that simple enough to be "somewhat reasonable," yet complex enough to compile the real compiler. Probably! Though it may take a team of people quite some time.

I wonder if government(s) have infrastructure/teams that do this already, or if there are any open source projects aimed at this kind of thing.


Various thoughts:

1. Trusting chip fabs is a non-starter. Institutional security only helps large institutions. You will not audit your fab, nor can you trust other people to do it for you. Unlike software, there is no "certificate" that is easily verified in the event of a backdoored fab.

2. There are two phases in the lifecycle of a backdoor. The first is its latent 'offline' period, which is noninteractive with respect to the attacker. For example, a backdoored compiler [RoTT] basically propagates a virus to new copies of the compiler. Solving this seems tractable by eliminating bootstrapping as the sole method for compiling a compiler, and by research into interpreted languages that are easily ported to new platforms for the stage0 compiler (see the toy interpreter sketch after this list).

3. The second phase is when the introduced vulnerability is 'active' and ready to be exploited over the network by an interactive attacker. The case of a network-facing compiler-infected binary should be easily solved by solving the first problem. The case of backdoored hardware/microcode is much more insidious, and requires coming up with assumptions to frame the problem for even a chance at being tractable. (Also, it requires the trustable software tools from (2) for implementing the solution)

4. Secure 'offline' computing on its own would be a boon for things like maintaining the integrity of master cryptographic keys (although keep in mind one still has to prevent against a backdoored CPU acting as an infected stage0, but this is much easier in a noninteractive setting). I'm less familiar with state of the art in this area than I should be, but I'm guessing implementations are probably still in the dark ages of 'trust the company'.
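On (2), a toy illustration of how small an easily-ported, hand-auditable stage0 interpreter can be: a complete Brainfuck interpreter in C, standing in for whatever minimal language a real bootstrap chain would use. Porting or eyeball-auditing something this size is a weekend job, not a research project:

    #include <stdio.h>

    /* Toy stage0 illustration: a complete Brainfuck interpreter.
     * The point is size: a program this small can be audited by eye
     * and re-ported to a new platform quickly, which is what you want
     * from the first link in a bootstrap chain. */
    int main(int argc, char **argv)
    {
        static unsigned char tape[30000];
        static char prog[65536];
        unsigned char *p = tape;
        long n, i;
        FILE *f;

        if (argc != 2 || !(f = fopen(argv[1], "rb"))) {
            fprintf(stderr, "usage: %s program.bf\n", argv[0]);
            return 1;
        }
        n = fread(prog, 1, sizeof(prog), f);
        fclose(f);

        for (i = 0; i < n; i++) {
            switch (prog[i]) {
            case '>': p++; break;        /* move tape head right */
            case '<': p--; break;        /* move tape head left  */
            case '+': (*p)++; break;     /* increment cell       */
            case '-': (*p)--; break;     /* decrement cell       */
            case '.': putchar(*p); break;
            case ',': { int c = getchar(); *p = (c == EOF) ? 0 : (unsigned char)c; } break;
            case '[':                    /* if cell is 0, skip past matching ] */
                if (!*p) {
                    int depth = 1;
                    while (depth) {
                        i++;
                        if (prog[i] == '[') depth++;
                        else if (prog[i] == ']') depth--;
                    }
                }
                break;
            case ']':                    /* if cell is nonzero, jump back to matching [ */
                if (*p) {
                    int depth = 1;
                    while (depth) {
                        i--;
                        if (prog[i] == ']') depth++;
                        else if (prog[i] == '[') depth--;
                    }
                }
                break;
            }
        }
        return 0;
    }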


I think it boils down to writing arbitrary data into memory, so that it doesn't have to follow the usual path programs take through a compiler. As an anecdote, a friend of mine used to flash a "hello world" program into a microcontroller using push buttons only :)


Intel outright owns most of the fabs that produce its processors. Intel is the fab; hopefully they trust themselves.

AMD, on the other hand, is going fabless for at least some of their products. TSMC was mentioned in the past as a partner. So you'd have to trust both AMD and TSMC.


I have heard it joked that Intel is a fab with a small design firm attached.



