Intel SGX Explained (iacr.org)
95 points by zmanian on Jan 31, 2016 | 47 comments



The juicy bit:

"That being said, perhaps the most troubling finding in our security analysis is that Intel included a licensing mechanism in SGX that prevents software developers who cannot or will not enter a (yet unspecified) busi- ness agreement with Intel from authoring software that takes advantage of SGX’s protections. All the official documentation carefully sidesteps this issue, and has a minimal amount of hints that lead to the Intel’s patents on SGX. Only these patents disclose the existence of licensing plans."


What are the ramifications of this exactly?


The SDK documentation does, almost, tell you:

  > The signing tool supports a single-step signing process, which requires
  > the access to the signing key pair on the local build system. However,
  > there is a requirement that any white-listed enclave signing key must
  > be managed in a hardware security module. Thus, the ISV’s test private
  > key stored in the build platform will not be white-listed and enclaves
  > signed with this key can only be launched in debug or prerelease mode.
And, indeed, launching an enclave without the debug mode set fails with an SGX_ERROR_SERVICE_INVALID_PRIVILEGE error.
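
To make that concrete, here's a minimal sketch of both launch paths using the SDK's untrusted API; "enclave.signed.so" stands in for an enclave image signed with an ordinary, non-whitelisted ISV key:

  /* Hedged sketch: the only difference between the two calls is the
     debug flag. Assumes an enclave image signed with a key that is
     not on Intel's whitelist. */
  #include <stdio.h>
  #include "sgx_urts.h"

  int main(void)
  {
      sgx_launch_token_t token = {0};
      sgx_enclave_id_t eid = 0;
      int updated = 0;

      /* debug = 1: launch succeeds, but EDBGRD/EDBGWR work against
         the enclave, so it offers no confidentiality or integrity. */
      sgx_status_t ret = sgx_create_enclave("enclave.signed.so", 1,
                                            &token, &updated, &eid, NULL);
      printf("debug launch:      0x%x\n", ret);

      /* debug = 0: with a non-whitelisted signing key, this is the
         call that fails with SGX_ERROR_SERVICE_INVALID_PRIVILEGE. */
      ret = sgx_create_enclave("enclave.signed.so", 0,
                               &token, &updated, &eid, NULL);
      printf("production launch: 0x%x\n", ret);
      return 0;
  }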

A debuggable SGX enclave enables read-a-word and write-a-word primitives, so it loses its confidentiality and integrity.


Intel has shown a disturbing willingness to add unproven revenue-generating functionality to the trusted computing base, like requiring an Intel signature on SGX software or giving Intel DMA access to everything.


I think there would be quite a bit of controversy, as with Edward Snowden, if someone leaked that key somehow. That someone would be considered a hero by many, and a traitor by others.

Alternatively, someone leaks an SGX exploit that bypasses it all, and we wonder whether it was a mistake like so many other vulnerabilities, or if someone deliberately put it there because they didn't believe in Intel having that amount of control... "I wish for the insecurity that brings us freedom."


It'd make no difference, as the keys in question are replaceable via microcode updates and the microcode version is included in the remote attestations.

SGX doesn't really give Intel "control" in the sense of taking away existing freedoms. It's a new feature. You can always elect not to use it, or not to use software that uses it.


You can always elect not to use it, or not to use software that uses it.

That's always the excuse given for every new invasive user-hostile feature. The problem is when the majority of new applications and websites require it. You can always elect not to use a computer either, but I think such a position would be untenable even for an "extreme Stallmanist".


It gives Intel control over developers. In general, a computer will execute what you ask it to. SGX will not let you run production enclaves without Intel's permission. This is like Verified Boot, except there's no credible security benefit to be gained from it.


It is a bit of a bait-and-switch since no other CPU feature works this way and Intel never mentioned this "feature" in all their years of disclosures about SGX.


Isn't that how most extensions work? And microcode has been around for at least a decade now.


I know of no other CPU feature that requires authorization from Intel.


TXT requires an ACM, which is essentially a small signed BIOS subset. At least ACMs are freely downloadable from Intel, and they don't look into what you'd like to run under TXT.

https://software.intel.com/en-us/articles/intel-trusted-exec...


Sadly, leaking the key is not the answer. You'd give independent developers the freedom to use SGX, but at the same time you'd make SGX worthless.

Details: if the key used to sign architectural enclaves (like the Launch Enclave) were to leak, this would completely break SGX. Anyone with the key could create their own Quoting Enclave, and the guarantees behind software attestation would go down the drain.


You're assuming that SGX is only useful in conjunction with attestation.

I want to use SGX to protect cryptographic keys. Attestation is mostly unnecessary.

For normal computing, as long as you control the machines and can bootstrap trust yourself, you don't need Intel's attestation mechanism at all. You do, however, need the ability to launch an enclave.


But enclaves are worthless without attestation.

If the OS is evil and you don't do attestation, it can emulate SGX and run your code in a simulated enclave environment where EGETKEY returns keys that the OS knows about.

If the OS is not evil, you can use process isolation to generate and protect the keys.
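
To make the emulation point concrete, here's a hedged sketch of in-enclave code deriving a sealing key through the SDK's EGETKEY wrapper; note that nothing in it can tell a real enclave apart from a simulated one:

  /* Trusted (in-enclave) code. sgx_get_key() wraps EGETKEY. Inside an
     OS-emulated "enclave", the same call returns whatever key the OS
     chose, and without attestation nothing here can detect that. */
  #include <string.h>
  #include "sgx_key.h"    /* sgx_key_request_t, SGX_KEYSELECT_SEAL */
  #include "sgx_utils.h"  /* sgx_get_key */

  sgx_status_t derive_seal_key(sgx_key_128bit_t *key)
  {
      sgx_key_request_t req;
      memset(&req, 0, sizeof(req));
      req.key_name   = SGX_KEYSELECT_SEAL;      /* ask EGETKEY for a seal key */
      req.key_policy = SGX_KEYPOLICY_MRENCLAVE; /* bind it to this exact enclave */
      return sgx_get_key(&req, key);
  }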


This is a pretty black-and-white view of things. A security technology does not need to solve all problems simultaneously to be of use. (But, you do need to do thorough analysis and be cognisant of the risks.)

Ignoring the cloud computing aspect of SGX, no amount of attestation can recover from a your-OS-is-compromised-from-day-one scenario. The attestation is only as good as its verifier.


Interesting question: should Linux enable non-debug enclaves at all as long as this policy remains in place?


I expect this to play out like the W3C EME standard. Software attestation separates debug from non-debug enclaves, so your kernel will need to load production enclaves for you to watch Netflix. If Mozilla/Firefox caved, so will Linux.


That'd be a ridiculous sort of circular-DRM move that helps nobody, so, no.

One of the primary use cases for this technology is cloud computing. Amazon, Google, etc are all quite capable of patching such checks out of the kernel if they want to use SGX. All it'd do is annoy a few engineers at these companies. Likewise, for deployments on the client, it's really only the Microsoft and Apple kernel devs that matter, given the Linux community's tiny desktop market share, and I don't see why they would do that.


So there's absolutely no reason not to implement DRM, ever, because it doesn't help anybody? Abstaining from implementing a feature you don't agree with, especially in an open source program licensed under the GPL, is certainly a worthwhile move. It at the very least communicates that you don't agree with the feature, and forces others to think about the issue more deeply, since they'd have to e.g. apply a patch to use it. SGX is the antithesis of the GPL and what Stallman has been warning about for years: software that you are physically incapable of debugging, reading, or modifying. And that's scary.

Of course, it's silly if you think the kernel won't accept a patch for SGX on moral grounds. It already has several modules for DRM technologies, along with binary blobs for drivers. But it's nice to think about.


Why is it nice to think about? It'd be a move as pointless and self-destructive as GCC refusing to implement a usable IR because of Stallman's belief that it'd hurt free software. End result: LLVM is now replacing GCC as the de-facto compiler toolchain of choice.

SGX is an optional feature. It doesn't magically run software against your will and it is not "immoral". Ascribing moral positions to CPU features seems ridiculous to me, sort of like describing a pickaxe as "immoral". If you don't want to use it, then don't execute software that includes SGX instructions. If you do want to use it, then all Linux getting in your way will do is convince you that maybe you should be using a different kernel whilst you wait for Intel's own driver to compile and install.


This stance ignores what happens when people are complacent over the long term. If we make it easy to run SGX instructions, that will encourage its use by developers, which leads to more closed source dependence.


There's some support in Intel's Management Engine for DRM, called Intel Insider (the successor of PAVP). One of the SGX papers mentions plans for hooking up SGX enclaves with PAVP.

Based on public docs, you can't do DRM decoding in an enclave. But you can execute some complex logic to validate a user's license and decide whether you want to release the encryption key to the ME or GPU.


If SGX becomes successful, Intel becomes the Verizon+ATT+Tmobile+Sprint of hardware security. No signed enclave, no security.


It sounds like SGX is intended as a DRM mechanism and Intel wants to get a cut.


I'm certainly not an expert, but more like they want a cut from people hoping to deploy their software on third-party cloud platforms. I presume Intel is hoping that Amazon et al will use the chips and people will pay a premium to run software in an environment where it is hard to compromise the running code.


For the scary applications (regarding user freedom) of Intel SGX, see Joanna Rutkowska's two blog posts about Software Guard Extensions.

Part 1: http://blog.invisiblethings.org/2013/08/30/thoughts-on-intel...

Part 2: http://theinvisiblethings.blogspot.com/2013/09/thoughts-on-i...

Intel SGX also has some useful applications alongside those that are harmful to users: search engines that provably don't log queries, mail servers that provably don't keep your mail, provably safe Bitcoin mixers, and so on. But if using Intel SGX requires a business agreement with Intel, I worry we will only see the bad things and not the useful ones.

It is possible I am wrong and cloud providers will give people who aren't Hollywood access to Intel SGX. But all the applications require trusting Intel and the NSA. Hollywood surely does not mind trusting them, but do we?

Intel x86 considered harmful[1] talks about all the scary stuff with Intel's processors.

[1]: http://blog.invisiblethings.org/papers/2015/x86_harmful.pdf


Alex Ionescu wrote a paper [1] on Win10 and SGX.

  > It’s important to realize that for now, only Intel has 
  > the required key to allow an enclave to be launched 
  > without knowing the required CPU-specific enclave key, 
  > and no other (even signed) enclaves can be launched 
  > without it. Once Intel releases a permissive loader, or 
  > if Intel ME vulnerabilities are found to extract the key, 
  > then the real abuse will begin.
  > 
  > Indeed, one area of further research is the Intel SGX 
  > Driver that was released for recent Intel SGX-enabled 
  > Dell Laptops, which contains a le.signed.dll file that is 
  > the Intel Launch Enclave. Additionally, it contains 
  > Intel’s EINITTOKEN that can be used to launch such 
  > enclaves, as well as a service and set of APIs which 
  > appear to make it possible to launch additional enclaves. 
  > Windows 10, on its own, does not seem to ship or support 
  > its own Intel-signed Launch Enclave.
[1] http://www.alex-ionescu.com/Enclave%20Support%20In%20Windows...


Do you happen to know if the Launch Enclave has the debug flag set? If so, you can't use it to launch production enclaves.
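
For reference, here's roughly how enclave code can check its own DEBUG attribute through the SDK's trusted API; this is a sketch, not anything from Intel's Launch Enclave internals:

  /* Returns 1 if this enclave was launched with the DEBUG attribute,
     0 if not, -1 on error. A self-report (NULL target/data) still
     carries the enclave's own ATTRIBUTES. */
  #include "sgx_attributes.h"  /* SGX_FLAGS_DEBUG */
  #include "sgx_utils.h"       /* sgx_create_report */

  int enclave_is_debug(void)
  {
      sgx_report_t report;
      if (sgx_create_report(NULL, NULL, &report) != SGX_SUCCESS)
          return -1;
      return (report.body.attributes.flags & SGX_FLAGS_DEBUG) != 0;
  }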


The initial batch of Skylake CPUs do not implement SGX: http://www.anandtech.com/show/9687/software-guard-extensions...

Any word on whether there will be a BIOS update (for example, microcode and ME updates) that will enable SGX for the first batch of Skylakes, or are they just forevermore broken?


The SGX launch has been one of the worst launches of a major CPU feature I've ever seen coming from Intel; it would be really interesting to learn what went wrong and why.

It's years later than people anticipated (and won't be in the E5 Xeons for a couple more cycles, so 2017/2018/2019).

Not quite NetBurst level, but pretty horrible from a generally execution-excellent company.


I suspect there was significant opposition to SGX from various groups, even from within Intel.


SGX serves a good purpose, at least in theory. Many people, myself included, wanted it to turn out to be good. So, I don't think many Intel folks objected to it.

Instead, I think that a bunch of MBAs showed up and decided SGX is security, security is an enterprise thing, so SGX must be pay-to-play. For whatever it's worth, I think the SGX designers did a pretty good job of separating the objectionable parts from the rest of the design.

For example, the EPID homebrew crypto is all in software, so Intel can change the algorithm without hardware mods or microcode updates.

Also, the way they set up the Launch Enclave leaves Intel until the very last minute to decide not to be a douche. They still have the option to release a permissive Launch Enclave that only includes the checks needed to keep attestation secure.

The SGX design that doesn't come from MBAs is quite clean, given that it has to sit on top of the multi-layered crap pile that is x86. There are some cool tricks in there.


Intel presented on SGX at Real World Crypto.

If the functionality works as is being described here, I feel they deceived the audience, both in their presentations and in 1:1 discussions.

This is especially unfortunate, since I know for a fact their actions have influenced purchasing decisions.


Perhaps it means they're trying to be thorough?

A Xeon implementation would have to secure the QPI links between the CPU chips. These run at significantly higher speeds than DRAM, so the current MEE design would likely not be able to keep up. Also, a Xeon implementation would have to somehow have the chips mutually authenticate each other.

That being said, I wouldn't be surprised if Intel was trying to see if they can get away with the licensing bs in desktops before committing resources to tackling the challenges that I mentioned above.


It's also a completely useless feature in the chip without the ability to set up arbitrary keys at boot time. This is the worst DRM incarnation ever.

I expect AMD to clean up Intel's act on this.


ARM (well, ARM licensees), more so.


A lot of ARM SoCs, particularly in the mobile world, have always had features like this. Apple even explicitly advertises it as a security feature on their iDevices. I think the difference here is because x86/PC has traditionally been one of the more open platforms, so attempts at locking it down are more strongly opposed. The PC is one of the few remaining computing devices where you can still inspect, modify, and develop with relative freedom.


There's a lot of new stuff coming in the ARM world which goes way beyond what they have today. (I know about a couple under NDA, and about the current state of the art, so I have to assume there are others).

The best "demo" of this in a fairly open way is USB Armory; they exposed the best features currently available in a developer friendly way. But there are some other things coming which are even better.

(I care about end users being able to trust remote servers; I don't really care about remote content owners being able to trust local client devices. There are usually no rights issues with the former, and DRM is the obvious (but not only) use case for the latter. I generally believe whoever buys a piece of hardware deserves full rights to it, but there are cases where that's an organization, and they have every right to expect all of their widely-deployed hardware is untampered with... for instance, a closed computing environment for processing PII of third parties.)


I have to admit, the technical aspects of this are way beyond me. Would this technology allow (in theory) secure distributed computation via RDMA?

I've long suspected that Intel understands the implications of mass adoption of cheap, RDMA capable network adapters (iWarp, ROCE and Infiniband): it will cannibalise future CPU sales revenue. Imagine a standard corporate working environment with 1000 workstations on a LAN. At any given time, average CPU utilisation is probably 5-10 per cent tops. It's a similar story with storage and I/O capacity. If you add RDMA (and ultra-low latency networking) to the equation, there is now no need to buy additional computational power for the next 5 years, as there are a tonne of idle resources that can now be efficiently utilised (even for non-parallelisable computation).

From what I can surmise from recent Intel actions, they've opted to not take the 'Microsoft' approach (i.e. hold back the tide), and have instead decided that if CPU markets are going to be cannibalised, they may as well be the ones doing the cannibalising.

Well, that's my theory anyway. Am I crazy?


SGX has nothing to do with RDMA, and I suspect they don't play well together; all data entering/leaving an enclave probably has to be copied. Also, RDMA is only for servers and may not be as powerful as you think.


Thanks for helping me understand this. Please bear with me here, as I'm not as technically skilled as the average HN user.

On your first sentence, is the issue that some fundamental aspect of the SGX security model requires copying data in/out of enclaves, which would make direct computation on data stored in remote memory impossible?

And breaking your second sentence into two parts:

(1) That does seem to be the present state of affairs. From what I can gather, RDMA and low-latency networking is expensive due to the high cost of interconnect/cabling and switching infrastructure. So atm RDMA is exclusively used in HPC clusters and as backplane interconnect between server blades/racks. But I wonder if this will always be the case. Take cabling for example. Retail SFP+ optical interconnect is crazy expensive for even very short runs. If this is because production costs are high by nature, then I'd agree that LL networking and RDMA will remain confined to the server room. But if there are significant unrealised production economies of scale, or there are achievable advances in production techniques that will reduce costs, then deployment at the network edge might be economically feasible once we pass some level of demand/adoption.

(2) On low-latency RDMA not being as powerful as I think: this might be because of my limited understanding. From what I understand, LL RDMA would allow a whole bunch of computers to be abstracted as a single "supercomputer": the inter-memory-processor latency is so low that it makes this abstraction possible. Have I misunderstood the technology? (genuine question)


The whole point of SGX is that the memory of an enclave is ultra-protected so that nothing can get in or out without the enclave's permission. DMA goes completely against that concept.

Much of the market segmentation between desktop and server is artificial, but there's still nothing customers can do about it. RDMA doesn't exist for 1G Ethernet because CPUs can easily keep up with copies. 10G Ethernet has been around for over 10 years and there's no evidence that it will ever trickle down to the desktop.

Almost no networks support RDMA, since it requires special network configuration, special NICs, and special libraries. No clouds support it. So software that requires RDMA can hardly be used by anyone. (Example: http://blog.acolyer.org/2016/01/14/no-compromises/ Of course, the economics of hyperscale cloud providers are different.) Software can be written with an RDMA fast path and a TCP/IP slow path, but then 99% of users will use the slow path, so it's better to optimize around the characteristics of normal networking.
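
The usual shape of that fast-path/slow-path split, as a sketch (probe for verbs-capable hardware at startup, fall back to sockets; everything past the probe is application-specific):

  /* Sketch of RDMA-or-TCP transport selection using libibverbs
     (link with -libverbs). */
  #include <stdio.h>
  #include <infiniband/verbs.h>

  int pick_transport(void)
  {
      int num = 0;
      struct ibv_device **devs = ibv_get_device_list(&num);
      if (devs && num > 0) {
          printf("found %d RDMA device(s); taking the verbs fast path\n", num);
          ibv_free_device_list(devs);
          return 1;  /* caller sets up QPs, registers memory regions, etc. */
      }
      if (devs)
          ibv_free_device_list(devs);
      printf("no RDMA devices; falling back to TCP/IP\n");
      return 0;      /* caller uses ordinary sockets */
  }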


This paper contains a remarkable amount of irrelevant background.

If you actually want to read this thing, read the very beginning and then skip to at least page 57.

There are some interesting bits that are relevant to OS authors. For example:

At first glance, it may seem elegant to have EENTER save the contents of the XCR0, FS, and GS registers into the current SSA, and have EEXIT restore them from the current SSA. However, this approach would break the Intel architecture’s guarantees that only system software can modify XCR0, and application software can only load segment registers using selectors that index into the GDT or LDT set up by system software (2.7). Specifically, a malicious application could modify these privileged registers by creating an enclave that writes the desired values to the current SSA locations backing up the registers, and then executes EEXIT.

If that's correct, it's a problem, but I haven't double-checked thoroughly and I think the paper is just wrong here.

I'd be more worried about RFLAGS in the SSA. Its exact usage is poorly documented, but some bits of RFLAGS are privileged (IF and IOPL).
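
Sketching what I mean: if hardware (or a simulator) restored RFLAGS verbatim from an attacker-writable SSA, privileged bits would leak through, so any correct restore path has to mask them. The bit positions below are architectural (IF is bit 9, IOPL is bits 12-13); the helper itself is hypothetical:

  #include <stdint.h>

  #define RFLAGS_IF   (1ULL << 9)   /* interrupt enable flag */
  #define RFLAGS_IOPL (3ULL << 12)  /* I/O privilege level */

  /* Keep privileged bits from the trusted current value; take
     everything else from the (untrusted) SSA copy. */
  static inline uint64_t sanitize_rflags(uint64_t current, uint64_t from_ssa)
  {
      const uint64_t priv = RFLAGS_IF | RFLAGS_IOPL;
      return (current & priv) | (from_ssa & ~priv);
  }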


I'm terrible at writing. I am trying to say that SGX cannot restore things from the SSA, and it has to use some protected area. To the best of my knowledge, they're using the non-architectural area of the TCS, which is protected from any sort of write.


Some (mostly not that relevant) details missing from the article:

1. CPU microcode update packages from Intel ("MCU") are unified "processor package" update containers. They update more areas of the chip than just the MSROM. This is more obvious in the SoC parts, but it is also true on the discrete parts.

2. MCU can be downgraded, although this is clearly going into "not validated at all" area, so it might not result in a very stable system ;-) It is likely that Intel can set a flag inside the MCU data that forbids this (the MCU loader inside the processor is more than complex enough to support this kind of thing!), but at least up to Westmere downgrades were still working.

2b. And you can always downgrade either just the microcode inside the firmware by modify-and-reflash, or the firmware itself, even if the CPU started to ignore downgrade attempts at runtime.

3. When the MCU update process is done in a trusted environment (microcode update data in the FIT), the reported microcode version CHANGES (the processor reports it as one less than the real version of the microcode). This is relevant for attestation, and it is really something that needs to be added to the IA32 manuals. We only know about it outside of the NDA'ed world because coreboot required a fix for the next issue:

4. As long as you find a way to always feed them the latest microcode (or at least the same revision that you have in the firmware), Linux, VMWare and the BSDs [currently] will always override FIT-provided microcode, thus changing the reported microcode revision (it will not be reported as secured anymore). Since the revision changed, it will break any attestation that depended on it. This looks like a good thing at first glance, given how utterly broken at launch the recent Intel processors have been: anything that would get in the way of a user being able to fix these by updating the MCU is a damn bad idea and NEEDS TO DIE. (A quick way to observe the reported revision is sketched after this list.)

5. The microcode update process nicely wastes several million cycles (and it can easily get to a billion cycles in larger systems, as the update cost increases linearly per core) at every operating system boot and resume from ACPI S3/S4/S5 ;-) Try to ensure that your firmware has the latest one if you want to have a smaller carbon footprint, because if the OS decides to update it, the box will be doing this expensive procedure twice at every boot/resume...
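
As promised in point 4, here's a quick way to observe the reported microcode revision from userspace on Linux. The authoritative source is the high half of MSR 0x8B (IA32_BIOS_SIGN_ID); this sketch just reads what the kernel exposes:

  /* Print the microcode revision Linux reports for the first CPU. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char line[256];
      FILE *f = fopen("/proc/cpuinfo", "r");
      if (!f)
          return 1;
      while (fgets(line, sizeof(line), f)) {
          if (strncmp(line, "microcode", 9) == 0) {
              printf("%s", line);  /* e.g. "microcode : 0x28" */
              break;
          }
      }
      fclose(f);
      return 0;
  }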


Thank you very much for this feedback!

Re: 2 - I re-read the relevant SDM sections, and saw that there is no requirement that the new upgrade version exceeds the current microcode version. Thank you very much for pointing that out! The next published revision will have the fix.

Do you have any public references for 3 and 4? That looks like it'd help make the case that SGX rests on very complex and unstable foundations.



