Once I worked at a place which was very interested in protecting the IP inherent in their firmware. They gave me a research assignment to get an idea of how difficult it would be for an attacker to extract it as a binary given unlimited physical access to a sample device.
Since I read and write Chinese, I did some searching on Chinese-language sites and found a company advertising their ability to do just that... for about $10,000 per job. They listed the chips they knew how to crack, but the one my employer was using was not on the list...
I feel that trying to prevent reverse-engineering by adversaries with unlimited physical access is a fool's errand. So this break of Xilinx FPGAs is interesting... But kind of a shoulder shrug.
There are quite a few Chinese (and Russian, not surprisingly...) companies who will do "MCU breaks". $10k (USD) is near the high end of the price range; price depends on complexity and newness --- less than $1k for some of the common and older parts. Mikatech is one of the better-known and older ones.
They are great for maintaining legacy equipment where the original company has either discontinued support or disappeared completely.
I've often wondered -- and perhaps you don't know the answer, or there is no one typical answer -- but, do these companies in Russia, China, etc., that one pays to break the chip, promise to not re-sell the extracted chip contents? (Or maybe we should just remember the saying "no honor amongst thieves")
I think the last time I looked into a service like this, they required that you send them multiple targets to attack because normally the attack is invasive / destructive, and they might need to "burn through a few" before they succeed. If you have any experience in this area, do you know if that still holds true?
Thanks. Sorry to punish your useful answer with more questions.
> but, do these companies in Russia, China, etc., that one pays to break the chip, promise to not re-sell the extracted chip contents?
I don't know --- I imagine a lot of the time they wouldn't even know what equipment the chip came out of, and/or it's something extremely obscure, so it would be very hard to sell the contents.
> I think the last time I looked into a service like this, they required that you send them multiple targets to attack because normally the attack is invasive / destructive, and they might need to "burn through a few" before they succeed. If you have any experience in this area, do you know if that still holds true?
If it's a new MCU they've not done before, they might ask for that --- to find where the protection fuses are. It doesn't have to be the actual one you want, just an example of a protected one along with the code it contains, in order to "check their answers". That will cost quite a lot more than a known extraction, however.
Having evaluated implementing game DRM many times, I can say this nuance may or may not matter to the business. Most DRM is a house of cards that a determined attacker can take down. It's still very widely used, for good and bad reasons, and a lot of those reasons are not closely tied to the strength of a given implementation.
While this does open up the code (sans descriptive text) to external attacks, I rather feel that once you have a logic analyzer or compromised microcontroller on the logic bus of your secure device you've got the attacker on the wrong side of the airtight hatchway.
I'm personally much more interested in what it means for 'attackers' who wish to use it to open up their own hardware. Perhaps that might not align with the goals of Xilinx or the OEM, but it's great for their customers!
I'm not sure this is true. This attack allows reading the encrypted bitstream, but it doesn't say anything about allowing you to sign modified bitstreams.
As the paper explains, there is no "signing" involved (in the sense of a public-key cryptosystem).
Each encrypted bitstream includes an HMAC, but because the HMAC key is part of the encrypted bitstream itself, it basically only acts like a non-cryptographic checksum. An attacker who knows the encryption key can simply choose an arbitrary HMAC key and generate a valid HMAC for arbitrary data.
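To make that concrete, here's a minimal sketch using Python's stdlib; the key size and payload are illustrative, not the actual bitstream layout:

```python
import hmac, hashlib

# Hypothetical illustration: because the HMAC key travels inside the
# encrypted bitstream, an attacker who controls the plaintext can pick
# any key and compute a matching tag for arbitrary data.
attacker_key = bytes(32)                   # arbitrary 32-byte "HMAC key"
payload = b"attacker-chosen configuration data"
tag = hmac.new(attacker_key, payload, hashlib.sha256).digest()

# The device recomputes the HMAC using the key it finds inside the
# decrypted bitstream, so embedding attacker_key makes the forgery verify.
assert hmac.new(attacker_key, payload, hashlib.sha256).digest() == tag
```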
EDIT: I should clarify that this attack doesn't appear to actually let someone extract the AES encryption key. But they can use an FPGA that has the key programmed as a decryption oracle. And a weakness of CBC mode is that a decryption oracle can be used for encryption as well.
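For the curious, here's a minimal sketch of that CBC trick (the Rizzo-Duong construction the paper cites); `block_decrypt` is a hypothetical stand-in for the keyed FPGA acting as the oracle:

```python
def block_decrypt(c_block: bytes) -> bytes:
    """Hypothetical oracle returning the raw AES block decryption
    AES_Dec_K(c_block); in the attack, the keyed FPGA plays this role."""
    raise NotImplementedError

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt_via_oracle(plaintext_blocks, last_block=bytes(16)):
    # In CBC, P_i = AES_Dec(C_i) XOR C_{i-1}. Working backwards from an
    # arbitrary final ciphertext block, choose each preceding block as
    # C_{i-1} = AES_Dec(C_i) XOR P_i, so that decrypting our ciphertext
    # yields exactly the plaintext we wanted to "encrypt".
    blocks = [last_block]
    for p in reversed(plaintext_blocks):
        blocks.insert(0, xor(block_decrypt(blocks[0]), p))
    return blocks[0], blocks[1:]  # (IV, ciphertext blocks)
```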
> In this paper, we introduce novel low-cost attacks against the Xilinx 7-Series (and Virtex-6) bitstream encryption, resulting in the total loss of authenticity and confidentiality
This isn't bad for "security" or "secure microcontrollers." It is in fact good for security. Designs running on these FPGAs can now be analyzed and inspected for accidental or intentional security issues. Mind you: the security issues are there whether you know about them or not. The function that the FPGA implements can (and should) still be secure - since the security of its algorithms should never rely on the secrecy thereof. (And to protect secrecy of private key material, it comes down to physical security either way.)
What it's bad for is vendors relying on DRM to protect their assets. Which is normally diametrically opposed to user freedom.
This encryption is the only way to ensure the integrity of the firmware at the chip level, so anything relying on it as part of a chain of trust is going to have to be redesigned now. Firmware is loaded from an external EEPROM on these devices; DRM wasn't the sole use of this feature.
The chain of trust could already be attacked by replacing the entire FPGA chip with an unkeyed/open one, and then loading your own malicious bitstream.
Also, encryption never ensures integrity; it ensures confidentiality. Integrity would've come from the accompanying signature scheme, which apparently was badly implemented and broken at the same time.
If anything, the encryption makes it impossible to conduct spot checks on a batch of devices you receive, since it prevents the end user from verifying bitstream integrity. (The keys are device specific AFAIK, so the bitstream is device specific too, and signature public keys aren't known.) To establish trust, you ideally need an unencrypted, verifiable, signed bitstream.
(An encrypted, signed bitstream with the keys available does not protect against manufacturer collusion; they can cooperate in sending you a tampered device. An unencrypted bitstream allows comparing a device you received against other devices around the planet.)
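As a sketch of the kind of spot check an open bitstream would enable (hypothetical tooling, not any vendor's actual flow):

```python
import hashlib

def bitstream_fingerprint(path: str) -> str:
    # Hash a bitstream read back from a delivered device; independent
    # parties can compare fingerprints against devices sourced elsewhere.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```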
On these chips the encryption and integrity-checking features are one and the same; you can't turn on one without the other.
Whether you use the same key across every device in a product line or a per-device key is up to the OEM, so you can still verify firmware in the former case.
Replacing the FPGA chip is a lot harder than re-flashing an EEPROM, and the attacker would also have to put a lot of effort into replicating your firmware just to insert their change.
I wonder if everyone who works on stuff like this is pro-DRM/anti-freedom, because while I've seen plenty of DRM-breaking papers which paint a very negative view of their findings (this one included), I can't recall seeing a single one which takes the opposite view: that this is another step forward for freedom and right-to-repair. Do the researchers really believe that this is a bad thing, or are they afraid of taking that position since others could disapprove and reject their paper?
It helps:
1. Security researchers, so they can see what malware may be lurking in FPGA bitstreams.
2. Open source developers working on FPGA bitstream compilers.
3. People who want to steal proprietary IP cores.
It hurts:
1. People who chose the part specifically because of the closed bitstream, in part because they made security decisions on the assumption that the bitstream wasn't readable.
2. Anyone who bought the products based upon the marketed security claims of the product (hospitals/DoD/etc.)
Woah, from what I understand, the bitstream is the FPGA equivalent of compiler output? Then selling something as "secure" because nobody can know what code it is running would be security through obscurity, no? How do vendors get away with that sort of BS?
To be clear: just talking about the confidentiality requirement here. Authenticity (ie code signing, right?) is obviously something very useful especially in these cases.
This is about bitstream encryption, so there is an expectation of confidentiality. The keys needed to decrypt the bitstream are stored in nonvolatile memory on the FPGA itself. Assuming that it is implemented correctly (evidently not in this case), it is impossible to decrypt the bitstream without analyzing the FPGA die itself, using tools that are usually beyond what a casual attacker might have. It probably won't stop a nation-state from figuring out how to read out your FPGA design, but it will probably slow down your competitors.
Yes, for IP protection I get why that's interesting. But crucially, it's the vendor's interest. For a hospital or such, the interest is actually opposed to this. They should be looking for secure software that is as open as possible to allow for audit and servicing if needed. So selling DRM as something that somehow makes the customer more secure is BS.
Circumvention of DRM is illegal or at least very much frowned-upon in many jurisdictions, while research on how to produce better DRM is not.
Framing it as "we broke DRM, isn't that great?" would be like framing a paper about a more effective silencer as "we made it easier to get away with murder, isn't that great?" (when instead it could be about "we made our special forces' jobs safer").
Plus, research papers in technical fields should try to be neutral, not a place for political activism.
Is there any way that the breaking of the Xilinx bitstream encryption opens the door to documenting and reverse engineering that bitstream in the same way that was done with Project IceStorm[0] for the Lattice iCE40 FPGAs?
Project X-Ray [1] has been working on reverse engineering the Series 7 bitstream format for a while now, and Dave Shah has an experimental fork [2] of nextpnr that targets some devices using the X-Ray database.
Their competitors would treat it as an illustrated guide for patent infringement suits, for one thing. Security through obscurity still works for that purpose to a great extent.
Not directly. Much of the value in a modern FPGA lies in the specialized proprietary hardware provided by the manufacturer -- transceivers, memory controllers, clock management, dozens of other things -- and in the IP cores that can either be inferred or generated through wizards.
So knowing the bitstream format by itself is only a small step forward, if your goal is to take full advantage of the hardware and IP available. You'd need to reverse-engineer all of the specialized hardware and IP support as well. Opening the bitstream format would still be very worthwhile, but it's not the game-changer that many believe it would be.
The company spent a billion dollars over four years developing their new chips, which, again, no professional has a problem with.
So if you spent a billion dollars and had the livelihood of a few thousand people on your mind, would you protect that investment?
Let their "wisdom of the crowd" come up with a competitive chip that took a thousand highly trained engineers 4 years of 100% time and a billion dollars come up with their own version.
In addition to the other reasons already mentioned, this would likely reveal a lot of small details about the underlying microarchitecture of the FPGA fabric which is a (highly valuable) trade secret.
It'll help with encrypted bitstreams, not much else.
Besides, the lack of public RE efforts is AFAIK a political issue more than anything; the FPGA companies have been known to send lawyers after anyone who tries. The bitstream format itself, following the layout of the FPGA, is naturally going to be extremely regular and definitely not hard to figure out. It's really the "worst kept secret" in the industry --- there are probably a lot of people who have already figured it out, but just don't want to attract legal attention.
>These two attacks show again that nowadays, cryptographic primitives hold their security assumptions, but their embedding in a real-world protocol is often a pitfall. Two issues lead to the success of our attacks: First, the decrypted data are interpreted by the configuration logic before the HMAC validates them. Generally, a malicious bitstream crafted by the attacker is checked at the end of the bitstream, which would prevent an altered bitstream content from running on the fabric. Nevertheless, the attack runs only inside the configuration logic, where the command execution is not secured by the HMAC. Second, the HMAC key K_HMAC is stored inside the encrypted bitstream. Hence, an attacker who can circumvent the encryption mechanism can read K_HMAC and thus calculate the HMAC tag for a modified bitstream. Further, they can change K_HMAC, as the security of the key depends solely on the confidentiality of the bitstream. The HMAC key is not secured by other means. Therefore, an attacker who can circumvent the encryption mechanism can also bypass the HMAC validation
This is another example of what Moxie Marlinspike calls the "cryptographic doom principle". If you do anything, anything with a ciphertext before checking authenticity, doom is inevitable.
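The remedy is mechanical: authenticate first, and act on nothing until the tag verifies. A minimal encrypt-then-MAC sketch, with `decrypt()` left as a placeholder:

```python
import hmac, hashlib

def open_blob(mac_key: bytes, blob: bytes) -> bytes:
    # Encrypt-then-MAC, verify-first: not one byte of the ciphertext is
    # parsed, decrypted, or executed before the tag checks out.
    tag, ciphertext = blob[:32], blob[32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed; ciphertext never touched")
    return decrypt(ciphertext)  # placeholder for the actual cipher
```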
If I really cared about security, I would not pick an SRAM FPGA in the first place. There are nice Flash-based FPGAs out there for projects with high security requirements. They don't need configuration devices leaking the bitstream all over the place.
On the other hand, it is somewhat sad that the popular 7-series is compromised. Though I never saw a company that cared about bitstream security; it was at best a "nice to have" feature, usually completely ignored.
A lot of flash-based FPGAs are actually an SRAM FPGA with an internal flash die bonded to the configuration pins. The bitstream is harder to get to, but it's still available to a determined attacker.
Actel/Microsemi parts are true Flash FPGAs, while the Altera/Intel MAX10 is an SRAM FPGA with configuration Flash inside. Very nice and highly integrated chip, comfy to develop with.
It's not just an issue for big corporations and their proprietary software and DRM; it also has serious implications for the free and open source hardware community, especially infosec hackers. To begin with: while it's not realistic to make secure hardware (say, an OpenPGP/X.509/Bitcoin wallet security token) that can be 100% independently verified and free from all backdoors, FPGAs are still, relatively speaking, generally a better and more secure option as a hardware platform than microcontrollers (for example, see the talk [0] by Peter Todd on Bitcoin hardware wallets and the pitfalls of general-purpose microcontrollers), because of three advantages:
* It's possible to implement custom security protections at a lower level than accepting whatever is provided by a microcontroller or implementing it in more vulnerable software.
* Many microcontrollers can be copied easily, but because FPGAs are often used to run sensitive bitstreams containing proprietary hardware/software, manufacturers generally provide better security protections, such as verification and encryption, against data extraction (read: OpenPGP private keys) and manipulation attacks.
* Most "secure" microcontrollers are guarded under heavy NDAs, while they are commercially available (and widely used in DRM systems), but it's essentially useless for the FOSS community. On the other hand, because the extensive use of FPGA in commercial systems, security is NDA-free for many FPGAs. It's often the best (or the only option) that provides the maximum transparency - not everything can be audited, sure, but the other option is using a "secure" blackbox ASIC, which is a total blackbox.
Unfortunately, nothing is foolproof: manufacturers leave secret debug interfaces, cryptographic implementations have vulnerabilities, etc. Hardware security is a hard problem - 100% security and independent verification are impossible, and making attacks harder is the objective - but it's worse than software: once a bug is discovered and published, the cost of an attack immediately drops to nearly zero, and it cannot be patched. We can only hope that increased independent verification, like that of the researchers behind this paper, can somewhat reduce these problems systematically.
The way to protect secure hardware tokens is not bitstream encryption, it's tamper protection. You store the key material in SRAM that is erased when the device detects any attempt at manipulation.
If your Bitcoin Wallet or whatever token is affected by this, it was IMHO badly designed to begin with, since apparently it was relying on an AES-CBC bitstream encryption scheme. That should've been a red flag even if it wasn't broken.
> The way to protect secure hardware tokens is not bitstream encryption, it's tamper protection. You store the key material in SRAM that is erased when the device detects any attempt at manipulation.
You need both. First, make all external storage (holding keys, firmware, configurations) unreadable to everything besides the main processor itself. Then, in an ideal world, implement tamper detection; most HSMs have it, but unfortunately the world is not ideal. In the FOSS world I don't see anything that uses tamper detection - developing open source tamper detection would be of great value to the community, yet I don't see it happening anytime soon. Moreover, the majority of security tokens and hardware have no tamper detection: SIM cards, bank cards, OpenPGP cards (Yubikeys, Nitrokeys), and smartphones all depend only on encrypting external storage and/or restricting access to the secret inside a chip. In practice they still have an above-average security level, which clearly shows tamper protection is not the only way to protect hardware, although the alternatives are less effective and occasionally something is going to be broken, to be sure.
This specific FPGA bitstream encryption vulnerability may be a non-issue; as the critics point out, relying on external storage is not a good idea to begin with, and it's better to burn everything inside the FPGA. My point is that FPGAs are the only platform on which to implement FOSS security hardware in a (relatively) transparent and secure manner, yet the recent discoveries of FPGA vulnerabilities indicate they are much less secure than expected, and this is only the tip of the iceberg. If external bitstream encryption has cryptographic vulnerabilities, what comes next? More broken crypto that allows you to read an internal key?
I don't know what the best practices are now, but it used to be best practice to blow the CFG_AES_Only eFUSE when using bitstream protection, which prevents the loading of a bitstream which isn't authenticated, and thus foils this attack. If a manufacturer went to the trouble of encrypting the FPGA but then allowed loading of plaintext bitstreams, they probably didn't really understand what they were doing.
This attack breaks the encrypted and authenticated bitstream.
I thought that the title "A Full Break of the Bitstream Encryption of Xilinx 7-Series FPGAs" would give some information even for those who don't want to read the article before commenting. :)
While I understand that without the proper context (knowing a bit about bitstream protection in the Xilinx 7-Series FPGAs) my comment may seem a bit obscure, I did read the paper.
As the sibling comment mentions, the attack requires programming a plaintext bitstream in order to perform the readout of the WBSTAR register after the automatic reset caused by the HMAC authentication failure. Blowing the CFG_AES_Only eFUSE prevents the loading of that plaintext readout bitstream and the first stage of the attack is thus foiled (preventing the second stage of the attack from taking place as well).
As the paper explains, the attack requires alternately tampering with the encrypted bitstream (to write one word of the decrypted data at a time to a non-volatile register) and then resetting the FPGA and loading a separate, attacker-created, unencrypted bitstream to read that register's contents.
I don't know enough about Xilinx FPGAs to definitively say whether setting the fuse that OP mentions would prevent the attack, but it seems plausible.
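To make the sequence concrete, here's rough pseudocode of the readout loop described above; all helper names (craft_wbstar_bitstream, load_bitstream, read_wbstar) are invented stand-ins for the attacker's tooling:

```python
# Rough sketch of the word-by-word readout; helper names are invented.
recovered = bytearray()
for word_index in range(num_words):
    # Stage 1: tamper with the encrypted bitstream so that the decryption
    # of one chosen word gets written to the WBSTAR register.
    load_bitstream(craft_wbstar_bitstream(encrypted_bitstream, word_index))
    # The HMAC check then fails and the FPGA resets -- but WBSTAR survives.
    # Stage 2: load a plaintext helper bitstream and read the register out.
    load_bitstream(readout_bitstream)
    recovered += read_wbstar()  # one more 32-bit word of plaintext
```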
>Therefore the attacker can encrypt an arbitrary bitstream by means of the FPGA as a decryption oracle. The valid HMAC tag can also be created by the attacker, as the HMAC key is part of the encrypted bitstream. Hence, the attacker can set his own HMAC key inside the encrypted bitstream and calculate the corresponding valid tag. Thus, the attacker is capable of creating a valid encrypted bitstream, meaning the authenticity of the bitstream is broken as well
> With the first attack, the FPGA can be used to decrypt arbitrary blocks. Hence, it can also be seen as a decryption oracle. Thus, we can also use this oracle to encrypt a bitstream, as shown by Rizzo and Duong in [41], and generate a valid HMAC tag
This requires the first stage of the attack to succeed. If it fails and the FPGA cannot be used as a decryption oracle, there's no way to generate a valid encrypted bitstream with the technique outlined in the paper.
> On these devices, the bitstream encryption provides authenticity by using an SHA-256 based HMAC and also provides confidentiality by using CBC-AES-256 for encryption

> We identified two roots leading to the attacks. First, the decrypted bitstream data are interpreted by the configuration logic before the HMAC validates them. Second, the HMAC key is stored inside the encrypted bitstream
This is not a small issue. Up to 10% of the FPGAs on the market could be affected:
RAID, SATA, and NIC controllers; industrial control systems; mobile base stations; data centers; devices like encrypted USB sticks and HDDs. In some cases it's possible to carry out the attack remotely.
Only those who need to keep firmware secret will be affected.
There are a few companies I know of that transitioned from MCUs to FPGAs solely out of an obsession with keeping their "IP" from leaking, hoping that an FPGA would provide more obscurity than simple encrypted MCU firmware.
This isn't a worry over device security. It's a worry about cloning. Most of the serious FPGA market is low-volume specialty devices that cost upwards of $10-100k per unit, with yearly support contracts of that magnitude on top. Seismology, medical research, telecommunications... many of those products will be a raving success if they sell 1000 units. R&D is almost all of the cost, and the hardware design is nothing special. You could clone these things with basically no investment if you have access to the bitstream. "Decompiled" bitstream is also much more readable than assembly. That's why they're worried.
You could clone highly specialized devices, but it's not clear who you're going to sell them to. I'd like to think that most of my small, specialized customer base would turn up their noses at an obvious clone of my own product.
The people who need to worry about this are the Ciscos and Keysights and John Deeres and other 800-pound gorillas whose products have a broad user base that includes cost-sensitive customers.
> But if an attacker is already "inside" your system [...] I think you have already lost...
It's not necessarily true. Protecting the system from physical attackers is a legitimate requirement in cryptography.
1. While all hardware secrets can be broken with physical access, there's a difference in cost, cost, and cost. A commercial HSM - used by CAs, businesses, and banks to hold private keys - contains RF shields, tamper-detection switches, sensors for X-rays, light, and temperature, battery-backed SRAM for self-destruction, and so on. It's extremely unlikely that anyone has ever succeeded in breaking into an HSM; possibly only a handful of people in the three-letter agencies could do it, and even for them it would be a great expense - launching a supply-chain attack, bribing the sysadmin, or stealing the key are more reasonable options. It's certainly possible to break into one, but the cost is prohibitive for most.
2. You can make the hardware 100% secure against physical attackers if the actual secret is not even on the hardware. If I steal your full-disk-encrypted laptop while it's off, I cannot obtain any meaningful data because the encryption key is in your brain, not in this laptop. This is a practical threat model and often desirable. However, there's nothing to stop me from bruteforcing the key because the hardware itself doesn't have protections.
3. If we make some compromises and trust the hardware, we get another security model, used by modern smartphones: an encryption key is buried inside the chip, so I can boot the phone, but it's impossible to physically bypass any software-based access control like passwords, since all Flash data is encrypted. All hardware can be broken with physical access, and opening the chip to extract the key may be cheaper than breaking into an HSM, but it's still expensive in terms of cost and expertise. It's difficult to bypass without an additional software vulnerability. This is a good-enough threat model and often desirable.
We can combine (2) and (3): save the actual secret outside the hardware so it cannot be stolen, and at the same time implement hardware-based protection that forces the attacker to launch an expensive physical attack before being able to bruteforce the secret. It's defense-in-depth and the best of both worlds. What we have here is actually an OpenPGP card (Yubikey, Nitrokey) or a Bitcoin wallet, which uses both mechanisms to protect the user from thieves. For example, Nitrokey's implementation first encrypts the on-chip OpenPGP private key with a user-supplied passphrase, and it also sets the on-chip flash to be externally unreadable (only readable by the firmware itself) so that the private key cannot be extracted. Finally, it has the standard OpenPGP card access control: if multiple wrong passphrases are attempted, it locks itself. Of course, this feature requires an inaccessible on-chip flash - either the flash itself is on-chip, or an encryption key to the flash is on-chip. (A sketch of this combination follows below.)
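Here's a minimal sketch of that combination; all names and parameters are illustrative, not Nitrokey's actual code:

```python
import hashlib, hmac

MAX_TRIES = 3  # OpenPGP-card-style lockout

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def unwrap_private_key(passphrase: str, salt: bytes, check_tag: bytes,
                       wrapped_key: bytes, state: dict) -> bytes:
    # state["tries"] must live in internal, externally unreadable flash;
    # otherwise an attacker could simply roll the counter back.
    if state["tries"] >= MAX_TRIES:
        raise PermissionError("token locked")
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)
    if not hmac.compare_digest(
            hmac.new(kek, b"check", hashlib.sha256).digest(), check_tag):
        state["tries"] += 1
        raise ValueError("wrong passphrase")
    state["tries"] = 0
    return xor_bytes(wrapped_key, kek)  # stand-in for a real key-unwrap step
```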
If the firmware executed by the chip can be replaced, the attacker can disable all the access restrictions (disable the wrong-passphrase lockout and read back the key), totally eliminating the hardware layer of defense, which is not what we want here. Unfortunately, Nitrokey is based on a standard STM32 microcontroller, and its flash protection has already been broken. The Nitrokey Pro remains "secure" - the real crypto is performed on an externally inserted OpenPGP smartcard, powered by a "secure" microcontroller - but the card is a partial blackbox and cannot be audited. When Yubikey said it was unable to release its source, many recommended Nitrokey since it's "open source"; unfortunately, while it is, it depends on a Yubikey-like blackbox. If you want to implement something better and more trustworthy than a Nitrokey or Yubikey, the option is to write a FOSS implementation of that blackbox and turn it into a whitebox. Not that the underlying FPGA can be audited - it cannot be - but it's still much better than a complete blackbox.
And now back to the original topic: if your FPGA's bitstream encryption has a vulnerability, it's game over. This is a serious problem. A response may be that relying on bitstream encryption is not the correct approach and one shouldn't use external Flash at all. Well, yes, but that is not my argument here. My argument is simply that securing your hardware against an attacker with physical access is a legitimate requirement, and that even if everything can be broken with physical access, doing so still has a point.
> A commercial HSM - used by CAs, businesses, and banks to hold private keys - contains RF shields, tamper-detection switches, sensors for X-rays, light, and temperature, battery-backed SRAM for self-destruction, and so on. It's extremely unlikely that anyone has ever succeeded in breaking into an HSM
A service to lift firmware from the Gemalto chips used in SIMs and credit cards costs at most $25k here.
I think there's some confusion. Are you sure that you are talking about the same thing? What I meant here is a real HSM, something similar to an IBM 4758 [0] (which was once vulnerable, but only because it had buggy software), not a SIM card or a credit card chip. Do you imply that many HSMs are based on the same Gemalto chip?
This was security through a cryptographic design. It was just a broken design. If you consider confidential symmetric or privkeys "obscurity," sure, all crypto is obscurity.
Can your post be read as "In the near future, we might have the possibility of inspecting/re-programming a bunch of DMA-capable devices commonly found in systems"?
That would be great. I shudder to think what horrors may await us in USB-/Wifi-/Network-controller bit-streams/firmwares. But the sooner these things get opened, the better.
I don't know much about FPGAs, but I tried to read the paper. Maybe someone can tell me if this is correct:
Getting your hands on the raw gate configuration helps with cloning. It's a PITA to reverse engineer.
Any device that uses Xilinx 7-Series or Virtex-6 SPI or BPI Flash remote update is potentially fucked. There is an HMAC in the bitstream and no other authentication.
People are glossing over this: the number of people able to take a bitstream - the gate configuration of the fabric - and read it back into human-readable logic like Verilog is extremely small, and doing so always takes a lot of time.
No. It means people will be able to copy FPGA designs like was possible in the 2000s. It also means that the design in an FPGA could be altered by an unauthorized 3rd party without having to physically replace the device.
This is not the case for the FPGAs targeted here. Encryption is optional on Xilinx 7-Series. Also there already is an open source toolchain coming up for them.
And it's also the first of many steps toward making a rootkit toolchain that can infect your devices when they're in the possession of an attacker.
Any device that allows an anonymous third party to modify it is a device that cannot be trusted once it’s been handed to a third party.
Would you be willing to register for an identity certificate to make use of an open source toolchain with your registered device, so that you could be certain others had not silently rootkit’d it and were spying on your work?