> But if an attacker is already "inside" your system [...] I think you have already lost...
That's not necessarily true. Protecting a system from physical attackers is a legitimate requirement in cryptography.
1. While all hardware secrets can be broken with physical access, the differences come down to cost, cost, and cost. A commercial HSM - used by CAs, businesses, and banks to hold private keys - contains RF shields, tamper-detection switches, sensors for X-ray, light, and temperature, battery-backed SRAM for self-destruction, and so on. It's extremely unlikely that anyone has ever succeeded in breaking into an HSM; possibly only a handful of people in the three-letter agencies have been able to do it, and even for them it's a great expense - launching a supply-chain attack, bribing the sysadmin, or stealing the key are more reasonable options. Breaking in is certainly possible, but the cost is prohibitive for most attackers.
2. You can make the hardware 100% secure against physical attackers if the actual secret is not on the hardware at all. If I steal your full-disk-encrypted laptop while it's off, I cannot obtain any meaningful data, because the encryption key is in your brain, not in the laptop. This is a practical threat model and often desirable. However, nothing stops me from brute-forcing the key offline, because the hardware itself provides no protection.
3. If we make some compromises and trust the hardware, we get the security model used by a modern smartphone - an encryption key is buried inside the chip. I can boot the phone, but I cannot physically bypass software-based access controls like the passcode, since all flash data is encrypted. All hardware can be broken with physical access, and opening the chip to extract the key may be cheaper than breaking into an HSM, but it's still expensive in terms of cost and expertise. It's difficult to bypass without an additional software vulnerability; this is a good-enough threat model and often desirable.
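The contrast between models (2) and (3) can be sketched in a few lines of Python. This is a toy model, not real crypto - the salts, iteration counts, and names are all illustrative. In (2) the key is derived from the passphrase alone, so a stolen disk image allows offline guessing; in (3) the derivation is entangled with a secret fused into the chip, so every guess has to go through the device:

```python
import hashlib
import secrets

ITERATIONS = 100_000

# Model (2): full-disk encryption, key derived from the passphrase alone.
def fde_key(passphrase: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"disk-salt", ITERATIONS)

# Model (3): smartphone-style, derivation entangled with a per-device
# secret that never leaves the chip.
HARDWARE_SECRET = secrets.token_bytes(32)  # fused into the SoC at manufacture

def phone_key(passcode: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), HARDWARE_SECRET, ITERATIONS)

# With a stolen laptop, the attacker grinds through guesses offline,
# limited only by compute:
target = fde_key("hunter2")
recovered = next(p for p in ["123456", "password", "hunter2"] if fde_key(p) == target)
print("offline guess succeeded:", recovered)

# With a desoldered phone flash, this style of guessing is useless:
# without HARDWARE_SECRET, every candidate passcode must be tried *on
# the device*, where firmware can rate-limit or wipe after N failures.
```

The only difference is where the salt-like input lives; that one design choice decides whether brute force is an offline GPU problem or an on-device problem the firmware can throttle.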
We can combine (2) and (3): store the actual secret outside the hardware so it cannot be stolen, and at the same time implement hardware-based protections that force the attacker to mount an expensive physical attack before they can even begin brute-forcing the secret. That is defense-in-depth and the best of both worlds. What we have just described is in fact an OpenPGP card (Yubikey, Nitrokey) or a Bitcoin hardware wallet, which use both mechanisms to protect the user from thieves. For example, Nitrokey's implementation encrypts the on-chip OpenPGP private key with a user-supplied passphrase; it also sets the on-chip flash to be externally unreadable (readable only by the firmware itself), so the private key cannot be extracted; finally, it has the standard OpenPGP card access control: after several wrong passphrase attempts, it locks itself. Of course, this last feature requires inaccessible storage - either the flash itself is on-chip, or the encryption key to an external flash is on-chip.
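As a toy illustration of that combination (again illustrative, not a real implementation - a genuine card uses a proper cipher and keeps the counter in tamper-resistant storage): the private key exists only as a passphrase-encrypted blob, and the firmware enforces a retry counter, so the attacker must first defeat the hardware before any offline brute force is possible.

```python
import hashlib
import hmac

class TokenSketch:
    """Toy model of an OpenPGP-card-style token. XOR stands in for a
    real cipher; the retry counter only holds if the flash is
    unreadable and the firmware cannot be replaced."""

    MAX_TRIES = 3

    def __init__(self, private_key: bytes, passphrase: str):
        # private_key is assumed to be 32 bytes, matching the KDF output.
        kek = self._kdf(passphrase)
        self._blob = bytes(a ^ b for a, b in zip(private_key, kek))
        self._check = hmac.new(kek, b"check", "sha256").digest()
        self._tries_left = self.MAX_TRIES

    @staticmethod
    def _kdf(passphrase: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"salt", 10_000)

    def unlock(self, passphrase: str) -> bytes:
        if self._tries_left == 0:
            raise RuntimeError("token locked")        # standard OpenPGP card lockout
        kek = self._kdf(passphrase)
        tag = hmac.new(kek, b"check", "sha256").digest()
        if not hmac.compare_digest(tag, self._check):
            self._tries_left -= 1
            raise ValueError(f"wrong passphrase, {self._tries_left} tries left")
        self._tries_left = self.MAX_TRIES             # reset on success
        return bytes(a ^ b for a, b in zip(self._blob, kek))
```

The whole scheme collapses if an attacker can read `_blob` directly from flash (offline brute force, as in model 2) or reflash firmware that ignores `_tries_left` - which is exactly the failure mode discussed next.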
If the firmware executed by the chip can be replaced, the attacker can disable all the access restrictions (disable the wrong-passphrase lockout and read back the key), totally eliminating the hardware layer of defense, which is not what we want here. Unfortunately, Nitrokey is based on a standard STM32 microcontroller, and its flash readout protection has already been broken. The Nitrokey Pro remains "secure" - the real crypto is performed on an externally inserted OpenPGP smartcard powered by a "secure" microcontroller - but that card is a partial blackbox and cannot be audited. When Yubikey said it was unable to release its source, many recommended Nitrokey because it's "open source"; unfortunately, while it is, it depends on a Yubikey-like blackbox. If you want to implement something better and more trustworthy than a Nitrokey or Yubikey, your option is to write a FOSS implementation of the blackbox and turn it into a whitebox. Not that the underlying FPGA can be audited - it cannot - but it's still much better than a complete blackbox.
And now back to the original topic: if your FPGA's bitstream encryption has a vulnerability, it's game over. This is a serious problem. A possible response is that relying on bitstream encryption is not the correct approach and one shouldn't store the bitstream in external flash at all. Well, yes, but that is not my argument here. My argument is simply that securing your hardware against an attacker with physical access is a legitimate requirement, and that even though everything can be broken with physical access, doing so still has a point.
> A commercial HSM - used by CAs, businesses, and banks to hold private keys - contains RF shields, tamper-detection switches, sensors for X-ray, light, and temperature, battery-backed SRAM for self-destruction, and so on. It's extremely unlikely that anyone has ever succeeded in breaking into an HSM
A service to lift firmware from the Gemalto chips used in SIM cards and credit cards costs at most $25k here.
I think there's some confusion. Are you sure we're talking about the same thing? What I meant is a real HSM, something similar to an IBM 4758 [0] (which was once vulnerable, but only because it had buggy software), not a SIM card or a credit card chip. Are you implying that many HSMs are based on the same Gemalto chips?