It is not a backdoor, nor does it circumvent anything.
It is a front door convenience feature which has distinct privacy/security trade-offs.
There exists no magical way to provide a means of lost password/device recovery which doesn’t grant Apple access to decrypt your data. It turns out that a lot of users want to have a way to recover from a lost device/password and are willing to let Apple decrypt their data.
You do this by ticking the ‘iCloud Backups’ toggle on your iPhone.
A backdoor, by definition, is not a user-facing, configurable feature that is thoroughly explained in end-user documentation.
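To make the trade-off concrete, here is a minimal toy sketch (not Apple's actual design; the key names and XOR "wrap" are purely illustrative) of why opting into recoverable backups means the provider can decrypt them:

```python
import secrets

def xor_wrap(key: bytes, wrapping_key: bytes) -> bytes:
    # Toy "key wrap" by XOR; real systems use something like AES key wrap.
    return bytes(a ^ b for a, b in zip(key, wrapping_key))

backup_key = secrets.token_bytes(32)           # encrypts the device backup
user_secret = secrets.token_bytes(32)          # derived from passcode/device keys
provider_escrow_key = secrets.token_bytes(32)  # held by the provider

# End-to-end-only mode: a single wrapped copy; lose user_secret, lose the backup.
wrapped_for_user = xor_wrap(backup_key, user_secret)

# Recoverable mode ("iCloud Backups" style): a second copy the provider can unwrap,
# which is exactly what enables both recovery and provider access.
wrapped_for_provider = xor_wrap(backup_key, provider_escrow_key)

# Lost device/password: the provider unwraps its copy and restores the backup.
assert xor_wrap(wrapped_for_provider, provider_escrow_key) == backup_key
```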
I’m not sure about that. Faces and fingerprints are typically authentication mechanisms. They can grant access to a key, but they cannot themselves be the key.
Whatever does the authenticating, whether your local device or a cloud service, must necessarily store a validator for your face/fingerprints, which it uses to decide whether a submitted capture is “close enough” to count as a match; only then does it grant access to the key, usually indirectly, by allowing certain cryptographic operations with the key.
Apple takes pains to ensure the biometric validators never leave the Secure Enclave of a local device. Possibly they could allow syncing these validators between Secure Enclaves of paired devices but I think you have to re-enroll. Absolutely never do they transmit these biometric validators to the Cloud in a readable form.
So in a lost-device scenario, you lose the biometric validators as well as the keys those validators unlocked.
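A minimal sketch of that distinction, with an invented template format and threshold (nothing like the real Secure Enclave internals): the capture is never the key, it only decides whether an existing key may be used.

```python
import secrets

def similarity(enrolled, capture):
    # Toy score: 1.0 for identical templates, lower as they diverge.
    return 1.0 - sum(abs(a - b) for a, b in zip(enrolled, capture)) / len(enrolled)

class ToyEnclave:
    def __init__(self, enrolled_template):
        self._template = enrolled_template   # the validator; never leaves the device
        self._key = secrets.token_bytes(32)  # the key being protected

    def unlock(self, capture, threshold=0.95):
        # Biometrics authenticate; they gate access to a key that already exists.
        if similarity(self._template, capture) >= threshold:
            return self._key
        return None

enclave = ToyEnclave([0.2, 0.7, 0.4, 0.9])
print(enclave.unlock([0.21, 0.69, 0.41, 0.90]) is not None)  # close match -> key released
print(enclave.unlock([0.90, 0.10, 0.80, 0.20]) is not None)  # mismatch -> no key
```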
I think storing decryptable biometric validators is worse than storing decryptable device backups. Such a fingerprint database would almost certainly be abused by a government (forced to match a terrorist’s fingerprint against their users).
The singular reason I am willing to use biometric authentication on my phone is because the authentication is done locally.
For example, Amazon’s recently announced project to link Amazon Pay to a palm print in stores is a total non-starter for me. Besides the fact that it’s a clumsy and bad idea to begin with, there’s no way I want them to have my palm print validator sitting in the Cloud.
> They can grant access to a key, but they cannot themselves be the key.
My assumption is that device recovery is such a special case that it could use very different algorithms from those used in phones today; they could be very computationally expensive and turn fingerprints into usable keys. And of course there is no need for anyone to store the prints, be able to match them individually, or even tie them to a person’s identity.
There are two things that make this problem “hard” if not “intractable”.
Encryption keys are precise integer values (or can be represented as such), and they gain a large part of their security from two facts: first, a key that is wrong by even one bit appears totally wrong and discloses zero information; second, the key space is unfathomably large.
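A quick way to see the “wrong by even one bit” property, using HMAC-SHA256 as a stand-in for a keyed primitive (any real cipher behaves the same way):

```python
import hashlib, hmac, secrets

key = secrets.token_bytes(32)
almost_key = bytearray(key)
almost_key[0] ^= 0x01                     # flip exactly one bit of the key

msg = b"backup blob"
print(hmac.new(key, msg, hashlib.sha256).hexdigest())
print(hmac.new(bytes(almost_key), msg, hashlib.sha256).hexdigest())
# The second output is unrelated to the first, not "off by one bit".
```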
To turn a fingerprint directly into an encryption key would require, first, some sort of mapping from the analog representation of the finger/face (which could be two- or three-dimensional) into a digital value, and second, for that value to be absolutely repeatable over time.
The biggest problem is that of course neither your face, nor your fingerprints, are absolutely unchanging over time.
So the first thing you would somehow need to accomplish is a way to map the biometric scan to a repeatable precise integer value. Such a mapping would require, by definition, a loss of precision.
How much precision? Well, it’s directly a result of how resilient you want the algorithm to be in the face of things like scanning error, micro-abrasions on the finger, body fat percentage, the temperature of your hand, swelling, hair growth, etc...
The less precise you make it, the more different fingers (or different scans of the same finger) must necessarily resolve to the same key.
This is the same thing as saying that we are reducing the key-space.
Once you have reduced the precision of the mapping enough that a biometric scan reliably generates the same key over time, you have, by definition, reduced the key space to the point where the encryption is fundamentally unsound.
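Here is a toy calculation of that trade-off (all numbers invented): the coarser the quantization has to be to absorb scan-to-scan noise, the fewer distinct keys the mapping can possibly produce.

```python
import math

n_features = 16        # imagined feature vector extracted from a scan
feature_range = 1.0    # each feature normalized to [0, 1)

for noise_tolerance in (0.001, 0.01, 0.1):
    # To absorb +/- noise_tolerance of jitter, each bucket must be at least
    # twice that wide, so each feature can only take on so many distinct values.
    bucket = 2 * noise_tolerance
    values_per_feature = max(1, int(feature_range / bucket))
    keyspace_bits = n_features * math.log2(values_per_feature)
    print(f"tolerance {noise_tolerance}: at most ~{keyspace_bits:.0f} bits of key space")
```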
The only exception to this would perhaps be using DNA sequences, but even then, I believe DNA is not perfectly unchanging over time, and it is also not at all random [1]. But assuming you could handle the minute coding changes that do occur, and reliably scan the same part of the genome, I think you could end up with enough entropy to generate a secure key, provided you are willing to precisely sequence a chunk of DNA in order to generate it. This is rapidly becoming feasible, if somewhat dystopian and entirely impractical.
But you still have the fundamental problem that the key is not being generated as a uniformly random value in the key space. This happens to be extremely important to the security of encryption algorithms. You wouldn’t want, for example, a close relative to be able to cut your entropy from 512 bits down to 64 bits and into the realm of brute force.
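For a rough sense of “the realm of brute force”: a key space this small can be searched exhaustively even in pure Python, whereas 2**128 or 2**512 cannot be searched at all (the 24-bit figure below is just an example).

```python
import hashlib, secrets, time

secret = secrets.randbelow(2**24)                           # only 24 bits of entropy
target = hashlib.sha256(secret.to_bytes(16, "big")).digest()

start = time.time()
for guess in range(2**24):                                  # ~16.7M guesses is feasible
    if hashlib.sha256(guess.to_bytes(16, "big")).digest() == target:
        print(f"recovered {guess} after {time.time() - start:.1f}s")
        break
```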
In short, biometrics will remain an authentication method rather than a direct encryption method, likely indefinitely.
I found some research on fingerprints [1]. At 512 dpi, fingerprint sensors provide about 0.01 bits per pixel of information that is mutual between samples of the same finger yet still distinguishes individuals, meaning that a 160x160 sensor can give 256 bits of information usable for keys. And there are multiple fingers, so that seems like enough to derive an encryption key from, with even some room for redundancy.
Refreshing it every few years isn't a big deal (as obviously none of it will be used directly as an encryption key for all of your data, but only to encrypt an actual encryption key).
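For what it’s worth, here is the arithmetic plus a toy version of that wrapping step; the “stable 256 bits from a scan” part is the hypothetical bit, stood in for here by random bytes:

```python
import hashlib, secrets

bits_per_pixel = 0.01
print(160 * 160 * bits_per_pixel)   # 25,600 pixels * 0.01 = 256.0 usable bits

# Hypothetical fuzzy-extractor output: 256 stable bits derived from a scan.
biometric_bits = secrets.token_bytes(32)               # stand-in for that output
wrapping_key = hashlib.sha256(biometric_bits).digest()

data_key = secrets.token_bytes(32)                      # the key that actually encrypts data
wrapped = bytes(a ^ b for a, b in zip(data_key, wrapping_key))   # toy wrap

# "Refreshing" later means re-wrapping data_key under a new biometric-derived
# key; the data itself is never re-encrypted.
assert bytes(a ^ b for a, b in zip(wrapped, wrapping_key)) == data_key
```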
That paper has absolutely nothing to do with generating keys directly from an image of a finger. They are discussing the lower bounds on how small a fingerprint sensor can get.
It doesn’t seem like you read my reply at all.
It’s not a question of raw entropy from the sensor, which is what the paper is discussing. It’s an issue of repeatability.
To quote Spock in ST-TWOK: "not a lie, an omission".
It isn't a deliberately implemented backdoor. It is a deliberate decision to not install doors at all, just empty frames. I know we are arguing semantics here, and it doesn't make it right, but it doesn't go against the letter of how they've claimed they'll behave.