I don't see how this is "reassuring"; to me it's actually quite confusing (as mentioned in many other comments).
If Apple could in fact write a software backdoor, doesn't it mean that the backdoor exists, at least potentially?
And how can one be sure that Apple is the only company able to build that door? At the very least, couldn't the right Apple engineer be either bribed or forced (by terrorists or the government) to build it?
"Impossible" should mean "impossible", not "not yet done, but possible".
What it means is that the best the FBI can come up with is "make a way for us to brute-force the passphrase." And a brute-force attack is worthless against a strong enough passphrase. That's what is reassuring.
Not to mention that this is for the iPhone 5c. As other comments have mentioned, newer iPhones have the hardware-based Secure Enclave, which adds to the difficulty of breaking into the phone. https://www.apple.com/business/docs/iOS_Security_Guide.pdf
My reading of that PDF is that the Secure Enclave's software can be updated too; it simply does an independent verification of Apple's digital signature.
So while the Secure Enclave enforces the delay between brute force attempts, Apple could still release an update that removes that delay.
It could also be that the FBI has already obtained the info, or can obtain it through illegal means, and simply wants to use the issue to force legislative, policy, or public-opinion changes in its favor, and/or just wants to legitimize its having the information.
In the world of cryptography, it is always possible, because you can always get lucky and guess the right "unlock" code. In fact, social engineering is normally used to find the right "unlock" code[0].
The FBI could also unsolder the components in the phone, make a full image of the contents, find the encrypted section, and then brute-force it. This is what is done for SSDs: they do not power up the drive; they unsolder it, put the memory modules in a special reader, and copy the data before the SSD's controller automatically wipes it as part of the automatic optimization that follows a delete/TRIM.
I guess my family will take care of my physical stuff. For the online part, some of it can probably be handled through support (facebook, etc...), and the rest will stay as is until it is deleted for lack of use. Or never deleted. Both are okay.
Leaving a physical trace of my passwords is not only bad practice from a security point of view, but also quite useless, since I know them.
Also, my online accounts are useless if I can't use them because I'm dead, so I don't really care if no one can access them anymore. What happens to important things such as banking is already dealt with.
I just got a Facebook birthday notification for my cousin, who died three years ago. So that has some impact on me and others in our extended family. Maybe that's ok, maybe it'll get weirder at some point; but it's definitely something to think about.
That's a very real issue indeed. Sorry for your loss.
But I think the right solution here would be for Facebook to have a way of handling deceased people, not giving your password to everyone in case of sudden death.
Because you don't want to make a difficult situation even harder for your relatives?
See, for example, the people who know they're going to die and who leave their iPads to their relatives in their wills. Apple doesn't take grants of probate as sufficient legal documents (everyone else, e.g. banks, does) and insists on a court order.
It's not a backdoor, it's a frontdoor. In cryptography, there's no way to make repeated attempts more computationally expensive. The lockout is just an extra feature Apple put on top, and Apple could easily remove it. If we're going to have 4- and 6-digit PINs, there is no way to stop a dedicated attacker from brute-forcing them. None.
True. But Apple, with such a focus on UX, cannot reasonably afford more than ~200 ms when checking a password, and even then the cost only scales linearly. So the solution for concerned users still involves creating a more complex password: doubling the amount of time it takes to hash a password has the same effect as adding one more bit of entropy to the password, which is easily beaten by adding a single character to it.
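A rough back-of-the-envelope in Python, purely illustrative (the 200 ms figure and the lowercase-only alphabet are assumptions, not Apple's numbers):

    import math

    hash_time_s = 0.2      # assumed per-guess cost (~200 ms), for illustration
    alphabet_size = 26     # adding one lowercase letter to the password

    # Doubling the per-guess hash time buys the defender exactly one extra
    # bit of attacker work:
    bits_from_doubling_time = math.log2((2 * hash_time_s) / hash_time_s)   # 1.0

    # Adding a single lowercase character multiplies the keyspace by 26:
    bits_from_one_more_char = math.log2(alphabet_size)                     # ~4.7

    print(bits_from_doubling_time, bits_from_one_more_char)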
If you consider caching of keys, there's no reason that the first login attempt after a cold boot couldn't take 1-2s. Each subsequent login would be roughly instant.
"there's no way to make repeated attempts more computationally expensive"
That's not true actually. For example, the industry standard for storing passwords on a server (bcrypt) is specifically designed to slow down password match attempts.
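As a minimal Python sketch of the idea, using the standard library's PBKDF2 as a stand-in for bcrypt's tunable work factor (iteration counts are arbitrary):

    import hashlib, os, time

    password = b"correct horse battery staple"
    salt = os.urandom(16)

    # The iteration count is the tunable "work factor": raising it makes every
    # guess -- legitimate or not -- proportionally more expensive.
    for iterations in (10_000, 100_000, 1_000_000):
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
        print(iterations, "iterations:", round(time.perf_counter() - start, 3), "s per guess")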
Nope. If the attacker (Apple in this case) can replace the OS, they will just do so before the phone gets wiped—replacing the OS will remove that wipe feature.
Meh. Then the attacker can simply replace the hardware. Remember, our attacker model is Apple; non-cryptographic security measures mean very little to a company with such complete knowledge of the hardware and software involved.
Nope. On newer devices the key is derived from a random key fused into the SE during manufacturing, a key fused into the ARM CPU, and a key randomly generated on-device during setup (derived from accelerometer, gyro, and altitude data) and stored in the SE. The SE's JTAG interface is disabled in production firmware and it won't accept new firmware without the passcode.
You can't swap the SE or CPU around, nor can you run the attempts on a different device.
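Conceptually, the entanglement looks something like this Python sketch (the names, values, and KDF are illustrative placeholders, not Apple's actual construction); since the fused UID never leaves the chip, every guess has to run on that specific device:

    import hashlib

    # Illustrative placeholder -- the real UID is fused into the silicon at
    # manufacture and is not readable by any software, including Apple's.
    UID_FUSED = bytes.fromhex("00" * 32)

    def derive_wrapping_key(passcode: bytes, uid: bytes, rounds: int = 100_000) -> bytes:
        # Entangle the passcode with the device UID: the derived key is useless
        # without physical access to this exact Secure Enclave.
        return hashlib.pbkdf2_hmac("sha256", passcode, uid, rounds)

    wrapping_key = derive_wrapping_key(b"1234", UID_FUSED)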
Can't you? It seems like the kind of problem you could point an electron microscope at, perhaps with some very high-precision laser cutting. In any case, I imagine that if you are willing to spend enough resources on it, you could read the on-chip memory somehow and start cryptanalysing that.
Against a sufficiently capable adversary, tamper-resistance is never infallible, but good crypto can be.
> Against a sufficiently capable adversary, tamper-resistance is never infallible, but good crypto can be.
Nonsense, it all comes back to "sufficiently capable", every time.
To a sufficiently capable adversary, _all_ crypto is just "mere security by obscurity".
"Oh, mwa-haha, they obscured their password amongst these millions of possible combinations, thinking it gave them security - how quaint. Thankfully I'm sufficiently capable.", she'll say.
The point is that the key is stored there too (part of it burned into the silicon during production) and can't be read or changed.
Sure, if they wanted to they could implement a backdoor. But assuming they correctly created and shipped the secure enclave it shouldn't be possible to circumvent it even for Apple.
It's sounding like that's the problem. They left an opening for this sort of thing by allowing firmware updates to the secure enclave. That basically makes it a fight over whether the FBI can force Apple to use the required key to sign an update meeting the specifications the FBI directs.
Well, I read elsewhere in this thread that updating the firmware of the Secure Enclave wipes the private key contained within, which means you've effectively wiped the phone.
Secure Enclave is not really ‘hardware’; despite being isolated from the main OS and CPU, it is still software-based and accepts software updates signed by Apple.
If those software updates force it to erase its private key store, though, then it's functionally isolated from that attack vector. An updated enclave that no longer contains the data of interest has no value.
Making it fixed just means you can't fix future bugs. The secure approach is to ensure that updates are only possible if the device is either unlocked or wiped completely.
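A minimal sketch of that gating rule (the names and structure are illustrative only, not Apple's actual update logic):

    def may_apply_se_firmware_update(device_unlocked: bool, user_keys_erased: bool) -> bool:
        # Illustrative policy: accept new Secure Enclave firmware only when the
        # user has authenticated, or after the user's key material is gone, so
        # an update can never be used to bypass a locked device's protections.
        return device_unlocked or user_keys_erased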
Not possible without baking it into iOS via some update, which the FBI is claiming will be used on one particular phone in this particular case.
Apple reasons that there is no way to guarantee that no one will take the same update and apply it to other iOS devices. Or the government could take this a step further by making Apple build it into a future update for the whole user base.
This limitation must be built into the security hardware used by the iPhone, so that software can't do anything about it. I was under the impression that that's how the iOS security model works. If it's not, and this check is in fact implemented in iOS itself, it's a much weaker protection and really looks like an intentional backdoor from Apple.
It's not really built into ‘hardware’, it's enforced by the Secure Enclave, which is software-based and accepts software updates signed by Apple. It's secure against kernel exploits and third-parties, but not against Apple.
It sounds like it'd be trivial to nop out the timer on repeated passcode attempts. Which makes sense... Leaving any short passcode trivially crackable.
Yes, but where would you nop? You can't statically analyse the code, because the image is encrypted at rest (and potentially partially in RAM as well?).
The code which decrypts the system (and is responsible for wiping the drive on repeated failures) is definitely not encrypted. How else would it be able to take the input in order to decrypt the drive?
Yes, but that code is all running in RAM, precluding static analysis. You can still dynamically analyse it, but that is much harder.
The way I understand it (and correct me if I'm wrong) is that the code flows from disk through the AES engine, where it is decrypted, and is then placed in a presumably interesting/hard-to-reverse location in RAM, at which point it is executed. I imagine even more interesting things are done to higher-value data in RAM, but that's not code, because, as you said, code has to be decrypted (at the latest) by the time it reaches the registers.
Their security PDF says that the system establishes a chain of trust, anchored at an immutable loader residing inside the chip, in which each step verifies the digital signature of the next step against the hardcoded Apple CA certificate.
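As a toy Python sketch of that chain-of-trust idea (an HMAC with a ROM-baked key stands in for the real asymmetric signature check against Apple's CA certificate; all names and values are made up):

    import hashlib, hmac

    ROM_KEY = b"baked-into-immutable-boot-rom"   # placeholder for the trust anchor

    def sign(image: bytes) -> bytes:
        # Stand-in for Apple signing a boot stage; real devices verify an
        # asymmetric signature, so the verifying key can be public.
        return hmac.new(ROM_KEY, image, hashlib.sha256).digest()

    def boot(chain):
        """chain: list of (name, image, signature) tuples, in boot order."""
        for name, image, signature in chain:
            if not hmac.compare_digest(sign(image), signature):
                raise RuntimeError(f"refusing to boot: {name} failed verification")
            # ...hand control to the verified stage, which verifies the next one...

    stages = [(name, img, sign(img)) for name, img in
              [("LLB", b"low-level bootloader"), ("iBoot", b"iboot image"), ("kernel", b"kernel image")]]
    boot(stages)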
I agree. I was under the impression that Apple's security was such that even they didn't have the power to decrypt a device because the crypto made it impossible without the password/pin/key. I'm interested to understand the reasons that it was not done this way.
The current implementation is indeed done this way: even Apple cannot decrypt without the right password.
The right password can be obtained by either knowing it, or by guessing it.
As an additional security measure, the software shipped with the phone prevents brute force attacks by wiping the device after a given number of failed attempts.
Apple has been asked to modify the software so that it won't wipe the phone, thus allowing the authorities to try many passwords.
If anybody could circumvent this additional security measure, actual security would be lower. The authorities are not asking Apple to ship this change to all users: they only want to install it on the device in their possession.
However, Apple is concerned that once they provide the authorities such a modified software, it could be leaked and thus be used by third parties to breach the security of any Apple device.
It should be noted that data encrypted on, for example, your laptop's hard drive is already subject to this kind of brute-force attack. It's a well-known fact that authorities or malicious users can already attempt brute-force attacks on encrypted data if they can access it on a passive device such as a hard drive.
It's important to understand that Apple is not being asked (at least not in this case) to implement a backdoor in the encryption software. It's also important to understand that even if Apple were forced to install a backdoor, it would only affect the ability to access future data and would not help the investigation of the San Bernardino case.
This very request by the authorities suggests that Apple does not currently install any backdoor on stock phones.
However, there is a logical possibility that the authorities are either not aware of such a backdoor or, in the worst case, are publicly requesting this feature just to hide the real reason for a possible future success at decrypting the phone: they could claim that Apple didn't have any backdoor and that they simply got lucky brute-forcing the device; in fact, Apple wasn't even cooperating with them on relaxing the brute-force prevention limit, so they could claim they did it in-house.
(I'm personally not inclined to believe in such improbably well-coordinated smoke-and-mirrors strategies, but they are a logical possibility nevertheless.)
Why doesn't the FBI simply clone the current device, make brute-force attempts, and then clone it again if locked out? Yes, it's a lot of work, but it also doesn't force Apple to participate.
The iPhone uses AES encryption which would prevent cloning the flash storage [1]. There was an informative discussion of this over on AppleInsider - http://forums.appleinsider.com/discussion/191851
That's what the A7 (iPhone 5S and later) design does:
“Each Secure Enclave is provisioned during fabrication with its own UID (Unique ID) that is not accessible to other parts of the system and is not known to Apple. When the device starts up, an ephemeral key is created, entangled with its UID, and used to encrypt the Secure Enclave’s portion of the device’s memory space. Additionally, data that is saved to the file system by the Secure Enclave is encrypted with a key entangled with the UID and an anti-replay counter.”
Yeah, it wasn't exactly unknown before but it wasn't terribly common outside of certain security / compliance circles. I think I've seen more links today than in the previous year.
It sounds like it has been built that way - the data cannot be decrypted without the correct passphrase - but the user secured the phone with a 4-digit PIN, giving a mere 10,000 possible combinations - easily brute-forceable.
The iPhone prevents this by locking up (and potentially erasing the phone) after 10 failed attempts, but this is a restriction created in iOS. If they provision a new, backdoored version of iOS to the phone, that restriction would no longer apply, and they could brute-force away.
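To make the scale concrete, here is a hypothetical Python sketch of the search once the retry limit and delay are gone (the PBKDF2 call is the same illustrative stand-in as above, not Apple's real key derivation; at the roughly 80 ms per on-device guess Apple's security guide describes, all 10,000 PINs take on the order of 15 minutes):

    import hashlib

    UID_FUSED = bytes.fromhex("00" * 32)          # illustrative per-device secret
    ROUNDS = 10_000                               # arbitrary work factor for the demo

    # Pretend the phone was set up with this PIN; we only keep the derived key.
    target_key = hashlib.pbkdf2_hmac("sha256", b"7359", UID_FUSED, ROUNDS)

    # Only 10,000 candidates for a 4-digit PIN: with no retry limit and no
    # enforced delay, exhausting them is trivial.
    for pin in range(10_000):
        guess = f"{pin:04d}".encode()
        if hashlib.pbkdf2_hmac("sha256", guess, UID_FUSED, ROUNDS) == target_key:
            print("PIN found:", guess.decode())
            break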
But why would you even think Apple, Google, or Facebook would be a good bet to defend your privacy in the first place? They have a terrible track record of not caring about it.
If you have things that you need to be private, don't put it on a smartphone.
I wish people would stop lumping Apple with Google/Facebook with regards to privacy.
Apple has implicitly for a long time, and lately much more vocally, cared about privacy. They don't have the same data-driven business model that Google and FB do.
> Apple has implicitly for a long time, and lately much more vocally, cared about privacy.
They say that. But with closed source software we can't verify that it's true. I'm not saying they don't care about privacy, only that we don't really know if they do or not.
With open source software, it doesn't appear that people can verify things are safe either given the long-term security issues with things like OpenSSL et al.
We found the bug in OpenSSL BECAUSE it was open source. If it weren't, nobody would have seen it.
Plus, with open source you can verify intent, which you can't with Apple.
Which provides a device that collects your fingerprints, all your phone numbers, your internet searches, bank details, some payments, network communications, voice communications, text communications, your location (using GPS, Wi-Fi hotspots, and phone towers), and soon body metrics from health devices.
And they are profit oriented, not people oriented.
> We found the bug in OpenSSL BECAUSE it was open source.
Sure, but they were there for years before anyone noticed. Same with PHP's Mersenne Twister code. Same with multiple other long-standing bugs. It's disingenuous to toss out "Oh, if only it were open source!" because reality tells us that people just plain -don't- read and verify open source code, even when it's critical stuff like OpenSSL.
Actions speak louder than words. The most revealing test of the strength of a company's commitment to privacy is how it handles situations when privacy can conflict with profits. Privacy on the internet relies critically on browsers only trusting trustworthy certificate authorities. When CNNIC breached its trust as a certificate authority last year, Apple sat tight waiting for the furor to subside (https://threatpost.com/apple-leaves-cnnic-root-in-ios-osx-ce...).
I would argue that handling security problems in general has not been Apple's strength historically.
I agree that failing to fix a problem like this in a timely fashion is bad, but sins of omission are generally judged differently than sins of commission, for better or worse. Apple failing to apply proper prioritization to security holes isn't the same as Apple collecting data to be sold to the highest bidder.
So, again, Apple should not be treated as equivalent to Google and Facebook. Feel free to judge them harshly, but don't paint them with the same brush.