> There are schemes that would allow for your data to be encrypted and secure from even the cops except in cases where they have acquired a warrant. The argument that there is no way to do so is more political than technical. This is a solved problem, cryptographically speaking.
It very much is not. All of the schemes that purport to do so involve a systemic risk that the master key is lost to a hostile foreign government or criminal organization, and they inherently prohibit forward secrecy.
> All of the schemes that purport to do so involve a systemic risk that the master key is lost
Look up 'secret sharing schemes' and 'threshold cryptosystems'. The idea that any scheme allowing law enforcement to decrypt a cell phone must inevitably involve a single master key is a strawman argument.
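For anyone unfamiliar, here's a minimal sketch of Shamir's secret sharing, the classic example of a threshold scheme. This is a toy over a small prime field for illustration only; a real deployment would use a vetted library, and the parameters here are arbitrary:

```python
# Toy Shamir secret sharing: split a secret into n shares such that any
# k of them reconstruct it, while fewer than k reveal nothing.
# Illustrative only -- real systems use vetted implementations.
import random

P = 2**127 - 1  # a Mersenne prime; field must be larger than the secret

def split(secret, k, n):
    """Return n shares; any k of them recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 0xDEADBEEF
shares = split(secret, k=3, n=5)
assert reconstruct(shares[:3]) == secret   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == secret
```

The point is that no single party holds the key: you could give shares to five separate agencies and require any three to cooperate before a decryption is possible.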
> and they inherently prohibit forward secrecy.
This is a non-issue for encrypted disks, which is what law enforcement has an issue with.
> Look up 'secret sharing schemes' and 'threshold cryptosystems'. The idea that any scheme allowing law enforcement to decrypt a cell phone must inevitably involve a single master key is a strawman argument.
I'm aware of these things. But splitting a master key into five parts doesn't make it any less of a master key. The vulnerability is not in how many keys you need to open a lock, the vulnerability is in requiring the same keys to open all locks.
> This is a non-issue for encrypted disks, which is what law enforcement has an issue with.
Forward secrecy for encrypted disks is implemented by regularly changing your encryption key and destroying all copies of the old key. An attacker who copies the encrypted contents of your disk and later compromises your key won't be able to decrypt the copied data, because the current key won't decrypt the old ciphertext.
This inherently doesn't work if the government keeps a key that will decrypt the old ciphertext because the attacker with the old ciphertext can still compromise the government's key(s) to decrypt it.
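Here's a sketch of that key-rotation lifecycle. The cipher is a toy SHA-256 keystream XOR standing in for a real disk cipher like AES-XTS; only the key-handling logic is the point:

```python
# Sketch of disk-key rotation giving forward secrecy against a copied image.
# The "cipher" is a toy SHA-256 keystream XOR -- a stand-in, not real crypto.
import hashlib, os

def keystream_xor(key, data):
    """Toy stream cipher: XOR data with a hash-derived keystream."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

plaintext = b"sensitive files"
old_key = os.urandom(32)
disk = keystream_xor(old_key, plaintext)   # disk as first encrypted

stolen_image = disk                        # attacker copies the disk image

# Rotation: decrypt with the old key, re-encrypt under a fresh key,
# then destroy every copy of the old key.
new_key = os.urandom(32)
disk = keystream_xor(new_key, keystream_xor(old_key, disk))
old_key = None                             # old key destroyed

# Later the attacker compromises the *current* key. It's useless against
# the stolen image, which was encrypted under the destroyed key.
assert keystream_xor(new_key, stolen_image) != plaintext
```

And this is exactly the property escrow breaks: if an escrowed copy of the disk key exists, destroying your own old key accomplishes nothing, because the attacker can go after the escrowed copy instead.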
> The vulnerability is not in how many keys you need to open a lock, the vulnerability is in requiring the same keys to open all locks.
And why would you use the same keys to open all locks? Here's a quick off-the-top of my head solution:
The device manufacturer creates a public/private key pair - maybe they make a new one for each device, or maybe for efficiency they make a new pair for each batch or once a month or whatever they deem acceptable. The point is to change it regularly. The court system creates its own public/private keys, changed every two months or so. The FBI creates their own as well, let's say changed every three months.
When the device is manufactured, the current public keys for the manufacturer, court and FBI all go on the device. When the disk is first encrypted by the user, a key is generated, encrypted with the FBI's, the manufacturer's and the court's public keys in that order, then stored in a separate location on the disk. Later on, when the FBI gets its hands on the phone and wants to decrypt it, they send the encrypted key to the court along with the warrant application; if the court approves, they decrypt the outer layer and it gets sent on to the device manufacturer. The manufacturer looks up the serial number of the device, decrypts with the appropriate key, then sends it on to the FBI. The FBI finally decrypts using their private key and recovers the key used to encrypt the disk.
In order to decrypt a device without going through this process, you would have to get physical access to the phone and also compromise all three private keys. If you did somehow manage to get all three of the keys, you'd only be able to decrypt devices manufactured within at most a two-month time frame. If that's still not an acceptable level of risk, it can be further limited by increasing the frequency at which keys are replaced, creating multiple keys for each window, adding additional agencies with their own keys into the process, etc.
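The layering above can be sketched in a few lines. This uses textbook RSA with tiny hardcoded primes, which is wildly insecure and for illustration only (a real scheme would use proper hybrid encryption); the moduli grow along the chain so each ciphertext fits under the next key:

```python
# Toy model of the three-layer escrow: the disk key is wrapped under the
# FBI's, then the manufacturer's, then the court's public key, so
# unwrapping must go court -> manufacturer -> FBI, mirroring the warrant
# process. Textbook RSA with tiny primes -- illustrative only.

def make_keypair(p, q, e=65537):
    """Textbook RSA keypair (no padding, toy primes)."""
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))   # (public, private)

def wrap(pub, m):
    n, e = pub
    return pow(m, e, n)

def unwrap(priv, c):
    n, d = priv
    return pow(c, d, n)

# Moduli grow along the chain so each ciphertext fits under the next key.
fbi_pub,   fbi_priv   = make_keypair(10007, 10009)
mfr_pub,   mfr_priv   = make_keypair(100003, 100019)
court_pub, court_priv = make_keypair(1000003, 1000033)

disk_key = 0xC0FFEE                     # the key that encrypts the disk
escrowed = wrap(court_pub, wrap(mfr_pub, wrap(fbi_pub, disk_key)))

# Warrant process: the court peels its layer first, then the
# manufacturer, and finally the FBI recovers the disk key.
step1 = unwrap(court_priv, escrowed)    # court approves the warrant
step2 = unwrap(mfr_priv, step1)         # manufacturer, by device serial
assert unwrap(fbi_priv, step2) == disk_key
```

No single party can shortcut the chain: the court can't read the inner layers, and the FBI can't start without the court's decryption.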
> An attacker who can copy the encrypted contents of your disk and later compromises your key then won't be able to decrypt the copied data with it
I'm trying to imagine a situation where this is actually an issue... the closest I can come up with is: a cop arrests me, fails to get a warrant, illegally copies the encrypted data off my cell phone and retains the encrypted data anyways. I'm let go and my phone is returned, and I subsequently delete all of the incriminating evidence from my phone (overlooking the fact that destroying evidence is a crime). The police later arrest me for something else, confiscate my phone and this time they do get a warrant to search it. Now they decrypt their old data and discover the files I deleted, none of which is admissible in court because it was illegally acquired.
I personally think it's a little far fetched for your average criminal suspect, but I'll play along and say that it's maybe within the realm of possibility for someone high-value enough. I suppose the simple solution would be to do something like use a file system that keeps some sort of hash of the file structure and last modification time, so that you could prove that the file in question didn't come from the data they were authorized to collect. I think that's probably going too deep into performance trade-offs for something that's unlikely to occur.
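To make the manifest idea concrete, here's a sketch: the filesystem keeps a hash over (path, mtime, content-hash) entries, so a later forensic claim can be checked against what the structure looked like at seizure time. This is an in-memory stand-in; a real implementation would live in the filesystem itself:

```python
# Sketch of a manifest hash over file structure and modification times.
# A planted or altered file changes the hash, so it provably wasn't part
# of the originally captured data. In-memory toy; paths and timestamps
# here are made up for illustration.
import hashlib

def manifest_hash(files):
    """files: dict of path -> (mtime, content bytes)."""
    h = hashlib.sha256()
    for path in sorted(files):
        mtime, content = files[path]
        h.update(path.encode())
        h.update(str(mtime).encode())
        h.update(hashlib.sha256(content).digest())
    return h.hexdigest()

at_seizure = {"/docs/a.txt": (1700000000, b"hello"),
              "/docs/b.txt": (1700000100, b"world")}
recorded = manifest_hash(at_seizure)

# Unchanged contents reproduce the recorded hash; an added file does not.
assert manifest_hash(at_seizure) == recorded
tampered = dict(at_seizure, **{"/docs/c.txt": (1700000200, b"planted")})
assert manifest_hash(tampered) != recorded
```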
If you're that worried about incriminating evidence that was left undiscovered on your phone, the simpler solution is to just get a new phone. I wouldn't trust any device that an adversary had physical control over then handed back to me. In this case, why would someone risk eventual conviction to save $400 for a new phone?
> And why would you use the same keys to open all locks?
Because it's an inherent characteristic of the outcome you're looking for. In order for the government to be able to decrypt any encrypted disk, there would have to be some process the result of which is the ability to decrypt any encrypted disk. Fiddling with the internals doesn't change the nature of it.
> I'm trying to imagine a situation where this is actually an issue... the closest I can come up with is: a cop arrests me, fails to get a warrant, illegally copies the encrypted data off my cell phone and retains the encrypted data anyways.
You're assuming the attacker is US law enforcement. Try this one: A foreign government (e.g. China) takes your encrypted device at a border crossing for long enough to have copied it. You stop using that key forever to make sure they never have a chance to steal it and use it to decrypt their copy of your secrets.
But under your system the foreign government can keep a copy of everyone's device until they sufficiently infiltrate the US government and then decrypt all the years of data they've been collecting, at which point China gets everyone's trade secrets, the list of democracy advocates in their country, etc.
> there would have to be some process the result of which is the ability to decrypt any encrypted disk. Fiddling with the internals doesn't change the nature of it.
Yes, that process is called allowing authorized users to access the data. The owner is an authorized user, and there's a process for him/her to access the data on the phone. A cop with a valid warrant is just as legally authorized to access that data as the user is.
> But under your system the foreign government can keep a copy of everyone's device until they sufficiently infiltrate the US government and then decrypt all the years of data they've been collecting
Then make it more time-consuming and cumbersome to break the key escrow than it is to just break the user's password. There are all sorts of things you can do: airgap all of the keys; move them to a different airgapped system after a couple of years; archive them to an encrypted tape after a few more years; generate multiple keys for each time period, allow the device manufacturer to choose one at random, then store the private keys at different locations. Imagine how happy China will be if they spent 15 years trying to break into the systems at FBI Washington only to discover that the key they were after is stored on an encrypted backup tape at FBI Boston.
At this point I have to stop and ask how much protection we're devoting to the task. Technically, no security system stops a determined adversary - it just slows them down. The idea is to either make the enemy expend more time and resources to get the data than they are actually willing to expend, or slow them down long enough to be caught. I used to work for the military in a secure facility a little over a decade ago. We kept our sensitive documents locked in safes. Safes are rated based on how long it would take a trained safecracker to break into them. The safes we used were rated at around 45 minutes each - and they were behind a thick vault door rated at about an hour. Why was this acceptable? Because our sensitive documents were all behind that door and split between multiple safes, security walked by the facility every half hour, and it took less than two minutes for a large group of armed guards to get there.
While we want to slow down our adversary, at the same time an authorized user needs to be able to get to the data when they need it. The use case for the device owner is different from that of a cop with a warrant. It's acceptable to have a lengthy, somewhat cumbersome process with more security in place for the police, since a cop doesn't need to gain access to a user's phone multiple times a day - they'll generally never have to access the contents, and if they do it's probably only going to be once. An iPhone user isn't going to tolerate getting cryptographically secure sign-offs from multiple secured facilities, stating that they are authorized to access the device, every time they turn it on. A cop looking into a serious criminal investigation will.
Now I'd ask who it is that we're designing the system to protect against? How determined and well funded are they? Am I designing a system to protect the user from someone willing to devote a nation state's resources to breaking it, or am I designing a system that will protect a user from data theft by criminals? Apple's existing encryption system won't stop them from getting your data if they really want it - especially when you are physically in China. Walking around with a cell phone in your pocket is itself a huge security vulnerability. If we redesign the iPhone to be a computing device that's indefinitely secure against Chinese intelligence services, it ceases to be a cell phone and instead becomes a standalone computer sitting under armed guard in the basement of the Pentagon.
If you're travelling through China with sensitive data that the government wants, perhaps you should reevaluate storing it on your cell phone.