My powered-on phone was 0day'd by Cellebrite, according to internal law enforcement emails in disclosure, and my powered-down phone was not supported at all.
Edit: The cracked phone has been in my possession for a while now, and I've only powered it on twice, for mere minutes, since getting it back, in case there are any phone experts around who want to investigate.
Seems to be a common thread - if you want your phone to remain secure, it must be powered off before seizure. This way, there is no encryption key in RAM. As long as the key is sufficiently strong (random passphrase, not a 4-digit PIN), you have a reasonable guarantee that they will fail to crack it.
Looking at the iOS platform security document ([0] page 68), apps use NSFileProtectionCompleteUntilFirstUserAuthentication by default, which keeps keys in memory after the first unlock, regardless of the Power + Volume Down lock. If an app opts into NSFileProtectionComplete, I believe the keys are purged from memory upon locking.
More obscure phones might not have support, but that doesn't mean they can't be opened with more effort.
When off, you're relying on the strength of the FDE passphrase, on whatever key strengthening they implemented, and on the OS not having left key fragments somewhere (accidentally on flash, which would be very bad, or remanent in memory if it has only been off for a short period).
Using a long alphanumeric (>12 random, >20 passphrase), not installing random apps, keeping it patched, and keeping it powered down is probably the best you can do. I wouldn't use the baseband comms if I could avoid it; it's just a huge 4G attack surface.
If you root your phone, you can set your FDE passphrase to whatever you want while keeping a usable shorter unlock code. My phone's FDE passphrase is 26 characters long.
Even better, if you have a reasonable worry about losing control of a phone with sensitive data on it, you should just get a second one that contains nothing sensitive or personally identifiable and uses wifi only, no SIM.
This is even good practice if you go to public places where you can get your phone stolen.
What sort of complexity was the S7's passcode? A 4-digit PIN? More than 4 digits? A short password? A longer passphrase?
Also, which Android versions? Stock Samsung or an alternative ROM? (My S6 Edge is "stuck" on Android 7 without replacing the OS with a non-Samsung alternative. My S4 is running a much newer Lineage Android version...)
A ?l?d mask with, I think, 9 characters, on the default Android OS with updates. Both phones were, I think, at minimum Android 5.0+, because that is when they switched to scrypt for storing password/encryption keys.
So, are we to gather that encryption on most Android phones / iPhones can now be easily broken? I seriously doubt this capability is confined to law enforcement. How do we protect ourselves? I've started running full disk encryption on my laptop, but if I'm completely honest, the data on my phone is just as valuable, if not more so. Any solutions to this?
Most people do not use a secure passphrase to lock their phones, but use a PIN. Even an 8-digit PIN is only 10^8 guesses, which any modern desktop can crack. To solve this, Android/iOS add an extremely long salt, so that salt + your PIN is your passphrase. In addition, there is logic to deter brute-force attacks (e.g. after 10 guesses, wipe my device)
The attacks have to overcome both of those, but if they do, cracking the original secret (your PIN) is easy.
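To make that concrete, here is a minimal sketch of the derivation in Python. A sketch only: real implementations vary by vendor, the scrypt parameters are illustrative, and hardware_secret stands in for a value that never leaves the secure element (scrypt is what Android moved to, as another commenter notes):

    import hashlib, os

    # Stand-in for the device-bound secret ("salt") fused into the
    # security processor; on a real phone this never leaves the chip.
    hardware_secret = os.urandom(32)

    def derive_key(pin: str) -> bytes:
        # Memory/CPU-hard KDF: every guess is deliberately expensive,
        # and without hardware_secret an off-device GPU cluster is useless.
        return hashlib.scrypt(pin.encode(), salt=hardware_secret,
                              n=2**14, r=8, p=1, dklen=32)

    # An 8-digit PIN is only 10**8 candidates, so the guess-rate
    # limiting / wipe-after-10 logic is what actually protects you.
    key = derive_key("12345678")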
The other method is attacking a "live" phone/laptop, as it has the unprotected key, and one just needs to exfiltrate it.
If you are truly concerned, DO NOT USE a trivially crackable PIN! Use a long password that is hard to crack by itself. Additionally, turn off your device if you think you are in a position where you have to surrender your electronics! That way the unprotected key is gone.
(As a side rant, I wish Android and iOS would let you have a "start-up" password and a separate screen-unlock password. You used to be able to do that on Android, but it was through root and not officially supported. It would be nice, because then you could have an easy screen-unlock password and a really difficult encryption password.)
My understanding is that the master encryption key does not exist on the device in non-volatile storage, but rather is computed by combining a secret key (I guess you could call this a “salt”) stored in the secure enclave with the user’s passphrase. When you start the device and authenticate, this derivation happens, and then the derived key is kept in memory and it’s only software keeping an attacker out.
If the locked state without a derived key loaded is as secure as it theoretically should be, doesn’t that mean recovering data off the device without uncapping (or similar) requires correctly guessing the passcode in however many master key derivation attempts the secure enclave is configured to allow before it wipes the secret key and renders the stored data useless?
This seems really secure on average, even with a weak PIN. I’m trying to decode Cellebrite’s claims about access to “locked” phones and I’m assuming they mean locked with key loaded, though I guess I can imagine some vulnerability that sidesteps the maximum attempts logic in the secure enclave.
> My understanding is that the master encryption key does not exist on the device in non-volatile storage, but rather is computed by combining a secret key (I guess you could call this a “salt”) stored in the secure enclave with the user’s passphrase. When you start the device and authenticate, this derivation happens, and then the derived key is kept in memory and it’s only software keeping an attacker out.
Sorry yes I sort of glossed over this part and some other parts in my explanation.
> If the locked state without a derived key loaded is as secure as it theoretically should be, doesn’t that mean recovering data off the device without uncapping (or similar) requires correctly guessing the passcode in however many master key derivation attempts the secure enclave is configured to allow before it wipes the secret key and renders the stored data useless?
You are correct. As pointed out below, this also assumes there is no vulnerability in the enclave. Having a secure passphrase with a secure key derivation function provides a more secure foundation, however, because even with unlimited guesses an attacker is still stuck with the original brute force.
>To solve this, Android/iOS add an extremely long salt, so that salt + your PIN is your passphrase. In addition, there is logic to deter brute-force attacks (e.g. after 10 guesses, wipe my device)
nitpick: it's not just a "long salt"; the "salt" is burned into the phone's security processor (TrustZone or Secure Enclave) and can't be extracted, so any attempts must be made using the exact phone. Using a GPU cracking cluster wouldn't help, for instance. AFAIK the anti-guessing logic is also implemented like this; otherwise you could just extract the salt and make unlimited attempts, since the phone can't keep track of how many attempts you've made.
Trustzones or secure enclaves can be "broken" via uncapping. They are designed to be resistant to that, not unbreakable. Uncapping is not new and has been around for a long time; it just takes a lot of expensive hardware and expertise. Ultimately, the keys are bits on silicon, and with some acid, a scanning electron microscope, and a lot of time, the keys can be pulled off.
It is very unlikely this sort of technique would be used against anyone but the most extreme threat actors (potential terrorists, etc). But with physical access, all bets are off. It isn't a matter of if they can decrypt, but how long it will take them given the resources available.
> Trustzones or secure enclaves can be "broken" via uncapping.
That's true, but it's a big step up from the "push a button, data appears" described in the article.
If you become "a person of interest" to MOSSAD, good luck. You'll be needing those magical amulets and the submarine to hide in - and YOU'RE STILL GONNA BE MOSSAD'ED UPON!
Does the Titan M not store data in some physical medium?
As far as I know, there isn't any way to prevent a truly dedicated and resourceful attacker from reading data that exists physically. The only constraints are money and time.
A chip that e.g. self-destructed on detecting signs of tamper would be harder to break, but if the data exists physically, it must be recoverable in some way, no?
Bingo! Money and time. When you're going against an adversary with massive amounts of both, all bets are off. Physical access means eventual compromise.
Shouldn't strong symmetric encryption with a long key be safe enough not to be crackable in a reasonable amount of time (~10 years), even if the hardware is directly accessible?
Probably, at least in practice, if no critical components in the chain fail due to bugs/exploits and everything is as secure as intended (excluding the metaphorical "hammer vs. fingers" attack).
I think the difference between you and the poster you replied to is that you're probably correct given certain assumptions, but the earlier post also remains correct in a conceptual or theoretical sense, as in pure information/security theory.
I don't know why you were voted dead for asking a question.
That's a new account (name in green). I think they start out dead. I think one or more comments need to be voted up and then the account becomes not dead (alive?).
The trick for this case is a very long key, 1 MB or more. The attacker has to recover essentially all of it; they can lose about 60 bits to brute force, but not much more. What is the speed of extraction? You only need to delay it for a couple of decades.
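Back-of-the-envelope math on that 60-bit budget (the 10^9 guesses/second rate is my assumption for an offline cracking rig, not a figure from the thread):

    # Any bits the attacker fails to read off the silicon must be
    # brute-forced: sweeping an N-bit gap takes 2**N trials.
    missing_bits = 60
    guesses_per_second = 1e9  # assumed offline cracking rate
    years = 2**missing_bits / guesses_per_second / (3600 * 24 * 365)
    print(f"{years:.0f} years")  # ~37 years worst case, ~18 on average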
Yeah, some higher-end security chips have passive mechanisms that make the microstructures more delicate and harder to reach, and therefore harder to physically read. This is on top of active mechanisms that try to constrain operation to a well-defined range of temperature, voltage, and power conditions to prevent brown-outs, and booby traps that trigger effacement.
You have to disclose a lot of info during the cert process, though, so I don't think this is too much of a hurdle for big players like the NSA.
Newer devices (only Pixel 3 and newer, and Galaxy Note 20 and newer) also come with hardware security modules, exposed via StrongBox (like Apple's Secure Enclave).
The big issue with the unlock password being used for FDE is that the average user unlocks their phone many times/day, often in view of cameras.
If I worked for a LEO and was asked to "crack" a locked phone, I would start by looking at surveillance footage from anywhere the suspect might have pulled their phone out of their pocket.
That's my set-up with Android now. However, on a Pixel 3a, I feel like I have to do a manual input several times a day, usually if I wash my hands, use lotion, things like that. It also doesn't work well if I am outside in the cold and need gloves.
Also, it is probably possible to narrow down PIN codes by looking at the surface of the phone for fingerprints/screen wear. At least on Android, the unlock pad appears in the same location every time, and unless your PIN uses every digit, there will be heavier use in some areas. It is unfortunate that stock Android does not rotate the input keypad; I believe some ROMs (AICP?) would do this and thus eliminate this attack surface.
Computationally, I understand that cracking PINs is trivial for modern computers. But does anyone know by what means these mobile forensics tools are able to brute force? I thought iPhones will lock you out for a certain amount of time after a certain number of failed attempts. Depending on the device/OS, are there ways to extract some sort of hashed password and then crack it on a desktop as usual? Is a vulnerability of some sort always required to extract phone hashes?
But every process can be prone to glitching. Just mess with the “how many password failures have occurred? If >10, start erasing” process and there’s no such thing as too much failure anymore.
iPhones used to increment (or save the increment) after the password entry failure, so bypassing used to be as simple as quickly disconnecting power before your failure got recorded.
It’s also possible they can make copies of enough data and have a “slave” iPhone embedded in the device to continue cracking offline, just in case only the guts of an iPhone can do the decoding.
> I thought iPhones will lock you out for a certain amount of time after a certain number of failed attempts.
One of the techniques used is copying the memory from your phone onto another phone and then testing passwords on that. When it locks up, wipe it and recopy. I don't know if that is still used, however.
Why not copy memory to another device and run a crack against a segment of the memory known to hold specific values until you match? I am not sure if they can block taking encrypted data off a device.
So my question is about case intrusion. Can this be detected in various forms in order to render the contents of the device useless? The safety factor for the owner is that they would have backed up the device to a trusted source before voluntarily submitting the phone to being physically opened. You would need to check not only for separation but for drilling through either side.
Not allowing "easy" access is not the same as "no" access though. I could easily imagine a radar/MRI imaging system generating instructions for a cnc drilling rig that would make micrometer accurate holes through the chip packaging, then inserting extremely precise probes directly into the wires inside the package.
I could also imagine a team of 5-10 engineers making such a system in a year (total costs <10 million), with 20-50 million in off-the-shelf hardware costs as a pessimistic estimate. As a company, you then "only" have to amortize this cost over 20 countries, each wishing to crack the phones of 5 high-profile criminals and/or dissidents, to get to an average cost of 600k per phone. It would easily be worth that much to the US to crack the phone of a high-profile drug lord.
Long story short, private companies (even with as much resources as Apple) either need watertight mathematical proofs of security or accept that they stand no chance at all against nation state adversaries.
Unless your Apple product has a T2 chip. If it does, there's no protecting it other than not letting it physically fall into someone else's hands. The T2 debacle is probably the single greatest letdown for me in Apple products. There have been plenty of things I bitch and moan about, but nothing has been as compromising as the T2 failure.
Thanks for the link; the article is understandable by non-security technology types like myself.
Here are some limitations to the T2 vulnerability which give those of us with T2-equipped devices less to fret about:
> There are a few important limitations of the jailbreak, though, that keep this from being a full-blown security crisis. The first is that an attacker would need physical access to target devices in order to exploit them. The tool can only run off of another device over USB. This means hackers can't remotely mass-infect every Mac that has a T2 chip. An attacker could jailbreak a target device and then disappear, but the compromise isn't "persistent"; it ends when the T2 chip is rebooted. The Checkra1n researchers do caution, though, that the T2 chip itself doesn't reboot every time the device does. To be certain that a Mac hasn't been compromised by the jailbreak, the T2 chip must be fully restored to Apple's defaults. Finally, the jailbreak doesn't give an attacker instant access to a target's encrypted data. It could allow hackers to install keyloggers or other malware that could later grab the decryption keys, or it could make it easier to brute-force them, but Checkra1n isn't a silver bullet.
If the device is left unattended at a border crossing and is taken away to a back room, you should just assume it's been hacked or modified and shouldn't trust it again.
I wasn't referring to someone leaving/forgetting/losing a device. I'm talking about carry your devices through border inspection, and having those agents confiscate your devices.
> In addition, there is logic to deter brute-force attacks (e.g. after 10 guesses, wipe my device)
The brute-force protection has historically not been flawless. Rumour has it the GrayKey used to brute force the PIN/passphrase by glitching the power every few attempts, which reset the counter.
What length PIN would you recommend in light of what commercially-available equipment is capable of cracking? Is an actual, honest-to-god password supported on latest Android?
It really depends on how your password is selected. "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" is a long password but won't protect you. You really want to optimize for ease of entry and memorability, rather than for the largest character set or the longest password. For a point of reference, these three choices have (roughly) equivalent security (128 bits of entropy):
1. 27 characters - all lowercase, randomly selected
2. 20 characters - upper/lower case + 15 special characters found on the first page of the iOS keyboard, randomly selected
3. 45 characters - 10 words from EFF's short diceware wordlist, randomly selected
The first option would be the easiest to enter, despite being 7 characters longer than the second one, because you don't have to constantly switch between the various character types (since you're using a touch keyboard). The third one is 66% longer than the first, but is probably easier to remember.
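For the curious, here is the entropy math as a quick Python check, using the character counts above and the 1296-word EFF short list (note the short list actually lands nearer 103 bits; the 7776-word long list would put 10 words at ~129):

    import math

    def entropy_bits(alphabet_size: int, length: int) -> float:
        # Randomly selected symbols: entropy = length * log2(alphabet)
        return length * math.log2(alphabet_size)

    print(entropy_bits(26, 27))            # 1. lowercase only      -> ~126.9
    print(entropy_bits(26 + 26 + 15, 20))  # 2. mixed + 15 specials -> ~121.3
    print(entropy_bits(1296, 10))          # 3. EFF short wordlist  -> ~103.4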
As for how many bits of entropy to use: AFAIK a preimage attack against md5 (128 bits) hasn't been pulled off yet, so it's probably safe for the foreseeable future. You can probably go lower than that and still be safe, considering that an md5 preimage attack is significantly easier than cracking a phone PIN (e.g. no memory/CPU-hard KDF, no TrustZone/Secure Enclave).
> honest-to-god password supported on latest Android?
Yep, I have one. I haven't kept up with the latest, but the "standard" good password should be fine (i.e. above 12 characters, letters, numbers, etc.).
They aren't easily broken; they can be broken by tools which would stop working if the underlying vulnerabilities were detected and fixed, so the availability (and thus usage) of these tools is quite restricted.
E.g. many law enforcement agencies can break encryption on modern phones by shipping the phone to someone like Cellebrite and paying a hefty fee per device, but they can't do it themselves, and often can't buy the capability to do it themselves (they aren't trusted not to leak the tools somehow and kill the goose that lays the golden eggs, so to speak) and thus can't do it on a large scale.
There almost certainly are some intelligence or LE agencies in the world that have such a capability in-house, but it's quite a high bar, and I believe that most countries, and most agencies in a decentralized policing environment like the USA's, don't have such tools available to themselves. They may have the connections to get it done for some specific devices by one of the very few actors who can do it, but that's not what "easily broken" would mean.
To be clear, LE won't get Cellebrite's tools because they can't be trusted to use the tools properly, but LE demands keys into all companies' systems and claims they can be trusted to use those keys properly?
Law enforcement is not a monolithic entity. Your municipal police department or county sheriff’s office can barely check its email, let alone backdoor a tech company. However it might be able to get technical assistance from the FBI or a private digital forensics consultancy on an important case.
Clearly the "solution" is for backdoors to be administered by a centralized, "trusted", private, for-profit monopoly! I'm sure VeriSign would be happy to bid on it.
Use your phone as a thin client. For instance, don't install email software, but instead use a webmail interface via a secure browser. That way, when they unlock the phone, they don't get your email. The airport cop might see a bookmark to an email service, but will still have to ask you for login details, giving you control over which account they see.
This is a hard balance, though. Using unique, randomly generated passwords for the 30 or so sites I routinely use is safer against breaches, but starts to almost require a password manager, which presumably can be broken if someone is already exfiltrating secrets from your phone.
In my opinion, the only route to actually make this work and still be good against breaches is to memorize a function that generates a unique password given the website name. If you type it enough you’ll eventually memorize it, otherwise you can just use that function to figure it out more slowly.
Not to mention that if you have access to my texts a lot of sites will just let you reset my password.
> In my opinion, the only route to actually make this work and still be good against breaches is to memorize a function that generates a unique password given the website name
That's a bad idea. If someone guesses your function, all your passwords are gone, and guessing it is not very hard.
That's a compromise. The best idea is to memorize a totally unique machine-generated password for each separate site and service. But there can't be more than a handful of people in the world with the skill and drive to do that correctly. There needs to be some degree of tradeoff with accessibility, or you're just designing security for spherical cows in a vacuum.
I think a security savvy person using the function approach correctly is in much better shape than most people, and indeed most security professionals.
But is this about website account breaches or the FBI unlocking phones? There is no omni-answer to privacy. My recommendation was only to address the threat of law enforcement unlocking devices, not securing one's accounts on dozens of random websites.
There’s a key word that, I think, further validates what you said above and that I missed considering in my post. “Random” assumes one has equal security for everything, when there are things like email or cloud storage in general that should be protected above and beyond.
If the password is ever exposed via a breach, generally smart people can pick up on that pattern and then all of your passwords are cracked.
There are ways you could alter the pattern. Maybe a separate short salt for each site in addition to the domain, so all you need to remember is the salt, like "dog" for pets dot com; that could make it a bit more secure. Or vary how you combine the master PW and the domain (e.g. count the number of letters in the domain and insert the domain starting at that position, or embed each letter of the domain that many characters apart in the master PW).
Or...just use random passwords, a password manager, and make sure you both trust the provider of the password manager (or use one you control) and use a super high security password as your master password in addition to other forms of authentication. And never let it allow you to stay logged in to your password manager.
I personally use a password manager except in a few physical cases where I use the function approach or have just directly memorized the PIN (iPhone, YubiKey pre-fingerprint).
The function approach is a normal password, just one that scales O(1) with your memory for O(N) passwords. The expectation is that if you have a complex enough function it will be non-trivial to back-fit it; this doesn’t necessarily hold for any arbitrary function, of course.
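As a concrete illustration (my own sketch, not what anyone here actually uses): the function approach can be approximated by keying a MAC with the memorized master secret and feeding it the domain, so the security rests on the master string rather than on keeping the function obscure:

    import base64, hashlib, hmac

    def site_password(master: str, domain: str, length: int = 16) -> str:
        # The domain plays the role of the per-site "salt" discussed
        # above; only the master string has to be memorized.
        digest = hmac.new(master.encode(), domain.encode(),
                          hashlib.sha256).digest()
        return base64.b64encode(digest).decode()[:length]

    print(site_password("correct horse battery staple", "pets.com"))

Back-fitting this from one leaked password is infeasible, since the secret is the key rather than the function; the obvious catch is that, unlike the mental version, you need a computer you trust to run it on.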
There are definitely some workarounds for encryption available. For example, it appears that even for some newer phones Cellebrite supports an approach of brute-forcing the PIN without locking the phone due to too many retries. But those workarounds don't seem to be the primary benefit / use-case of Cellebrite: It seems that in lots of cases people surrender their passwords when asked and the main benefit of Cellebrite is in extracting and presenting information from a phone once it's unlocked.
Once unlocked, data from iPhones is relatively easy to extract with a local backup. Android is much more limited in terms of which data is available as a backup, and the common approach appears to be to use ADB for extraction, either by downgrading an application or by automated screen-scraping of the data (or by using exploits on outdated Android versions). For WhatsApp, it appears that the "downgrade" approach might be used, which downgrades to an earlier version that makes extraction easier. That's the same approach that's also used by consumer backup tools such as Wondershare Dr.Fone.
Regarding the more complicated approach to extracting data from Android: starting with Android 11 and API level 30, Android is starting to ignore the allowBackup manifest attribute, thereby allowing data to be locally extracted from applications via ADB even if the applications don't allow it (source: https://www.xda-developers.com/android-11-force-app-local-ba...). While definitely a benefit for users, I wonder if another motivation for that change was to simplify data extraction for law enforcement.
If you are in an environment in which someone else can physically access your device (like an airport), don't bring it with you. For example, when traveling abroad, use a cheap alternate cell phone.
That sounds like a good idea in theory, but how does it work in practice? Asking as someone who's never tried it.
For example, most of what I do on my phone is saved to the cloud. That's why I can mostly migrate from one device to another without actually directly copying anything over. Contacts, photos (if you pay for e.g. Google One), user contents of virtually all apps, etc. Last time I did it, I actually moved over the call log, SMS history and maybe that's it? Can't remember.
So what benefit is there to using an alternate phone if you sign in to the same accounts? And if you don't sign in to the same accounts, then your alternate phone is more than just for travel. You're using it (or accounts associated with it) to plan your travel, for example. You've now got to have two parallel 'digital identities' continuously in use, which gets complicated fast.
My strategy for dealing with this is not to worry about it, but to only travel to countries where I don't have to worry about it. I'm not a journalist, a politician or an activist, so the list is quite large. The countries where I could get in trouble, I won't travel to anyway (because I don't want to be arrested for being a member of a cultural organization they deem 'extremist').
It might be a huge drop in usability, but a solution might be only logging into services via the browser and never logging in via apps or system accounts (not even logging in with your browser's sync feature).
This way, once you have purged your browsing history/cookies, your device doesn't have access to your accounts (even if it will likely still remember quite a bit of personal information).
>So what benefit is there to using an alternate phone if you sign in to the same accounts?
It depends on whether you need all the "same accounts" when you travel, and what specifically your threat profile for "travel" is, in time and geography. I suspect in many cases the threats vary tremendously, with the most sophisticated dragnet ones existing primarily at specific times/places like airports or borders. In fact, those usually explicitly have unique legal regimes associated with them; all nation-states have a strong interest in their ability to filter what enters/leaves. But once someone is past that, they often have protections that are much stronger, both as a legal matter and as a purely practical one: it's still hard/impossible for authorities to apply the same kind of tools and scrutiny they do at chokepoints. The threat for most would shift to common criminal activity, which is different to defend against.
So in my case, for the specific act of traveling, the only accounts I really need are ones that frankly have zero particular privacy vs. government actors. I'll have my accounts for the specific modes of travel (airline, taxi/car rental, maybe train), all of which cover data that is shared as a matter of course (air travel is massively regulated and involves a lot of security concerns, and I'm going through government control points and checks for international travel regardless). I'll probably have some stuff that falls under the category of "financial", i.e., a couple of credit cards, but the financial industry is also heavily regulated. A dedicated travel email for a few hours is easy. I might load up a few ebooks to read, but there's plenty of completely innocuous stuff I'm interested in there. I can read plenty on the web without being logged in. And that's about it, and even that is "luxurious"; it's still perfectly possible to travel for a few hours with no digital devices at all.
So the most risky parts become easier to get through with "nothing to hide", and then you can pull up a secure channel back to your own systems (or cloud you trust more) on arrival from memory. From there you can restore what is needed for local use, or run purely online. In some cases one could even do an offline "ship themselves an encrypted drive" separately. No need for anything in parallel. This also has the advantage of simply less worry about having a device lost/stolen/confiscated, reducing such an event to purely a matter of money which can be insured against.
Obviously the logic has to change if someone, specifically, is a Person Of Interest to a state-actor. But generalized dragnets and just bad luck (some border guard thinks a random person is "acting suspiciously") are a bigger chance of hassle for most people and being able to react with that to reduce stress, time and cost for low effort is useful to at least consider.
Granted it could be even easier on handhelds (for regular systems there are a lot of powerful tools one can use or whip up themselves already), and I'd love to see Apple and Google explore the concept of profiles that can automatically switch what apps & data are fully encrypted/on-device or not depending on user defined criteria.
>> If you are in an environment in which someone else can physically access your device (like an airport), don't bring it with you.
Agreed.
After going to a few infosec conferences and getting flagged on the way home, I started doing this whenever I travel, no exceptions. Full disclosure: I'm a web developer. I have little if any connection to known hackers or highly public Blue Team members, so I'm still not sure why I got flagged and detained, and all my electronic devices (smartphone, laptops, storage devices) seized and searched.
I now use an old Windows Lumia 950. No fingerprint sensor, fake email accounts set up for the Outlook app, and I use a VPN with the mobile IE version on the phone to access webmail accounts. I still get texts OK, but if I need to save something, I just log into my email, copy/paste, then delete the texts before I get to the airport.
If I'm stopped now, I can be totally compliant and still have somewhat peace of mind they won't have anything worthwhile.
Since Microsoft has long since killed off its smartphones, they are easy to get hold of on eBay and elsewhere. The other nice thing is I'm not sure Microsoft is even tracking user data on these phones anymore, so you might have a nice advantage that way too.
The encryption is not broken. The protocol is broken. Modern ciphersuites (AES128+SHA256+ECC) are bulletproof. The problem is the protocols for storing keys, which is every single attack.
If someone wants you gone and has the power to make you gone, whatever encryption you have is irrelevant. Just do enough to protect yourself from automated or low skill attacks.
I’m kind of curious though... what are you storing that would draw the attention of a nation state with the power to do this? Whatever you’re storing is not worth the anxiety of storing it.
I know you didn’t say this explicitly, but in the interest of privacy I don’t want anyone looking at anything I own without permission. It’s as simple as that. We’ve gotten so far away from basic rights to privacy with digital devices that it’s sad.
I read a great analogy - everyone knows what happens in a bathroom but you still shut the door (and usually lock it) because you want some privacy not because you have something to hide.
Correct, but nobody builds bank vault doors for their bathroom. That’s my point here. This article is about how law enforcement can break down bank vault doors and everyone here is like “this makes my bathroom door irrelevant”.
Your bathroom door was already irrelevant, you just never found that out because nobody is busting down bathroom doors.
None of us here has anything actually illegal to hide IMO.
Political climate and affiliations change however. What was quite fine and protected by freedom of speech today can be declared illegal by an oppressive government tomorrow. In a situation like this you definitely don't want to have your personal data gathered as easily.
To be fair, random assholes on Twitter are much less likely to have nation-state-level resources with which to investigate you, so the threat model is a bit different for one type of witch hunt versus the other.
>None of us here has anything actually illegal to hide IMO.
If they can break your encryption, they can add the illegal stuff that was clearly yours to begin with. Works great as a character assassination. And of course any questions about such incidents would be dismissed as conspiracy theories.
As for how society would react to such a setup? I think the guy involved in 3D-printing guns gives a close enough example.
"Private" communication exists using the Internet. Sometimes your device is in custody because you are in custody for suspicions they already have. In which case, good luck. Other times your device is in custody just because you happen to be crossing a border, for example. In that case, having the device be actually locked might prevent you from being turned away at the border or even ending up in custody yourself.
> "Private" communication exists using the Internet.
Sure, but virtually nobody depends on it to protect the sort of data that you are worried about if you're worried about an oppressive regime deciding to persecute you for what you say. For example, if you're afraid something you post on HN might cause an oppressive regime to come after you, securing your phone, or for that matter your laptop or desktop, isn't going to help you at all. Same goes for pretty much any place people say things on the Internet. Unless you never say anything except in a private, locked chat room secured by unbreakable encryption, and never send any email to anyone without using PGP and without being certain that they are also taking the same precautions you are, anything you say on the Internet is out there for the finding if anyone gets interested enough in you to look.
Signal is incredibly easy to use and many people use it. It’s easy to limit sensitive things to a medium like that. It’s also easy to create profiles that are some degree of “anonymous” that you can use for semi-sensitive data.
You don’t have to resort to never using the Internet as you imply. The vast majority of things people do online are perfectly safe for you to do, even if you are a potential target of a government.
> Signal is incredibly easy to use and many people use it.
For particular kinds of communication, yes.
> It’s easy to limit sensitive things to a medium like that.
It's possible, I suppose, but I would not say it's easy. People want to do many, many different things online. Signal only facilitates a tiny fraction of them. Most people won't be willing to limit their activities online so drastically.
> The vast majority of things people do online are perfectly safe for you to do, even if you are a potential target of a government.
Either you have a particularly limited set of online activities you engage in, or you are drastically underestimating the lengths to which governments will go if they become sufficiently oppressive.
Fair enough, but that's not my point. My point was more about persecution, not outright killings. That's an extreme that many governments won't reach for, even in the third world.
It forces the Gestapo to go door to door (i.e., expend human resources). Many Jews disappeared from western countries simply because the Nazis used the civil records first; the records detailed everyone's religion and where they lived. Who needs to go door-to-door when you can use centralized records to pinpoint exactly where you need to be?
> what are you storing that would draw the attention of a nation state with the power to do this?
The costs they're discussing (several hundred dollars to a million dollars) aren't only the province of nation states; plenty of larger criminal organizations can afford that. And the price of software & hardware generally goes down over time.
I don't think a nation-state is looking for my info, and even if they are, there's nothing I can realistically do to stop them. The NSA (or whoever) simply has more resources than me.
So, my primary concern is protecting from ID theft and nosey police officers/TSA agents. A long passcode is mostly sufficient. Lock the device before interacting with said LEO and hope they don't decide they need the $5 wrench.
Such techniques should, at a minimum, be disclosed and vetted at trial.
In the US, the government is probably going to ask for (and receive) some sort of national security waiver to sharing the details. Generally up to the judge to determine if that request is valid.
Or, the law enforcement agency will use parallel construction to get a conviction without exposing their tech.
True, but the judge has likely instructed them that the evidence is acceptable as presented (without technical details of the phone hacking). And jury nullification isn't common in the US.
The judge can say that the evidence is admissible. It's up to the jury to decide what weight, if any, to place on the evidence.
Now, true, the judge is probably not going to let the defense attorney, in closing arguments, say "That evidence came from a hack of the defendant's phone, which is a really invalid thing to do, so you should ignore that part of the evidence." Judges get really unpleasant with attorneys who cross the judge's lines on what they can and can't say in front of the jury.
So this would probably be left to the jury to figure out on their own. I'm not sure how likely that is...
From the courts' point of view, I don't see why that would be considered relevant. Assuming the phone was seized properly with a search warrant, there is no legitimate expectation of the secrecy of its contents, no matter how encrypted or secured, all of that data can be legitimately used as evidence and for general investigation purposes.
There is some precedent where courts have used the safe analogy: disclosing the passcode might be considered analogous to disclosing a safe combination, and forcing that disclosure may violate anti-self-incrimination provisions (such as the 5th Amendment in the USA); however, there's no contest that if the officials are able to get access to the safe (or, equivalently, the phone) without that information, they are allowed to do so, and the methods used for breaching the safe aren't really relevant, just like the specific methods for breaching the phone. The evidence would be admissible no matter what method was used, so refusing to disclose the specific method for drilling the safe (or breaching the phone's security) would not invalidate the evidence. If the defence demanded that the specific method of breaching the safe be publicized, any judge would throw that demand out as irrelevant to the case.
The only thing that would matter is that the defence might contest whether the data that the prosecution claims to have been extracted from the phone was really on that phone. But the standard measures for convincing the judge and jury of that - mainly, careful documentation of provenance of any data and careful custody of physical items - generally don't require exposing the details of the specific method. And if the specific data isn't used as evidence but just as information to find some people or physical evidence, then there can be no contest at all if there was a valid warrant for searching the contents of that phone.
The defence has no legal grounds whatsoever to demand that such techniques be disclosed to the public. They might have legal grounds to require that the techniques used be vetted, but this can be (and is) easily done in a closed manner, by having court-appointed experts do the vetting and testify "yep, we verified the tool and methodology used and the results match what was on the device" while being prohibited from disclosing any commercial secrets about the techniques used.
FOIA? The answer's probably no, but it doesn't hurt to ask.
Assuming the techniques aren't already public somewhere or other, these things are pretty big sales, so I bet someone better on the phone than I am could probably social-engineer quite a lot of information about the outlines of what they're doing.
You are reversing cause and effect; people who are drawn to authoritarianism are disproportionately drawn to law enforcement, where they have a protected position from which to exert their authoritarian impulses.
> Do they not understand their role as a vanguard against tyranny?
Many of them do enough to use it to justify their own authoritarianism, especially law enforcement leaders who tend to cite it as an excuse for their own arbitrary actions overriding or neglecting the law they are called on to enforce. Fewer seem to take it as more than an excuse for their own tyranny, though.
As a counterpoint, this isn't always the case. My local sheriff has gone on record saying he will refuse to do things he believes are unconstitutional. It has come up. I'm actually a big fan.
I really hate the "my hands are tied" attitude that leads to stupid things like children getting arrested by the school resource officer.
A psychiatrist once told me that the mental paths to becoming a criminal were exactly the same as those to becoming a cop, and the end result was mostly down to previous life experience.
Because it is not the responsibility of the police to protect you [1], [2]. Their primary job is to protect the wealth and power of certain classes by delivering punitive measures for all sorts of things (like morality crimes [3]) to keep the lower classes in line. Those upper classes want LEO to have every available tool in their power, regardless of legality. Even worse, LEO is now being infiltrated by militant partisans, so there is even more motivation for them to ignore laws. [4]
Genuine question: how many of them do you think have ever thought about this?
Thinking about it would also not necessarily result in the conclusion you want. Even if they did think about this, a passing thought to the effect of "I should protect the people against tyranny rather than do what the boss says" is most often not enough to change a person's general disposition. This kind of benevolent action requires a person of moral character to begin with.
I suspect (with no evidence) that it is often the case that they're just there to get a paycheck.
No, and I don't think anyone sees them this way (the same way "free speech" loving Republicans wouldn't consider themselves "liberals").
The police are almost always tools of authoritarianism. They act as, well, authority figures when it comes to who is a criminal, which allows for so much abuse. Most courts basically take it for granted that if a cop and a civilian say opposing things, if there is no other evidence the cop is to be the trusted figure.
Most of the mobile encryption schemes use file-based encryption, not block encryption, i.e. the metadata other than directory and file names is not encrypted. There's a range of opinions as to what degree the unencrypted file metadata (file sizes, dates and times, relative locations in the directory tree) can be used to infer... what? Some kind of pattern that suggests the owner is using, or once used, an app banned in the country they're entering?
We know metadata between e.g. a browser and server, two callers, or a message sender and recipient can reveal quite a lot. It's quite a bit more limited with file metadata, but to what degree is this a problem?