Justice Department Wants Data from About 12 Other iPhones (wsj.com)
168 points by rosser on Feb 23, 2016 | 94 comments



"the FBI said they are not seeking to set a precedent in the case, but to get the company to help them open a single phone that may hold crucial evidence to help explain the most deadly terrorist attack on U.S. soil since Sept. 11, 2001."

What qualifies this as a terrorist attack? Is it because the colour of the perpetrator's skin wasn't white? Sandy Hook had double the number of resulting deaths and so was technically more deadly.

Virginia Tech was carried out by a South Korean-born man, with even more deaths than Sandy Hook.

Poor reporting, WSJ.


This is the national narrative of the event now. Presumably, this is because the perpetrators allegedly voiced some kind of allegiance for ISIS or something. The phrasing is meant to invoke all the fear, uncertainty, and doubt of 9/11. It's also intended to capitalize on the scaremongering that flows so easily into a national consciousness reeling at the shock of an event like this. We can't call mass shootings terrorist attacks when they're perpetrated by disturbed, home-grown killers who haven't made some kind of political statement in connection with their actions. This is arguably why the VA Tech shooting is not terrorism, but this one—and others, like the Planned Parenthood shooting, the Oklahoma City bombing—qualify as terrorism ... and then get repeatedly thrust into the national conversation with those labels so people don't think of them merely as crimes.


> We can't call mass shootings terrorist attacks when they're perpetrated by disturbed, home-grown killers who haven't made some kind of political statement in connection with their actions.

The Virginia Tech shooter sent an 1,800-word manifesto and 27 videos to NBC News during his rampage. I'd say that qualifies as making statements.


It's been a few years, and my recollection is hazy. Sorry. Were these statements political statements of a sort that opposes the US government in a clearly defined way? Did they state some kind of ideological agreement and allegiance to ideas considered anti-US or anti-US-govt? It's late, and I guess I wasn't as clear as I needed to be. I was making a pretty obvious and clear distinction on what kind of political statements I meant. I didn't say statements of any sort qualify the terrorism label.


Well, I have to admit my understanding of what constitutes terrorism was not solid, so I googled it. It seems the statements don't necessarily need to be political in nature.

The FBI defines domestic terrorism[1] as:

> "Domestic terrorism" means activities with the following three characteristics:

> - Involve acts dangerous to human life that violate federal or state law;

> - Appear intended (i) to intimidate or coerce a civilian population; (ii) to influence the policy of a government by intimidation or coercion; or (iii) to affect the conduct of a government by mass destruction, assassination, or kidnapping; and

> - Occur primarily within the territorial jurisdiction of the U.S.

On the first point, I imagine mass shootings qualify as acts dangerous to human life that violate the law.

On the second point, I would argue that sending a manifesto[2] to the media with nuggets like the following is designed to intimidate/coerce a civilian population:

"Thanks to you, I die, like Jesus Christ, to inspire generations of the Weak and Defenseless people — my Brothers, Sisters, and Chil- dren —that you fuck. Like Moses, I spread the sea and lead my people —the Weak, the Defenseless, and the Innocent Children of all ages that you fucked and will always try to fuck —to eternal freedom. Thanks to you Sinners, you Spillers of Blood, I set the example of the century for my Children to follow."

Lastly, the attack occurred on US soil.

1 - https://www.fbi.gov/about-us/investigate/terrorism/terrorism...

2 - https://schoolshooters.info/sites/default/files/cho_manifest...


Hey, I think there's a chance I misunderstood your original comment to which I replied, and started what, to me, has turned into a tangentially related discussion. I wasn't actually trying to provide a legally sound and properly defensible definition of what justifies calling the San Bernardino event a terrorist attack versus other mass shootings. I was merely offering that it is the narrative chosen for discussing the event, as well as advancing a political agenda, regardless of whether its defining merits actually differentiate it enough from other mass shootings to cross the line into terrorism territory—a line which I think long ago became nebulous and ill-defined. Personally, I reject this narrative, and see this event as either YAMS[1], or all mass shootings as YATA[2].

That said, there's allegedly more to why the Va Tech shooter wasn't called a terrorist in the national narrative and reporting on the event. That excerpt of his statement sounds like the ravings of a lunatic to me. Of course, I think the same of any statements that champion murderous, religiously motivated intent. I don't read that and get the sense of any desire to effect political change, influence policy, intimidate, etc. There is a certain practical and theoretical argument that can be made that all public actions taken by human agents are inherently political. But some actions are more political than others. I don't think that shooter's statements really had the effect of intimidating or coercing the public. I don't think they even registered in the public consciousness.

Anyway, my original point was that this San Bernardino shooting has been thrust into the national consciousness as part of the terrorism narrative because it's a politically convenient message, and because, as you wondered, the ethnic and religious identity of the perpetrators fits that narrative so perfectly. I mean, it was instantaneous. Had the shooters been radical, white Christians, I don't think that would have happened. There would have been news reporting that asked the question, "Is this a terrorist attack?", and then answered no. Holding the Judeo-Christian god as one's source of obligation doesn't yet fit that category.

[1]: yet another mass shooting

[2]: yet another terrorist attack


By that definition, the U.S. government and military are terrorist organisations.


> FBI investigators have said that Farook and Malik had become radicalized over several years prior to the attack, consuming "poison on the internet" and expressing a commitment to jihadism and martyrdom in private messages to each other. Farook and Malik had traveled to Saudi Arabia in the years before the attack. The couple had amassed a large stockpile of weapons, ammunition, and bomb-making equipment in their home.

It's pretty offensive to focus on the shooters' skin color, instead of what they were: violent Islamic fundamentalists.


We've seen violent Christian fundamentalists recently with the Colorado Planned Parenthood shootings.

While many around the world consider that a terrorist activity, it has not been labeled as such by US officials (the guy is only being charged with murder and not additional offenses).

I find the double standards to be what is truly offensive.


The Colorado Planned Parenthood shooters didn't pledge support to a worldwide movement of Christians trying to overthrow the western world order.


The one still alive describes himself as a "warrior for the babies". He also:

> Mr. Dear described as 'heroes' members of the Army of God, a loosely organized group of anti-abortion extremists that has claimed responsibility for a number of killings and bombings.

He said the attacks were politically motivated.

https://en.wikipedia.org/wiki/Colorado_Springs_Planned_Paren...


But it's terrorism nonetheless. The anti-abortion movement in America is just as organized and well-funded as the group behind the San Bernardino shootings, if not better organized and funded. They also have the protection of 1st Amendment liberties here.


That's absolutely irrelevant. Terrorism is politically motivated violence, full stop. You don't have to be a card-carrying member in a "designated terrorist organization" to be a terrorist, and to suggest that you do is myopic and naïve.


What exactly is this nebulous western world order? These guys we're supposed to be wetting our pants out of fear of can't even agree among themselves beyond some blathering about Israel.


No, seriously: how come this case is so much more important than all those other mass shootings? (And I'm hearing a lot more about San Bernardino than I'm hearing about Dylann Roof.)


Because when a fundamentalist Christian shoots up a Planned Parenthood clinic, it's a "lone gunman" and not a "terrorist". Welcome to America.


WSJ is simply reporting what the FBI said in reference to Sandy Hook. The FBI is at fault here, not WSJ.


He-said/she-said is poor reporting.


"He-said/she-said" is referring to reporting rumours. This is reporting of a public statement of a government agency.


It's also about reporting statements verbatim without trying to discern what the truth is. Writing down a public statement is basically just relaying a message; at that point you are writing a press release for them, not doing journalism. And then you wonder why they replace you with a Reuters feed...


> at that point you are writing a press release for them

Or, as is often the case, cut-and-pasting from a press release they provided you.


http://time.com/4136457/terrorism-definition/ tries to give an answer, which doesn't make much sense to me. That's the "official" answer, though.


Thank you for the article. Very informative.


A terrorist is someone who uses violence or the threat of violence to coerce or intimidate.

People who commit revenge or anger killings are not terrorists. They kill people and then kill themselves, directly or by police. Because they are dead, they are no longer a threat. There is no parent organization to fear.

At least, that's my definition, and I think it's along the lines of what most people intuitively think.


> A terrorist is someone who uses violence or the threat of violence to coerce or intimidate.

Not trying to nitpick, but with this definition, any police officer who has used the threat of shooting someone with their firearm or taser is using the threat of violence to coerce people.

I'm not a supporter of terrorist organisations in the slightest. I am just sick of the racial profiling that goes on which is helping to fuel their cause.


>Terror on the other hand is practiced by governments and law enforcement officials, usually within the legal framework of the state.[1]

Depending on the circumstances, the scenario you describe can be terror. IMHO, swatting on shaky grounds would qualify.

[1] https://en.wikipedia.org/wiki/Terror_(politics)


I think that having a political agenda, and stating loyalty to an organization (ISIS) that calls for acts of violence in order to achieve political aims, is what moved San Bernardino into the category of terrorism; that element was missing from Virginia Tech and Sandy Hook.


"Terrorism" is a topic where we have to come to grips with definitions early on, or the analysis goes seriously astray.

To me, terrorism is the use of stealth to deliberately target civilians in a media effort to change political opinion.

I've had this definition for several years, and I have never reached some of the crazy conclusions other commentators reach about the topic: "George Washington was a terrorist!", "One man's terrorist is another man's freedom fighter", and so on.

Definitions matter.

Note: This is not related to the merits of the FBI's case. My point is simply that we can all use the word "terrorism" and all actually be talking about different things.


On a side note: the FBI didn't/couldn't even properly investigate 9/11, so why do they still dare to use this wording? ("to help them open a single phone that may hold crucial evidence to help explain the most deadly terrorist attack on U.S. soil since Sept. 11, 2001") What came out of PENTTBOM? A single court case against Moussaoui, who was not directly involved in 9/11 at all, and that's it. Hundreds of arrests which led to nothing but illegitimate long-term detentions under military law, with no due process.

If Apple can help them extract data from those phones, fine. But Apple apparently built secure phones without a bypass, so they are out of luck, and it makes no sense to come up with fantasy warrants without any technical solution.


> the encryption of personal devices has become a serious problem for criminal investigators in a variety of cases and settings

And so has the second amendment -- but that doesn't mean we should get rid of it. Yes, their job is hard. That's the nature of the job. Just because something makes a job more difficult doesn't mean it's a bad thing. I don't get why they insist on making this argument.


I agree with the general sentiment of "don't get rid of things only because they make certain jobs more difficult" - but I don't think the FBI is arguing for getting rid of encryption. I think they want to be able to break security surrounding encryption sometimes and under certain circumstances.


They amount to the same thing, depending on how it's done and who gets it. If any law enforcement agency that tries hard (FBI, MI5, China) to fight what it wants to fight can get hold of Apple's tool, then that encryption is not so much outlawed as pointless to use.

It's because the fervent fight against something (communism, black civil rights protesters, anti-capitalists, future civil rights upholders) leads to questionable use of any tool that we can't give them too many tools. In the UK, the mayor of London bought three water cannon trucks but has been forbidden to use them: where will they be in ten years' time?


I think the fear is that those 12 will lead to the other 700 million iPhones.


That's not a justified fear. Requiring Apple to backdoor all phones is not similar at all to requiring Apple to help hack particular phones that they have the capability to hack, in response to court orders.


You're telling me you would trust that software to remain in the hands of trusted actors? In 2015 alone, the IRS, LastPass, the CIA director's email, Hacking Team, and even Kaspersky Labs were breached! There can be no absolute guarantee that this backdoor would remain safe indefinitely. That is just the most blatant problem, not to mention the overt displays of cynicism and misuse of authority by the NSA as revealed by the Snowden leaks. Consider the political climate in the US at the moment. Now imagine a truly evil actor came into power and was handed control of organizations with unheard-of amounts of surveillance power. I don't mean to seem paranoid, but in this case the feeling is completely warranted.


A gaping "backdoor" already exists in the form of the software update mechanism. By the same logic you're using, Apple can't keep their source code and signing key secure forever, which would be a much worse leak than them losing control of a modified iOS that would let someone brute force a PIN for a phone in their physical possession without the phone being wiped.


So would it be okay for the FBI to have to bring the phone to Apple?


Don't you see the slippery slope here? Next, the German or French police will knock on Apple's door. When they get access, China, Russia, and others will line up next.

Moreover, who says that the FBI or some other agency will stop after this iPhone, or the next 12 iPhones? Why not push to get their own signing key after they succeed in this case? They will try to get as far as possible.

We live in 2016. Many of our devices with our private data are directly addressable from anywhere in the world. Intentionally weakening encryption and security in any way is a dangerous proposition.

It's good that Apple fights this tooth and nail. Sure, it may align with their PR. I don't care; it benefits every citizen of the net who wants privacy and security.


Worse: if this becomes "normal", then rather than the CTO or CSO or VP of iOS Engineering having to unlock the code-signing keys to sign releases (presumably via N-of-M), it'll have to be automated, so that "oh, the FBI needs another custom build for the 4th time this week, just do the build and click here to sign it". At that point the security of the master code-signing keys has evaporated to nothing and we're all sunk.


I think the precedent that emerges could be that law enforcement can force companies to sign specific functionality. This access-to-data power would be in line with their access to telephone and telegraph copper in centuries past.


People keep suggesting that on the slippery slope of Apple complying with cases of limited scope, the next logical step is compelling the production of a signing key.

This is the very essence of why a slippery slope is often a logical fallacy, because one does not necessarily follow from the other, and no supporting evidence is given that it would.

There are plenty of things that would keep this step from occurring, primary among them that there would be no legal justification - in the context of the All Writs Act, there would already be a solution available to law enforcement that was less burdensome to Apple than providing the keys to the kingdom.

I, too, think it is laudable that Apple would fight for privacy, but I also think that a lot of the argumentation around this case is soft and ideological.


The main point here is that neither the government nor Apple could guarantee that this backdoor software would remain safe. If the FBI brought the phone to Apple to install the custom OS, the software would then exist, which is inherently a dangerous situation. Furthermore, suddenly a precedent is set wherein Apple is obliged to continue unlocking phones for the government with custom-created software. I sincerely doubt any organizations involved would tolerate the continual creation and destruction (can data be truly destroyed?) of customized OSes each time the government needed access to a phone. That means the incentive is to create and maintain this backdoored version of the OS, setting up the exact vulnerability Apple is trying to fend off.

Basically, this project, from whatever angle it is approached, has a non-zero chance of backfiring dramatically, thus risking the security and privacy of millions of iPhone users throughout the world.


I'm under the impression that this "backdoor" is the fact that these phones can download updates to the OS. The phone checks that the update is signed by Apple. If Apple can be trusted to keep their code-signing keys secret, then surely they can be trusted to keep a special version of the OS secret.

I'm still opposed to this, though... The FBI wants to be able to dictate to a private company what features to build (because of the All Writs Act).


Only if you think Apple should be an unhackable "trusted actor".

Let's face it: the fact that Apple can be compelled to create this bypass is a civil rights issue, but the fact that it can create it at all is already a security bug. One that, in their defense, they seem to be trying to patch a step at a time in newer versions of their devices.


What if there were a limit of 3 unlocks a year? Like in NFL football, where the coach is only allowed to challenge 3 plays a game, even if each challenge is successful. (I know, still too slippery a slope)

Or if each unlock cost the requester a $10 billion fee (donated to charity)? To make them think harder about when to use an unlock.


Then there would be another terrorist attack and emergency legislation would be passed giving the coaches 6 challenges.


How about forcing the prosecutor and Director Comey and the judge to place the entire contents of their own phones on a public server? Every call, every email, every GPS coordinate. Let them feel our p.


pain.


You haven't been paying attention. Please explain how a security hole can only be exploited by Apple+FBI and no one else.


Why? It's a crystal clear agenda. The FBI has called for an end to effective crypto in the hands of US citizens.


> The FBI has called for an end to effective crypto in the hands of US citizens.

This is where it will end, and perhaps rightly so. There is no physical space in the US where US citizens can hide from the FBI and other authorities. Why should there be a virtual space where US citizens can hide? US citizens are not free to hide from the authorities and the law.


But it wouldn't be just the US. It would be any other country, and not just the country's citizens.

"Why should there be a virtual space where people can hide from US/China/Russia/Saudi Arabia/etc."


> perhaps rightly so

I see no 'rightly' in this, at all. The end-game is an abject panopticon that makes the Stasi look like amateurs.

(Regardless of what Google, etc. may collect, corporations do not have guns or the ability to put you in jail. Not yet, anyway).


Question for HN in general: Is it possible in principle (for Apple or someone else) to construct a smartphone that can accept software/firmware updates, but that Apple cannot push malware to at some later time?

E.g. can we implement all security functionality in hardware/burn it into the silicon? Or accomplish the same ends by some other means?

Intuition says "no," because "security functionality" is sort of nebulous. But it would be great if a device could be constructed in such a way that all such future demands for collusion by hostile actors such as governments could be rendered preemptively impossible.


"can we implement all security functionality in hardware/burn it into the silicon? Or accomplish the same ends by some other means?"

Yes. The software could be burnt into PROM (which is unchangeable), or one could even create a custom ROM chip, and if necessary include hardware or code that checksums the ROM.

However, a company doing that must be willing to run the risk that there is a bug in that unchangeable software/hardware, and then either tell their customers that they are screwed or offer them a free replacement phone. It also may lengthen development cycles, as you can no longer, at the last minute, order your factory to open a million boxes and update that part of the firmware.
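
To make the checksum idea concrete, here is a minimal sketch in Python standing in for boot code (all names and values are illustrative): the boot routine hashes the firmware image and compares it against a digest fixed in unchangeable ROM at manufacture time.

    import hashlib

    # Digest burned into PROM/silicon at manufacture time; here it is
    # simply the SHA-256 of the placeholder image below, for illustration.
    ROM_DIGEST = bytes.fromhex(
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
    )

    def firmware_is_intact(firmware_image: bytes) -> bool:
        """True only if the image hashes to the burned-in digest."""
        return hashlib.sha256(firmware_image).digest() == ROM_DIGEST

    # A boot routine would refuse to run (or wipe keys) on a mismatch.
    if not firmware_is_intact(b"test"):
        raise SystemExit("firmware modified; refusing to boot")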

Alternatively, a fully open phone would allow customers to inspect updates and reject them or perhaps even to partially reject them (partial rejection would prevent the case where users want a feature, but only can get it by accepting weaker security). That requires a 100% open phone (hard- and software) and enough knowledgeable people willing to invest time in looking at the code.


> That requires a 100% open phone (hard- and software)

I am eagerly awaiting the http://neo900.org/

Unfortunately the baseband modem is still unfree, but at least it's isolated over the USB bus, versus having direct memory access as many phones do. Unfortunately no phone has a free, legal modem.

Anyways, I'm more excited about the prospect of the phone itself being completely free. In their own words:

> Not a single line of closed code will have to run on the main CPU to be able to use the Neo900. Using free telephony stacks like FSO or one from QtMoko, FLOSS Linux drivers will be available for every single component. In order to get 3D acceleration working, which is not necessary to operate the device, closed drivers would be needed.


It warms my heart to see the N900 getting a revival. While not the sleekest phone out at the time, that device was tremendously underrated as a portable computer.


> That requires a 100% open phone (hard- and software) and enough knowledgeable people willing to invest time in looking at the code.

Which, sadly, seem not to exist (cf. the long-term bugs in OpenSSL et al.).


It should be completely possible for them to not accept any software updates unless the phone is unlocked. They may not wish to do this for various reasons, but it is definitely technically possible.


Or to wipe the phone if it's updated without being unlocked.


You could certainly reduce some of the attack surface, but you couldn't eliminate it entirely.

I think a more reasonable option is to design the phone in such a way that attempting to load software onto it while it's locked bricks it and causes the encryption keys to be destroyed.


That's not necessary. Simply tying the encryption to a strong password instead of a 4-digit PIN would have been sufficient to prevent a firmware update to a locked device from weakening security, at least in this particular case.

Throw in a memory-hard key derivation function to make parallel brute-force more expensive.
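
For illustration, here is a minimal sketch of such a derivation using Python's standard-library scrypt; the parameters are plausible defaults, not anything Apple actually uses.

    import hashlib, os

    salt = os.urandom(16)  # stored next to the ciphertext; not secret

    def derive_key(passphrase: str) -> bytes:
        # n=2**14, r=8 costs ~16 MiB of RAM per guess, so an attacker
        # running many guesses in parallel pays that memory many times over.
        return hashlib.scrypt(passphrase.encode(), salt=salt,
                              n=2**14, r=8, p=1, dklen=32)

    key = derive_key("correct horse battery staple")  # 32-byte key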


But making updates require password input would close the backdoor entirely.


This could be possible if at setup time the user was asked to create their own key, which any update would have to be signed with. This would then unburden Apple of having the "master key" needed to sign software updates, but would create an extra hoop for users to jump through to update their phone. I'm sure we've all experienced putting off a necessary update to software because of the inconvenience it would cause. Imagine millions of iPhones becoming vulnerable overnight because of mass user laziness.


How about a simpler rule like "no updates will install unless the phone is unlocked"? Such a rule would make discussion in the San Bernardino case moot. In response, the government would have to outlaw this practice and force device makers to retain the ability to push updates to a locked phone.


From what I understand, the discussion is about an update being installed through the low-level bootloader, while the phone being locked is a function of the higher-level operating system (which probably already has the "no updates will install unless the phone is unlocked" rule).

The relevant rule would instead be something like "installing any update through the low-level bootloader always wipes the encryption keys and the data partition". Normal updates wouldn't be through the low-level bootloader, so this rule isn't too restrictive.

Not that it makes any difference. The attackers in this case don't have to install the update in any normal way, be it through the normal operating system or the bootloader; they can instead desolder the NAND chips and write the update directly to them.

What I believe the attackers actually need is a signature from Apple. If the bootloader chain checks the operating system's signature, it won't boot unless it's signed by Apple.
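
As a toy model of that check, here is what the bootloader's verification could look like, using Ed25519 from the 'cryptography' package as a stand-in for Apple's actual signing scheme (all names are hypothetical):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    vendor_key = Ed25519PrivateKey.generate()   # stand-in for Apple's key
    BURNED_IN_PUBKEY = vendor_key.public_key()  # baked into the boot ROM

    os_image = b"...kernel and system image bytes..."
    signature = vendor_key.sign(os_image)       # produced at build time

    def boot(image: bytes, sig: bytes) -> None:
        try:
            BURNED_IN_PUBKEY.verify(sig, image)  # raises if not vendor-signed
        except InvalidSignature:
            raise SystemExit("image not signed by vendor; refusing to boot")
        print("signature OK, booting")

    boot(os_image, signature)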


Presumably you could set it up to automatically sign anything signed by Apple, if that tradeoff suits you.


The security issue of Apple being vulnerable to being compelled to create signed backdoors would still remain, in that case.


Yes, for the people who chose to be, when they chose. This choice would be decoupled from the OS choice.


It is possible, with caveats. The user has to be able to decide whether to accept an update or not, and signal their assent or rejection with their password. And the software has to be open source, so that the user can inspect a proposed software update and decide whether it is malware or not (or let others inspect it and know they have the same thing). In practice this is imperfect, because trustworthy volunteers willing to audit software are scarce.


http://mjg59.dreamwidth.org/39999.html -- lock to the owner's signing key, not the manufacturer's.


> Question for HN in general: Is it possible in principle (for Apple or someone else) to construct a smartphone that can accept software/firmware updates, but that Apple cannot push malware to at some later time?

I've been thinking about a related question: how to construct an update system so the updater cannot push malware to a specific phone, without making it visible and available to everyone else?

My answer would be something like Certificate Transparency (https://en.wikipedia.org/wiki/Certificate_Transparency). The update system would only accept an update if it's signed by several independent entities, spread all over the world (in many independent jurisdictions), and the signing system in these entities would be designed in such a way that every signature is written to an append-only log which is published to everyone to see, and these entities would also publish a copy of every update they signed.

That way, Apple could still push malware to a phone, but they couldn't push malware to a phone without making that fact public, at least to security researchers who can reverse-engineer firmware updates.
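
Here is a rough sketch of the threshold part of such a scheme (the append-only log is elided), using Ed25519 from the 'cryptography' package; everything here is hypothetical. The device accepts an update only if at least K of the N independent parties have signed it:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    K = 3                                                # required threshold
    signers = [Ed25519PrivateKey.generate() for _ in range(5)]  # N parties
    TRUSTED_PUBKEYS = [s.public_key() for s in signers]  # baked into firmware

    update = b"update payload"
    sigs = [s.sign(update) for s in signers]  # each party signs (and logs) it

    def accept_update(blob: bytes, signatures: list) -> bool:
        valid = 0
        for pub, sig in zip(TRUSTED_PUBKEYS, signatures):
            try:
                pub.verify(sig, blob)
                valid += 1
            except InvalidSignature:
                pass
        return valid >= K  # a quiet, targeted build can't reach the threshold

    assert accept_update(update, sigs)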


If the US can get a presidential plane diverted and forced to land where it did not want to, I don't think it is beyond imagination that 'independent entities spread out all over the world in many independent jurisdictions' could be coerced as well.


They could involve third parties in signing updates, and if a device receives an update that isn't disseminated to all third parties, it doesn't accept it. That way Apple couldn't dispatch custom/backdoored updates to individual devices without revealing that they did it.


This would be a good step forward, but maybe not enough.

For example, they could be forced to first publish an update to all devices (through the third parties) which disables the third-party checks. Then, they could be forced to put the backdoor on individual devices.


> For example, they could be forced to first publish an update to all devices (through the third parties) which disables the third-party checks.

And the device wouldn't accept it. If I subscribed to an organisation over which the USG has no sway, say the Chaos Computer Club from Germany, then my update wouldn't be accepted unless additional signatures were provided by them. Unless the USG forced Apple to abandon such a scheme for everyone, they would be powerless to backdoor individuals. I'm also not sure how relevant this is here, but in the US it's been established that software is speech and is protected by the First Amendment, so there are limits to the ways in which the US can influence Apple.


So you expect other communities to review all code changes? No way. This would be a huge time sink, and it has never worked in the past.

This only works for very narrow-scoped projects, where the reviewers are also part of the project. Think of OpenBSD, Qt, or other projects where peer review is part of their internal structure.

So if other communities were to sign the releases, this would have to be automatic or semi-automatic. It would not be a review, but it would still be helpful, since those external observers could then see all releases in hindsight. So maybe years later, some people will find stuff and can trace it back to the point in time and the exact update by which it was introduced.


> So you expect other communities to review all code changes?

No. Third parties would just be a barrier against odd updates that don't hit everyone. If an update starts hitting everybody, then it's probably harmless because everyone is getting it; if it appears at just 10 clients, then it's probably a backdoor.


Security and usability are always going to be on a spectrum.

Even in the San Bernardino case, if the FBI hadn't changed the user's iCloud password, just bringing the phone to a trusted WiFi network would have caused it to automatically back itself up to the cloud. Even assuming the phone were completely 100% locked down perfectly, as long as you're sending backups to Apple that aren't encrypted, you're putting the data one subpoena away from the FBI (or from a hacker who breaks into Apple).

So say you don't use iCloud, or Apple starts a service where the backup is encrypted by your password. Your password needs to be really strong for this to work, and Apple needs to use something like bcrypt or better to hash it, but say they do that as well, and you have a long random password.
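
A minimal sketch of what such a client-side encrypted backup could look like, using scrypt plus AES-GCM from the 'cryptography' package (parameters and names are illustrative, not Apple's design); the key is derived on the device, so the server only ever sees ciphertext:

    import hashlib, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_backup(passphrase: str, backup: bytes) -> bytes:
        salt, nonce = os.urandom(16), os.urandom(12)
        key = hashlib.scrypt(passphrase.encode(), salt=salt,
                             n=2**14, r=8, p=1, dklen=32)
        ciphertext = AESGCM(key).encrypt(nonce, backup, None)
        return salt + nonce + ciphertext  # only this blob gets uploaded

    blob = encrypt_backup("a long random passphrase", b"contacts, photos")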

Then you can put a chip on the phone that refuses to ever be updated, and that implements the password lockout logic (try 10 bad PINs and it wipes the key). What they're asking Apple to do now would simply be impossible.
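
A toy model of the lockout logic such a chip would implement (purely illustrative): ten bad PINs and the key material is gone for good.

    class SecureElement:
        MAX_TRIES = 10

        def __init__(self, pin: str, key: bytes):
            self._pin, self._key, self._fails = pin, key, 0

        def unlock(self, guess: str):
            if self._key is None:
                return None              # key already wiped
            if guess == self._pin:
                self._fails = 0
                return self._key
            self._fails += 1
            if self._fails >= self.MAX_TRIES:
                self._key = None         # irreversible wipe
            return None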

But you're allowing iOS itself to be updated, right? So if, for instance, you're discovered to be a terrorist, Apple could push a patch to the main OS that simply grabs the key after you've unlocked the phone and sends it to them. No more need for the secure chip; just decrypt the flash directly.

Or easier, the patched OS could just slowly upload all the data on the phone to a backup server whether or not the user opted in to such a backup plan.

If there's a lot of data (tons of photographs, for instance), they could even have the phone wait until it detects a known wireless access point to trigger the backup. The FBI could then arrange for that access point to be active near you and your device, and it could connect and upload to a server sitting in the FBI van nearby.

Also keep in mind that just about every release of iOS was designed to be impossible to jailbreak. And it seems that iOS 9 can still be jailbroken. This is typically done using an OS vulnerability that's exploited; the FBI/NSA could easily use those approaches to hack your phone (though all would require an already unlocked device, so they'd probably need to get you to run something via social engineering, but some of the hacks only require you to click on a specially crafted link on a web site...).

Finally, remember what I said about security and usability? Yeah, now if you forget your password and need it to be reset, you not only lose the data on your phone, but also all of your backups. Oops. All of that, and you're still not protected from a coordinated attack.

It's great for Apple to do whatever they can to resist giving the government the keys to everyone's privacy, because that can be abused. But it's safest to consider anything you typed into a computer or phone to be something that might get posted publicly.


There are quite a few more phones than that involved in probably about 9 innocent deaths PER DAY. http://www.huffingtonpost.com/2015/06/08/dangers-of-texting-...

Does the Justice Dept want those unlocked too?


Comey said the FBI wants the ability to do this even for car accidents, so a qualified "yes."


I for one am happy for this issue to be at the forefront of discussion.

The worst thing that could happen is all of these requests going unspoken, buried beneath less important topics.


Sadly this is a tough sell, given that the general public perceives "encryption" as "password", unaware of the underlying technology and implications. It's doubly sad that it was a government employee, in a government-controlled environment, using a government-managed device, and even when the government had access, they went and changed the password, locking themselves out.

Critical thinking would lead one to question the need for any data at all, given the thoroughly demonstrated incompetence. Yet the public filter stops at the perception that Apple once cooperated but now chooses not to.


I had a really bad experience with an iPhone update and a password storage app.

I had been running my iPhone on iOS 8.x. No need to update to 9.x.

The day finally came when I was forced to allow the update.

Now, without my knowledge, the update also enabled automatic updates of apps. All apps were thus updated to their latest versions without my explicit consent.

I chose the app I am using to keep hundreds of account passwords specifically because they DID NOT transmit anything over the internet at the time I got it. I could do what they called "wifi sync" to synchronize and backup my database to the desktop version of the same software running on my PC within the same network.

Well, with the forced update, "wifi sync" went away, and now the only option is "internet sync". I did not realize this when the app ran through and synchronized to my PC.

So now the dilemma. This fucking company is doing this because they want to sell cloud storage for your data and force you into an annual subscription in order to be able to "internet sync". And, of course, there is the huge violation of the security of my data which, up until the unauthorized automatic update, had been kept private and never left my network.

Not only do I have to find a new password and data vault that will not try to take ownership of my data and pull a bait-and-switch afterward, I also need to change every single password I have, due to my database now being in their cloud.

Unbelievable.


Instead of relying on proprietary software that is out of your control for storing your passwords (and getting burned like you have described), why not use something open source like KeePass? I don't have an iPhone, but on Android there is KeePass2Android, which is open source and (if you choose to use it as such) offline-only.


That's the migration plan when I get the time to move the data over.


I'm not sure I understand this. Apple seems to be limiting all discussion to in-situ mechanisms of cracking, but no one is talking about external means. For example, ICE-level debuggers require sophisticated hardware that is not easily available to everyone. Likewise, the Xbox encryption was very difficult to crack without GHz hardware.

Just put the critical path in the PROM and then bypass the PROM with your own hardware-level circuit. The device itself can be keyed so that only Apple's hardware bypass is allowed to connect in this way. Now you have a physical bypass that is difficult if not impossible to get around, but enables warranted access by agencies that own the limited hardware. This also has the advantage of human cost: you can't easily apply this method to millions of phones without a huge cost in time and effort. Even if the device is stolen, it limits exposure to phones in the physical possession of whoever holds the hardware bypass, which is surely better than compromising millions of phones. And so what if the critical path patch exists out in the open? Knock yourself out and make an emulator that will unlock hw-emulated phones (which is a difficult task; not even the iOS emulator is a true hw emulator), but it won't work on the actual hardware unless the PROM is swapped, which is hardly trivial.

The key signing argument has little weight by the way. DVD manufacturers had the same stance and the root key was leaked to the public. How can Apple guarantee the same won't happen with their keys?

It seems both Apple and the FBI are withholding something, but at face value the technical requirements should allow warranted access. The fact that they don't is a flaw in the technology design.

Case law surely has precedents in this area? Can safe manufacturers be required to make bypass mechanisms for bank vaults? What about non-criminal property law? Say a family member dies and the legal estate needs access?


This may be a totally absurd question, but is there a way to comply such that the government explicitly acknowledges that the case cannot be used as precedent?

i.e. comply with the order, but this case legally can't ever be referred to again as precedent in any other case.


What about a different approach? A society does want to allow access to information for investigative purposes while protecting security. So what about allowing court orders for companies to bypass security in their products, with the caveat that the access method is made publicly reproducible?

This forces the company to fix the vulnerability and forces the government to carefully consider which cases are important enough.

Just brainstorming... not a solid proposal. Shoot some holes in it please.


What happened to "just one phone"?


...and it's an entirely reasonable position to say that they should get it, if they have a warrant. Especially if the case is as cut-and-dried as the San Bernardino one.

The public debate on this has reached truly sad, nigh-Trumpian levels of hysteria and uninformed commentary. There is no "back-door" here. Encryption is not being compromised. This has very little to do with encryption at all, really: if the criminals in question were to use a strong password instead of a four-digit PIN, Apple could just shrug, say "not possible in our lifetimes", and that would be the end of it. But these criminals have easily brute-forceable PIN codes, and the investigators want to brute force them.
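
The arithmetic behind "easily brute-forceable": a 4-digit PIN has only 10,000 possibilities, so once the retry delays and the wipe-after-10 limit are out of the way, the search is quick. The 80 ms per guess below is an assumed figure for the hardware key-derivation delay, not a measurement.

    pins = 10 ** 4                 # 0000 through 9999
    seconds_per_guess = 0.08       # assumed key-derivation cost per try
    minutes = pins * seconds_per_guess / 60
    print(f"worst case: {minutes:.1f} minutes")  # ~13.3 minutes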

This situation is about a legal fight of very narrow parameters: should it be possible for the government to compel a company to help extract its customers' "secure" data, via this specific, very old law. Reasonable people can disagree on this point.

Unfortunately, the public debate has gone completely round the bend, with famous people grandstanding on totally irrelevant things (like "encryption back doors"), which have no bearing on anything at all. Moreover, as it turns out, Apple has been doing this for years for police investigations, and the empire has not yet fallen. If you're worried about the slippery slope, well...we're already well downhill, and our bottoms are wet. Perspective.

I realize that it's not popular amongst the tinfoil-hat set that has set up residence here, but I think that there are times when we want our government to be able to do things like break into a suspect's phone. There should be safeguards (like warrants), of course, but it's a perfectly reasonable position to say that privacy is not absolute.


I agree with you that people capitalize on events to fit their political narratives, and I also agree that privacy is not an absolute value, and that tensions between values call for tradeoffs to be made.

But I do believe that calling this a backdoor is proper framing. Apple allowed a weak password knowing that convenience often beats security, but Apple also provided a mechanism by which one can have weak passwords and still have strong security via a max-attempt mechanism. It is a circumvention of security features, and "backdoors" are about security, not encryption (which is merely a subset of security).

I also think the legal fight is not circumscribed around narrow parameters with predictably narrow legal outcomes. The FBI cites a law from 1789 that says that the court may issue "...all writs necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law". It does not sound easy to predict what case law shall determine to be "necessary or appropriate" 5-10 years from now.


"Apple also provided a mechanism by which one can have weak passwords and still have strong security via a max-attempt mechanism. It is a circumvention of security features, and "backdoors" are about security, not encryption (which is merely a subset of security)."

I don't deny any of that, but there's still a bright-line distinction between "circumventing security features" for a single, badly protected phone, given a warrant, and weakening security across-the-board for everyone. This is a case of the former, not the latter.

"I also think the legal fight is not circumscribed around narrow parameters with predictably narrow legal outcomes."

The legal fight is, factually, centered on the question I stated. It doesn't involve any of the other technical stuff that's being tossed around this debate. That was my point. But like I said: I think it's a legitimate question, so I'm not sure who you're arguing with right now?


Fully agree with your comment about the discussion going full-on hyperbole, but at the same time I agree that it's not Apple's job to intentionally weaken the security parameters of their firmware. Asking for encryption back doors is just another step from this; are you confident enough that it's not a risk worthy of consideration?



