"Nobody knows how?" They have a pretty plausible explanation in the back half of this very article:
> With iOS 12, Apple implemented a highly-anticipated change called “USB Restricted Mode.” This shuts off lightning port access on the iPhone if it hasn’t been unlocked by a user in the last hour. This was widely believed to be Apple’s solution to foil companies like GrayShift and Cellebrite but we don’t know for certain if that did the trick. Apple did not return our request for comment.
That would definitely do it. If the Lightning port shuts down, they can't differentially back up and restore the device while attempting to guess the user's passcode.
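Purely as a sketch of why port access matters (assuming nothing about GrayShift's actual technique, which has never been published): if an attacker can snapshot and restore device state over the port, the passcode attempt counter stops being much of an obstacle. Everything below — `device`, `snapshot_state`, `try_passcode`, `restore_state` — is a hypothetical stand-in, not a real API.

```python
# Conceptual sketch only: guess passcodes in small batches, rolling the device
# state back before lockout logic can accumulate failed attempts.
import itertools

def guess_passcode(device, guesses_per_round=5):
    """Try every 4-digit passcode, restoring a saved snapshot between batches.
    Returns the passcode or None if the keyspace is exhausted."""
    snapshot = device.snapshot_state()            # requires a working data port
    all_codes = (f"{n:04d}" for n in range(10_000))
    while True:
        batch = list(itertools.islice(all_codes, guesses_per_round))
        if not batch:
            return None                           # keyspace exhausted
        for code in batch:
            if device.try_passcode(code):
                return code
        device.restore_state(snapshot)            # reset the attempt counter
```

With the port dead an hour after the last unlock, none of this round-tripping is possible, which is the point of the comment above.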
Given that customers themselves had access to the beta, if that were true it would have been pretty easy for someone else with access to a GrayKey to refute that.
There's not really any motivation for people with access to GrayKeys to confirm things that GrayKey has said. I would imagine that GrayShift frowns upon its clients talking publicly about GrayKey, so there'd have to actually be a good reason for someone to do so. "GrayShift told the truth" isn't a particularly good reason. But "GrayShift is lying" for many people would be a good enough reason to be worth speaking up. Heck, even an anonymous claim that GrayShift is lying would be something, but if anyone made such a claim, it wasn't publicized anywhere.
Somewhat tangentially, it’s actually a fairly smooth process to pull data off a MacBook when the keyboard/graphics/touchpad are dead.
A friend dropped a drink through his and it looked very dead. It turns out that with a Thunderbolt cable, another Mac, and his disk crypto password, he had easy access to his old disk. Don't ask me how, but kudos Apple.
Disk encryption usually employs some form of PBKDF (password-based key derivation function) to turn the user's passphrase into a key that's usable for crypto. If you know the specific derivation function and the passphrase, unlocking the storage device on another machine is a matter of setting up the software. When both ends are Apple machines (one end: the damaged machine running in Target Disk Mode; the other: a working machine), the key derivation is taken care of by definition, and the user merely has to enter the passphrase.
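As a rough illustration of that first step (this assumes PBKDF2 with made-up parameters; FileVault's actual KDF settings and salt live in the volume metadata):

```python
# Minimal sketch of passphrase -> encryption key derivation with PBKDF2.
# The salt and iteration count here are placeholders, not Apple's values.
import hashlib
import os

def derive_volume_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a human passphrase into a 256-bit key suitable for disk crypto."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)          # stored alongside the encrypted volume
key = derive_volume_key("correct horse battery staple", salt)
print(key.hex())
```

The slow, salted derivation is what makes offline guessing of the passphrase expensive even if someone images the disk.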
The question I have is how did he get the "dead" machine into target disk mode? That normally requires a reboot and a specific key chord on the keyboard.
The older MacBooks with FireWire had Target Disk Mode, where you pressed T on boot and the MacBook then acted as an external disk for the other host Mac. Very useful for backing stuff up!
No clue how you'd do that with the newer ones (dongles?) or with a dysfunctional keyboard though...
Only if your luggage can be effortlessly duplicated and there’s a setting to automatically copy your trunk luggage to secure storage on a regular basis.
You pull down the back seat and get into the trunk from inside the car of course.
Snark aside, if your phone is dead, the only way to get data out of it will be to get to the "disk" on the device, if that's even possible.
Not at all. Of course it's good to back up a phone regularly, but even if you've done so it would always be preferable to back up one last time and then restore your replacement device from that backup.
This isn't always possible, but if the storage is undamaged, it would be ideal from a continuity perspective.
I'd rather the device be secure and have you protect it better than have a security hole left open so you can back up your device after being careless.
I'd wager this is a fairly fringe case that isn't worth the trade-off. If you don't have a recent backup and absolutely need the data, take it to a third-party repair shop or data recovery lab, or buy a screen from iFixit; connect a new display, enter your PIN, and back it up.
I keep an old iPhone 5 with a busted screen for this reason - I would dispose of it, but there's no way to clean up the data. What's that you say, it's encrypted so the data looks like noise? Welp, TFA says that's not a problem, since the screen broke a while ago, far prior to the latest iOS release. Which prompts an interesting thought: how long will this current twilight last?
You replace the screen, enter the passcode and do a backup, easy. Don't worry, Apple is hard at work implementing parts blacklists and soon won't allow third-party technicians to do screen replacements, because fraud or something. Oh, and Apple doesn't repair/recover broken devices, but they will gladly "upgrade" you to a new phone for only $599, or swap your broken one for a refurb.
> Don't worry, Apple is hard at work implementing parts blacklists
Source?
We get iPhone repairs from Apple very often - if we mention it's urgent they'll swap it for a refurb, which is a good thing: it's allowed me to be in and out in 15 minutes versus dropping an iPhone off and picking it up the next day.
We have a business account - I've not broken my personal iPhone in years so I can't speak as a consumer. I did have a MacBook Pro 15" keyboard (pre-butterfly) fail WAY out of warranty and they replaced that for free too. Maybe I'm lucky, or maybe Apple aren't so bad to deal with after all?
> GrayKey, which counts an ex-Apple security engineer among its founders
How is it possible for someone to start a business built on breaking the security systems they had a role in implementing? It seems like a huge ethical breach, and I'm surprised there would not be contractual considerations to prevent this in the security field.
Why would it be unethical to get a job trying to break the security you previously helped implement? Security is not accomplished via obscurity so it's not like there's a trade secret issue here.
Really the only problem I can think of is if you put backdoors or weaknesses into the system with the expectation of being hired later to defeat the same system. But the problem there is the fact that you deliberately crippled the system you were building and not the fact that you were hired later to defeat it.
It becomes unethical as soon as you discover the vulnerability and refuse to disclose it.
It's like if you have a key to an apartment and you move out without the landlord asking for it back. It's not unethical if you then discover that the key still works. It becomes unethical when you do not disclose this fact. Worse still if you start making copies of the key and selling it to people who intend to break into that apartment.
I think the ethics depend heavily on when a flaw was discovered.
If this person knew of a flaw when at Apple, and did not disclose it to Apple (his employer), but instead left to go exploit it--that would be clearly unethical and possibly even illegal.
But if they left, and then, as an ex-employee of Apple, discovered a new flaw in Apple's security... how are they any different from a random person who never worked at Apple?
Do security researchers have a general ethical obligation to disclose a product's security flaws to the company who created it? I think most security researchers would say no... the obligation is more rightly put on the company itself to produce secure products.
Company managers don't generally have an ethical obligation to their ex-employees after the employment period ends. It doesn't seem fair to say that ex-employees should be obligated toward their former employer. The exception would be if they acted unethically while employed there and then reaped the benefits later.
Or look at it this way--imagine an employee of GrayShift went to Apple and then learned how Apple is defeating GrayKey. Does that employee have an ethical obligation to tell GrayShift about that? If ethics don't work the same in both directions, they're probably not strong ethics.
That is, unless there is some higher moral at stake, like "breaking security is always wrong." But even that is problematic because if you never try to break security, it never gets better.
> I think the ethics depend heavily on when a flaw was discovered.
> If this person knew of a flaw when at Apple, and did not disclose it to Apple (his employer), but instead left to go exploit it--that would be clearly unethical and possibly even illegal.
> But if they left, and then, as an ex-employee of Apple, discovered a new flaw in Apple's security... how are they any different from a random person who never worked at Apple?
They are still different from the average person on the street. Security is more than just the binary flaw/no flaw distinction. Perhaps while working at Apple the researcher knew about some old libraries that were still in production release but had not been worked on, let alone updated in years? That sort of insider knowledge could help them find exploits the average person wouldn't think of.
GrayShift is a completely unethical company. They are selling unauthorized access to people's devices. The fact that their main clients are law enforcement officers is irrelevant.
I can't believe you'd suggest that Apple is in the wrong for hindering GrayKey's efforts. Am I in the wrong for changing the locks on my door to hinder burglars?
Security flaws are ticking time bombs that threaten all of society. Discovering them and exploiting them rather than helping to fix them is ethically akin to discovering chemical or biological weapons and helping to put them into use.
> GrayShift is a completely unethical company. They are selling unauthorized access to people's devices. The fact that their main clients are law enforcement officers is irrelevant.
I'd say this is an example of a higher ethic or moral at stake. Not everyone agrees with you that law enforcement access to a device is always unethical. It certainly can be, if the police are acting unethically (which some do). But if they are properly investigating a crime and get a warrant, that's going to cross into the OK zone for a lot of people.
> I can't believe you'd suggest that Apple is in the wrong for hindering GrayKey's efforts.
I did not suggest that and I don't believe that. I think it's great that Apple is fixing their device security.
> I'd say this is an example of a higher ethic or moral at stake. Not everyone agrees with you that law enforcement access to a device is always unethical. It certainly can be, if the police are acting unethically (which some do). But if they are properly investigating a crime and get a warrant, that's going to cross into the OK zone for a lot of people.
Not for me. There's nothing sacred about the police. They spend a heck of a lot of time and effort acting as foot soldiers in a class war against the impoverished. They engage in widespread legalized highway robbery in the form of civil forfeiture. And if you want to include customs and border patrol (I do) they also spend a ton of time breaking up desperate families and conducting dragnet surveillance against innocent travellers. And their effectiveness in all these endeavours? Abysmal, if you take their stated aims at face value.
Why are you accusing the parent of suggesting Apple is in the wrong for hindering GrayKey's efforts? They didn't say that. The question is whether an employee has an ethical obligation to their former employer to disclose security vulnerabilities discovered after their employment was terminated. I agree with the parent; once your employment has terminated, you do not have any moral obligation to disclose information gained after the termination. I would argue you do have an obligation to disclose security vulnerabilities discovered while still an employee, or at least refuse to exploit those security vulnerabilities, as knowledge of them is essentially privileged information that belongs to your former employer. But any knowledge gained after termination is fair game.
Maybe the landlord usually changes the locks but forgot this time? Heck, think about a quiet suburban neighbourhood. If someone forgets to lock their doors at night and they get robbed, it's still the burglar's fault for committing the crime.
One of the guiding principles of ethics I follow is Kant's "ought implies can" [1]. If you make a mistake and forget to do something, that's not the same as deliberately choosing not to do it. All humans make mistakes of this kind. To imply otherwise is to imply that a person could rise above all of humanity. That is asking far too much.
The basic principle of modern security is that a secure system must be secure even if an attacker knows all aspects of the system’s design. This is what warnings against “security by obscurity” are referring to.
The only ethical breach I see here would be if they deliberately weakened the system so that they could break it later, which seems unlikely.
Any sufficiently popular "hacking device" is doomed in the long run. In this case, that's a good thing.
We have seen much bigger battles over game console chipping, all of which were lost in the end by the modchip makers. They are just training the defences.
The modchip makers won in the end just as soon as the consoles were no longer supported. I can easily chip my Wii or Xbox because the manufacturers of those devices no longer release security updates for those particular models. This was not true when they were current.
As I said in a sibling comment, it's a constantly ongoing cat-and-mouse game.
Is it really 'winning' when your competitor gives up playing? It seems to me like the console manufacturers won - they resisted the hacking attempts while the consoles were economically viable, then stopped trying when they weren't.
> Any sufficiently popular "thing to crack" is doomed in the long run. ... They are just training offences.
No, because these are not symmetric operations. There is nothing mathematically impossible about bug-free software. While the Halting Problem means we can't just formally take an arbitrary program and solve all of its states, it is possible to formally prove small programs and build up additively, and to approach a limit of zero bugs over time, particularly in simple programs. If they are not changing and not actively gaining new informal features, they will not automatically gain new vulnerabilities either, and for all these reasons it's standard practice to separate out security-core code and keep it as simple and static as possible beyond bug fixes.
So unlike in the real world, in software the defense is actually in a naturally stronger position if managed well. Software history is a spiral, not a circle: security practices are improving and legacy is being reduced over time. A lot of the easy mistakes of the past don't happen much, or at all, anymore, and Apple has definitely benefitted long term from the hunt for exploits in hardening its ecosystem. Bugs still exist, but they're not low-hanging fruit anymore; they're harder to find, more restricted in scope, and more of them must be chained together for wider access.
Amen to that. My company does very large cyber security contracts for DoD and has a god-awful home page. It's almost a badge of honor how bad it is.
Maybe there's a style guide for the US intelligence/cybersecurity apparatus that demands ugly design straight out of the 90s? Maybe that style guide is also top secret ;-)
I've been wondering about that since Snowden afforded us a glimpse into the internal PowerPoint world of the NSA. That stuff - all of it - looked just like I would have designed presentations around 1996. When I first discovered PowerPoint. As a kid.
Any organization that has to support computers that were brand new in the '90s ends up with websites like this. You literally cannot do any better. Plus, the same users would want the same look even when they move onto a newer PC, which further constrains designs.
Hell, this is happening to Apple right now - look at their giant wonky volume button and full-screen call window when the rest of the mobile world has already ditched those design concepts. Settings are also still hidden in the Settings menu, even though it's bog standard on Android to be able to go directly to a setting from a toggle via long press.
Is it even known how GrayKey broke iPhones? Like, details of the attack - how did they bypass the passcode-entry delay? I thought those delays were implemented in hardware by the secure chip, but obviously that's not the case.
The screenshot is actually showing a section called "Allow Access When Locked". In order to have your content secured, you'd want USB Accessory access turned off when the phone is locked. The screenshot is correct. Their description of the feature has it backwards.
Does the description change as you flip the toggle? Otherwise it certainly seems like the description implies it should be ON in order to achieve that behavior, not OFF.
Yes, it does. When it's off it says something to the effect of "Turn off to prevent devices from accessing your phone's contents without requiring your passcode."
Wow, when they word it that way, you wonder why it’s an option at all!
But in any case, from the screenshot it seemed confusing but with the full description of the UI it sounds like they got it as right as a cryptic security setting buried 3 menus deep could ever be.
That section is “Allow access when locked...” with a list of things like Siri or returning missed calls. Having USB accessory access disabled when locked is the feature.
Previously I remember seeing some speculation that having tools like GrayKey available can, in a sense, be good for Apple. If the tools are hard enough to get and limited to law enforcement, that shouldn't affect the public perception of iPhones as very safe and secure too much. And if these tools are available in some form, it reduces the pressure from the government on Apple to provide access to data on the devices.
The tweak made to ARM pointers is a protection against return-oriented programming attacks. While it's conceivable that GrayKey's operation relies on attacks of this nature, in all likelihood this is not the answer.
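For context, pointer authentication stores a keyed MAC of a pointer (plus a context value) in the pointer's unused high bits, and the CPU checks it before use, so a ROP chain that overwrites return addresses fails authentication. Here's a toy model of the idea only — the real mechanism is the QARMA cipher in hardware with dedicated instructions, not HMAC in a script:

```python
# Toy model of ARM pointer authentication: a keyed MAC rides in the spare
# high bits of a pointer and is checked before the pointer is used.
import hmac, hashlib, os

KEY = os.urandom(16)     # per-process key, not readable by an attacker

def sign(ptr: int, context: int) -> int:
    """Compute a 16-bit 'PAC' and stash it in the pointer's top bits."""
    mac = hmac.new(KEY, f"{ptr}:{context}".encode(), hashlib.sha256).digest()
    pac = int.from_bytes(mac[:2], "big")
    return (pac << 48) | ptr

def authenticate(signed_ptr: int, context: int) -> int:
    """Strip and verify the PAC; a real CPU would fault instead of raising."""
    ptr = signed_ptr & ((1 << 48) - 1)
    if sign(ptr, context) != signed_ptr:
        raise RuntimeError("pointer authentication failed")
    return ptr

ret_addr = sign(0x1000_2000, context=0xDEAD)   # on function entry
authenticate(ret_addr, context=0xDEAD)         # on return: ok
try:
    authenticate(ret_addr ^ 0x10, context=0xDEAD)  # attacker-modified pointer
except RuntimeError as e:
    print(e)   # fails (with overwhelming probability), like a corrupted ROP gadget
```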
He says to put out a press release saying it's fixed, while laughing that they can just remove the storage from the phone and use other methods to break into the device.
The storage is encrypted using keys known only to the secure enclave on the device (and those keys cannot be extracted from the secure enclave). If you remove the storage, it's useless.
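A toy model of that arrangement (an illustration of the principle only, not Apple's actual key hierarchy; it uses the third-party `cryptography` package, and the `ToyEnclave` class is invented for the example):

```python
# Why desoldering the flash doesn't help: the stored bytes are ciphertext under
# a key that never leaves the (here, simulated) Secure Enclave.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyEnclave:
    def __init__(self, passcode: str):
        self._uid = os.urandom(32)        # fused into the chip, never exported
        self._passcode = passcode

    def unwrap_key(self, passcode_attempt: str):
        # Real hardware entangles the passcode with the UID and rate-limits attempts.
        if passcode_attempt != self._passcode:
            return None
        return self._uid

enclave = ToyEnclave("1234")
key = enclave.unwrap_key("1234")
nonce = os.urandom(12)
flash_contents = AESGCM(key).encrypt(nonce, b"user data", None)

# An attacker who removes the flash gets only `flash_contents`; without the
# enclave releasing the key, it is indistinguishable from random noise.
print(flash_contents.hex())
```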