iOS 10 Security White Paper [pdf] (apple.com)
273 points by IBM on March 25, 2017 | 94 comments



This is really cool:

> Securely erasing saved keys is just as important as generating them. It’s especially challenging to do so on flash storage, where wear-leveling might mean multiple copies of data need to be erased. To address this issue, iOS devices include a feature dedicated to secure data erasure called Effaceable Storage. This feature accesses the underlying storage technology (for example, NAND) to directly address and erase a small number of blocks at a very low level.

I guess that means separate storage, as the main storage in recent iPhones is an NVMe SSD and not raw NAND attached to the processor.

BTW, is there a good / easy way to connect raw NAND to a normal desktop PC?


For what purpose do you want to access raw NAND? If you are okay with just a basic low-speed connection to read the NAND, there is a fairly standardized async protocol which you could drive with a dozen GPIO pins. You could also use an FPGA or a NAND flash programmer (kind of like the old EPROM programmers).

However, beyond this, you need to know a bit more in order to interpret the raw data: any data framing structure, error correction, scrambling, encryption, and read-error recovery algorithms. A lot of this information is non-standard or only available under NDA from the manufacturer.
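
For a rough idea of what that "dozen GPIO pins" approach looks like, here is a minimal sketch of an ONFI-style async page read. The `nand` object and its pin helpers (write_byte, set_cle, wait_ready, ...) are hypothetical stand-ins for whatever GPIO library your board actually provides, and real parts differ in address cycles and timing:

    # Sketch of an ONFI-style async page read, bit-banged over GPIO.
    # The pin helpers below are hypothetical; map them to your GPIO library.
    CMD_READ_1ST = 0x00   # READ command, first cycle
    CMD_READ_2ND = 0x30   # READ command, confirm cycle

    def read_page(nand, row_addr, page_size=2048, spare=64):
        """Read one page (data + spare area) from a raw NAND device."""
        nand.chip_enable(True)

        # Command cycle: latch 0x00 with CLE high
        nand.set_cle(True)
        nand.write_byte(CMD_READ_1ST)
        nand.set_cle(False)

        # Address cycles: 2 column bytes + 3 row bytes, latched with ALE high
        nand.set_ale(True)
        nand.write_byte(0x00)                     # column low
        nand.write_byte(0x00)                     # column high
        nand.write_byte(row_addr & 0xFF)          # row low
        nand.write_byte((row_addr >> 8) & 0xFF)   # row mid
        nand.write_byte((row_addr >> 16) & 0xFF)  # row high
        nand.set_ale(False)

        # Confirm cycle, then wait for the ready/busy line
        nand.set_cle(True)
        nand.write_byte(CMD_READ_2ND)
        nand.set_cle(False)
        nand.wait_ready()

        # Clock out the raw page; undoing ECC/scrambling is still up to you
        return bytes(nand.read_byte() for _ in range(page_size + spare))

That only gets you the raw bytes, of course - the framing, ECC, and scrambling issues mentioned above still apply.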


I just want it to show up in the OS as MTD so e.g. I could use it with JFFS2 like OpenWrt does on routers. But on a desktop PC.


In that case, I'm not familiar with any practical way to interface to the NAND flash except what comes with some embedded controller boards. But let me add that even if you do interface to the NAND, you still need to solve some of the things I mentioned above if the NAND is at any technology node below 20nm or so. If you don't do it right, you will get a high bit error rate even with decent error correction.


It's cool but note that you would never be saving secrets in cleartext to disk.



And here's a direct link to the computerality diff in plain text for easy viewing on mobile:

https://gist.githubusercontent.com/computerality/3e0bc104cd2...


Its previous edition was required reading for CS 161 at Berkeley. Would that it were required reading in Mountain View.

http://www-inst.cs.berkeley.edu/~cs161/fa16/

(Yeah, it says optional on the syllabus but Weaver said required in lecture.)


Is it that they don't know/care about security, or is it simply hard to have something like the Secure Enclave across all Android devices? Genuine question.


Mmm? A secure enclave is present on most Android devices and has been mandatory since Android 6.0.

(It's just called something else.)

Historically Android has lagged a bit behind iOS devices when it comes to security, but Pixels and their software have a very similar security model and design (with some exceptions - less granularity with file-based encryption and some other mostly minor details).

Non-Google devices, however, are usually significantly less secure - not so much due to Android's design as due to manufacturers deliberately disabling Android's security features (e.g. only the Pixel actually uses dm-verity at this moment, if I remember correctly), refusing to update them, building devices with bad TrustZone drivers... etc.

If you stick to first-party (Google-branded) devices, as in the iOS world, you're mostly OK.


Yes, they're using the ARM Trusted Execution Environment rather than a separate Enclave chip with its own separate OS (L4). Apple is an ARM architecture licensee, designing their own compatible chips. So the TEE would have been an available path (compatibility is still required, no?), but they instead went the extra mile with a separate Enclave chip. As their white paper details, they also go to insane lengths with that chip, and moreover with its communications, rather than just trusting the TEE within an ARM chip and calling it a day.


More recent ARM chips (A9+) come bundled with ARM TrustZone[1]. In a nutshell, the processor has two (hardware) isolated execution environments each running a different OS and different software. By default, the secure environment of TrustZone runs an L4 kernel (edit: this is incorrect, see reply below).

Could it be the case that Apple is leveraging TrustZone but with a customized L4 kernel? Or is it confirmed that the Secure Enclave is a custom IC designed by Apple? I wouldn't be surprised if it's the former as it becomes much cheaper to implement the required security features.

Edit: Check out this previous discussion on this exact topic: https://news.ycombinator.com/item?id=8410700

[1]: https://www.arm.com/products/security-on-arm/trustzone


> By default, the secure environment of TrustZone runs an L4 kernel.

By default no SW runs on HW. "Mobicore" (now called "Kinibi" from Trustonic) is based on L4.


I know that mate, no need to get snarky. What I meant is that it was bundled with the core by default, but thanks for the correction. I thought I read it somewhere, but judging by a quick search, it seems I'm mistaken.


What is bundled or not depends on the HW manufacturer and SKU. Even ROM code can differ per SKU.


TrustZone was announced in 2012 (?). The Secure Enclave is a separate, very much Apple-designed chip. They've patented aspects of it, also dated 2012:

https://www.blackhat.com/docs/us-16/materials/us-16-Mandt-De... https://www.google.com/patents/US8832465


> TrustZone was announced 2012

No, 2012 was when Trustonic was formed from competing TEE vendors: ARM, Gemalto, and Giesecke & Devrient.

TrustZone has been around since before that. TI OMAP chips were front-runners in using it.


Yes, indeed they went above and beyond - probably because they also need to defend not only against external threats, but also against the user of the device themselves, to keep the walled garden intact.


Yes (dunno why all the downvotes) but Apple went even further than the walled garden would require. They could have easily left an Apple backdoor. But they encrypt the protocol going over wires to/from the Enclave. They go insanely far rather than sufficiently far.

Yeah, nation state level attacks will still work, especially if they have the phone. But with Android it's not nation state level. It's corporate level and maybe less if they have the phone.


I felt that Apple's description of the initial key setup between the enclave and the main processor was hand-wavy at best.

I know of another similar implementation that's used by Microsemi for their FPGA-based secure boot process[1]. They claim to protect the initial AES key transmission using an "obfuscated" crypto library that is sent to the processor over SPI on boot[2]. Also, I wonder if Apple exchanges a nonce during the setup to prevent replay attacks?

[1]: https://www.microsemi.com/products/fpga-soc/security/secure-...

[2]: It's a C/C++ library called WhiteboxCRYPTO. There is a whitepaper (http://soc.microsemi.com/interact/default.aspx?p=E464), but AFAIK the gist of their argument is that the code and keys are sufficiently obfuscated to prevent reverse engineering (typical marketing-speak).
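
To make the replay question concrete, here is a toy sketch of the generic challenge-response pattern a fresh nonce buys you. This is not Apple's or Microsemi's actual protocol, just an illustration of why a captured response can't be replayed against a new challenge; the shared key here is a stand-in for whatever secret gets provisioned at manufacture:

    # Toy illustration: a fresh nonce per session makes a replayed response useless.
    import hmac, hashlib, secrets

    SHARED_KEY = secrets.token_bytes(32)   # stand-in for a provisioned secret

    def respond(challenge):
        # Prover: MAC the verifier's nonce with the shared key
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

    def verify(challenge, response):
        expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    nonce1 = secrets.token_bytes(16)       # fresh challenge for this session
    resp1 = respond(nonce1)
    assert verify(nonce1, resp1)           # legitimate exchange succeeds

    nonce2 = secrets.token_bytes(16)       # next session, new challenge...
    assert not verify(nonce2, resp1)       # ...so replaying the old response fails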


Read the Apple patent I cited above. Apple isn't exporting an API so don't expect much in the docs. But they do have to teach in the patent.


There was an article about iOS security where someone argued that Apple controls the enclave for security reasons, to which I answered that this is basically security by obscurity. You can see I was downvoted for this: https://news.ycombinator.com/item?id=13676135

I still downvoted izacus because it was uncharitable fanboy rambling. The charitable interpretation would be that the walled garden (in regard to the enclave) is a side effect of their implementation, and not the intention.


My OnePlus 3T uses dm-verity as well, sadly. Displaying an "unlocked" badge during boot is acceptable. Actually pausing boot for 10 seconds every time is not by a long shot.


That's good to know when recommending devices.


Nexus 5x and 6p have dm-verity as well.


Yes, IMHO they know. Their Chrome security model was required reading in CS 261.

https://people.eecs.berkeley.edu/~raluca/cs261-f15/

But to me, they seem to be trying to find a moderate level of security with a profitable cost of goods. It doesn't seem that their heart is in it the way Apple's is with the Enclave. iOS is still breakable at the nation-state level, but that's quite a high bar. Nation states are breakable at the nation-state level.


Do you have a source on the "breakable at the nation state level"? Last I heard they've only been able to compromise models before the 6.


"Breakable" and "have been broken" are different budgets.


It feels like users don't care about security. Nearly all Android phones are not running a supported OS.[1] As an Android user it appears my only choice is a custom ROM or buying a new device every 2 years.

[1] - https://developer.android.com/about/dashboards/index.html#Pl...


If you go with the Pixel, which is basically the iPhone of Android, you'll get an update experience similar to iOS.


You are mistaken. The Pixel has the same 2-year support length as the Nexus series.

iPhones are typically supported for 4 years.


Nitpick: Pixel has 3 years of security updates, and 2 years of Android OS updates.


I think you get something like 3 years - not as good as iOS, but by the 4th year you only get a very crippled version of the newest iOS anyway.


It's probably a little bit of both - they don't care as much when it comes to Android (since Android's "open" nature is one of its primary selling points) and they also don't control the entire supply chain like Apple does.

The latter is important: at the end of the day, software can only be as secure as the hardware on which it is installed. For example, if someone can tamper with the hardware random number generator, then your crypto becomes compromised.


I would guess that it is tougher to secure the OS when you don't own the hardware, although I don't know enough about this to comment.


Agreed. Apple's vertical integration is really nice here. As an Android OEM it's tough to have to consider all the trade-offs between different HW vendors (best HW for battery/performance may not be so great for security or software support, etc) versus being a consumer of some internal group.


This is one of the reasons why Apple is still great. Their designs are thoughtful and deeply-considered. They may disappear up their own asses with a fair amount of regularity, but that doesn't stop them from excelling in certain areas (such as, indeed, privacy and security).


And stuff like accessibility and environmental sustainability.

There are a bunch of areas that only matter a lot to a small group of people where, when you investigate, Apple has quietly been doing the right thing for a long time.


It is sad that something like the iCloud Keychain is so poorly implemented across the different devices.


How so? You mean from a user-access and usability standpoint?

I certainly wish there were a Keychain Access app for iOS, like the one on macOS, rather than only being able to access passwords via Safari settings.


Probably referring to the key distribution process, where to enable iCloud Keychain you have to approve from another device. It's a sound design in theory - the keys are only stored locally, so even Apple can't access them - but I've personally experienced issues several times where the approval notification wouldn't show up on my other devices, or the UI was in an inconsistent state, etc.


It's true - I think one of Apple's main stumbling points lately has been failing to consider the user experience in cases where, for example, a good Internet connection isn't available, or a particular piece of data has an unexpected attribute.


Thanks for saying what I should have said. I have devices that apparently refuse to sync the keychain. One device has the right key for something but the other does not, and there is no indication of why.


It's worth noting that the iOS and macOS keychains are very different. In fact, if you check the design docs you'll find they share only four functions between them. Essentially, as far as I could tell, the iOS keychain participates in the sandboxing of apps, while the macOS one does not.


What's lacking is a requirement that App Store apps must cooperate with user privacy settings. If the user denies an app access to location services, contacts, or calendars, Apple should require that the app still run. For example, if the user denies the Uber app location information when the app is not being used, Uber car ordering should still work. Apps should not be allowed to demand access they do not need to serve the user.


This. Every app that requests "Always" location access should have an option available to restrict it to "While Using" - there's no reason not to.


You picked a bad example, as Uber car ordering does work with location services disabled.

Do any better examples come to mind of apps that refuse to run unless they have an unreasonable permission granted?


On Android, GM's Maven car-sharing app (similar to ZipCar) does not run unless all permissions are granted, which include the ability to manage phone calls.

The Chinese WeChat messenger also refuses to run unless location access is granted, even though messaging apps do not depend on location to work.

This type of behavior makes fine-grained permissions systems not very useful. It should be prohibited by the Apple App Store and Google Play Store.


Agreed - I don't think this type of thing should be allowed. I'd be very happy if Apple required all apps that use location services to offer the "when app is running" option - seems perfectly reasonable to me.


Perhaps this has changed recently, but the last time I tried to use Uber without "allow location access even when not using the app," I was unable to call a ride. Instead, I was given instructions on how to enable that setting.


I've used it constantly without location services since the day they first requested the feature. When I open it, the splash screen has 2 options - Enable Location Services and Enter Pickup Address. Perhaps you missed the second option?


Signal won't run without access to your contacts (at least on iOS). Whether that's considered "unreasonable" is being actively argued on Twitter at the moment...


Personally, I don't consider requiring a phone number reasonable. Usernames should be allowed for those who don't have or don't want phones.


I dunno. It's hard.

If you allow user-generated usernames, what's to stop me signing up as Linus Torvalds or Hillary Clinton, and creating drama for the lulz?

Using the phone number as a unique and verifiable identifier seems like a pragmatic - if not perfect - choice. The SMS confirmation makes it much more difficult for me to impersonate Linus or Hillary, because I'd need to spoof their phone number _and_ respond to an SMS sent to it. Not nation-state secure, but better than nothing...

The other problem Moxie's trying to solve is the discoverability problem - which jwz _doesn't_ want solved (nor do people with abusive exes, or the other categories of users Signal is often very vocally advocated for: "Use Tor. Use Signal. Use a VPN!!!"). Moxie wants to be able to calculate the intersection of your contact list with every other Signal user's contact list, so it can prompt you to let you know you can use Signal to communicate with them, which you'd otherwise probably not know. And as he says, to be most valuable, e2e encrypted messaging needs to become the default messaging channel under normal use, so it won't need to be installed/set up/learned under stress when the need for it becomes critical.

I think Signal's got the "soundbite message" of what they do very carefully crafted, and it's very enticing, but by nature a soundbite-sized or elevator-pitch-sized message inevitably leaves out the complexity of edge cases.

I'm 99.99% sure Moxie isn't lying about what we could all read in the source code if we cared enough to spend the time reading it - all the people jwz is concerned about sending him Signal messages already had his phone number in their contact lists, so they could have already been sending him text messages. Moxie's view is that jwz is better off having all those people know they can _also_ contact him using e2e encrypted messaging. jwz doesn't agree, and doesn't think letting all those people know he has installed an encrypted messaging app is "privacy protecting". There's certainly merit in both points of view.


I don't often join in, but the fact that this doesn't work without contacts seems alarming. Any guess as to why?



Be careful about jwz links on HN. He detects the referrer and redirects to a prank image.


Whoops. And it's too late now to edit the link to point at a referrer-stripper, too.


Yeah - but if you're interested/curious, copy-paste the link into your browser. Moxie weighs in in the comments on jwz's blog there.

The "interesting" bit (to me) of Moxie's explanation of what happens is that Signal sends "truncated sha256 hashes" to the Signal servers so it can compute the intersection of all the numbers it scrapes from your contact list with everyone elses.

Seems to me there's just not enough entropy in phone numbers to make that nation-state secure.

If Moxie gets served a warrant (and an NSL), it won't take _too_ much effort to reverse out all those truncated SHA intersections into a social graph...
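
To put a number on that: the entire ten-digit phone number space is only around 10^10 values, so anyone holding the hashes can simply enumerate it. A back-of-the-envelope sketch (the number, prefix, and truncation length here are made up, not Signal's actual parameters):

    # Brute-forcing a truncated SHA-256 of a phone number by enumeration.
    import hashlib

    def truncated_hash(number, length=10):
        return hashlib.sha256(number.encode()).hexdigest()[:length]

    # What the server (or someone with a warrant) holds
    leaked = truncated_hash("+14155550123")

    # Walk the candidate space; here just a tiny prefix for demonstration
    def reverse(target, prefix="+1415555"):
        for i in range(10_000):
            candidate = "%s%04d" % (prefix, i)
            if truncated_hash(candidate) == target:
                return candidate
        return None

    print(reverse(leaked))   # -> +14155550123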

But then Moxie's POV seems to be "those people would get that same info from your telco records if you use SMS, and at least that's the _only_ metadata we leak; your telco probably hands that over without a warrant, along with at least the date/time of every SMS you've ever sent or received, and quite probably the contents as well..."

I lean a lot towards jwz's argument that they're _way_ overselling the privacy-preserving nature of Signal. Especially if one of your adversaries is someone who knows your mobile number and would benefit from knowing you choose to use encrypted communication (like, say, everybody in the UK right now...)


Use a service that respects that then. Lyft.


I really do respect Apple's attention to security and privacy; however, I was a little disappointed when I came across an Apple ID leak in their login form [0] last week. They patched it a couple of days after I reported it, but still haven't responded to my initial report. It's quite concerning given how easily this simple flaw could have been used for malicious purposes to potentially collect millions of Apple IDs.

[0] https://github.com/zaytoun/Apple-ID-Data-Leak


That looks like extremely irresponsible disclosure? Publishing to GitHub and then "edit: I contacted apple"

????


There's nothing wrong with disclosing a security bug immediately.

https://hn.algolia.com/?query=author:tptacek%20responsible%2...


Wow, he's nothing if not consistent... you gotta respect that. Same opinion and phrasing going back 4+ years!



Literally every company is going to have some non-zero number of security leaks. I don't think it's reasonable to be disappointed in an entire company because of a bug written by (likely) one engineer. God knows I've written my share, but none of my software is on routes easily accessible to the public. Unless it's part of a larger pattern, this reaction is going to lead to you being disappointed with 100% of producers of software, past, present, and future, which doesn't seem like a useful state.

Incidentally, in the list of IDs you published, are those real? If they are: that's BS that you are publishing real people's IDs, and I'm also surprised by the number of numeric qq.com accounts.


IIRC, QQ uses a number as the "username," sort of like a phone number.


A list of Apple IDs isn't exactly a big deal. I've got mine listed in my HN profile.


And what exactly would you do with those Apple IDs? Just knowing the email address doesn't really get you very far.


You could spam them.

Anyway, it's a privacy violation. Apple shouldn't be handing out your email without your permission. Facebook has a setting that lets you choose whether you want your email to be public or not.


Sure, you could spam them, but that's not a special property of Apple IDs. Apple certainly shouldn't be leaking emails, but describing this as "leaking Apple IDs" instead of "leaking emails" makes it sound like it's more serious than it is.



Did they never release an iOS 10 security paper before now? I'm asking because iOS 10 has been out for a while and iOS 11 is just around the corner.



Slightly off-topic: that can't be a WordPress instance, can it? I've only seen `wp-content` paths on WordPress blogs...



WordPress is popular, man.


Amazing. Microsoft running WordPress on the main corporate site.


Their LEO guide is just as good, but it hasn't been updated in a while.


Link?


Is there any similar document about macOS?


The only one I could find from Apple applies to 10.6 Snow Leopard. It's strange that there isn't a similar security guide for macOS like they have for iOS.


[flagged]


> Anyone here on an Android phone ever been hacked?

You must be new, but here are some resources I suggest you review before you go on a crusade in future Apple articles:

https://en.wikipedia.org/wiki/Stagefright_(bug)

https://arstechnica.com/security/2016/06/godless-apps-some-f...

https://arstechnica.com/security/2016/10/android-phones-root...

http://blog.elevenpaths.com/2016/07/another-month-another-ne...

Also, keep your bait on other sites like Reddit; we don't stand for it here on HN.


I suggest you read the Android Security 2016 Year in Review before posting any further links to blog sites whose primary goal is to post scaremongering articles for clickbait. According to Adrian Ludwig, there has not been one known successful Stagefright exploit in the wild.

https://static.googleusercontent.com/media/source.android.co...


> I suggest you read the Android Security 2016 Year in Review

Oh I have, and I am well aware of the links I chose and their accuracy, regardless of you having a different "primary goal" in mind.

Thank you though, but I've got it from here.


FWIW, the Wii U jailbreak is via Stagefright. I realize it's not Android, but it's worth noting it has been used in the wild in some capacity.


Interesting. I didn't know the Wii U used Stagefright code. The Switch also uses Android code and was recently hacked using WebKit exploits.


Anyone who knows how to use a smartphone properly simply won't have security problems to deal with.

What difference does any of this make with iOS when we all know the US Gov't can simply access backdoors whenever they please? Don't fall for this security meme.

What does any of this matter when iCloud is a hacker's dream??

And I didn't ask if HN readers' phones CAN be hacked, I simply asked if they WERE hacked. F off, elitist troll.


We've banned this account for violating the site guidelines.


"Propaganda" LOL.


[flagged]


Please don't do this here.



"There have not been any breaches in any of Apple’s systems including iCloud and Apple ID," the spokesperson said. "The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services."

http://fortune.com/2017/03/22/apple-iphone-hacker-ransom/


This was a better comment before you "improved" it with the CVE count baloney.



