> Securely erasing saved keys is just as important as generating them. It’s especially challenging to do so on flash storage, where wear-leveling might mean multiple copies of data need to be erased. To address this issue, iOS devices include a feature dedicated to secure data erasure called Effaceable Storage. This feature accesses the underlying storage technology (for example, NAND) to directly address and erase a small number of blocks at a very low level.
I guess that means separate storage, as the main storage in recent iPhones is an NVMe SSD and not raw NAND attached to the processor.
BTW, is there a good / easy way to connect raw NAND to a normal desktop PC?
For what purpose do you want to access raw NAND? If you're okay with just a basic low-speed connection to read the NAND, there is a fairly standardized async protocol which you could achieve with a dozen GPIO pins. You could also use an FPGA or a NAND flash programmer (kind of like the old EPROM programmers).
However, beyond this you need a bit more information to interpret the raw data: the data framing structure, error correction, scrambling, encryption and read-error recovery algorithms. A lot of this information is non-standard or only available under NDA from the manufacturer.
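To give a flavor of that bit-banged interface, here is a minimal sketch of an ONFI-style page read. The gpio* helpers are hypothetical stubs standing in for whatever GPIO driver you would actually use, and the exact command sequence, address cycles and timings have to come from the part's datasheet.

```swift
// Hypothetical GPIO stubs - placeholders for a real GPIO/FPGA/MCU driver.
func gpioSetCLE(_ on: Bool) {}            // command latch enable
func gpioSetALE(_ on: Bool) {}            // address latch enable
func gpioPulseWE() {}                     // strobe write-enable (latch the bus into the chip)
func gpioPulseREAndRead() -> UInt8 { 0 }  // strobe read-enable and sample the 8-bit I/O bus
func gpioWriteBus(_ byte: UInt8) {}       // drive the 8-bit I/O bus
func gpioWaitReady() {}                   // poll the R/B# (ready/busy) pin

// ONFI-style READ PAGE: 00h, column + row address cycles, 30h, wait for R/B#,
// then clock the page (plus spare/OOB bytes) out one RE# pulse at a time.
func readRawPage(addressCycles: [UInt8], pageSize: Int) -> [UInt8] {
    gpioSetCLE(true); gpioWriteBus(0x00); gpioPulseWE(); gpioSetCLE(false)

    gpioSetALE(true)
    for a in addressCycles {              // typically 2 column + 3 row cycles
        gpioWriteBus(a); gpioPulseWE()
    }
    gpioSetALE(false)

    gpioSetCLE(true); gpioWriteBus(0x30); gpioPulseWE(); gpioSetCLE(false)
    gpioWaitReady()

    return (0..<pageSize).map { _ in gpioPulseREAndRead() }
}
```

Even once this works, what you get back is the raw page plus spare area, still subject to the ECC/scrambling caveats above.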
In that case, I'm not familiar with any practical way to interface to the NAND flash except that which comes with some embedded controller boards. But let me add that even if you interface to the NAND, you still need to solve some of the things I mentioned above if the NAND is any technology node below 20nm or so. If you don't do it right, even with decent error correction you will get a high bit error rate.
Do they not know/care about security, or is it simply hard to have something like the Secure Enclave across all Android devices? Genuine question.
Mmm? A secure enclave is present on most Android devices and has been mandatory since Android 6.0.
(It's just called something else.)
Historically Android has lagged a bit behind iOS devices when it comes to security, but Pixels and their software have a very similar security model and design (with some exceptions - less granularity with file-based encryption and some other mostly minor details).
Non-Google devices, however, are usually significantly less secure - not so much due to Android's design as due to manufacturers deliberately disabling Android's security features (e.g. only the Pixel actually uses dm-verity at the moment, if I remember correctly), refusing to update them, building devices with bad TrustZone drivers, etc.
If you stick to first-party (Google-branded) devices, as in the iOS world, you're mostly OK.
Yes, they're using the ARM Trusted Execution Environment rather than a separate Enclave chip with its separate OS (L4). Apple is an ARM architecture licensee, designing their own compatible chips. So the TEE would have been an available path (compatibility is still required, no?) but they instead went the extra yard with a separate Enclave chip. As their white paper details, they also go to insane levels with that chip and moreover with its communications rather than just trust the TEE within an ARM chip and call it a day.
More recent ARM chips (A9+) come bundled with ARM TrustZone[1]. In a nutshell, the processor has two hardware-isolated execution environments, each running a different OS and different software. By default, the secure environment of TrustZone runs an L4 kernel (edit: this is incorrect, see reply below).
Could it be the case that Apple is leveraging TrustZone but with a customized L4 kernel? Or is it confirmed that the Secure Enclave is a custom IC designed by Apple? I wouldn't be surprised if it's the former as it becomes much cheaper to implement the required security features.
I know that mate, no need to get snarky. What I meant is that it was bundled with the core by default, but thanks for the correction. I thought I read it somewhere, but judging by a quick search, it seems I'm mistaken.
Yes, indeed they went above and beyond - probably because they also need to defend not only against external threats, but against the user of the device himself to keep the walled garden intact.
Yes (dunno why all the downvotes) but Apple went even further than the walled garden would require. They could have easily left an Apple backdoor. But they encrypt the protocol going over wires to/from the Enclave. They go insanely far rather than sufficiently far.
Yeah, nation state level attacks will still work, especially if they have the phone. But with Android it's not nation state level. It's corporate level and maybe less if they have the phone.
I felt that Apple's description of the initial key setup between the enclave and the main processor was hand-wavy at best.
I know of another similar implementation that's used by Microsemi for their FPGA-based secure boot process[1]. They claim to protect the initial AES key transmission using an "obfuscated" crypto library that is sent to the processor over SPI on boot[2]. Also, I wonder if Apple exchanges a nonce during the setup to prevent replay attacks?
[2]: It's a C/C++ library called WhiteboxCRYPTO. There is a whitepaper (http://soc.microsemi.com/interact/default.aspx?p=E464), but AFAIK the gist of their argument is that the code and keys are sufficiently obfuscated to prevent reverse engineering (typical marketing-speak).
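For what it's worth, the standard way a nonce blocks replay in this kind of setup is a challenge-response over a pre-shared key. A minimal, purely illustrative sketch using a generic HMAC construction (not anything documented by Apple or Microsemi):

```swift
import Foundation
import CryptoKit

// Illustrative only: a shared key is assumed to already exist between the two
// sides, however it was provisioned. Nothing here reflects Apple's actual protocol.
let sharedKey = SymmetricKey(size: .bits256)

// Verifier: issue a fresh random challenge for this session.
let nonce = Data((0..<16).map { _ in UInt8.random(in: .min ... .max) })

// Prover: bind its response to this specific nonce.
let response = HMAC<SHA256>.authenticationCode(for: nonce, using: sharedKey)

// Verifier: accept only a response computed over the nonce it just issued,
// so a response captured from an earlier session is useless (no replay).
let accepted = HMAC<SHA256>.isValidAuthenticationCode(response,
                                                      authenticating: nonce,
                                                      using: sharedKey)
print("challenge accepted:", accepted)
```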
There was an article about iOS security where someone argued that Apple controls the enclave for security reasons, to which I answered that this is basically security by obscurity. You can see I was downvoted for this: https://news.ycombinator.com/item?id=13676135
I still downvoted izacus because it was an uncharitable fanboy ramble. The charitable interpretation would be that the walled garden (in regard to the enclave) is a side effect of their implementation, not the intention.
My OnePlus 3T uses dm-verity as well, sadly. Displaying an "unlocked" badge during boot is acceptable. Actually pausing boot for 10 seconds every time is not by a long shot.
But to me, they seem to be trying to find a moderate level of security with a profitable cost of goods. It doesn't seem that their heart is in it the way Apple's is with the Enclave. iOS is still breakable at the nation-state level but, well, that's quite a high bar. Nation states are breakable at the nation-state level.
It feels like users don't care about security. Nearly all Android phones are not running a supported OS.[1] As an Android user, it appears my only choice is a custom ROM or buying a new device every two years.
It's probably a little bit of both - they don't care as much when it comes to Android (since Android's "open" nature is one of its primary selling points) and they also don't control the entire supply chain like Apple does.
The latter is important: at the end of the day, software can only be as secure as the hardware on which it is installed. For example, if someone can tamper with the hardware random number generator, then your crypto becomes compromised.
Agreed. Apple's vertical integration is really nice here. As an Android OEM it's tough to have to consider all the trade-offs between different HW vendors (best HW for battery/performance may not be so great for security or software support, etc) versus being a consumer of some internal group.
This is one of the reasons why Apple is still great. Their designs are thoughtful and deeply-considered. They may disappear up their own asses with a fair amount of regularity, but that doesn't stop them from excelling in certain areas (such as, indeed, privacy and security).
And stuff like accessibility and environmental sustainability.
There's a bunch of areas which only matter a lot to a small group of people where, when you investigate it, Apple has quietly been doing the right thing for a long time.
How so? You mean from user access and usability standpoint?
I know I certainly wish there was a Keychain access app like on macOS available for iOS rather than only being able to access passwords via Safari settings.
Probably referring to the key distribution process, where to enable iCloud Keychain you have to approve from another device. It's a sound design in theory - the keys are only stored locally, so even Apple can't access them - but I've personally experienced issues several times where the approval notification wouldn't show up on my other devices, or the UI was in an inconsistent state, etc.
It's true - I think one of Apple's main stumbling points lately has been failing to consider the user experience in cases where, for example, a good Internet connection isn't available, or a particular piece of data has an unexpected attribute.
Thanks for saying what I should have said. I have devices that apparently refuse to sync the keychain. One device has the right key for something but the other does not, and there is no indication of why.
It's worth noting that the iOS and macOS keychains are very different. In fact, if you check the design docs you'll find they share only four functions between them. Essentially, as far as I could tell, the iOS keychain participates in the sandboxing of apps, while the macOS one does not.
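For context, iOS apps reach the keychain through the SecItem calls in the Security framework, and items are scoped to the calling app's keychain access group, which is where that sandboxing shows up. A minimal sketch with placeholder service/account names:

```swift
import Foundation
import Security

// Placeholder identifiers - not real services or accounts.
let baseQuery: [String: Any] = [
    kSecClass as String:       kSecClassGenericPassword,
    kSecAttrService as String: "com.example.myapp",
    kSecAttrAccount as String: "demo-account"
]

// Store a secret. On iOS the item is tagged with the calling app's keychain
// access group, so apps outside that group can't read it back.
var addQuery = baseQuery
addQuery[kSecValueData as String] = Data("s3cret".utf8)
addQuery[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
let addStatus = SecItemAdd(addQuery as CFDictionary, nil)   // errSecDuplicateItem if rerun
print("add:", addStatus == errSecSuccess)

// Read it back.
var readQuery = baseQuery
readQuery[kSecReturnData as String] = true
readQuery[kSecMatchLimit as String] = kSecMatchLimitOne
var item: CFTypeRef?
if SecItemCopyMatching(readQuery as CFDictionary, &item) == errSecSuccess,
   let data = item as? Data {
    print("read back:", String(decoding: data, as: UTF8.self))
}
```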
What's lacking is a requirement that App Store apps must cooperate with user privacy settings. If the user denies an app access to location services, contacts, or calendars, Apple should require that the app still run. For example, if the user denies the Uber app location information when the app is not being used, Uber car ordering should still work. Apps should not be allowed to demand access they do not need to serve the user.
On Android, GM's Maven car-sharing app (similar to ZipCar) does not run unless all permissions are granted, which include the ability to manage phone calls.
The Chinese WeChat messenger also refuses to run unless location access is granted, even though messaging apps do not depend on location to work.
This type of behavior makes fine-grained permissions systems not very useful. It should be prohibited by the Apple App Store and Google Play Store.
Agreed - I don't think this type of thing should be allowed.
I'd be very happy if Apple required all apps that use location services to offer the 'when app is running' option - seems perfectly reasonable to me.
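For reference, CoreLocation already exposes exactly that distinction to developers; a short sketch of the two request calls (the matching usage-description keys must also be present in Info.plist):

```swift
import CoreLocation

let locationManager = CLLocationManager()

// The "while using the app" prompt - the option the parent comment wants required.
// Needs NSLocationWhenInUseUsageDescription in Info.plist.
locationManager.requestWhenInUseAuthorization()

// The background variant that apps like Uber steer users toward instead.
// Needs the corresponding "Always" usage-description key in Info.plist.
// locationManager.requestAlwaysAuthorization()
```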
Perhaps this has changed recently, but the last time I tried to use Uber without "allow location access even when not using the app," I was unable to call a ride. Instead, I was given instructions on how to enable that setting.
I've used it constantly without location services since the day they first requested the feature. When I open it, the splash screen has 2 options - Enable Location Services and Enter Pickup Address. Perhaps you missed the second option?
Signal won't run without access to your contacts (at least on iOS). Whether that's considered "unreasonable" is being actively argued on Twitter at the moment...
If you allow user-generated usernames, what's to stop me signing up as Linus Torvalds or Hillary Clinton, and creating drama for the lulz?
Using the phone number as a unique and verifiable identifier seems like a pragmatic - if not perfect - choice. The SMS confirmation makes it much more difficult for me to impersonate Linus or Hillary, because I'd need to spoof their phone number _and_ respond to an SMS sent to it. Not nation-state secure, but better than nothing...
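To make the mechanics concrete, here is a toy sketch of that confirmation step (illustrative only, not Signal's actual implementation): the server binds a short-lived one-time code to the phone number, and only someone who can receive SMS at that number can echo it back.

```swift
import Foundation

// Toy model of SMS phone verification. Names and structure are illustrative.
struct VerificationStore {
    private var pending: [String: (code: String, expires: Date)] = [:]

    // Server side: create a short-lived one-time code and hand it to an SMS gateway.
    mutating func issueCode(for phoneNumber: String) -> String {
        let code = String(format: "%06d", Int.random(in: 0...999_999))
        pending[phoneNumber] = (code, Date().addingTimeInterval(10 * 60))  // 10-minute expiry
        return code
    }

    // Server side: registering as this number requires echoing the code sent to it.
    mutating func verify(phoneNumber: String, code: String) -> Bool {
        guard let entry = pending[phoneNumber], entry.expires > Date() else { return false }
        pending[phoneNumber] = nil                                         // single use
        return entry.code == code
    }
}

var store = VerificationStore()
let sent = store.issueCode(for: "+15551234567")                // delivered over SMS in reality
print(store.verify(phoneNumber: "+15551234567", code: sent))   // true
```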
The other problem Moxie's trying to solve is the discoverability problem - which jwz _doesn't_ want solved (nor do people with abusive exes, or the other categories of users Signal is often very vocally advocated for: "Use TOR. Use Signal. Use a VPN!!!"). Moxie wants to be able to calculate the intersection of your contact list with every other Signal user's contact list, so it can prompt you to let you know you can use Signal to communicate with them, which you'd otherwise probably not know. And as he says, to be most valuable, e2e encrypted messaging needs to become the default messaging channel under normal use, so it won't need to be installed/set up/learned under stress when its need becomes critical.
I think Signal's got the "soundbite message" of what they do very carefully crafted and it's very enticing, but by nature soundbite-sized or elevator-pitch-sized messages inevitably leave out the complexity of edge cases.
I'm 99.99% sure Moxie isn't lying about what we could all read in the source code if we cared enough to spend the time reading it - all the people jwz is worried about sending him Signal messages already had his phone number in their contact lists, so they could already have been sending him text messages. Moxie's view is that jwz is better off having all those people know they can _also_ contact him using e2e encrypted messaging. jwz doesn't agree, and doesn't think letting all those people know he has installed an encrypted messaging app is "privacy protecting". There's certainly merit in both points of view.
Yeah - but if you're interested/curious, copy-paste the link into your browser. Moxie weighs in in the comments on jwz's blog there.
The "interesting" bit (to me) of Moxie's explanation of what happens is that Signal sends "truncated sha256 hashes" to the Signal servers so it can compute the intersection of all the numbers it scrapes from your contact list with everyone else's.
Seems to me there's just not enough entropy in phone numbers to make that nation-state secure.
If Moxie gets served a warrant (and an NSL), it won't take _too_ much effort to reverse all those truncated SHA intersections out into a social graph...
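The entropy problem is easy to make concrete: there are only about 10^10 possible ten-digit numbers, so anyone holding the truncated hashes can precompute a reverse lookup table. A rough sketch using a generic truncated SHA-256 (Signal's exact encoding and truncation may differ):

```swift
import Foundation
import CryptoKit

// Truncate a SHA-256 digest of a phone number to its first 10 bytes
// (generic illustration - not Signal's exact formatting).
func truncatedHash(of phoneNumber: String) -> Data {
    Data(SHA256.hash(data: Data(phoneNumber.utf8)).prefix(10))
}

// An adversary with the hashes just enumerates the (small) phone-number space.
// Roughly 10^10 candidates per country code is trivial offline precomputation.
var reverseTable: [Data: String] = [:]
for n in 5_550_000_000...5_550_000_999 {          // tiny demo slice of the space
    let number = "+1\(n)"
    reverseTable[truncatedHash(of: number)] = number
}

let observed = truncatedHash(of: "+15550000123")   // a truncated hash seen by the server
print(reverseTable[observed] ?? "not found")       // recovers the phone number
```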
But then Moxie's POV seems to be "those people would get that same info from your telco records if you use SMS, and at least that's the _only_ metadata we leak; your telco probably hands that over without a warrant, along with at least the date/time of every SMS you've ever sent or received and quite probably the contents as well..."
I lean a lot towards jwz's argument that they're _way_ overselling the privacy-preserving nature of Signal. Especially if one of your adversaries is someone who knows your mobile number and would benefit from knowing you choose to use encrypted communication (like, say, everybody in the UK right now...)
I really do respect Apple's attention to security and privacy; however, I was a little disappointed when I came across an Apple ID leak from their login form [0] last week. They patched it a couple of days after I reported it, but still haven't responded to my initial report. It's quite concerning given how easily this simple flaw could have been used for malicious purposes to collect potentially millions of Apple IDs.
Literally every company is going to have some non-zero number of security leaks. I don't think it's reasonable to be disappointed in an entire company because of a bug written by (likely) one engineer. God knows I've written my share, but none of my software is on routes easily accessible to the public. Unless it's part of a larger pattern, this reaction is going to lead to you being disappointed with 100% of producers of software, past, present, and future, which doesn't seem like a useful state.
Incidentally, in the list of IDs you published, are those real? If they are: that's BS that you are publishing real people's IDs, and I'm also surprised by the number of numeric qq.com accounts.
Anyways, it's a privacy violation. Apple shouldn't be handing out your email without your permission. Facebook has a setting allowing you to choose whether you want your email to be public or not.
Sure, you could spam them, but that's not a special property of Apple IDs. Apple certainly shouldn't be leaking emails, but describing this as "leaking Apple IDs" instead of "leaking emails" makes it sound like it's more serious than it is.
The only one I could find provided by Apple applies to 10.6 Snow Leopard. Strange that there isn't a similar security guide for macOS like the one they have for iOS.
I suggest you read the Android Security 2016 Year in Review before posting any further links to blog sites whose primary goal is to post scaremongering articles for clickbait. According to Adrian Ludwig, there has not been one known successful Stagefright exploit in the wild.
Anyone who knows how to use a smartphone properly simply won't have security problems to deal with.
What difference does any of this make with iOS when we all know the US Gov't can simply access backdoors whenever they please? Don't fall for this security meme.
What does any of this matter when iCloud is a hacker's dream??
And I didn't ask if HN readers' phones CAN be hacked, I simply asked if they WERE hacked. F off, elitist troll.
"There have not been any breaches in any of Apple’s systems including iCloud and Apple ID," the spokesperson said. "The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services."