Agreed. In other comments here however, there are folks arguing that this shows iOS both is/isn't secure and that Project Zero has a hidden goal of making Apple look bad. I think the poster above is trying to respond/provide context to that.
Apple provides security updates longer than Android though.
One could argue being able to own a phone for 5 years and receive security updates for a higher up front cost is preferable to buying a new phone every two years.
To be blunt, that policy + the fact Apple has no business units actively incentivized to invade my privacy (no targeted marketing dept) makes me choose iOS, even if it's "less free".
My phone is home to my most intimate conversations; I need to know it's secure for the long haul.
I agree with you, and that's why I have the phone that I do. There are clear tradeoffs between the two ecosystems. My comment was providing context to the one above it.
Regarding this sentence: "For people who think that there is far more CVE on iOS than Android", when did I say anything about evaluating the security of a platform based on CVE counts? When?
I was only pointing out that iOS does not have more CVEs than Android, based on other discussions in this thread...
I am sorry if this message comes across as mean, but... did you at least read my comment?
No, I did read it, although I should apologize for the slightly snarky response. I think your original comment was a valuable one, as it did bring up an interesting point: Android sees more CVEs than iOS does. However, I think the point is even more fundamental: counting the number of bugs on a list is not a great way of showing that a platform is secure or insecure, although many people may believe so, just as they may look at this list and think "gee, iOS is so full of bugs it must be worse than Android!" Really, the point of this blog post was just to compile methods that achieve arbitrary code execution in the kernel, organized by how they did so and which iOS versions they apply to, rather than to be some sort of comparison between operating systems.
Great summary list near the end of the article of the various mitigations used by iOS - there are quite a few hardware protections implemented within the processors themselves.
Android is less of a monoculture, but also has less opportunity to tune the processor to include strong hardware mitigations against whole classes of vulnerabilities.
While iOS seems to pioneer every mitigation technique ever described in the literature, they also ship ancient versions of C image-parsing libraries (OpenEXR), have an endless stream of remote code execution vulnerabilities from their peculiar serialization schemes, and suffer from all the other issues you would expect from the use of native ObjC.
Their commitment is squarely to mitigating issues that threaten the walled garden (and therefore the kernel), not so much userspace.
The issue doesn’t seem to be ObjC per se. It’s the use of a facility that tries to transparently serialize an object in any rich language. Python pickle has the same issue, although it’s arguably worse in Python.
The solution is to abandon the whole approach: these systems should not be used across trust boundaries. Apple should create iMessage v2 using a real serialization system (protobuf, JSON, XML, ASN.1, etc.) and deprecate the old scheme. It's surely possible to write a safe but non-extensible deserializer for the old protocol if compatibility is needed.
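As a concrete illustration (a minimal Swift sketch; the ChatMessage type and its fields are made up, not anything Apple actually ships): a fixed-schema decoder only ever materializes the fields it declares, so an attacker-controlled payload can't cause arbitrary classes to be instantiated the way an NSKeyedUnarchiver-style object graph can.

    import Foundation

    // Hypothetical fixed, non-extensible schema: the decoder can only ever
    // produce these fields, no matter what the sender puts on the wire.
    struct ChatMessage: Codable {
        let sender: String
        let body: String
        let sentAt: Date
    }

    func decodeMessage(_ data: Data) throws -> ChatMessage {
        let decoder = JSONDecoder()
        decoder.dateDecodingStrategy = .iso8601
        // Extra or unknown keys are simply ignored; unknown *types* can't appear at all.
        return try decoder.decode(ChatMessage.self, from: data)
    }

Protobuf would give you the same property with a tighter wire format; the important part is that the schema, not the payload, decides what gets constructed.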
>Their commitment is squarely to mitigating issues that threaten the walled garden (and therefore the kernel), not so much userspace.
As a practical matter, doesn't this mean if I'm confident in my ability to avoid phishing messages, iOS is better?
Anything that "roots" a phone can also run roughshod, turn on my mic, grab signal messages stored locally, etc, correct?
What I'm getting at is a phone in a walled garden + a laptop that's open source might be the best way to get the security of the walled garden and the utility of open source.
I'd argue it is the opposite. The people who want to grab your Signal or WhatsApp messages can often do so by simply sending you some picture or specially crafted message; it's only the people who want to run their own software that need to break all the kernel defenses and 'root' the device.
No, they are entirely different features. Pointer authentication is largely a control-flow integrity feature. Hardware memory tagging attempts to prevent arbitrary overwrites/memory corruption, even to data. There are dozens of other features that prevent memory corruption, and it's not really "playing word games" to distinguish between them.
I assume you're not talking about this: https://developer.apple.com/documentation/security/hardened_...? And to your other point: the difference between these things does matter, because they have entirely different semantics and protect against different things, although the overarching goal of preventing undesired operation is the same. Taking the metaphor one level up to apply to computer science rather than security, this is like saying that the borrow checker and asserts are the same from a certain point of view because they both help prevent bugs. Which is true, of course, but when you're talking about a programming language's safety features, saying something like "C can prevent use-after-free because of its borrow checker" is not correct, nor can you make it relevant by saying "yeah, but C has asserts and they also help prevent bugs, so from my point of view I think you're being pedantic".
Thing is, every time we discuss this it ends up following the same path.
I see these iOS features as a mechanism to improve C safety on iOS, while you don't.
We would be better off if Apple just rebooted the whole stack in Swift, but that will take years, if it ever happens. So, from a security advocate's point of view, I see this as better than what PCs offer nowadays, especially after the misstep that MPX ended up being.
I have nothing against improving the safety of languages that have historically had issues in this regard; in fact, I have actually called out companies for turning off mitigations: https://news.ycombinator.com/item?id=23448161. The point I'm making here is that the things you mentioned are different things and they protect against different types of attacks.
Just for completeness, BTW, iOS devices from the iPhone XS onward ship with PAC, which is part of ARMv8.3; memory tagging is an extension in ARMv8.5. Rumors point to the A14 chip supporting that, but there's no word on whether Apple will support the extension.
iOS also sometimes gets its own home-grown mitigation techniques. Sometimes these are quite strong, but often they are…not really mitigating anything; in rare cases they make the situation worse. Shipping every mitigation under the sun is not necessarily a good idea.
Hoping to see such a list for Android too. I don’t see one right now. [1] Some kind of comparison between iOS and Android on the kinds of issues and underlying causes would also be interesting.
I really appreciate and enjoy the work done by Project Zero.
But, it often does feel like it could be retitled Project Schadenfreude. This particular post almost feels timed specifically for release right before WWDC.
They're doing Apple a huge favor by discovering these bugs. They're doing the security community a huge favor by publishing blog posts about them for others to learn from. They also do plenty of Android research, although iOS is a higher priority since most security-conscious people use iOS (including the researchers themselves). This is not a hit piece on Apple.
I’m not sure how this argument makes sense. Most people use Android. I don’t see any evidence that supports the claim that most “security-conscious“ people use iOS.
If it is somehow meaningful to make that claim, then it is all the more important for project zero to focus on Android, since people who are not security conscious are less likely to practice other forms of security.
Project zero simply doesn’t seem to publish these pieces about Android at the same rate they do about iOS. Perhaps this is unintentional.
vendor=Google returns 145 (bugs in Samsung's Android kernel, etc. are tracked separately)
vendor=Linux returns 54
To be fair, a huge number of things make this not an even comparison, including the underlying bug rate, different products (Google lacks a desktop OS and an iMessage equivalent, for example), and downstream Android vendors being tracked separately. Also, # bugs found != which ones they choose to write about.
> Google lacks a desktop OS and an iMessage equivalent, for example
Nitpicking, but Chrome OS is a desktop OS, and Google has had at least 7 things similar to iMessage.
On topic, from my perspective, when I was working somewhere that got bug reports from Project Zero, it was great. I mean, not great that we had the bug they found first, or the follow-up bug they found after we fixed that one; but great that they were clear problems that we could solve. If we didn't want to be written up, we could have done better to begin with, and taken more care in looking around when the first bug was reported.
Is Chrome OS sufficiently distinct from Linux to be its own category (genuinely asking)? I was aware of it when I made the original comment, but considered it more a subset of Linux, in that most major kernel security bugs would be shared.
Also, what are you considering similar to iMessage? My view is that iMessage presents its own unique & powerful attack surface that Hangouts etc. don't have. Maybe RCS?
I think Chrome OS at least has a unique libc and GUI stack versus normal desktop Linux distributions? And there's certainly room for errors in their updater stack and all that.
gChat, Allo, Duo, SMS (whatever it's called today), Hangouts, Meet, ??? They're all relatively similar, send messages including media (remember stagefright)
> gChat, Allo, Duo, SMS (whatever it's called today), Hangouts, Meet, ??? They're all relatively similar, send messages including media (remember stagefright)
Most of those have a much more limited attack surface than iMessage, at least in my understanding. SMS is shared and doesn't try to do what iMessage does, thus the issues
I'd guess they have better things to do than to align their work with the schedule of wwdc. Their job is to find security issues; they probably don't care much about wwdc one way or another.
There's no TrustZone, no, but boot is completely verified in much stronger ways. Software updates are uniquely signed per device, per execution, to prevent downgrade attacks.
TrustZone is orthogonal to secure boot. It's close in functionality to SGX, trying to allow for some SecureElement-like functionality on the main SoC (in which you can do crypto, DRM, etc.).
IIRC the general idea is that the pin code or password is required to be entered when an update is requested, which unlocks a key pair. The public key is included in the update request, which is then sent to Apple. Apple sends back a download that is signed in a way that the firmware can verify, and Apple guarantees never to send another download in response to that exact request. This protection also relies on the secure enclave never authorizing the installation of an unsigned OS update.
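A rough sketch of that nonce-binding idea in Swift/CryptoKit (all names and key handling here are invented for illustration; Apple's real personalization flow and formats differ, this just shows why replaying an older, validly signed image fails):

    import CryptoKit
    import Foundation

    struct SignedUpdate {
        let payload: Data    // the OS image
        let nonce: Data      // echoed back from the device's request
        let signature: Data  // covers payload + nonce
    }

    // Server side (stand-in for the signing service): sign the image bound
    // to this one request's nonce.
    func sign(payload: Data, nonce: Data, key: Curve25519.Signing.PrivateKey) throws -> SignedUpdate {
        var message = payload
        message.append(nonce)
        return SignedUpdate(payload: payload, nonce: nonce, signature: try key.signature(for: message))
    }

    // Device/firmware side: accept only if the signature verifies AND it is
    // bound to the nonce we generated for this exact request, so an older
    // signed image (with a stale nonce) can't simply be replayed.
    func accept(_ update: SignedUpdate, expectedNonce: Data, key: Curve25519.Signing.PublicKey) -> Bool {
        var message = update.payload
        message.append(update.nonce)
        return update.nonce == expectedNonce
            && key.isValidSignature(update.signature, for: message)
    }

The per-request nonce is what turns "Apple signed this image at some point" into "Apple signed this image for this exact request", which is the downgrade protection described above.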
> Apple sends back a download that is signed in a way that the firmware can verify, and Apple guarantees never to send another download in response to that exact request.
Interesting! Thank you for the detailed explanation.
So according to this article those of us on iOS 13.x (93% of the installed base) used to have one vulnerability, which we got patched through auto-updates 8 months ago. I'm quaking in my boots.
I hope you remember to point out the historical nature of this when you pass on the link.
In fairness, the question is less "how many vulns does this exact device have so far" and more "how many vulns are likely to occur for this device total", in which case this article could be evidence that the last few versions of iOS have each had their share, and therefore it is reasonable to extrapolate that to expect a handful of issues on this version. Now the first obvious catch is that you can't necessarily accurately project the past into the future; if most of these are the result of some underlying design strategy that Apple has stopped doing, then the exploits would dry up. On the other hand, of course, they could start shipping some new technology that turns out to introduce more vulns (not likely, but it could happen).
Of course, I'm pretty sure the same list of exploits per-version of Android would be much longer, so if anything this list, if complete, really does paint iOS in a very good light.
Hope you also tell people who use Android about the dismal state of updates (or rather the lack of them) from most manufacturers, and how most new phones come with older versions whose security vulnerabilities are not patched on that device (and probably never will be).
When I had my Samsung Galaxy device it used to get a security update maybe once or twice a year, and usually it was not even available for my phone until my carrier allowed it a couple of weeks later. That was an 800€ device too. It's absurd.
And it's for 2019 only