For one thing, I hate the idea that such vulnerabilities can be used against me (e.g. to exfiltrate my data), but they cannot be used in any way to help me, such as allowing me to sideload a third-party ROM (such as LineageOS) and continue to use the phone I paid $1000 for after the OEM decides they don't feel like supporting it anymore.
Because (a) it would let you root your device, allowing you to do what you want with it, and (b) it would make these 400 vulnerabilities especially dangerous.
Best to use a bank which treats you like an adult and trusts you to make grown-up decisions about using a rooted device. Monzo fits into this category.
While they are still withholding info about how to exploit the bugs, there is more technical detail in their Defcon talk, "Pwn2Own Qualcomm Compute DSP for Fun and Profit" https://www.youtube.com/watch?v=CrLJ29quZY8
Seriously I'm beyond pissed at the state of Android, patches and open-source compliance. If we are lucky 10% of current phone models will get any form of update. The rest will be vulnerable for years until the devices finally break.
And that's only the Qualcomm stuff. There is another CPU vendor beginning with M who is big in el-cheapo hardware - look at their Android kernel leaks, wherever you dig you find horrid, HORRID code.
Google should mandate full open source disclosure of all GPL'd components as part of the Play Store certification and unlockable bootloaders, otherwise this shit is never going to change.
Heh, I once found a "feature" in a kernel driver in my Xperia (with a SoC from the company with a name starting in M) that allowed you to read arbitrary kernel memory from userspace, by passing the appropriate structures via an ioctl interface. Didn't even have to dig around too much.
I have a detailed report on HackerOne (with some proof-of-concept code), but visibility of the report is set to private. Not sure if it's possible to change it to public, to be honest.
Anyway, the issue is fixed in recent firmware versions.
That is never going to happen. When Treble came out we thought it would change things, but since they don't force OEMs to actually deliver updates, everything stayed the same.
When questioned about this on the Android 11 platform AMA last month, they stated that they think OEMs' freedom is what makes Android a rich ecosystem.
So there you have it. You can check for yourself on Reddit.
The architecture is a cool design: Linux gets encapsulated in a kind of microkernel setup where classical drivers are "legacy" drivers, while Treble drivers get their own process, can be implemented in C++ or Java, and talk to the kernel via Android IPC.
However, not forcing OEMs to provide updates, even with Project Mainline and GSIs now delivering updates via Treble, doesn't change anything from the consumer's point of view.
That phone is barely 2 years old from a reputable brand, I sure as hell hope it's still getting updates.
My older OnePlus 3 got updates for almost 4 years I think. Not bad, but it's not like Apple's 5-6 years. Still, it was half the price of an iPhone with better hardware to boot, so fair trade I guess.
I don't like frivolous spending on phones but I never keep a phone more than 4 years anyway. The progress of camera, microphone and speaker quality alone across 4 years is enough of a quality of life improvement for me to upgrade.
At this rate of Android security issues, my next phone will probably be the next iPhone SE but only if they update the display to a larger 1080p 90Hz panel and add an ultrawide camera lens, I don't care about anything else.
Still getting updates for my 7 year old iPad Air 2. About to get iOS 14 as well. Android has warped people's perspectives on how long a device should get updates. On PC you can just keep installing updates until the device can't keep up anymore.
The iPad Air 2 was introduced just under six years ago. But even the original iPad Air, which was introduced nearly seven years ago, still gets security updates. The last update was released less than a month ago. It's stuck on iOS 12, though.
Imagine how good things would be if the drivers were open source and in the kernel. We would still have bugs but at least it would be possible to fix them.
Aside from the massive maintenance effort: what is keeping the community from taking all the driver code from the tons of official and unofficial code dumps and bringing them to mainline?
You can't just drop leaked code into the kernel for legal reasons. And even if the vendor does provide an open source dump of the source, you still can't just drop it into the kernel, because it will not meet the code quality standards for Linux. Vendors just hack it until it works and call it a day, since they don't have to worry about unmaintainable code if they never plan to maintain it.
“A single SoC (Software on Chip) may include features to enable daily mobile usage such as image processing, computer vision, neural network-related calculations, camera streaming, audio and voice data.”
There seems to be some confusion on the authors' part about what a DSP is, and what an SoC is ("software" on chip, as they call it...) I'm just nitpicking, of course.
I think you have a point here. When they say "DSP chip" instead of "DSP core inside the Snapdragon chip" it makes me wonder what else they got wrong. I don't think the oversimplified language is any more approachable here.
(As it happens I read the slides and this is a legit vulnerability but you'd never know it from the press release.)
From Apple's perspective Qualcomm has been insufficient for a long time for many reasons, the security issues here would only be one of the many factors involved in the decision to do their own development.
For what it is worth, a modern chip as complex as the A* series is essentially guaranteed to have vulnerabilities. Maybe not 400, but definitely not 0.
This is a thing I think people constantly underestimate... Intel's cores are not necessarily dramatically more broken than everyone else's chips, they just pay for more auditing and public research.
A quick survey of the papers published in 2019 and later (i.e., post Meltdown/Spectre, inclusive) listed at [1] indicates that Intel contributed financial support to the majority of them. ARM was the second-largest corporate contributor, followed by AMD.
> Meltdown was independently discovered and reported by three teams:
> Jann Horn (Google Project Zero),
> Werner Haas, Thomas Prescher (Cyberus Technology),
> Daniel Gruss, Moritz Lipp, Stefan Mangard, Michael Schwarz (Graz University of Technology)
>
> Spectre was independently discovered and reported by two people:
> Jann Horn (Google Project Zero) and
> Paul Kocher in collaboration with, in alphabetical order, Daniel Genkin (University of Pennsylvania and University of Maryland), Mike Hamburg (Rambus), Moritz Lipp (Graz University of Technology), and Yuval Yarom (University of Adelaide and Data61)
This is very much an opinion, not a fact. "Intel is only in trouble because they got caught, AMD is surely incompetent as well, but hasn't been found out".
My point is that there is more academic research on Intel processors than AMD. For a hacker, an Intel vulnerability would of course be more lucrative than an AMD one.
That's a good point, about half the results go away when you add "processor" to the query. Interestingly, the same happens for the AMD query so the ratio is still similar.
That number is literally the most meaningful number there. Meltdown caused more of a scare than all 400 of the bugs described here, just because Intel is not expected to have any sort of vulnerability, and the people who really care about security choose Intel (not talking about self-described privacy pundits on HN, but military and banks). There has been much more research on Intel security than on all other chips combined.
I don't think it has much to do with competence; the complexity of these chips is reaching a point where it takes superhuman intellect to decipher them. Finding vulnerabilities is hard, but safeguarding against them is even harder. Take Spectre, for instance: it is a fundamental problem with the speculative architecture, and you can't really get rid of it.
Even if they wouldn't, I imagine the exposure is enough. Windows, Android, Linux probably have more eyes on them than all the other software in the world, combined.
"If you want half the world's hackers to audit your code, put it in an Apple product. If you want all the world's hackers to audit your code, put it in a Nintendo product."
I don’t doubt that they have more independent security analysis than just the bounty program; but using it as an argument that they’re paying people is not realistic.
Bug bounties are very different than auditing. In an audit, there is a contract in place with specific analysis objectives based on agreed-upon criteria. I find it unlikely anyone in the industry would have more experience than Intel about CPU manufacturing, although there might be security consulting firms that are advanced enough to merit a real corporate NDA. But given the breadth and depth of their IP, even that seems unlikely.
But I would still really be interested to know who Intel hires to audit their products, if this is true. I'd like to do that kind of work.
Isn't this why Apple doesn't trust the CPU with secure functions and has dedicated hardware for it? So a vuln in the CPU won't expose the encryption keys or bypass Face ID.
Hard to tell anyone's intentions, but that's probably, at least partially, a side effect.
Apple seem to use security primarily for two things: marketing, and ensuring they have control over the platform and the developers who write applications for it.
Maybe that's three things? Anyway, the totality of what they do in security isn't user-centric enough for the reason behind the external security hardware to be primarily increasing user security. Obviously they have to increase user security to make it palatable to the customer, but there's a certain asymmetry in their actions that makes it seem unlikely that actual increased user security was the original goal.
Looking at the slides from a different article, these are not really in the chip per se but in the SDK. So any lib compiled to use the chip would be affected, but it's not really a hardware issue. Basically, fuzz testing found 400 library calls that fail with segfaults. These can sometimes (but not always) be turned into a takeover, but I didn't see anyone claiming to have done that.
It's even more intentionally misleading than that. The SDK generates wrapper libraries that allow you to interface with your code running on the DSP. Some of the wrapper functions generated have vulnerabilities. The 400 vulnerabilities are the few vulnerabilities found in the SDK template multiplied by how many different generated wrapper libraries they found.
So you fix the handful of errors in the SDK templates and all the 400 vulnerabilities go away.
An Open Source OS can help, sure, and is a start. A DSP is a programmable hardware device. Both phones to which you linked use variants of ARM processors and then use third-party baseband systems. You're not getting rid of closed-source hardware vulnerabilities by replacing Android or iOS.
There's Osmocom[1] for that. Sadly, it doesn't support the modern PHY layers of the modem, nor a modern baseband stack. It demonstrates the possibility, though. I wish it had more traction.
Agree, we need open source hardware (like RISC-V) to mature in order to eliminate this class of vulnerabilities. I haven't heard much about mobile-class RISC-V SoCs, though.
RISC-V is an open source ISA, which means anyone is free to implement it, interface with it, customise it etc.
But most RISC-V devices are not open source as far as I know, at least currently. And a mobile-class SoC would still be a very complex device, therefore with vulnerabilities (and also therefore with much less motivation for a company to open source the whole design). You'd have a similar problem as now.
That said, if someone wants to work with me on a RISC-V mobile class SoC (or server/supercomputer class) do get in touch, I'd love to do it :-)
Yes, you're not getting rid of closed-source hardware vulnerabilities, but you get a lot of control. The Librem 5 allows you to replace the modem. Both phones ensure that it cannot access anything in the OS. You can also use the kill switches if you require location privacy.
That's not a panacea. OpenSSL was completely open source, and it took, what, 2-3 years for Heartbleed to be discovered and rectified? And it's a major building block of the internet.
For open source to help, people have to actually review the code.
National security is and will continue to be a joke until there is strong regulation forcing all involved vendors to fix their crap for as long as their devices are used.
Hardware vulnerabilities and issues are hard to address at times, as they can be connected to other vendors' hardware or software. I always wonder if these flaws are left in the design intentionally, or if they're just sneaky bad bugs.
> The report notes that this security flaw is present in all the devices running chips between A7 and A11 Bionic. Apple has already fixed the exploit in A12 and A13 Bionic chips so newer devices are safe.
That's four generations of Apple hardware, the latest being the iPhone X and iPhone 8/8 Plus (Sept 2017). The flaw being fixed in the A12 means the iPhone XR and iPhone XS (and later) are unaffected.
I wonder what it would be like to fill out application forms for over 400 CVE numbers, or to read a security advisory with the first page exclusively occupied by CVE numbers. Seriously speaking, though, they'll probably group these vulnerabilities and apply for one big CVE.
Would having an open source chip with a rolling release be more secure? Like, as soon as the vulnerability is discovered you would push the fix, and the next generations would already be fixed. Or would such frequent changes to the chip design be too difficult to mass produce, due to having to modify the production process?
This is coming from a point of view that Linux is quite a success and thus maybe the same philosophy could be used for hardware?
First you have to have an open source chip, then fabs willing to make it (and someone willing to pay them up front), and then phone makers willing to use it.
And no, changing chips every few months, possibly breaking compatibility (people work around your bugs), is not a feature that a lot of hw designers want.
This may change eventually. I have high hopes for RISC-V, but we will see.
It's dependent on the SoC whether there are IOMMUs at all and whether they're rigged up to all the bus masters in the system. A lot don't have them, as the IOMMU was seen as a virtualization feature rather than a security feature for the longest time.
I'm referring to businesses because they have SLAs or other customer obligations. AFAICT Graphene isn't a business but is a FOSS project without customer support requirements or obligations.
> Broadcom filed suit ... claiming SpaceX hired a number of Broadcom’s top engineers to develop “a family of sophisticated, customized computer chips.” The two companies had been working together on the development of advanced computer chips for an undisclosed project, but SpaceX ultimately ended the collaboration.