Over 400 vulnerabilities on Qualcomm’s Snapdragon chip (checkpoint.com)
381 points by Flenser on Aug 10, 2020 | 114 comments



Do any of these vulnerabilities let us unlock the bootloader?


That's what I came here for as well!


Way would you want that?


For one thing, I hate the idea that such vulnerabilities can be used against me (e.g. to exfiltrate my data), but they cannot be used in any way to help me, such as allowing me to sideload a third-party ROM (such as LineageOS) and continue to use the phone I paid $1000 for after the OEM decides they don't feel like supporting it anymore.


Because (a) it would let you root your device, allowing you to do what you want with it (b) it would make these 400 vulnerabilities especially dangerous


(c) it would prevent most banking and finance apps from working.


Best to use a bank which treats you like an adult and trusts you to make grown-up decisions about using a rooted device. Monzo fits into this category.


You can use Xposed and Magisk to make them work


Just use the website.


Website for NFC payments? How?


Use the card.


Use an implant.


Not a fair comparison. ;)


Not "W[h]y [you] would ... want [to do this]" lol ;)


While they are still withholding info about how to exploit the bugs, there is more technical detail in their Defcon talk, "Pwn2Own Qualcomm Compute DSP for Fun and Profit" https://www.youtube.com/watch?v=CrLJ29quZY8



Seriously I'm beyond pissed at the state of Android, patches and open-source compliance. If we are lucky 10% of current phone models will get any form of update. The rest will be vulnerable for years until the devices finally break.

And that's only the Qualcomm stuff. There is another CPU vendor beginning with M who is big in el-cheapo hardware - look at their Android kernel leaks, wherever you dig you find horrid, HORRID code.

Google should mandate full open source disclosure of all GPL'd components as part of the Play Store certification and unlockable bootloaders, otherwise this shit is never going to change.


> horrid, HORRID code

Heh, I once found a "feature" in a kernel driver in my Xperia (with a SoC from the company with a name starting in M) that allowed you to read arbitrary kernel memory from userspace, by passing the appropriate structures via an ioctl interface. Didn't even have to dig around too much.

Ah well, at least I got a t-shirt from Sony.


The fuck? Is there any documentation for this? CVE?


I have a detailed report on HackerOne (with some proof-of-concept code), but visibility of the report is set to private. Not sure if it's possible to change it to public, to be honest.

Anyway, the issue is fixed in recent firmware versions.


That is never going to happen. When Treble came out we thought it would change, but since Google doesn't force OEMs to actually deliver updates, everything stayed the same.

When questioned about this in the Android Platform 11 AMA last month, they stated that they think OEMs' freedom is what makes Android a rich ecosystem.

So there you have it. You can check for yourself on Reddit.


Treble was meant to be the key to making Android ROMs easy to support, but the ROM scene is now a fraction of what it was 5 years ago.


The architecture is a cool design: Linux gets encapsulated in a kind of micro-kernel, where classical drivers are "legacy" drivers, and Treble drivers get their own process, can be implemented in C++ or Java, and talk to the kernel via Android IPC.

However, not forcing OEMs to provide updates, even with Project Mainline and GSI now available as Treble updates, doesn't change anything from a consumer's point of view.


Google did it with their Chromebook. So it is possible.


Naturally it is possible, but they aren't willing to do so with Android.


Are you referring to the fabless bunch whose name starts with M and ends with ediaTek? :)


Google are the ones that went on an Apple-esque crusade to extinguish all GPL in AOSP.


Still getting updates for my OnePlus 6. YMMV.


That phone is barely 2 years old from a reputable brand, I sure as hell hope it's still getting updates.

My older OnePlus 3 got updates for almost 4 years, I think. Not bad, but it's not like Apple's 5-6 years. Still, it was half the price of an iPhone with better hardware to boot, so fair trade I guess.

I don't like frivolous spending on phones but I never keep a phone more than 4 years anyway. The progress of camera, microphone and speaker quality alone across 4 years is enough of a quality of life improvement for me to upgrade.

At this rate of Android security issues, my next phone will probably be the next iPhone SE but only if they update the display to a larger 1080p 90Hz panel and add an ultrawide camera lens, I don't care about anything else.


Still getting updates for my 7 year old iPad Air 2. About to get iOS 14 as well. Android has warped people's perspectives on how long a device should get updates. On PC you can just keep installing updates until the device can't keep up anymore.


The iPad Air 2 was introduced just under six years ago. But even the original iPad Air, which was introduced nearly seven years ago, still gets security updates. The last update was released less than a month ago. It's stuck on iOS 12, though.


You are getting updates for the OS and kernel but not for device drivers. That's a big surface area for someone to hack your phone.


Imagine how good things would be if the drivers were open source and in the kernel. We would still have bugs but at least it would be possible to fix them.


Aside from the massive maintenance effort: what is keeping the community from taking all the driver code from the tons of official and unofficial code dumps and bringing them to mainline?


You can't just drop leaked code into the kernel, for legal reasons. And even if the vendor does provide an open source dump, you still can't just drop it into the kernel, because it won't meet Linux's code quality standards. Vendors just hack it until it works and call it a day, since they don't have to worry about unmaintainable code if they never plan to maintain it.


You definitely can drop crappy drivers to the kernel devs for later improvement as somebody else's problem. It is called "staging".


I just put Android 10 on my OnePlus One (via LineageOS). Best phone I've ever owned.


Be quiet, don't let Trump know!


“A single SoC (Software on Chip) may include features to enable daily mobile usage such as image processing, computer vision, neural network-related calculations, camera streaming, audio and voice data.“

Should be SoC (System on Chip)


Here's a link to the DEF CON talk: https://www.youtube.com/watch?v=CrLJ29quZY8


There seems to be some confusion on the authors' part about what a DSP is, and what an SoC is ("software" on chip, as they call it...). I'm just nitpicking, of course.


I think you have a point here. When they say "DSP chip" instead of "DSP core inside the Snapdragon chip" it makes me wonder what else they got wrong. I don't think the oversimplified language is any more approachable here.

(As it happens I read the slides and this is a legit vulnerability but you'd never know it from the press release.)


I wonder if Apple/others knew about such vulnerabilities, and passed up on using the chip as a risk? Or, was it just dumb luck that they avoided this?


From Apple's perspective Qualcomm has been insufficient for a long time for many reasons, the security issues here would only be one of the many factors involved in the decision to do their own development.

For what it is worth, a modern chip as complex as the A* series is essentially guaranteed to have vulnerabilities. Maybe not 400, but definitely not 0.


This is a thing I think people constantly underestimate... Intel's cores are not necessarily dramatically more broken than everyone else's chips, they just pay for more auditing and public research.


> they just pay for more auditing and public research.

Did Intel finance the research that turned up any of the major headline vulnerabilities over the last few years (meltdown, spectre)?


A quick survey of the papers published in 2019 and later (i.e., post Meltdown/Spectre, inclusive) listed at [1] indicate that Intel contributed financial support to the majority of them. ARM was the second-most corporate contributor, followed by AMD.

[1]: https://gruss.cc/


They did not.


It was a Google researcher mostly.


> Meltdown was independently discovered and reported by three teams:

Jann Horn (Google Project Zero), Werner Haas, Thomas Prescher (Cyberus Technology), Daniel Gruss, Moritz Lipp, Stefan Mangard, Michael Schwarz (Graz University of Technology)

Spectre was independently discovered and reported by two people:

Jann Horn (Google Project Zero) and Paul Kocher in collaboration with, in alphabetical order, Daniel Genkin (University of Pennsylvania and University of Maryland), Mike Hamburg (Rambus), Moritz Lipp (Graz University of Technology), and Yuval Yarom (University of Adelaide and Data61)

https://meltdownattack.com/#faq-systems-meltdown


This is very much an opinion, not a fact. "Intel is only in trouble because they got caught, AMD is surely incompetent as well, but hasn't been found out".


A Google Scholar search for "amd security" turns up fewer than 100k results, while a search for "intel security" has under 2 million results.


OK, but what's the ratio of the number of Intel CPUs running on something worth hacking to the number of AMD CPUs?


My point is that there is more academic research on Intel processors than on AMD. For a hacker, an Intel vulnerability would of course be more lucrative than an AMD one.


"Intel" has another meaning, especially when placed next to the word "security". The number of results from your two google searches is meaningless.


That's a good point, about half the results go away when you add "processor" to the query. Interestingly, the same happens for the AMD query so the ratio is still similar.


That number is literally the most meaningful one here. Meltdown caused more of a scare than all 400 of the bugs described here, just because Intel is not expected to have any sort of vulnerability, and the people who really care about security choose Intel (not self-described privacy pundits on HN, but militaries and banks). There has been more research on Intel security than on all other chips combined.


I don't think it has much to do with competence; the complexity of these chips is reaching superhuman levels of intellect to decipher. Finding vulnerabilities is hard, but safeguarding against them is even harder. Take Spectre, for instance: it is a fundamental problem with speculative architecture, and you can't really get rid of it.


The most "broken" thing about Intel's chips was discovered by Google


Even if they didn't, I imagine the exposure is enough. Windows, Android, and Linux probably have more eyes on them than all the other software in the world combined.


"If you want half the world's hackers to audit your code, put it in an Apple product. If you want all the world's hackers to audit your code, put it in a Nintendo product."


Please tell me where this came from, and that it's not just something you made up?


It came from GPT-3.

Just kidding.


> they just pay for more auditing and public research.

Who is Intel paying to audit their chips?


Anyone who wants to report something via their bug bounty program.

https://www.intel.com/content/www/us/en/security-center/bug-...


Auditing/public research and bug bounties are not really the same category.


Famously, Telegram has a bounty program, but was widely criticised for it, and for not doing a formal audit.

Criticisms here: https://news.ycombinator.com/item?id=6940665

I don’t doubt that they have more independent security analysis than just the bounty program; but using it as an argument that they’re paying people is not realistic.


Bug bounties are very different than auditing. In an audit, there is a contract in place with specific analysis objectives based on agreed-upon criteria. I find it unlikely anyone in the industry would have more experience than Intel about CPU manufacturing, although there might be security consulting firms that are advanced enough to merit a real corporate NDA. But given the breadth and depth of their IP, even that seems unlikely.

But I would still really be interested to know who Intel hires to audit their products, if this is true. I'd like to do that kind of work.


Isn't this why Apple doesn't trust the CPU with secure functions and has dedicated hardware for them? So a vuln in the CPU won't expose the encryption keys, bypassing Face ID.


Hard to tell anyone's intentions, but that's probably, at least partially, a side effect.

Apple seem to use security primarily for two things, marketing, and to ensure they have control over the platform, and the developers who write applications for it.

Maybe that's three things? Anyway, the totality of what they do in security isn't user centric enough that the reason for external security hardware would be to primarily increase the user security. Obviously they have to do this (increase user security) to make it palatable to the customer, but there's a certain asymmetry in their actions that makes it seem unlikely that actual increased user security was the original goal.


Looking at the slides from a different article, these are not really in the chip per se but in the SDK. So any lib compiled to use the chip would be affected, but it's not really a hardware issue. Basically, fuzz testing found 400 library calls that fail with segfaults. These can sometimes (but not always) be leveraged into a takeover, but I didn't see anyone claiming to have done that.


It's even more intentionally misleading than that. The SDK generates wrapper libraries that allow you to interface with your code running on the DSP. Some of the wrapper functions generated have vulnerabilities. The 400 vulnerabilities are the few vulnerabilities found in the SDK template multiplied by how many different generated wrapper libraries they found.

So you fix the handful of errors in the SDK templates and all the 400 vulnerabilities go away.


The implication is that Apple's own chips are somehow bug-free, which they probably aren't.


If you have connections to the real infosec world, they'd avoid it.


Not sure what you mean?



An Open Source OS can help, sure, and is a start. A DSP is a programmable hardware device. Both phones to which you linked use variants of ARM processors and then use third-party baseband systems. You're not getting rid of closed-source hardware vulnerabilities by replacing Android or iOS.


There's Osmocom[1] for that. Sadly, it supports neither modern PHY layers of the modem nor a modern baseband stack. It demonstrates the possibility, though. I wish it had more traction.

[1] http://osmocom.org/


Agreed, we need open source hardware (like RISC-V) to mature in order to eliminate this class of vulnerabilities. I haven't heard much about mobile-class RISC-V SoCs, though.


RISC-V is an open source ISA, which means anyone is free to implement it, interface with it, customise it etc.

But most RISC-V devices are not open source as far as I know, at least currently. And a mobile-class SoC would still be a very complex device, and therefore have vulnerabilities (and also therefore give a company much less motivation to open source the whole design). You'd have a similar problem as now.

That said, if someone wants to work with me on a RISC-V mobile class SoC (or server/supercomputer class) do get in touch, I'd love to do it :-)


Yes, you're not getting rid of closed-source hardware vulnerabilities, but you get a lot of control. The Librem 5 allows you to replace the modem. Both phones ensure that it cannot access anything in the OS. You can also use kill switches if you require location privacy.


The Pinephone at least isolates the baseband from the main CPU and memory.

https://www.pine64.org/2020/01/24/setting-the-record-straigh...


That's not a panacea. OpenSSL was completely open source, and it took, what, 2-3 years for Heartbleed to be discovered and rectified? And it's a major building block of the internet.

For open source to help, people have to actually review the code.


Nothing is a panacea. FLOSS is just the right direction. At least you can fix the bugs with it without waiting for vendors, sometimes forever.


Can you make phone calls on the pine phone?


I guess this makes "national security" as an argument a bad joke.


National security is and will continue to be a joke until there is strong regulation forcing all involved vendors to fix their crap for as long as their devices are used.


unless they force everyone to buy Apple phones ;)



Hardware vulnerabilities and issues are hard to address at times, as they can be connected to other vendors' hardware or software. I always wonder if these flaws are left in the design intentionally or are just sneaky bad bugs.


Is there any related data for Apple?


They have a lot too, a new one just popped up a few days ago https://www.bgr.in/news/apple-products-have-a-new-unpatchabl...

I'm not sure if anyone has compiled a list of how many there are.


> The report notes that this security flaw is present in all the devices running chips between A7 and A11 Bionic. Apple has already fixed the exploit in A12 and A13 Bionic chips so newer devices are safe.

That's four generations of Apple hardware, the latest being the iPhone X and iPhone 8/8 Plus (Sept 2017). The flaw being fixed in A12 means the iPhone XR and iPhone XS (and later) are unaffected.


I wonder what it would be like to fill out application forms for over 400 CVE numbers, or to read a security advisory whose first page is exclusively occupied by CVE numbers. Well, seriously speaking, they'll probably group these vulnerabilities and apply for one big one.


Keep reading; there are 6 CVEs assigned. The 400 is different binaries that have the same vulnerability.



Would having an open source chip with a rolling release be more secure? As soon as a vulnerability is discovered you would push the fix, and the next generations would already be fixed. Or would such frequent changes to the chip design be too difficult to mass-produce, due to having to modify the production process?

This is coming from the point of view that Linux is quite a success, so maybe the same philosophy could be used for hardware?


Hardware is different in that it can't be updated once it leaves the factory; it has to be "right first time".



First you have to have an open source chip, then fabs willing to make it (and someone willing to pay them up front), and then phone makers willing to use it.

And no, changing chips every few months, possibly breaking compatibility (people work around your bugs), is not a feature that a lot of hardware designers want.

This may change eventually. I have high hopes for RISC-V, but we will see.


Shouldn't proper IOMMU usage prevent this?

In theory when properly configured the DSP or GPU should be unable to touch system RAM outside of buffers that are specifically assigned to them.

I'm not very familiar with the status of IOMMU on Android devices.


It depends on the SoC whether there are IOMMUs at all and whether they're rigged up to all the bus masters in the system. A lot don't have them, as IOMMUs were seen as a virtualization feature rather than a security feature for the longest time.


Google has pushed the patch for this back to October. I wonder what will happen to downstream vendors (Samsung, CopperheadOS)?


You mean GrapheneOS.


I'm referring to businesses because they have SLAs or other customer obligations. AFAICT Graphene isn't a business but is a FOSS project without customer support requirements or obligations.


You say vulnerability, we say feature.


insecure by design


If the US government hadn't sanctioned Huawei, we could have an alternative to these chips.


There are other alternatives, e.g. Samsung.


Yeah because Samsung has so much better history of fixing security issues...


That's why we need all of them, instead of sanctions on either one of them.


Time to collect the ashes of SuperH from the graveyard.


SpaceX designed custom SoCs for their isolated offshore/offworld network of Starlink satellites, https://spacenews.com/spacex-accused-of-poaching-chipmakers-...

> Broadcom filed suit ... claiming SpaceX hired a number of Broadcom’s top engineers to develop “a family of sophisticated, customized computer chips.” The two companies had been working together on the development of advanced computer chips for an undisclosed project, but SpaceX ultimately ended the collaboration.


What's your implication here? SpaceX perhaps saw a bunch of security vulns and decided to DIY?



