BleedingTooth: Linux Bluetooth Zero-Click Remote Code Execution (google.github.io)
328 points by ndrake on April 7, 2021 | 103 comments



What was notable in this case was that Google disclosed this issue to Intel, and Intel then took responsibility for public disclosure. Intel then proceeded to mail patches to public trees without flagging them as having any security relevance, posted a public disclosure despite the patches only being in pre-release kernels, and claimed that affected users should upgrade to 5.9 (which had just been released), which absolutely did not include the fixes.

Coordinating disclosure around security issues is hard, especially with a project like Linux, where you have an extremely intricate mixture of long-term support kernels, vendor trees, and distributions to deal with. Companies that maintain security-critical components really need to ensure that everyone knows how to approach this correctly; otherwise we end up with situations like this one, where the disclosure process actually increased the associated risks.

(I was part of Google security during the timeline included in this post, but wasn't involved with or aware of this vulnerability)


To me it feels like we owe Google a lot for what they are doing to find security vulnerabilities. Did companies do stuff like this before Google's Project Zero?


We benefit a lot from Google doing this, for sure. Do we owe them a lot? For that to be true, there'd have to be an altruistic motive, and the value delivered would have to be more than what Google derives from the open-source community / security community in general.


I understand the value derived vs. provided, but I disagree that it has to be altruistic. Someone donating large amounts of money to charity just to write it off on their taxes isn't doing so altruistically, but is still doing a lot of good. I'd say we still owe them gratitude - or at the very least, the people they're helping do.


> Someone donating large amounts of money to charity just to write it off on their taxes isn’t doing so altruistically

I mean, it's still altruistic because they're going to 'lose wealth' by donating. You're never going to earn back in tax write-offs as much as you spent by donating.


Sort of... most companies that fund this type of research do it for product development (IDS/IPS-related stuff) or for vulnerability development (for sale as part of exploit packs, or for use in engagements). This is a gross overgeneralization, but like everything else related to the disclosure and research field, there is a lot of history and drama involved.


oof.

from the guys who brought you "the spectre patch in the kernel that's disabled by default" and "ignore your doubts, hyperthreading is still safe" comes "the incredible patch built solely around shareholder confidence and breakroom communication"

EDIT: spectre, not meltdown. oops.

https://www.theregister.com/2018/11/20/linux_kernel_spectre_...


> "the meltdown patch in the kernel thats disabled by default"

I'm not sure what you're referring to here.

I was one of the people who worked on the Linux PTI implementation that mitigated Meltdown. My memory is not perfect, but I honestly don't know what you're referring to.


I'm guessing they're referring to Linus' ranting on Intel here: https://lore.kernel.org/lkml/CA+55aFwOkH8RH12Dzs=hT3e7eS3Ckz...

> The whole IBRS_ALL feature to me very clearly says "Intel is not serious about this, we'll have a ugly hack that will be so expensive that we don't want to enable it by default, because that would look bad in benchmarks".


> benchmarks

Live by the SPEC, die by the SPEC


It seems odd that they would have notified Intel rather than the kernel security team, given how poorly Intel has handled disclosures in the past... It's good that they do note that "The Linux Kernel Security team should have been notified in order to facilitate coordination, and any future vulnerabilities of this type will also be reported to them".


Thank the lord for distros like Fedora. They deal with security and other issues, and are big enough that if Intel tried to sneak something past, Red Hat's engineers would almost assuredly have noticed something.


It's like Intel is releasing the same CPUs in new boxes with new names, hoping that people won't find out :-) That company has become such a failure. What happened 5 years ago?


I wonder what effect self-selection bias has on people who end up writing hand-crafted, complex parsing code in C for untrusted data in ring 0. You either have to believe that it's doable to get right, or that it doesn't matter much if you don't, or "it's not my worry".


// quick hack, fix later

{ .... }


Not a valid C comment.


Double slash single-line comments are valid C99 and later.
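
A trivial illustration:

  // a line comment: valid in C99 and later (and a common
  // extension long before that)
  /* the only comment style C89 actually guarantees */
  int main(void) { return 0; }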


They also work in Borland C 3.1 for DOS


What are you trying to imply? That you have an equivalently useful and featured kernel in a safe language? Do you have a similarly featured Bluetooth stack in a "memory managed" language? Why are all OSes wrong?

I apologize to the readers for the rant, but this whataboutism is so demeaning of the multitude of very intelligent people working in the field, and it creates negative-value divides between the low-level people and the userspace/web people, with huge mutual distrust, as shown in the parent comment and in mine.


There are plenty of tools designed to help you write parsers that compile down to C. Alternatively there is the microkernel approach. Either one (or both) would satisfy GP's implication that hand-written C parsers in ring 0 are bad.


Well, Android's Bluetooth stack is being rewritten in Rust so you probably can go pretty far in memory-safe languages (though of course Rust isn't a managed language, just a safe one).


Dumb question (not a kernel or C developer): can't you call into code compiled from a memory safe language, like in a shared object file?


Yeah, and we're slowly moving towards that model. One barrier is that some safe languages, like Rust, support far fewer target platforms than the Linux kernel does. There are C compilers for everything, but only a few Rust backends.

Something similar to Wuffs[0] (posted on HN very recently), which compiles down to C, might be a good compromise between portability and safe languages. (There may be some contorted way to have Rust emit C, too.)

[0]: https://github.com/google/wuffs


Rust uses LLVM, which should likely have an easier time being ported than an entire Rust compiler. Rust handles cross-compilation pretty well, so you don't have to compile on an actual embedded device, or something.

I wonder if an LLVM backend that issues a very simple and predictable subset of C would be a viable way to support exotic old architectures which only have a C compiler. LLVM-cbe is a thing: https://github.com/JuliaComputing/llvm-cbe


Definitely, for example check how to make Go .so libraries.

https://medium.com/swlh/build-and-use-go-packages-as-c-libra...

Naturally the runtime also comes along, but that is another matter.
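
For example, here's a minimal sketch of the C caller, assuming a hypothetical libgoadd.so built with `go build -buildmode=c-shared` that exports an `Add` function:

  #include <dlfcn.h>
  #include <stdio.h>

  int main(void) {
      /* Load the Go-built shared object at runtime. */
      void *h = dlopen("./libgoadd.so", RTLD_LAZY);
      if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

      /* Look up the exported symbol and call into the safe language. */
      int (*add)(int, int) = (int (*)(int, int))dlsym(h, "Add");
      if (!add) { fprintf(stderr, "%s\n", dlerror()); return 1; }

      printf("2 + 3 = %d\n", add(2, 3));
      dlclose(h);
      return 0;
  }

(Link with -ldl on older glibc.)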


Do you know C. A. R. Hoare?

I advise you to read his Turing Award speech from 1981.



Indeed! Hoare wrote self-reflectively about the failures of his own software group.

There was also failure in prediction, in estimation of program size and speed, of effort required, in planning the coordination and interaction of programs, in providing an early warning that things were going wrong. There were faults in our control of program changes, documentation, liaison with other departments, with our management, and with our customers. We failed in giving clear and stable definitions of the responsibilities of individual programmers and project leaders -- Oh, need I go on?

What was amazing was that a large team of highly intelligent programmers could labor so hard and so long on such an unpromising project. You know, you shouldn't trust us intelligent programmers. We can think up such good arguments for convincing ourselves and each other of the utterly absurd. Especially don't believe us when we promise to repeat an earlier success, only bigger and better next time.


Apologies. Yeah, there's room for a lot of whataboutism as long as browsers and other user-facing apps are also using unsafe languages for parsing untrusted data. But still, that's not an excuse to do it, IMO.

And indeed "the system is to blame", in that it's hard to get drivers using safe techniques into the Linux kernel, and Linus is famously anti-security. But I think the individual programmer still ends up choosing one of the above three mental models.

So I still think it would be interesting to know how people think about security when writing parsers in C, in ring 0, for Linux drivers that are exposed by default in billions of devices.


it's called a microkernel


Yup. C bad, Rust good.

It's not like NASA sent a rover, written almost fully in C, to Mars. It's not like billions of cars and even more billions of their ECUs are written in C. It's not like the firmware of the keyboard you're writing your comment on, or even the OS/browser you're using is written in C. C bad, Rust good.


This isn't about Rust; the same problems were well known (and long suffered and complained about) before Rust came around, and there are well-known, much older engineering techniques for parsing untrusted data. It's nice that the Rust phenomenon has brought with it some new spirit of vigor and momentum to break out of the apathy, though.

Re rover code and ECUs: this is the difference between safety-critical code and security-critical code at the attack surface.

The first kind deals primarily with "don't keel over or go crazy when natural phenomena throw unexpected circumstances at you"; the second deals with inputs crafted by intelligent adversaries who can see your code and test and iterate attacks to exploit any flaws they uncover through analysis or experimentation against your implementation. (Of course, if we nitpick, an intelligent attacker is a natural phenomenon.)


> even the OS/browser you're using is written in C

Those have tons of exploitable bugs!!

NASA rovers and car ECUs have few people looking to exploit them, so I'm not overly convinced they're exploit-free either.

I'm not a Rust evangelist or even a user, but the current paradigm of "THIS TIME, we'll write safe, complex, performant C/C++ code properly" isn't the solution, nor is manually squashing bugs one by one.

The solution seems to be a combination of improving the tooling around existing C/C++ and starting new projects in safer languages when possible.


I should be safe. After the latest Ubuntu update, my Bluetooth refuses to connect.


As they say, Bluetooth in Linux keeps only the honest people out.


Bluetooth on Linux is very simple: it doesn't work.

That quip was originally about something else, I think.


At least it's not display drivers anymore


The write-up doesn't appear to have any author details, but the main page [1] credits Andy Nguyen.

[1] https://google.github.io/security-research/pocs/linux/bleedi...


This might be a pretty naive question, but: in a hypothetical world where the vast majority of systems programming is done in "memory safe" langs, what would most vulnerabilities look like? How much safer would networked systems be, in broad strokes?


A related post from Google Security Blog[0]:

> "A recent study[1] found that "~70% of the vulnerabilities addressed through a security update each year continue to be memory safety issues.” Another analysis on security issues in the ubiquitous `curl` command line tool showed that 53 out of 95 bugs would have been completely prevented by using a memory-safe language. [...]"

[0]: https://security.googleblog.com/2021/02/mitigating-memory-sa...

[1]: https://github.com/Microsoft/MSRC-Security-Research/blob/mas...


Likely we'll have fewer 'OS-level' pwns, but to be fair these aren't really the most exploited class of vulnerabilities today anyway. I'm just as effective doing a SQL injection and stealing your clients' PII whether or not your Bluetooth stack is written in a language that prevents some memory corruption exploits from being feasible, and that's the actual goal of most attacks.

You're going to get owned in the future by people obtaining creds to important stuff (say, AWS creds) and by crappy userspace applications. We can hope that OS security continues to improve, but even if it gets bulletproof, the story is far from over while our apps are all piles of garbage.

At least, that's what I reckon.


Of course, proper escaping/parameterization can be enforced in a good-quality library as well. So hopefully we will see fewer SQL injections in the future too, if these safer libraries become the default.
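
For instance, a minimal sketch using the SQLite C API (the table and column names are made up): the user-controlled value is bound as a parameter, so it can never be parsed as SQL.

  #include <sqlite3.h>
  #include <stdio.h>

  /* The user-controlled `name` is bound as a parameter, never spliced
     into the SQL text, so it cannot change the statement's structure. */
  int find_user(sqlite3 *db, const char *name) {
      sqlite3_stmt *stmt;
      if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1",
                             -1, &stmt, NULL) != SQLITE_OK)
          return -1;

      sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
      while (sqlite3_step(stmt) == SQLITE_ROW)
          printf("id = %d\n", sqlite3_column_int(stmt, 0));

      sqlite3_finalize(stmt);
      return 0;
  }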


Web development is done mostly using "memory safe" languages, and we can see that it is far from secure. The list looks like: https://owasp.org/www-project-top-ten/

Which is not to say that "memory safety" is not a significant issue in C/C++. I wonder why Wuffs [1] is rarely used in C projects to parse untrusted data, given that it can be translated to C.

[1] https://github.com/google/wuffs


Just adding slices to C would kill a very large proportion of bugs, but there are diminishing returns after a certain amount of safety, because you start to reach the end of dangerous code and get into bad code (e.g. you forgot to check the password entirely). You can still catch the latter type of bug using type systems and formal verification, but it's not easy, whereas catching memory-safety bugs, even using a sanitizer atop regular C code, is extremely well-trodden ground now.
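
A minimal sketch of what such a slice might look like (hypothetical, not an actual proposal):

  #include <stddef.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* A slice: pointer plus length, so the bound travels with the data. */
  struct slice {
      unsigned char *ptr;
      size_t len;
  };

  /* Bounds-checked read: aborts instead of silently reading out of range. */
  static unsigned char slice_get(struct slice s, size_t i) {
      if (i >= s.len) {
          fprintf(stderr, "index %zu out of bounds %zu\n", i, s.len);
          abort();
      }
      return s.ptr[i];
  }

  /* A sub-slice can only narrow the accessible range, never widen it. */
  static struct slice slice_sub(struct slice s, size_t off, size_t n) {
      if (off > s.len || n > s.len - off)
          abort();
      return (struct slice){ s.ptr + off, n };
  }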


One way to count is to tally all the bugs that get fixed. Another would be to consider the security issues with the greatest impact over the past year. SolarWinds, and probably the Exchange vulns, would be my vote. Which of those would a memory-safe language have prevented?


You still have missing input validation, weak algorithms, and logic errors.


Even those safe languages will use unsafe blocks, so there will still be a small attack vector.


Sometimes I wonder if driver support such as Wi-Fi on FreeBSD is terrible by design. That OS has almost no attack surface.


OpenBSD 5.6 (~6 years ago) removed the Bluetooth stack altogether, due to security/maintainability concerns, so yeah, pretty much.


FreeBSD users sit back and laugh. Bluetooth? Wifi? Hah!


FreeBSD has Bluetooth in netgraph last time I looked. It’s not compatible with much.


And wifi. I have a little home server running FreeBSD with an Intel Skylake processor from a few years ago. Wifi works out of the box. I haven't tried Bluetooth, but for my hardware, driver support has been fantastic.


I'm curious how the "BadChoice" vulnerability did not get picked up by a static analyzer. Only initializing part of a structure should be very easy to catch.
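
The pattern, as a minimal hypothetical sketch (not the actual kernel code):

  #include <string.h>

  struct info {
      unsigned int id;
      char name[16];
  };

  /* Only `id` is set, but the copy writes out all of `tmp`, so the
     16 bytes of `name` (plus any padding) leak stale stack memory. */
  void fill_info(void *dst) {
      struct info tmp;              /* uninitialized stack buffer */
      tmp.id = 42;                  /* partial initialization     */
      memcpy(dst, &tmp, sizeof(tmp));
  }

  /* Fix: struct info tmp = {0}; or memset(&tmp, 0, sizeof(tmp)); */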



Clearly I'm missing something - in BadKarma, why does the compiler not baulk at sk_filter being passed a struct amp_mgr* instead of a struct sock* as expected? A type confusion like that ought to be prevented during typecheck, no?


It's indirected via `chan->data`, which is probably a `void*`. (Implicit) pointer casts between `void*` and other arbitrary types are allowed.
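
A minimal sketch of the pattern, with simplified stand-in structs (not the actual kernel definitions):

  #include <stdio.h>

  /* Simplified stand-ins; the real structs are far bigger. */
  struct sock    { int sk_protocol; };
  struct amp_mgr { void *private_data; };
  struct chan    { void *data; };   /* opaque owner pointer, like chan->data */

  static void use_sock(struct sock *sk) {
      printf("proto %d\n", sk->sk_protocol);
  }

  int main(void) {
      struct amp_mgr mgr = {0};
      struct chan c = { &mgr };     /* the amp_mgr's type is erased here */
      /* Compiles without a warning: in C, void* converts implicitly to
         any object pointer type, so the typechecker never sees the
         confusion between amp_mgr and sock. */
      use_sock(c.data);
      return 0;
  }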



So, tl;dr: upgrade our kernels? Or is there more?

(This would be useful to have on Google's site here, but I understand if it's supposed to be for an academic audience.)


Does anyone pay bounties for this kind of vulnerability in the kernel or in widely used low-level libraries? I mean legally, not in darknet markets.


I bet there are a handful of government-adjacent contractors/companies that do offensive hacking on a clandestine basis and would pay a lot for them, possibly while promising legal legitimacy.

If so, they wouldn't want the public to know that (or why) they are "buying and using cyberweapons", both to stay effective and for political/international-relations reasons. So they're probably hard to find or contact.

I wonder how often unsolicited emails are sent to *@cia.gov with subject: "I have RCE on ___, wanna buy it?" and how they respond.


The BT spec is huge. We need smaller, more developer-friendly specs.


OpenBSD user: sips tea


This first appeared six months ago.



The WiFi stack and drivers would be even scarier to look at.


Or the kernel, where there are known unresolved issues — no looking required.


What kinds of attacks could we actually see with this? Servers don't use Bluetooth and desktops often don't, but Linux laptops and IoT devices often do. With Linux laptops being a rarity and IoT devices already being insecure pieces of crap whose value to a serious attacker is questionable, I fail to see how relevant this is for the average tech user.


Android has a Linux kernel and uses its stack. As a person in the field, I can tell you that IoT is not just smart lights. Smart sensors, and even car infotainment systems, that are Linux inside are starting to appear. Sure, the infotainment isn't controlling the engine power or throttle, but it is certainly requesting engine start-up or driving the glass cockpit in modern cars. Otherwise, how do you think you can turn on the car with the fancy new app?

From what I see, we are moving from general-purpose computers running an old version of Windows XP or Red Hat to special-purpose Linux system-on-module devices.


Infotainment isn't well isolated from the more important parts of the car which use the same bus.

For example, I recall reading there was an example of a car which wouldn't trigger collision avoidance during a phone call, because it was erroneously triggering the brakes to a very small degree and the logic was not to trigger the brakes when the user was already braking.

There is every reason to believe security is as mediocre on cars as elsewhere.


Android has a Linux kernel, but uses a totally different bluetooth stack called Bluedroid, and speaks raw HCI to the controller, bypassing all bluetooth drivers in the kernel.


Many desktops nowadays come with wifi, and wifi cards often also have Bluetooth. I just looked at a nearby Dell desktop (which came with wifi built-in), and it shows Bluetooth as enabled.


Bluetooth peripherals are relatively common even for desktops. ITX boards in particular commonly have bluetooth and wifi built-in.


TL;DR: horrendous security exploits uncovered in code written in a known-unsafe language.

The code is an utter shitshow, inviting disaster through seemingly normal use of the language. It contains a mess of malpractices that make any modern C++ or Rust developer cringe: goto, memcpy, naked pointers, type-unsafe casts, raw loops, using malloc to allocate memory for input buffers.

Why do people continue to juggle chainsaws? I think it's fear of new things, fear of change. Old habits die hard.


I think I now see the presence of the Rust Evangelism Strike Force; seriously, I've seen these "C cringe" comments six times today, in less than an hour.


Probably at around the same frequency that developers of yesteryear saw “FORTRAN cringe” comments, if I had to reckon.


At least C is a complete language and a solid building block that is possible to reason about, compared to C++, which is becoming more and more bloated in its modernization attempts, so C++ projects need to be constantly updated and rewritten to catch up, which rarely happens in practice. On the other hand, C++ can hardly adopt things like a memory model based on Rust's borrow checker, so in this regard C++ is a dead end.


In my mind, C++ is one of the only languages that is evolving with the times.

They don't add features that break backward compatibility, or features that sacrifice runtime performance when you don't use them. Most new features added to C++ are actually just additions to the STL, which you aren't forced to use.

It is an open standard, so there is plentiful competition in the realm of compilers. Rust only has one, so you are SOL if you don't like something or if the project goes bust one day.


You may think putting C++ and Rust in the same category against C will make your comment a bit more popular. But the reality is Rust people cringe at C++ code a lot more.


Look, it's the 2020s. I'm getting put to sleep by stuff like this. You gotta amp the names up to ‘Genital Grinder’ or ‘Vomited Anal Tract’ if you want people to pay attention.


don't forget it's gotta have a vector-graphic mascot and a custom TLD from a design team of MFA students. if the disclosure page doesn't take over my mouse like a new iPhone release website and include a youtube video i just can't be bothered.


Pfah, all the graphic stuff you'd want was done ages ago: https://img.discogs.com/x-gix4VLRA0rKQTfSf4lx6HKeMw=/fit-in/...


Reminds me of this from last week: https://catinthehatattack.com/


"Blue Death spreads remote code execution like warm butter on a french toast"


The virus spreads in an instant,

Your holes are its seducing.

It slays from a distance,

You're on the list for

E-XE-CU-TION

GAAAAAARRRRRRRGH!!!


At least they didn't register a domain for it.


> Look, it's the 2020s. I'm getting put to sleep by stuff like this. You gotta amp the names up to ‘Genital Grinder’ or ‘Vomited Anal Tract’ if you want people to pay attention.

Hmmm. Either BrokenTooth or BlenderWave are apropos to the year.


Best I can do is Poopy Dentures.


How about "ShitEatingGrin" or "AphexTwinAlbumCover"?


My decision not to have Bluetooth pays off again!


What can I say, very solid work.


It would be even more worrying if macOS or iOS also had these sorts of bugs in the Bluetooth stack. Plenty of iOS/Mac users are using AirPods these days, since Apple removed the headphone jack. Bluetooth is already turned on in the default install.

Going to put this comment here as a reference to quote later when I see a zero-click RCE for iOS devices using Bluetooth for drive-by exploitation.


OS X had a similar issue in WiFi drivers a while back (also thanks to Intel), so it's not impossible.


Every week there's a new vulnerability in the Linux kernel. Is it time to admit that (A) the "many eyes" theory is disproven, or (B) the Linux kernel has evil malware agents "oopsing" bugs in exactly as fast as we discover them?


> the "many eyes" theory is disproven

The way I see it, all those vulnerabilities prove the opposite. If there were no "many eyes", I doubt most of those vulnerabilities would have been exposed to the public at all. But I bet that malicious actors would still be using those.

The argument you made reads like "hospital theory is disproven, because whenever we get more hospitals and doctors, more people end up with a diagnosis".


The "many eyes" theory includes the phrase "all bugs are shallow", which to me certainly implies that they shouldn't lay there for 10+ years.

The only conclusion people should be drawing from the last 20 years of security being taken seriously is that writing secure software is hard, finding bugs is hard, and business model doesn't really matter.


The maxim doesn't state an exact eyeballs-to-bugs ratio, nor does it state a timeframe in which the bugs actually become shallow.

It's quite possible that for 10 years the number of eyeballs had not been enough, until it was. The open-source model makes it more likely that more code reviews happen.

I hope you and the grandparent get your horses into rehab once you finish your ride. ;-)


So your argument is either that Linux didn't have many eyes on it, or that a bug which took 10 years and an intense study by Google to find counts as "shallow". In either case, that's effectively saying that the maxim is so loose as to be completely meaningless (i.e. "broken").

Even throwing out the fact that equivalent closed-source software has a stupendous amount of money spent on code reviews, the open-source model makes those reviews possible. It doesn't necessarily make them likely. That is a very important difference; theoretical eyes make no bugs shallow.

> I hope you and the grandparent get your horses into rehab once you finish your ride

Please refrain from making condescending, smug comments like this here. They do not in any way contribute to the debate.


> Please refrain from making condescending, smug comments like this here. They do not in any way contribute to the debate.

HN lately (over the course of the previous weeks) has been very quick with broad-swipe sensationalist statements; at least this is the sentiment I'm getting:

— The law of enough eyeballs is disproved by a decade-old bug!

— Sleep deprivation is used for some depression cases, therefore, let's banish sleep and crank all-nighters!

— SOLID is obsolete and debunked, and moreover, the old boomer Robert Martin defends it, so let's banish SOLID!

Repeat ad nauseam about any "mainstream" viewpoint or paradigm. It's getting old very quickly. Thus my abrasive passage that you quoted.

I'd like to see instead a more elaborate discussion about the limitations of this observation (about eyeballs and bugs), which has proved itself more than once, rather than a sweeping statement. Right now the thread reads like a call to abolish all Newtonian mechanics and use relativistic calculations for everything, just because Newtonian physics got "debunked".

I'd argue that maybe a codebase can grow so much that no number of human eyeballs, even using eyeball enhancers like fuzzing and analysis tools similar to Coverity or PVS-Studio, will ever bring all the bugs to the surface (and of course there can be design flaws undetectable with tools). And maybe realizing this should alter the way we design complex systems that need to be as bug-free as it gets.


The bug was found by fuzzing, so it's not really the case that anyone reads the code. I'm pretty sure code reviewers are a lot slacker now than in 1995. There's just so much code, and so often the costly things are bad thinking that leads to unmaintainable messes, not bad security.


This comment is clearly bait, but I'm going to take it anyway and respond with a link to Microsoft's Security Response Center. This isn't exclusive to Linux at all (for better or worse) https://msrc-blog.microsoft.com/


It would be nice if we could judge Linux on its own merits, without comparing it to Microsoft Windows. "Your Ferrari only goes 30mph? Well, it's still better than this lawnmower. Stop complaining."


If Linux is a Ferrari and the dominant commercial operating system on earth is a lawnmower, there's no metric by which Linux is failing other than grass-cutting.


IMO, the quip about "many eyes" was never true, but you know, that doesn't invalidate open source or anything; it just means that Linus said a thing that sounded cool to a magazine, and it was just hot air. That's all that saying ever was.



