
Article poses a good question. How did a privately reported zero-day leak from Bugzilla into an attacker's arsenal?

Also, why did it take 2 months to fix an RCE? It's an RCE, not some XSS. I'd imagine this would be a high priority. No?




TFA's first answer is the most likely. If one researcher discovers a vulnerability, another researcher can also discover it. No "leak" required.

I agree that RCE should be a priority!


I'm betting on insider access. Microsoft had to lock down internal access to their security bugs when some employees were selling the bugs on the black market.


For what it's worth, Mozilla locks down internal access to security bugs too. I can't see those bugs, which is exactly how it should be, as I have no need to know.


How many people can read these?


As a Mozilla employee I can say that I was in the security group but lost access at some point since I wasn't very active.


How’d you get access in the first place?


I helped out with UI related security bugs (e.g. address bar spoofing) which we had a bunch of at the time.


I don't know.


Do you have a source for that? Google's not giving me anything. I'd definitely like to know more - I can't help but wonder how widespread that kind of behavior is.


Locking security bugs from wide internal read access has been SOP everywhere I've worked for decades.


I think they're asking for a source on the specific claim about Microsoft employees selling bugs on the black market, which is what I would also like to see.

I don't need to be convinced that security bugs should be on a need-to-know basis during the responsible disclosure period; that seems obviously prudent. Anyone not working specifically on security can learn about the details at the same time as the wider public.


I don't know anything about that event, but it reminds me of when 20 Apple contractors had a scheme selling Apple user data for $7M.

https://www.nytimes.com/2017/06/09/business/china-apple-pers...


No source, but I'd be willing to bet it's very widespread.


If it is insider access, they will be caught.


Really, that Mozilla would let a reported RCE vulnerability simmer for two months until it bit someone would seem to reflect very poorly on their priorities and competence. Can anyone postmortem why it took so long now that it's fixed?


Firefox likes to bundle security fixes into .0 releases. 67.0 was released May 21 (and went to nightly/beta whatever May 13) and 68.0 won't be released for a few more weeks.


Is there a good reason for this? I would think that a security issue should be addressed and patched onto users' computers as soon as possible, especially something like RCE.


Security fixes carry the usual risk of regressions (even more than the average bug, when the fix limits something that used to "work"). Therefore they need just as much bake time as other kinds of changes.

Also, shipping security fixes in stand-alone updates makes it much easier for attackers to identify security-critical changes (especially if they have access to source code, which they do for Firefox) and reverse-engineer the flaw. Firefox developers often land critical fixes with somewhat obscured commit messages to increase the work required by attackers to identify the critical security fixes in the torrent of commits that go into each regular release.

Obviously this only makes sense while the bug is believed to be unknown to attackers. If Mozilla believes the bug is being exploited, they can and do issue an emergency update.


> Firefox developers often land critical fixes with somewhat obscured commit messages to increase the work required by attackers to identify the critical security fixes in the torrent of commits that go into each regular release.

Wow, that's fascinating. Do you have any interesting reads to point to in this regard?


Do you know why? Isn’t a security fix a bug fix?


Nope. Security vulns are not regressions!


And how do you qualify "Meltdown" and its notoriously bad fix "Total Meltdown" in that case?

To me, the bug fix introduced a clear regression, allowing an even more powerful vuln in the process.


I’m confused, what do you mean? Fixing security vulns can oftentimes lead to regressions, since over time users become dependent on insecure behavior.


Secure behaviors should generally trump API guarantees.


Your parent comment didn't say security fixes couldn't lead to regressions, they said security vulns themselves aren't regressions.


> How did a privately reported zero-day leak from Bugzilla into an attacker's arsenal?

This was a VERY valuable bug. I mean, it's sad to think about but the most likely scenario is that someone with access to the report at Mozilla or Google (or maybe elsewhere if it was shared more widely) called a friend of a friend of a friend and... sold it.


Moreover, people are bad at keeping secrets. Social engineering is clearly a thing, even among infosec circles. Sometimes all it takes is being in the right bar and having a good ear.


I mentioned the possibility of an untrustworthy person gaining access to bugzilla yesterday but it seems that most people disagreed with it: https://news.ycombinator.com/item?id=20221397



> from Bugzilla into an attacker's arsenal

Typically, they leak the opposite way.



