GitHub Security Bug Bounty (github.com/blog)
118 points by mastahyeti on Jan 30, 2014 | hide | past | favorite | 37 comments



The one obvious flaw is the "email us for our PGP key" approach - distributing the public key privately, over an insecure channel, makes it vulnerable to replacement.

Has anyone written a "best practices" guide for designing a security page?


Why don't they just publish the key on the site?


They should.


They should post it, and preferably on an https site.


Probably because security@ emails are routed through their normal helpdesk system which doesn't handle PGP properly.


I doubt that their security emails do that, but even if they did, changing how you obtain the key wouldn't fix a PGP compatibility issue, since once you have the key you'd still be emailing them using it.


In theory, PGP public keys shouldn't depend on being sent over a more-secure medium like SSL, because they're signed. One of the main points of PGP's design is that you can't spoof a public key, because you can't spoof its signatures.

That being said, in practice, I don't know that everyone is diligent about checking signatures of public keys they receive. An attacker could create a spoofed key, sign it with several other identities controlled by the attacker, and hope those signatures are enough to fool the unwary.


The point is that you need an entry point of trust. So either you have been to many different key-signing parties and happen to have a reasonable trust path to the key, or they must give you a trust reference on the website, preferably over HTTPS. At which point, they can just publish the key on the website.


Yes, that would definitely be a viable strategy. It's probably the easiest one, and it's what I would do. In general, using HTTPS for anything related to your bug bounty program is probably good hygiene.

Though even without that, I don't think you need to have been to a lot of key signing parties. The entry point of trust could very well be another organization--not GitHub, not someone at a key signing party. As long as the signature chain points back to an identity you can trust, you're good to go.
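The "signature chain pointing back to an identity you trust" idea can be sketched as a toy graph search. This is only an illustrative model, not real PGP: the key names and signature map below are hypothetical, and actual verification means cryptographically checking each certification, not just following edges.

```python
from collections import deque

def is_trusted(target, trusted_roots, signatures):
    """Return True if `target` is reachable from any trusted root
    through a chain of key signatures (toy web-of-trust model)."""
    queue = deque(trusted_roots)
    seen = set(trusted_roots)
    while queue:
        key = queue.popleft()
        if key == target:
            return True
        for signed in signatures.get(key, ()):
            if signed not in seen:
                seen.add(signed)
                queue.append(signed)
    return False

# Hypothetical example: you trust Alice; Alice signed Bob's key;
# Bob signed the key claiming to be GitHub's security key.
sigs = {"alice": ["bob"], "bob": ["github-security"]}
print(is_trusted("github-security", {"alice"}, sigs))  # True
print(is_trusted("mallory-fake-key", {"alice"}, sigs))  # False
```

The spoofing attack described above is the case where Mallory's fake key is signed only by other keys Mallory controls: no chain reaches back to anything you actually trust.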


Hey githubbers, could you please stop repeating the tired "responsible disclosure" meme?

Full disclosure is not irresponsible and attempts to frame it as such are bordering on malicious toward the exact community in which you are attempting to engender goodwill.


I disagree that "responsible disclosure" is either tired or a meme.

Software development is hard. Most projects are developed by teams, not single contributors. Consequently, part of reporting bugs is enduring the back-and-forth of communications with teams. Reporting bugs is not an all-or-nothing game.


It certainly isn't tired, and "responsible disclosure" policies are absolutely preferred over any sort of free-for-all distribution and posting of PoC code to the world before the vendor. I think most professional security researchers have always subscribed to the general idea of responsible disclosure of vulnerabilities, even before things like RFPolicy [0] brought the concept to a wider audience.

However by any reasonable definition [1] it is a meme, being a "unit for carrying cultural [...] practices that can be transmitted [...] through writing [or] speech." Remember that memes existed as a concept long before LOLcats and formulaic GIF images with amusing text macros on the Internets...

    [0] http://www.wiretrip.net/p/libwhisker.html
    [1] https://en.wikipedia.org/wiki/Meme


My problem is not with companies preferring advance notice, or with people who abide by what is called "responsible disclosure". Indeed, it is often the Right Thing To Do.

The problem is that use of the phrase "responsible disclosure" FRAMES anything that does not conform to that narrow definition as "irresponsible disclosure", when in reality it simply is not. (It is not irresponsible to pull a @homakov, for instance.)

It's "framing": a way that use of language shapes our thinking about the world and events therein, sometimes and usually without our explicit conscious consent to such bias.

Please stop using the term. "Advance developer/vendor notification" is a suitable replacement if you wish.


If we were playing table tennis, I would comment on the tremendous amount of english you put on that ball.

On the one hand, I concede your point. I think your phrasing is certainly more accurate. However it isn't quite as expressive to the layman.

On the other hand, there are so few people out there who can truly grasp the nuances you're focusing on that I am wary of propagating your valid point.

I still think "responsible disclosure" is a better (albeit damaged) descriptor.


Isn't $5000 ridiculously low compared to the black market value of a GitHub exploit, or the time required to develop it?

Assuming a company thinks it's pretty secure, putting real money on the line (the same money you'd normally pay an expert to pentest your system) would get some more prolific minds involved.


No, it is probably not. People have weird ideas about how much random web bugs are worth. Big ticket bugs are easily monetizable, and/or attack a huge install base with a very slow patch cycle. People hear about 5-6 figure bugs, but those are typically reliable browser clientside RCEs.


GitHub also has a slow patch cycle: Enterprise Edition.


It is easy to think that but I think that isn't the case for a few reasons:

- You only need one person to report it, so if Nefarious Nigel has found it and is planning to use it for profit, but Sweet Sarah finds it and reports it, then the program worked. I imagine this is the case for the majority of bugs (but I can't prove it).

- $5000 isn't in a different order of magnitude from Google's rewards, and Google has paid out several million dollars. This demonstrates that bounties do motivate people, but also that adding a zero would likely have a far larger impact on revenue than Nefarious Nigel and his evil plans.

- I think a large number of smart people would (rightly) be scared of taking the black-market route, but are motivated when they know there isn't a legal risk. Or put differently, the risk-to-reward ratio ("pot odds") becomes worth it at this value for a legal prize.


Making $5k legally might be more appealing than making $20k on the black market, for example. When you have to hide your tracks and risk getting caught, a lower but guaranteed sum might be more appealing. Also, GitHub is new at this; they might raise the bounty once they see how the program progresses.
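The "pot odds" point is just an expected-value calculation. The numbers below are purely illustrative (only the $5k/$20k figures come from the thread; the success probability and penalty are made up), but they show how a sure legal payout can dominate a larger but risky illegal one:

```python
def expected_value(payout, p_success, penalty=0.0):
    """Expected value of a payout that succeeds with probability
    p_success; on failure you pay `penalty` (legal costs, etc.)."""
    return payout * p_success - penalty * (1 - p_success)

# Illustrative numbers only: the legal bounty is a sure thing,
# while a black-market sale might fall through or end in prosecution.
legal = expected_value(5_000, 1.0)
black_market = expected_value(20_000, 0.4, penalty=50_000)
print(legal)         # 5000.0
print(black_market)  # -22000.0
```

Under these (invented) assumptions the black-market route has sharply negative expected value, which is one way to rationalize why modest bounties still attract reporters.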


Bug bounties are rarely competitive with their black-market value. I think in most cases they're intended more as a "thanks!" than a "please don't hack us".


You're sort of just re-stating the question. I think everyone understands that's the way things are. The OP is saying that the way things are doesn't make much sense.

My guess is that the thinking goes something like this: White hats aren't going to hack us anyway, and will be fine with the tiny rewards we give them. So there's no reason to increase the rewards for them. Black hats probably aren't going to be dissuaded even by very high rewards, or perhaps even with high rewards they'd try to have their cake and eat it too, selling exploits first and then reporting them. Basically, they can't be trusted so trying to buy them off with a fair-market price isn't even worth it, so we may as well ignore them in our pricing strategy.

I don't know if that reasoning is correct, but I think it approximates the thinking that leads to the status quo in this case.


I doubt it. Why wouldn't github want to pay more so that black hats also sell them bugs? Indeed, these are the very bugs that are going to be exploited, so it makes perfect sense for them to pay whatever it costs.

The real reason is probably: because nobody else does. I think it is doubtful black hats would sell their bugs to github unless github was paying 2-3 times the market rate, since the black hat can sell the same bug to multiple people.


It is a mistake to assume that bug bounties exist to compete with black market prices.

I argue that bug bounties are a pressure release valve for people who know that there's a problem, but are unsure whether they're at risk of getting lawyer'd or prosecuted for disclosing vulns.

No private entity can compete with nation states for vulnerability rewards.


Someone on the black market will almost always pay more than the company will. The real value in responsible disclosure typically comes from a consulting contract that may follow the report. Their leaderboard also seems like a good way to build credibility in the community.


I think an ideal balance point (for all companies, not just GH) is one where someone can make a very comfortable living finding and reporting security flaws. You simply don't get that with $100-$5k bounties. GitHub, more than nearly any other company out there, is entrusted with trade secrets that are the livelihood of its customers. Paying top money to get security bugs found is not optional; it should be regarded as a cost of doing business.


Great to see GitHub recognizing processes for security researchers between the ages of 13 and 18 in the FAQ. In the new age of crowdsourced skills, it's good to see that age isn't a barrier.


Now there is nothing to hax.


You should ask them to apply it retroactively.


I don't need anything. I was saying that you now need some good 0-days, because it is hard to find a silly XSS on GitHub these days.


Great news. I'm happy to see this program in "public mode" now. GitHub already launched this program as a private beta in May 2013.

https://twitter.com/totally_unknown/status/42899282447475916...

Don't expect to earn easy cash here. :)


I wonder if the reward values have changed since the beta? I'm sure it is much harder to find anything now than it would have been back then, assuming they got a good turnout from really experienced people and 7-8 months of headstart.


That's from the private beta:

"We are using a simple severity ranking scheme: Low - Medium - High - Critical. Rewards range from $100 up to $5000 and are determined at our discretion based on a number of factors. For example, if you find a reflected XSS that is only possible in Opera, and Opera is only 1.64% of our traffic, then the severity and reward will be lower. But a persistent XSS that works in Chrome, at 59.53% of our traffic, will earn a much larger reward."


You just have to wait for homakov to put himself at the top of the leaderboard.


This looks great.

I especially like that they have "rules for us", and also a section at the bottom discussing discretionary bounties for their properties not covered in the main list.


We recently launched our bug bounty program for Commando.io as well: https://commando.io/security.html


So the Firefox UI is much like the Chrome UI minus border radius.


If you're referring to the browser screenshot, I think it's Opera (which is now based on Chromium) rather than Firefox. That would at least explain some of the similarities.



