A little note about Slack’s Bug Bounty program (abhartiya.wordpress.com)
364 points by wwarneck on March 27, 2015 | 48 comments



I read the PDFs of #39132 and #51179, and first, these are very clear and well-written vulnerability reports. Props to the author for that; such reports are often extremely hard to follow, and these are shining examples to the contrary. I found them easy to follow, with enough detail to reproduce, and they describe quite valid issues.

Second, I'll put my neck out here a bit and say, I find myself agreeing with the author's stance. Namely,

1) Independently discovered vulnerabilities are not "owned" by the first person to discover them. As a courtesy, you may defer to another researcher, or combine your efforts, but I don't think there's any requirement to do so.

2) 90 days' notice is more than enough time to expect at least a cursory response when you say, "Has this bug been fixed? Shall I go ahead and disclose it?", and then again, "This is a heads up that I will be blogging about this on March 12, 2015 i.e 90 days after the initial disclosure unless I hear otherwise. Thanks!", and then AGAIN, "This is a reminder that this bug will be disclosed in 4 days :-)".

In Google's case, for example, it's not just 90 days' notice, it's a 90-day deadline to fix. In this case, a simple "no, we need more time, please don't disclose this" response on the 2nd issue could have avoided the whole problem.

Bug bounties, particularly a fully managed program through HackerOne, encourage engineers to spend valuable time and resources investigating and writing up detailed reports of complex issues. If you sign up to run a bounty program, it's essential that you give participants the time of day, like responding to their repeated inquiries about disclosing an issue.

It wasn't clear to me if the author was banned from HackerOne or just Slack's program. If the latter, well, that's fine; Slack absolutely has the prerogative to invite whomever they like to participate in the program. I think they are missing out on a great contributor in this case, though. If the author suffered an outright ban on the platform, that would be distressing.

Lastly, if the 3rd vulnerability was unknown to Slack before the author reported it, I think the author should be properly compensated based on the terms of the program.


I was banned from reporting any bugs to the Slack Bug Bounty program on the HackerOne platform after reporting the 3rd bug. What's worse is that I wasn't even notified of this. Even HackerOne didn't think it was necessary to do that.


I guess that means that you will be forced to publicly disclose any more vulnerabilities that you find... /s


Why the sarcasm? That's exactly what this means.


Does this mean you should do immediate public disclosure when you find bugs now?


What a wonderful reward for your service.


On point 1, how do they even expect a researcher to know that some other researcher has found the exact same bug if it's not disclosed yet? This seems like a fake rule whose only possible effect is to suppress disclosure.


Exactly. Their word is as good as mine, right? This is the biggest problem I see in bug bounty programs. You are at the mercy of the program.


Isn't this a "solved problem"? You publish a hash of each report when it's received (or sent, if you're the sender intending to establish precedence); then, when all the reports are revealed, it's clear which ones were reported in which order.

It doesn't let you know what others have discovered/reported, but it solves the "their word against my word" problem...

(Of course, if they're actively trying to minimise bug bounty payouts and are prepared to screw over people attempting "responsible disclosure" to do so, they've got a lot of motivation to _not_ implement this from their side. That doesn't stop researchers from posting hashes when they make reports, and the rest of us from being able to verify those hashes when the bug is publicly disclosed.)
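A minimal sketch of that kind of hash commitment, assuming a plain SHA-256 over the report file (the filename is just illustrative):

    import hashlib

    def report_digest(path: str) -> str:
        # SHA-256 hex digest of the report file's exact bytes
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # At report time: publish this digest somewhere public and timestamped
    # (a tweet, a gist, a mailing list post).
    print(report_digest("report.pdf"))

    # At disclosure time: anyone can recompute the digest from the published
    # report and check it against the earlier commitment.

One caveat: the report has to be revealed byte-for-byte identical later, or the digest won't match.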


Well, they did say "no, we need more time, please don't disclose this" on the iOS auth bug. The author's response was to wait 90 days and then disclose it without waiting for the go-ahead. Is this SOP in security circles? It's certainly unusual in non-security interactions.


"Disclosure deadlines have long been an industry standard practice. They improve end-user security by getting security patches to users faster." http://googleprojectzero.blogspot.com/2015/02/feedback-and-d...

"Vulnerabilities reported to the CERT/CC will be disclosed to the public 45 days after the initial report, regardless of the existence or availability of patches or workarounds from affected vendors. " https://www.cert.org/vulnerability-analysis/vul-disclosure.c...


No, the author's response was to contact them again to get a status, get no response, then disclose after 90 days, all while informing them multiple times of the imminent disclosure.

It's great that they said they needed more time, but they completely dropped the ball on that communication chain, and got more than enough chances to ask him to wait.


I'm currently in charge of answering reporters on a HackerOne program, and I can tell you that the way Slack is managing its own is completely unacceptable. Those reports were really high-quality ones; whenever I receive a report like that, I cry with joy. If you run a bounty program you should:

- Be ready to answer every single report on a short timeframe

- Be fair and provide feedback to the reporter

- Be nice, be thankful and reward the researcher if they deserve it

- Be patient with duplicate reports and people just trying to get an unfair Hall of Fame mention

Otherwise it may backfire on you, and eventually it will.


Not a surprise they got hacked...

These two issues (the hack and how they treat bug reporters) challenge the way I see Slack as a company...


Thanks. It is good to know that such programs exist as well.


Good stuff. The general guidance we put in front of folks at Bugcrowd:

It’s important to note up front that Bugcrowd programs differentiate between technical validity and rewardability, in order to maintain fairness to researchers and organizations alike. This means our customers only pay for issues with impact to them, and researchers get solid technical feedback about their submissions.

We’ve found it is not uncommon to have a submission that is technically an issue, but lacks the impact necessary to reward it. In other words, it’s an ‘acceptable risk’ to the organization. Conversely, we’ve handled situations where features were ‘as designed’, but turned out to be major security flaws that were fixed. This process has helped our customers determine what to do in those cases. If you’re thinking about starting a program, we’d encourage you to differentiate between these concepts as well.

The Golden Rule(s): If you touch code or configuration because of the submission, reward the researcher. Here’s the detailed process:

1. Does the submission adhere to the terms and conditions? If not, mark as invalid and explain.

2. Was the submission communicated as an exception in the program brief? If so, mark as invalid and explain.

3. Has the submission, or its root cause, been reported previously? Duplicates can and should be traced back to the code or configuration change that would resolve the issue. If that change has already been made, is in the queue to be made, or has been accepted as a risk, mark the issue as a duplicate and explain.

4. Is the submission technically reproducible? If not, mark as invalid and ask for clarification if you think it has technical merit. If so, it should be communicated as valid and you can move to determining the reward.

5. Will it cause you to make a code or configuration change now, or in the future? If so, it’s rewardable within the terms specified in your bounty brief. Issues with more impact should be rewarded at a higher level. Additionally, if it’s noted in the focus areas for the bounty, it’s worth more. If it does NOT cause you to make a code or configuration change, then provide reasoning to the submitter, and to the extent possible, push it to the brief as an exclusion for future testers.

It’s very important to consider the work of the researcher in step 5... If you think of other areas in the target that are impacted, or your security posture has been improved by discussion from the submission, we encourage you to reward it. If you’re on the fence, push the submission back to the researcher and ask for more information or how they view the impact. Remember, the researcher put effort into finding it for you, and it’s to your benefit to work closely with them and encourage them to go further.

The Golden Rule, transparency about your choices, and proactive communication should guide your judgement at all times.
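Purely as an illustration (none of this is Bugcrowd's actual tooling; the field and status names below are hypothetical), the decision flow above could be sketched like so:

    from dataclasses import dataclass

    @dataclass
    class Submission:
        in_scope: bool              # step 1: adheres to the terms and conditions
        excluded_by_brief: bool     # step 2: listed as an exception in the brief
        duplicate_root_cause: bool  # step 3: root cause already reported or queued
        reproducible: bool          # step 4: technically reproducible
        triggers_change: bool       # step 5: will cause a code/config change

    def triage(s: Submission) -> str:
        if not s.in_scope or s.excluded_by_brief:
            return "invalid: explain why"
        if s.duplicate_root_cause:
            return "duplicate: point to the original fix or risk decision"
        if not s.reproducible:
            return "invalid: ask for clarification if it seems to have merit"
        if s.triggers_change:
            return "valid: reward per the bounty brief (more impact, bigger reward)"
        return "valid but not rewarded: explain why and add it to the brief as an exclusion"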


Combined with their recent hack and their exposing of team names a few months ago, I'm becoming wary of using Slack. They don't seem to have a security-focused culture, and that's a massive problem for a communication service.


I accidentally signed into another organization's channels on Slack. Had to delete and re-make an account to get into my organization's channels. Apparently there were already concerns about Slack not taking security seriously, so it wasn't really all that shocking to me. Currently not using Slack, obviously.

Sorry, other organization – I hope you guys move off sometime soon.


Keep their attitude toward security flaws and the disclosure today in mind when you consider what would happen if your entire company's private chatlogs suddenly became public. Same goes for Hipchat too.

User accounts and passwords are easy to reset and fix.

Credit cards are easy to reset and fix.

Years of private company discussion showing up in the wild? You're unlikely to ever recover.


"Years of private company discussion showing up in the wild? You're unlikely to ever recover."

I'd be angry but my company wouldn't be ruined. The worst thing I ever say in internal communications is to poke fun at a couple of our grumpier or more entitled users. I also probably curse slightly more than is entirely prudent. But, that'd be mildly amusing to have exposed, not ruinous.

If you're saying stuff that you would be "unlikely to ever recover from" were it shared with people outside of your company, then perhaps that's not the kind of thing you should be saying.


I'm neither saying nor worried about statements which are hateful, sexist, racist, etc. Like most mature non-brogrammer companies, we don't have those sorts of issues in the first place.

I'd be worried about company strategy, vision, intellectual property, keys/passwords, system infrastructure, or other details leaking, which could hurt our competitive edge, lessen our valuation, or expose our users' PII.


I'm siding with the vulnerability researcher here. This is ridiculous. The spirit of a bug bounty program is for the company to incentivize a researcher to find bugs and work together to squash them.

Maybe this is more telling of HackerOne as a platform.


I think HackerOne as a platform failed here as well. I understand they try to stay out of it as much as they can and just collect the fees for the bounties paid. But, in cases like this, I feel they should have been a little more proactive about it. I received an email from HackerOne shortly after I wrote this blog post saying they would investigate, but I haven't heard a word since. Soon after that, I found out myself that I had been banned from reporting any bugs to Slack, without any notification. HackerOne could have at the very least sent me an email out of courtesy, but no, that never happened. I even sent them a follow-up email asking for clarification but haven't heard a word to date.


The stats are right there on the homepage of HackerOne: $2.36M in bounties paid and 7,662 bugs fixed. Sounds like a lot of willing participants.


If a company is progressive enough to participate in bug bounty programs, they need to be damn certain that they are doing everything possible to interact with their community in a clear and friendly manner. The Slack security team clearly dropped the ball here in regards to properly communicating with the researcher and responding to them in a reasonable amount of time. This type of behavior is unacceptable and goes against the spirit of crowdsourced security research.


It feels to me that Slack is trying to be all high-and-mighty/"no you're wrong, we're right [even though you're right]"


I definitely got a very bad attitude from them. I was really trying to be nice and report bugs with detailed reports, including detailed PoC videos demonstrating everything. But it was disappointing to see their responses, honestly.


http://slackhq.com/post/114696167740/march-2015-security-inc...

http://valleywag.gawker.com/slack-is-letting-anyone-peek-at-... https://news.ycombinator.com/item?id=8425799

Tbh, at this point I wouldn't be surprised if these "problems" occurred after someone discovered the bug and reported it.


More (or less?) shocking here is the total lack of an official Slack response to this thread. Does not bode well..


In case you hadn't seen yet, I suspect everybody at Slack in a position to comment on the company's behalf in public is probably kinda busy with more pressing matters right now:

http://slackhq.com/post/114696167740/march-2015-security-inc...

(And I can't help but wonder if these two issues aren't connected somehow. Someone may have been sitting on a 0day trying to do the right thing in expectation of a bug bounty program reward, only to discover exploiting or selling it was the only avenue likely to actually pay out... Surely this isn't just coincidental timing?)


I've been a big fan of Slack since the beginning, but this seems pretty scathing. The folks at Slack should really address the contents of this post.

As it stands, if I'm ever in a place considering using Slack, I would be coming back to this HN thread to see how this particular issue was resolved.


Funnily enough, I was looking at the requirements for a Security Engineer at Slack. https://slack.com/jobs/dfd75111/security-engineer The secure coding practices and protection-against-attack part of the job is minimal. Compare that to Google's Information Security Engineer role. https://www.google.com/about/careers/search#!t=jo&jid=29144& Well, job descriptions in advertisements don't indicate much, but one gets the idea.


While I agree that the reports are high quality and Slack should've handled it better, I don't see any security risk in either report. They would have closed them as N/A one way or another. Just a rude response, though.


This is somewhat unrelated, but maybe folks here have some experience with disclosure programs like HackerOne, which is something that I don't have much familiarity with.

We'd love to have a way to encourage security researchers to focus on our software and give us reports, but we're Open Source and our budget is minuscule. What is considered "insulting" as a minimum reward? What will actually get professional people looking at it with a critical eye? Is its popularity (~1 million users and a pretty well-known Open Source project) enough to compensate for not paying very well for disclosures?


You generally won't get professionals unless they're feeling charitable. They're busy with paid work and don't need credit. You also won't insult anybody if you're a non-profit. It's for-profit companies that offer t-shirts that get criticism.


I would assume that there wasn't some technical issue, such as Slack not actually receiving notifications of his messages. If there indeed wasn't, this seems like a pretty low way for Slack to have handled the issue.


Notes on “a little note”.

Hi, this is Ryan. I work at Slack.

Bug bounties are great, but managing them can be a challenge. Like many companies that run a popular bounty program, we receive quite a few vague reports, invalid reports, and reports generated by automated scanners. We work through these daily to ensure we are focused on the bugs that can have an adverse impact on our users.

We have positive interactions with the people who report bugs, and we appreciate the hard work involved in uncovering issues. If you find a bug, report it via HackerOne and we will reward your work. We have rewarded researchers for over 300 bugs found so far!

Anshuman sent us the first report in December. At a glance his report appeared to be well written and detailed. When triaging bugs, those two things are especially helpful. (We appreciate well written POCs!) We reproduce every report received, so below I will convert his report into a description of the problem and a series of steps needed to reproduce it (original report quoted).

------------

From the report:

“Slack users are allowed to share files (posts, snippets) with other users and within channels.”

True

“When a file is shared in a channel and unshared again, it is clearly mentioned on the website that: Un-sharing the file will not remove existing share and comment messages, but it will keep any future comments from appearing in the channel.”

True. This is what the un-sharing feature does. As stated above, files.unshare is in no way an access control feature.

“This makes it obvious that on sharing and then unsharing a file within a channel, it will still remain shared and can be viewed by others on that channel. This is the way it is supposed to be.”

True. Again, this API call is used to stop new comments about a file from appearing in a channel, not to remove the file from a channel. (Deleting is done by the files.delete method) So far no bug, just things working as expected.

“Now, when a file is shared with a Slack user, currently, there is no way to unshare it again from the UI.”

True.

“But, this can be easily done by sending a request to the https://<domain>.slack.com/api/files.unshare end point instead of the https://<domain>.slack.com/api/files.share end point.”

The reporter is proposing that the victim call files.unshare to utilize a “hidden feature”. The reason a user might do this is left to the imagination.

“It is as simple as that.”

There is no instance of files.unshare being called this way in the UI, because that is not what it does. Calling an API method that is not documented is never guaranteed to do what you assume it does.

------------

What Anshuman has created is a scenario where the “victim” must:

1) Use Slack via the web, mobile, or desktop application.

2) Share a file with another user.

3) Observe API calls (or read the JavaScript).

4) Make an assumption about what api/files.unshare is used for.

5) Call that API method directly (curl, JS, whatever); a sketch of such a call is below.

6) Expect that the method does what you have guessed. (It doesn't, because the reporter's guess was incorrect.)
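For concreteness, step 5 would amount to something roughly like the following sketch. The parameter names ("token", "file") and all of the values here are illustrative assumptions, not Slack's documented API contract:

    # Hypothetical sketch of calling the undocumented files.unshare endpoint
    # directly. Parameter names and values are assumptions for illustration.
    import requests

    SLACK_DOMAIN = "example"               # hypothetical workspace subdomain
    API_TOKEN = "REPLACE_WITH_OWN_TOKEN"   # the caller's own session/API token
    FILE_ID = "F00000000"                  # hypothetical ID of the shared file

    resp = requests.post(
        f"https://{SLACK_DOMAIN}.slack.com/api/files.unshare",
        data={"token": API_TOKEN, "file": FILE_ID},
    )
    print(resp.json())  # whatever the endpoint returns; behavior is undocumented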

------------

Testing this report involved working with multiple developers to review the nature of files.share and what the impact of this bug would be. At the end of our investigation we replied to the reporter saying that we appreciate his effort, but this is not a vulnerability, because files.unshare is never used in this way. Unfortunately, we then received this message from Anshuman:

“I am giving you a heads up that I will be blogging about this sometime today. Thanks for your time.”

So after we spent hours reproducing this and then explaining to Anshuman why it isn't a vulnerability, his reaction was to create a blog post titled “Hidden Feature in Slack leads to Unauthorized Information Leakage of Files”.

I believe that HackerOne is a valuable platform, and outside of this instance our experience has been extremely positive. We will continue to use it and look forward to working with new people.

Btw, I’m not off the hook, because I did something wrong too. I failed to keep Anshuman updated on a second report he filed in December. I absolutely agree that bug bounty participants should receive timely replies to their queries. This oversight is regrettable and this mistake will not be made again. My apologies to Anshuman for not keeping him updated on the status of the bug, which would have allowed proper coordination and disclosure.

Good Hunting,

Ryan


Good to see a response on this from someone at Slack. However I'm now somewhat confused.

If you accept that Anshuman should have been updated on the 2nd bug, and that it was only after multiple unanswered requests that he blogged about it (a completely reasonable reaction, I'd say), why did you then (I'm guessing you're the same Rhuber as the one commenting on the 3rd bug) say that he had gone against the spirit of the site and have him removed from your bug bounty programme?


There are a few reasons for my comment about going against the spirit:

1) https://hackerone.com/disclosure-guidelines states:

"If 180 days have elapsed with the Response Team being unable or unwilling to provide a disclosure timeline, the contents of the Bug Report may be publicly disclosed by the Researcher. We believe transparency is in the public's best interest in these extreme cases."

2) He set an arbitrary 90-day disclosure checkpoint.

3) We explicitly asked for more time in dealing with the bug.

4) We had an extremely negative experience with him during his first report. He was unnecessarily adversarial when we patiently explained that he had not found a vulnerability.

---------

Within the HackerOne interface, a "Duplicate" is actually listed as a Closed:Duplicate issue, and doesn't appear in the Open issues tab at all. Perhaps a method of attaching duplicates to the original and allowing communication between all involved would be useful? ¯\_(ツ)_/¯


Regarding point 4 above, the entire conversation can be found here - https://docs.google.com/document/d/1q-aKtxS6xNIhal0As743tBE1...

So, I report a bug that I think is a security vulnerability. You fail to even understand the report in the first place. You don't even try to watch the video PoC demonstrating it in action. In a nutshell, you handle it completely wrong from the start.

Then, you come back and tell me it's not a security vulnerability because it's a hidden feature or whatever other reason you have.

At this point, there is not much I can do but present my justification for why I think you are wrong. I present my opinion, which I'm entitled to just as you are. And I let you know that I will blog about it.

Do you really think I was being "unnecessarily adversarial" there? I rest my case.

With regard to the second issue being a duplicate, I believe you guys must have already fixed it by now? If so, would you mind disclosing the originally reported bug to shed some more light on the question of whether it was really a duplicate or not? I understand you don't have to do that; it's just a suggestion. Feel free to ignore.


Good point about the HackerOne disclosure timeline; it sounds like the reporter should have waited for that to elapse prior to disclosure.

Not sure I'd say 90 days is entirely arbitrary, as some of the big players (i.e. Google Project Zero) seem to have concluded that that's the appropriate delay between report and public disclosure (whether that's always reasonable is another matter).

I'd guess he may have felt the request for more time didn't apply, as he wasn't getting any further communication about the bug's status...

And it sounds like a good feature request for HackerOne on dupes; this won't, I'm sure, be the only instance where this kind of miscommunication happens!


Considering the fact that they were unresponsive and dismissive, and given some feedback about Slack's bug bounty program from other fellow researchers, I really didn't think 180 days was worth the wait, so yeah, I chose my own. I am at liberty to do that, just like Slack has the liberty to ban me from their program. So yeah, your guess is good!

And it's not only me who has had such a terrible experience with their program. I know at least 3 different researchers who have reached out to me to tell me that they have gone through the same experience. They prefer not to speak out. I did. Period.


[flagged]


> I will not speculate on your real intentions here, but i will let you know that everything you have done looks like you have ulterior motives. The post you made, while wordy, utterly fails as an excuse or even apology, the last of which you are due for.

This is hyperbole, unnecessarily accusatory, and counterproductive. Nothing above made it look like Ryan/Slack had ulterior motives. You disagree with how they interpreted things.


You and I may disagree on whether something looks like actions taken out of ulterior motives, but to me the whole episode looks like it, and that's just honesty, nothing else.

As for interpretation: There is no room for interpretation when someone claims "no vulnerability" and "disclosed a vulnerability" on the same thing.


Given how Slack doesn't care at all about security, it looks like one more reason to migrate to and contribute to Let's Chat[1].

(Note: I'm not related to letschat or sdelements in any way)

[1]: https://sdelements.github.io/lets-chat/


I was awarded a big bounty for finding a big hole last summer (and it is a BIG one). I had a pleasant experience. Maybe the folks who were managing the program are gone. But it is sad to see that some people have had a negative experience.

In general, Slack is full of security holes, if you look at the number of bounties awarded. On the other hand, it's great that they do pay people for finding security bugs. I just hope they can tighten up the code and run more security checks before deploying to production...


What's your idea of BIG? This is not at all helpful if you don't state the amount.


I'm having a hard time sympathizing with the filer since he seems to envision a team of people sitting on the other side waiting for him to press submit.

But there's simply no excuse for the mediocre treatment of what by all accounts appears to be an authentic bug and genuine submission (and with quite a bit of care and energy behind it).



