Hacker News
Instagram's Million Dollar Bug (exfiltrated.com)
1562 points by infosecau on Dec 17, 2015 | hide | past | favorite | 516 comments



Thank you to everybody who cautioned against judgment before hearing the whole story. Here is my response: https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


I think the root cause of the problem is the unclear policy by FB. Privilege escalation can be hard to catch, and can be a separate bug in and of itself, even if it requires a separate exploit to get the initial privileges.

The published policy didn't say anything about not doing what he did. I'm not going to argue that what he did should or shouldn't be ok, but FB has no control over what other people do. Yeah, maybe it'd be better if people asked for clarification first instead of asking forgiveness, but there's no way to force them to do that. FB does have control over what their policy says and allows/disallows. If you don't want people to exfiltrate any data and look at it on a local machine instead of just keeping a session on the exploited machine, then put that in the policy. If you don't want people poking around for other exploits after gaining access, then spell that out in the policy.

The point of the policy isn't to stop everyone. Sure it will stop some/most people, but some people don't listen. The point is that when it happens again you can point to the clear policy and say "you're an asshole, we're not paying you because you violated our explicit policy, and we are reviewing what you did with our lawyers to see if we should notify law enforcement".

Yes, doing this fix/policy update now doesn't fix this situation, but it prevents anyone else from doing something similar and claiming ignorance of this situation and FB's position.


Correct, the policy isn't clear and needs improvement. The bug bounty's policy definitely falls under the CSO's purview. So even if you approve of Alex's handling of the matter, you can't forgive him for running a sloppy bug bounty program. It would be one thing if he said mea culpa, we could have done better. But there's not one iota of regret, remorse, or apology for the lack of clarity in Alex's response.

If you're going to prosecute someone over details, you had better make sure your policy is very detailed, not vague, and not left open to interpretation. In this regard, Mr. Stamos failed.


The policy reads clear enough to me to warrant a huge reward. Adding additional conditions after the fact is dealing in bad faith.


Most RCE bugs can be compounded into major data dumps. That doesn't make each individual RCE a million-dollar bug.


Every RCE that could have been sold on the black market for a million dollars is worth almost the same reward (modulo the advantage of being legal).


Think about this guy being a Russian hacker instead, selling the ability to access restricted accounts, pose as an Instagram administrator, and, I assume, access users' data freely.


I would have come here to say this if you had not said it already.

A major root cause is that the published guidelines say nothing directly about exfiltrating sensitive data. This leads to legitimate confusion for exactly the reasons given. The actual policies make sense given what the published guidelines say, but that's not good enough.

The policy needs to be changed. Not by much, but it needs changing. Here is a Responsible Disclosure Policy that might work better than your current one:

We expect to have a reasonable time to respond to your report before making any information public, and not to be put at any unnecessary risk from your actions. Specifically you should avoid invading privacy, destroying data, interrupting or degrading services, and saving our operational data outside of our network. We will not involve law enforcement or bring any lawsuits against people who have followed these common sense rules.


Why do the policy specifics matter? A blackhat won't be respecting those rules, and won't need to negotiate a reasonable payday with facebook.

The real issue here is facebook's poor infrastructure security and slow response time. If the exploit had been previously reported, why was the privilege escalation still possible? Why did a (supposedly) known-to-be-vulnerable host have access to secret information at all?

The exfiltration of data may have been unethical, but facebook has no one to blame but themselves for it even being possible.


> Why do the policy specifics matter?

Companies take big risks in running bounty programs. They are giving hackers permission to test their live site. This isn't something that is popular with everyone inside a company. Bounty hunters need to respect that bounty programs are a two way street. If you find a serious issue like remote code execution you need to be extra careful. Wineberg was an experienced hunter. He should have known better.


Usually serious security issues require some kind of escalation, and escalation probably requires, at some point, exfiltration of (non-personal) data. If the rules of the program are that restrictive, I don't know how many serious bugs will be found by "ethical" hackers...


That might not be the point. The point might be to allow the intersection between what is palatable for the company and what serious exploits white hat hackers can come up with.

No company wants to include in their privacy policy that anyone can legally access and download your data if they are trying to perform exploits on the system.


No, the root cause is having a 2 year old, known RCE that was only patched after this researcher got SSL certs and app signing certs.


The policy hasn’t been changed, though. There’s still no explicit statement that privilege escalation invalidates a report: https://facebook.com/whitehat


Let's take a step back here: Facebook threatened to have a security analyst arrested for demonstrating and promptly disclosing the full extent of a serious exploit in a non-destructive manner. Whatever other behavior he engaged in that was unnecessary or ineligible for the bug bounty program, that's incredibly unethical on your part. Especially so, because you clearly didn't believe he was going to do any damage to your system or you would've actually called the FBI instead of someone he worked with.

So, you just wanted to cause him reputational damage and personal problems as an act of petty retaliation. You're right on some of the technical issues here, but in terms of ethics, your behavior has been far worse than his. I don't think you realize how much long-term damage you're doing to your relationship with the wider security community by threatening to jail people who were at no point acting maliciously and at no point caused any damage.


This isn't all that complicated, as far as I can tell.

Guy discloses a vulnerability. He knows it potentially has wide reaching security concerns, and downloads enough data to prove that if necessary.

Guy gets shortchanged on the bounty, indicating that either a) facebook is trying to shortchange him, or b) facebook doesn't realize how big of a vulnerability this truly is

Everything about Facebook's response indicates b): they didn't realize how big a vulnerability this truly was. Otherwise, the data he downloaded would have been useless by the time he used it.

You can argue that the guy "went rogue" by holding information hostage, but the fact is he deserved to be paid more and he was able to prove it. Now Facebook looks bad.


Guy discloses vulnerability. Facebook is not as impressed as guy would have hoped. Maybe it's because he's one of several people to disclose the same vulnerability. Maybe there are just a lot of vulnerabilities (they've paid out 4.3m in bounties).

Guy's reaction to rejection: take hostages and threaten Facebook. Facebook moves to defense and cuts guy off.

You are not a good neighbor for kidnapping someone's family to prove that their busted lock is a big deal. You show them their lock is busted and trust they can figure out what harm that could lead to. The alternative is companies being hostile to people just looking around their locks, which is the world of the 1990s and 2000s that responsible researchers are trying to avoid going back to.


This is, of course, Facebook's narrative, which conflicts with Wes's.

One obvious hole I can see in Facebook's story is that they insinuate that Wes broke back into the server after they disputed the bounty. If this were true, they did nothing in response to the problems Wes found for over a month.

If you look at Wes's timeline, he says access to the server was no longer possible a few days after he filed the second report.

It comes down to who you believe. Personally, I find Wes to be more credible. It sounds like it was most likely a misunderstanding by Facebook. Now they are doing damage control.


"With the newly obtained AWS key... I queued up several buckets to download, and went to bed for the night."

He definitely took data off of Facebook's server.

Also, you misunderstand: the access denial was a firewall change earlier in his story, made while he was merely speculating about other systems he could have penetrated, and completely separate from the S3 buckets he took data from.

From Facebook's perspective it could very well have seemed like he went back for the goods since he submitted three separate reports, the last of which triggered the response. But this is also irrelevant, the question is whether he took data off or not and this is unambiguously yes, by Wes's own admission.
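For anyone unfamiliar with how little effort the "queued up several buckets" step takes, here is a minimal sketch of that kind of overnight bulk queueing. All bucket and object names here are invented, and the fetch function is a stand-in, not real S3 code:

```python
# Toy sketch of queueing many bucket downloads; fetch() is a placeholder for an
# authenticated S3 GET using a leaked AWS key -- no real AWS calls are made.
from concurrent.futures import ThreadPoolExecutor

def fetch(bucket, key):
    # A real exfiltration script would issue a signed S3 request here.
    return f"downloaded {bucket}/{key}"

# Invented bucket/object names for illustration.
targets = {
    "static-assets": ["logo.png"],
    "secrets": ["ssl.pem", "signing.key"],
}

with ThreadPoolExecutor(max_workers=4) as pool:
    jobs = [pool.submit(fetch, b, k) for b, keys in targets.items() for k in keys]
    results = sorted(j.result() for j in jobs)

print(results)
```

The point being: once a key leaks, pulling every reachable object is a ten-line script left running overnight, which is why "did he download it" matters so much less than "could anyone have".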


Honestly, I think he did go too far downloading the S3 data, but nothing in their policy stated or implied that was against the rules. He did not violate their written guidelines. And so, Facebook should have paid him (and then changed their policy), even if begrudgingly.


Here is what's happening right now:

FB: He's an experienced bug bounty hunter and should know where reasonable borders are.

All the experienced security guys itt: He's an experienced bug bounty hunter and should know where reasonable borders are or at least not pivot/escalate without asking. Also never dump and hold data.

Everyone else: What he did isn't technically against the rules FB wrote, so they are screwing him, despite it also being written that they have sole discretion.


> All the experienced security guys itt...

Ah, so those who disagree are inexperienced? No true Scotsman indeed!


How is that a "no true scotsman"? Most people in this thread commenting have not indicated they work in the infosec industry.

(For the record, I do, though I'm not sure I'd flatter myself by saying I'm "experienced" exactly.)


The problems I have with your absolute statement:

* You are stating that all (not some) experienced security folks agree unanimously. The implication is that those who disagree are not "experienced security guys" but "everyone else" (as you called them); they are the ones who aren't true Scotsmen.

* You assume that those who don't explicitly indicate that they work in the infosec industry do not work in the infosec industry.

* Also, you do not need to be "experienced" in the infosec industry to be correct or wrong.


I wasn't the one who made the comment you're referring to. I'm just saying there is no evidence of a "no true Scotsman" here, as far as I can tell.


Apologies, I didn't notice you weren't the OP. IMO, the "no true Scotsman" is implied (though it might be unintentional).


The general theme of the thread seems to be security industry people, like tptacek (or commenters self-identifying as being in the industry), expressing concern with the researcher's actions (while still admitting Facebook didn't handle it well). The primarily negative comments don't seem to have a specific affiliation tied to them. And given HN's demographic, odds are much more of them are developers than are infosec people.

I don't think the person you were replying to was suggesting that any infosec people who fully support the researcher aren't real infosec workers. I just don't think he saw any who even claimed to be.


I disagree; this is non-customer, non-financial data, which is often considered fair game because downloading data is useful for locating many security bugs. Source code or config data is a prime target, but so are network diagrams, etc.

Defense in depth means every defense needs to be validated not just the outer layers.

PS: Further, if FB says they already knew about the bug, then anything he downloaded could easily be in the wild already and should be investigated.


This. Literally every single person I saw who identified themselves as being in the security field said the researcher went too far.

What's really getting to me is the overwhelming number of responses containing the idea that everything not explicitly banned is permitted, even after the recipient has said "no" (however indirectly, or without justification). How to deal with the grey area of consent is something every adult should know, and it's worrying to me that so many here seem to feel entitled to whatever they can take as long as it wasn't explicitly forbidden.

Obviously FB should update their policy, but at the same time it's important that we as the community use this as an opportunity to learn and discuss where the implicit boundaries are, where one needs clear-cut agreement to proceed.

Consent is sexy.


I'm a security guy and I think what he did towards the end is dubious and strange, but again, he was following their guidelines as written.


I disagree. It's not about whether or not he downloaded the data. That is an undisputed fact between both parties.

The question seems to be if he did it in good faith and within the rules of the bug bounty program.


No. The question is whether FB understood the severity of the bug and paid in proportion to its severity. When you run a bounty program, that's what you do.


This whole thing is silly. Facebook (or any other tech company) has a lot of flexibility and hardly any accountability in defining what a "million dollar bug" is. You really can't believe they are going to just hand over $1M because you think it is a $1M bug. It very well may be, but in the end Facebook will be the one deciding the value of said bug, and you will have nothing to do with their decision, so assume they just won't pay it.


Sure, they'll be the one deciding. Except that other bounty hunters are watching their reaction and their fairness in paying people for their work.

The next $1M bug that gets discovered will probably go out onto the black market because of Mr. Stamos's actions here.


No, the free market decides the value of the bug. You can either pay that value to a white hat to find it or wait til a black hat sells it.

Facebook has now demonstrated that not only will they not pay you, they will attack you publicly, slander you, and threaten you. Now what does that mean for the next hacker who comes along? Someone who is clean and wants to stay clean will avoid Facebook. Someone who isn't will realize that Facebook is now an easier target because the clean guys are staying away.


Exactly this. Facebook have just demonstrated that at best they'll get an anonymous warning and then all their private keys dumped onto Pastebin when they do nothing.

At best.


I don't think he is claiming $1 million for the bugs; he mostly wanted to share the whole story (that title was just to get some eyeballs, instead of using, say, "Facebook cheated me").


At no point did he take hostages. It's that sort of thinking that led to all this drama in the first place. He did, however, disclose, which is pretty reasonable considering a lot of us are trusting these services to protect our information.

What if Instagram bled all your browser information? People could then fingerprint billions of users and figure out who is surfing their sites (and see their pictures). What if there are pics on Instagram that people rely on being private?


Downloading data is where he crossed the line and what I meant by hostage:

"Wes was not happy with the amount we offered him, and responded with a message explaining that he had downloaded data from S3 using the AWS key..."


You make "downloading" sound more sinister than it is. Downloading something from the network is the only way to see that it's there or know what it is. There is no substantial difference between downloading and viewing in this case.


> "With the newly obtained AWS key... I queued up several buckets to download, and went to bed for the night."

This isn't about whether viewing files on the internet is technically downloading them; this is about retrieving files of enough size and quantity that you have to queue them up for an overnight download.


He kept it for a month. That is different than looking at it.


Under the assumption the keys would be revoked, it's just trash; it would have been useless. But apparently they didn't realize how serious this was, otherwise they would have revoked the keys. A month is plenty of time to change critical S3 credentials.


And how long does your browser cache the pages and assets you've looked at?


"Maybe it's because he's one of several people to disclose the same vulnerability"

The thing that gets me about this whole situation is that Facebook either didn't understand the extent of the vulnerability (which seems to be the case to me, and in which case I think Wes Wineberg should have been rewarded far more than he was for showing them how serious it was, though I wouldn't say this is literally a "million dollar" bug) or they were grossly negligent for not patching it up a lot sooner than they did. They can't have it both ways.

Are they bad at managing their bug bounty program, or just bad at responding to serious security issues? It has to be one or the other.


I'm not sure you understand how the law works


I'm not sure anyone really understands how the law works when it comes to bug bounty programs and legal retaliation by companies. Is there any case law precedent yet?


In most cases where the opposing parties are one large publicly-traded company and one small company or individual, the law works like this:

* little guy offends large company, usually through some totally well-meaning and innocent activity that, if illegal at all, is only so due to obscure, obsolete, and/or obtuse laws

* large company unleashes unholy wrath of $1000/hr law firm on little guy threatening to destroy little guy's world if he doesn't immediately comply with all demands

* lawyers laugh at the plight of little guy and say it doesn't matter what he thinks because he can't afford to oppose large company

* little guy is forced to comply no matter how absurd large company's demands are, because only other large companies can oppose large company in court

* should the large company feel inclined to sue the little guy even after he acquiesced to their ridiculous demands, little guy loses all of his possessions in his attempt to pay legal fees. little guy will run out of money before the case wraps, resulting in him getting saddled with a judgment for massive personal liability (cf. Power Ventures)

* large company is free to make the same infractions whenever they feel it's appropriate to do so, because what are you gonna do, sue them? (cf. practically every company who has ever brought a CFAA claim; Google's whole business is violating the CFAA, as well as various copyright laws)

* bonus points: large company has friends in the prosecutor's office and gets the little guy brought up on life-destroying criminal charges (cf. Aaron Swartz). if the case makes it to trial, little guy spends time in jail (cf. weev)

I don't think I missed anything.


Total aside: I have a startup idea to throw a wrench into your accurate depiction of how things currently play out. The little guy hires a full-time lawyer from the large pool of unemployed lawyers, and suddenly has legal counsel at a reasonable (relative) price for an extended time. Suddenly the little guy has more of a fighting chance against the lawsuit, instead of having to pay out his counsel at $1,000/hr. (He can fund a full-time lawyer for a year at the clip of every two weeks of his adversary's costs.)


Especially when Facebook expressly authorizes this type of activity (to some degree). The relevant passage is cited in the original article.


I'm not sure that's true in this case. But whether or not this was illegal, I generally support skirting laws if it makes everyone else more secure. To that end, I also support Snowden.


Laws aside, USD 2,500 for all that data? Hmm, is our data that cheap?


Sounds like FB acted pretty unprofessionally both in the infrastructure department and in handling of the situation. You had some embarrassing mistakes and instead of acknowledging them you tried to scare the reporter into shutting up and leaving you alone. That part is pretty clear. Whether he violated your rules and how much you pay him I don't care.


Especially in the infrastructure department. This is the huge story here.. putting all your creds on S3 in the open protected by one key?? Craziness.


Yes, exactly this. Without escalating an RCE, how would he have been able to expose this absolutely huge flaw? The initial report was inconsequential, but this seems like at the very least a much more than $2500 bug. If things like this are considered "unethical" it kind of makes finding million dollar bugs in a bug bounty close to impossible.


I agree. According to Stamos, though, there was no flaw:

> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.


If he thinks that is how it should be and nothing needs to be changed, then God save their user data. He conveniently omitted the lack of key separation and the privilege escalation shown by the researcher.


Yeah, that's like gaining root access on a server and being told "well, the fact that those commands will execute is merely Linux working as designed". Talk about missing the point...


That surprised me too. Of course, AWS keys can be used to access S3, but I don't see how exposing private AWS keys on a public facing server can be "expected behavior".
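To make the key-separation point concrete, here is a toy sketch (invented names, simplified wildcard matching, not AWS's real policy-evaluation logic) of the difference between an all-access key like the one reportedly exposed and a least-privilege key scoped to a single bucket:

```python
# Toy IAM-style policy evaluator, in the spirit of AWS policy checks but
# deliberately simplified -- illustration only, not the real AWS logic.

def is_allowed(policy, action, resource):
    """Return True if any Allow statement in the toy policy grants the action on the resource."""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
        action_ok = any(a == "*" or a == action for a in actions)
        # Trailing-wildcard prefix match, e.g. "arn:aws:s3:::static-assets/*".
        resource_ok = any(r == "*" or resource.startswith(r.rstrip("*")) for r in resources)
        if action_ok and resource_ok:
            return True
    return False

# The kind of key that was reportedly exposed: everything, everywhere.
all_access = {"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}

# What key separation would look like: one action, one bucket.
scoped = {"Statement": [{"Effect": "Allow",
                         "Action": "s3:GetObject",
                         "Resource": "arn:aws:s3:::static-assets/*"}]}

print(is_allowed(all_access, "s3:GetObject", "arn:aws:s3:::ssl-keys/key.pem"))    # True
print(is_allowed(scoped, "s3:GetObject", "arn:aws:s3:::ssl-keys/key.pem"))        # False
print(is_allowed(scoped, "s3:GetObject", "arn:aws:s3:::static-assets/logo.png"))  # True
```

The point the researcher demonstrated is exactly the first line: with the key a web-facing host held, "can access S3" meant "can access every bucket", sensitive ones included.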


He only got $2500 because the bug had already been reported by others. Most programs pay nothing in that case.


Then the first one to report it should have been paid a lot more than the $2500. The fact is that FB didn't understand the impact of the bug, and it needed Wes to show them how severe the bug was.

And once they knew how severe it was, they ought to have acknowledged the severity and paid him a lot more.


I feel that privilege escalation/lateral movement is implicitly excluded from almost all bug bounty programs; most researchers know that.

It's a really grey area beyond an initial "access bug", so it pays not to go there. Otherwise, where should Wes have stopped? Keep proving more vulnerabilities until he'd downloaded their code, or gotten private photos of Zuckerberg's kid, just to show that it is indeed a serious bug?


Why would private keys be on any system accessible from the internet at all? Gotta put it all in the cloud?


Yeah, scaring the guy via his employer and telling him that kind of bug is worth USD 2,500 makes me think about how important my data is to them.


The hypothetical question Facebook should ask is:

"If the security researcher did not disclose the RCE, but instead sold it to highest bidder, how much would that likely pay in this situation?"

Paying security researchers to properly disclose is a way of financially encouraging the right behavior. While it may be tough to stomach a large payout for responsible disclosure, do you really want them considering the alternative? It's like tipping in a restaurant to ensure food quality.


Agreed. To me as an outsider, this escalation bug looks like a maximum-severity bug, definitely dwarfing any particular admin console vulnerability, and the process the researcher claims to have followed was pretty much necessary to show it. Whether or not this followed the letter of the policy, by responsibly reporting the escalation the researcher fulfilled its spirit.


How is this unprofessional behaviour?

People are trying to condone data access that, in all honesty, borders on unethical behaviour.

Any professional who participates in a company's bug bounty should respect that company's rights as well.

Whether the keys were accessible, and whether that was a technical blunder, is secondary. The actions the researcher took, namely a) accessing data he did not need to and b) making this into a big deal when he was the one not respecting the bug bounty's limits, make this a case for FB.


I am not saying that the sec researcher is right here. I don't care about him; he is just some random guy who wants publicity. Talking about FB is more interesting, because it is a huge public corporation which should behave smartly. But if you want to talk ethical/not ethical: he found a serious problem in their infrastructure. Had he not looked at the data ("respected their privacy") he wouldn't have found it. You can't make an omelette without breaking eggs. Perhaps this is more penetration testing than bug bounty stuff, but again, I don't care. He found stuff. He didn't use it (AFAIK) for anything bad. FB has to thank him and quickly fix their process. Complaining to his boss and acting all pissed suggests that they do not understand that they messed up big time.


I am not saying that the sec researcher is right here. I don't care about him, he is just some random guy who wants publicity. Talking about FB is more interesting

You're right. An important thing has gotten lost in the shuffle. We should be pointing and laughing at Facebook. Then, when the giggling dies down, asking: something this bad, behind such a "trivial" vuln, managed to slip through, so what else have their now-proven-to-be-shitty practices left open?

He found stuff. He didn't use it (AFAIK) for anything bad.

Reminds me of the way business dudes and non-security devs used to react before security got all popular and legit. And they could have avoided the whole public brouhaha if communication had been better between the tester and the product staff. Classic blunder.

Complaining to his boss and acting all pissed suggests that they do not understand that they did mess up big time.

They jumped to contacting someone over his head before engaging in real talk with him. And then their public response is covering their ass by arguing over the fine print of how he shouldn't have been poking around where he was.

Obviously there are differences, but similarities are fun too!


So the bits where you lost the ssl keys, auth cookie keys, app signing keys, push notification keys - and had to ask him (via his employer) about what data he'd accessed are all true? Implying you have no records of who else might have done this and acquired those keys?

Boggle!


That's one interpretation. The other is that you're placing faith in them being honest: you get a list of what they took without spending the time doing forensics on the systems, and hence can change the keys sooner.


"Placing faith in them being honest", in the same conversation you're having with their uninvolved employer, saying what they found is "trivial and of little value" while threatening them with Facebook's legal team and law enforcement?

Doesn't pass the sniff test from here.

(Admittedly, there's no doubt an iceberg-sized bit of this whole drama that neither side admits exists.)


The bigger issue here, and the one that Alex at Facebook seems to gloss over: if Wes got this data using a two-year-old, well-known exploit, then who else got it without anyone knowing?

While Alex may have a right to be upset at Wes for taking data, Alex should recognize that Wes is likely the least of his worries now. Wes wasn't/isn't a professional security researcher, and yet he was able to do this. That should frighten Alex, and Facebook should have rewarded Wes much more generously for forcing this issue to be taken care of.


"This bug has been fixed, the affected keys have been rotated, and we have no evidence that Wes or anybody else accessed any user data."


> and we have no evidence that Wes or anybody else accessed any user data

This raises way more questions than it answers. Most notably: why aren't you recording who accesses user data?


It reads to me like they are recording access and no one did access it.


Not necessarily. If that were the case, wouldn't they use the stronger "and we have evidence that no data was accessed through this exploit"? The fact is: they can't possibly protect user data once the private SSL keys leak. At that point, anyone affiliated with an ISP, wifi hotspot, or other point of access can intercept user data on third-party, non-Facebook servers. Anyone could send targeted phishing emails impersonating their servers: how would users know, if the SSL cert looks legit and the DNS is regionally poisoned?


Because of the sequence of events that played out...


Yes and he got paid for it.


I'm not quite sure I understand your point? Of course he got paid, that's how bug bounties work... that doesn't detract in any way from the point I made above.


And I don't understand yours. You were concerned about other people other than Wes accessing the same data via the same flaw, Alex said that did not happen.


But until Wes told them, they had no evidence that Wes was accessing the data! Or are you saying that they did have evidence, but chose to take a "wait and see" approach to someone gaining control of their entire platform?


No, he claimed _not to have any evidence_ that it did happen.

"Quick, shut off the logging on those servers, so we don't have any record of who logged in on them!"


Alex said they "have no evidence" it happened, which is classic slippery legalese. From that phrase it is reasonable to infer either that they have evidence of absence, or absence of evidence, which are not the same thing.


It's standard wording for something like this even if they had 100% evidence of absence.


Correct. It's the standard wording, whether or not they actually have evidence. Therefore we cannot assume, as you have earlier in this thread, that they do in fact have it.


This response deepens my concern about the situation, rather than alleviating it. In this response, you make it sound like calling this security researcher's employer's CEO was a reasonable escalation of the situation, and that is deeply concerning to me, especially given the actual text of the post Wes published here.

It also appears, based on your post, that you think that stating, approximately, "I hope we don't need to contact our legal teams or law enforcement about this" does not constitute a threat of legal or law-enforcement action, and I find that deeply troubling as well. While you could make a legal distinction that these weren't technically threats of such action, any reasonable person in the researcher's position would be positively idiotic if he/she failed to feel threatened by such statements.


I told Jay that we couldn't allow Wes to set a precedent that anybody can exfiltrate unnecessary amounts of data and call it a part of legitimate bug research, and that I wanted to keep this out of the hands of the lawyers on both sides. I did not threaten legal action against Synack or Wes....

In case it isn't clear, most people will interpret "I want to keep this out of the hands of lawyers" exactly as a threat to start legal action. To be honest, I'm not really sure how else it should be interpreted.


"I want to keep this out of the hands of lawyers" is almost universally understood to mean "please do what I say so that I don't have to sue you, which is what I will do if you do not comply".


Maybe someday the response to this sort of threat will be "In the interests of sharing, I already passed on this information to your favorite class action law firm and the media. It's already in the hands of lawyers and your company is already being sued."


Alex, I am always open to hearing from both sides. But despite your reply, I unfortunately see wrongdoing on both sides. I don't think you discussed this message with your public relations department, or with reputation management.

OK, so let's look at this - your response showed us one extremely important issue: no clear rules in your system. By exploiting your system, Wes actually exploited your lack of rules regarding the handling of white hat hackers.

Listen, a hacker should exploit ALL possible issues. He exploited your weakest one - the rules behind the system. Close the case - reward him XX,XXX for exploiting a weakness in your policy for dealing with white hat hackers, and spend as much again to bulletproof that policy. Do not reward him for hacks that are unethical, as that would be wrong, but do it for the other exposure - the small dent in your white hat hacker system.


The lesson here is when you find Operations issues (particularly Security Operations) at Facebook don't report them. Those make the CSO look bad directly.


Yep. Code bugs, no problem. Engineers don't report to Alex!


Ok, so here's the thing. Your $2500 payout was not commensurate with the severity of the bug. It ought to have been more. A LOT more.

You're basically telling bounty hunters to not go any further to "prove" the severity of the bug because you're saying, "Trust us. We'll measure the maximum impact and reward you fairly"

And yet, you're not being fair at all. So the bounty hunter needs to "prove" the severity of the bug for you. You're digging your own grave here by not acting in good faith. The next guy who finds a good bug is not going to disclose it to you - he's going to sell it on the black market for a few hundred thousand dollars. Or millions.


The real question is: did you rotate the keys (and do further hardening, I hope!) because of the vuln report Wes made? If so, then you should be grateful for his work pointing out the single point of failure in your AWS S3 security, and you should have rewarded him handsomely.


I think that is the key right there. It seems like sensu.instagram.com was simply firewalled at first and the AWS keys were not changed. He was then awarded the bounty for reporting this bug. Afterward he demonstrated that the AWS keys were another vulnerability, and it wasn't until after he reported this that the AWS keys were rotated.

To me, this demonstrates that had Wes not reported the AWS keys, Facebook would never have rotated them. I would argue that the fact Facebook found it necessary to take action to resolve Wes' third vulnerability submission could be considered an admission of its legitimacy as a bug, and therefore that the bug is indeed worthy of a bounty.


I can't work out how to not make this sound almost infinitely cynical, but their ssl key expires in 13 days - they only had to shut him up for another few weeks and they could have pretended they weren't currently MITM-able:

https://www.instagram.com

Not Valid After: Thursday, 31 December 2015 11:00:00 pm Australian Eastern Daylight Time

Maybe they'll upgrade it to something better than: Signature algorithm SHA1withRSA WEAK
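For anyone wanting to check the numbers in this subthread themselves: the "Not Valid After" string above comes straight out of the certificate, and Python's stdlib can turn it into a days-remaining figure. A minimal sketch (the date string mirrors the cert discussed here; exact day counts shift with your timezone, which is why the parent saw 13 days):

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    # not_after is the 'notAfter' string as returned by
    # ssl.SSLSocket.getpeercert(), e.g. 'Dec 31 12:00:00 2015 GMT'.
    expiry = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    return (expiry - now).days

# As of the date of this thread (17 Dec 2015, UTC):
now = datetime(2015, 12, 17, tzinfo=timezone.utc)
print(days_until_expiry("Dec 31 12:00:00 2015 GMT", now))  # 14
```

The same `getpeercert()` dict is where you'd read the signature algorithm to confirm the SHA1withRSA complaint above.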


Does this have anything to do with the SHA1 sunset on 31 December?


That'll be why the key expires on Dec 31 even though it was only issued back in April.

It doesn't explain why Instagram has been happily using a known-compromised wildcard ssl key for two weeks now.

Makes you wonder who actually values and protects Instagram's user privacy more - the researcher or the Facebook CSO...


>Makes you wonder who actually values and protects Instagram's user privacy more - the researcher or the Facebook CSO...

No, I don't wonder about this at all.


Different key, dude. We rotated what was exposed.


So this new rotated key I'm seeing that has an April 2015 start date is a different key to the one your team replaced after it expired and broke everything back in April?

What a coincidence...


[flagged]


> Do you believe that after this chain of events anyone still believes you?

Personal attacks, which this crosses into, are not allowed on Hacker News. Please comment civilly or not at all.


I don't see that as uncivil or a personal attack. It's either a reasonable direct question or a rhetorical one. And as a rhetorical question, it's not a personal attack, but rather makes the point that other posts seem to damage his credibility.


It's obviously not a direct question (there are people defending him in this thread, so of course he "believes" that), and as a rhetorical one it implies that he is lying. That's not a civil debate tactic—there's a reason why parliamentary systems expel people for using it.

Everyone needs to err in favor of respect when addressing someone on the other side of an argument, especially when one's passions are agitated, because the default is to forget all that.


Seems like a reasonable, if rhetorical, question. Hope Alex doesn't complain to his employer about it though ;-P


I am not talking about the person, but the company.

And I am sorry, but after these acts the company has taken, the little bit of trust that was left in the company is gone.

I am sorry if it sounded like a personal attack, that was not intended.


OH COME ON Dang, Alex called up Wes's employer and threatened him with criminal charges and then had the balls to lie about it in his facebook post that he didn't "Threaten". Are you seriously defending this??


Asking HN users to be civil defends nothing except civility.

There's a relevant general point here though. Reactions like this, and many others in this thread, are reflexive. That's really not what this site is for. Good comments for HN aren't reflexive, they're reflective. Practicing that distinction is the most important thing for being a contributor here, and it's orthogonal to one's actual views.


Asking HN users to be civil defends nothing except civility.

This would only be true if that request were applied equally whenever HN users were uncivil. As it stands, it does generally come off as defending specific users.

...it's orthogonal to one's actual views.

Believing this is going to make you a worse moderator -- this is "fair and balanced"-style thinking. There are many perspectives whose projection onto comment reflectivity is anything but zero.


> if that request were applied equally whenever HN users were uncivil

That's asking us to operate like machines—supermachines, in fact, with incivility detection and moderation powers. That's unrealistic. HN users' capacity to be uncivil exceeds our capacity to ask them not to, so the latter maxes out.

> it does generally come off as defending specific users

We try hard not to play favorites. I'm biased, of course, but there's more than one kind of bias here. People are more likely to notice us criticizing a comment they identify with than the cases that go the other way. We're biased to notice what we dislike and assign more weight to it.

> Believing this is going to make you a worse moderator

In that case I'm a bad moderator already, because everything I've learned about HN is packed into what I said there.


outstanding question.


If he had reported the keys along with the original submission, I think it's safe to assume they probably would have rewarded him handsomely.

Instead, he sat on the keys for over a month, and in the meantime used them to download everything he could find onto his personal computer. Simply testing that the keys were live and disclosing this immediately would have been more than enough proof of a bug here.

Edit: downvoters - please explain how using keys to access production systems for over a month without disclosing is acceptable white-hat behavior?


They said they did rotate the keys.


Bug bounties are supposed to represent a high-probability payoff of a lesser amount of money for finding a bug. This is in comparison to going the black hat sales route, where the probability of sale might be lower, but the payoff might be higher. I can imagine one or two state actors who might pay top dollar to have keys to the kingdom of a major social network.

All I'll remember of this entire story is the outcome- huge vulnerability found (high black market value), and Facebook is talking about lawyers and paying small bounties. Nobody will remember that technically he broke a rule that wasn't well explained. The next Wes will have his major vulnerability in hand, and have this story in his mind. It may change his decisions.

Make this right. Even if you are in the right who cares? You need the perception of your program to be impeccable, paying more than researchers expect. Facebook can afford it more than they can afford to blemish the image of their big bounty. Invite Wes to help you rewrite the confusing parts of the rules. Leave that story in everyone's memories instead.


According to the rules https://www.facebook.com/whitehat/ "We only pay individuals"

Wes COULDN'T have been working for Synack to find bugs as your program doesn't even allow for it.


And according to the update on the post, Alex chose to contact Wes' 'company' (that he had contracted for) even though Wes had not contacted them through his company email (meaning Alex sought out a way to go about intimidating Wes). Seems incredibly petty and intimidating of Alex, and reflects poorly on Facebook imo.


Yea, wouldn't want to "set a precedent" that infosec researchers will be rewarded for doing the right thing.

Next time someone uncovers your private keys at least they'll know upfront that there is no money in doing the right thing which might just make selling them to the highest bidder seem like a more compelling option.


With regard to your final sentence: "Condoning researchers going well above and beyond what is necessary to find and fix critical issues would create a precedent that could be used by those aiming to violate the privacy of our users, and such behavior by legitimate security researchers puts the future of paid bug bounties at risk." Regardless of whether one thinks Weinberg's actions were ill-advised, there seems to be a general consensus that they were instrumental in the discovery of some very critical issues, and that you are lucky it was he who found them.


There is a definite issue with the Facebook bug bounty program in that there are many serious issues with the platform that don't fit within the relatively narrow parameters of the program. I reported an issue that enabled anyone to create a wall post that claims to link to any site of their choosing (cnn.com, whitehouse.gov, etc.), completely customize both the content and photo of the post, and have the link actually go to a URL they control instead of the domain the post displays. Examples at [1] and [2].

This issue, which enables uber-credible phishing and other attacks with the assistance of Facebook (since Facebook falsely reports to the user that the link goes to a credible domain of the attacker's choosing while actually sending them to any URL controlled by the attacker), was rejected. Not only was I told that it was not a bug that I could be paid for, but that it really wasn't a bug at all, and that they would do nothing about it.

If these kinds of serious issues are essentially ignored because they don't meet the very narrow guidelines set forth in the bug bounty program, Facebook is going to miss a massive number of problems with its platform.

[1] http://prntscr.com/9fj40t

[2] http://prntscr.com/9fj46h


Thanks for the response, but why did you start by contacting the CEO of Synack instead of the researcher directly?


> At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

I feel like that bullet point answers your question pretty well.


Sorry but this is a coverup that Alex is using to defend himself. He had easy access to Wes, as Wes was actually demanding a reply via Facebook's own system in place to communicate with researchers, and not receiving one.

Alex would have been aware via the original RCE bug that Wes was reporting on behalf of himself and not his employer. Also, if Wes had been reporting the bug on behalf of his employer, it is reasonable to expect he would have said so from the beginning.

I presume that Alex knew these things, but he decided to take a more dramatic approach to get Wes to stop, by contacting his employer. It obviously would be leverage, and Alex knew that he could also leverage his position at Facebook to use a security firm in the industry (who would understandably not want to do anything to jeopardize its relationship with one of the largest internet companies in the world) to ask their employee to stop.

I do not believe that Alex legitimately believed that Synack (Wes' employer) was behind the research, but he knew it would be an effective way to stop Wes from continuing, so he decided to pull those strings.


I'm more questioning the flow of researcher reports vulnerability, company awards bounty, researcher disputes bounty value, CSO of company contacts CEO of researcher's company. Is that normal escalation procedure?


Wait, you just made something up.

Even the researcher doesn't claim that Alex contacted the CEO of Synack because of a dispute over the bounty.

Rather, it's the other way around: the researcher disputed the bounty, and did so by revealing that he'd retained AWS credentials from Instagram long after they'd closed the vulnerability that he used to get them.

Alex contacted the CEO of Synack to ensure the credentials weren't used, because if they were, Alex couldn't control Facebook's response: they've got a bug bounty participant who has essentially "gone rogue" and is exploiting Facebook servers long after they've told him to stop. They need him to stop.


The "bug" here is that they aren't really keeping track of their AWS buckets and keys at all. Least privilege, access logging, remote IP flagging, etc. These operational failures are ostensibly the responsibility of the CSO.

I'm not saying this researcher was 100% in the right, but this is the CSO ass covering. "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

A simple phone call directly to the researcher that cut through the bullshit would have made everything better. But he had to make sure it didn't get out and the only way he could do that was by using the only leverage he had: The researcher's employer.
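To give the "least privilege, remote IP flagging" point above some shape: the complaint is that a single leaked key could read everything from anywhere, when a scoped policy would have contained it. A minimal sketch of such a policy as a plain JSON document (bucket name and CIDR range are invented for illustration; this is the general IAM policy shape, not Instagram's actual configuration):

```python
import json

# Hypothetical least-privilege policy: one key, one bucket,
# usable only from a known network range.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-sensu-bucket",       # made-up bucket
            "arn:aws:s3:::example-sensu-bucket/*",
        ],
        # Requests from outside this (made-up) range are denied outright.
        "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
    }],
}

print(json.dumps(policy, indent=2))
```

With something like this attached, the exfiltrated key would have been useless from a researcher's home IP, regardless of what else went wrong.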


Alex has in the last few months built one of the best teams in application security at Facebook (Facebook security is now seemingly most of O.G. iSEC Partners). I get it, everyone hates big companies and especially Facebook evil Facebook but, come on. They know what they're doing.

If you understand how security works inside of big companies, this is a really silly theory to run with. CSOs are happy when shit like this gets discovered, because it gives them ammunition to get the rest of the company to adjust policies.

If you were working from the understanding that a CSO comes in and just immediately tells a team of (what is it) NINE THOUSAND developers how to do stuff differently... no. That's not how it works.

The problem is that nobody at Facebook with the possible exception of like 10 people none of whom are Alex can make huge operational changes like "change all the ways we store keys across an entire huge business unit". So, you tell Alex you took AWS credentials he didn't know existed and you're going to start mining them for a story you're bringing to the media, and now Alex is in a position where he's NOT ALLOWED to sit back and try to manage the situation himself.

Delete the keys or I have to tell legal what's happening.

The researcher NEEDED TO HEAR THAT.


>> Delete the keys or I have to tell legal what's happening.

>> The researcher NEEDED TO HEAR THAT.

I'm not in security, but from the outside looking in, how things worked out just doesn't smell right.

If "the researcher NEEDED TO HEAR THAT" is the priority, then why waste time looking up who the guy works for and calling them instead?

The simplest and most obvious way to tell the researcher is to tell him directly in the clearest way possible. It isn't as though there wasn't a pre-existing line of communication with the researcher.


My reading of tptacek's subtext is that Facebook wanted to show the researcher that they were really, ALL-CAPS serious, as in "get you fired and ruin-your-livelihood if you don't stop" serious. These mafia tactics are fine because the Facebook CSO "built a good team and knows what he is doing"


If FB wanted to show the researcher that they were really, ALL-CAPS serious, then they would talk to him directly, as in "You've got stolen data and we're going to have the FBI arrest you, seize your computers, put you in jail, and ruin your livelihood if you don't stop" serious.

So I still don't see how calling the guy's boss trumps that in terms of scariness. Because if I'm the wronged party (i.e., FB), that's what I'd do if I couldn't resolve it amicably.


If we are disagreeing, I don't quite follow your argument - I never said that this was the worst/scariest thing Facebook could to do (there's no upper limit). What I meant was that the action by Facebook was intended to intimidate (and not that the specific form of intimidation was the worst possible)


We're not disagreeing. I think your interpretation of tptacek's subtext is the same as mine.

In some of his posts, he has been, however, comparing the researcher's dump to criminal activity -- something I am not in disagreement with.

His implication that calling the researcher's boss is a sensible approach to intimidating the researcher for potentially criminal activity -- that in particular seems like a stretch if he's being truly objective.


...which all adds up to a smell of "tptacek knows that team is good, because he's on it".


I don't doubt that he's put together a great application security team. Or that he even knows his shit. And I do understand how it works. CSOs are happy when this kind of shit gets discovered when they can't get other teams onboard to fix it. They're unhappy when it gets discovered when they intentionally ignore it in favor of another initiative (particularly if there's a paper trail showing that someone brought it to their attention). Or when they've already spent a bunch of money and resources fixing it only for everyone to find that they haven't fixed it at all.

There are basic things you can do to mitigate or isolate damage in AWS and they either aren't doing it or have done it badly. Even if he couldn't convince the rest of the company that god-mode keys are bad, he still could have built out some basic infrastructure to track when and where they keys were being used from so red flags could be raised when some random IP address is being used to pull down several buckets.
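The "track when and where the keys were being used from" idea in the comment above is cheap to implement even without touching IAM: scan access-key usage events and flag any source IP outside known networks. A minimal sketch (the network range and key ID are invented; in practice the events would come from something like CloudTrail or S3 access logs):

```python
from ipaddress import ip_address, ip_network

# Made-up "known corporate" network range for illustration.
KNOWN_NETS = [ip_network("203.0.113.0/24")]

def flag_unexpected(events):
    """events: iterable of (access_key_id, source_ip) pairs.
    Returns the events originating outside every known network."""
    return [(key, ip) for key, ip in events
            if not any(ip_address(ip) in net for net in KNOWN_NETS)]

events = [
    ("AKIAEXAMPLE", "203.0.113.7"),    # expected internal use
    ("AKIAEXAMPLE", "198.51.100.9"),   # some random external IP
]
print(flag_unexpected(events))  # [('AKIAEXAMPLE', '198.51.100.9')]
```

Even this crude check would have raised a red flag when an unknown IP started pulling down bucket after bucket.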


You're perfectly right, but his employer didn't need to hear it. And that's the whole crux of the matter.


If you read the article, his company does security research and found a vulnerability in Hotmail. Plus he was using his company's email address.

> At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

That's a big mistake. DO NOT EVER USE YOUR COMPANY EMAIL ADDRESS if you are doing this on your own. The employer has the right to know. Imagine using a company email address on Ashley Madison. Yeah, plenty of people were embarrassed after that hack.


Actually his write up makes pretty clear that he didn't use his company email until after Alex went over his head to the CEO.

Second, everything else being equal, Alex going to the CEO without calling or mailing the researcher first was a mistake. Going to someone's boss and saying "please do something, I don't want to get the lawyers involved" IS an implicit legal threat, both to synack and the researcher.


What Alex wrote is a bit interesting given his update (emphasis my own):

>At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

According to Alex this is the timeline:

1. Researcher not happy with sum

2. Researcher already in contact using Synack email address

3. Alex calls Synack CEO

From the researcher's blog:

>I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.

This means that either Alex is lying, or he is telling exactly the facts needed to reach a specific conclusion and nothing more, or the researcher is lying. And he's "written blog posts that are used by Synack"? Come now. Reads a lot like someone looking for a third item so they can make it a comma-separated list of reasons. His post smells like bullshit.


I like how we're talking about Stamos warning a guy running around with stolen AWS credentials for all of Instagram in the same fashion as we'd talk about a DMCA threat. "Implicit legal threat"? There's nothing "implicit" or subtle about what was happening here.


You seem levelheaded throughout the thread and make good points more articulately than I ever could but this seems a bit emotionally involved.

It's possible that we're all correct: This guy could be a wildcard researcher that plays fast and loose and the CSO could be covering his own ass. You say he's building a first rate application security team. Is it hard to believe that he could have made the mistake of focusing almost exclusively on that?


LAUGH .. I love it, even his staunch defender friend says he's lying: " I did not threaten legal action against Synack or Wes "

https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


I get your point in these threads, but unless I'm misunderstanding, who cares about stolen, potentially undeleted Amazon creds? Revoke the key in the portal and be done with it?

Given who I'm replying to, I'm assuming that I'm missing some key piece of the puzzle.

(And I totally acknowledge it doesn't change the circumstances of what either side has done, I'm just curious)


The point is, having those is a prosecutable offense, if Facebook chose to prosecute. So it's a big threshold to cross legally, even if not meaningful from a programmer's perspective.


Facebook's terms say they will not prosecute or report whitehats to law enforcement. Facebook could prosecute, at the price of some goodwill from the security industry (or part of it). I'm sure a competent lawyer could mount a robust defence for the security researcher (beyond reasonable doubt, IMO).


You're missing my point and talking about something entirely different from what I'm talking about. I'm not talking about whether Facebook will prosecute and what the consequences of that will be (whether they'll win or lose whatever).

I'm just pointing out that taking AWS keys is a big deal, because it's legally a big deal.


Facebook's disclosure policy reads:

>If you give us reasonable time to respond to your report before making any information public, and make a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.

IANAL: but it could be argued (in court) that he had Facebook's permission to getting the AWS keys. In his opinion (and mine) he made good faith efforts to avoid privacy violations.

Facebook's official disclosure policy has legal weight. There is a legal concept (whose name is escaping me) that could apply, which in layman's terms says the official disclosure policy gives him Facebook's tacit approval - I first heard about it in Oracle v. Google, where Google argued that a blog post congratulating Google provided tacit approval.


The part you emphasized is dependent on the first part of that sentence, however. In this hypothetical lawsuit Facebook's lawyers would easily be able argue that they would not have done anything for the initial exploit or even demonstrating that he had recovered valid AWS keys but that attempting to hoover up data from S3, etc. violated the “good faith effort to avoid privacy violations” part.


>but that attempting to hoover up data from S3

That's a mischaracterization given his description. He examined the filenames/metadata specifically to avoid buckets that might contain user data.
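The name-based triage being described is easy to picture concretely. A hypothetical sketch (the patterns and bucket names are invented for illustration; the researcher's actual criteria aren't public):

```python
# Skip any bucket whose name suggests user content, before
# ever touching its contents.
USER_DATA_HINTS = ("user", "photo", "media", "upload", "profile")

def looks_like_user_data(name: str) -> bool:
    lowered = name.lower()
    return any(hint in lowered for hint in USER_DATA_HINTS)

buckets = ["ig-sensu-config", "ig-user-photos", "ig-deploy-tools"]
print([b for b in buckets if not looks_like_user_data(b)])
# ['ig-sensu-config', 'ig-deploy-tools']
```

Of course, as the replies below point out, a filter like this is only as good as the naming conventions it guesses at, which is exactly the reliability objection being raised.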


1. This assumes his description of his actions is completely accurate

2. This assumes that he was perfectly accurate in his assessment of an unfamiliar project's naming conventions, data structure, etc.

3. This assumes that he was perfectly reliable in making the actual copies and didn't accidentally include potential personal data (e.g. who knows what might be in a log file?)

The problem is that we're talking about someone who already decided to exceed the bounds of what was clearly protected under the bounty program. He'd already reported the initial vulnerability and been paid for it but waited until later to mention that he'd copied a bunch of other data, had access to critical infrastructure, and wanted more money.

It seems fairly likely that this wasn't malicious but rather just poor judgement, but that makes it very hard to assume that outside of that one huge lapse in judgement he did everything correctly. It's really easy to see why Facebook couldn't trust his word at that point since it's already far outside normal ethical behaviour.


To your first point: There's being skeptical and then there's calling someone a liar without actually calling them a liar because you don't have any justification for doing so. This is far from the first time I've seen this on HN and it's really not okay. There's no point in speculating about the veracity of this person's statements until there's a reason to.

To the second and third: They only require that a researcher "...make a good faith effort to avoid privacy violations..." and I'd say he met that. You can argue that the entire endeavor wasn't in good faith but he certainly made a significant and conscious effort to avoid private data.

I think his biggest lapse in judgement was that he brought security operations issues to light in a bug bounty program run by the people that would be most embarrassed by them. Application security bugs are created by the engineering team and the CSO's application security team fixes them (or advises or whatever). Security operations issues are entirely the responsibility of the CSO's department.

Facebook (as an organization) should be thanking him. While he didn't expose application security bugs he exposed significant operational issues and blind spots. Keys with far too much access, lack of log inspection, lack of security around what IP addresses a key can be used from, etc. Operational issues and lapses in operational security are what got Twitter in hot water with the FTC in 2010. It's not as easy to play cowboy with operations as it used to be.

The CSO hasn't been around for long but by all accounts he poured a lot of effort into hiring an application security team. Perhaps that's his specialty but even one experienced technical manager hired for security operations could have caught these basic issues. They probably wouldn't have addressed the lack of least privilege in that time frame but they could have easily spun up logging to catch some rando on an unknown IP address using their keys.

But like I said, he hasn't been there for long so I don't blame him for the failure. What I do blame him for is calling up the employer to threaten them as leverage to shut up the researcher. I blame him for posting a thinly veiled justification for doing so. He could have addressed this openly, talked to the guy directly and went to the other C-level execs with it as a justification for getting everyone on board with fixing it but he tried to keep it contained to his department.

I understand how he must feel being the new guy who's responsible for the outcome but not for creating it. I know he'll get questions that he might not be able to answer since they probably aren't logging bucket access. Questions like, "Who else got a copy of these keys and what did they access?" Saying "I don't know and we may never know" in response to that, even if you weren't in charge more than three months ago, is rough.


Again, you're quibbling about legal details that are not relevant to my point. I'm pointing out that his actions are a big deal because they crossed a legal threshold where a company would have a somewhat decent case to prosecute you. I don't care whether or not they would succeed.


Then why the immediate escalation?

Wouldn't it have made more sense to contact the researcher directly, rather than using his position of power to pressure the researcher's company's CEO?

Why not assume good faith? (Which is what I would think a white hat bug bounty program should assume)


I am not sure what part of

"he has interacted with us using a synack.com email address,"

invalidates my reading that he was using his company's email?


Bottom of his post he replies to Alex his post:

> I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.

If that is true, it is either poor judgement from Alex, or bad intent to call Synack


It says nothing about who initiated contact using his company email address. It could have said, "he contacted us using" or "his facebook account was associated with" but instead it says "he has interacted with us using". Sometimes what's not said tells us as much if not more.


The Facebook reply does not state that he was using his company email address to report issues or to communicate prior to them reaching out. The researcher says that he only used the email after Facebook got his employer involved.

The Facebook post does not, in any way, contest that.


Technically it doesn't contest that, but he uses multiple weak points in the bullets prior to the one about contacting the employer's CEO. The intent was clearly to establish that his decision to contact the researcher's employer had merit. It was clearly carefully written so it remained factual while implying things that aren't.

I'm not disagreeing with you, only making it clear that yeukhon was played by Alex exactly as intended so he'd be out there defending him on sites like HN.


He didn't use his company address until he was contacted through his company.


In any case, one question remains. How does Facebook define a "million dollar" bug if the security team is not aware of the damage it can do? Since this is not the first time this bug was reported, did they actually give a big bounty to the first person who made the initial report (given that it can lead to this much damage)? Or just another small bounty, saying that it's not a very important security flaw?


There are enough laws against "cybercrime". If Alex felt threatened, he should have escalated the issue to the FBI. There is no reason at all to call the employer. By doing so, Alex threatened to fuck up Wesley's life.

edit: Or -after calling the CEO- he should have contacted Wesley directly and so they could deescalate the problem together.


> The researcher NEEDED TO HEAR THAT.

I don't disagree. But why go through his employer, when they already had a direct line to the researcher himself?


Intimidation.


Relatively new security teams are almost useless. In 2-3 years FB might have its shit together, but three months is nowhere near long enough to fix their problems.


Agreed. After looking at his LinkedIn profile, it's hard to blame Alex for the problems, as he's only been there a short while. However, he can be blamed for creating all this unnecessary drama.


If you understood how big companies work, you'd know it takes more than a few months to build "one of the best teams". This is one thing in Alex's favor, though: he's new to the job. Still, if you also understood how big companies work, you'd know that everyone hates the drama queen.

The right move here would have been to not threaten Wes, to pay him, and to just update the policy.

Lesson learned for Alex and his friends: Do not threaten individual contributors or suffer massive freaking drama. Thank you internet.


> I'm not saying this researcher was 100% in the right, but this is the CSO ass covering. "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

The response from FB's CSO is very specific to a very specific blog publication, not to the flaws in how their AWS buckets are used.


I'm not sure what you're getting at.


Your statement:

> "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

mischaracterizes the response by FB's CSO as one that is attempting to draw criticism away from operational flaws by instead placing focus/blame on the researcher's methodology.


I disagree.

A security researcher went public with a story of "I found this massive security hole and Facebook tried to avoid paying what I thought it was worth, and then threatened me with legal action"

The response that Alex thinks he needs to make is "my actions were reasonable because ..."

From external appearances it seems as though he is more concerned about looking like a heavy-handed, lawyer-invoking, CSO than the publicity around FB having an unpatched RCE that allowed access to highly-privileged AWS keys.

What he chooses to write about is a reflection of what he saw as the most important news in the original blog post.

I suspect he's actually right. The blog post will probably raise more bad publicity around the way FB handled the research & disclosure than the existence of the bug, and it's the piece that needs to be resolved well.


You're right, that was the purpose of trying to keep him quiet by contacting the CE-freaking-O of his place of employment with an implicit legal threat. The blog post is an attempt to do damage control when he realized the researcher wasn't going to put up with that and went public.


> by revealing that he'd retained AWS credentials from Instagram long after they'd closed the vulnerability that he used to get them.

How would that change anything?

If Facebook did rotate all keys the moment the researcher reported it, they made no difference.

If Facebook did not, then they aren’t taking care of their security properly.


Without defending the researcher here, I thought that was the weakest point in Facebook's response. Was he interacting with Facebook using his synack.com email address during this exchange rather than at some point in the past? Was he signed up on Facebook with his synack.com address? (I haven't used the bug bounty program but it appears to require a user account.) Did he mention his employment with Synack in the course of the exchange? If any of those things were true, I suspect they'd say so, rather than leaving it at "has interacted..."

I don't know, if the guy was just shaking them down then maybe trying to get him fired is indeed a reasonable thing to do, but I don't buy that anyone would have just assumed under the circumstances that he was doing all of this on the clock.


"I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around."


I don't think it does. Wes asked for communication via Facebook's own tools for it, didn't get it, and they went around him to his boss. That's crap.

Now, Wes exfiltrating data rather than just looking at it? Not cool. But Facebook's side of the story is just as biased as his.


But it seems obvious that in doing so he wasn't acting in good faith.


Yeah, why not just a quick email- "Hey are you working for Synack here or independently?"


Supposedly he was using his synack email address, why would they assume he worked independently?


He posted a reply on his blog saying that he only used his Synack email address after the initial exchange with the Synack CEO.


At this point, it was reasonable to believe that Wes was operating on behalf of Synack.

Huh? How did you make this connection? Why would he then report his findings to you?

From my point of view, contacting his employer was clearly meant as a gut punch.


This section was 100% written by a lawyer, and is intended to sound obvious without in fact being obvious at all.


Shame on you for contacting his employer directly. This teaches a good lesson to all the black, grey and white hats out there. Next time they'll know to just p0wn to 0wn.


IMO you are just trying, poorly, to cover yourself. You should accept the guilt of having had a server with a well-known vulnerability that had the keys to the kingdom, instead of blaming everything on Wes.


Um... Have to side with Wes here. Your rules were not nearly adequate, and instead of going at Wes directly with adequate and in-depth communication, the CSO went after his employer - which is _not_ ethical.


Sorry, but it looks like your technical issue has become a PR issue. Contacting his employer was an act of intimidation, and no amount of cover-up will make up for it.


Quite frankly I'm not surprised Wes is sour about how this was handled and the amount granted as bounty.

It's very rare for a single vulnerability to grant you the keys to the kingdom. If you check Pwn2Own, the vast majority of the hacks leverage more than one. Most major attacks start with a small bug.

The real severity of a vulnerability is how far it can be pushed to broaden the scope. In this case that admin panel was just an entry point to a whole chain of security SNAFUs (AWS keys in files at a multi-billion-dollar internet company, seriously?).

To reiterate, he got access to:

- source code
- AWS keys
- a plethora of 3rd-party platform keys
- a bunch of private keys
- user data

This might not be the million dollar bug, but close.

Just think about what an actual attacker could have done with it:

- log in as / impersonate ANY Instagram account
- impersonate the whole of Instagram (code + SSL keys!)
- inject malware into the Instagram app and sign it with your keys
- download tons of user data
- wreak havoc in AWS (possibly expanding what he has access to; we don't know what else he would have been able to access had he spent weeks, not hours, exploring)

This is not a missing permission check allowing you to delete other people's photos. This is huge, and based on that, credit and a significantly higher bounty are due.

Aside from that, the handling of the whole matter was not good:

- if your policy is not precise, interpret it to your disadvantage; you screwed up by not making it clear
- contacting his boss should only happen (if at all) after he himself has been asked for the same account
- the post about "bug bounty ethics" misses the point; following your logic, the Heartbleed investigation should have ended when someone discovered a buffer over-read, without exploring where that leads


"I did say that Wes's behavior reflected poorly on him and on Synack, and that it was in our common best interests to focus on the legitimate RCE report and not the unnecessary pivot into S3 and downloading of data."

You lost me at this point. Who do you think you are really?


He must be pretty delusional if he thinks that's an OK thing to write on a blog. If I was him I'd deny, deny, deny or try and make it seem a whole lot less sinister than it is.


> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.

Isn't it a security flaw that a single AWS key was able to access all of Instagram's data?


No excuses for contacting his employer though. Just plain intimidation.


You talk about ethics like it is an entirely black and white concept. I would consider a lot of Facebook's practices unethical in comparison to my own set of ethics. There are ethical dilemmas, which are basically what most discussion about ethics is about to begin with. You use the word unethical but without discussing ethical dilemmas, and that makes your argument weak even though you potentially have a very strong argument.


What he did do is expose that you guys don't know how to use AWS and S3. Those keys should never have been on a server in the first place. I think it would have been in your best interest to fix it and pay him. Now that other hackers know Instagram sucks at server management, it is only a matter of time before someone finds another key. Guess what they are not going to do? They are not going to report it; they'll download and sell your info.


I hope someone calls your CEO and talks to him about your conduct.


I'm not really impressed by your reaction...


If the intention of a bug bounty program is to encourage white hat disclosure, you have done pretty much everything you can to ensure vulnerabilities are dealt with in a black hat manner.

Well done.


> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.

A security "mistake" then? :)


Thank you for the response, Alex, especially the details about the researcher's email address and affiliation. It makes your actions seem reasonable, in my opinion. As a security researcher, I personally would not be dissuaded from reporting to the Facebook Whitehat program due to this incident.

I'm glad companies can offer transparency like this.


I think his response was too personal. They're both adults, and calling his employer's CEO to make a point, because you can, is, to me, way too close for comfort.

There were other personal attacks in his response that I've talked about here: https://news.ycombinator.com/item?id=10755402


> Thank you for the response, Alex... It makes your actions seem reasonable... I'm glad companies can offer transparency like this.

The people who like you the most are the easiest to persuade.


> At no time did we say that Wes could not write up the bug, which is less critical than several other public reports that we have rewarded and celebrated.

There is no bug more critical than one that results in complete access to Instagram infrastructure. Sure, the bug is stupid, but you are fooling yourself.


Couldn't it be argued that Instagram's choice to store private keys in a third-party system (Amazon) is a million(s?)-dollar bug?


Why have you not rotated your private keys?

  notBefore=Apr 14 00:00:00 2015 GMT
  notAfter=Dec 31 12:00:00 2015 GMT
(Feel free to respond here if you want to pay me the bug bounty for this)


    $ echo | openssl s_client -connect www.instagram.com:443 2>/dev/null | openssl x509 -noout -dates
    notBefore=Apr 14 00:00:00 2015 GMT
    notAfter=Dec 31 12:00:00 2015 GMT
AWS bucket creds are not the same thing as SSL certs, and were most likely specific to only the relevant S3 buckets, which are totally separate from any load balancers.
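For reference, whether an AWS key is limited to particular buckets comes down to the IAM policy attached to it, not anything about load balancers or certificates. A minimal sketch of a bucket-scoped, read-only policy (the bucket name here is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyAccessToOneBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-assets-bucket",
        "arn:aws:s3:::example-assets-bucket/*"
      ]
    }
  ]
}
```

A key governed by a policy like this can read that one bucket and nothing else; whether Instagram's keys were actually scoped that tightly is, of course, exactly the point in dispute.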


I never claimed that AWS bucket creds were the same thing as SSL certs.


Then rotating their SSL keys shouldn't be relevant.


Unless I'm misunderstanding, it's relevant because this researcher was able to access (from the blog):

- SSL certificates and private keys, including both instagram.com and *.instagram.com

If this researcher was able to access it via not much more than a hole that was _already reported multiple times_, then I think it's not a stretch to think that [many?] other less honest parties could (and in my opinion most likely do) already have it.

If it was me, even if it's definitely only a single researcher who got access (and it doesn't sound to me like they know for sure, but regardless), something _that_ sensitive would have to be rotated anyway. If it was accessed by someone outside the teams that strictly require it operationally, I'd rotate it, let alone someone outside the company.


Going to his employer, instead of talking to him direct was just petty.


Sorry Alex, you're in the wrong here. Your threats to go to law enforcement completely undermine the credibility of your bug bounty program. Your publicly calling another professional "unethical" is a serious charge for what is a grey area at best, and the facts and history of issues reported by this person would not lead a reasonable person to conclude malice. And ignoring him but going to his boss, that's just petty.

Not even one attempt to talk to the guy like an adult about what he was doing? You couldn't even be bothered to say anything?

You'd be amazed how a polite reply to the effect of, "thanks, you've proven your point, and we are getting a little uncomfortable with where this is headed" might have solved all of this. If he ignored you and kept hacking after that, by all means steamroll him, but if you don't even have that much respect for your peers, I'm not sure why you bother with the bounty program.


Agreed. You have quite a list of arguments defending the researcher, when his track record alone should have been enough to prove his good will. Despite the landslide of evidence of good will, Facebook decided to act in bad faith. Unacceptable; I hope other researchers read and remember this story.


CXOs do not talk directly to anyone other than CXOs, right?


Yep, my opinion of Facebook reinforced to the highest extent. Utter amateurism and disgusting behaviour. What an absolutely idiotic way to handle this situation, and coming from the very top. I haven't used Facebook in years, thank you for an excellent reminder to delete my Instagram account.

edit: Alex, how about the "shit, we really fucked up; I apologise to our users, yadda yadda" blog post?


In stories like this, try first to remember that Facebook isn't a single entity with a single set of opinions, but rather a huge collection of people who came to the company at different times and different points in their career.

Alex Stamos is a good person† who has been doing vulnerability research since the 1990s. He's built a reputation for understanding and defending vulnerability researchers. He hasn't been at Facebook long.

To that, add the fact that there's just no way that this is the first person to have reported an RCE to Facebook's bug bounty. Ask anyone who does this work professionally: every network has old crufty bug-ridden stuff laying around (that's why we freak out so much about stuff like the Rails XML/YAML bug, Heartbleed, and Shellshock!), and every large codebase has horrible flaws in it. When you run a bug bounty, people spot stuff like this.

So I'm left wondering what the other side of this story is.

Some of the facts that this person wrote up are suggestive of why Facebook's team may have been alarmed.

It seems like what could have happened here is:

1. This person finds RCE in a stale admin console (that is a legit and serious finding!). Being a professional pentester, their instinct is that having owned up a machine behind a firewall, there's probably a bonanza of stuff they now have access to. But the machine itself sure looks like an old deployment artifact, not a valuable asset Fb wants to protect.

2. Anticipating that Fb will pay hundreds and not thousands of dollars for a bug they will fix by simply nuking a machine they didn't know was exposed to begin with, the tester pivots from RCE to dumping files from the machine to see where they can go. Sure enough: it's a bonanza.

3. They report the RCE. Fb confirms receipt but doesn't respond right away.

4. A day later, they report a second "finding" that is the product of using the RCE they already reported to explore the system.

5. Fb nukes the server, confirms the RCE, pays out $2500 for it, declines to pay for the second finding, and asks the tester not to use RCEs to explore their systems.

6. More than a month after Facebook has nuked the server they found the RCE in, they report another finding based on AWS keys they took from the server.

So Facebook has a bug bounty participant who has gained access to AWS keys by pivoting from a Rails RCE on a server, and who apparently has retained those keys and is using them to explore Instagram's AWS environment.

So, some thoughts:

A. It sucks that Facebook had a machine deployed that had AWS credentials on it that led to the keys to the Instagram kingdom. Nobody is going to argue that, though again: every network sucks in similar ways. Sorry.

B. If I was in Alex's shoes I would flip the fuck out about some bug bounty participant walking around with a laptop that had access to lord knows how many different AWS resources inside of Instagram. Alex is a smart guy with an absurdly smart team and I assume the AWS resources have been rekeyed by now, but still, how sure were they of that on December 1?

C. Don't ever do anything like what this person did when you test machines you don't own. You could get fired for doing that working at a pentest firm even when you're being paid by a client to look for vulnerabilities! If you have to ask whether you're allowed to pivot, don't do it until the target says it's OK. Pivoting like this is a bright line between security testing and hacking.

This seems like a genuinely shitty situation for everyone involved. It's a reason why I would be extremely hesitant to ever stand up a bug bounty program at a company I worked for, and a reason why I'm impressed by big companies that have the guts to run bounty programs at all.

(and, to be clear, a friend, though a pretty distant one; I am biased here.)


I think you're right on most points, but after reading the write up and response I do think Alex reached out to the employer first instead of the researcher as an intended act of intimidation. That was a mistake.

If it was not done for the purpose of intimidation, then Alex simply would have asked the CEO if the researcher was acting on the company's behalf and after hearing "no" would have ended the call and contacted the researcher directly.

Seems simple doesn't it? Perhaps you are not seeing it due to your friendship, but it seems like a dirty move and only serves to call into question how Alex handled other aspects of the situation.


> If it was not done for the purpose of intimidation, then Alex simply would have asked the CEO if the researcher was acting on the company's behalf and after hearing "no" would have ended the call and contacted the researcher directly.

Then the CEO is going to contact the researcher and he's screwed either way. God knows what the CEO would have said to the researcher privately. Having a middleman to translate is a bad idea in an emergency.

Let's face it: when you use your work email and make another company paranoid, you are putting people on the spot. The employer needs to know (they have legal responsibility), and given the prior research they did and the researcher's claim, I think the reach-out is absolutely correct.

Instagram's infrastructure has flaws. That's bad, but everyone's infrastructure has flaws. Shit has to be fixed. Doing more than what was needed is bad. If I were told to stop dumping data, I would stop.


Yeah, totally. "I did not threaten legal action against Synack or Wes" Who the f do you think you're kidding, Alex?


Coming from a pentesting background (and now working as a CISO), I can see both sides to this. tptacek is almost certainly correct in his characterization of the events, and I agree wholeheartedly with what he's said. It's important to note that this researcher didn't just chain several exploits together, but sat on sensitive data unbeknownst to Facebook in order to exploit other vulnerabilities later. Those vulnerabilities could not have been exploited without the initial (fixed) compromise.

Think about it a different way. If this researcher had found SQL injection in a webapp, dumped the usernames and passwords, and reported the vulnerability for a bug bounty, he should get paid. If he kept each of those credentials, and then logged into other systems using higher-privilege accounts that he'd compromised even after the SQLi is fixed, he is basically continuing the exploitation of an already-fixed bug. Those don't deserve payouts. Similarly, if he'd established some sort of persistence (such as a reverse shell, etc) on compromised assets, he can't keep coming in and getting more and more bounty payoffs. Fruit of the poisonous tree, in this case.

Where I disagree with tptacek is with regard to the benefit of bug bounty programs. Although I'm not currently running one, I find the idea fascinating and helpful for two primary reasons: first, you're almost definitely going to see generally better results in a well-managed bug bounty program (not necessarily something like Facebook's White Hat program) than traditional pentests or application security assessments. More eyes are almost always better when searching for tricky problems. Secondly, if you're a large enterprise, there are already people "testing" your security. I'd much rather be able to pay out a researcher than drive them to more nefarious buyers. You will probably encourage many people to test your security (which screws up metrics) but if finding security problems is the ultimate goal, it's worth it.

Even in this case in point, Facebook did discover an RCE that could have been (and kind of was) extensively exploited due to the fact that they held the bounty. If an actual malicious hacker had found that problem first, they would have been in significantly worse shape.


> If he kept each of those credentials, and then logged into other systems using higher-privilege accounts that he'd compromised even after the SQLi is fixed, he is basically continuing the exploitation of an already-fixed bug.

Why did those credentials still work post-report?

What if those credentials were accessed from a public dump?

The outcome of this entire clusterfuck of a bounty is one of the reasons there are still very well paid blackhats. There are no rules or terms to follow.

If their terms aren't clear (the terms he's citing certainly weren't intended for keys, rather Facebook user accounts/information), pay out and fix them.


I'd agree, but technically speaking, the bug is not fixed if credentials don't get reissued. Someone might already have access to them.

Also, you can't just expect that "oh, just delete your data pls" will work, can you? You can't trust anyone that literally hacks your system.


Whilst I'd agree that bug bounty programmes can be a good idea for Internet facing assets, I thought that this story actually neatly illustrated their limitations.

With a bug bounty programme you don't generally authorise the kind of post-exploitation activities which we see here as leading to the really serious exposures, and that's not surprising as you can't easily authorise a set of unknown people to be processing your customer data.

This differs from an engaged penetration testing firm, with whom you have a contract which covers things like handling of data gained during a test.

So I don't really see bug bounties ever replacing penetration testing companies for internal work or anything that requires accessing customer data as part of the exploit...


Thanks for the writeup. Based on what you've written, it sounds like you would have been surprised if Facebook had paid $1 million for the original report (and no further nefarious behavior by OP) since it was probably due to a simple oversight, even though it was a RCE that obviously could have been turned into total ownage of instagram. Is that accurate? If so, what class of vulnerability would make you say "Yep that's totally worth $1 million".

Or do you think he should have just stopped and Facebook should have realized how bad it was and paid him a lot more than $2500?


There isn't a parallel universe in which this finding is worth $1,000,000. If it was, every pentester in the country is getting way underpaid, because this is not an uncommon pentest finding.


> If it was, every pentester in the country is getting way underpaid, because this is not an uncommon pentest finding.

No wonder there's a flourishing (and well-paying) black market for vulnerabilities. I wonder how much this keys-to-the-kingdom vuln would be worth (MITM Instagram, bootstrap a botnet, steal celebrity pics... the possibilities are endless).


There is no market for these kinds of vulnerabilities at all.


Makes sense, I'm just trying to get a sense of what sort of thing would be worth that much. Obviously only Facebook can answer that for sure. Heartbleed?


It's really dependent on the company. A Ruby RCE would have the same effect on an entirely-Ruby-stack company that Heartbleed did.

I don't believe any company would pay $1M for a bounty on their own systems. Only people who intend to use the vuln, or to fix it as they are the vendor.

For a vuln to go for $1M requires "discovering SQL injection"-level vulns. MS paid $100K for an entire vuln class for an ASLR/DEP bypass discovery, and promptly patched the shit out of it. For a remote vuln class, I could see them quite happily paying $1M to not have all of their products re-owned.


What about the parallel universe in which bug bounty hunters are blackhats who directly profit from the exploit? It seems like someone with that level of access could run up, among other things, a decent AWS bill.


I don't know about you, but I value the certainty of not losing a few years of my life to court proceedings/jail time at significantly above $50M.


Well, obviously we're talking about the mirror universe where nerds get away with things instead of scapegoated. Also goatees everywhere.


> † (and, to be clear, a friend, though a pretty distant one; I am biased here.)

Alex is good friend of mine and I've known him since college. He's definitely a good guy and understands the ins and outs of security vulnerability research, having done it himself for many years. I'm sure he didn't take the action of calling the researcher's employer lightly, and probably had a really good reason to do so.

There has to be a side of this story we aren't hearing, and probably never will.


> I'm sure he didn't take the action of calling the researcher's employer lightly

He's the CSO, and this occurred under his watch. The exploit was 2 years old and well known. It highlights an internal security problem at Facebook et al, of which Alex sits at the top.

In this situation, his years of "doing it himself" are unlikely to have factored in; rather, he felt like he dropped the ball and could be facing some consequences, or at the very least felt embarrassment.

This would have led to a rash thought process, and perhaps Alex jumped to the conclusion of some sort of sabotage by another company.


> I assume the AWS resources have been rekeyed by now

It doesn't look like the SSL cert on instagram.com has changed recently, and the pentester specifically claims to have obtained its private key.


*A* private key. It's not uncommon to have multiple simultaneously-valid certificates for the same domain. I'd argue that it's actually sort of irresponsible, and therefore surprising, for a site at the scale of Instagram not to, for backup purposes.


But using that private key can still grant him access to someone's traffic to their machines. Isn't revocation necessary to ensure security in that domain ever again?
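Revocation aside, there is a quick way to tell whether a rotation actually replaced the key: compare public-key digests. A sketch using openssl, demonstrated here with a locally generated throwaway key and self-signed cert (against a live site you would fetch the cert with `openssl s_client -connect host:443` instead):

```shell
# Generate a throwaway private key and a self-signed cert bound to it.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out demo.key 2>/dev/null
openssl req -new -x509 -key demo.key -subj "/CN=demo" -days 1 -out demo.crt

# Digest of the public key embedded in the certificate...
cert_pub=$(openssl x509 -in demo.crt -pubkey -noout | openssl sha256)

# ...and of the public key derived from the private key.
key_pub=$(openssl pkey -in demo.key -pubout | openssl sha256)

# Matching digests mean the cert is still bound to that (leaked) key,
# so issuing a "new" cert changed nothing security-wise.
[ "$cert_pub" = "$key_pub" ] && echo "same key"
```

Note this check doesn't address revocation itself: even after rotating to a genuinely new key, the old certificate should be revoked so the leaked key can't be used to impersonate the site until the cert expires.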


every network has old crufty bug-ridden stuff laying around

That "stuff" was the keys to the kingdom. Do you think this is acceptable for a company like Facebook? So instead of making an apology, the CSO is trashing the guy who gave them the wake-up call?

I do think you are heavily biased ;)


They should have just paid him the money, told him not to do it again, fixed the architecture bug, updated the rules, and moved on.

Alex just went the drama route.


If there's a grey area in your ToS, and a security researcher/hacker type is in the middle of it - the smart route is to appease them and fix the grey area. FB has a lot of resources, and it wouldn't have to deal with the blowback from this.

Why make such a bad situation worse, if you don't have to?

FB messed up. The researcher partly messed up too. Fix it and move on.


If you're biased, you should do the ethical thing and stay out of it, honestly. There is a ton of asymmetry here, and you and your Facebook CSO friend are being bullies. This is pretty grey, you don't have first hand knowledge, and obviously Alex can do no wrong in your eyes.


Everyone is biased. Presenting your arguments and declaring your biases so others can take them into account is the ethical thing to do.

This reminds me of the illusion of objectivity in journalism. If you pretend to be perfectly objective and unbiased, you're lying.


Please address where in your story your good distant friend's call to the employer would be justified.

Sounds like a jerk to me.


As mentioned in Alex Stamos' response, he believed Wes was working on behalf of Synack, and contacted the CEO directly.

Escalating issues with a company to the CEO of that company doesn't seem like jerk behavior.

Wes counters that, "[Alex] never for a second believed I was operating on behalf of Synack"

I'm not sure how Wes knows what is going through the mind of Alex, so I'm inclined to take Alex's word on this.


> Wes counters that, "[Alex] never for a second believed I was operating on behalf of Synack" I'm not sure how Wes knows what is going through the mind of Alex

As blazespin[1] mentioned in this thread, Facebook's own terms state that they only pay individuals. That's how Wes knows: Facebook's bounty program never deals with companies. The only other explanation is that Alex is ill-informed about the terms of Facebook's bounty program.

1. https://news.ycombinator.com/item?id=10755746



There is no reason for the researcher not to retain those keys, IMO. Once those keys were found to be compromised by the company, they should have been revoked immediately and considered 'in the wild'. The fact that they didn't revoke these keys is basically a security violation in itself.

Dumping the users table on an 'internal' (heh) dashboard: any company running these bounty programs needs to clarify what a 'user' is. Is it someone using their application, or does it include employee information as well? It's an important distinction.


Your characterization of the AWS keys being sat on for over a month does make sense, now that you frame it in that light.

That said, Alex Stamos and the rest of the security team should have tried to figure out what vulnerabilities existed from this server instead of just nuking it and thinking that the problem was solved. That was lazy and stupid.


If the researcher hadn't tried to find out what he could do with those AWS keys, they would likely still be valid. It's conceivable that other people have found them too and did the same as the researcher did, only keeping everything to themselves. Thus, if the researcher hadn't done the thing you consider bad, users of Instagram would currently be more vulnerable. Why then is the thing the researcher did bad?


> 5. Fb nukes the server, confirms the RCE, pays out $2500 for it, declines to pay for the second finding, and asks the tester not to use RCEs to explore their systems.

The issue here is that, in hindsight, FB failed at this step.

They nuked the server, but they didn't determine what sensitive information was available on that server, and take steps to mitigate those risks.

I think that's an understandable mistake - cleaning up after a server intrusion is hard. Knowing how much to do after a possible intrusion is even harder. But it is still a mistake and it happened on Alex's watch.

If the purpose of the bounty program is to find out about your security mistakes, then the program did its job here, and Alex should be pleased that the problem was reported so that they could fix it.

That the researcher found the mistake by overstepping what is considered ethical (and I have no doubt that they did overstep) creates a very difficult situation - you don't want to reward that behaviour, but you do want to know about security problems and this one was only discovered/reported because of that bad behaviour.

In that difficult situation it is all the more important to tread carefully. The easy cases where you're paying out a $10k bounty typically don't require much finesse. It's the tricky cases where you need to make sure your actions are well considered and above-board at every step.

From Alex's own summary it's evident that he didn't handle it as well as he could have.

Two of the longest paragraphs in Alex's write up cover what he said to the CEO of Synack, even though Synack had nothing to do with this. Even if we accept that Alex thought it likely that Wes was acting on behalf of Synack (personally, I don't think that was a reasonable conclusion to draw, though I assume Alex is sincere in his view that it was), he should have determined that up front, and then, once he knew it was not work related, he should have avoided:

- making accusations about Wes's ethics to his boss ("Wes ... had acted unethically")

- suggesting that his external behaviour has implications for his employment ("Wes's behavior reflected poorly ... on Synack")

- bringing in the threat of lawyers ("keep this out of the hands of the lawyers")

When faced with the difficult situation of legitimate security research that has (well) overstepped the ethical boundaries, all the evidence is that Alex jumped to the position of protect yourself, protect the company, intimidate and control the researcher, and though that is a common and understandable reaction, it's not the way you turn a bad situation like this into a good one.


As a security researcher and engineer, I'd like to point out the following, without taking sides:

1. Facebook is not going ballistic because this is a RCE report. They have received high and critical severity reports many times before and acted peaceably, up to and including a prior RCE reported in 2013 by Reginaldo Silva (who now works there!).

2. The researcher used the vulnerability to dump data. This is well known to be a huge no-no in the security industry. I see a lot of rage here from software engineers - look at the responses from actual security folks in this thread, and ask your infosec friends. Most, perhaps even all, will tell you that you never pivot or continue an exploit past proof of its existence. You absolutely do not dump data.

3. When you dump data, you become a flight risk. It means that you have sensitive information in your possession and they have no idea what you'll do with it. The Facebook Whitehat TOS explicitly forbid getting sensitive data that is not your own using an exploit. There is a precedent in the security industry for employers becoming involved for egregious "malpractice" with regards to an individual reporting a bug. A personal friend and business partner of mine left his job after publicly reporting a huge breach back in 2012 (I agree with his decision there), and Charlie Miller was fired by Accuvant after the App Store fiasco. Consider that Facebook is not the first company to do this, and that while it is a painful decision, it is not an insane decision. You might not agree with it, but there is a precedent of this happening.

I'm not taking sides here. I don't know that I would have done the same as Alex Stamos here, but it's a tough call. I do believe the researcher here is being disingenuous about the story considering that a data dump is not an innocuous thing to do.

I'm balancing out the details here because I know it will be easy to see "Facebook calls researcher's employer and screws him for reporting a huge security bug" and get pitchforks. Facebook might be in the wrong here, but consider that the story is much more nuanced than that and that Facebook has an otherwise excellent bug bounty history.

Edited for visibility: 'tptacek mentioned downthread that Alex Stamos issued a response, highlighting this particular quote:

At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

Viewed in this light (and I don't believe Stamos would willfully fabricate a story like this), it is very reasonable to escalate to an employer if they seem to be affiliated with a security researcher's report.


> The Facebook Whitehat TOS explicitly forbid getting sensitive data that is not your own using an exploit.

This seems to be the crux of this whole thing. The article suggests that is not true, including some quotes from what I assume is "The Facebook Whitehat TOS" at [0] along with his interpretation of those quotes. As an unsophisticated person reading through that document, I don't see anything I would describe as "explicitly forbidding getting sensitive data that is not your own using an exploit". The closest seems to be: "make a good faith effort to avoid privacy violations". I'm inclined to believe you and others in this thread that this was not the most responsibly done, but the repeated claim that there is an explicit policy against this, which doesn't seem to be findable, makes me scratch my head. Is there some other document that is more explicit, or is this just supposed to be implicit knowledge, or what?

[0]: https://www.facebook.com/whitehat


The "privacy violations" statement is what I was talking about. I suppose you could make an argument that this is not sufficiently explicit for this scenario, but I believe it covers this ground. It is a privacy violation to retrieve sensitive data via an exploit.


It is worth pointing out that Wesley specifically avoided dumping data from the S3 buckets which were directly related to user data: "There were quite a few S3 buckets dedicated to storing users' Instagram images, both pre and post processing. Since the Facebook Whitehat rules state that researchers need to "make a good faith effort to avoid privacy violations", I avoided downloading any content from those buckets." In fact, the only 'sensitive data' he retrieved in regards to user account information were the weak employee logins.


Is gathering up the credentials of employees not also a privacy violation? At this point you're going way beyond proving that you have access to something - you're actively trying to probe and see how deep the rabbit hole goes. I don't (personally) believe that this is acceptable behaviour under a white hat program.


I see your point but I'm not sure if having passwords like 'changeme' qualifies as being a privacy violation... You should almost expect it to happen at that point.

But I do recognize that cracking passwords goes a step too far.
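To illustrate why passwords like 'changeme' are barely a secret at all: even a toy wordlist check recovers them instantly. A minimal sketch below, using hypothetical usernames and unsalted MD5 purely for brevity (this is not a claim about how Instagram stored passwords; real systems should use salted, slow hashes like bcrypt):

```python
import hashlib

# Hypothetical dumped credential hashes (unsalted MD5, illustration only).
dumped = {
    "alice": hashlib.md5(b"changeme").hexdigest(),
    "bob": hashlib.md5(b"instagram").hexdigest(),
}

# A tiny wordlist is enough when passwords are this weak.
wordlist = ["password", "changeme", "instagram", "letmein"]

cracked = {
    user: word
    for user, h in dumped.items()
    for word in wordlist
    if hashlib.md5(word.encode()).hexdigest() == h
}
print(cracked)  # -> {'alice': 'changeme', 'bob': 'instagram'}
```

The point being: a check like this takes milliseconds, so "you should almost expect it to happen" is about right.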


Fair enough, I can only say that it seems like they could be more explicit on that point, but I don't see anybody arguing against the idea that that their rules could use clarification.


"The Facebook Whitehat TOS explicitly forbid getting sensitive data that is not your own using an exploit."

LAUGH.. Where does it say this?

https://www.facebook.com/whitehat/

I think instagram should be asking themselves: Would they rather have an honest researcher report this or North Korean hackers not saying anything and just slurping data? Security Researchers are always going to see things they shouldn't. That's just a fundamental rule. You have to know who your real enemies are and not come down on someone just because they got a little enthusiastic.

Wes [edit] is one of the good guys - he went overboard, sure, but he should be rewarded, he should be asked not to go crazy next time, and the rules should be updated.

Personally, I think saying the exploit was trivial shows that the CSO should be fired. If he has to make a phone call, it's not trivial.


>If you give us reasonable time to respond to your report before making any information public, AND MAKE A GOOD FAITH EFFORT TO AVOID PRIVACY VIOLATIONS, destruction of data, and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.


I certainly wouldn't consider dumping credentials to test for reuse/continued use a privacy violation. If FB wants people not to dump data, they need to make that explicit and specific.


Really? The article states: "To say that I had gained access to basically all of Instagram's secret key material would probably be a fair statement". How on earth would holding on to that data not be a privacy violation?


Holding credentials is not violating privacy. It would be possible to use those credentials to violate privacy, but merely having them is not that act.


Holding sensitive credentials is absolutely a violation of privacy. This is like saying that having a user's password is not a privacy violation unless you use it to gain access to their account.


So would you agree that holding the keys to someone's house is also a privacy violation? What if instead of keys, you were holding a set of lockpicks? Would everyone's privacy of home be immediately violated?


It's all a question of intent. If you keep the lockpicks so that you can pick locks, then yes. If you're a lockpick collector, then no.


Holding a manually chosen password can be a privacy violation because it's a small peek into the user's psyche. (I wouldn't say the employee "changeme", "instagram" etc. passwords count, although the act of running a password cracking tool meant that he could have seen a more personal password.)

Holding some randomly generated numbers that could be used to access a server is not.


I understood it was employee credentials, not customer.


Surely they would have to revoke all the keys anyway as they would have no idea if a blackhat got their first and took the keys before the vulnerability was reported?


According to the timeline, Instagram have known about the SSL keys since 1 Dec.

My browser is currently showing an SSL cert for instagram.com that was issued in April and expires on Dec 31.

Doesn't look like they're in any hurry to revoke that one. (I guess like Alex Stamos told his employer - it's "trivial and of little value"...)


Or, like almost any company that's reasonably competent, they have multiple certificates with different private keys.


And they just happen to only leave some of them in their S3 buckets?

Seems … contradictory.


Whose privacy did Wes violate? Do webservers have data personal to them?


Privacy in this case is in an infosec context. Not a personal information context. Finding the open/unsecured/unpatched server is a bug. Downloading and testing a password keyring found as a result of that bug is not finding a bug. That is exploiting a bug for additional gain.


Finding a sql injection in a query string is finding a bug. Is using the injection to dump a table exploiting the bug for additional gain?

It sounds like you're only allowed to penetrate one layer of a defence in depth system. If you gain access to some edge system that isn't sensitive, I'd assume that would pay little. If you gain access to some core system, I'd assume that would pay lots. Why then are you not allowed to pivot from some nothing system to some larger system?

The purpose of bug bounties is to secure your systems. If you only ever secure the first layer, if some malicious actor finds another vector into the same system and there is a really easy pivot in sight (like full access to an S3 account!) then you've lost. If the bug bounty hunter found the escalation though and responsibly reported that, then a potential second vector loses its potency.

I'm not a security person at all so I'd like to hear some perspective on my thoughts above. It just seems fairly short sighted to specifically forbid pivoting.

FWIW dumping S3 buckets as a white hat does seem wrong to me. Listing them probably ok.


Running a bug bounty is not a suicide pact. A team had to convince a finance group that it was valuable to give money away to people who might be assholes. Bounty hunters are not a community, but if you are a bounty hunter, you should understand that many of your peers are total assholes. The company that wants to pay you a reward has to figure out if you are going to make them regret offering you a reward.

There are 4 categories of reporters: great, good, shit, and crazy. Again - if you are a reporter, you should be trying to make it easy for the team to place you in one of the first two categories simply by being polite and respectful.

I will take a side- it's Facebook. Dumping data is the end of the Proof of Concept. Trying to determine if there is more data you can access through a single vulnerability chain is over the line.

Boats sink. The engineers know it. If you sink a boat in order to prove the boat had a hole, you will not get your payout.

And one final thought-

In my experience, bounty hunters almost never realize the full consequences of a vulnerability that receives a reward. Most of the time, the "Bad thing" that they identify is just the tip of the iceberg.

The choices of the researcher reflect inexperience and immaturity. The researcher has a significant misunderstanding about what is happening in the bug bounty marketplace. I think they need to apologize if they want a future in the infosec world.

Publishing this blog post was a huge error. Going to the journalist was another huge error. I don't see how this person could ever be considered employable by a reputable company.


Are you saying that if Wes hadn't pointed it out, then Alex wouldn't have to refresh all those keys? That if Wes hadn't dumped the keys, then they were 100% secure?


Good lord no.

I am saying explicitly- Wes went past the point at which he should have stopped.

He also should have known better, and the fact that he didn't is a problem in itself.


Very well said. This is a mature understanding of bug bounties.


As someone outside the infosec industry, I think the dissonance I feel reading this comes from this line:

"[Alex] then explained that the vulnerability I found was trivial and of little value"

coupled with the fact that he seemed to be very worried about the problems that could be caused by the author in exploiting it. Something seems amiss.


I feel he meant the original RCE Ruby bug which then allowed all this extra access. It was not some huge, architecture-changing security problem, just a simple upgrade to fix.


What he revealed however, was that Facebook doesn't pay attention to least privilege with key access, what those keys access[1] and more importantly where those keys access data from[2]. I have a feeling there's some scrambling to cover these blind spots over at Facebook.

[1] http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.ht...

[2] http://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.htm...
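For contrast, least privilege here would mean scoping each service's credentials down to only the buckets and actions it actually needs, rather than account-wide S3 access. A hypothetical sketch of what such a policy document might look like (bucket name and scope are invented for illustration, not taken from the article):

```python
import json

# Hypothetical least-privilege IAM policy: read-only access to a single
# static-content bucket, instead of keys that can read every bucket
# (keychains, source code, SSL private keys, ...).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-static-content/*",
    }],
}
print(json.dumps(policy, indent=2))
```

With keys scoped like this, an RCE on one admin box would have exposed one bucket's static files, not "basically all of Instagram's secret key material".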


Nothing in here is exactly wrong, but we do have to acknowledge that this whole back and forth has essentially informed everyone that:

Facebook considers the keys to their kingdom to be worth $2,500. OR Facebook doesn't know what the keys to its kingdom look like.

Facebook will not update keys/credentials even if they are known to be compromised.

If you have the keys to the kingdom, you can use them and Facebook won't find out about it unless you tell them.


It's weird how this flies over the head of so many.


> This is well known to be a huge no-no in the security industry. I see a lot of rage here from software engineers - look at the responses from actual security folks in this thread, and ask your infosec friends...

The problem is that on the one side you have security professionals who do this full time. They build up a background of implicit knowledge through extensive interaction with other security professionals, via training, mentoring, team activities, etc.

On the other side you have folks like the guy who found this vulnerability -- don't specialize in security, basically moonlighting / hobby, not necessarily connected to other security professionals or even other hobbyists. They won't have the same kind of implicit knowledge.

When someone from the first category communicates with someone from the second category, the communication can break down. That's what happened here.

Offering a million-dollar bounty makes this kind of communication problem more likely -- a potential million-dollar payout catches the interest of people who have spare time and encourages them to pick this as the thing they do on the side. And further, it encourages them to try anything and everything you don't explicitly forbid, by giving them hope that if they just try hard enough, they'll be able to turn what initially looks like a ho-hum two-year-old Ruby exploit into a million-dollar payday.


But his LinkedIn profile suggests he is a security specialist.


Alex Stamos' (CSO of Facebook) reply to OP:

https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


The problem that Alex is skimming over here is that if Wes got access to this data, you have to ask yourself - WHO ELSE GOT THE DATA?

If Alex knows anything about his job he should know that he has to refresh all those keys even if Wes didn't report it or say anything.

The diff between Wes and everyone else is Wes just explained to Facebook how completely screwed they are. Alex is just pissed because Wes made it bluntly clear how much he screwed up.


Alex has been a vulnerability researcher since the 1990s, and co-ran iSEC Partners, one of the best-known software security firms in the world, through the 2000s. I'm pretty sure they're on top of the key situation.


A lot of things have changed since 1990...


Yes, that's true, and Alex is one of the reasons they've changed.


judging by this exploit and the fact that they didn't rotate keys and other folks probably got this data, I would say this wasn't one of their finest moments, wouldn't you agree?


Take the top 10 tech companies on the west coast.

Select the most senior security person at those companies.

Roll 1d10 and substitute that person for Alex in this exact situation.

Now bet your life that you won't have your life wrecked by a prosecutor based on the outcome of that die roll.

I don't love Stamos calling the guy's boss, but if it's between "call his boss" and "tell legal that a bounty participant has FUCKING GONE ROGUE WITH ALL OF INSTAGRAM'S CREDS", I think he made the right goddamn call.

Jesus.


That just sounds like ass-covering to me. The fact of the matter is that Alex had no idea, and no one at Facebook had any idea, if this researcher indeed went rogue with their credentials, because of the lack of security that the hack exposed. No logs on S3 buckets? No separation of access between user data and operations buckets? Give me a break.

Calling the guy's boss or the guy himself wouldn't give any authoritative answers as to what's on the researcher's laptop, so I really don't see how calling the researcher's boss was a way out of "telling legal that a bounty participant had THE KEY TO THE KINGDOM BECAUSE CAPS ARE REALLLLLY AWESOME!" If you think that simply calling the guy's boss was the right call, and not acknowledging the massive security holes that this guy exposed, then I hope you work for a company that has a clearer bounty program and deals with equally ethical researchers who will tell you about a full-systems exploit without violating any user privacy and be happy with your $2500. That will happen...


> but if it's between "call his boss" and "tell legal that a bounty participant has FUCKING GONE ROGUE WITH ALL OF INSTAGRAM'S CREDS"

False dichotomy - those weren't his only options, had he bothered to think more on it. There was an even better option, which strangely he chose not to take (assume an actual rogue actor got there before Wes and react accordingly: rotate the AWS keys, password reset for affected users, update SSL signing keys).

It bears asking - what exactly was he trying to achieve by calling Wes' boss, and has he achieved it? This is not his brightest moment.
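The "assume an actual rogue actor got there first" response reduces to a simple rule: any credential that existed before the incident is burned, whether or not there is audit evidence of misuse. A minimal sketch of that rotation decision (key IDs and dates are hypothetical; an actual response would enumerate keys via the IAM API):

```python
from datetime import datetime, timezone

def keys_to_rotate(keys, incident_time):
    """Given (key_id, created_at) pairs, return the IDs of every key that
    existed before the incident -- under an assume-breach policy all of
    them must be rotated, regardless of whether misuse was observed."""
    return [key_id for key_id, created in keys if created <= incident_time]

# Hypothetical timeline: keys issued before the report are compromised.
incident = datetime(2015, 10, 21, tzinfo=timezone.utc)
keys = [
    ("AKIA_OLD", datetime(2015, 1, 5, tzinfo=timezone.utc)),   # pre-incident: rotate
    ("AKIA_NEW", datetime(2015, 12, 2, tzinfo=timezone.utc)),  # issued after: keep
]
print(keys_to_rotate(keys, incident))  # -> ['AKIA_OLD']
```

The policy logic is trivial; the hard (and unavoidable) part is the operational work of reissuing every affected credential, which is exactly why "no evidence of access" is not a substitute for rotation.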


Now bet your life that you won't have your life wrecked by a prosecutor based on the outcome of that die roll

What I get from your comment is that it's never a smart move to take one's chances dealing with company security people. The only smart move is to sell anonymously to the highest bidder.


It says right in that response that the keys were already rotated.


Or... one could actually read the response article: "This bug has been fixed, the affected keys have been rotated, and we have no evidence that Wes or anybody else accessed any user data. "


Didn't they have to ask Wes to figure out what data he accessed in the first place, and even then they couldn't figure out that he had accessed the keys?


'We DO NOT have evidence that X happened' is evidence of incompetence.

The competent responses would be:

"We DO have evidence that X DID NOT happen", or

"We DO have evidence that X DID happen".

A bag of rocks also has "no evidence that Wes or anybody else accessed any user data". Would you trust a bag of rocks with your computer security?


Regardless of whether or not he followed etiquette or the rules, he did report it and obviously had no intention of utilizing it to be a bad guy. And calling his employer? This was ass-covering by the CSO.


I understand how dumping SENSITIVE data can make you a flight risk, but he specifically outlined that he avoided dumping anything sensitive (that is, anything directly related to users and their data). He did dump S3 buckets that had a treasure trove of other files (such as the API keys for the other services and static content), so I guess my question here is: at what point does dumping of any kind become bad?


In infosec, keychains are about as sensitive and private as it gets. They should probably change it to "do not pull or retain any data from any server except that which is explicitly needed to identify the vulnerability" for those who might not understand.


But I feel like it would have been the same if he got to the point he did and recognized that he had access to keychains. Whether or not he actually accessed them, especially since they weren't auditing access (from what I understand), is sort of irrelevant at that point; they would have to be cycled either way.

I understand that they're top secret, but that sort of proves the extent of the vulnerability.


It would have been the same. Bug bounties are for the quality of the bug/vulnerability - for instance, they find a configuration error that directly affects every server Facebook has open, or they find a zero-day exploit with root capabilities. Those would be million-dollar bugs. Facebook definitely needs to clarify that the bounty is for the severity and widespread nature of the bug itself and not an invitation to penetration testing. They also need to be more explicit about what is not allowed. Maybe they should give bonuses for the value of the target, but the current policy is for the bug itself. He certainly did expose an embarrassing lack of procedure and awareness of key security, and that's certainly worth a lot more to Facebook than the bug. However, they definitely do not want to encourage penetration testing. And it's an infosec code of ethics (which probably should be written down somewhere) that when you find a bug, you don't use the bug to download anything from the target. It means a lot of people won't be interested, because they want to hack and penetration test. To be whitehat about that requires a lot closer communication and contractual obligation.

Facebook needs to get its shit together in key security and clarity of its bounty program. On the other hand this guy writing a blog about downloading a keychain and probing how deep it leads is definitely not responsible infosec.


On some level isn't the security testing a farce if you can't use local data to escalate your breach? It seems kind of like a bank that wants to know if their front door is unlocked but doesn't want you to tell them the vault's open.


From your profile: https://keybase.io/breakingbits/sigs/DIO92uX_zdSeZEwYeQ74qj1... throws an error. Just FYI.


Thanks, I revoked and reissued keys recently. I'll fix that.


A database is just a tool to store data, just as a filesystem is. Can you explain the difference between dumping user logins from a table and just reading them from a file? How is the first one a no-no while the second one is fine?


Summarizing what I've seen here in analogy form:

  Researcher: "I found a way to unlock your door"

  Facebook: "Thanks, here's $2500. We've now fixed the problem."

  Researcher: "Oh, BTW when I unlocked your door I rifled through
    your stuff and found your passport, your banking details, and a
    lot of personal information. I've kept copies of these. I also
    found the keys to your car and looked inside, where I found a box
    in the trunk. That box contained sensitive documents including an
    employee badge / proximity card. I used this card to gain access
    to your workplace. In doing this, I also managed to get into the
    janitor's closet which had a set of keys. I used these keys to
    get access to the complete building and took a look at all the HR
    files and rifled through a bunch of corporate contracts."

  Facebook: <gobsmacked>

  Researcher: "Can I have my million bucks now?"

Where the researcher stepped over the line is using the door attack to escalate further attacks. It's little different from finding a way to reliably impersonate Mark Zuckerberg's credentials in such a way that others will 100% believe it. That finding is worthy of a reward. But then using that vulnerability to social engineer others into revealing passwords, and using that as a launching point for mounting further attacks, is going way too far.


I saw it more like:

  Researcher: "I found a way to unlock your door"

  Facebook: "Thanks, here's $2500. We've now fixed the problem."

  Researcher: "Ohh hey, about that bug. Turns out that if
 the guys from the Ashley Madison breach had found that
 door first, your entire company would lose billions in
 market cap, you and all your friends would no longer
 have jobs, and the trust placed in your company by the
 public would be so eroded that there's a good chance it
 would no longer exist."

  Facebook: "Well this is embarrassing. Our boss found out
 and talked to your boss, the subject of lawyers and law
 enforcement may have been mentioned in an effort to keep
 this info from getting to the public, and when that
 failed, he made a highly visible blog post discrediting
 your professional conduct."

  Researcher: <gobsmacked>

You can make the case for misconduct on both sides, but I'm more inclined to side with the researcher. If you define bugs and the associated bounty by the amount of possible damage they could cause, this one would definitely be 'catastrophic'. And Facebook would still be none the wiser if he hadn't dug deeper.


   Oh by the way, when I looked in your open front door, I noticed all your 
   computer terminals had their passwords written on post-it notes by their 
   monitors, and the big safe in the back room had its key hanging right 
   next to it on a chain.


Except that for a company as big as Facebook your security provisions probably shouldn't stop right at the front door.


Sorry, but being a multi-billion company, pledging $1M bounty and giving only $2500 for a serious bug which could lead to taking control of the whole Instagram is greedy. I can understand why the researcher didn't stop there.


Also note that the second part of this conversation happened over a month after the original report.


Note to self: Don't report any chained attacks to any large company's bug bounty program. Alex Stamos contacting the employer of the bug reporter is completely out of line.

This is the fastest and easiest way for Facebook to stop good submissions to their bug bounty program.


In my opinion, the author is feigning shock...

He claims to have downloaded the content listed below. And he is surprised that Facebook responds coldly? Note the string "private keys" in this list... Doesn't the author know how long it will take them to recover from this breach? How much it will cost them?

On the other hand, it does sort of reinforce the idea that he should be paid handsomely, doesn't it? :)

    * Static content for Instagram.com websites. Write access was not tested, but seemed likely.
    * Source code for fairly recent versions of the Instagram server backend, covering all API endpoints, some image processing libraries, etc.
    * SSL certificates and private keys, including both instagram.com and *.instagram.com
    * Secret keys used to sign authentication cookies for Instagram
    * OAuth and other Instagram API keys
    * Email server credentials
    * iOS and Android app signing keys
    * iOS Push Notifications keys
    * Twitter API keys
    * Facebook API keys
    * Flickr API keys
    * Tumblr API keys
    * Foursquare API keys
    * Recaptcha key-pair


Is this not the point of a Whitehat bounty program? To entice someone to discover and disclose a bug in a trustworthy manner?

If they react this way, and can't trust people to attempt to find exploitable security holes on their system (even those that yield private keys), then what is the point at all? The only people that find them then, are not going to be as cooperative about it.

> Doesn't the author know how long it will take them to recover from this breech? How much it will cost them?

This is not the author's fault. He did nothing but disclose bugs that Facebook themselves set in place, and seemed to be very open with them about it, at that.


No, this is not the point of bug bounties. The point of a bug bounty is to find and fix bugs. That's why they're called "bug bounties".

This person took a bug bounty and ran it as a penetration test.

Facebook fixed the one bug he found and paid him for it.


Bug bounty appears to be a misnomer in this instance. Facebook is specifically asking for reports of security vulnerabilities in their policy:

> If you believe you have found a security vulnerability on Facebook, we encourage you to let us know right away.[1]

Which then begs the question to me: how do you differentiate an acceptable and unacceptable probing of security vulnerabilities when you can't capture the full impact of an issue without attempting to exploit it to its fullest? Because it is certainly not outlined in their policy.

And when you're asking for any whitehat to attempt to discover and disclose security vulnerabilities in your system with only the limpest of guidelines around how to do so, I don't feel that a reaction like Facebook's here is warranted.

[1]: https://www.facebook.com/whitehat


I don't know. I feel bad for Alex but if we want to suggest that Facebook's vulnerability disclosure policy was poorly written, I will ruefully agree.

When you stand up a bug bounty program, you are giving strangers permission to do something that they would otherwise be prosecuted for doing. You should be extraordinarily careful when you do that, and your rules of engagement should be crystal clear. These weren't.


EDIT: Having read the CSO's explanation that the guy was using his company work email, it makes more sense why the CSO would contact the company (and explains away the pettiness my comment was referring to)

One thing I notice: if the CSO felt like this person did something grossly illegal and irresponsible, why not go straight to the police? Why instead go to the man's employer and speak passively aggressively?

Paradoxically, contacting the authorities could have helped Facebook's argument. It would have communicated to the community at large: "Hey, Facebook believes it has clear standing to pursue this guy. Maybe he really did do something wrong."

Instead, what I'm reading is: "Facebook doesn't actually believe what the guy did was illegal per se... but they wanted to spite the guy anyway."

For me, it seems petty.


Zero is the number of people on HN who would feel better about this situation if Alex Stamos had referred this person to the police to be prosecuted under CFAA.


The researcher has already updated his post regarding the use of his company email. Apparently your original point still stands:

> I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.

Also, why would he be doing this work at the behest of his employer when (IIRC) Facebook's bounty program only pays out to individuals? It would automatically make him ineligible to claim the bounty.

To me it seems like Alex Stamos tried to use some good old threaten-your-livelihood intimidation tactics and failed miserably.


I commented earlier to sort of the same effect, and was thinking a little more about this.

I don't think the goal, or desire, is to be told the full extent or impact of a problem. The goal is to be alerted to spots that may lead to a large problem, or are in and of themselves a large problem.

This seems like it has a few facets to it. You end up reducing the space of things to mostly "ways to get in the front door." Thinking about it, I would probably be frustrated, in general, if I knew someone had important keys to the kingdom I was in charge of. It doesn't change the fact that others may or may not have also gotten the same access, now it's 1-* instead of 0-* people who have it and shouldn't.

I'm still slightly skeptical on the bounty reward itself. This was a simple exploit that got pivoted into some major shit, so do you reward the exploit, or the logical conclusion of the exploit? I lean towards the latter, but again, as you said... how do you figure out the impact without... actually trying to figure out the impact?

Bug bounties are an interesting concept, to be sure.


Holy christ that is SO wrong. The system should not be so easy to pivot in that way. That was definitely the real bug. If getting the keys to the kingdom is as easy as exploiting a trivial bug, then Instagram is really, really screwed.

As I'm sure it's not the only trivial bug!

Instagram should be thanking Wes for the wakeup call instead of making him the enemy.


Why wouldn't it be considered a bug that accessing one low-permission S3 bucket allowed him to access all the other buckets, including user data and keys?


It is a bug. But I think the point Facebook is making is that it is impolite to exploit the RCE bug and then access other systems.


Both tptacek here and Facebook claim that he found one bug. He found at least two, depending on how you classify things: even if Facebook would not like to admit that their security architecture around token management was/is deficient, and the fuzziness of internal security boundaries makes "bug" somewhat hard to define, it was deficient by industry standards (especially for such a large and tech-focused company), and he got way more access than that RCE should have given him. Whether or not he was supposed to go looking for such additional bug(s), it's discourteous not to at least acknowledge that he found them, and thereby provided Facebook additional value over just finding the RCE.


If he had told Facebook that at the same time as he reported the credentials he harvested from the database --- which his timeline suggests he could have --- I'd agree with you.

But he didn't. He put the credentials in his back pocket so he could pull them out when they suggested he hadn't found his "million dollar bug". And so for a month after they fixed the bug, some fucking rando is walking around with credentials to all of Instagram's AWS assets, totally unbeknownst to anyone at Facebook. They turn down his bid for his "million dollars", and he busts the credentials out on them. You think they're going to thank him?

He's lucky it was Stamos and not Mary Ann Davidson.


I think the point is that, after the first bug report, those credentials SHOULD NOT WORK, because their job should have included revoking ANYTHING that system had access to. How do they know Wes was the first person to find that bug and the linked credentials?

So, the fact that those credentials still worked a month later is a HUGE FUCKING DEAL! Alex, the consummate professional, didn't do his job and instead had a knee jerk reaction to someone slapping that fact in his face.


It has been incredibly interesting reading through those threads. People are arguing two completely different arguments. tptacek is saying that the researcher keeping AWS keys without disclosing this was bad and the guy is lucky not to get an early-morning wake-up call from men with guns. slewis, comex et al. are saying that Facebook's failure to lock down, and only belatedly disable, the AWS keys was bad, and Facebook is lucky they weren't sold on the black market. Both sides are correct, but it's informative who makes which arguments.


That's not what I said. I took issue with tptacek's statement that there was only one bug.


Exactly.

Notwithstanding the fact that AWS credentials should be very narrow in scope.
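As an illustration of "narrow in scope" (this is a hypothetical sketch; the bucket name is a made-up placeholder, not anything from Instagram's actual setup), a least-privilege IAM policy grants a service read access to exactly one bucket rather than to every bucket in the account:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::sensu-monitoring-data",
      "arn:aws:s3:::sensu-monitoring-data/*"
    ]
  }]
}
```

With a policy like this attached to the key on the compromised box, popping one monitoring server would not have yielded source, SSL keys, and user data from every other bucket.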


What is the protocol for assuming that a bug might have previously been exploited and keys already compromised? Is that just not worried about unless they see evidence in logs?


Especially considering Alex Stamos apparently requested reassurance that he _hadn't_ accessed particular classes of data - instead of looking in their own presumably non-existent audit logs of who has had access to the private SSL keys of instagram.com and *.instagram.com!!!

(Seriously??? That's some world-class enterprise-grade "moving fast and breaking things"...)


I don't know, but that's the security team's job; it is emphatically not the job of a bug bounty researcher to do that.


I don't know much about this which is why I asked.

It seems that severity-based payouts have created incentives that do not match the program rules? Maybe all RCE bugs should be paid out on the assumption that, if used, they'll lead to access to a shell or to user data.


Severity on a vulnerability assessment is based on the bug itself; it's the severity of the RCE.


Yeah - but it's 100% clear from this that FB wanted to brush the RCE under the carpet with a "not at all severe $2500" classification - without ever admitting to losing their private ssl keys or auth token seeds.

He clearly _did_ have a "security vulnerability" that gave him the keys to the kingdom. He knew it, and Facebook know it - and they wanted to pretend it was no big deal.

Any bets on how many months till there's a large-scale breach of Facebook user data? The reality of the balance between responsible disclosure and selling an exploit is much easier to evaluate now.


That certainly is the fun and exciting way to read this story.


Which is fine. But threatening to call the cops was really bad.


You don't know that's what happened, even the researcher didn't say that. You're extrapolating.

A much more reasonable and likely explanation of the same set of things we've been told:

Alex Stamos called Synack and said that the AWS credentials, which, by the researcher's own admission, he'd chosen to retain long after the vulnerability he reported was fixed, had to be deleted, and that if they weren't and the researcher continued to use them, the situation would be out of Stamos' hands and into Facebook legal's, at which point he couldn't keep him from being prosecuted.

In that interpretation, Alex isn't threatening the researcher; he's (very reasonably) saying "you cannot use these credentials you've taken from the server, and if you keep doing that, I can't take responsibility for how Facebook will handle this, so you should stop right away before you harm yourself."


it's utterly trivial to revoke and reissue aws access keys. trying to paint this as a necessary security measure is incredibly dishonest. the only plausible reasons to loop in his employer and mention legal remedies are intimidation and incompetence and as you've assured us incompetence is off the table...


blazespin > > But threatening to call the cops was really bad.

tptacek > You don't know that's what happened, even the researcher didn't say that. You're extrapolating.

From Wes' blog (presumably based on his boss' oral description of the call): "Alex then stated that he did not want to have to get Facebook's legal team involved, but that he wasn't sure if this was something he needed to go to law enforcement over."

Your bias is apparently badly incapacitating your reading comprehension, because "stated that [...] he wasn't sure if this was something he needed to go to law enforcement over" is exactly threatening to call the cops. Not even your friend Mr. Stamos, who has presumably read Wes' blog post, is claiming that he didn't. So who are you saying is lying: Wes, or his boss?

Oh, and "(very reasonably) saying 'you cannot use these credentials you've taken from the server, and if you keep doing that, I can't take responsibility for how Facebook will handle this, so you should stop right away before you harm yourself.'" really, really, really sounds like Vito The Baseball Bat "very reasonably" saying "You cannot use this testimony you got off Loanshark Louie, and if you keep doing that, I can't take responsibility for how the boys will handle this, so you should stop right away before you harm yourself."

Seeing that as SERIOUSLY (as opposed to sarcastically) "very reasonable"... Well, hello, friendship-bias Bizarro World.


I'll rephrase the question. Is the broader vulnerability apparent based on the first discovery OR does it only become clear the further down the rabbit hole you get?


I don't know. If we're going to speculate, I'll say: the Facebook security team didn't know this system existed (it's a 3rd party admin console on a public IP address!), and their immediate reaction to it was "nuke it from orbit, pay out the bounty for finding it, and forget about it".

My guess is that they discovered the AWS credential thing on December 1.


If they discovered the AWS credential thing on December 1 after the security researcher reported it, and wouldn't have discovered it otherwise, and it could be the case that someone else found the exact same attack path first, shouldn't they reward him for making them aware of a problem they would not have otherwise noticed? That they wouldn't have fixed? That others that discovered the same attack path might otherwise still openly exploit to MITM all the traffic, to do arbitrary things with arbitrary user accounts?


In your experience, are there other, more careful organizations who would have taken the host offline but saved a disk dump for later investigation?


I would tend to agree.

Facebook's point is that he found a vulnerability, and exploited it instead of stopping there. I kind of understand their point of view though. "See you have a vulnerability there, and then I can get access to this, and then this, and see now I have the password of your user, and then I'm just one click away from accessing all the instagram pictures I want."

Although Facebook's handling of the problem is poor (why didn't the CSO call the author directly to get things squared away? He does not talk to people who are not C*Os?), they do have a point.

I think the author acted in good faith, but got carried away by his findings unfortunately.


"See you have a vulnerability there, and then I can get access to this, and then this, and see now I have the password of your user"

However, these weak passwords could have been exploited separately as part of an attack. It is fair to call it a new vulnerability, even though it was discovered by exploiting the first vulnerability.


Exploiting the bug would have been downloading the actual contents of the S3 bucket (the instagram source and other things). He specifically says he did not do that.


He clearly made a big effort not to violate privacy. The problem is that he made their security look like a joke by getting the keys to the kingdom without anyone noticing. Did that big expensive IDS catch him? Nope. Did any of the log watchers babysitting the AWS logs? Nope. One researcher made the CSO look incompetent in a matter of minutes.

If he had found a bug with something a developer wrote that would be a different story. What he found was layer after layer of Operations (particularly Security Operations) failures. This is something you hire a CSO to think about (or at least hire/manage others to think about).


Are we reading the same article?

> [...] I queued up several buckets to download, and went to bed for the night.

> The next day, I began to go through some of what I'd downloaded, [...]


Key quote:

"Since the Faceboook Whitehat rules state that researchers need to "make a good faith effort to avoid privacy violations", I avoided downloading any content from those buckets"

Listing the contents of the bucket is very different from fetching them. Without listing the contents, he wouldn't know the severity of the vulnerability. There's nothing wrong with that.


While I'm on his side, his wording seems to indicate he did download data from SOME of the buckets, just not those specifically containing user sensitive data.


I'm of the opinion that not downloading user data, but grabbing source code, backups, and secret keys - is a perfectly reasonable interpretation of "making a good faith effort to avoid privacy violations".


What if he hadn't found out about it, but someone else had already taken the files?

Facebook might've never known


That's the Catch-22 of whitehat, isn't it? The whole idea is to find breaches and report them, but any worthwhile breach is going to expose sensitive data. How could you possibly know you've found a hole unless you've peered through it?


> Doesn't the author know how long it will take them to recover from this breach?

I assume Facebook would need to regenerate API keys anyway. Simply showing that the author could have accessed the API keys is reason enough to think that he may have (even if he claims he didn't), or that someone else may have accessed them.


>>Doesn't the author know how long it will take them to recover from this breach? How much it will cost them?

Doctor diagnoses, patient has cancer.

Doesn't the doctor know how long the patient will take to recover from this disclosure? How much it will cost the patient?


Facebook's calling his employer could be slanderous, possibly even criminal harassment.

Between stories like this demonstrating companies' apparent lack of understanding of whitehat infosec, and Weev's incarceration demonstrating the American legal system's apparent lack of understanding of whitehat infosec, it's hard to believe people still participate in such endeavors.


I don't see anything in the description of that call that qualifies as either slander (which requires a false statement of fact) or harassment (which requires a pattern of repeated contact intended to cause emotional distress).


If bringing unrelated parties into the dialogue in the background is not harassment, then what is?

Imagine I contacted your significant other over your comment on HN. Wouldn't you be deeply disturbed even if I did it just once? Bonus points for frivolous legal threats on my side.


Nobody is arguing that this person shouldn't be disturbed. But that doesn't make it "criminal harassment".

Unfortunately --- and this is not a normative argument, so please don't wig out --- the criminal action here is the researcher's. Nobody's been prosecuted, but could they have been? YES.


"If bringing unrelated parties to dialogue in the background is not harrasment (sic), then what is?"

Exactly what parent said, " . . . a pattern of repeated contact intended to cause emotional distress". Yes, doing as you said would be deeply disturbing, but it wouldn't be harassment.


Also remember that the story we have here is a one sided narration from a bug bounty researcher.

The story tells us his side of things, but what specifically Facebook perceived as a threat is still unknown. Why would a CSO get involved unless they specifically thought that data had been accessed, violating the goodwill of the bug bounty research in the first place?


> Why would a CSO get involved unless they specifically think that the data has been accessed...

The most likely reason I can think of is that he was getting some heat from some other C*O.

Edit: Now that Alex's side of the story has been released, his actions don't seem out of line (assuming it's reasonably accurate, and I have no reason to suspect it isn't).

While my explanation is still a valid answer, I agree with the parent. Sounds like he was just doing his job. Though it would be interesting to listen to the audio of the conference call....


That's true, there could be large portions of the story that are omitted or inaccurate. We may never even get the full story.

Assuming the story as stated is truthful or even plausible, what options do whitehat hackers have to defend themselves in such a scenario? I mean the whole point seems to be to try to penetrate a secure system, and the consequences of that action seems to be fairly obvious from the start. If a whitehat hacker is successful, that carries with it the inherent potential that they will have some sort of access to some sort of sensitive data, right?

Surely telling Facebook "I was able to access these exact things" means he expected Facebook to update passwords and change keys accordingly, making the possibility that he retained those keys moot.


It almost seems like Facebook wanted to know about the issue, but not have to update the keys.


He claims he had access to that many credentials. That's a P1 security incident. You can't just let a manager handle it. Your executive, the CSO, has to step in.


Or likely tortious interference - https://en.wikipedia.org/wiki/Tortious_interference


No, it can't be either of those things.


What possible motivation did Facebook have for contacting the company with which this person had a contract-employee relationship, other than to implicitly threaten problems for both? There was no implication that he was doing this other than on his own, and he had cleared it with his employer. Presumably he didn't email Facebook with a corporate email account, and presumably his employer wasn't in a position where Facebook's first assumption would be "corporate espionage! (which was voluntarily reported to a bug bounty program)" - that's disingenuous.

No, it was "We're bigger than you and we have the power to fuck with your life and livelihood", and nothing more.


To ensure that this person deleted the credentials they had taken from the server they popped with the RCE, obviously.

Again: read the timeline. He submitted a finding with AWS creds taken from the server he popped on October 22 --- on December 1, more than a month after Facebook shut the server down. He took AWS creds from a Facebook server and saved them on his laptop for more than a month.

WHY?


Then threaten him with legal action (not that I necessarily condone this - I will say that I did like your breakdown down thread to providing another perspective and balance).

It's neither harassment or slander, but it could be tortious business interference, where one party induces a second party to break a contract with a third party.

His contract employers are neither his parents nor his legal guardians - they have no more power to "ensure" this than anyone else.

As a corollary to this - this is exactly why it's illegal for debt collectors to call your friends and family to "encourage" them, or you by humiliation, to pay up.


Do you see what a ridiculous no-win situation this is?

Option 1: Call the pentest firm he works for.

Option 2: Threaten legal action.

Option 3: This dude might still be walking around with god knows what shit he's pulled out of S3 buckets or lord knows what else was accessible with those AWS keys or what other keys were in other S3 buckets or

wow i'm having a small panic attack just trying to complete that sentence


Where is option "change the keys that were publicly accessible for who knows how long"?


This isn't a single key. It's, like, maybe all the keys? The bad stuff that happened here all happened in a single day, the day that the researcher disclosed the AWS creds for the first time, more than a month after the server he dumped them from was shut down.


I would act as if an unknown malicious party had all the keys at that point. It might not be true, but it might be true.


Oh I 1000% agree about that.


How about Option 4: calling him?


I don't know whether they tried, or if they didn't, why they didn't. Like I wrote below[1], I'm wondering what the rest of the story is. This isn't the first person to have submitted an RCE to Facebook, but it's the first person to get Facebook to go nuclear over a submission. Why?

[1]: https://news.ycombinator.com/item?id=10754627


Why does it matter if he deleted the credentials? As soon as anybody was able to get the credentials, you have to assume that others have been able to do the same.

And that means rotating all your credentials the very same day you learn about that happening. Why does it make a difference then if he still has stale credentials?


Surely a competent technology company would realize that using creds that were stored on a known-compromised server is bad, and change them immediately, right?
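And rotation is cheap. As a sketch (the user name and access-key ID below are placeholders, not anything from this incident), revoking and reissuing a compromised IAM access key is three AWS CLI calls:

```
# Sketch only: rotate an IAM user's access key after a suspected compromise.
# 1. Issue a fresh key pair for the user.
aws iam create-access-key --user-name example-service-user
# 2. Deploy the new key to services, then disable the compromised one.
aws iam update-access-key --user-name example-service-user \
    --access-key-id AKIAEXAMPLEKEYID --status Inactive
# 3. Once nothing breaks, delete the old key for good.
aws iam delete-access-key --user-name example-service-user \
    --access-key-id AKIAEXAMPLEKEYID
```

The disable-then-delete step exists precisely so you can roll back if something was still depending on the old key; none of this takes a month.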


Without having a position in this debate myself: I think that's not quite fair.

My understanding is this: they got a report that a server could be compromised and fixed that vulnerability. Unbeknownst to them, the reporter grabbed a huge amount of (remote! not on that server, btw) data to play with.

Later the reporter returns to Facebook and says 'Btw, I got all these valuable pieces of information and have those for quite a while'.

Only at that point can you panic and rotate keys, but now you notice that a third party had access to all these keys for a month already. What else did they get? Maybe the researcher sat in a posh coffee place and grabbed interesting Instagram credentials (using the certificate) or escalated this further, gaining even more access based on the exposed information so far.

In my world, Facebook/Instagram are basically completely owned and have to assume that this guy grabbed ~everything~. They probably need to hire (vs. doing a bug bounty) people to grab the same data from the same buckets to look for potential follow-up targets that were _not_ disclosed, but might've fallen to the same bug hunter.

Who's to say that the guy doesn't come around on New Year's Eve with Yet Another Disclosure based on the same 'attack'?

I hate the 'contact the employer' part, but I'd hate to be in FB's shoes far more. I can hate the company and feel for its CSO/IT staff in this aftermath at the same time.


That's fair, but it implies that at no point did FB ask themselves "what would someone who exploited this vulnerability have access to?" If they had, they would have realized they were completely owned before the researcher pointed it out to them and taken steps to fix it (changing keys, etc.) At that point the researcher would be the least of their worries, and they would have tried to figure out if anyone else had completely owned them.

However, since they were unable to figure out that the friendly researcher owned them until he told them we now know that FB itself doesn't know who has their data.


> Presumably he didn't email Facebook with a corporate email account

"At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes."

From Alex Stamos's writeup: https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


Wes has a footnote update:

> I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.


I definitely stand corrected on that point, if it's the case - then calling his employer becomes a reasonable action to take.


Pointing out that this doesn't meet the legal definition of slander or criminal harrassment doesn't mean sticking up for Facebook or defending its motivation.


I'm not sure whether I'm sticking up for Facebook. I'm not just lawyering this thread. I think, if I was in Alex's shoes, I might have done something similar. I'm very glad I didn't have to make that call myself, because, what a nightmare this is.


I think the solution here is to pay $100k+ for RCE exploits and explicitly forbid pivoting access after the first vulnerability is discovered. Facebook offered $2,500 for a security vulnerability that could do much greater damage. What kind of vulnerability is a "million-dollar bug" if not RCE? How would you possibly have a "million-dollar bug" that is a single-point-of-contact bug and how would you verify that Facebook is paying you fairly? They didn't seem to in this case.


Alex responds:

https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...

Critically:

At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

Alex's timeline seems like it matches what I wrote earlier:

https://news.ycombinator.com/edit?id=10754627


> I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.

From Wes's blog post.

I don't know anything about security or about the people involved. But I read your quote, and I read the one above.

Unless Stamos explicitly disagrees with Wes's timeline of events, "he has interacted with us using a synack.com email address" does not establish that Wes used that address in relation to the report before the phone call to the CEO.

Happy for more evidence to be presented to show the contrary.


Assuming that's true (and I personally don't believe Stamos would flagrantly fabricate a detailed story like this publicly), this is a game changer. It's fully reasonable to escalate to an employer if they seem to be affiliated with the security researcher's report.

Also worth noting that this is frequently done in the security industry - folks will often credit not only themselves but also the companies they work with and are associated with in a security report.


No, Alex just assumed. Why didn't he just ask Wes if he was doing this for Synack?


He "assumed" because the researcher signed up for the Facebook bounty program as an employee of Synack and used his Synack email to communicate with Facebook.

He wasn't guessing. He didn't look the guy up on LinkedIn.


> He didn't look the guy up on LinkedIn.

I don't really see how else you can interpret the defense "he has written blog posts that are used by Synack for marketing purposes".

And it's pointed out all over the thread, but no part of "the researcher signed up for the Facebook bounty program as an employee of Synack and used his Synack email to communicate with Facebook" is uncontested, nor is it supported by the text of Alex Stamos' response. You've just read in what you want to see.


> At this point, it was reasonable to believe that Wes was operating on behalf of Synack

> He "assumed" because the researcher signed up for the Facebook bounty program as an employee of Synack and used his Synack email to communicate with Facebook. He wasn't guessing. He didn't look the guy up on LinkedIn

This is a load of baloney / ass-covering by Alex - Facebook's bounty program explicitly deals with individuals only, not companies, and Alex knows this. From https://www.facebook.com/whitehat/

> We only pay individuals

edit: down-voters, please point out the faults in my logic.


From Wes' updated post:

> I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.


Why not ask Wes directly whether he was working on behalf of his company? It seems shady, at the least, to go straight to his employer.


I agree with what another guy said -- you should do the ethical thing and stay out of it. You're his buddy and have already rallied enough about how he's a Good Guy and was just confused. We get it.


> his account on our portal mentions Synack as his affiliation

Can someone clarify exactly what portal is referred to here? Is it something besides https://www.facebook.com/whitehat/report/ ?

If not, this is a totally bogus excuse.


My reading of the researcher's post is that Facebook requires you to use a personal account. I'm guessing he just had his employer listed on his fb account.


Seems that way, but I wanted to invite anyone who knows more to comment.

Alex's response uses extremely misleading language to justify the employer contact:

  - "account on our portal mentions Synack as his affiliation" amounts to nothing more than "he lists his job on his facebook profile"
  - "he has interacted with us using a synack.com email address" -- OP claims otherwise. Taking OP's word, the use of "has" perhaps prevents this from being an outright lie: yes, as of *right now* he "has" used a synack.com email. But did he before you reached out?
  - "he has written blog posts that are used by Synack for marketing purposes" -- ... and? What does that have to do with anything?
"He listed an employer on his fb profile" is literally their top justification for the supposed belief he was acting on their behalf. Yikes.


So if I'm reading this correctly, this massively compromising attack was made possible by doing a little research? e.g. Knowing about one of the admin services used by Instagram, looking in that admin's public repo, and musing whether Instagram had bothered to change the secret key from the default entry in the repo?

We'll probably never see a post mortem on this, but it'd be interesting to hear how this got moved to production: was the Sensu admin panel a convenient scaffold for internal use, and by the time it was made remotely accessible, did everyone just assume the secret key had been changed at some point?
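To make the failure mode concrete, here is a hypothetical minimal sketch (payloads and key names are made up) of why a secret key left at its repo default is fatal. Rails 3-era session cookies are signed roughly as `base64(payload)--HMAC-SHA1(payload, secret)`, so anyone who reads the default secret out of a public repo can mint cookies the server will accept:

```python
import base64
import hashlib
import hmac

def sign_cookie(payload: bytes, secret: str) -> str:
    """Sign a session payload the way Rails 3-era cookie stores did."""
    data = base64.b64encode(payload).decode()
    digest = hmac.new(secret.encode(), data.encode(), hashlib.sha1).hexdigest()
    return f"{data}--{digest}"

def verify_cookie(cookie: str, secret: str) -> bool:
    """Server-side check: recompute the HMAC and compare in constant time."""
    data, _, digest = cookie.rpartition("--")
    expected = hmac.new(secret.encode(), data.encode(), hashlib.sha1).hexdigest()
    return hmac.compare_digest(digest, expected)

# Stand-in for a default secret committed to a public repo.
DEFAULT_SECRET = "change-me"

# An attacker who read the default out of the repo can forge a valid cookie.
forged = sign_cookie(b"admin-session-payload", DEFAULT_SECRET)
print(verify_cookie(forged, DEFAULT_SECRET))    # → True: forgery accepted
print(verify_cookie(forged, "rotated-secret"))  # → False: rotation closes the hole
```

Since the cookie payload is deserialized server-side once the signature checks out, a forged cookie is what turns "I know your secret key" into remote code execution.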


I can tell you from experience working at another similar company that this is not surprising at all. Especially as startups transition into larger companies (with formal security controls and policies), a lot of things can get missed or forgotten. Your primary production servers may be completely up-to-date and secure, but somewhere along the way, there's a high chance that an engineer deployed an internal admin tool or a test build somewhere that ends up being public, but ultimately lost and forgotten. The problem is, that kind of "lost" infrastructure often contains keys, credentials, or network access to other more critical parts of the infrastructure, and no one realizes the severity of the mistake until it's too late.


Even if that is the case, why is it exposed to the public? They firewalled it off almost immediately, so I assume it didn't need to be...

Never expose anything that doesn't need to be exposed: SSH tunnels, OpenVPN... heck, use HTTP-authentication-wrapped SSL tunnels if you have to. Web servers tend to be more secure than webapps.

Mistakes like this are far too easy to make. So anything that isn't part of your business application needs to be tunneled with authentication or isolated completely.


The thing that gets to me is the lack of gratitude on Facebook's end. Instead, they turn him into the villain for breaking imaginary rules. What would have been the harm in slapping him on the wrist and giving him some sort of reward for exposing a huge vulnerability? Instead, they eat the reward and shit on the guy who produced it. Real classy FB.


Did you read the whole post? He got paid on the RCE.


Yeah, I did, and I realize he got paid out a little, but it was short of the $1 million.

I realize a million is a bit unrealistic, but if you're going to make a public statement, at least back it up or prove to the guy why his findings don't constitute a "million-dollar bug". It's not right to just cold-shoulder the guy and hide behind vague rules that were never clearly outlined. In fact, you might even conclude Facebook brought his behavior on themselves by making a statement like "if a million-dollar bug is found, we'll pay it out." $2500 is nothing when you're thinking $1,000,000.


Nobody is going to pay you a million dollars in 2015 for the 2013 Rails YAML bug in a stale server. Nobody is going to pay you a million dollars for a reliable Firefox RCE, and those take months to prove out and develop, and there's a liquid market for them.


But that's not going to stop Facebook from publicizing that they will. You're glossing over the details and attributing an air of "old news" to the bug. Well, yes / no. If someone devious had found such an ancient bug instead of him, they could have dumped all the private user photos. If that had happened, what do you think the financial implications might have been?


He got $2500 for that bug. I will venture a guess that that's the most any bug bounty program will pay for that Rails YAML bug in 2015.


How much do you suppose blackhats would pay for instagram's ssl keys, mobile app signing keys, push notification keys, etc?

Yeah, the researcher went deep into the grey area, but I find Alex Stamos's reaction barely short of unbelievable - it's almost as though he's so new to the internet he's never heard of the Streisand Effect... (Either that, or he's just so accustomed to bullying and intimidating people who might embarrass him that he's now got that corrupt politician "Waddaya mean I'm 'abusing my power'? We grant multimillion dollar contracts to old school buddies all the time? What's the problem?" look on his face.)


Not much. Probably much less than $2500.

A script to create new bogus accounts on Facebook is probably worth more than mass Facebook account compromise.

People really don't seem to understand how the "black market" works.


I was thinking more of the Zerodium/Gleg/BoozAllenHamilton class of buyers - who'd on-sell it to, say, the Egyptian or Thai Government, rather than run-of-the-mill carders or identity thieves.

(But yeah, I'm perfectly happy with my life where I have no real understanding of how the black market for this kind of thing works...)


How does that matter in any way? This was a series of fuck-ups. Facebook wouldn't pay $1M to anyone, ever, since it would encourage this kind of behaviour. It was the "zero-dollar bug that lead to the million-dollar fuck-up" though.


I think the point though, is that it's more than just a single old Rails YAML bug. The privilege escalation shouldn't have been there. Their infrastructure would still be vulnerable even without the initial exploit.


The black market may. Having what you need to replace an app installed on pretty much everyone's iPhone with arbitrary code is a pretty big deal.


But that 'minor' $2500 bug pivoted into a massive bug in how they handled credentials. THAT was worth a hell of a lot more than $2500.


This is like paying for "unlimited data" and the telco reducing your bandwidth to dial-up speeds after you download 1 GB.


Sort of an interesting conflict these bug bounties create. You have someone who wants to hack as deeply as possible to earn a bigger bounty based on the stated rules, but at the same time the company will invalidate your bounty if they arbitrarily determine you went too far?

I imagine the initial report by his friend that the server was accessible would not be a very high-paying bounty compared to one demonstrating access to the server. But how deep is too deep?


Right? If he left it at the RCE he would have gotten the $2,500 split between him and his friend... but he continued and was able to get access to all the S3 buckets which you would assume would warrant a much higher payout. Instead he got a huge amount of backlash.


Right, this feels like a way for Facebook to simply not pay out a bigger bounty after they realized how big an appropriate bounty would be.

If the author submitted the RCE, and nothing else: is someone at Facebook actually going in and trying to simulate what he actually did? Who knows, because the process is pretty opaque. If you argue with Facebook's assessment, and go and further exploit the system to say "no, this is actually how bad the RCE is, in the grand scheme", you've now actually gone and proved what can be done, against their guidelines, which potentially disqualifies your initial discovery altogether.


Exactly how I see it. People want a higher bounty, and are also curious whether more bugs lie deeper. But companies want them to stop at the first layer.

It seems too difficult to define how deep is too deep, especially since he at least reported what he did. He didn't go that deep, report only the RCE, and then collect $10 million from people far more interested in this.


Not only that, but dangling the $1 million bounty means they are encouraging the bounty hunters to try to make it larger. And ultimately it also leaves them in a position to find out how big it is (for whatever negotiations) and prove it to the company (in order to make an argument to its magnitude).


> With the RCE it was simple to read the configuration file to gain the credentials necessary for this database. I connected and dumped the contents of the users table.

This was his mistake. This is a huge no-no. You never dump data unless you have permission. It's against the terms of most bounty programs.


And if you look at the timeline, it looks like he got away with it the first time:

* Day 1: Report RCE

* Day 2: Report finding from dumped file

* Day 4: RCE's gone

* Day 8: Asked not to dump files using RCEs in the future

* Day 26: Paid out for the RCE.

* Day 40: Bug based on dump is rejected

* Day 41: Report new bug based on dump, which shouldn't have been accessible for over a month!

* Day 41+: All hell breaks loose.


But like he said in the article, he was unable to find a clear policy that gave him the "Stop, no further" point. It may have been a bad assumption to think Facebook was going with the Tumblr stance of "give us a thorough POC," but where should he have drawn the line in his hack and why here instead of where he did?


In the absence of a clear guideline, Researcher 101 should kick in: it was clearly the wrong thing to do.

An apparent refusal to admit that in the write up is making it hard to put 100% support behind him.

There is no excuse: dumping the user table was too far.

Facebook went rather far too, of course.


This wasn't the end-users table though, it was the admins table. What if there were a table called "security_keys" - would dumping that be disallowed?


Yes. As much or more so than an end user table. You can't dump data and use dumped data acquired from a legitimate vulnerability to continue to gain access to additional resources.


At the line right above the one I quoted:

> As described above, I used the web interface to gain code execution, but at this point I still hadn't actually gained access to the web interface as a normal user.

He had code execution, there was no need for him to go any further.


Getting the credentials is clearly enough to prove the point. Digging through user data is just celebrating.


Whereof one cannot speak, one should be silent. Dumping the user table is the literal next step in a standard vulnerability assessment (in order to acquire reused credentials), wasn't prohibited by the terms of FB's bug bounty program, and was crucial to the development of the bug.


No, that's the next step in an external penetration test, which is not the same thing as a vulnerability assessment.

In an external pentest, you get a set of netblocks and rules of engagement, and you get as far as you can. That's why it's called a "penetration test".

In a vulnerability assessment, you get a target (usually an application), and you find as many flaws in that target as you can.

Big annual pentests often have wide-open rules of engagement, where you (as a consultant) win big by, for instance, dumping the CEO's mail spool. But those projects also start with several meetings' worth of negotiating rules of engagement.

Vulnerability assessments virtually never have those rules of engagement!

Nobody that I know of runs a bug bounty program on pentest norms. To do so would be grossly irresponsible, because on every network with more than 1000 hosts I've ever tested, ever, RCE behind the firewall is gameover for the whole test: you can get everything.


You're HN's anointed expert, so I suppose all I can say is that's not my experience.

Among the many reasons bug bounties are bad ideas is that they generally fail to write clear rules -- as Facebook did. As written, what he did is not against the rules and while it may fall into some best-practices bucket you assert to be universal, that's hardly sufficient for a field in which participants can come from any background. But please, continue to defend your friend whose multi-billion company had a month to cycle their popped keys and failed to do so, then responded by threatening a researcher's employment after multiple conciliatory e-mails.


then responded by threatening a researcher's employment after multiple conciliatory e-mails.

That is NOT what happened. Look at the timeline again.

* He popped the server.

* He submitted the RCE.

* He submitted dumped file from the compromise as a finding.

* They fixed the RCE.

* They told him not to dump files.

* They paid out the RCE finding.

* A month later, they declined to pay out on the dumped file.

* In response, he submits a new finding, with AWS creds that he stored for more than a month after they shut down the server

* (Whatever else happens that day)

* Stamos calls Synack.


The "then" isn't temporally proximal. The quoted e-mails (unless you feel like asserting that they're fake, which I think is the next step in your arguments in this thread) demonstrate that he's trying to work within the unwritten rules of the program and asking for clarification in good faith. Then after that, rather than attempting any communication with his, Stamos threatens his employment.

I agree with you that something seems off, but you're happily giving all the charity to FB and none to this guy, which is your prerogative but hardly makes for good conversation.


Read the timeline again and then the post.

1. Second finding is declined.

2. New third finding, which includes AWS credentials that this person should not have had, is written and submitted.

3. Stamos calls Synack.

I believe the relative timing of these events is, in fact, established.

Now: stipulate that I'm right, even if you're not sure. Does your opinion of the story change?


Not really, no. Your "should not have had" is still presupposing a set of bug-bounty-hunter professional guidelines that don't actually exist unless they're specified in the program guidelines, and from a philosophical perspective the actual security vulnerability under discussion now is that their sec team is so lackluster that they can't or won't change out a credential set known to have been externally accessible (and, the critical point, accessible to anyone who could have found this not-particularly-obscure vuln, not just this researcher).


I agree. If there's no clear rule "all data stays in our network", dumping data is not an unreasonable move. I don't care whether some experts in their offices mull about what's alright to do in a pentest or when finding vulnerabilities for a bounty program - most people aren't experts in that sector, so better make it clear. The researcher is in the right here.


Not only is dumping data an unreasonable move, but it's one that will get you referred to prosecutors. That didn't happen here, but it just did happen somewhere else last week. Don't ever do that.


I wouldn't do that (I'd be scared to death about what would happen, even without reading this article). But I also don't find it an unreasonable move. Just make it clear - you dump data, we're going to sue you. Right now, the researcher is in the clear, even though what he did was incredibly stupid.

I don't understand why a company would ever say "you can snoop around in our stuff" without very clearly stating what they can do. You're leaving open a legal loophole where a blackhat can claim to be a whitehat.


He didn't dig through user data.

60 accounts on the admin console are not users, and he did not touch the buckets with actual user data.


How likely is it that this sort of a thing stopped being a technical item of discussion and turned into a political one by the security contacts at Facebook?

I'm always curious about what sort of internal pressures would lead people to take a well-reported bug that the author did not take malicious action on and blow it up to the point that the CSO is getting involved.


The only way I can see this happening would be finger-pointing and finding others to blame. Eventually, the problem starts with a few people, then becomes an inter-team issue. Then higher-ups start to get involved.


Not only did this person make several large and irresponsible mistakes in the process of uncovering and reporting the bug (dumping tons of private user information without permission, going far beyond simply discovering and reporting the bug, etc.), but they also keep referring to Ruby ("running Ruby 3.x, which is susceptible to code execution via the Ruby session cookie") as the vulnerable piece, when in reality, it's the version of Rails that had the vulnerability.


Well, that's the point. An inexperienced person with half an hour on Google got full access to Instagram's systems.

And the bug had existed for two years.

Wonder where the person who tipped him off got the info from – it could very well have been a common target in the black hat scene.


@secalex I believe that the researcher clearly fulfilled the primary objective of bug bounty programs by exposing a weakness which you, in spite of having large and competent teams, weren't aware of and had not sealed yet. And he did nothing to use that information with malicious intent.

Your actions are detrimental to your relations with such well-mannered external security researchers who are helping you keep your infrastructure safe from the bad guys. You should have been a little more sensitive and a lot more generous than you have been.


Wow what happened to Instagram?

Facebook really needs to go the way of myspace if they keep this sort of behavior up.

How can a CSO at Facebook legitimately tell a CEO of another organization that a vulnerability of "little value" was found when the researcher has your signing certs? Does he lack relevant info, or is he just incompetent?

This is tantamount to mafia tactics. Hint, hint, we're facebook so get your people in line or else.


If companies are going to keep trying to get out of paying bounties for insane vulnerabilities like this, white hat researchers will just move onto something else, leaving the bounties to be paid out by the black market. Bounties aside, contacting his employer is a disgusting move.


The fact that Alex Stamos from Facebook contacted this researcher's employer, talking about potential lawsuits to threaten the researcher via a proxy, is probably the single most damning thing in the entire article.

That to me is entirely unacceptable, if you want to threaten someone then have your legal team send them a cease and desist. Don't go after their livelihood.


I'm gutted because of that; I can't believe the FB CSO contacted his employer. It's such a disrespectful thing to do. Another reason to hate Facebook.


This is as clear-cut a case as they come: a full exploit with escalation of privilege all the way to full read access to services' source code, SSL private keys, full admin AWS credentials, API keys for services from Twitter to analytics, email server logins, the list goes on... all of this without even looking at a single user profile or violating user privacy, and it's not a legit security bug? This has to be worth more than $2500, and I think Facebook sets a bad precedent where folks won't disclose big security issues because of how unclear the ToS are, so that they can avoid embarrassment.


October 22nd: Weak passwords found and reported. Also grabbed the AWS keys from the config file.

October 24th: Server no longer reachable. Tested the keys and they still worked; assumed to have gone on a download spree.

Seems like this is the biggest issue with how Facebook handled this case. No one looked to see what Wes accessed when he logged in with the weak credentials? No one realized he could have accessed the AWS key?

To treat what Wes found as a minor bug and then fuck up like that is sort of hilarious.


Ridiculous.

This is why many security professionals become disillusioned with bounty programs. This story is not uncommon at all.

Bounty programs, while presenting a tempting incentive to practice one's skills, are a very poor income strategy.

You are essentially working, unpaid, for organizations who are just as likely to ignore you (or report you to law enforcement) as they are to pay you for your findings.

No wonder so many young, talented security pros are more easily tempted to trade their findings in the safety of a crypto transaction with an anonymous buyer than to submit them through official channels.


Wait a sec.

Look at his timeline again.

He tested the AWS creds in October.

They shut the server off on October 24.

He reported the AWS creds in December.

Did he tell them about the AWS creds before then? His mails don't say that he did.

If he didn't, why didn't he?


Exactly. This is extremely shady behavior. I'm sure that if he had (a) reported the S3 creds as soon as they were discovered, and (b) not started randomly downloading everything accessible onto his personal device, this would have turned out a lot differently.


My two cents.

It seems that people defending Facebook's behaviour in this thread have collectively lost sight of what the point of a bug bounty is to begin with - to encourage people to report issues, rather than sell them.

We now have people arguing that "it is not acceptable to pivot beyond the initial intrusion for a bug bounty", even though a malicious attacker would have done the exact same thing. As long as standard no-damage rules are followed, where's the problem?

The bug bounty program is working exactly as intended, but the researcher is getting dinged over arbitrary rules. As somebody else here mentioned already: the reason blackhat work still pays is that such arbitrary and bureaucratic rules do not exist there.

We should not forget that bug bounties are a tool, not a goal - the goal is to convince researchers to report rather than sell, and every part of a bug bounty and its rules must be designed accordingly.

Also: Why the hell were those AWS credentials not revoked immediately after compromise? This constitutes a grossly negligent failure on Facebook's part to assess impact, on top of their existing failure to have the "keys to the kingdom" on a single server to begin with.

And frankly, that failure only reinforces the need for the researcher to pivot into further systems rather than just keeping it to a PoC, because evidently nobody at Facebook is going to assess impact if the researcher doesn't do it himself.


This is an excellent point, but there's a good answer to it.

The purpose of a bug bounty is not to encourage a particular individual to report an issue rather than sell it. The purpose is to encourage more people to get into the business of finding and reporting bugs before the people who are in the business of selling bugs to criminals find them and sell them. If, in the process, some black hat researcher also decides to report some particular bug rather than facilitate a crime, so much the better - but you can't rely on that, and you shouldn't design a bug bounty program around it.

In other words, you're not competing with the black market. Instead, you're paying to improve your security, and accordingly, you want to get the most bang for your buck. Finding previously-unknown entry points is high-value. Finding internal pivots is extremely low-value because they are ubiquitous, and your infrastructure is already designed around the assumption that they are ubiquitous.

Which isn't to say that you aren't interested in finding the internal vulnerabilities and eliminating them. You are. Which is why you conduct penetration tests. But pen tests are big deals, with rules of engagement around them. You deliberately give the testers elevated internal access so they can test under the assumption that there may be an entry point you don't know about. You establish ongoing communication between the testers and the clients, especially at any potential pivot or escalation point prior to proceeding. You don't run a pen test by opening it up to anyone who wants to give it a whack and hoping that they'll tell you about it afterwards (i.e. a bug bounty program). That's an insanely high-risk, low-value way to discover your internal vulnerabilities.


It's clear to me, after reading between the lines of both sides of the story, that the Instagram/FB sec team screwed up by not acknowledging the severity of the bug and paying the researcher accordingly.

Why get mad about a "low level bug"... I mean, if you can dump private user pics from a photo sharing app, how is this low level, really?

It's also pretty clear that the researcher shouldn't have dumped data, although most likely he reserved this hidden card for later since he was expecting the lowball... but there are smarter ways to respond to lowballing.

IMO poorly managed on both parts.


An interesting decision on Alex's part to only pay the $2500 for the RCE bug.

On one hand, this signals to anyone else who might want to disclose security issues that Facebook bounties don't pay out anywhere near proportionally to the full potential damage impact of the issue.

On the other hand, if they pay out a lot more now, they're signalling that if you find a vulnerability, you need to dig deeper in order to have insurance in case Facebook gets stingy.

Probably the best outcome would have been to pay out a more proportional bounty, even though Wes' exploration was beyond what's generally acceptable, so that Facebook's bounty program reputation is preserved.

That or press criminal charges to discourage any other researchers from going over the line.


It's not the main point of the post, which is Facebook's response to the researcher, but I'm really surprised that they're storing unencrypted secret keys and source code on S3. They trust Amazon a lot and have no fear that somebody could eavesdrop on Amazon's servers (if I were a black hat, I'd go for the accounts of the big guys, not for that of a random guy).

http://www.exfiltrated.com/research-Instagram-RCE.php#One_Ke...

I wonder what any claim of protecting user's privacy is worth when they leave their credentials unprotected in that way.

https://www.instagram.com/about/legal/privacy/

"We use commercially reasonable safeguards to help keep the information collected through the Service secure [...]"

Oops.

I can imagine why they didn't appreciate the efforts of the researcher. Hopefully they'll change their current practices.


The initial bug in Ruby/Rails is striking in its stupidity.[1] You can send something to Ruby/Rails in a session cookie which, when unmarshalled, stores into any named global variable in the namespace of the responding program. It's not a buffer overflow or a bug like that. It's deliberately designed to work that way. It's like doing "eval" on untrusted input. This was on YC years ago.[2] Why was anything so idiotic ever put in Ruby at all?

Something like this makes you suspect a deliberate backdoor. Can the person who put this into Ruby/Rails be identified?

[1] http://robertheaton.com/2013/07/22/how-to-hack-a-rails-app-u... [2] https://news.ycombinator.com/item?id=6110386


I think you're overextrapolating here, though I admit my knowledge on this isn't totally up to date.

As I understand it, Ruby's Marshal function, which takes text data and deserializes it, is not safe by default. So, is that a flaw of Ruby? I guess...except that this kind of serialization seems to be a standard feature in languages (well, Ruby and Python, the two things I currently use):

https://docs.python.org/3/library/pickle.html

> Warning The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

So the true bug seems to be that in Rails ActiveSupport (in a deprecated class, which uses some of Ruby's fun meta magic to deal with missing methods -- so basically, the classic obfuscation of functionality as a tradeoff for some sugary magic, all in a deprecated function that likely no one revisits), you can trigger a set of functions and routines in which the final decoding step, for whatever reason, ends up invoking Ruby's Marshal (via Rack: http://www.rubydoc.info/github/rack/rack/Rack/Session/Cookie...)
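To make the Marshal/pickle comparison concrete, here is a hypothetical, deliberately harmless Python sketch of this class of bug (the callable and payload are invented for illustration, not taken from the Instagram exploit): the serialized bytes themselves can name a callable for the loader to invoke.

```python
import pickle

# Hypothetical, harmless demo (NOT the actual Instagram payload):
# like Ruby's Marshal, pickle lets the serialized byte stream name a
# callable for the loader to invoke, so "load data" silently becomes
# "run attacker-chosen code".
class Evil:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call eval('6 * 7')".
        # A real attacker would reference something like os.system here.
        return (eval, ("6 * 7",))

payload = pickle.dumps(Evil())  # what an attacker would put in a cookie
result = pickle.loads(payload)  # merely *loading* it runs the call
assert result == 42             # eval() executed during deserialization
```

The fix is the same in both ecosystems: authenticate data before deserializing it, and prefer data-only formats like JSON for anything attacker-influenced.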


Also, only the server is allowed to put things into the session cookie, which is enforced by checking the cookie's signature which is generated from a key that only the server is supposed to know. Using a "native object" serializer (like Marshal or pickle) for session data and storing the secret token in a file that is easy to accidentally check into source control are both stupid things to do, but they're also common mistakes and you have to do both at the same time for this attack to work, so it seems quite overboard to suggest it was done deliberately.
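For illustration, a much-simplified Python sketch of that signing scheme (not the actual Rails implementation; the cookie format and key are invented here): whoever holds the secret can mint cookies the server will accept as its own.

```python
import base64
import hashlib
import hmac

# Simplified sketch of a Rails-style signed cookie (not the real
# implementation; format and key are invented for illustration).
# The server HMACs the serialized session with its secret token, so
# anyone who learns that secret (e.g. a default left in a public repo)
# can forge cookies the server will trust.
def sign(data: bytes, secret: bytes) -> str:
    sig = hmac.new(secret, data, hashlib.sha1).hexdigest()
    return base64.b64encode(data).decode() + "--" + sig

def verify(cookie: str, secret: bytes):
    encoded, _, sig = cookie.rpartition("--")
    data = base64.b64decode(encoded)
    expected = hmac.new(secret, data, hashlib.sha1).hexdigest()
    # Constant-time compare; return the payload only if the signature holds.
    return data if hmac.compare_digest(sig, expected) else None

secret = b"default-secret-from-public-repo"   # hypothetical leaked key
forged = sign(b"marshalled session payload", secret)
assert verify(forged, secret) == b"marshalled session payload"
assert verify(forged, b"some-other-secret") is None
```

With the signature check intact, the only barrier to forging a session is knowledge of the secret, which is why a default secret token left in a public repo is what turned the deserialization behavior into remote code execution.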


Completely right. If the secret server token is compromised, it is presumed that you can fake any data. Should that allow for RCE? That's where Ruby steps in and provides the double whammy.


Marshalling bugs in other languages and frameworks:

- Java: WebSphere, WebLogic, JBoss, Jenkins : http://foxglovesecurity.com/2015/11/06/what-do-weblogic-webs... . Admittedly most of these are through sidechannels and nothing as obvious as sessions, but it's the same mistake.

- Python: https://blog.nelhage.com/2011/03/exploiting-pickle/ . Unpickling got at least Cisco Web Security Appliances: http://tools.cisco.com/security/center/content/CiscoSecurity...

- PHP : Of course, it's PHP and of course, it's WordPress . https://vagosec.org/2013/09/wordpress-php-object-injection/

It's hard to attribute malice to an obvious mistake that everyone makes.


This might be a useful attack vector against ad servers and trackers. Those use complex cookies. The next step in the ad blocker war may be taking over ad servers.


Cookies injecting data into the global namespace? Hmm, sounds familiar... http://php.net/manual/en/security.globals.php


Posting this write-up might be the last thing the researcher should have done, from a criminal liability perspective. First, the negative press might serve to piss off Facebook (who could have some perspective we are not privy to here). From Facebook's angle, the criminal aspect here may be a much closer issue, and this write-up could serve as the tipping point. Second, as a party admission, this post could very well be admissible against the researcher at trial. Without a doubt, it can be used to contradict any testimony he might provide in defense of his actions here. (So, you HAD read the ToS, correct?) Even without Facebook's "pressing charges", a US Attorney with political aspirations might just decide she has enough here to move forward against the researcher in an effort to appear "tough on cybercrime". This whitehat stuff is murky territory for sure.


I can't see Facebook ever pursuing the criminal angle in this situation. I actually wonder if Alex's boss isn't a little unhappy with his response because it will make people think twice about their bug bounty (just look at the backlash here). The bug bounty was put out there so that people don't use or sell exploits as blackhats.


Facebook doesn't have to "pursue" criminal charges, however. It's the Government that brings criminal charges. In this case, Alex would just be a witness (willing or otherwise) the Government used to produce evidence of the researcher's crime. There is a mistaken understanding that if the "victim" of a crime doesn't "press charges", then there is no criminal liability. However, the "victim" is really only a witness to the actual crime in the eyes of the law. Here, the researcher has arguably confessed to a number of computer crimes, and if a DA/USAO or the DOJ were interested in making a statement, they might have enough evidence to indict the researcher on the strength of this post alone. Facebook, while perhaps not interested in "pressing charges", would have to comply with a criminal investigation here.


I don't see how the CSO's response makes sense for Facebook's security interests. As CSO, it is in your interest to allow a researcher to exploit an RCE to its furthest extent. Otherwise, you only ever let researchers test your outermost layer of protection, leaving every inner layer untested and thus less secure.

If indeed only credentials and technical information were obtained, all aimed at finding more security issues, Facebook should be thankful to have had vulnerabilities found across all their security layers.


If accurate (which it seems to be), a very disappointing handling by Facebook.


Either way, it's awesome for the world. This kind of attack gives people one more reason not to trust Facebook, WhatsApp, Instagram, etc. It'd only have been better if someone malicious had done it and made some data public (perhaps slightly redacted).

In particular, it might help with Signal vs WhatsApp.


When reading the author's article, it would certainly be easy to grab the pitchforks. It is actually a pretty interesting/useful vulnerability that some low-level AWS keys could be escalated to some highly privileged keys, and that none of these keys were IP-whitelisted.

However, the biggest issue I see here is that the author (in their own timeline at the bottom of the post) says that they discovered the AWS keys on October 24, yet they did not report this to Facebook until December 1 (in the meantime, they were having various discussions with Facebook about whether their other submissions were valid). That is seriously concerning behavior: if you come across some live AWS keys, this should be reported immediately; you should absolutely not sit on them for over a month as if they are some sort of bargaining chip.


If accurate, seems like a pretty counterproductive way to handle this.


[deleted]


Take their lumps, fix their shit, and pay the bounty?


Precisely - at the very least, taking care before bandying around legal action or calling someone's boss. Even without context, its a pretty weird sequence of events.


Pay the bounty and communicate with him directly to get the information deleted.


Alex Stamos (Facebook CSO) just posted an official response:

https://news.ycombinator.com/item?id=10755060



Why call the CEO and not his Mom?


On the one hand I got a little squicked in the story when he started cracking passwords, but on the other hand I kind of assumed that bug bounty systems would want the tester to find out how deep the bug goes. Otherwise the depth of your security isn't being tested.
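For anyone unfamiliar, the "cracking" here is just offline guessing: hash each wordlist entry and compare against the dump. A toy sketch with made-up hashes and words (real password stores should use salted bcrypt/scrypt/argon2, which exist precisely to slow this loop down):

```python
import hashlib

# Pretend dump of unsalted SHA-256 password hashes (illustrative only).
dumped = {
    hashlib.sha256(b"changeme").hexdigest(),
    hashlib.sha256(b"password1").hexdigest(),
}

# Tiny wordlist; real attacks run millions of candidates per second.
wordlist = ["letmein", "instagram", "changeme", "password1"]

# Hash each candidate and keep the ones that appear in the dump.
cracked = {
    word: h
    for word in wordlist
    if (h := hashlib.sha256(word.encode()).hexdigest()) in dumped
}

print(sorted(cracked))  # weak passwords fall immediately
```

The point being: once a dump is in hand, no further access to the target is needed, so whether this step was in scope is exactly the kind of thing the policy should spell out.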


The lessons I learned here are: 1) any RCE vulnerability in Instagram leads to unrestricted access to user data, and Facebook knows it but does nothing about it; 2) Facebook will not pay you your bug bounty reward, but will complain to your employer.


I really don't want to imagine how badly things would have gone if he hadn't been part of the bug bounty and instead had malicious intent.


Looks like the site's down. Mirror/Google cached page: http://webcache.googleusercontent.com/search?q=cache:vR9o3UY...


It's really simple: this is the beginning of the end of Facebook, what with their fake clicks on ads and whatnot.


imo Facebook should be grateful for people like this instead of burning them


Indeed. I can somewhat understand the fearful reaction, but ultimately it hurts the company's rep.


I'd like to see a service where a company's source code/database/confidential info is placed in escrow pending the payout from a bug bounty. Or, perhaps more likely, some sort of 3rd-party arbitration.


Good luck finding an escrow you could not only trust, but that would be willing to take the heat for that one.

To be a trustworthy escrow, you must have a good reputation or track-record.

There are almost no anonymous escrows that could provide a service trustworthy enough to handle this. And going the non-anonymous route would be near impossible; Facebook would litigate an entire country over this.


That's a lot of posturing on both sides. FB had some severe vulnerabilities that the author certainly pointed out, and the author could have read the bucket contents without downloading them. FB clammed up; the author overreached. Neither side really wins anything here. 'Tis a shame.


Nerd owns FB and wants to rub it in their face. FB power-plays nerd. Nerd publicly pwns FB in retaliation.


The CSO slaps a legal threat on a security researcher and talks about ETHICS? Good job man, gooooooooooooooooooooooooooooooooood job.


Bad form on Mr. Stamos' part.

edit: if it's indeed true, but I have my doubts that's the case. Hard to say either way.


I thought their stack was django?


> Ruby 3.x

Rails 3.x


Is it normal for security researchers to use Windows for their OS?


Not good ones!


Once again we see how people act like hard-asses in the face of a gaping vulnerability in their system. Be it a legal system, a computer system, or a moral system, you will see denial and intimidation.

We should have a "pastebin hat" list, and Facebook should definitely be on it.

The problem with humans is that they would rather go extinct over such things than behave properly. You could try to teach us by painful example, but death will probably come first.


"As a researcher on the Facebook program, the expectation is that you report a vulnerability as soon as you find it. We discourage escalating or trying to escalate access as doing so might make your report ineligible for a bounty. Our team assesses the severity of the reported vulnerability and we typically pay based on its potential use rather than rely on what's been demonstrated by the researcher."

Well, FB feels your bug bounty is worth $200? Strike that figure. We feel like your bug bounty is worth a $100 advertising credit, if you buy $100 in advertising? Next time just report the bug. Thanks!

(I don't know if it's my innate dislike of FB, or that I feel it shouldn't be up to a company to decide what a bug is worth. If you are going to have a bug program, put in some very solid rules. They shouldn't be winging it at this point; it's not some cute little startup, it's a huge machine making a fortune off its victims.

I'm still not sure if FB really cared about this hacker's escalation of a potential attack, or if it's about money. Would I want a hacker to show me my vulnerability using my clients' information? No, but then make that crystal clear in the TOS.)


Am I the only one mildly annoyed that the author constantly conflated Rails and Ruby?


Nope, I was too. An interesting illustration (assuming it wasn't just a typo) that exploiting vulnerabilities doesn't necessarily require a deep understanding of the tech stack in question.


And on the flip side, deep understanding of the technology stack in question doesn't necessarily lead to implementing it securely. This is division of labor at work.


In general, if you have a green handle, you shouldn't be commenting on things like this. Otherwise we'll have sock puppets galore muddying the waters.


No, because knowledgeable people sometimes make accounts to comment on what they know about. Yes there are fakes too, but HN's philosophy is that it's better to deal with bad things (annoying as that is) than exclude good things.

We detached this subthread from https://news.ycombinator.com/item?id=10755545 and marked it off-topic.


What does a green handle indicate by the way? I checked the FAQ and there's nothing there.


green handle

New account, IIRC less than 2 weeks old. The name is colored green. But I've seen it be inconsistent, where some posts are green and others aren't, all in the same thread.


It's more complex than just creation date. Somewhere in there it involves votes cast on your posts, which is why you might see someone's name switch colors from one post to another in the same thread (the system doesn't go back and switch name colors on previously-created posts). IIRC the exact mechanism isn't public.


No, it's simpler than that—just a function of account age.


New account.


uh, says who? Do you work for Y Combinator? Is that in the rules somewhere?


mmmm if you only knew


[flagged]


Please don't do this here.


Can you please define "this"?


It was uncivil and unsubstantive. If you want to comment here you need to do a lot better than that, which I'm sure you can:

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html

We've detached this subthread from https://news.ycombinator.com/item?id=10755067 and marked it off-topic.



