Hacker News
Instagram's Million Dollar Bug (2015) (archive.org)
201 points by tslocum on Dec 14, 2020 | hide | past | favorite | 93 comments



Previous discussion (5 years ago): https://news.ycombinator.com/item?id=10754194


Alex's response is no longer available. (His link to his Facebook post is dead.) Anyone have a mirror?


Here is a cached version on Wayback Machine: https://web.archive.org/web/20161218181922/https://www.faceb...


(my comment is on the overall trend, as the specifics on this incident are complex)

The issue with bug bounties as a whole is that the market is skewed. For any work done by a bounty hunter, there is exactly one legitimate buyer, who gets to make a significant judgement call on the value of the work done. Furthermore, this value is decided upon after the work has been completed and provided to the company. In what other industries is this the case?

On the other side, triagers have a whole pile of crap to wade through to get to the useful material.

Furthermore, it really is hard to place an accurate monetary value on a bug that's responsibly reported, and patched. This is in part due to unclear monetary results from being breached. What precisely is the monetary loss from the recent MS Teams bug that was reported but not exploited vs the incidents this year at Twitter and SolarWinds?

Having had some involvement in the bug bounty arena as a reporter, I have to say I'm a big fan of those companies that open up all of their reports after a fixed period of time. This allows them to build trust with those who look into their products, and to develop a reputation for being prompt and consistent.


> Furthermore, this value is decided upon after the work has been completed, and has been provided to the company. In what other industries is this the case?

Those "mail us your gold" ads on TV.


Hospitals do this, but backwards. The service provider gets to set the price.


Basically only in America, though. Elsewhere the situation where someone gets into a car wreck, goes to hospital and then gets told how broke they are now just doesn't exist.

In much of the rest of the world, the health system sets and publishes the rates - and guarantees payment to service providers. The doctors perform the services, and then submit for renumeration directly to the health system. The patients, well, they're sick and don't have to worry about any of it.


Side note, just fyi, because I used to make the exact same (tiny) mistake all the time. It's spelt remuneration rather than renumeration.

I think it helps to think of the `muner` part as being derived from the same root word as money rather than `numer` as number (which had been my previous assumption I guess).

You just inspired me to actually google my memory technique above, and it turns out the `mun` is from a Latin root meaning gift (think munificent)[0].

So now I can think of a munificent monetary remuneration, and should remember it!

[0] https://www.merriam-webster.com/dictionary/remuneration#:~:t...


Thanks! I think I’d have been typing it wrong for the rest of my life had you not pointed this out haha.


Most developing countries are moving to the US model - private hospitals in India, China and the Middle East, for instance.

What baffles me is how expensive even government hospitals are in the US.


It's not about whether the hospitals are private or not, it's whether you know the price beforehand and can make an informed choice - the most basic thing about the free market.


Funny how everybody began to downvote me after totally misunderstanding my point in muddled fashion.

I wasn't talking about hospitals going private - after all, there are private hospitals in Europe and the UK too. I was talking about poor price transparency in the US being adopted in all of those places by private players. I specifically called out government players in the US, since they engage in the same practice, while government hospitals in all of those countries do not.


Pretty sure they make you an offer on the gold that you can decline. Probably still predatory, but I think you are mistaken.


> triagers have a whole pile of crap to wade through, to get to the useful material.

This is very true.

> The issues with bug bounties as a whole is the market is skewed. For any work done by a bug bountier, there is exactly one legitimate buyer who gets to make a significant judgement call on the value of the work done.

The problem, in my experience, is that they never analyze it by its potential. Why would they, they have the details now and usually your legal details so if it leaks they'll have you busted in a heartbeat and sued for contract violation.

> Furthermore, it really is hard to place an accurate monetary value on a bug that's responsibly reported

I submit that, from my experience threat modelling, this is actually dead simple, but nobody feels the need to do it.

> What precisely is the monetary loss from ...

As you point out, the issue is that there's a single buyer. You really need to open up the bidding. If you trusted a Russian mob to pay residuals (and they probably would) you might be able to sell this for what ended up being $50M+, and the criminals could clear billions if done right. Then the next time something like this came up you'd have more bargaining power. If the company was still there...

Thomas is right that there isn't specifically a market like flippa for exploits but there are dark markets and many of the vendors would be open to a chat. I'm not rooting for this, I'm just not blind and it will happen. (Well, if it's Twitter I'm rooting a little...)


IMHO it’s only a matter of time until someone blows up a unicorn just for the thrill of it. That’s not something I’d support, but I won’t feel bad for companies that don’t pay adequate bug bounties.


You mean like cracking the most lucrative accounts on Twitter and then stealing Bitcoin? https://www.wired.com/story/inside-twitter-hack-election-pla...


As of right now, what is the lasting damage done to Twitter by that attack? My argument is that it honestly wasn't that much, and thus bugs capable of that amount of damage aren't valued that much either.


There is plenty of price competition for your bug disclosure: the Chinese, the Israelis, the Saudis, the Americans, OR directly to Apple. :-)


Yeah let me just call up Saudi intelligence real quick, what's the name in the yellow pages?


Just mention MBS on a WhatsApp chat to a Saudi journalist, they find you.


He said legitimate.


God, this is frustrating. They essentially cracked Instagram's entire production environment open, and took explicit steps at every turn to stay within the published guidelines, and then they just take his report with zero compensation whatsoever. Insane.


I wouldn't really blame the guy if he decides to sell the next one on the darknet.


According to free market economics, this is exactly what should happen. Security researchers sell their exploits on the dark web until bug bounties rise to match or exceed what the dark web will pay.

It's crazy that they can find a bug that would cost Instagram 1M+, yet payouts are in the thousands or maybe tens of thousands if you're super lucky.

I'm curious whether it's illegal to sell exploits. Using them is obviously illegal, but is the transfer of knowledge for money illegal? I.e., I'm not allowed to build an M16, but presumably I could buy the schematics for one if I wanted (I've never tried, but I can't imagine possession of them is illegal, since they make posters of them and whatnot).


That seems to depend on some very specific and unusual definition of "free market economics." Usually there is some unspecified assumption of rights (particularly property rights), and actions which violate those rights are not considered to be "free market" interactions. As an obvious example, if you creep around neighborhoods looking for people with valuable property that isn't well-secured against theft, then offer to sell a homeowner the information about the security problems you've discovered, and then sell that information to professional thieves if the homeowner declines, I don't think that would be "exactly what should happen according to free market economics."


I honestly think this is what free market economics will get us, due to the high barriers to selling on the black market (ethically, legally, and logistically). The bug bounty targets with high payouts from the company line up roughly with the ones with high payouts on Zerodium etc.

As I stated elsewhere in the thread, I'm not honestly convinced the fallout from a company being breached is that high, which leads to the current pricing for bug bounties. Twitter stock is massively up from when their incident happened in July. We'll see what happens with SolarWinds.


> I'm not honestly convinced the fallout from a company being breached is that high

The market clearly doesn't care, and so neither do executives. What needs to happen is a household company gets exploited/hacked/pwned/whatever so hard that their entire business collapses, maybe not entirely but significantly. Then the market will price these breaches very differently.


As someone who's not in the infosec/cyber industry, what exactly is Zerodium and why is it generally considered a suboptimal buyer?


Zerodium is one of several companies that buys exploits and sells them to governments. This route supposedly pays more than public bug bounties, but with different secrecy etc requirements.


Yup. Then you get things like the Twitter hack a few months ago where a bunch of celebrities were tweeting a crypto scam. I'd bet that wouldn't have happened if bug bounties were paid and reliable enough.


Technically he used the first bug to enter their systems and then escalated access through other security holes or bugs.

That's not likely to be accepted by default by most companies. I would assume a default of "do not escalate access" unless explicitly asked for.


While I can see why that's the case: if a surface breach is patched, any other flaws behind it won't be accessible to an attacker.

On the other hand, software is built in layers. If there's an "inside" breach, i.e. I can get from an inner layer to a deeper layer, I would want to know about it.

Facebook were idiots to structure their policy this way.



But he didn't. He only gained access because the admins used weak passwords.


Ah nice. Facebook resorts to intimidating bug bounty participants acting in good faith by threatening them through their employer instead of talking.

Can't say I'm surprised, given the level of ethics Facebook exhibits at every conceivable level.


Disclaimer: I was a Security Engineer on the FB Security Team until last month and was also involved in the Bug Bounty Program :-)

That's not how Facebook treats Bug Bounty Participants. By far, it's one of the better programs in terms of payouts, fairness, and triage time on critical issues.

Just a recent example: a bug bounty hunter reported unexpired CDN links. After internal research, FB figured out how to chain this into a Remote Code Execution and paid out 80k USD to the researcher. (https://www.facebook.com/BugBounty/posts/approaching-the-10t...)

That said, I wasn't there in 2015, so I only know the story second-hand (and those accounts portray it a tad differently). Even if it were true, I haven't seen such treatment in the last three years at FB.


Forgive us (non-Facebook engineers) if we don't take your (a single rank-and-file engineer's) anecdotal experience as official company policy when there's a publicly documented case of the head of the department doing otherwise.


Based on FB's official rebuttal, he had mentioned his company affiliation on the bug bounty portal account and had used a company email address for the communications. To me, this indicates that he was acting in an official company capacity.

Further, they didn't reach out to the CEO of the company until after he'd exfil'd data from the IG S3 bucket outside the scope of the bug report to try and leverage a bigger payout.

I have no reason to doubt any of that.

There's a lot of negatives about working at Facebook, but a lack of professionalism is not one of them.


I think one of FB's greatest achievements is convincing their employees that their jobs are actually good for society, or at least neutral. Plenty of good people work there who seem honestly confused about how their jobs lead to so much corruption and the downfall of our society.


Upton Sinclair got this right almost 100 years ago: “It is difficult to get a man to understand something, when his [RSUs depend] on his not understanding it.”

Of course, I also work at a FAANG, so people in glass houses and all that...


Their culture of continuous (and I do mean continuous) performance review ensures they're always focused on not losing their jobs. If you know someone who works there, ask 'em.


Fool me once, shame on you.


This was discussed at length when it was first submitted here 5 years ago. The researcher found a (known) exploit, claimed $2500, then a month later used internal details he gathered (and saved) from the first exploit to breach the system further to demand a bigger payout.


They didn't change the credentials that had been hacked? God I wish the hacker had sold the vuln to North Korea.


Real life is not like the movies, in which a floppy disk of info is exchanged for a suitcase of money in a dark alley or a boardroom. Blackhats typically find that there is little market for their info, especially before bitcoin became popular. Yeah, you cracked a bunch of selfie pics. What can you do with it? Not much.


He had signing keys for the Instagram app and the *.instagram.com keypair. Do you think that's not valuable and dangerous?


sorry, can you explain what the signing keys and keypair would allow someone to do?


It would allow you to make an app that steals all your info and release it as if it were the latest app from Instagram.


Wouldn't you also need login info (prob including 2fa) to an Apple developer account?


If the *.instagram.com keypair is the TLS certificate keypair, then they could MITM Instagram. They'd probably need to physically stalk some Instagram employees, but getting the TLS certificate keypair would normally be the difficult part.

On a related note, what do MS Windows/OSX/Android/iOS/Linux do when they see a WiFi AP with an SSID (and maybe even MAC) they recognize, with a WPA2 key they know, operating without encryption? Will they still auto-connect in the clear? In other words, if an attacker cloned the SSID of someone's work/home network, with a strong enough signal, could they trick devices into auto-connecting to an unencrypted AP?


People do this with public WiFi - for example, set up at Starbucks with a duplicated SSID, wait for target to connect and route it through as if it were connected to the real Starbucks WiFi, all the while monitoring in the middle.


And what do they do with it? Set up a global worldwide network of agents extorting money from people sending dick pics over Instagram?


> Yeah, you cracked a bunch of selfie pics. What can you do with it?

FTA: "specifically I gained access to a lot of data including SSL certs, source code, photos, etc"

> Blackhats typically find that there is little market for their info, especially before bitcoin became popular.

And now?


Bug bounty programs pay you for the severity of the exploit, not the potential damage you could do with it. The researcher found an unpatched server with a known Ruby RCE and cracked a weak password. Whether he found the server empty or containing nuclear codes isn't what determines the payout.

Storing user data and private keys on your computer after reporting the hack and using them again to access the systems is way beyond the scope of a bug bounty program (and probably criminal).


> the severity of the exploit, not the potential damage you could do with it

Isn't severity measured in terms of potential damage?


Yes.

https://www.facebook.com/BugBounty/posts/approaching-the-10t...

CDN bug report... Earlier this year we received a report from Selamet Hariyanto who identified a low impact issue in our CDN... a very sophisticated attacker could have escalated to remote code execution. As we always do, we rewarded the researcher based on the maximum possible impact of their report, rather than on the lower-severity issue initially reported to us. It is now our highest bounty — $80,000.


In 2017, Doxagram made well over $100k selling the emails and phone numbers associated with a relatively small list of Instagram accounts.


The problem with bug bounties is they are one-sided, against the researcher. The conditions of bounties typically stipulate that any attempt at negotiation can be interpreted as extortion, so it is either take it or leave it.


Sounds like a third party might be able to improve the situation by providing escrow.

With their first bugs, researchers are entirely unknown quantities to the company. Stating, "I have a critical zero-day, but I won't tell you what it is until you pay me $BUCKS," clearly won't work.

A reliable escrow service, to whom the researcher can provide the exploit and the company can provide $BUCKS, offers insurance to both parties. If the exploit is not as described, the researcher loses the exploit entirely and gets no $BUCKS, but if the exploit is as described, the company cannot renege on the deal.

(Edit, addressing the direct question more-clearly: perhaps what is necessary to avoid the perception (and reality) of extortion is the emergence of accepted professional understanding for assessing the value of exploits. Without such a system, there will always be a strong incentive pushing people in the direction of blackhat work.)
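The commitment half of that escrow idea can be sketched very simply (a hypothetical sketch, not any real escrow service's protocol; the names are made up): the researcher hands the escrow a hash of the write-up up front, so neither side can later swap what was submitted.

```typescript
import { createHash } from "node:crypto";

// Researcher commits to the exploit write-up before any money moves.
// The nonce prevents the company from brute-forcing guesses at short reports.
function commit(report: string, nonce: string): string {
  return createHash("sha256").update(`${nonce}:${report}`).digest("hex");
}

// Escrow releases $BUCKS only if the revealed report matches the commitment.
function verifyReveal(commitment: string, report: string, nonce: string): boolean {
  return commit(report, nonce) === commitment;
}
```

The hard part escrow actually solves, judging whether the exploit is "as described," still needs a human, but the commitment at least keeps both sides honest about what was submitted and when.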


AFAIK this is similar to Zerodium's business model, except they sell the zero day exploits to governments [0].

From their website [1]:

> "We pay BIG bounties, not bug bounties"

[0]: https://en.wikipedia.org/wiki/Zerodium

[1]: https://www.zerodium.com/


How else would you phrase someone telling you "I have this bug and will exploit it if you don't pay me X amount" vs. "I think the impact is bigger because of Y"? For me, the first sounds quite clearly like extortion.

The first case would get you likely in trouble. The second case would routinely cause a further review in any decent program, and if there's any merit to it, you get a higher bounty.

Nobody is forced to participate in any bug bounty program. If people feel the reward is too low, they should not partake.


False dichotomy, they aren't threatening to exploit it, they simply won't give details of the exploit if they aren't paid.


I'd advise anyone against trying that for a system not owned by them (e.g., someone else's website).

As soon as you do that, you venture into dangerous territory. Companies are required to investigate claims of breaches seriously. And as soon as something like this is escalated, it may be out of the Information Security team's hands to decide the next steps.


> How else would you phrase someone telling you "I have this bug and will exploit it if you don't pay me X amount"

Hello Strawman!

> The second case would routinely cause a further review in any decent program

We literally just read an example of how a big corp responds to #2. Do you think it was a one-off?


I was part of "big corp" for the past three years and was involved in many bug bounty reports. A reasonable claim like "I think this should be higher because XYZ" gets investigated and, if justified, higher bounties issued.

This blog post seems a bit one-sided and doesn't quite match the facts that I have heard. I wasn't there at the time, so I don't know the truth, but that blog post doesn't seem to be quite 100% of it.

What I have seen, however, in the past years, is that some people omit facts or misrepresent things to get some press. So I am quite a cynic on blog posts like this :-)


> A reasonable claim like "I think this should be higher because XYZ" gets investigated and, if justified, higher bounties issued.

That's highly dependent on the individuals and the company running the bounty. It's incredibly reasonable that people are suspicious of the process, given how opaque it is and the disparity in negotiating power between the company and the person submitting the bug.

My personal experience is the FB bug bounty process has been generally positive, but inconsistent at times in the graded severity of issues and transparency of the decisions being made. I've clearly presented my case, and asked for additional information, but not gotten very far. My only real option in response is in how I allocate my time.

Having reports and payout amounts be permanently hidden results in stories like this being the only insight to the process.


Well, it includes verbatim copies of the whole email chain, and those look pretty bad in themselves, without any of the surrounding text.

Unless you're saying they've been tampered with, or that there was additional communication in between that he omitted, it seems pretty clear that this is not a professional way to handle communications.


The researcher doesn't have zero knowledge before choosing to work with/for a company. The history of payouts and the perception of the company in the community are meaningful indicators of willingness to pay.


A large number of companies keep their bug bounty payouts and reports permanently private, which I feel is a disservice to the community.


So, to summarize: you go to a bank and say "your back door is vulnerable, can you check?", and instead of checking and giving you some kind of praise, they call the police to beat the hell out of you...

This is exactly the sort of thing that will make the community of white-hat hackers stop caring, and leave the door open for malicious hackers and foreign agencies to do as they please.

I would like to know what was really going on inside their heads. Was someone internally trying to steal the thunder? Was it vanity/pride? Lack of funds?! Fear?


I think the issue was he went in through the back door, then found a key, and then started unlocking more doors. In other words, he used the initial bug to escalate access into their systems. Which is pretty obviously a no-no.


Why? He jimmied a lock. A bank should not use a padlock. He found a key and found a dead end... oh no, in the dusty closet there was another lost key. That one shouldn't be there... wait the old key opens everything? Oh no.

Privilege escalation is explicitly allowed by Facebook. He escalated.


By the way, while reading this I was expecting a happy ending, something nice to start the day. But alas, this is almost like a heavy Russian drama: it starts on a light tone and ends so depressingly that I would rather go back to bed, crawl under the blanket, and curl into the fetal position.


Off topic, but there is a bug on Instagram that has been bothering me for quite a while.

On web (not sure about the app), if your language is Japanese, for any profile that has 0 following, it will show "Following: 0" as "フォロー中NaN人". A screenshot for the lazy: https://i.imgur.com/rTGXe3T.png

Of course this is a rather minor issue, but it still feels weird to me that one of the most popular websites/services in the world would have this kind of bug live for so long (and yes, I have reported it multiple times).
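A guess at how that kind of bug arises (a hypothetical sketch, not Instagram's actual code): a missing count field gets coerced with `Number()` before being interpolated into the localized string, and `Number(undefined)` is `NaN`.

```typescript
// Hypothetical: build the Japanese "following" label from an API field
// that may be absent when the count is zero.
function followingLabel(count?: number): string {
  // Bug: Number(undefined) is NaN, which stringifies as "NaN".
  return `フォロー中${Number(count)}人`;
}

// Fix sketch: default the missing field before formatting.
function followingLabelFixed(count?: number): string {
  return `フォロー中${count ?? 0}人`;
}

followingLabel(); // → "フォロー中NaN人"
```

That would also explain why only the zero-following case breaks: any nonzero count is presumably present in the API response and formats fine.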


My Kindle says “2GB gratis de 3GB” (in Spanish) which doesn’t make any sense, instead it should say “2GB libres de 3GB” (2GB free of 3GB).

Free can be translated to either “libre” or “gratis”, libre is as in freedom, gratis is free as in beer.

I can’t understand how the most popular reading device would have that kind of mistake in one of the most common languages in the world.


They likely do i18n like other companies, and just outsource it to someone for cheap, and never fix translation errors.


Hah, "free" is often translated wrong into Chinese too for the same reason.


Maybe these bugs involved English speaking devs copy and pasting out of spreadsheets.


I tried a little searching but I can't find anything that says how this all ended. Alex Stamos denied saying anything bad. But then what? It looks like it was all just dropped pretty much as is?


> But then what? It looks like it was all just dropped pretty much as is?

That usually means some money was exchanged and some NDAs were signed.


Why would Facebook NDA paying a researcher? Shouldn't they be shouting it at the top of their lungs?


From my experience working in the PR and media industry, this NDA appears to serve a key purpose: it discourages engagement/discussion on social media platforms, hastening the incident's slide into irrelevance for mainstream media and thus protecting FB's brand reputation and key shareholders.

Security findings are never good for the share price. Therefore it is crucial for the company to take control of the narrative when possible.


From Facebook's POV the researcher behaved badly and rewarding that behavior without an NDA will encourage other researchers to behave badly.


There is no real bug besides the Ruby RCE thing. Cracking weak passwords is not eligible. Sorry. I can see why Facebook denied him a payout, but their approach of contacting his employer was wrong.


Not a "bug" in terms of incorrect code. But if I worked there, I'd sure like to know that

1. There were older versions of apps with config files stored in S3 that contained AWS keypairs for roles with wide open access

2. That such keypairs existed in the first place and were used on servers - probably no service role with such wide access should exist, and even if it did, it ought to be caught by routine audits for overpermissioned roles, and also old keypairs should be retired and rotated regularly

3. That a whole bunch of private key material basically encompassing the keys to the Instagram castle were stored in S3 buckets
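Point 2, retiring old keypairs, is exactly the kind of thing a routine audit can catch. A minimal sketch of such a check (hypothetical types and field names, not a real AWS API call): flag any key past a rotation deadline.

```typescript
// Hypothetical record shape for an access key pulled from an inventory.
interface AccessKey {
  keyId: string;
  createdAt: Date;
}

// Flag any key older than maxAgeDays as overdue for rotation.
function staleKeys(keys: AccessKey[], maxAgeDays: number, now: Date): AccessKey[] {
  const maxAgeMs = maxAgeDays * 24 * 60 * 60 * 1000;
  return keys.filter((k) => now.getTime() - k.createdAt.getTime() > maxAgeMs);
}
```

In practice you'd feed this from the cloud provider's key-listing API on a schedule and alert on any hits, alongside a separate audit for overpermissioned roles.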


Any closure on this? Did FB ever make amends? Surely there are some FB security employees on HN.


Million dollar bug actually sounds like a small amount given the context!


This speaks to a couple of issues that bothered me while working in bug bounty triage.

> Alex informed my employer (as far as I am aware) that I had found a vulnerability, and had used it to access sensitive data. He then explained that the vulnerability I found was trivial and of little value, and at the same time said that my reporting and handling of the vulnerability submission had caused huge concern at Facebook.

[my emphasis]

There is this conceptual separation between the severity of the issue and the impact. Simplifying things much further than the situation described in the piece, you could have an admin account with the password "password". This is a stupid issue. The fix is to change the admin password. How much of a bounty should be paid for this report?

One school of thought is that the value of the report is related to what you can accomplish by exploiting it. This is clearly the right approach if you're assessing the issue's value to an attacker. It has some problems in the bug bounty context -- a major one is that it feels subjectively unfair to the company! They don't want to pay 100x more for the same vulnerability just because, this time, it happened to have more sensitive stuff behind it.

Another is that, as here, you often see a chain of vulnerabilities, all of which are of very little consequence in isolation, but they happen to combine into something much greater than the sum of the parts. (I recall a published writeup, which I can no longer find, in which one important step was a logout CSRF. Nobody cares about those.) The policy of "stop investigating as soon as you find anything" rules out this kind of "whole is greater than the sum of the parts" finding by definition.

> Playing By The Rules

> Microsoft (in my opinion), has done the best job of explaining exactly how far they would like a researcher to take a vulnerability. Google and Yahoo imply that you should report a vulnerability immediately, but do not clarify how far you should go in determining impact. Tumblr, on the other hand, puts in writing the policy of just about every bounty program. The better your PoC shows impact, the more you are likely to get paid. Further, the better a researcher can understand and describe impact, the more likely they are to receive a greater reward.

This bothers me from a fairness perspective. I have personally seen essentially the same report on different pages of a webapp get paid out differently because the researchers provided different speculation about what might be possible using their exploit. The guy who got paid less was careful about following the rules, asking for guidance about exactly what and how he could investigate, and then he only claimed what he was able to demonstrate. The guy who got paid more had a more generic claim that "this demonstrates SQLi, and writing to the database might be possible". I could not establish whether writing to the database was in fact possible for the same reason the first guy (and the second guy) didn't try -- it might have been unacceptably disruptive to the company. So I passed the speculation through, and the payout ended up being higher.

The lesson here is, "claim the moon and the stars." But I feel that means the ecosystem is unhealthy; that's not what I think the lesson should be.

Companies always say they will investigate the full impact of a vulnerability when you follow the protocol they urge of "as soon as you find something, report it and don't try to escalate". But this is nearly impossible to do even if you're trying in good faith.

---

Sometimes you're not trying in good faith. I have also seen what is exactly the same issue paid out differently depending on the category the researcher files it under. Many programs publish payout schedules by category. In this case, the schedule contained a mix of technical category types ("XSS") and functional category types ("account takeover"). One researcher found a way to present an issue in a low-paying technical category as a high-paying functional category. I repeatedly noted in my reports to the company that this researcher was getting paid quite a lot more for the same vulnerability than other researchers who didn't know about the loophole. This state of affairs never changed; I assume the main concern was maintaining the relationship with the loophole guy. But obviously, this sort of thing directly falsifies the claim that "we will investigate the full impact of the issue you report and pay out appropriately."


> Companies always say they will investigate the full impact of a vulnerability when you follow the protocol they urge of "as soon as you find something, report it and don't try to escalate". But this is nearly impossible to do even if you're trying in good faith.

Disclaimer: I was a Security Engineer on the FB Security Team until last month and regularly attended the payout meetings :-)

I've seen plenty of bug bounty programs make such claims, but the Facebook program lives up to this promise the most. Every bug is root-caused to the line that caused the issue and assessed on maximal potential impact.

Sometimes that leads to cases where low-impact vulnerabilities get paid out tens of thousands of dollars. The big bounty often comes as a big surprise to the reporter :-)

Just a recent example: a bug bounty hunter reported unexpired CDN links. After internal research, FB figured out how to chain this into a Remote Code Execution and paid out 80k USD (https://www.facebook.com/BugBounty/posts/approaching-the-10t...)

Facebook has big pockets. As a bug bounty hunter, I'd not worry about being screwed by them. It's by far one of the best paying bounty programs.

There are many reasons to criticize Facebook or Instagram. But the handling of its application security should not be in the top 10 :-)


So what do you think is going on in this piece (5 years old), where Alex Stamos characterizes the issue as "trivial and of little value"?

> Facebook has big pockets. As a bug bounty hunter, I'd not worry about being screwed by them. It's by far one of the best paying bounty programs.

I don't think the middle sentence is related to the other two. Every company I triaged for had deep pockets. I routinely saw payouts in excess of $1,000 and not uncommonly several thousand. I don't recall ever seeing one that hit $10,000. But what I'm describing above are ways for the company to screw the researcher without really being motivated by stinginess. Fairness is not a concern.


> So what do you think is going on in this piece (5 years old), where Alex Stamos characterizes the issue as "trivial and of little value"?

I sadly wasn't there at the time, and Stamos's post doesn't refer to it at all, so I can't comment on this.

I guess the truth on this is just known to the researcher, their boss, and Stamos.

> But what I'm describing above are ways for the company to screw the researcher without really being motivated by stinginess. Fairness is not a concern.

That's a fair point, and I can see how representation can cause a significantly different payout decision, especially if there is no technical payout panel with a security background.

Phrasing something as "Reflected XSS" vs. "Account Take-Over via XSS" sounds undoubtedly different. But it is impact-wise probably the same.

The problem is mitigated at Facebook by having engineers in the payout panel that understand the tech stack and security implications. But I think many companies don't have that luxury, and you undoubtedly may end up with inconsistencies.

Thanks for sharing your perspective. Much appreciated!


Are there any means inside IG/FB to let engineers (or employees in general) hold company/managers accountable in cases like this?


Is it legal to start an auction to sell an exploit, never close on it, then use that as the price point to negotiate a bounty?



