Mildly off topic but I don't care: I'll never forget the time I was invited as an early developer for the Palm Pre. At the time, Palm's website was horrendously bad, both security- and usability-wise.
While waiting for their website to accept my upload I was poking around and decided to see what I could get into. I found that the addresses were something like /user/<sequential id> so I wondered "could I look at another user's page by changing the ID? Na, no way that would work"...but it did work. In fact I was able to see and theoretically control app submissions for all 500 developers at the time. This set of pages even included tax IDs.
So I immediately went to Palm and told them about the issue and how to reproduce. After about two weeks they finally reported that the issue was fixed...except it wasn't. What they did was change the way the pages were accessed from a standard GET with the ID in the URL to using JavaScript to accomplish the same thing but kinda sorta hide the ID (so basically fetch content via JavaScript versus page loading via direct browsing). So naturally I was able to change the ID and still get in.
It would take them another month to finally fix this issue. I was never able to convince them to let developers know their tax IDs may have been exposed along with all of their other information. I did get a special mention in one of their release notes but they spelled my name wrong :(
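For anyone wondering what the actual fix looks like (as opposed to hiding the ID behind JavaScript): check ownership on the server before returning anything. Below is a minimal sketch, assuming a Flask-style backend purely for illustration - not Palm's actual stack, and the names and data are made up:

    # Hypothetical sketch: authorize the request server-side instead of
    # obfuscating the ID client-side. Framework and data are illustrative.
    from flask import Flask, abort, jsonify, session

    app = Flask(__name__)
    app.secret_key = "replace-me"  # placeholder for illustration

    SUBMISSIONS = {42: {"owner_id": 42, "apps": ["ExampleApp"], "tax_id": "REDACTED"}}

    @app.route("/user/<int:user_id>")
    def user_page(user_id):
        # The ID in the URL is untrusted input; compare it to the logged-in session.
        if session.get("user_id") != user_id:
            abort(403)  # refused outright, not merely hidden
        record = SUBMISSIONS.get(user_id)
        if record is None:
            abort(404)
        return jsonify(record)

Whether the ID arrives via a direct page load or an XHR makes no difference; the check has to happen on the server.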
I have yelled at people for this for years. I'm thrilled to know it has a name. This is why the URL pages on delicious were md5'd and not the raw url_id.
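For the curious, the delicious trick amounts to keying public pages by a hash of the URL instead of the sequential row id, so the keyspace isn't enumerable. A tiny sketch, with function names of my own invention:

    # Hypothetical sketch: derive the public key for a page from a hash of the
    # URL rather than exposing the raw, sequential url_id.
    import hashlib

    def public_url_key(url: str) -> str:
        # md5 used as a non-guessable identifier here, not for password storage
        return hashlib.md5(url.encode("utf-8")).hexdigest()

    print(public_url_key("https://example.com/"))

It doesn't replace proper authorization checks, but it does stop casual enumeration.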
I signed up for the Firebase beta in February 2013 using a unique email address and got a phishing mail to that address in September 2013.
I reported this fact to security@firebase.com but never got a reply. I guess they preferred to sweep the theft of their customer database under the carpet.
We once used a third-party service (the name eludes me now) to handle our customer mailing list, and of course they got hacked, so everyone who used unique email address + extensions blamed us for selling their addresses to spammers. They only acknowledged the hack in a "we're investigating this"-type blog post, and six months later they revamped their blog and the post mysteriously disappeared...
> We once used a third-party service ... they got hacked ... [people] blamed us for ...
From my point of view as a client, I don't care that the problem was with one of your suppliers: my relationship is with you, not them; I trusted you with my information, not them; and so forth. Unless you gave your users a choice about whether their information went to that third party, then from your users' point of view it is your fault - there was nothing they could do to prevent the problem (other than not use your service).
I know it is not realistic to expect you to fully vet the security of all third parties and take full responsibility for any failures of theirs, since no amount of due diligence will protect against every eventuality. That's why I use the "unique email address" approach and take extra care when handing out personal details that are harder to fake (phone numbers, details needed for payment processing, ...). But it is also not realistic to disavow responsibility when something like that does go wrong at a supplier. You chose to trust that service, not your users, and you put your users' information in a position of being out of your (and their) control.
This is awesome, and exactly the right way to go about things (reporting things privately, telling people when there's a good outcome). It's good to see so many HN companies handling their side of the issues so well too.
EDIT: Just for curiosity's sake, how many HN companies have you tried to hack? 24 successes (not counting HN as a company) out of every company that's been through the accelerator would indicate HN companies are unusually good at websec. Conversely, if you've only tried to hack 24 and you've had 24 successes, that would indicate something else.
I was targeting YC companies in general; I just filtered the list down to my successes with YC companies.
With bug bounties there are many cases where an issue is a duplicate or the company is already aware of it internally. That counts as a failure for me, but not for the companies.
I don't remember the exact ratio, but sometimes it was easy and sometimes it wasn't. As I only test websites, and most companies do test their own sites, you can consider it roughly a 70-30 ratio of success to failure.
Well, they also put your name on their "thank you" page and sent you a nice email! What else could you possibly want?
It might be a multi-million dollar business, but it's not like these hacks can actually cost them millions of dollars. Verizon has had employees giving out personal details to people on the phone for years, and they're still happy to do it even for the director of the CIA: https://www.schneier.com/blog/archives/2015/10/the_doxing_tr...
I think Schneier is arguing that if companies were liable for their disregard of even minimal security standards, they might pay you more to help find vulnerabilities.
I did my first conference talk[0] on this, and also have a similar list.
Security consulting is expensive and the value just isn't there at all for early stage companies. It's why I think the OWASP Top 10 should be required reading for founders.
As for my conference talk, the delivery was atrocious (warning if you choose to watch). I spent 5 minutes per startup and churned through hundreds. I didn't name names because there were too many bugs to report after a day or two of doing it.
I agree with you that early stage companies generally do not have much utility in security consulting due to its expense, but unfortunately "crowdsourced security" is not yet a viable replacement.
Once a company has enough funding that it has left the "early stage" point, there is almost no reason not to engage with security firms. This doesn't mean pay a firm $20,000 for a week or two of work; it means find the highest quality you can afford.
My own firm works with YC companies all the time and they are generally very happy with the work I do. I think it really comes down to what you offer. If you charge an unreasonable amount, have pushy salespeople, inflate the findings in your report or just view your job as handing off a report and demanding a bill, you're doing it wrong and not contributing value.
On the other hand, fairly priced security consulting with an eye towards developer education and working with the company to resolve their vulnerabilities contributes a lot of value. More security firms should try to help companies improve their security in the SDLC.
I do hope crowdsourcing security improves. I think it could be better, but it isn't yet. The results in my experience are mixed - for every bounty hunter who finds vulnerabilities you have another nine who just spam for pity findings on Hackerone and Bugcrowd. Most of the successful bounty hunters eventually just open up their own consulting shops or take very lucrative jobs with top companies like Google or Facebook.
I do wish there was a middle ground. I don't think it's fair for security consultants to work for free (which very often happens with bug bounties, even if they are very good). However, I really don't like how inflated the pricing has become at the largest security firms, which appears to be a side effect of having account managers, project managers, salespeople, "solutions architects" and finally the consultants themselves on each engagement.
I can break mobile apps too, but my workflow is less streamlined for churning through a hundred companies, so I didn't do anything mobile for the talk. The talk was 100% web-based.
The general case is that you are taking input from an untrusted source and later displaying it. You need to do two things: validate that it is the kind of data you are expecting, and scrub it of entities that might be a problem.
You can't do either one of these on the browser side because a bad actor can pretend to be your code and submit it without the checks. (For good clients, you can run a check that prevents them from making simple mistakes. You can't trust that, though.)
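To make that concrete, here's a minimal server-side sketch of both steps - validate the shape of the data, then scrub it before display. The pattern and function names are just illustrative assumptions:

    # Hypothetical sketch: validate, then escape, untrusted input on the server.
    import html
    import re

    USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

    def accept_display_name(raw: str) -> str:
        # 1) validate: is this the kind of data we expect?
        if not USERNAME_RE.fullmatch(raw):
            raise ValueError("rejected: unexpected characters or length")
        # 2) scrub: neutralize markup entities before the value is ever rendered
        return html.escape(raw)

Any client-side copy of these checks is a usability nicety, not a security control.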
Validating that you have a correctly formatted URL is not too hard. You can even request the URL to make sure that it's reachable. But that doesn't tell you anything about the content, because evil people can't be trusted to turn on the evil flag in their packets. The safest case is to drop all submissions with IDNs; the next safest is to compile a list of homographs and drop anything with those; after that, you might keep a blacklist (which, unfortunately, can grow without reasonable bounds). You can outsource the blacklisting to centralized services checked via HTTPS API or DNS RBL or...
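As a rough illustration of the "drop all submissions with IDNs" option (the safest case above), one approach is to accept only http(s) URLs whose hostname is plain ASCII and not already punycoded. The exact strictness here is an assumption; adjust to taste:

    # Hypothetical sketch: reject IDN hostnames outright, which also kills most
    # homograph lookalikes before any blacklist is even consulted.
    from urllib.parse import urlsplit

    def is_acceptable_url(raw: str) -> bool:
        parts = urlsplit(raw)
        if parts.scheme not in ("http", "https") or not parts.hostname:
            return False
        try:
            parts.hostname.encode("ascii")  # non-ASCII hostname -> IDN, drop it
        except UnicodeEncodeError:
            return False
        if parts.hostname.startswith("xn--") or ".xn--" in parts.hostname:
            return False  # already-punycoded IDN labels
        return True

    assert is_acceptable_url("https://example.com/page")
    assert not is_acceptable_url("https://ex\u0430mple.com/")  # Cyrillic 'a' homograph

You lose legitimate internationalized domains, which is exactly the trade-off between the "safest" and "next safest" options described above.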
I found a bug in your page! The link to the Gitlab acknowledgements page isn't right (not sure what the correct link should be). Let me know what my bounty will be ;)
I believe you are referring to the original title: "List of Y Combinator companies I have worked with(hacked)" (I had the same feeling when I opened the page).
However, the author is a security researcher and has indeed hacked the companies in question (as in, found and disclosed vulnerabilities), so I think the title is not ambiguous.
I don't think anything he found could come close to bankrupting any of the companies... but I agree he should be receiving a more significant award, even if the companies don't have a bug bounty program yet.
Most of them are early stage, most of them just can't trust a random email, and some of them would rather save the money for Facebook ads. That's how startups work these days.