Many folks in the security community might suggest a) posting an oblique public warning like "There exists a security problem with this; I have mailed the devs", b) actually mailing the devs, c) waiting for confirmation of a fix or a reasonable amount of time, and only then d) tar-and-feather. The term of art for this is "responsible disclosure."
This incentivizes people to fix things quickly and preserves the reputational value of breaking into things without researcher-vendor relations getting adversarial when you announce something like "I harvested a couple dozen of your customers' API keys" or "Here's an exploitation roadmap you can follow in your browser" in a public forum.
A policy of so-called responsible disclosure is a reasonable approach to take when dealing with an established product/service that contains a minor vulnerability, something potentially dangerous but unlikely to be exploited in the immediate future with serious negative effects.
In this case, we appear to have a new project run by people who don't know what they're doing, with a glaring vulnerability that had presumably already compromised 80+ people's sensitive credentials and in turn who knows what other sensitive information. Bringing it down as fast as humanly possible and loudly so no-one else gets damaged in the meantime is entirely justified in a case like this.
Completely agree. Look at his responses to the security issue being brought up in the original thread [1]. This is someone who is clearly playing fast and loose with some frameworks, and people's information, and does not deserve to be given that consideration.
In his own responses, he says that he won't email the users! I can't imagine how upset I would be if this had my information. Access control for something like this is dead simple.
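To give a sense of just how simple: the whole fix is to never trust the id in the URL, and to check it against the logged-in session before returning anything. A rough sketch in Python/Flask (purely illustrative; the route, field names, and data here are my assumptions, not the actual app, which appears to be Rails):

    # Illustrative sketch only: scope the record to the logged-in user so that
    # editing the id in the URL yields a 401/403 instead of someone else's keys.
    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "change-me"  # placeholder session-signing key

    # Stand-in for a settings table keyed by user id.
    SETTINGS = {1: {"aws_key": "AKIA...AAAA"}, 2: {"aws_key": "AKIA...BBBB"}}

    @app.route("/settings/<int:user_id>")
    def show_settings(user_id):
        current_user = session.get("user_id")
        if current_user is None:
            abort(401)              # not logged in at all
        if current_user != user_id:
            abort(403)              # logged in, but not the owner of this record
        return SETTINGS.get(user_id, {})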
But whether a security disclosure stays private shouldn't be decided based only on the company responsible for it. It should also take into account the fact that users of that company's product could be harmed by a public disclosure.
In this case I think the public disclosure would have two effects:
1. Put current users of the product at risk.
2. Prevent people from signing up for the product.
So the question really becomes, does 1 outweigh 2, or vice-versa? (and the answer to that also depends on how cooperative & quick the company will be with a fix)
Also, this is what I would call a functioning market. People using SaaS products should make it very clear that skipping security will take "viable" out of your MVP. Making an example of them may be a humiliating experience for the guys who built this, but it incentivizes doing the right thing.
It's not about kindness toward the people running the service, it's about kindness toward the service's users who are potentially compromised.
With responsible disclosure, only the original poster and anyone else who happened to figure this out would know. Now everyone does, and any random attacker can just go to the site and harvest AWS credentials from anyone signed up.
I understand what you're saying, but in this case I doubt that hiding it was going to help much. The vulnerability would have been obvious to a lot of people, and the site had already gone high profile via sites like HN, so many people would have been aware of it.
In a position where no choice of action/inaction is guaranteed to be harmless, I think limiting the damage is probably the most practical choice, and certainly a reasonable one. It limits the number of potential victims, and it also serves as a warning to those developing future sites that this sort of screw-up is not acceptable.
My own policy is that each situation is unique and in some cases you have to disclose details upfront.
I believe this was one of those cases. The founders of this application were told in the original thread that there were security issues. They didn't respond to the issue and continued allowing users to sign up.
Their immediate response should have been to shut down the application with a maintenance page. Their response was instead to tell users to delete their accounts[1].
The other factor here is that, because of the type of application, users were likely to upload private and sensitive information. This wasn't a simple todo application that users would test out with fake data; it is a backup application.
The combination of poor initial response, the sensitivity of the data being used and the popularity of the application (being at the top of HN, all over twitter etc.) would lead me to make the exact same decision this blogger did. It was important to notify all users ASAP that there are problems here, so that they could act on it.
Edit: didn't you do something similar with the Diaspora launch? I think that was another example where it was important to get the vulnerability information out since that first release was popular, users were uploading sensitive information and it was going to take some work to secure the app.
I gave Diaspora advice on fixing the vulnerabilities then a week to do it prior to mentioning anything more specific than "There exist multiple very bad bugs here."
I misremembered. It is interesting to read that thread again[1] since there was a similar discussion about disclosure.
FTR, I don't think that the gap between saying there is a security vulnerability and describing it is very large, especially when the audience contains capable penetration testers.
He submitted this article after the vulnerability was already fixed. (I grant that the initial comment came before.) I'd be inclined to agree with you, overall, save for a couple mitigating factors in this case:
1) The founder's behavior in the other thread, including refusing to notify affected parties.
2) Such a simple mistake worries me about what else might be vulnerable in the application, which is built to handle users' backup data, and for that reason alone I think this article is extremely important right now.
This is not a classic security problem, a "whoops" on a borderline case. This is a failure to implement a security-101 basic feature.
I know it's hard for the people behind this company, and they probably invested a lot of time and love in building this product. But we can't just let that pass, for the sake of the people who'll use the service next (I mean, with something that basic missed, what's next?).
Please, just go back to learning how to build web applications, and see you in a few months with a great product! (Because, yes, the idea was interesting.)
I'm generally with you, but I do hope that this tarring and feathering will drive home one point:
Don't trust the client.
The user ID in the URL like this is a giant "try editing me and see what happens" sign, even if you came with no intention of providing unsolicited pen testing. I seriously doubt just this one person noticed.
Not sure what the parent is referring to, but back in the day you'd have porn preview galleries with no index, but with an easily enumerable id in the URL. Ah, youth...
I think it's at an entirely different level when it's a brand new company and a trivial security flaw. That just seems like incompetence, and he's absolutely right to suggest not trusting people like that with your data.
It's incompetence (at security), but not malice, and they were willing to fix the problem quickly.
It's a problem of competing claims -- you want to keep the world safe so end users are protected, and are willing to use new (secure) services, but you also want to avoid discouraging developers (either these guys, or others who see how they're being ragged on and choose not to develop something on their own).
It's not a fundamental flaw in the application, just an admin interface error. Yes, they should have known to test, but I reserve the nuclear hate for willfulness, since hate and vitriol are sometimes in short supply.
The mistake betrays so much incompetence that there is really no way for me to trust anything they ever do again. The other mistakes they make might not be quite so easy to find.
Just yesterday we had someone publish a "securely delete your email" application. 'tptacek found problems in it immediately[1], but he didn't call the guy incompetent or an idiot or "never trust anything he does again." There was no attempt to shame.
I see the more experienced people around here have a lot more sympathy for these guys. If you've done a lot, you've also had some public mistakes. You grow empathy.
I do find the company's follow-up offensive. Hopefully they will learn from that, as well.
The thread in question involved a few broad oversights in a tool, not an immediate disclosure due to a trivial oversight. There's a difference between not generating random numbers correctly and immediately disclosing every AWS key you've been given.
The purpose of "responsible disclosure" is to prevent subtle vulnerabilities from being known by more people.
For a vulnerability as obvious as this, it's a fair bet that bad guys will notice immediately. "Responsible disclosure" is great when you've discovered something tricky, but it's irresponsible when anyone else can notice as easily as you can.
Remember that the term "responsible" is about responsibility to the users, not to the developers. If publicizing a vulnerability would leak it to bad guys who don't have it, the responsible thing to do is not to leak it. If the bad guys already have it, the responsible thing to do is to tell the public. (After all, disclosure is about whether to tell the public, not whether to tell the developers.)
I agree with you, and I have done so in the past: quietly telling the developers of a website that they have large vulnerabilities that allowed me to gain almost complete access to large numbers of accounts.
With things like that, especially when revealing them could lead to people using the vulnerability maliciously, I don't think it is ethical to release details, unless the developers give no indication they are going to fix it.
It is bothersome, however, when they don't even thank you for bringing it to their attention.
You speak of "responsible disclosure", but what about "responsible launch"?
If a backend is coded this poorly, it betrays irreparable and highly dangerous levels of idiocy, laziness, and lack of foresight in the ones who coded it. Everyone deserves to be informed of this blunder so they know to avoid this group like the plague.
Public ridicule and preemptive destruction of the brand is the only conscionable reaction.
It sounds like it wasn't launched yet. The founders say they built it for themselves and their friends to start. Someone discovered the URL and posted it to Hacker News.
They probably should have shut it down or disabled registrations once it got out until it was tested.
> It sounds like it wasn't launched yet. [...] Someone discovered the URL and posted it to Hacker News.
Sorry, but if you put a public site on the Internet, somewhere it can be discovered, and you are prompting people to put in sensitive credentials on that site, then you have launched for practical purposes. You should be implementing security measures accordingly.
If you're not ready for that and just want to show friends, it's not exactly rocket science to add basic HTTP Auth to the site, lock it to specific IP addresses, or any number of other trivial measures that would have prevented this problem.
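For instance, a site-wide HTTP Basic Auth gate plus an IP allowlist is only a few lines. A rough sketch in Python/Flask (the credentials and addresses below are placeholders, not anything from the actual site):

    # Sketch of a "not launched yet" gate: every request must come from a known
    # address and present Basic Auth credentials; everything else is rejected.
    from flask import Flask, request, Response, abort

    app = Flask(__name__)

    DEMO_USER, DEMO_PASS = "friends", "not-a-real-password"   # placeholders
    ALLOWED_IPS = {"127.0.0.1"}                                # placeholder allowlist

    @app.before_request
    def gate():
        if request.remote_addr not in ALLOWED_IPS:
            abort(403)
        auth = request.authorization
        if not auth or auth.username != DEMO_USER or auth.password != DEMO_PASS:
            # Returning a response here short-circuits the request and makes
            # the browser prompt for credentials.
            return Response("Auth required", 401,
                            {"WWW-Authenticate": 'Basic realm="preview"'})

    @app.route("/")
    def index():
        return "only friends see this"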
Friends are customers, too. One row in your database is a customer.
Before ever putting a service up on the public Internet (service defined here as "accepts arbitrary requests" and "delivers arbitrary responses"), I would hope every human being that knows his way around a text editor treats user data like the Dead Sea Scrolls. If you store a row in a database, you then think of every way that an unauthorized party can gain access to that row and close each in multiple ways. I can recite dozens of cases where user data hasn't been treated with the respect it deserves (i.e., every single Bitcoin disclosure due to newer developers running sites that are handling money).
If people took user data more seriously than they do in general, we'd have a lot fewer leaks. Imagine if this had gone undiscovered and the service had taken off. Imagine how many undiscovered vulnerabilities there are in there, with this track record to start.
I can't sympathize with this at all. I just can't.
> It sounds like it wasn't launched yet. The founders say they built it for themselves and their friends to start. Someone discovered the URL and posted it to Hacker News.
It should not have been on the public internet without access control for editing/viewing personal information like this. As soon as a site is visible on the internet there are bots trying all conceivable URLs on it and scraping for information. If you look in the logs of any server you'll find all sorts of .php, .aspx, etc. URLs as bots try to find vulnerabilities, no matter what you're running. I'm sure there are some Rails scrapers out there too, though perhaps they're not too common yet.
There are probably a lot of other holes if they left the user security so wide open.
I don't get this. Ensuring that users can't edit/access the profiles of other users is trivial in most frameworks.
It shouldn't be something that slips through testing. If you aren't doing that from the start, something is seriously wrong with how you're building out your application.
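Even a single regression test catches it. A hypothetical pytest sketch against the Flask-style example earlier in the thread (the myapp import and route are placeholders, not the app's real test suite):

    # Hypothetical test: a logged-in user must not be able to read another
    # user's settings, even if they guess the id in the URL.
    import pytest
    from myapp import app  # placeholder name for the sketch application above

    @pytest.fixture
    def client():
        app.config["TESTING"] = True
        with app.test_client() as c:
            yield c

    def test_cannot_read_other_users_settings(client):
        # Log in as user 1 by writing the session directly (test-only shortcut).
        with client.session_transaction() as sess:
            sess["user_id"] = 1

        assert client.get("/settings/1").status_code == 200   # own record: OK
        assert client.get("/settings/2").status_code == 403   # someone else's: denied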