Everyone here who is thinking of giving this company the benefit of the doubt needs to go read their (smeagle) responses to RKearney from the original thread. Here are some samples of the careless attitude behind this:
----
"if anyone's concerned about your AWS key, just destroy your IAM user and create a new one. that's what it was designed for."
----
In response to advice saying they should notify users by email:
"good idea.
actually, we'll just wipe them and force new ones."
----
In response to RKearney warning people about just what exactly is exposed:
"in case you have issues with your AWS keys. RKearny's email: ryan@ryankearney.com https://secure.gravatar.com/avatar/f7d7b021fb488fe6a67ddb286...."
I was slightly sympathetic to them right up until smeagle posted RKearney's email. While not noticing an incredibly obvious security hole is serious, it's somewhat understandable in the context of a site that unintentionally goes public before it's ready. What's far, far worse is the mindset in which someone who points out a security hole is the problem, and should be personally attacked.
They should have thanked him, notified their users, done a thorough review of their own security, and warned new signups to only use IAM keys. Instead they got defensive, made excuses, and attacked the messenger.
> it's somewhat understandable in the context of a site that unintentionally goes public before it's ready
Sorry, but no. There is absolutely no good reason, when creating a user controller - even when writing the very first lines - not to check against the current user on sensitive actions.
Actually, I'd argue that's the very first thing you do once the controller skeleton is set up.
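For reference, that check is only a few lines in a Rails controller. A minimal sketch, assuming the app's authentication layer exposes a current_user helper (everything here is illustrative):

    class UsersController < ApplicationController
      # Only allow a user to act on their own record.
      before_action :require_correct_user, only: [:show, :edit, :update, :destroy]

      private

      def require_correct_user
        # current_user comes from whatever session/authentication code the app uses.
        redirect_to root_url unless current_user && current_user.id == params[:id].to_i
      end
    end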
I'm a big fan of responsible disclosure, and I think that the HN crowd has far too much of a one-track mind about "always disclose security problems everywhere!", especially with something that may not have been ready for release.
But that "contact Ryan if anything goes wrong" is grade A asshole.
This is a severe lack of customer service. The least that can be done is a quick shutdown of the site until there's a good fix, an email to all customers (since legally they have to disclose the breach: http://en.wikipedia.org/wiki/Security_breach_notification_la...), and thanks to whoever reported the issue.
If you make a mistake, own up to it. Honesty is key to building a business, and I'm sure they've at least lost HN's trust for any product in the future.
Many folks in the security community might suggest a) An oblique warning publicly like "There exists a security problem with this; I have mailed the devs" b) actually mailing the devs c) waiting for confirmation of fix or a reasonable time and only then d) tar-and-feather. The term-of-art for this is "responsible disclosure."
This incentivizes people to fix things quickly and preserves the reputational value of breaking into things without researcher-vendor relations getting adversarial when you announce something like "I harvested a couple dozen of your customers' API keys" or "Here's an exploitation roadmap you can follow in your browser" in a public forum.
A policy of so-called responsible disclosure is a reasonable approach to take when dealing with an established product/service that contains a minor vulnerability, something potentially dangerous but unlikely to be exploited in the immediate future with serious negative effects.
In this case, we appear to have a new project run by people who don't know what they're doing, with a glaring vulnerability that had presumably already compromised 80+ people's sensitive credentials, and in turn who knows what other sensitive information. Bringing it down as fast and as loudly as humanly possible, so no one else gets damaged in the meantime, is entirely justified in a case like this.
Completely agree. Look at his responses to the security issue being brought up in the original thread [1]. This is someone who is clearly playing fast and loose with frameworks and with people's information, and does not deserve to be given that consideration.
In his own responses, he says that he won't email the users! I can't imagine how upset I would be if this site had my information. Access control for something like this is dead simple.
But whether a security disclosure stays private shouldn't just be based on the company responsible for it. It should also take into account the fact that users of that company's product could be harmed by a public disclosure.
In this case I think the public disclosure would have two effects:
1. Put current users of the product at risk.
2. Prevent people from signing up for the product.
So the question really becomes, does 1 outweigh 2, or vice-versa? (and the answer to that also depends on how cooperative & quick the company will be with a fix)
Also, this is what I would call a functioning market. People using SaaS products should make it very clear that skipping security will take "viable" out of your MVP. Making an example of them may be a humiliating experience for the guys who built this, but it incentivizes doing the right thing.
It's not about kindness toward the people running the service, it's about kindness toward the service's users who are potentially compromised.
With responsible disclosure, only the original poster and anyone else who happened to figure this out would know. Now everyone does, and any random attacker can just go to the site and harvest AWS credentials from anyone signed up.
I understand what you're saying, but in this case I doubt that hiding it was going to help much. The vulnerability would have been obvious to a lot of people, and the site had already gone high profile via sites like HN, so many people would have been aware of it.
In a position where no choice of action/inaction is guaranteed to be harmless, I think limiting the damage is probably the most practical choice, and certainly a reasonable one. It limits the number of potential victims, and it also serves as a warning to those developing future sites that this sort of screw-up is not acceptable.
My own policy is that each situation is unique and in some cases you have to disclose details upfront.
I believe this was one of those cases. The founders of this application were told in the original thread that there were security issues. They didn't respond to the issue and continued allowing users to sign up.
Their immediate response should have been to shut down the application with a maintenance page. Their response was instead to tell users to delete their accounts[1].
The other factor here is that, because of the type of application, users were likely to upload private and sensitive information. This wasn't a simple todo application where users would test it out with fake data; it was a backup application.
The combination of the poor initial response, the sensitivity of the data involved and the popularity of the application (being at the top of HN, all over Twitter etc.) would lead me to make the same decision this blogger did. It was important to notify all users ASAP that there were problems here, so that they could act on it.
Edit: didn't you do something similar with the Diaspora launch? I think that was another example where it was important to get the vulnerability information out since that first release was popular, users were uploading sensitive information and it was going to take some work to secure the app.
I gave Diaspora advice on fixing the vulnerabilities then a week to do it prior to mentioning anything more specific than "There exist multiple very bad bugs here."
I mis-remembered. It is interesting to read that thread again[1] since there was a similar discussion about disclosure.
FTR, I don't think that the gap between saying there is a security vulnerability and describing it is very large, especially when the audience contains capable penetration testers.
He submitted this article after the vulnerability was already fixed. (I grant that the initial comment came before.) I'd be inclined to agree with you, overall, save for a couple mitigating factors in this case:
1) The founder's behavior in the other thread, including refusing to notify affected parties.
2) Such a simple mistake worries me about what else might be vulnerable in an application that is built to handle users' backup data, and for that reason alone, I think this article is extremely important right now.
This is not a classic security problem causing a "whoops" in some borderline case. This is a failure to implement a Security 101 basic feature.
I know it's hard for the people behind this company, and they probably invested a lot of time and love in building this product. But we can't just let that pass, for the sake of the people who'll use that service next (I mean, with something that basic missed, what's next?).
Please, just go back to learning how to build web applications, and see you in a few months with a great product! (Because, yes, the idea was interesting.)
I'm generally with you, but I do hope that this tarring and feathering will drive home one point:
Don't trust the client.
The user ID in the URL like this is a giant "try editing me and see what happens" sign, even if you came with no intention of providing unsolicited pen testing. I seriously doubt just this one person noticed.
Not sure what the parent is referring to, but back in the day you'd have porn preview galleries with no index, but with an easily enumerable id in the URL. Ah, youth...
I think it's at an entirely different level when it's a brand new company and a trivial security flaw. That just seems like incompetence, and he's absolutely right to suggest not trusting people like that with your data.
It's incompetence (at security), but not malice, and they were willing to fix the problem quickly.
It's a problem of competing claims -- you want to keep the world safe so end users are protected, and are willing to use new (secure) services, but you also want to avoid discouraging developers (either these guys, or others who see how they're being ragged on and choose not to develop something on their own).
It's not a fundamental flaw in the application, just an admin interface error. Yes, they should have known to test, but I reserve the nuclear hate for willfulness, since hate and vitriol is sometimes in short supply.
The mistake betrays so much incompetence that there is really no way for me to trust anything they ever do again. The other mistakes they make might not be quite so easy to find.
Just yesterday we had someone publish a "securely delete your email" application. 'tptacek found problems in it immediately[1], but he didn't call the guy incompetent or an idiot or "never trust anything he does again." There was no attempt to shame.
I see the more experienced people around here have a lot more sympathy for these guys. If you've done a lot, you've also had some public mistakes. You grow empathy.
I do find the company's follow-up offensive. Hopefully they will learn from that, as well.
The thread in question involved a few broad oversights in a tool, not an immediate disclosure due to a trivial oversight. There's a difference between not generating random numbers correctly and immediately disclosing every AWS key you've been given.
The purpose of "responsible disclosure" is to prevent subtle vulnerabilities from being known by more people.
For a vulnerability as obvious as this, it's a fair bet that bad guys will notice immediately. "Responsible disclosure" is great when you've discovered something tricky, but it's irresponsible when anyone else can notice as easily as you can.
Remember that the term "responsible" is about responsibility to the users, not to the developers. If publicizing a vulnerability would leak it to bad guys who don't have it, the responsible thing to do is not to leak it. If the bad guys already have it, the responsible thing to do is to tell the public. (After all, disclosure is about whether to tell the public, not whether to tell the developers.)
I agree with you, and I have done the same in the past: quietly telling the developers of a website that they have large vulnerabilities that allow me to gain almost complete access to large numbers of accounts.
With things like that, especially when revealing it could lead to people using the vulnerability maliciously, I don't think it is ethical to release details of it, unless they give no indication that they are going to fix it.
It is bothersome when they don't even thank you for bringing it to their attention, however.
You speak of "responsible disclosure", but what about "responsible launch"?
If a backend is coded this poorly, it betrays irreparable and highly dangerous levels of idiocy, laziness, and lack of foresight in the ones who coded it. Everyone deserves to be informed of this blunder so they know to avoid this group like the plague.
Public ridicule and preemptive destruction of the brand is the only conscionable reaction.
It sounds like it wasn't launched yet. The founders say they built it for themselves and their friends to start. Someone discovered the URL and posted it to Hacker News.
They probably should have shut it down or disabled registrations once it got out until it was tested.
> It sounds like it wasn't launched yet. [...] Someone discovered the URL and posted it to Hacker News.
Sorry, but if you put a public site on the Internet, somewhere it can be discovered, and you are prompting people to put in sensitive credentials on that site, then you have launched for practical purposes. You should be implementing security measures accordingly.
If you're not ready for that and just want to show friends, it's not exactly rocket science to add basic HTTP Auth to the site, lock it to specific IP addresses, or take any number of other trivial measures that would have prevented this problem.
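Something along these lines in a Rails app would have kept strangers out entirely while it was friends-only - a sketch, with placeholder credentials:

    # app/controllers/application_controller.rb
    class ApplicationController < ActionController::Base
      # Site-wide HTTP Basic auth until the app is actually ready to launch.
      http_basic_authenticate_with name: 'friends', password: 'not-for-the-public'
    end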
Friends are customers, too. One row in your database is a customer.
Before ever putting a service up on the public Internet (service defined here as "accepts arbitrary requests" and "delivers arbitrary responses"), I would hope every human being that knows his way around a text editor treats user data like the Dead Sea Scrolls. If you store a row in a database, you then think of every way that an unauthorized party can gain access to that row and close each in multiple ways. I can recite dozens of cases where user data hasn't been treated with the respect it deserves (i.e., every single Bitcoin disclosure due to newer developers running sites that are handling money).
If people took user data more seriously than they do in general, we'd have a lot fewer leaks. Imagine if this had gone undiscovered and the service had taken off? Imagine how many undiscovered vulnerabilities there are in there, with this track record to start?
I can't sympathize with this at all. I just can't.
> It sounds like it wasn't launched yet. The founders say they built it for themselves and their friends to start. Someone discovered the URL and posted it to Hacker News.
It should not have been on the public internet without access control for editing/viewing personal information like this - as soon as a site is visible on the internet, there are bots trying all conceivable URLs on it and scraping for information. If you look in the logs for any server you'll find all sorts of .php, .aspx, etc. URLs as bots try to find vulnerabilities, no matter what you're running. I'm sure there'll be some Rails scrapers out there too, though perhaps they're not too common yet.
There are probably a lot of other holes if they left the user security so wide open.
I don't get this. Ensuring that users can't edit/access the profiles of other users is trivial in most frameworks.
It shouldn't be something that slips through testing. If you aren't doing that from the start, something is seriously wrong with how you're building out your application.
10 minutes? Never give your information to a business that made a mistake like this, ever.
That wasn't merely a "security vulnerability". It was also a demonstration that the people running the business have absolutely no idea what they are doing when it comes to security, privacy, or testing and release processes. (Actually, there is an alternative explanation, which is even worse: they knew and didn't care. I prefer to assume naivety rather than malice.)
Unfortunately, the only sensible action when faced with a business like this is to run away and not look back for a very long time, except perhaps to check who the people responsible were so you can avoid anything else they work on in the near future as well.
"Never give your information to a business that made a mistake like this, ever."
Fwiw, back in 1996 or 97 the UPS website did the same thing. By altering the tracking number you could see somewhat complete information on someone else's shipment. Since the tracking numbers ran in sequence from the shippers' log books, given one tracking number from a competitor you could see all their customers. (To get that, all you had to do was place a single order so they shipped to you. Although I guess it would have been even easier to social engineer someone into simply giving you any tracking number and save that step.)
Totally not the same thing. Not even close. Knowing a tracking number is nothing compared to having access to someone's AWS account. That's like saying knowing where someone's car is parked is the same as having their keys and a full tank of gas.
I agree with silhouette. These guys scaffolded a Rails project and then slapped Bootstrap on it. You can't trust an MVP this extreme.
This is exactly the same thing - an obvious security oversight that resulted in private information being disclosed, simply by modifying the URL.
The fact that the results are so different is irrelevant - the attack vector was essentially the same.
Also, even with the complete lack of security on this site, it should still not be possible to take any action on the victim's AWS account. IAM has read-only roles for this exact reason - hopefully no-one was negligent enough to post their master AWS key/secret in to this or any other third-party site.
The results absolutely do matter. If you're running a site, you should be auditing the areas where the security risk is the greatest and taking extra precautions, and those areas are hopefully few. If you're not first taking care of the places where the results are disastrous, then you're doing it wrong.
Obviously if there is a known attack vector, you would fix any similar issues everywhere, but not all code is the same.
The point I was making is that this particular flaw is indeed the same as the UPS flaw when considered at a high level - modifying a URL caused sensitive data to be disclosed.
In practical terms, of course the nature of data that is disclosed is relevant. AWS keys are incredibly valuable, and should be treated as such.
Fair point, though of course that was at a time when most of the world hadn't even heard of the World Wide Web yet. Most people running web sites handling sensitive information have learned a lot of lessons since then.
That’s a really interesting anecdote, but not fully thinking through the security implications of the Web in 1996 is not the same thing as royally screwing up in 2012.
Currently given a tracking number you can only get the following info:
Package weight
Shipping date
Who signed for it (last name)
Where package was left
Town delivered to
When delivered
And some other nominal info.
In the old days you saw exactly who the shipper was and detailed info on the recipient and the recipient's address. There was probably other info, but what I've listed is what I remember. I remember thinking at the time that it would be valuable and contain exactly what a competing company would need to gather a list of potential customers.
I go one step further. I refuse to give information that provides more access to a business than they need to have or that can even affect any other service I receive from anywhere else.
Here, I'd like to give them a key that works only with glacier vaults that they have created, and nothing else. If this isn't possible, then I'll go without.
> It was also a demonstration that the people running the business have absolutely no idea what they are doing when it comes to security, privacy, or testing and release processes.
I'm pretty sure a lot of successful startups were started by people who had "absolutely no idea what they were doing". Give them a break...
Yeah, if we couldn't do business with people who ever released software with security holes, none of us would have jobs.
The big fuck-up is when they told anyone with key problems to contact the guy who found the issue. That's why we should consider them unprofessional. The security holes were accidents. The blamestorm was deliberate.
Those who know me will laugh to see me continuing to beat this dead horse, but this is a really great example of why ORM+scaffolding is an anti-pattern, by which I mean it seems like a good idea at first, but the costs outweigh the benefits.
It's absolutely true that you can use ORM and scaffolding patterns in a totally secure way. But the problem is that the defaults are insecure -- every table can be accessed, every record is available, every field can be edited, and the URLs for doing so are (deliberately) easily guessable.
One of the simplest and most fundamental rules of effective security is to close everything down by default and only open things up as required, after careful consideration. Scaffolding breaks that rule.
> One of the simplest and most fundamental rules of effective security is to close everything down by default and only open things up as required, after careful consideration. Scaffolding breaks that rule.
This is really an argument for building authentication and authorization into every app, rather than against scaffolding/ORMs.
As rails doesn't have auth (of both kinds) built in, it doesn't really matter if they offer scaffolding or not - any editing url you make is going to be completely without protection unless you add it. The only thing you'd be adding by not having guessable urls without authentication/authorization is security through obscurity.
So IMHO the lack of auth is really the issue here (and the thing that breaks the rule in your final sentence), rather than the guessable urls.
I think it has less to do with ORM and more to do with laziness. It's really one line in the controller: if the logged-in user is not the user being edited, redirect.
It's not just laziness. 99% of all framework tutorials I've seen out there completely ignore even basic authentication/authorization issues, which are universal to all real websites. This lack of attention to detail is cultivated.
I guess they are using Rails. If they had only taken a few hours to go through a free tutorial like the one at railstutorial.org, they could have avoided this blunder.
>> One of the simplest and most fundamental rules of effective security is to close everything down by default and only open things up as required, after careful consideration.
Which is why my Rails authorization library takes a whitelisting approach.
Well, scaffolds are not supposed to let you avoid coding entirely. They try to provide what you'd otherwise write again and again, but you're supposed to take that as a basis, not a final product.
At first, I thought I was supposed to notice the complete lack of "HTTPS" in the URL bar.
The data leakage obviously overshadows it, but I can't think of a site that would be a better "fit" for SSL encryption than an app like this, aside from banking/government sites.
SSL might have been a "nice-to-have" back in the day when there were real arguments to be made against it (mostly performance-related), but even those don't really apply to a "pet project" made by "two nerds" (smeagol's words, not mine.) And for an app like this, I think it's critical.
I took a look at the team. Is it considered apropos to state how surprised you are that developers who come across as relatively senior are capable of making an incredibly fundamental security mistake such as this?
It looks like this was just the default Rails resource scaffolding.
> Is it considered apropos to state how surprised you are that developers who come across as relatively senior are capable of making an incredibly fundamental security mistake such as this?
Only if you're concerned that you can't tell who is "relatively senior." In this case, your judgement was unfortunately wrong.
'Relatively senior' here means 'ostensibly trusted with important tasks in the past'. Both of the creators of this application (I won't say 'founders of this startup' because that's silly) are ex-WePay.
Working in a cool company doesn't automatically mean the developer is competent. Neither is working for a large corp for that matter. I've seen far too many examples of this, unfortunately.
I had to look up WePay on Google. The wikipedia page says they have 30 employees as of a year ago, did YC and 1 round of funding.
I have to be honest - that doesn't demonstrate a high level of trust at all these days. It's sad, but true. Plus, if you say they're "ex-WePay," I assume they were just everyday developers for WePay, not critical resources.
"Head of Product" sounds like a product manager, it doesn't even sound like an engineer at all - this makes it even less surprising that these pathetic security holes existed.
Just to be clear, that means my assumptions you lambasted were generous. Not folly.
I'm curious, did you let them know that this vulnerability exists before you wrote an article and posted it to HN?
If you let them know and they ignored you, then I understand that you'd want to write an article and spread it around. It's important that customers know when a company doesn't value their security. At that point, the proper way for them to handle it is to quietly fix it, and then let all their affected customers know so they have a chance to change their security settings.
However, if you didn't give them a bit of time first, then you are doing more damage than good to them--and their customers.
Holy shit! I consider myself a mediocre programmer at best and even I wouldn't make such a dumb mistake. This is literally something only an amateur would do. I'm just awestruck that this would even happen. How?
What we've found is that there are 2 mindsets: building and breaking. When you're building a product it's super hard to switch to the breaking mindset of security, simply because mental context switching is expensive and mentally exhausting. The most important thing is to force yourself into that mode before posting anything publicly. If you don't have the security experience, have a friend or service (like ours) look it over. Data is one of the most important assets to your company (or project), and any sort of disclosure can shut you down permanently.
It's something someone would do who's never worked with authentication and authorization before and doesn't have the fallback of a professional tester (aka breaker).
As people have mentioned, Rails doesn't have it built in. I've used gems to provide it since I don't trust myself to write good enough security code (and really, why reinvent the wheel if I don't have to).
In .NET we can use the ASP.NET membership provider. But you've always got to have that authorization part, which I think can get forgotten unless you've got a system under your belt or something/someone to crib from.
Sometimes you just don't think, and sometimes it becomes very public.
However, I do think that authentication is where people may believe they can stop, forgetting, or maybe not understanding, that authentication really doesn't do much without an authorization system.
I'd love to see some concrete suggestions on the right way to do security for a site like this. This would take far more than protecting a few web pages from unauthorized access. What else should they do? How should they store sensitive data like AWS keys? Should they include a feature to force the creation of a new temporary key to prevent users from naively storing their master key? Is there a better mechanism than storing the key?
One appropriate way to do this is to not build this type of application as a web service, but instead as a standalone application. (Disclaimer: I haven't really tried this product, for obvious reasons.)
I've been working on a similar kind of problem (linking to AWS); the right thing is to walk users through IAM credential creation (if you must have the keys), explaining exactly what rights are needed, why, and how to verify, and to not take anything more than you need.
I'd also encapsulate use of any user-provided sensitive data in an API that is then called by your service, and put some logic within the API (because I don't want a random web UI screwup to dump everything for everyone) -- rate limits, etc.
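To make the IAM part concrete, here's a hedged sketch using the Ruby AWS SDK of what handing over a minimal credential could look like. The vault name, region, account id and policy below are made-up illustrations, not a vetted least-privilege policy:

    require 'aws-sdk-iam'   # AWS SDK for Ruby; older SDK versions expose a similar IAM client
    require 'json'

    iam = Aws::IAM::Client.new

    # A dedicated IAM user whose only permission is Glacier access to one vault.
    iam.create_user(user_name: 'icebox-backup-only')

    iam.put_user_policy(
      user_name:       'icebox-backup-only',
      policy_name:     'glacier-single-vault',
      policy_document: {
        Version: '2012-10-17',
        Statement: [{
          Effect:   'Allow',
          Action:   ['glacier:UploadArchive', 'glacier:InitiateJob', 'glacier:GetJobOutput'],
          Resource: 'arn:aws:glacier:us-east-1:123456789012:vaults/my-backups'
        }]
      }.to_json
    )

    # These access keys are the only credentials the third-party service ever sees.
    resp = iam.create_access_key(user_name: 'icebox-backup-only')
    puts resp.access_key.access_key_id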
I'm puzzled. For the kind of service they want to offer, to have such newbie security vulnerabilities makes you wonder if it even works.
It's the kind of flaw that any moderately experienced developer never leaves open.
Security stuff aside, I'm curious why a system would be designed this way. Surely (in most systems) all users have the same page for managing their account (eg: /account)? Or is this system designed so that the management portion (eg: what a support person would use) is the same as what the users use? I don't think I've encountered a site that had accounts edited this way before.
The last time I used Rails (2009), the way RESTful URLs are set up encouraged this pattern. It's simple enough to restrict access to the user in question, but it is (or was) easy to overlook.
It's relatively common practice to have a URL like this: /yourusername or /users/yourname or /users/32, which shows a public profile, and therefore to have a guessable URL for it, and to have a link to edit your account on that page if you are that user.
That in itself is not a security problem, but having no access control obviously is.
don't confuse the URL structure with the application design. just because the url is /user/87/edit doesn't mean that there is a file called edit inside a folder called 87. almost any modern web development framework lets you create whatever URLs you want. i'm sure, internally, that every user's edit panel is powered by the same code.
It's a Rails application and it does reflect the structure. It's the edit action on a controller, likely the UsersController. It looks like they tried to implement a user management system using has_secure_password and didn't know enough to do it properly.
They therefore created a default resource structure (probably using the rails scaffold generator command) which includes the id in the URL on all member routes, including the route for the edit action.
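For anyone who hasn't used Rails, this is roughly what that default scaffold gives you (standard Rails routing, nothing specific to this app):

    # config/routes.rb, as written by `rails generate scaffold User ...`
    resources :users
    #   GET    /users/:id        -> users#show
    #   GET    /users/:id/edit   -> users#edit
    #   PUT    /users/:id        -> users#update
    #   DELETE /users/:id        -> users#destroy
    # The generated controller just does User.find(params[:id]); tying :id to the
    # logged-in user is left entirely to the developer.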
> I don't think I've encountered a site that had accounts edited this way before.
Any site that employs the pattern of specifying user account routes using the user's primary key in the URL needs to implement authorization. This site clearly skipped that step.
To me, this looks like the stereotypical bare-bones rails deployment by a newbie.
If it's accessible on the public internet and asks for something as secure as API keys, that is when you should worry about security, not when it's "meant to be picked up by HN".
Exactly. When you go to provide your information to a website, you need to consider "what are all the possible outcomes of this?" We can't use the reputation of a service's providers as a proxy, and thus we have zero information about what to expect. Since you almost never have the source code for the website you're giving your information to, using your logic, you'd just never give your data out. Right?
On one hand we all want to move quickly, get users, add new features, etc etc.
On the other, security issues like this are just so vital that nothing else really matters if your data is not secure. That's especially true for a BACKUP SERVICE that promises ridiculous stuff like "99.999999999%" uptime on the front page.
honestly, this was all accidental. it was a pet project we started to toy with Glacier and a week later i accidentally hit the Like button sending a ping to my friends on FB. bless my friends for being so influential i guess. shame on us for using Rails carelessly.
if you have any experience with startups, you'll know that 99% of the things you launch go nowhere--this project was no different. we honestly thought our site was of absolutely no consequence. we're truly thankful so many people found it useful, but trust me we're sorry there was a hole.
however, just to be clear:
- about 20 accounts were exposed, including me and my buddy
- i emailed all of them, and wiped out the credentials
- they quickly responded (i saw the updates come in)
thankfully, AWS is designed for such situations. with a few clicks, people deactivated their credentials (both IAM and main account) and regenerated new credentials. the fact that all the early signups were techies who know their way around AWS really saved us.
one more thing: the correct quote is:
"Glacier is built for durability of 99.999999999%"
also: i agree with ryan--don't trust 10-minute old startups :-)
Classy response. Now here's your chance to take lemons and make lemonade. Clearly your pet project is something that people find really interesting and useful. So it went public before you intended and had some security flaws: oh well, that's in the past now. Write your mea culpa about how much you learned from this experience, hit the front page of HN again, sign up a bunch of users, and go get some venture capital. Good luck!
Contacted a PR person in between the last thread and this one, I'm guessing? That's a rapid 180.
You have a long way to go in my mind, in terms of fixing the initial response. You probably have help now, which is great, but your initial kneejerk demonstrates underlying trouble to me which you need to fix.
You're in a tough spot, too, because you can't delete those godawful comments without looking suspicious.
You repeatedly write some form of that assertion (we're two nerds) as if it is supposed to excuse something. I honestly couldn't give a rat's ass regarding who you are. I care about your actions and your actions alone. Stop making excuses!
I'd like you to apologize not only for the disclosure, but also to the reporter for how you treated him in the other thread. The entire other thread of your responses is disgusting, and you don't get to write it off because of your gender, quantity, or employment status. Own your comments and stop excusing them with that bullshit line.
I have to admit that I would also be pleased if your service disappeared until you're working with somebody who has a little more experience with secure Web applications; this mistake betrays your experience. Since we all started somewhere, though, I can only hope you fix this on your own.
Think of durability like the bank telling you that your money is 99.999999999% secure in their vault, but you can only access it from 9 to 5, Monday to Friday. The bank's "uptime" is really low (40 hours / 168 hours * 100% ≈ 24%) but your money's "durability" is quite high.
Thanks, I would find it helpful if you could also explain that on your homepage. The statement "Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive" is quite clear and meaningful, but "Glacier is built for durability of 99.999999999%" just seems like a non-sensical marketing blurb.
If I got my math right, this means that they expect to lose on average about 10 bytes per stored terabyte per year. (Of course these losses, should they occur, would probably not be uniformly distributed.)
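The back-of-the-envelope version, treating the figure as a per-byte annual loss rate (a simplification - Amazon states it per archive):

    loss_rate_per_year = 1 - 0.99999999999   # roughly 1e-11
    bytes_per_terabyte = 10**12
    expected_loss      = loss_rate_per_year * bytes_per_terabyte
    # => about 10 bytes per stored terabyte per year, on average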
still... developing an application and then bolting on some security over top of it later seems like a recipe for disaster. And pushing it to a public server before any security has been implemented is a very stupid thing to do.
It may have been a prototype they quickly wrote, and planned to do a heartier implementation later. And when they suddenly got onto HN's front page, a whole bunch of excitement happened and they completely forgot that this was just a prototype.
"Pushing it to a public server" is really minor. Mozilla had this issue, too, when they had a new filename technically available on a server and someone jumped the gun and told the whole world that the new version was ready. Well, it wasn't. A bunch of kids whined that it was all Mozilla's fault for having a file available on their public server, but while it's arguable that a service that is reachable by URL has no expectation of privacy, it's a hell of a lot harder to argue that having a service reachable by URL implies a warranty that it is safe to use.
Friends in the 90's would run telnet and web servers with "Username:" "Password:" "Credit Card Number:" prompts. It was funny to watch that some people would type in apparently real data, although we never verified.
Wow. That's taking the idea of an MVP a little too far. Just goes to show it's worth doing a little research before handing over any real information to a new service like this.
I would recommend using CanCan for security, if they haven't done so already, so you can't just type another user's user_id into the URL to view or edit.
https://github.com/ryanb/cancan
CanCan is a great way to make sure that you can only read or edit your own records in the database with Rails.
CanCan probably wouldn't have prevented this. It's not because someone didn't use some library. The developer probably just did a User.find(params[:id]) instead of using something like current_user from whatever authentication system they were using. He probably used the scaffolding generator to make everything and forgot to go back and ensure things were secure.
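To make that concrete, the difference is probably just this (a sketch, with current_user standing in for whatever the app's authentication layer provides):

    # Likely what shipped (the scaffold default): whatever id is in the URL gets loaded.
    def edit
      @user = User.find(params[:id])
    end

    # What it needs to be: ignore the id in the URL and operate on the session's user.
    def edit
      @user = current_user
    end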
It's also interesting that the aws key/secret are "masked" on the page, but you can just visit http://www.iceboxpro.com/users/12.json and get the formatted json representation with no masking.
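Even once the access control is fixed, secrets are better kept out of the default serialization entirely. One way to do that in a Rails model - a sketch, and the attribute names here are guesses:

    class User < ActiveRecord::Base
      # Never include the AWS credentials when this record is rendered as JSON,
      # regardless of which controller action or format triggers it.
      def as_json(options = {})
        super(options.merge(except: [:aws_access_key, :aws_secret_key]))
      end
    end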
The optimistic and humble side of me wants to believe that this is a rare occurrence.
The truth is that I don't remember working on a single codebase that didn't have some eventually discovered vulnerability in auth(entication|orization). When I eventually do comb through controllers and find easily exploited access-control violations, I've often been met with responses similar to the behaviour of the developers at Icebox.
Rails does and will continue to protect you from a lot of mistakes, but nothing is going to help long term unless you know what words like authentication, access control and session management mean.
If you're a professional web developer and you care about your users then please buy and read through The Web Application Hacker's Handbook[1]. Every page is dripping with easily exploitable attacks you didn't think of. That last app you built is almost definitely vulnerable to a handful of them.
Another hole that has been exploited in the past (speaking generally here, not about this specific startup) is a password reset function that confirms the email address it is sending the password/recovery link to. If the accounts are sequentially numbered, it's a trivial exercise to fetch a reset link for each member, and scrape the email address returned.
Major data leaks / security issues like this are not confined to sites that are 10 minutes old. Yesterday I discovered a recently funded startup is exposing all personal user data and activity to the world via their public, unprotected APIs. I'm hoping they fix it quickly before someone interested in harvesting that data finds it.
if you've used that service, the information you entered was publicly visible (key to access aws, etc) (the thread linked above says it has now been patched).
[i don't understand why, but when i access the link for this thread i get the gzipped page as a download; linux + chrome 22; firefox displays what appears to be gzipped data; wget saves the gzipped data as index.html; same behaviour for chrome on opensuse and ubuntu; windows 7 + ie9 (in a vm) shows the gzipped data in notepad; is no-one else seeing this?!]
[update: fixed now - it looks like it wasn't changing the content type]
Fortunately, you don't give App.net your credit card information directly. It's handled by Stripe, and they're past being "10 minutes old". (Even though this isn't easily verifiable in the checkout flow to a non-developer.)
What OS/Browser are you using? I turned on aggressive caching with CloudFlare to speed up page loads but it appears that it's only serving the gzipped version.
It's got nothing to do with the age of the startup, and everything to do with the quality of the engineering. You see problems like this and worse in companies that have been around for years.
It would have been more responsible to privately notify the owner of the site rather than karma whoring a blog post to top of HN.
This comment was completely tongue in cheek; in all seriousness it was great to see them fix this so quickly.
Tell me that blog post didn't read like a security exploit announcement...
Anyway, as discussed, it never should've happened in the first place, but it sounds from the comment threads like it was likely framework-related, so... yeah, still pretty bad.
Honestly, this isn't even a matter of engineering.
I don't know the rails solution, but a quick-and-dirty solution in other frameworks is to use a decorator on your controller/views that does something like:
if request.session.userId == action.userId:
    pass                               # same user: let the action proceed
else:
    return SecurityExceptionResult     # different user: refuse the request (and log it)
The example above is like 10 minutes to code and put under test once you fill it in with the necessary stuff - you're probably going to want to log would-be security violations and gracefully handle the error.
With that said, user identity does not belong in a URL. If you just did /user/edit (assuming all operations are performed on the logged-in user) and then moved your security validation down a level to verify that session.userId == model.record.userId, you'd be much better off.
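In Rails terms that's a singular resource, roughly like this (a sketch; the names are illustrative):

    # config/routes.rb: no user id in the URL at all
    resource :account, only: [:edit, :update]

    # app/controllers/accounts_controller.rb
    class AccountsController < ApplicationController
      def edit
        @user = current_user   # the only record this controller ever touches
      end
    end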
What do you call the role of the individual whose job it is to implement account management? Do you not call that person an engineer? If not - whose responsibility would you say it is to ensure shit like this doesn't happen?
Good point, but I read "engineering doesn't matter" to mean you shouldn't spend a significant amount of time carefully designing a system that's likely to change. pg is absolutely correct in that sense.
My point was the amount of time required to prevent security holes like the ones outlined in the link are minimal - preventing them isn't going to stand in the way of an engineer implementing other features.
Ryan, your post is NOT an example of responsible disclosure. You could have written your post and posted it AFTER alerting the Ice Box Pro guys and waiting until they had the main issues fixed. Your post would still be a good post. In fact, you seem to weigh the importance of your post getting on Hacker News above the security of the people who tried Ice Box Pro. The creators of Ice Box Pro had good intentions and messed up security (as almost any startup does to some extent). You are either ignorant of what security actually means or unethical, which at best is as bad as what the creators of Ice Box Pro did and maybe worse. Will you take responsibility if any of the users who tried Ice Box Pro get hacked as a consequence of your post?
Ah, I see you did let them know and the vulnerability was fixed before you posted. Good. I recommend saying such a thing in your post because it helps people like me understand that you are in fact responsible about the disclosure.