I hope those who ripped into Homakov initially are feeling more mellow towards him. They were certainly justified in thinking him rash, but his action raised far more awareness of this issue than the typical proper channels would have.
Rails devs (some, not all of them) had dismissed his complaint because "Every Rails developer hears about attr_accessible." Well, I'll be the first to say that I can't remember the last time this update_attributes vulnerability was pointed out to me. I certainly can remember all the times the Rails docs remind me to use their sanitizing helpers when making ActiveRecord queries.
To be fair, I haven't developed apps that required user-facing access to update_attributes, and maybe when I got around to using it, I would've wisely consulted the dev guides to make sure I was following best practice. But knowing me, I likely would've thought, "Well, that seems simple enough, here goes."
It's not that the logic behind this vulnerability is hard to understand...in retrospect, it's as clear and blatant as the processes that lead to SQL injection.
But surgical patients die because elite surgeons sometimes forget to wash their hands (Google "Atul Gawande checklist"). It's not an impossibility that a skilled dev team would overlook the update_attributes issue.
The Rails team was right in arguing that this wasn't a security risk given a half-competent dev. But they were looking at the problem from the wrong perspective and assumed that everyone is as familiar with Rails best practices as they were. So how else could Homakov convince them otherwise other than pricking a high-profile dev group?
What if Homakov managed to alert the Github team, and they managed to fix it quietly? Github would be safe but thousands of Rails sites would still be operating in ignorance. It truly stinks for the Github group that they had to respond to a five-alarm emergency on a Sunday...on the other hand, I think there are going to be a lot of Rails devs who are thankful that they (involuntarily) took one for the team. Thanks to Homakov, it was a small hit.
While I mostly agree that the Rails default should probably be changed, I think in fairness it's also worth pointing out that the Rails team does give fair warning about this issue. Specifically, anyone who reads the official Rails Security Guide back-to-back will know about this:
One might counter that nobody reads these guides back-to-back. I must admit I haven't read every word of every Rails guide. But I have read the security guide in its entirety, and I think every developer owes it to herself and her clients to do the same.
I'm not a big fan of the "this is documented" defense.
If Microsoft left a network-accessible Admin account with a default password in Windows Server but documented it and told people to change it, would that be okay simply because it was documented? Documentation is no panacea for bad defaults.
Part of the OP's point (and he's absolutely right) is that this incident proves beyond a shadow of a doubt that just warning about the issue is clearly not enough. If the GitHub team screwed this up, what hope do the majority of the unwashed masses of Rails developers have, warning or no warning?
IIRC, something like that actually happened, except it was a file server that threw your whole directory tree up on the net. Oops! Why, you ask? How could that even happen? It was just a few years before everyone had Internet access, so it was assumed that your network was a LAN. (Source: http://www.grc.com/su-bondage.htm)
Of course, everyone should have really had a firewall anyway, so this was obviously cool right? After all, it's up to the user to secure their machine.
I think that every competent dev knows that something like update_attributes is inherently dangerous. So they probably keep it for backend import/maintenance tasks. At some point, someone copies the code/wrapper-function into the front-end.
The status quo of Rails is that everything is sanitized if you use the helpers, and validators on the models only look at data integrity. So all this protection, at least for me, kind of lulls you into feeling secure, because SQL injection is generally the typical worst-case scenario.
update_attributes is not really a SQL injection attack vector (since the actual values are sanitized)...it's partially a social engineering scheme.
Actually, I do use update_attributes for public-facing interfaces. But only with attr_accessible. (I never use attr_protected, since blacklists are a disaster waiting to happen.)
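For anyone unfamiliar with the idiom, here is a minimal sketch of what that looks like (the model and attribute names are mine, not from any real app):

class User < ActiveRecord::Base
  # Whitelist: only these attributes can be set via mass assignment.
  # Anything else in the incoming hash (say, an injected "admin" or
  # "role_id" key) is ignored.
  attr_accessible :name, :email
end

# In the controller, update_attributes is now safe against injected keys:
@user.update_attributes(params[:user])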
On that checklist remark, I'd like to see and use more checklists. Hopefully some smarter person will write the "How to put a Rails app to prod" checklist, now that the need is obvious. Same for Django, or any other popular stack. It would be great if the communities created and maintained checklists summarizing best practices for different tasks.
Once you have a good checklist, you can turn it into a library so that the code can check the things on the checklist for you, or change the code so that the things you had on the checklist to avoid are impossible, and the things you had on the checklist happen automatically. Then you can stop checking the things on that checklist and start on a different checklist.
What you're describing is people doing work that can be automated. That can happen in software, but only in incompetent development teams, or under very special circumstances.
Surgery is different from programming because it's manual labor, and it still exists as a human activity because we don't yet have a feasible way to automate it.
Homakov kind of gave the impression he had discovered a previously unknown vulnerability in Rails. This is not the case. Rather, he discovered an instance in which one very prominent Rails app (Github) failed to implement a standard Rails security practice.
For those not familiar with Rails, it boils down to this: You as the programmer need to use a security feature built into Rails called mass assignment security. If you fail to use this feature, you have a vulnerability. In other words, the default is insecure by design. The alternative would be to make Rails secure by default, but that would mean pretty much nothing would work until you explicitly granted access where necessary. I guess the core team figured "not working by default" was worse than "insecure by default."
Homakov obviously disagreed with this design decision. I can understand why, and I mostly feel the same way.
So Homakov posted an issue to the Rails repo Github (https://github.com/rails/rails/issues/5228) suggesting the default be changed. He made a good case and was initially polite. A few days passed, and nobody else had posted to his thread.
So, presumably to draw attention to this issue, he exploited the fact that Github had failed to use mass assignment protection. Specifically, he posted a comment with a far-future timestamp, which obviously should be impossible. (I think that's what he did, although Github seems to have fixed the timestamp now.) He then said this should be proof enough that the Rails defaults need to be changed.
The problem with Homakov's argument, as pointed out in subsequent comments in the thread, is that Homakov's hack only demonstrated a mistake on Github's part, not a bug in Rails. It didn't prove anything about Rails that we didn't already know. The only thing surprising he demonstrated was that Github had left open a rather serious vulnerability.
TL;DR: Rails has some less-than-secure defaults which all Rails developers are expected to understand and deal with. Homakov found out that Github failed to do so in at least one instance, and he wanted to use that as proof the Rails defaults should be changed.
But I disagree that Homakov's hack "only demonstrated a mistake on Github's part." It rebutted the Rails team opinion (and again, not all of them disagreed with Homakov) that this was a trivial, edge-case problem.
The fact that web devs write templates vulnerable to XSS is not Rails' fault, but apparently the problem was prevalent enough that HTML sanitizing was turned on by default.
Apparently, there wasn't empirical evidence to show that update_attributes had the same rate of mistakes to justify a change in defaults...Homakov's hack was a powerful rebuttal.
I don't like the gist of your post. You are giving Rails way too much of a pass and assigning too much blame to GitHub. A framework should not be "insecure by design". Period. GitHub engineers are likely about as good as they get, and they still missed it. That's less an indictment of GitHub engineering and more one of poor decision-making on the Rails side.
One of the lessons I learned early on regarding security was to program as if I don't trust my code to be secure. I always ensure that something else is enforcing security to the extent possible. In some cases I take the "run with the least possible privileges" to an extreme, and grant my application no privileges to crucial resources absent user-supplied credentials.
The advantage of this approach is that even otherwise ordinary security holes become hard to exploit in useful ways. For example, the set of interesting attacks you can pull off via SQL injection when the SQL permissions are tied to your application login is quite a bit smaller than it is ordinarily; while you can still do nasty things, the attacks tend to require greater internal knowledge of the database, and the scope of the vulnerability is narrowed. Get rid of string interpolation in your queries to the extent possible, and another issue goes away.
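To make the string-interpolation point concrete, here's the classic before/after in ActiveRecord (a sketch; the same idea applies to any database library):

# Injectable: the user-supplied value is spliced into the SQL text.
User.where("name = '#{params[:name]}'")

# Parameterized: the value is escaped/bound by the driver.
User.where("name = ?", params[:name])
User.where(:name => params[:name])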
Be paranoid about security, and it will serve you well.
So when I read that a framework is insecure by default, I naturally suppose that I have good reason to stay away from it.
I agree with you that the default should be changed. And note that any application developer can in fact require attr_accessible globally, across all models, with one line of configuration.
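Assuming Rails 3.1 or later, that one line is something like the following (a sketch; check the release notes for your exact version):

# config/application.rb
# Treat every model as if it had an empty attr_accessible list until
# you explicitly whitelist attributes on it.
config.active_record.whitelist_attributes = true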
The point of my post was not to give Rails a pass, and I apologize for misleading if it came off that way. Rather, I was trying to clarify the situation for those who are less familiar with Rails. The discussions surrounding this issue (including Homakov's own words) seem to erroneously suggest that Homakov discovered a previously unknown vulnerability in Rails. I was merely clarifying that he instead found a vulnerability in a specific Rails app.
Now, it's a matter of opinion as to whether the Rails default should be called a "vulnerability." I say yes, but reasonable people can disagree. What's clear, though, is that no previously unknown vulnerabilities in the framework have been revealed.
No. A property of a system that leads to compromises is a vulnerability. Let's not give in to neo-essentialism. Just because the word "vulnerability" has been assigned some narrow meaning in the past is no reason to enforce that usage. This is a problem, and "vulnerability" is as plain as day the right word for it.
Put another way, security problems are as much architectural problems as programming mistakes. If the architecture is secure, the programming mistakes will be less severe. Start with a good architecture and the rewards, security-wise, will be substantial.
This is the problem. We think of a security problem as "the developer made a mistake." Often it's the software architect who made the mistake, and if we insist that frameworks weed out the bad architects we are all better off.
"The alternative would be to make Rails secure by default, but that would mean pretty much nothing would work until you explicitly granted access where necessary."
Given the amount of logging that occurs if you do set whitelist_attributes, it's not like this is a huge problem to fix. And, that logging (and the fact that your app mysteriously doesn't work) serve as a loud signal as to what action to take. On the other hand, the "insecure by default" solution is a silent and potentially catastrophic failure.
Compare to how brake pads squeal: even the least mechanically savvy driver brings their car to a mechanic when their pads are running thin.
Finally, the suggested fix (which, frankly, wouldn't have helped github) was simply to update the default generator to set whitelist_attributes, rather than merely including a comment to that effect. The "everything is broken" list would consist of introductory guides, full stop. So, novice developers would be held up until the guides could be updated with good security practice. Experienced devs, who supposedly all know about this, wouldn't have any problem on new apps.
And the core team has basically said "meh, too much trouble." Apparently, they haven't been chasing html_safe! calls through their views, which is frankly way more of a pain than attr_accessible'ing data fields.
Web frameworks like Yesod have shown that it's possible to make a web framework that is secure by design (at the type level, even) without compromising on usability.
?? Call me old fashioned, but this is called 'defensive coding' and should (in my opinion) be the norm when dealing with client-generated input. It might be more verbose and not 'The Rails Way', but update_attributes seems like too much magic for my paranoid taste.
Yes, that makes sense. If your view's form should only update the user's name, then the logic that handles that form submission should only update the user's name. Allowing the client to inject other attributes (even allowed/whitelisted ones) could potentially break other app logic.
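A sketch of that stricter style, assuming a profile form that is only supposed to change the user's name (the attribute names are illustrative):

def update
  @user = current_user
  # Pass through only the attributes this particular form may change,
  # rather than trusting the whole params hash.
  @user.update_attributes(params[:user].slice(:name))
end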
Because convenience is deemed more important than safety.
Same reason that "enum" and "int" are pretty much interchangeable in C, that arithmetic conversions and truncations are implicit -- it's not very safe, but it is more convenient.
Just a heads up for those who only read the TL;DR: Please be aware that if you just put that single line of code in your initializers for your existing app, your app will break anywhere you are using update_attributes(). They do call it out later in the article, but you have to set attr_accessible on all your models.
Is it just me, or doesn't this sound very similar to SQL injection, only as applied to an ORM instead?
That is, if my understanding is correct, they're taking user posted data and trivially turning it into a command to update data.
This doesn't sound like a problem with Rails, in the same way that if I turn data I receive from the user straight into an SQL statement, the fact that people can abuse it isn't a problem with SQL.
It's a problem with Rails because Rails provided the update facility that takes user data and told people "if you use this, you can build a blog in 15 minutes." If you're turning user data into SQL, at least you're the one who wrote the code. Your database didn't come with a parse POST and update table function.
SQL injection is a problem with the way database libraries are designed. Just because some idiotic piece of design has persisted for years doesn't change what it is: a vulnerability.
I must say that this episode is the best example of how to handle such cases.
Homakov found the issue and made his point without any sign of maliciousness.
Github also handled it extremely professionally by accepting it and fixing the problem and then publishing a full report on it, rather than getting into a pissing match with Homakov and getting law enforcement and lawyers involved.
It was a temporary suspension while we investigated the scope of the damage. There was some other weird activity on his account that he wasn't up front about.
I agree that the default for Rails should be more secure, but this was a really basic mistake by the GitHub team. Yes, mistakes do happen, but there's a very simple way to avoid the exploit that is postulated in this article.
The 'schematic' of what the public key update looks like from the original post:
class PublicKeyController < ApplicationController
  before_filter :authorize_user
  ...
  def update
    @current_key = PublicKey.find_by_id params[:key]['id']
    @current_key.update_attributes(params[:key])
  end
end
The correct way to code this is as follows:
class PublicKeyController < ApplicationController
  before_filter :authorize_user
  ...
  def update
    @current_key = current_user.public_keys.find params[:key]['id']
    @current_key.update_attributes(params[:key])
  end
end
This has two updates to it that protect against this exploit:
1. Rather than calling "find_by_id" on the PublicKey model, which searches all public keys, you call it on the current user's list of public keys. This scopes the search down to the public keys that they own. Thus, if you pass in the id of a key they do not own, it will not be found, leading us to:
2. Using "find" instead of "find_by_id" will trigger an ActiveRecord::RecordNotFound error (404) if the resource is not found. Of course, find_by_id will just return nil in this instance, so the update_attributes part would still fail, but triggering a 404 is an easier, cleaner way of dealing with this, I think.
It's really very simple: you don't let people access stuff they don't own.
Now, this does not protect you against faked timestamps, or against privilege escalation by passing in a faked "role" parameter, and so on. You still need to use attr_accessible to protect yourself from that stuff, but scoping resources down to the user who owns them is a simple technique that should be standard practice for applications with authentication.
I own the key. I set the owner of the key to you. You now own the key that I have on my machine. I commit to your repo.
I'm sure the github team has authorization at a model level, preventing access to other users' resources. This wasn't an instance of accessing another user's resources via rails; it was assigning your own resources to another user, then abusing that fact via git.
Actually, that's true. I stand corrected. I think I was distracted by the rather obvious issue in the example code that the OP posted, which did not include this basic protection.
Doesn't this still mean that an attacker can pass in arbitrary fields to be updated in the public key table? If that's the case, then doesn't it open room for an attacker to change a field that is assumed not to be accessible from the outside? For example, would it be possible to change an _id field that references another table?
Yes, if there is anything else in the public key table that you don't want to be updated via form fields, then you should be protecting yourself against it. However, every developer - or at minimum, every experienced developer - should know the basic maxim "never trust the data from your users".
But this method does protect resources from being arbitrarily assigned to other users.
In any controller where the user should be authenticated, I would suggest the following two guidelines (let's assume the model is PublicKey, as before):
1. In the index action, where you get a list of resources, use:
@keys = current_user.public_keys
2. In all of the actions (such as show, edit, update, etc.) where the method starts with:
@key = PublicKey.find(params[:id])
Remove that method and instead create a private method in your controller:
def get_key
  @key = current_user.public_keys.find(params[:id]) if params[:id]
end
And at the top of your controller:
before_filter :get_key
This should just be a habit. With these two modifications, you've just scoped the resource down to the user everywhere that it is used. Additionally, you've DRYed out your single resource-specific actions (show, edit, etc.) because the code to find the resource only exists in one place.
Of course, you still need to think about attr_accessible. But even this is not a panacea. Consider the case where a user has a role_id field that specifies whether they are an admin, manager, or regular user. In this three role scenario, managers are allowed to create managers or regular users, but not admins. Admins can create users of all three roles.
This means that you may want to allow the role_id to be updated by mass assignment. You just have to ensure that users cannot update the role_id if the role they have picked is more privileged than their current role. You could just add a validator to the user model that does exactly that.
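Here is a sketch of such a validator; the Role model with an integer `level` column (higher = more privileged) and the `assigner` accessor are assumptions of mine, not anything from the article:

class User < ActiveRecord::Base
  # Set by the controller before saving: user.assigner = current_user
  attr_accessor :assigner

  validate :role_within_assigners_authority, :if => :role_id_changed?

  private

  def role_within_assigners_authority
    return if assigner.nil?
    if Role.find(role_id).level > assigner.role.level
      errors.add(:role_id, "is more privileged than your own role")
    end
  end
end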
Alternatively, you could keep role_id as a blacklisted attribute, but in your controller you could check for the new role in the params, and then only assign it if the user should be able to assign it. Both approaches have merit. The bottom line is that you still have to THINK.
Alternatively, you can use an authorization library such as declarative_authorization and then define the scoping rules in the declarative auth file:
authorization do
  role :user do
    has_permission_on :public_keys do
      to [:write, :read]
      # user refers to the current_user when evaluating
      if_attribute :user_id => is { user.id }
    end
  end
end
This is a bit more DRY, because you abstract out the conditions of access. This is especially useful in situations where you have read-only access or other types of ACLs.
I posted a link to the relevant part of it (http://news.ycombinator.com/item?id=3665429) on another thread regarding this exploit already, but the official Rails Security Guide covers this and other common security pitfalls really well. It is worth reading over thoughtfully if you are building a Rails app:
Right, the Rails security guide and the Mozilla secure coding guide (https://wiki.mozilla.org/WebAppSec/Secure_Coding_Guidelines) are highly recommended reading for every developer at my company. They are very well written and cover a lot of ground when developing web applications on RoR.
I'm not a Ruby or Rails guy, so I can't comment on how likely a typical (or atypically good) Rails dev might be to code this sort of bug into their app.
But as a security guy, given the power of Public Key assignment in the context of a system for managing access to Git repositories, I can't help but be a little surprised that model objects that touch Public Keys weren't more thoroughly reviewed.
If nothing else, folks everywhere will be thinking a little harder about authorization logic this week.
I'm a Python/Django developer and don't really know much Ruby or Ruby on Rails. Does anyone have an outsider's/non-Rubyist explanation of how this hack was carried out? Did he modify HTTP headers? POST parameters?
In Django (and Python in general) you have a form abstraction that validates the user input and filters out unwanted fields, preventing problems like this. In Rails (and Ruby in general) they don't use such an abstraction (I do myself, using an implementation of form abstractions[1] very similar to what Django provides).
Instead they send the params dictionary (which contains URL captures, POST and GET values) directly to the model instance, and expect the model to deal with it. The problem with this approach is that it gives too much responsibility to the model. Other than forms not necessarily mapping directly to models, which makes this more complicated, it is also prone to security issues, like the one Github suffered. ActiveRecord (Rails' ORM) allows you to whitelist and blacklist fields at the model level (which IMO is the wrong way to do this; Django got it right), but a lot of people don't do it.
From what I can tell, the 'rails way' of dealing with form data is something akin to user.__dict__.update(request.POST). So you can just add some additional parameters to your POST data and set arbitrary fields on the model.
I don't think this is the best way to describe it. It is not the "Rails Way" to NOT validate form input. The Github team used a method that allows quick and easy assignment of bulk values. This method is often used in non-public-facing tasks, but when it is used in connection with a public-facing POST request, validation logic should be implemented.
It was not, in Github's case. It seems like a glaring error in retrospect but it's easy to see how this code (or the pattern) would move over from the private to public-facing interface and not trigger any errors or notice.
You would modify the hidden form values (or add parameters that didn't exist) depending on the situation. These new/modified parameters would appear in Rails' params hash which would then be passed to the update function which, by default, will update any fields you hand it.
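To make that concrete, here is roughly what the server ends up seeing (the field names are illustrative, not GitHub's actual schema):

# The attacker adds one extra input to the form before submitting, so
# the params hash arrives as:
#   { "title"   => "my laptop",
#     "key"     => "ssh-rsa AAAA...",
#     "user_id" => "4223" }   # injected: reassigns the key's owner
#
# With no attr_accessible whitelist on the model, this writes user_id too:
@key.update_attributes(params[:public_key])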
I wonder how many Rails apps there are out there that are still vulnerable to this sort of flaw. Both GitHub and Posterous have fixed it, but there are probably thousands of smaller, less known Rails sites/apps that still haven't been patched.
I know I immediately panicked, because this was not one of the many issues that I had known to be aware of. Luckily, I remembered that, out of habit/circumstance, I just haven't used update_attributes...so I dodged the bullet out of dumb luck.
I imagine there are many, many sites that are vulnerable to this. I hope the high profile hack (at least on HN) quickly spreads the kind of panic that gets other devs to check their repos today.
Although the OP talks about update_attributes, the problem is not limited to that method. The problem is with "mass assignment". So, if you are passing a hash of attributes and values to the new or create method of a model, then you are doing mass assignment, and you are exposed to this issue.
For example, suppose you are doing something like this in your controller (a minimal sketch of the pattern):
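def create
  # Passing the raw params hash to new/create is mass assignment too,
  # with exactly the same exposure as update_attributes:
  @key = PublicKey.new(params[:public_key])
  @key.save
end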
I find it hilarious that just in the last few days they've removed register_globals from PHP for good. It seems every language/framework ends up retracing the same set of security/convenience tradeoffs over and over again.
Since this vulnerability has been known about for years and experienced crackers have presumably checked for it before, is there any reason to be concerned about more malicious corruption of other repositories on github?
The problems with update_attributes have been known, and messing with things like timestamps has probably been done before, but Rails itself didn't violate any ACLs. All users can still only access their own resources via Rails.
This attack was a combination of Rails and git/ssh keys. There's a bit of a clever aspect to it, one that isn't implied merely by understanding the vulnerabilities of update_attributes. While I think it is likely someone else has thought of it before, it is a little more exotic than something your average cracker is going to try.
Really? How is this exotic? Simple inspection of the rails form will reveal attribute and model names. And as Homakov proves, you don't need to even leave the friendly interface of the inspector to POST what you want.
The only reason why a cracker hasn't tried this is because it seems too simple to work.
OK, but that doesn't really answer my question, which is why people aren't more concerned about earlier more surreptitious corruptions of the repositories of github via the same vulnerability. It's got to have been an attractive target for the likes of the Operation Aurora [1] folks.
Presumably, Github could check for this activity by finding all public key submissions in which a public key registration involved a user id that is not the same user id as the signed-in user who submitted that. I'm not sure that's a simple DB query though...
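If such a log existed, the check might look something like this (purely hypothetical: the request_logs table and its columns are my invention, and Rails doesn't record the authenticated user alongside each write, which is exactly why this isn't a simple query against the live tables):

# Assumes a hypothetical request_logs table capturing, per request,
# the authenticated user and the user_id value submitted in params:
RequestLog.where(:resource_type => 'PublicKey')
          .where("submitted_user_id IS NOT NULL")
          .where("submitted_user_id <> authenticated_user_id")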
It would still be pretty difficult to insert malicious code into a git repo; you would need to add a new commit to the end without being noticed. Modifying earlier commits would be possible, but git would complain when someone pulled to update an existing repository, because the hashes wouldn't match up. New clones would work fine, though.
`attr_accessible` should only be used to protect the attributes that are NEVER modified by users. Access to the rest of the attributes may differ by user role, and should be handled by the controller. Trying to use `attr_accessible` to protect everything leads to enough frustration to make one eventually give up on security.
"`attr_accessible` should only be used to protect the attributes that are NEVER modified by users."
But a web app developer may not be able to know in advance what new columns might be added to the database, possibly by some other team. If I understand this right, in the absence of attr_accessible, any new columns are completely writable via the HTTP request.
So having a default-deny whitelist approach is the only sane strategy.
"Trying to use `attr_accessible` to protect everything leads to enough frustration to make one eventually give up on security."
Or give up on Rails. Usually the basic security of the database is not negotiable.
"Homakov PUT an update to his own existing public key which included a new user_id. The user_id he used was that of a member of the Rails repository members."
But wouldn't that mean that the commit would display the username of the user whose `user_id` he used?
It's more that they display both. Unlike git core, github actually tracks "push" events to branches (git doesn't care, it only sees commits) as distinct from the commits they contain.
So presumably it would have shown the rails developer pushing a change authored by Homakov.
I don't find this a good solution. A much better solution is to generate a list of the form inputs that are present on a page; then, when the form is submitted, check whether any of the form inputs were changed or added.
Agreed, whitelisting attributes regardless of context is much too coarse. Here is a form library that implements what you describe: https://news.ycombinator.com/item?id=3666907
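A minimal sketch of the idea (this is not the linked library's actual API; the HMAC scheme and helper names here are my own illustration): when rendering the form, embed a signed list of the fields it legitimately contains, and on submit drop anything outside that list.

require 'openssl'

FORM_SECRET = 'some-server-side-secret'  # assumption: kept out of source control

# Rendering: sign the permitted field names into a hidden input.
def signed_field_list(fields)
  data = fields.sort.join(',')
  sig  = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, FORM_SECRET, data)
  "#{data}--#{sig}"
end

# Submission: verify the signature, then keep only the listed fields.
def permitted_params(params, signed)
  data, sig = signed.split('--')
  expected  = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, FORM_SECRET, data)
  raise "tampered form" unless sig == expected
  allowed = data.split(',')
  params.select { |k, _| allowed.include?(k.to_s) }
end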
Between the time Homakov made his work public and the time it took them to fix it? I think it was close to an hour and on a Sunday. I doubt there are any other cases.
No, we are not. My bad, I misunderstood the statement; I thought he was referencing yesterday only, which I thought might not be possible, given the short amount of time. Obviously, if other people knew, that's a whole different story.
I didn't know about the solution you mention at the time I wrote the post. I'm reading up on it now and I think it's actually going to end up being the official solution so it's a good flag:
Am I the only one who does not understand what this is about? Oh no, looks like the Rails team doesn't either.
Stupid code can be written in any framework/language. How much experience does one need to understand a simple rule: _never_ use user input directly.
If you have an urge to trust your users, I'd suggest a better way:
`params[:command]`