Hijacking a Facebook Account with SMS (fin1te.net)
148 points by phwd on June 26, 2013 | hide | past | favorite | 48 comments



Facebook can and should be held liable for clear failings on the part of their security team.

Absolutely no backend code should be pushed out that isn't first audited by a security company. God knows they can afford it, and mistakes like this could end up being much more costly to Facebook (stock price, lawsuits, etc).

Crap like this makes it clear that not only are critical changes to the security infrastructure at Facebook not audited at all (in-house or outsourced!) for even the most ludicrously obvious security vulnerabilities, but also that Facebook itself does not even begin to take security seriously.

And this is completely ignoring the fact that it took them five days to acknowledge such a critical issue, which is a further symptom of Facebook's sheer apathy toward the security, privacy, and data of their users, corporations and individuals alike. To think that a company/website like Facebook, holding information as private and personal as Facebook profiles do, and with such incredible monetary and technical resources at its beck and call, cannot even triage incoming vulnerability reports correctly makes absolutely zero sense.


This isn't quite fair. The difficulty of writing completely secure software is not comparable to the difficulty of finding vulnerabilities.

First, it's a numbers game. There are orders of magnitude more people trying to break the product than there are people trying to make it secure (the dev team vs. the rest of the world?). As a developer you are ALWAYS at a disadvantage.

Second, different objectives. As a vulnerability seeker you only have to find one weakness, while as a developer you attempt to write securely everywhere. This just isn't realistic. The best developers can do is try as best they can. There is no indestructible software.

Third, the response and fix time is actually good. If anything, 5 days is an incredibly good turnaround. We don't know what else was going on: what other crazy vulnerabilities may have been reported at the time or were already being worked on. While security is important, it is unrealistic to imagine that the team in charge of these kinds of fixes is all that large or has infinite resources.

It is hard (impossible) to secure something like Facebook fully. I agree with the other sentiments that, if anything, their crowdsourcing efforts have been quite successful. If you are unhappy with a 5-day turnaround, start looking for another solution. I think you'll be hard pressed to find anything 1) more secure, and 2) with quicker responses to security issues.


> Third, the response and fix time is actually good. If anything, 5 days is an incredibly good turnaround.

Yes. Especially since the code wasn't actively being exploited in the wild.


Yep, and 2 of those 5 days were the weekend, and the Monday was a national holiday.


> If anything, 5 days is an incredibly good turnaround.

Especially for a major holiday weekend when a lot of people go on vacation....


Uh... what? You should be "held liable" for security bugs? What a load of crap.

This individual saw the bug, reported it to Facebook through their bounty program, they fixed it pretty fast and gave him money. What more do you want?


I think what ComputerGuru is saying is that Facebook should be liable for security vulnerabilities like the one discovered, not the hacker who discovered it. (This is what I gathered from your comment.)

I actually agree to an extent with ComputerGuru. The company deploying the code (Facebook) is responsible for any exploits. We don't know if Facebook does consult a security team (given the bounty amounts, I'm sure they have one internally).

The real problem with code, though, is that bugs will ALWAYS exist. They can get even worse as more people have a hand in the package. I've encountered this a lot on projects, where code snippets will either be redundant, overcomplicated, or (my guess in this case) will conflict with other pattern checks.

To be honest, I'm actually surprised that we heard about this exploit. I'd almost imagine that most companies would be tempted to have the hacker sign an NDA in order to collect the bounty.


Yes, I meant it like that. Bugs will always exist and it's stupid to think that Facebook should be liable for bugs that caused nobody any harm whatsoever just because the bug existed.


Some bugs are incredibly complex, and you can't just pay some large sum of money to have them all fixed beforehand. They're basically crowdsourcing security and it seems to be working quite well.


Some bugs are incredibly complex; however, I do not think that exposing the user ID in an HTML form, and having a skeleton-key-style confirmation code that is not directly linked to a specific user, is a complex bug. I agree that you can't just pay some large sum of money to have everything fixed before release: bugs happen. But security should be one of the number one priorities when designing and developing a new feature. This seems like little more than negligence on the part of the dev team, and I think it is right that people are upset/bewildered that a security bug like this could be put into a production feature.


From the perspective of black-box testing, this scenario seems quite obvious. (Alter [redacted: form/post] value.)


Chrome 27.0.1453.116 (for me) says:

"Warning: Suspected phishing site!

The website at blog.fin1te.net contains elements from sites which have been reported as “phishing” sites. Phishing sites trick users into disclosing personal or financial information, often by pretending to represent trusted institutions, such as banks."

The home page doesn't produce this message, even though the linked article is summarized there. Clicking on the article from the home page also produces this message.

Nonetheless, very simple yet very clever exploit! I'm sure someone kicked themselves pretty hard over that one.


Chrome 23.0.1271.97 (AdBlock, Ghostery, 3rd-party cookies off) and I'm not getting any phishing warning.


Got the same thing.


[deleted]


Disabling javascript isn't a protection against phishing.


This is mindbogglingly bad. How did they manage to introduce a dependence on unauthenticated client-side state for such a critical operation in a relatively new feature?

If they weren't willing to hit the database to recall the profile_id for the reset operation, it makes me wonder whether the confirmation codes are in fact deterministic, rather than randomly generated.
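To make the distinction concrete, here is a minimal sketch (all names hypothetical, not Facebook's actual code) of randomly generated confirmation codes that are bound to the account that requested them, so a code issued to one user can never confirm a different user:

```python
import secrets

ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"  # unambiguous characters for SMS

def issue_code(codes: dict, profile_id: int) -> str:
    # Randomly generated, not derived from the profile id or a timestamp,
    # and stored server-side against the account that requested it.
    code = "".join(secrets.choice(ALPHABET) for _ in range(8))
    codes[profile_id] = code
    return code

def verify_code(codes: dict, profile_id: int, submitted: str) -> bool:
    # Valid only for the profile it was issued to -- no skeleton keys.
    return secrets.compare_digest(codes.get(profile_id, ""), submitted)
```

A deterministic scheme would skip the server-side lookup entirely, which is exactly what makes the "any code works for any user" failure mode possible.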


Probably some old code that was written in haste back in the day, and that never got touched because it got the job done.


Sounds like Facebook.


Are you kidding? Sounds like every company everywhere.


Yuuuuup. People think FB is free of these problems because they have written some highly performant code and have a shit ton of money. Nope, money doesn't cure laziness and definitely doesn't cure "it works, so why fix it".


Pretty much. It's expected when you're small for security to take a backseat to convenience. However, when you reach the point of billions of users, every line of code should be reviewed, and there's no excuse for something this simple to slip by.


Confirmation by SMS is not a relatively new feature.


The root of this bug (exposing profile_id or some other user identifier in a hidden field and passing it back to the server as a parameter) is incredibly common, and super easy to exploit via inspect element.

We have a rails test that we give dev candidates, and red flags go up when we see this happening (which is far more often than I'd like to admit). Kind of scary that there's likely a bunch of production code floating around that is so easily hackable.
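The anti-pattern above can be sketched in a few lines. This is a hypothetical handler (not Facebook's actual code, and names like `link_phone_vulnerable` are made up): the server renders the profile id into a hidden form field, then trusts whatever value is POSTed back.

```python
# Hypothetical sketch of the hidden-field anti-pattern: the server trusts
# a user identifier that round-tripped through the client.
def link_phone_vulnerable(accounts: dict, form: dict, phone: str) -> None:
    profile_id = int(form["profile_id"])   # client-controlled: editable via inspect element
    accounts[profile_id]["phone"] = phone  # linked to whichever account the client named
```

An attacker logged in as account 1 only has to change the hidden field's value to a victim's id before submitting, and the phone number lands on the victim's account.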


Great ingenuity in finding authentication flaws. It's exactly what I told a friend who is learning programming: it's all trial and error.

Every time I hear the reward amounts, it entices me to divert my attention to finding bugs and loopholes in systems. :/


I don't know how I feel about the reward amount, since it probably equates to 2 months of the salary of a Facebook dev (I'm estimating around $100k/year). Depending on how much time you spend searching for an exploit, the real reward would be getting some fame for your skills rather than the $20k figure.


Oh, I'd keep the check stub and photocopy it onto my resume (just kidding).

Considering exploits are supposed to be hard to find (which is why the bounties are large), it's just the incentive/hush money to pay the hacker, because you have to consider a few things:

1) Why and how did you find the exploit? (Were you trying to hack someone's account, did you stumble upon it [that's lucky], are you a security firm [meaning you've had success at this before], were you contracted as a black hat, etc.)

2) A hacker would prefer the recognition [possible employment], the reward [sandwiches aren't free], and a release of liability [a company may still file charges for probing their systems; 'weev' is an example].

I can think of very few vulnerability testers who have gained employment at the companies where they found the exploits. Comex is one I can think of off the top of my head (created the jailbreak for iPhones, landed an internship at Apple, then a career at Google).


The biggest reason I see for the payouts is simple:

That exploit has a value on the 'black market'. If it comes down to "no money" or "$20k", people are going to take the "something" instead of the "nothing", no matter what the laws say.

The bug bounties don't always have to be a lot - most people will want to do the right/safe thing anyway. They just have to offer some incentive (we've all seen some success with even $800 bug bounties) to keep the honest people honest.


If someone needs a monetary incentive to be honest then they're not honest, in fact they're quite the opposite.


I don't agree that the motivation would be to keep the honest people honest.

There's nothing wrong with having a talent and wanting to make a living from it.


On the contrary, we're in the position to do an incomparable disservice to the world. Companies buy exploits simply to buy the hacker's silence, and governments buy exploits to bolster their offensive military capabilities; when we sell to them, we're complicit in the damage they do.

Personally I'm of the opinion that the only responsible disclosure is full and anonymous disclosure.


This is an incredibly simple (and dangerous) hack, I'm happy to see it was neutralized so soon after being discovered.

Also good to see that the finder was amply rewarded for his effort.


Nice.

A side note: the SMS confirmation code text should explain what is going to happen when the code is used. Along the lines of: "Facebook mobile confirmation code ds3467hj. Note: entering this code will link this phone to your Facebook account."

Otherwise, if the SMS is just "confirmation code ds3467hj", it is overly easy to mount a phishing attack that tricks the user (striving to get access to some resource, like a magazine article for example) into entering the code on an attacker's web site.
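The suggestion above amounts to making the message self-describing. A minimal sketch (the wording and function name are hypothetical, not Facebook's actual SMS template):

```python
def confirmation_sms(code: str) -> str:
    # The SMS states what entering the code does, so a phishing page asking
    # the user to type "your confirmation code" looks out of place.
    return (f"Facebook mobile confirmation code: {code}. "
            "Note: entering this code will link this phone to your Facebook account. "
            "If you did not request this, ignore this message.")
```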


Looks like an easy $20,000. :)


Looks like there was a 2-day window between when the reveal post was made vs when Facebook fixed it.


Facebook isn't a startup; I'm surprised it took 2 days to be honest. This should have been an all-out panic mode, level-1 alarm and push.


Given that at any big company, every change (even a critical security fix!) has to go through review and QA, most startups are probably better equipped to fix this sort of thing than most big companies.


"23rd May 2013 - Reported
28th May 2013 - Acknowledgment of Report
28th May 2013 - Issue Fixed"

5 Days to Acknowledge: Yipes!


In all fairness, it was reported on a major holiday weekend...


Looks more like a 1 day window. Bug was acknowledged on the 28th of May and fixed the same day. This post is almost a month after-the-fact - 26th of June.


A one day window from acknowledgement to correction, but still a five day window from report to correction.


I'm surprised this ever made it into production. Never, ever trust user input.


So how much does a facebook 0-day go for these days anyway?


This bug shows how bad their software really is, and that all the PHP crap on their frontend can access all data for every user. If they had a "middleware" between the frontend and the database, these kinds of bugs wouldn't be possible.

Anyone remember the bug where everyone had access to Mark Zuckerberg's private photos?

http://www.telegraph.co.uk/technology/facebook/8938725/Faceb...

Same auth-bypass shit.


Oh my goodness, an anti-PHP comment in a Facebook thread. Who would have guessed?

Facebook has some of the best engineers in the world. They also have their own modified version of PHP.

And really, it doesn't matter what they use; they could use Lua and still have this issue. Just because eBay used C++ or CGI or whatever doesn't mean eBay never had issues. Same goes for every other site/language out there.

The PHP hate is getting a little old.


Please don't get confused, there are foolish people utilizing every programming language all the time. PHP may be disproportionately more popular than other programming languages but it hardly deserves the blame.


Blaming technology is the worst. From the nature of the bug I think it was just some if missing somewhere.


I'm sorry, but did you even read the article? The exploit is explicitly stated, it's trusting unauthenticated client state. That's a fundamental design flaw in the intended behaviour, not an accidental coding error. They gave the profile ID to reset to the client when serving the page, and then blindly used whatever profile ID the client sent when submitting the form. The fix was to fetch the profile ID for the authenticated user instead of sending a profile ID on a round-trip through the client.
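The fix described above (derive the profile id from the authenticated session instead of the form) can be sketched like this; names are hypothetical, not Facebook's actual code:

```python
# Hypothetical sketch of the fix: the account to modify comes from
# server-side session state, and the client-supplied field is ignored.
def link_phone_fixed(accounts: dict, sessions: dict, session_token: str,
                     form: dict, phone: str) -> None:
    profile_id = sessions[session_token]   # looked up server-side, never from the form
    accounts[profile_id]["phone"] = phone  # always the logged-in user's own account
```

With this version, no matter what profile_id an attacker puts in the form, the phone can only ever be linked to the account behind their own session.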

I certainly agree that the problem wasn't the technology here, but I disagree with your conclusion. "some if missing somewhere" is far easier to avoid technologically than a high-level design flaw like this. It's fairly easy for a type system to notice that not all cases in a conditional are accounted for, but it's much harder for a type system to understand that it's inappropriate to use client-submitted data as a profile ID for a password reset request (as opposed to operations like submitting a friend request, where it's perfectly valid).


What should they use instead?



