Sounds to me like the big difference here is that this guy made a video when he couldn't communicate verbally (honestly seems like a smart way to do it even if American English is your first language; describing UI interactions verbally is generally pretty non-trivial). And critically, he did not actually use the exploit against another user. He just showed that he could have.
Is it still worth it to follow every link on Facebook and check whether the URL/AJAX request parameters can be tampered with? At Facebook's scale I always assumed someone would be employed full-time to do this. In fact, I wouldn't mind doing it myself if it paid well. Just give me all the Facebook frontend endpoints and I'll go through them one by one. Manually. I'll even document the test cases: what can be intercepted or changed, and where validation could be improved.
They most likely do test for security vulnerabilities. However, the attack surface and overall complexity is so large that things will slip by even with the most rigorous testing.
For now, the best you can hope for is a layered defense and rigorous dev and ops practices to help minimize the attack surface and reduce the overall damage a single successful attack can achieve.
I think this is one of the things that Microsoft did a pretty good job with. There is a security process in place that every product goes through for every release. While it still can't catch everything, even the simplest of threat models would have caught a bug like this.
While Facebook most likely does do some form of threat modeling for their main site, without a rigid process for all code that goes public you'll run into issues like this that are just as severe. Just because it's a mobile support site for requesting photo removals doesn't mean it is less important surface area in terms of security.
Exactly. As little as possible should be passed through the querystring. Put the minimum in the QS and look the rest up in the DB. If possible, the QS should also be signed for an extra layer of protection.
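To make that last point concrete, here's a minimal sketch of what signing a querystring can look like, using an HMAC over the sorted parameters. Everything here (the key, the `sig` parameter name) is a hypothetical illustration, not Facebook's actual scheme:

```python
import hashlib
import hmac
from urllib.parse import urlencode

# Hypothetical server-side secret; never sent to the client.
SECRET_KEY = b"server-side-secret"

def sign_params(params: dict) -> str:
    """Build a querystring with an HMAC-SHA256 signature appended."""
    qs = urlencode(sorted(params.items()))
    sig = hmac.new(SECRET_KEY, qs.encode(), hashlib.sha256).hexdigest()
    return f"{qs}&sig={sig}"

def verify_params(qs: str) -> bool:
    """Recompute the signature and compare before trusting any parameter."""
    base, _, sig = qs.rpartition("&sig=")
    expected = hmac.new(SECRET_KEY, base.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Any tampering with a signed parameter (e.g. swapping in a different profile_id) invalidates the signature, so the server can reject the request outright.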
I think we can all agree that it's both a very difficult and a very large task to maintain an application with 500 million active users, let alone continue innovation and expansion.
Testing can only ever go so far - bugs and vulnerabilities exist everywhere, even in Facebook.
With the resources they have access to, I'd say there's no real excuse, unless it's simply not a priority, which could very well be the case. Privacy (which goes hand in hand with security) only became a priority when Facebook started regularly changing people's privacy settings out from under them.
The exploit is simple, but the implications are serious. It could have been automated to take down hundreds of photos before anyone even detected it.
Nope, the only thing that would've shown is that pictures aren't really deleted. Think about it: Facebook would do a rollback and all the pictures would be back. With a little bad luck on their part, though, they'd mess it up and end up restoring rightfully deleted pictures (many of them embarrassing). Would that have happened for sure? Probably not, but I strongly believe this could have ended hilariously, and frankly I'm a little disappointed the researcher was a white hat ;)
Rolling back is a last-ditch effort; it often causes more problems than it cures. Sure, you'd get the pictures back, but everything done in the interim would be lost. And if he were a black hat, we'd never have heard about this.
I can't imagine Facebook could roll anything back on their scale. Everything touches so many things, it would be a nightmare to get done.
I'd imagine they'd find the accounts responsible for deleting pictures that weren't theirs (which is what this hack allowed) and restore the pictures those accounts deleted.
As I understand it, the exploit involves crafting a URL to send a removal request to Facebook support. Wouldn't this count as social engineering, or were the removal requests handled automatically?
It seems you can send a crafted URL requesting the deletion of images owned by Person A to Person B, cutting out any interaction from the original owner.
It looks like the request goes to the person who posted the photo first, presumably so that person can delete the photo without getting support involved. The problem was that you could set the profile_id to something other than the profile that actually owned the photo.
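That's a classic insecure-direct-object-reference pattern: the endpoint trusts a client-supplied profile_id instead of deriving ownership server-side. A rough sketch of the missing check, with a toy datastore and made-up function names (none of this is Facebook's actual code):

```python
# Toy in-memory datastore mapping photo IDs to their owners.
PHOTOS = {"p1": {"owner": "alice"}}

def request_photo_removal(photo_id: str, profile_id: str) -> str:
    """Handle a removal request; profile_id arrives from the querystring."""
    photo = PHOTOS.get(photo_id)
    if photo is None:
        return "not_found"
    # The vulnerable version trusted profile_id as-is, so the removal
    # flow could be routed to an attacker's own profile. The fix is to
    # require that the supplied profile actually owns the photo.
    if profile_id != photo["owner"]:
        return "rejected"
    return f"removal request sent to {photo['owner']}"
```

With the check in place, a request that points profile_id at anyone but the photo's owner is rejected instead of handing the deletion flow to the attacker.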