Doesn't this show just how crappy the backend permissions must be in Facebook's code? Every new page needs to get the permissions checks exactly right, otherwise... disaster. As an analogy, it's like the most stupidly-designed UNIX system imaginable, where every user program runs as root and must remember to do its own permissions check when opening a file, rather than the permissions system being centralised in the kernel.
No-one would accept such a shoddy design in an OS, yet in today's web apps it is apparently standard practice...
Facebook's permissions model is very complex. Just imagine a case where Fanny comments on Alice's photo, which is shared to a custom friends list containing Bert (Alice's friend), whom Fanny has put on her block list... and that's a simple example.
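To make that concrete, here's a minimal sketch of how just those two rules might compose. This is in no way Facebook's actual model, and the rule that a block always outranks the audience list is my assumption:

    import java.util.Set;

    class CommentVisibility {
        // A block is absolute: even a member of the photo's audience must
        // not see comments from someone who has blocked them. Otherwise,
        // fall back to the photo's custom audience list.
        static boolean canSeeComment(String viewer,
                                     Set<String> photoAudience,
                                     Set<String> commenterBlockList) {
            if (commenterBlockList.contains(viewer)) return false;
            return photoAudience.contains(viewer);
        }

        public static void main(String[] args) {
            // Bert is in Alice's custom audience, but Fanny has blocked
            // him: he can see the photo, yet not Fanny's comment on it.
            System.out.println(
                canSeeComment("bert", Set.of("bert"), Set.of("bert"))); // false
        }
    }

And real rules also have to account for the photo owner, the commenter's own privacy settings, page roles, and so on, which is where it stops being simple.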
It has been proposed a number of times to put it all behind an API. I do not know if this has been finished yet. I remember an epic diff comment thread which only ended after the author defended her solution with a mathematical proof of correctness.
Well, actually, you can look at it exactly like a kernel: the backend is the kernel, HTTP clients are the processes, and access control is done by the kernel at the resource level. The thing is, you couldn't even model Facebook access with UNIX perms, and if you've played with ACLs, I think you'll realize that the problem is not solely due to basic software architecture.
That said, Facebook should have addressed this problem seriously by now.
But Facebook permissions can be modelled. They may not be direct mappings to UNIX permissions or ACLs, but that's taking my OS analogy too literally. The point is, Facebook should have a shared component that does the permission checks, rather than giving each page global access and relying on the author to do the checks themselves.
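Something like this, say. This is a rough sketch, and Policy and PermissionGate are hypothetical names, not anything from Facebook's codebase; the point is that every read goes through one gate, so an endpoint that forgets about permissions fails closed instead of leaking:

    import java.util.Map;
    import java.util.Optional;

    // The policy decision lives in one place...
    interface Policy {
        boolean canView(String viewerId, String resourceId);
    }

    // ...and every read goes through this gate. Pages never touch the
    // data store directly; they ask the gate, which applies the policy.
    final class PermissionGate {
        private final Policy policy;
        PermissionGate(Policy policy) { this.policy = policy; }

        Optional<String> read(String viewerId, String resourceId,
                              Map<String, String> store) {
            if (!policy.canView(viewerId, resourceId)) {
                return Optional.empty(); // no affirmative decision, no data
            }
            return Optional.ofNullable(store.get(resourceId));
        }
    }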
I deeply agree in the Facebook case; I just wanted to point out that there is no known general solution for centralized resource access control in web backends that fits all use cases properly.
That surprised me, too. I'd have thought they'd have an API for all this kind of stuff, so the front end page rendering part simply couldn't make these mistakes.
Maybe it was too limiting (slow dev) to have to change two things any time they needed different data? Or perhaps at one point there were performance concerns?
But Facebook isn't an OS, and this is the kind of stuff that many developers aren't used to dealing with. It's the equivalent of saying that many desktop applications with server back-ends had leaky permissions.
The consequences are potentially far worse at Facebook's scale, of course, but it's not as if we software developers have generally gone from understanding how to easily prevent these problems to an amnesiac state where we're suddenly careless.
Given the relentless appearance of this style of security bug in multiple Facebook pages, I think your description of a careless, amnesiac state is spot on.
In my app, I've been quite pleased with REST API endpoints mapping URL segments to the objects the API will work on, and explicitly declaring the required permission before any page-specific code runs.
So if you have a URL like /{username}/year/{year}/profile_lists, you would say, in a declarative style, that "username" must match an existing Facebook user ID, and that the page viewer must be able to view certain privacy-related settings of that username. When your code runs, you get the current user, the username from the URL is mapped to another user object, year is mapped to an integer, etc.
It's an error to declare an API which needs access to a resource without saying what type of access is required. In Facebook's case maybe I'd go one step further and create a proxy object for the user that codifies those rules. So if you ask for "view profile friend stats" access, and it's granted, the user object your function gets cannot start modifying things.
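A minimal sketch of that declarative style, with hypothetical names (Route, viewerMayAccess) standing in for whatever framework machinery you'd actually use:

    import java.util.Map;
    import java.util.function.BiPredicate;

    // The route declares, up front, how a URL segment maps to an object
    // and what access the viewer must hold over it.
    record Route(String pattern,
                 BiPredicate<String, String> viewerMayAccess) {} // (viewerId, targetUserId)

    class DeclarativeRouter {
        static boolean dispatch(Route route, String viewerId,
                                Map<String, String> params,
                                Map<String, String> knownUsers) {
            // "username" must resolve to an existing user before anything runs.
            String targetUser = knownUsers.get(params.get("username"));
            if (targetUser == null) return false;
            // The declared permission check happens here, not in page code.
            if (!route.viewerMayAccess.test(viewerId, targetUser)) return false;
            // ...only now would the page-specific handler run, receiving
            // already-mapped, already-vetted objects (user, year as an int, etc.).
            return true;
        }
    }

Declaring the access type up front also makes the read-only proxy idea natural: the object handed to the handler can simply lack any mutating methods.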
This is probably a symptom of the number of people employed at Facebook, the lack of documentation, and the fact that the entire app and related infrastructure is a (quickly) moving target.
It is quite saddening that there is a recent trend of hiding the complete URL from the user, when the URL itself conveys much information. When the URL is hidden, the user has no incentive to look at it, let alone modify it. This kind of bug would have been discovered much sooner if users were given the opportunity to look at the URL directly and experiment with it.
This is very similar to what Weev was indicted and convicted for.[0] Simply passing valid requests to a system can be construed as "unauthorized" if it is unexpected by the operator of that system.
It's not too hard to obfuscate the actual domain for non-technical users, leading to easier phishing. By only displaying the actual domain name, it's much easier for people to see that they aren't on the site they expect to be.
IMO, the tradeoff of reducing phishing effectiveness is worth the small amount of additional effort needed to find this bug.
Mobile would be great for taking this kind of approach to bug hunting.
Especially since Android just launched a (proper) bug bounty program [0]. A ton of old problems are new again on Android, especially due to the fact that a significant percentage of the OS stuff is being re-implemented in Java (IPC, sandboxing, etc.). The more I dig into it, the more I'm convinced very few people are conducting serious security reviews outside of Google.
Take this bug as an example: http://seclists.org/fulldisclosure/2014/Nov/81. An APK with system privileges (the Settings app) would accept IPC messages from any unprivileged app and relay them with system privileges.
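That's the classic confused-deputy shape. Here's a sketch of the pattern in plain Java rather than real Android IPC; all the names are illustrative, not from the actual Settings code:

    import java.util.function.BiPredicate;

    // Something only a privileged (system) component may invoke.
    interface PrivilegedApi {
        void perform(String action);
    }

    class SettingsRelay {
        private final PrivilegedApi system;
        private final BiPredicate<String, String> callerMayPerform;

        SettingsRelay(PrivilegedApi system,
                      BiPredicate<String, String> callerMayPerform) {
            this.system = system;
            this.callerMayPerform = callerMayPerform;
        }

        // BUG: forwards any request with the relay's own (system)
        // privileges, never asking whether the sender was authorized.
        void onMessageVulnerable(String senderUid, String action) {
            system.perform(action);
        }

        // Fix: re-check the *sender's* authority before relaying.
        void onMessage(String senderUid, String action) {
            if (callerMayPerform.test(senderUid, action)) {
                system.perform(action);
            }
        }
    }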
I've been wanting to start doing bug bounties for a while now, but I have only been able to find serious bugs in sites without bug bounty schemes. I was starting to think that it would be impossible to get any bug bounties because of the number of people searching, but this post gives me some confidence.
When looking for bugs in sites with existing programs like Facebook, your best chance is when they announce a new feature or product. This includes acquisitions (Facebook paid out over $100,000 for bugs when they added the Oculus websites to their program).
In general, do you need to register or anything like that? I think it'd be a fun thing to try, but I also don't want any of the bad legal repercussions that can come with it.
Some programs require you to register an account to report a bug while others use email, but you don't need to get permission to look.
All bug bounty programs have rules that outline what parts of their site/product you can test and what kinds of bugs they are looking for (here's Facebook's: https://www.facebook.com/whitehat/). As long as you follow the rules you won't have any legal problems.
New programs are launching all the time, and the scope of current programs is expanding to include new products or features. It's never too late to get started; there's actually more work than researchers at the moment, and it will be like that for many, many years to come.
In terms of how to get started, I definitely suggest monitoring the various bug bounty sites to see what's new and if a bounty's scope has expanded.
Can anyone comment on when it's a good time to start a bug bounty program?
I have some clients with relatively small scale (small budget) projects. Is it better to post a bounty program on HackerOne? Or force them to budget to hire a security researcher consultant for a day to find high-level issues? Or both?
In my experience with running bug bounties it will be cheaper in terms of time (and probably in terms of money) and more effective to hire an application security consultant to look at the projects first.
Bug bounties require a lot of time to keep on top of the submissions (essential in providing a good experience for researchers) and to filter out the noise of invalid and working-as-intended bugs.
Having a consultant come through will mean that your bugs will be the exception rather than the rule. Instead of every form field and parameter having a cross-site scripting bug, only that deprecated status page you'd forgotten about will be vulnerable. A good consultant will also be able to help you fix the bugs and avoid them in the future.
This difference can easily pay for the consultant: each XSS can be worth >$500 (or thousands in the case of the bounty programs I've worked on), so getting the low-hanging fruit out of the way before launching is definitely worth it.