Doesn't this show just how crappy the backend permission checks must be in Facebook's code? Every new page needs to get the permissions checks exactly right, otherwise... disaster. As an analogy, it's like the most stupidly-designed UNIX system imaginable, where every user program runs as root and must remember to do its own permissions check when opening a file, rather than centralising the permissions system in the kernel.
No-one would accept such a shoddy design in an OS, yet in today's web apps it is apparently standard practice...
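A minimal sketch of the "kernel" model the analogy argues for: every read goes through one gatekeeper function, so no individual page handler can forget the check. All names and data structures here are invented for illustration.

```python
class AccessDenied(Exception):
    pass

# Toy policy store: object id -> set of users allowed to view it.
POLICY = {"photo:42": {"alice", "bert"}}
STORE = {"photo:42": "beach.jpg"}

def fetch(viewer, object_id):
    """The single choke point: no handler can load data without this check."""
    if viewer not in POLICY.get(object_id, set()):
        raise AccessDenied(object_id)
    return STORE[object_id]

def photo_page(viewer, object_id):
    # A page handler never touches STORE directly, so it cannot
    # forget the check -- the gatekeeper enforces it on every read.
    return f"<img src='{fetch(viewer, object_id)}'>"
```

In the scattered-checks model, each of thousands of page handlers would query the store directly and be individually responsible for the check; one forgotten `if` is a disclosure bug.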
Facebook's permissions model is very complex. Just imagine a case where Fanny comments on Alice's photo which is shared to a custom friends list, which contains Bert (Alice's friend), who Fanny put on her block list... and that's a simple example.
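That example can be sketched to show why the policy resists centralisation into anything simple: whether a viewer may see a comment depends on the photo's audience *and* on the relationship between the viewer and the commenter. The structures below are invented for illustration.

```python
def can_see_photo(viewer, photo):
    return viewer == photo["owner"] or viewer in photo["audience"]

def can_see_comment(viewer, photo, comment):
    # You may only see a comment if you can see the photo it is on,
    # AND the commenter hasn't blocked you.
    if not can_see_photo(viewer, photo):
        return False
    if viewer in comment["author_blocklist"]:
        return False
    return True

photo = {"owner": "alice", "audience": {"bert"}}            # custom friends list
comment = {"author": "fanny", "author_blocklist": {"bert"}}  # Fanny blocked Bert
```

So Bert can see Alice's photo but not Fanny's comment on it -- two different answers for the same viewer on the same page.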
It has been proposed a number of times to put it all behind an API. I do not know if this has been finished yet. I remember an epic diff comment thread which only ended after the author defended her solution with a mathematical proof of correctness.
Well, actually, you can look at it exactly like a kernel: the backend is the kernel, HTTP clients are the processes, and access control is done at the resource level, by the kernel. The thing is, you couldn't even model Facebook's access rules with UNIX permissions, and if you've played with ACLs, I think you'll realise the problem is not solely due to basic software architecture.
That said, Facebook should have addressed this problem seriously by now.
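To make the UNIX-permissions point concrete: a mode bit names exactly one owner and one group, but "a custom friends list, minus anyone the author blocked" needs per-principal allow *and* deny entries -- i.e. an ACL with deny-overrides semantics. A purely illustrative evaluator:

```python
def acl_allows(acl, user):
    """Deny-overrides: any matching deny wins; otherwise any matching allow wins."""
    if any(kind == "deny" and user in who for kind, who in acl):
        return False
    return any(kind == "allow" and user in who for kind, who in acl)

acl = [
    ("deny", {"bert"}),            # blocked by the author
    ("allow", {"bert", "carol"}),  # the custom friends list
]
```

Owner/group/other cannot express the "bert is on the list but also blocked" case at all, which is why ACL-style policy engines exist in the first place.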
That surprised me, too. I'd have thought they'd have an API for all this kind of stuff, so the front end page rendering part simply couldn't make these mistakes.
Maybe it was too limiting (slowing development) to have to change two things any time they needed different data? Or perhaps at one point there were performance concerns?
But Facebook isn't an OS, and this is the kind of problem many developers aren't used to dealing with. It's the equivalent of saying that many desktop applications with server back-ends had leaky permissions.
The consequences are potentially far worse at facebook scale of course, but it's not like we as software developers generally have gone from understanding how to easily prevent these problems to an amnesiac state where we're suddenly careless.
This is probably a symptom of the number of people employed at Facebook, lack of documentation, and that the entire app and related infrastructure is a (quickly) moving target.
It is quite saddening that there is a recent trend of hiding the complete URL from the user, when the URL itself conveys so much information. When the URL is hidden, the user has no incentive to look at it, let alone modify it. This kind of bug would have been discovered much sooner if users were given the opportunity to look directly at the URL and experiment with it.

Be careful with simple "experimentation" like this. You can fall afoul of the CFAA for exactly this.
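The class of bug being discussed -- a page that trusts an object id taken straight from the URL -- is usually called an insecure direct object reference. A hypothetical handler, with invented names, showing the flaw and the fix:

```python
PHOTOS = {
    1: {"owner": "alice", "data": "beach.jpg"},
    2: {"owner": "bob", "data": "party.jpg"},
}

def get_photo_insecure(viewer, photo_id):
    # BUG: whoever edits ?photo_id= in the URL gets any photo back.
    return PHOTOS[photo_id]["data"]

def get_photo_secure(viewer, photo_id):
    # Fix: check that the viewer is actually authorised for this object.
    photo = PHOTOS[photo_id]
    if photo["owner"] != viewer:
        raise PermissionError(photo_id)
    return photo["data"]
```

A user who can see the URL spots the incrementing id immediately; one who only sees a domain name never gets the chance.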
It's not too hard to obfuscate the actual domain for non-technical users, leading to easier phishing. By only displaying the actual domain name, it's much easier for people to see that they aren't on the site they expect to be.
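To illustrate the obfuscation: the real host can be buried among familiar-looking labels, and `urlparse` shows what the browser actually connects to. `attacker.example` is a made-up domain.

```python
from urllib.parse import urlparse

url = "https://www.facebook.com.attacker.example/login?next=/photo.php"
host = urlparse(url).hostname
# The browser connects to attacker.example's server, even though the
# URL starts with "www.facebook.com".
```

Showing only the registrable domain makes the mismatch obvious at a glance; showing the full URL lets the prefix do the deceiving.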
IMO, the tradeoff of reducing phishing effectiveness is worth the small amount of additional effort needed to find this bug.
Mobile would be great for taking this kind of approach to bug hunting.
Especially since Android just launched a (proper) bug bounty program [0]. A ton of old problems are new again on Android, especially because a significant percentage of the OS is being re-implemented in Java (IPC, sandboxing, etc.). The more I dig into it, the more I'm convinced very few people are conducting serious security reviews outside of Google.
Take this bug as an example: http://seclists.org/fulldisclosure/2014/Nov/81 An APK with system privileges (the settings app) would accept IPC messages from any unprivileged app and relay them with system privileges.
[0] http://techcrunch.com/2015/06/16/google-launches-bug-bounty-...
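That seclists bug has the shape of a classic "confused deputy": a privileged component relays requests from unprivileged callers under its own identity. This sketch models the pattern only, not Android's actual Binder API; all names are invented.

```python
PRIVILEGED_ACTIONS = {"wipe_data"}

def system_service(action, caller_privileged):
    # The "kernel side": refuses privileged actions from unprivileged callers.
    if action in PRIVILEGED_ACTIONS and not caller_privileged:
        raise PermissionError(action)
    return f"executed {action}"

def settings_relay_insecure(message):
    # BUG: the settings app forwards any message using ITS OWN privilege,
    # so an unprivileged app gets privileged actions run for free.
    return system_service(message, caller_privileged=True)

def settings_relay_fixed(message, sender_privileged):
    # Fix: propagate the original sender's privilege, not the relay's.
    return system_service(message, caller_privileged=sender_privileged)
```

The system service's check is correct in isolation; the vulnerability lives entirely in the relay's failure to attribute the request to its true origin.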
I've been wanting to start doing bug bounties for a while now, but I have only been able to find serious bugs in sites without bug bounty schemes. I was starting to think that it would be impossible to get any bug bounties because of the number of people searching, but this post gives me some confidence.
1. Monitor https://hackerone.com, https://bugcrowd.com and Twitter for announcements of new programs.
2. When looking for bugs in sites with existing programs like Facebook your best chance is when they announce a new feature or product. This includes acquisitions (Facebook paid out over $100,000 for bugs when they added the Oculus websites to their program).
New programs are launching all the time, and the scope of current programs keeps expanding to include new products or features. It's never too late to get started; there's actually more work than researchers at the moment, and it will be like that for many, many years to come.
In terms of how to get started, I definitely suggest monitoring the various bug bounty sites to see what's new and whether a bounty's scope has expanded.
There's also a bunch of guides, tutorials, and tools listed on Bugcrowd's forum: https://forum.bugcrowd.com/c/security-research
Can anyone comment on when is a good time to start a bug bounty program?
I have some clients with relatively small scale (small budget) projects. Is it better to post a bounty program on HackerOne? Or force them to budget to hire a security researcher consultant for a day to find high-level issues? Or both?
In my experience with running bug bounties it will be cheaper in terms of time (and probably in terms of money) and more effective to hire an application security consultant to look at the projects first.
Bug bounties require a lot of time to keep on top of the submissions (essential in providing a good experience for researchers) and to filter out the noise of invalid and working-as-intended bugs.
Having a consultant come through will mean that your bugs will be the exception rather than the rule. Instead of every form field and parameter having a cross site scripting bug only that deprecated status page that you'd forgotten about will be vulnerable. A good consultant will also be able to help you fix the bugs and avoid them in the future.
This difference can easily pay for the consultant: each XSS can be worth >$500 (or thousands, in the case of the bounty programs I've worked on), so getting the low-hanging fruit out of the way before launching is definitely worth it.