(my comment is on the overall trend, as the specifics on this incident are complex)
The issue with bug bounties as a whole is that the market is skewed. For any work done by a bug bountier, there is exactly one legitimate buyer, who gets to make a significant judgement call on the value of the work done. Furthermore, this value is decided upon after the work has been completed, and has been provided to the company. In what other industries is this the case?
On the other side, triagers have a whole pile of crap to wade through, to get to the useful material.
Furthermore, it really is hard to place an accurate monetary value on a bug that's responsibly reported, and patched. This is partly because the monetary cost of being breached is itself unclear. What precisely is the monetary loss from the recent MS Teams bug that was reported but not exploited, versus the incidents this year at Twitter and SolarWinds?
Having had some involvement in the bug bounty arena as a reporter, I have to say I'm a big fan of those companies that open up all of their reports after a fixed period of time. This allows them to build trust with those who look into their products, and develop a reputation for being prompt and consistent.
> Furthermore, this value is decided upon after the work has been completed, and has been provided to the company. In what other industries is this the case?
> triagers have a whole pile of crap to wade through, to get to the useful material.
This is very true.
> The issue with bug bounties as a whole is that the market is skewed. For any work done by a bug bountier, there is exactly one legitimate buyer who gets to make a significant judgement call on the value of the work done.
The problem, in my experience, is that they never analyze it by its potential. Why would they? They have the details now, and usually your legal details too, so if it leaks they'll have you busted in a heartbeat and sued for breach of contract.
> Furthermore, it really is hard to place an accurate monetary value on a bug that's responsibly reported
I submit that, from my experience threat modelling, this is actually dead simple, but nobody feels the need to do it.
> What precisely is the monetary loss from ...
As you point out, the issue is that there's a single buyer. You really need to open up the bidding. If you trusted a Russian mob to pay residuals (and they probably would) you might be able to sell this for what ended up being $50M+, and the criminals could clear billions if done right. Then the next time something like this came up you'd have more bargaining power. If the company was still there...
Thomas is right that there isn't specifically a market like flippa for exploits but there are dark markets and many of the vendors would be open to a chat. I'm not rooting for this, I'm just not blind and it will happen. (Well, if it's Twitter I'm rooting a little...)
God, this is frustrating. He essentially cracked Instagram's entire production environment open, took explicit steps at every turn to stay within the published guidelines, and then they just took his report and paid zero compensation whatsoever. Insane.
Disclaimer: I was a Security Engineer on the FB Security Team until last month and was also involved in the Bug Bounty Program :-)
That's not how Facebook treats Bug Bounty Participants. By far, it's one of the better programs in terms of payouts, fairness, and triage time on critical issues.
Just a recent example: a bug bounty hunter reported unexpired CDN links. After internal research, FB figured out how to chain this into a Remote Code Execution and paid out 80k USD to the researcher. (https://www.facebook.com/BugBounty/posts/approaching-the-10t...)
That said, I wasn't there in 2015, so I only know this secondhand, from accounts that portray the story a tad differently. Even if it were true, I haven't seen such treatment in the last three years at FB.
This was discussed at length when it was first submitted here 5 years ago. The researcher found a (known) exploit, claimed $2500, then a month later used internal details he had gathered (and saved) from the first exploit to breach the system further and demand a bigger payout.
The problem with bug bounties is that they are one-sided, against the researcher. The conditions of bounties typically stipulate that any attempt at negotiation can be interpreted as extortion, so it is either take it or leave it.
Sounds like a third party might be able to improve the situation by providing escrow.
With their first bugs, researchers are entirely unknown quantities to the company. Stating, "I have a critical zero-day, but I won't tell you what it is until you pay me $BUCKS," clearly won't work.
A reliable escrow service, to whom the researcher can provide the exploit and the company can provide $BUCKS, offers insurance to both parties. If the exploit is not as described, the researcher loses the exploit entirely and gets no $BUCKS, but if the exploit is as described, the company cannot renege on the deal.
(Edit, addressing the direct question more clearly: perhaps what is necessary to avoid the perception (and reality) of extortion is the emergence of an accepted professional understanding for assessing the value of exploits. Without such a system, there will always be a strong incentive pushing people in the direction of blackhat work.)
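The escrow flow described above could be sketched as a small state machine. Everything here is invented for illustration; no such service is implied to exist, and real escrow would need a trusted verifier and legal framing this toy omits:

```typescript
// Hypothetical sketch of the escrow idea: the researcher deposits the
// exploit write-up, the company deposits the payment, and the escrow
// agent settles based on whether the exploit matches the claim.
type EscrowState = "open" | "committed" | "settled";

class ExploitEscrow {
  private state: EscrowState = "open";
  private exploit: string | null = null;
  private payment = 0;

  depositExploit(writeup: string): void {
    this.exploit = writeup;
    this.maybeCommit();
  }

  depositPayment(amount: number): void {
    this.payment = amount;
    this.maybeCommit();
  }

  private maybeCommit(): void {
    // Only once both sides have committed can settlement happen.
    if (this.exploit !== null && this.payment > 0) this.state = "committed";
  }

  // If the exploit is as described, the company gets the write-up and the
  // researcher gets paid. If not, the researcher forfeits the write-up and
  // receives nothing, so overclaiming has a real cost; either way the
  // company cannot renege once committed.
  settle(matchesClaim: boolean): { exploitToCompany: string; payoutToResearcher: number } {
    if (this.state !== "committed" || this.exploit === null) {
      throw new Error("both parties must deposit before settlement");
    }
    this.state = "settled";
    return {
      exploitToCompany: this.exploit,
      payoutToResearcher: matchesClaim ? this.payment : 0,
    };
  }
}
```

The design point is that the verification step happens before either side can renege, which is exactly what a direct researcher-to-company negotiation lacks.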
How else would you phrase someone telling you "I have this bug and will exploit it if you don't pay me X amount" vs. "I think the impact is bigger because of Y"? For me, the first sounds quite clearly like extortion.
The first case would likely get you in trouble. The second case would routinely cause a further review in any decent program, and if there's any merit to it, you get a higher bounty.
Nobody is forced to participate in any bug bounty program. If people feel the reward is too low, they should not partake.
The researcher doesn't have zero knowledge before choosing to work with/for a company. The history of payouts and the perception of the company in the community are meaningful indicators of willingness to pay.
So, to summarize: you go to a bank and say "your back door is vulnerable, can you check?", and instead of checking and giving you some kind of praise, they call the police to beat the hell out of you...
This is exactly the sort of thing that will make the community of white-hat hackers stop caring, and leave the door open for malicious hackers and foreign agencies to do as they please.
I would like to know what was really going on inside their heads. Was someone internally trying to steal the thunder? Was it vanity or pride? A lack of funds? Fear?
I think the issue was that he went in through the back door, found a key, and then started unlocking more doors. In other words, he used the initial bug to escalate access into their systems, which is pretty obviously a no-no.
By the way, while reading this I was expecting a happy ending, something nice to start the day. Alas, this is almost like a heavy Russian drama: it starts in a light tone and ends so depressingly that I would rather go back to bed, crawl under the blanket, and curl into the fetal position.
Off topic, but there is a bug on Instagram that has been bothering me for quite a while.
On web (not sure about the app), if your language is Japanese, for any profile that has 0 following, it will show "Following: 0" as "フォロー中NaN人". A screenshot for the lazy: https://i.imgur.com/rTGXe3T.png
Of course this is a rather minor issue, but it still feels weird to me that one of the most popular websites/services in the world would have this kind of bug live so long (and yes, I have reported it multiple times).
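For what it's worth, the symptom is consistent with a classic JavaScript coercion bug. A minimal sketch of one plausible cause; this is a guess, not Instagram's actual code:

```typescript
// Guessed failure mode: the Japanese template renders a follower count
// that arrives as undefined (or a non-numeric value) instead of 0, and
// Number() coerces it to NaN, which then stringifies into the label.
function formatFollowing(count: unknown): string {
  const n = Number(count); // Number(undefined) is NaN
  return `フォロー中${n}人`;
}

formatFollowing(undefined); // "フォロー中NaN人" (the rendering in the screenshot)
formatFollowing(0);         // "フォロー中0人" (the intended rendering)
```

If something like this is the cause, a zero-following profile presumably takes a code path where the count field is simply absent, which would explain why only "Following: 0" profiles are affected.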
I tried a little searching but I can't find anything that says how this all ended. Alex Stamos denied saying anything bad. But then what? It looks like it was all just dropped pretty much as is?
There is no real bug here besides the Ruby RCE thing. Cracking weak passwords is not eligible, sorry. I can see why Facebook denied him a payout, but their approach of contacting his employer was wrong.
Not a "bug" in terms of incorrect code. But if I worked there, I'd sure like to know that
1. There were older versions of apps with config files stored in S3 that contained AWS keypairs for roles with wide open access
2. That such keypairs existed in the first place and were used on servers. Probably no service role with such wide-open access should exist; even if one did, it ought to be caught by routine audits for over-permissioned roles, and old keypairs should be retired and rotated regularly anyway
3. That a whole bunch of private key material basically encompassing the keys to the Instagram castle were stored in S3 buckets
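The "routine audits" in point 2 can start out very simple, for example scanning config text for embedded AWS access key IDs. A hedged sketch (the key below is AWS's own documented example key, the bucket name is invented, and the regex covers only the classic `AKIA` key-ID format, not secret keys):

```typescript
// Minimal credential scan: flag AWS access key IDs (which begin with
// "AKIA" followed by 16 uppercase alphanumerics) embedded in config text.
const AWS_ACCESS_KEY_ID = /\bAKIA[0-9A-Z]{16}\b/g;

function findAwsKeyIds(configText: string): string[] {
  // String.match with a /g/ regex returns all matches, or null for none.
  return configText.match(AWS_ACCESS_KEY_ID) ?? [];
}

// Fabricated config snippet; the key is AWS's published example value.
const sample = [
  "s3_bucket: some-static-assets",
  "aws_access_key_id: AKIAIOSFODNN7EXAMPLE",
].join("\n");

findAwsKeyIds(sample); // ["AKIAIOSFODNN7EXAMPLE"]
```

Running something like this over every deployed config file (including old app versions left in S3, per point 1) would have flagged exactly the material described above.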
This speaks to a couple of issues that bothered me while working in bug bounty triage.
> Alex informed my employer (as far as I am aware) that I had found a vulnerability, and had used it to access sensitive data. He then explained that the vulnerability I found was trivial and of little value, and at the same time said that my reporting and handling of the vulnerability submission had caused huge concern at Facebook.
[my emphasis]
There is this conceptual separation between the severity of the issue and the impact. Simplifying things much further than the situation described in the piece, you could have an admin account with the password "password". This is a stupid issue. The fix is to change the admin password. How much of a bounty should be paid for this report?
One school of thought is that the value of the report is related to what you can accomplish by exploiting it. This is clearly the right approach if you're assessing the issue's value to an attacker. It has some problems in the bug bounty context -- a major one is that it feels subjectively unfair to the company! They don't want to pay 100x more for the same vulnerability just because, this time, it happened to have more sensitive stuff behind it.
Another is that, as here, you often see a chain of vulnerabilities, all of which are of very little consequence in isolation, but they happen to combine into something much greater than the sum of the parts. (I recall a published writeup, which I can no longer find, in which one important step was a logout CSRF. Nobody cares about those.) The policy of "stop investigating as soon as you find anything" rules out this kind of "whole is greater than the sum of the parts" finding by definition.
> Playing By The Rules
> Microsoft, in my opinion, has done the best job of explaining exactly how far they would like a researcher to take a vulnerability. Google and Yahoo imply that you should report a vulnerability immediately, but do not clarify how far you should go in determining impact. Tumblr, on the other hand, puts in writing the policy of just about every bounty program. The better your PoC shows impact, the more you are likely to get paid. Further, the better a researcher can understand and describe impact, the more likely they are to receive a greater reward.
This bothers me from a fairness perspective. I have personally seen essentially the same report on different pages of a webapp get paid out differently because the researchers provided different speculation about what might be possible using their exploit. The guy who got paid less was careful about following the rules, asking for guidance about exactly what and how he could investigate, and then he only claimed what he was able to demonstrate. The guy who got paid more had a more generic claim that "this demonstrates SQLi, and writing to the database might be possible". I could not establish whether writing to the database was in fact possible for the same reason the first guy (and the second guy) didn't try -- it might have been unacceptably disruptive to the company. So I passed the speculation through, and the payout ended up being higher.
The lesson here is, "claim the moon and the stars." But I feel that means the ecosystem is unhealthy; that's not what I think the lesson should be.
Companies always say they will investigate the full impact of a vulnerability when you follow the protocol they urge of "as soon as you find something, report it and don't try to escalate". But this is nearly impossible to do even if you're trying in good faith.
---
Sometimes you're not trying in good faith. I have also seen what is exactly the same issue paid out differently depending on the category the researcher files it under. Many programs publish payout schedules by category. In this case, the schedule contained a mix of technical category types ("XSS") and functional category types ("account takeover"). One researcher found a way to present an issue in a low-paying technical category as a high-paying functional category. I repeatedly noted in my reports to the company that this researcher was getting paid quite a lot more for the same vulnerability than other researchers who didn't know about the loophole. This state of affairs never changed; I assume the main concern was maintaining the relationship with the loophole guy. But obviously, this sort of thing directly falsifies the claim that "we will investigate the full impact of the issue you report and pay out appropriately."
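The category loophole above is easy to state concretely. All categories and figures here are invented for illustration:

```typescript
// Toy illustration of the loophole: the schedule mixes technical
// categories ("xss") with functional ones ("account-takeover"), so one
// and the same bug can honestly be filed either way, and the payout
// follows the filing rather than the bug.
const payoutSchedule: Record<string, number> = {
  "xss": 500,                 // technical category
  "account-takeover": 10_000, // functional category
};

function payoutFor(category: string): number {
  return payoutSchedule[category] ?? 0;
}

// An XSS that steals a session cookie is defensibly both categories.
payoutFor("xss");              // 500
payoutFor("account-takeover"); // 10000
```

A schedule keyed on a single axis (either pure technique or pure achieved impact, not a mix) would close this particular gap by construction.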
> Companies always say they will investigate the full impact of a vulnerability when you follow the protocol they urge of "as soon as you find something, report it and don't try to escalate". But this is nearly impossible to do even if you're trying in good faith.
Disclaimer: I was a Security Engineer on the FB Security Team until last month and regularly attended the payout meetings :-)
I've seen plenty of bug bounty programs make such claims, but the Facebook program lives up to this promise the most. Every bug is root-caused to the line that caused the issue and assessed on its maximal potential impact.
Sometimes that leads to cases where low-impact vulnerabilities get paid out tens of thousands of dollars. The big bounty often comes as a big surprise to the reporter :-)
arwhatever | 5 years ago:

> In what other industries is this the case?

Those "mail us your gold" ads on TV.
fouc | 5 years ago:

That's not likely to be accepted by default by most companies. I would assume a default "do not escalate access" policy unless escalation is explicitly invited.
AviationAtom | 5 years ago:

https://web.archive.org/web/20151217232048/https://www.faceb...
typenil | 5 years ago:

Can't say I'm surprised, given the ethics Facebook exhibits at every conceivable level.
elboru | 5 years ago:

"Free" can be translated to either “libre” or “gratis”: “libre” is free as in freedom, “gratis” is free as in beer.

I can't understand how the most popular reading device could have that kind of mistake in one of the most common languages in the world.
elliekelly | 5 years ago:

> It looks like it was all just dropped pretty much as is?

That usually means some money was exchanged and some NDAs were signed.
LukasReschke | 5 years ago:
Facebook has deep pockets. As a bug bounty hunter, I would not worry about being screwed by them. It's by far one of the best-paying bounty programs.
There are many reasons to criticize Facebook or Instagram. But the handling of its application security should not be in the top 10 :-)