I think Valve and HackerOne handled this poorly, but I think the author is partially at fault for repeatedly failing to communicate the issue clearly.
I worked as a penetration tester for a while, and I had trouble understanding what the author was saying. The headline should have been that the Steam mobile app makes requests to the plaintext HTTP URL (http://store.steampowered.com) instead of the TLS-authenticated URL (https://store.steampowered.com). Without TLS, an attacker can impersonate the Steam store server and steal credit cards or trick users into installing malicious apps.
The author reported this as:
>The vulnerability is that an attacker can perform a man in the middle attack by spoofing an HTTP request pretending to be from store.steampowered.com. While the client does check for an eventual HTTPS redirect, it can redirect to an HTTPS URL.
There's so much ambiguity and missing information in that writeup:
* Who does the attacker send an HTTP request to? I think the author meant to say an HTTP response.
* In "it can redirect," does the word "it" refer to "the client" or "redirect?"
* I think "it can redirect to an HTTPS URL" was supposed to be "any HTTPS URL."
* Why is the client vulnerable? What should they be doing instead?
Also, the author's exploit scenario is to just make the Steam app load his portfolio page, which might have further muddled things. It sounds inconsequential that an attacker can trick Steam users into visiting a developer's portfolio page. It might have been a clearer report if the proof-of-concept redirected to a website that looked like the Steam store but had a warning saying, "I'm an evil copy of the Steam store that will steal your credit card number."
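A proof-of-concept along those lines needs almost no code. Here is a minimal sketch (the handler name, the page text, and the localhost port are all made up for illustration) of what an on-path attacker can serve in place of the plaintext store page:

```python
# Sketch only: an on-path attacker answers the app's plaintext HTTP request
# with an obviously-evil page instead of the real store.
import http.server
import threading
import urllib.request

EVIL_PAGE = (b"<h1>I'm an evil copy of the Steam store "
             b"that will steal your credit card number</h1>")

class EvilStoreHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # No TLS means no certificate check: the attacker never has to
        # prove they are store.steampowered.com.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(EVIL_PAGE)

    def log_message(self, *args):  # silence request logging for the demo
        pass

def start_evil_server():
    # Port 0 lets the OS pick a free port; a real attacker would answer on port 80.
    server = http.server.HTTPServer(("127.0.0.1", 0), EvilStoreHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_evil_server()
    port = server.server_address[1]
    # Stand-in for the Steam app fetching http://store.steampowered.com after
    # the attacker has put themselves on-path (e.g. a hostile coffee-shop AP).
    body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
    print(body.decode())
    server.shutdown()
```

A reviewer who loads that page in the proof-of-concept immediately sees the impact, which is the whole point of the clearer demo.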
I'm not a security expert. I am a software engineer and, as such, I consider it my responsibility to understand something of network security, but I have no formal training in it, I don't work in a specifically security-focused role, and I don't work for an explicitly security-focused company.
I understood the issue clearly on first read.
You're absolutely right that it could have been explained more clearly, and that there is some ambiguity in the wording (not everyone's a perfect communicator). But if I (a very normal, non-security-focused software engineer) can grok this, it's the absolute least I would expect from someone working for HackerOne! Their entire job is to be able to understand this sort of thing in depth.
> I think the author is partially at fault for repeatedly failing to communicate the issue clearly.
I disagree. The first sentence of the report includes "HTTP", "man in the middle attack", and "store.steampowered.com".
If that isn't clear to someone, they are not sufficiently trained to triage vulnerability reports. You simply cannot do that job right if you need the attacker to hand-walk you through the difference between HTTP and HTTPS and why you would use the latter for an online store.
No, the problem is that HackerOne isn't actually a "Bug Bounty as a Service" company. It's a "Cover Corporate Asses as a Service" company. They don't care if they don't understand your vulnerability. Their primary value add is that their customers can stick their heads in the sand and still create the impression that their software has no security flaws.

What happens is that HackerOne forces security researchers into one specific workflow. They have an incentive to close as many bug reports as out of scope as possible to avoid paying a bounty, and no incentive to have qualified staff with the technical background to determine whether your report is actually a security problem or not. Because reporters have to create a HackerOne account, HackerOne can ban you from their platform, and you automatically lose the ability to report bugs to other companies.

What often happens is that they simply ignore your report and try to pretend it never existed in the first place. When the reporter decides that public disclosure is the path of last resort, HackerOne responds immediately, blames the reporter for not following the HackerOne workflow, and bans that reporter's account.
The fact that Valve decided to pay money to this company implies that they don't care about security. You can't blame them for not fixing the vulnerabilities when HackerOne is interfering heavily in the process but you can blame Valve for knowingly choosing a company with a horrifyingly bad track record.
So in a nutshell, could we summarise the issue here as "Valve didn't use TLS and thus Valve's users are vulnerable to the exceptionally well-known consequences of not using TLS?"
If so, then... okay, but I don't know what the blog author was expecting when he reported this. Pointing out that HTTP has MiTM possibilities is kind of up there with pointing out that the sky is blue. If, in 2020, a site has made the choice not to upgrade to TLS then it's more likely a conscious decision than an oversight.
I'm not even a software engineer. I'm a lowly MBA who writes R code, and even I understood the gist. But the part that I understood well is that _this was a HackerOne managed program_, meaning that it's HackerOne's job to take a raw submission and turn it into a well-designed report that can be triaged and fixed.
Thank you. This was a bit confusing. It helped that this was the first paragraph.
>This is my first blog, but I felt like this is something I needed to get off my chest after months. If people enjoy this blog post, I will probably do more in the future.
I'm sure this blog will probably teach a hard and fast lesson on writing thanks to your post. This can't be an easy topic to explain. You did a solid job, thank you again.
Companies receive so many "First, you have to be on the other side of this airtight hatch, then you..." reports that anything that looks even remotely like it will just get summarily closed. My personal favorite ones start with some form of "I copied the user's cookies from device A's file-system, and..."
Just some suggestions on how to report these kinds of things, because there is an actual underlying issue here worth fixing. It's good that you didn't mention the reverse proxy. Next, don't say "spoofing an HTTP request" in the first sentence of your report; that's an immediate red flag. If you have access to spoof something on the network, it's already not an issue for 99% of people and an instant low priority. Instead, say "Steam insecurely relies on a redirect response to upgrade the hosted content from HTTP to HTTPS, instead of directly establishing the HTTPS connection". How this can be exploited is now much more general than just being a spoofing issue, with both the problem and solution clearly stated.
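The safe pattern described above — never send the first request in plaintext and trust a redirect to upgrade it — amounts to a one-line normalization before any request is made. A minimal sketch (the function name is mine, not anything from the Steam client):

```python
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Upgrade a plaintext URL before the first request is ever sent.

    Relying on the server to redirect http:// to https:// leaves the first
    request (and the redirect itself) open to tampering; rewriting the
    scheme client-side means TLS protects the connection from the first byte.
    """
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(force_https("http://store.steampowered.com/app/570"))
# https://store.steampowered.com/app/570
```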
I've been on both sides of this, and sadly having HackerOne/BugCrowd as intermediaries often hurts more than it helps.
On one hand, I've had to sort through the never-ending stream of "if you bypass the safeguards first" issues and some guy in India copying and pasting open-source vuln scanner reports. I get why people don't want to deal with this and outsource it.
On the other hand, I have a legitimate exploit against GitHub that is "working as intended" for months now. No amount of back and forth is going to convince them that leaking commit messages on enterprise accounts is serious apparently.
Simply say that there is a typo in the Steam configuration: it is connecting to the (insecure) URL http://...
This allows Steam network traffic to be intercepted. It can be fixed by correcting the URL to https://.
For example, somebody using steam from a coffee shop could have his credentials/cookies/accounts intercepted by the coffee shop operator or any other visitor.
I believe coffee shops and other gaming venues are a supported use case for Steam, and you do not wish to leave your users at risk.
IMO there is really no need to blow this out of proportion. It's just a typo. Developers make typos all the time. I bet they're more likely to double-check something trivial like that if pointed to it.
The domain and what they are reading off of that filesystem matter, though. For example, if there were a credential stored in the clear or poorly encrypted and used in other parts of your system.
I've reported one bug to HackerOne so far, which was a bug with Wordpress that allowed you to access the title of unpublished posts. Months went by with no reply, despite clear demonstrations of how it could be exploited.
Eventually I demonstrated the attack on Techcrunch's site to a Techcrunch reporter, showing him the titles of their upcoming news stories for the coming week, including embargoed news. They got in touch with HackerOne/Wordpress and it finally got resolved a month or two later.
I don't feel like HackerOne added anything positive to the mix, other than a layer of confusion/delay.
HackerOne started with such promise, but stories like this keep coming out. It makes you wonder how many people were even more patient than OP.
Unfortunately, despite all the HackerOne claims, it still seems to take public disclosure and embarrassment to make companies actually take things seriously. Seems sunlight is still the best disinfectant.
One of the big problems is that the reports are bad. Like some person just shotgunning the output of an automated script at every company in there without really understanding it. Or we get a lot of "I can squat an S3 bucket with the company name in it and make it public" - no way! So filtering through to the good ones takes too many hands, and oftentimes they're like this one, where it's like... yeah, technically true, but an acceptable enough risk.
> HackerOne started with such promise, but stories like this keep coming out. It makes you wonder how many people were even more patient than OP.
To state the probably obvious, remember that primarily what you see are the negative interactions. You're unlikely to see many posts about the positive ones. People don't tend to post so much when things go as expected; they do when things go wrong.
I think what you're all missing is the asymmetry: most bug reports are bad. If you understand this, then you understand that HackerOne has respected its promises to the companies hiring them: they perform the first round of triage. Given the number of reports they are going through, the number of bad stories we're hearing about them is remarkably small.
I feel like it's the same with any type of bad news: it can easily be blown out of proportion and wrongly indicate that things are going very, very badly. I guess this is why the news is addicted to shocking, breaking stories.
And if the claims of HackerOne/Valve trying to get out of paying a bounty are true, that's just terrible, because a lot of these exploits can be sold to nefarious actors for much, much more.
Not paying out the promised bounty, to me, basically spits in the face of independent (and ethical) security researchers.
This HackerOne thing seems weird. It's like they looked at Google and decided "let's apply their customer service, but for security researchers." With this model, things like this blog post inevitably follow -- including the "they ignored me until I complained on Twitter, then they instantly fixed it". (Classic.)
I took djb's "Unix security holes" class in 2004, where he advocated for just disclosing the security hole immediately, and things like this haven't changed my mind. I know people want the money, but it's peanuts and doesn't seem worth the hassle unless you're really young. Nobody is going to spend months looking for obscure bugs for $10,000 when they could get paid that in a week to write the bugs (accidentally) in the first place.
It all makes very little sense to me. If you care about security, you'll have a team like Project Zero. Anything else is just applying the "gig economy" to engineering work, and the results are pretty predictable. It's kind of sad.
My guess is that, like the crypto wars, disclosure fights are going to happen every generation.
There was a big discussion about this in the 90s. Vendors would sit on bugs forever, frequently simply to suppress knowledge of them rather than fixing them. Hackers rebelled; some simply published what we now call 0days, others would publish on a non-negotiable timeline. Eventually, "responsible disclosure" became a norm.
Looks like companies have once more figured out how to game the process, so their counter-parties are going to renegotiate. And the cycle of life is complete.
This is an overall problem at bigger companies. The machinations that decide priorities often do not understand engineering, and thus never prioritize the fixes. It's not uncommon to see engineering items that would take 30 minutes to fix get hours of discussion about "should we fix it?" and "when?"
I tend to sneak these little fixes into my PRs because I can't stand to see them linger without cause.
I’d like to add a different take to this. I have contacted Valve support in the past, clearly stated my problem, and got a response that looked like they read half of my question and responded without reading the entire thing. If the same team that responds to their customer support reads this stuff, which seems crazy, they didn’t bother to understand the problem before responding.
I don't get the "this issue is a duplicate so we won't pay" business. If I find a serious issue that's still unpatched and someone tells me "Oops, it's a duplicate, sorry!" I'm still going to ask for payment. If I don't get it, it's a given that I'm going public (assuming it is legal to do so). Why doesn't everyone do that?
I'll go even further and say that price negotiation should happen at every disclosure. If you're not satisfied with the price you're getting, go ahead and make the vulnerability public ASAP (assuming it is legal to do so). It's about time companies that have invested nothing in security compared to their profits, and their parasitic middlemen like HackerOne, acquired proper incentives. Right now, they are getting away with having the public subsidize their security procedures at massively reduced cost. This has to stop.
> If I find a serious issue that's still unpatched and someone tells me "Oops, it's a duplicate, sorry!" I'm still going to ask for payment. If I don't get it, it's a given that I'm going public.
Probably because what you've just described could be viewed as extortion, which is illegal in many places? Also, it doesn't really do you any favors, I think. You'll get a week or less of recognition for finding an exploit, and then the story will come out about how you both sniped someone else's find and possibly caused damage on purpose for your five minutes of fame.
To be clear, it's the initial monetary request and actions because it was denied that makes this entirely mercenary and would not reflect well on you. You were obviously willing to sit on the exploit for a while for some cash, so you no longer have any moral arguments to rely on for your behavior if you release it immediately, and the fact that it's not original just makes it worse. I imagine your reputation for security matters might never recover.
There are ways to get the moral defense back, but it requires waiting a while to see if it actually gets fixed and not taking it public immediately (so it actually is for their unresponsiveness and not just because they didn't pay).
>They're trying to get out of paying bug bounty money: I guess this is the more extreme perspective to take here, but considering the whole experience, a definitely possible one. I wasn't here for the bug bounty money, I have work by this point, but if there's some younger child trying to get into security research doing this, this could be enough to massively demotivate them if they were promised it from the HackerOne page.
>They had someone who posted the same bug either weeks or months in advance: This means that Valve left someone else hanging for an insanely long time. This is equally messed up.
I can't understand why, in the age of blockchain-buzzword-bullshitting companies, in one instance where an immutable public ledger would actually be useful (for instance, with a sha512 hash of the initial report), it isn't used. It wouldn't be very complicated, and it would immensely help their trustworthiness.
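No blockchain is even needed for the basic property: publishing a digest at submission time is enough to prove later who reported what first, without revealing the report early. A rough sketch (the function and field names are mine, not any HackerOne API):

```python
import hashlib
import json
import time

def report_fingerprint(report_text: str, reporter: str) -> dict:
    """Commitment a platform could publish at submission time.

    Anyone can later verify that this exact report text existed by this
    date by re-hashing it, without the contents ever being revealed early.
    """
    return {
        "sha512": hashlib.sha512(report_text.encode("utf-8")).hexdigest(),
        "reporter": reporter,
        "submitted_at": int(time.time()),
    }

if __name__ == "__main__":
    fp = report_fingerprint("Steam app fetches the store over plaintext HTTP ...",
                            "researcher42")
    print(json.dumps(fp, indent=2))
```

Duplicate disputes ("someone reported this weeks ago") then become checkable by anyone, instead of resting on the platform's word.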
I've had or heard quite a few conversations about patent bounties inside of companies that love patents, and there's a very common rule (that ends up being gamed) that each of the first N contributors gets X dollars, and if more than N authors exist then they all split NX dollars.
Unfortunately such a strategy could also be gamed by a bug bounty. If I split 150% of the bounty between all people who reported the bug within a time interval, I could just tell a buddy who has moved out of town about it and end up with a bit more money between the two of us. Either in exchange for him giving me half his bounty, or by returning the favor later.
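The arithmetic of that collusion is easy to see. A toy sketch (the numbers and the 150% multiplier are illustrative, taken from the scheme described above):

```python
def split_bounty(base_bounty, reporters, multiplier=1.5):
    """Pay out multiplier * base_bounty, split evenly among everyone
    who reported the bug within the eligibility window."""
    share = multiplier * base_bounty / len(reporters)
    return {name: share for name in reporters}

# Honest duplicates: two independent reporters each receive 75% of the base bounty.
print(split_bounty(10_000, ["alice", "bob"]))   # {'alice': 7500.0, 'bob': 7500.0}

# The collusion: report it yourself, tip off a friend, and pool the payouts.
pooled = sum(split_bounty(10_000, ["me", "buddy"]).values())
print(pooled)  # 15000.0 -- more than the 10000 a solo report would pay
```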
>For a simple MITM exploit that can be fixed by replacing "http://" with "https://", this is simply unacceptable.
I think the author is really not understanding the complexity of updating to an https:// URL inside of a mobile application. Valve is most likely using a self-signed cert, so that would require bundling the certificate with the app so that Apple/Android allowed it to load inside of a webview. This is not nearly as simple as just updating all of the URLs in the app to https://, and the fix could very well take a few months. Furthermore, loading the store page is not necessarily a vulnerability to Valve: even if you are able to redirect it, you wouldn't have access to any of the Steam user specifics (like account data). It wouldn't be much different than putting a shady link somewhere on the internet and people navigating to it.
Why would they use a self-signed cert? They could use a real cert. It's Steam. They can afford real certs, or just use Let's Encrypt.
There is absolutely no reason for the app to connect to a login/authentication service (or any service) over plain text, period! There should be unit tests that scan for http:// and will fail the build if found in the code or resources.
I know some things are not simple fixes, but this is absolutely a fix that can be done and we should all know how to do. It should have also been made a security priority and pushed through.
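Such a build check can be a few lines. A sketch (the allowlist and file extensions are assumptions for illustration, not anything Valve actually uses):

```python
import re
import tempfile
from pathlib import Path

# Hypothetical allowlist: plaintext loopback URLs in dev configs may be fine.
ALLOWED = re.compile(r"http://(localhost|127\.0\.0\.1)")

def find_plaintext_urls(root, suffixes=(".py", ".java", ".swift", ".kt", ".xml", ".json")):
    """Return (path, line_number, line) for every hard-coded http:// URL under root."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in suffixes:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if "http://" in line and not ALLOWED.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

# Tiny demo tree: one offending URL, one https URL, one allowed localhost URL.
with tempfile.TemporaryDirectory() as tree:
    Path(tree, "config.py").write_text(
        'STORE = "http://store.example.com"\n'
        'API = "https://api.example.com"\n'
        'DEV = "http://localhost:8000"\n'
    )
    offenders = find_plaintext_urls(tree)
    print(offenders)  # only the STORE line is flagged
```

Wired into CI, a non-empty result fails the build, which is exactly the "unit test that scans for http://" suggested above.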
What you are saying makes no sense. Why would Android only accept the self-signed certificate when it redirects from HTTP, and not for direct HTTPS connections? Can you provide a source, maybe a Stack Overflow link where this problem is discussed?
You can't actually do ARP spoofing/hijacking at an ISP level, because ARP/MAC addresses don't have much impact on traffic when crossing a layer 3 (network/router) boundary.
Reading stories like this really makes me hate these customer-service walls we've put up everywhere. Only a couple of times in these exchanges did it seem like there was a functioning, thinking human intelligence on the other end. All of the robo-responses are depressing, including the human-generated ones. And it's even more discouraging when you have to hear about their company's internal chaos and disorganization. Really gives me a low opinion of this HackerOne organization.
Just drop a line on twitter saying you've discovered a vulnerability in $popularSoftware and mention $company. Say you'll be disclosing in 90 days if $company doesn't issue a reply publicly.
Make sure to deal with an actual human and that everything is done according to best practice. You may even get publicity this way, and even if it's unethical, it can be sold or used to your advantage.
If they care, trust me when I say they will make an effort. Most places (like Google) have effective systems in place for dealing with such queries.
it wouldn't even be unethical. responsible disclosure starts with engaging with the company at eye level. all that these bug bounty platforms do is take away exactly this power and allow the company to consolidate the contract to a single entity (e.g. a preferred supplier). they deserve even less respect than any shady recruiter or typical outsourcing sweatshop.
giving these people power is like talking to a cop without a lawyer - regardless of what they say, they don't have your interest in mind and you have lost before the game has even started.
In a way, this is the same problem that we see with tech hiring and recruiting. Most gatekeepers are less technical than the most technical developers they gate-keep, but they're still necessary because 80-90% of reports (in the case of security) or applicants (in the case of hiring) are unqualified. Anyone who is qualified enough to screen without false positives can probably get a better job.
Responsible Disclosure does not mean waiting for arbitrary and unilateral decisions from a company just to be undervalued.
If your argument for responsible disclosure applies equally to this post, to a $100 payout, and to a $100,000 payout - as long as it's from the company that needs to patch the exploit - then re-evaluate your argument.
staz: If only Valve could hire a subcontractor whose task it was to triage and clarify these reports...
idunno246: But occasionally the "oh f#%^" report comes in...
rasz: No tedious 3-day whiteboard interviews there, I bet.
odensc: Why would they "most likely" be using a self-signed cert? That would be an extreme edge case in my mind, not the standard.
vntok: That's blackmail. An expedient way of getting your door breached.