top | item 26764657

mxscho | 4 years ago

Doubt that.

There is this so-called "Steam web API key scam" which has been ongoing for years at this point: Scammers create phishing Steam login pages to grab people's credentials. With these credentials alone, the damage an attacker can do is still limited because of 2FA. However, the biggest flaw is that it is possible to automatically create API keys for the phished accounts that allow 24/7 remote access to these Steam accounts without the user even noticing. With this access, scammers then automatically alter trades at will, at any time in the future, milliseconds before people confirm them using their mobile device (2FA), e.g., by declining the original trade and setting up a new trade with a scammer's bot account that has changed its profile data to that of the actually intended trading partner.
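To make the automation concrete, here is a rough sketch of the polling side of such a bot, assuming the publicly documented IEconService/GetTradeOffers endpoint; the response shape below is illustrative, and "PHISHED" stands in for a stolen key:

```python
# Hedged sketch: how a hijacker's bot could watch a stolen account's incoming
# trade offers using a phished Web API key. Endpoint and parameter names follow
# Steam's public IEconService interface; the JSON shape is illustrative.
import json
from urllib.parse import urlencode

API_BASE = "https://api.steampowered.com/IEconService/GetTradeOffers/v1/"

def poll_url(api_key: str) -> str:
    """Build the polling URL; a bot would fetch this every few seconds."""
    params = {"key": api_key, "get_received_offers": 1, "active_only": 1}
    return API_BASE + "?" + urlencode(params)

def new_offer_ids(response_json: str, seen: set) -> list:
    """Return trade offer ids not seen before (the trigger to intercept)."""
    body = json.loads(response_json)
    offers = body.get("response", {}).get("trade_offers_received", [])
    fresh = [o["tradeofferid"] for o in offers if o["tradeofferid"] not in seen]
    seen.update(fresh)
    return fresh
```

The point is how little this costs the attacker: one cheap HTTP poll per stolen key is enough to learn about a pending trade within seconds of it appearing.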

This attack is mostly based on phishing, spoofing and confusion, but it could at least be made much harder by preventing automated API key generation and therefore indefinite access to an account (e.g., by implementing email confirmations or captchas for API key generation).

Every day, children and laypeople lose in-game items worth thousands of dollars. I'm an admin on a popular CS:GO and gaming Discord server with ~30k members, and we see such reports multiple times a week.

Valve has no incentive to fix this as long as it's not their money being lost and regulators aren't applying pressure.

jsnell|4 years ago

Valve has been pretty aggressive about rolling out these kinds of policies compared to the rest of the industry. (E.g., they were very early with requiring 2FA to be enabled for a period of time before allowing sensitive actions like trades, and with adding warning interstitials on links that leave Steam.) I don't think the incentives have changed that much.

So, here's what makes me confused about your story:

1. I don't see any kind of activity hooks in IEconService that would let the attackers know via a callback that a trade was initiated. Are you saying that they're polling all the hijacked accounts at a high frequency to detect trades they could intercept? That seems like a highly divergent use case from normal uses of the API, and one that an abuse team would be motivated to prevent.

2. I thought the Steam trade confirmation dialog showed very specific information about just what was being traded for what. I.e. it's not just that you're approving "a trade with foo", it's "a trade with foo (whom you've had as a friend for 20 days), where you give a xyzzy and receive a quux". Are the users just blindly approving trades worth thousands without even verifying?

I don't like either of your solutions, though. A captcha would just be a minor irritation for the attacker, and anyone who can be phished into logging in can be phished into approving the key generation. It seems that the bigger problem here is that the API keys are unscoped. Once you have scoping, it's easier to inform the user in the approval flow about just what they're approving, and viable to nag users into revoking access for apps with dangerous permissions.
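To illustrate what scoped keys could buy here (purely hypothetical: Steam's real Web API keys carry no scopes, and the scope names below are invented):

```python
# Hypothetical sketch of scoped API keys. A read-only key could still serve
# legitimate integrations, while a key able to decline or create trades would
# require a separate, explicit approval that a phishing flow can't hide.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ApiKey:
    owner: str
    scopes: frozenset = field(default_factory=frozenset)

def authorize(key: ApiKey, action: str) -> bool:
    """Allow an API call only if the key was explicitly granted that scope."""
    return action in key.scopes

# A typical integration would only ever need the harmless scope:
read_key = ApiKey("alice", frozenset({"trade:read"}))
# authorize(read_key, "trade:read")    -> permitted
# authorize(read_key, "trade:decline") -> refused; would need re-approval
```

With scopes in place, the approval dialog can name the dangerous permissions, and the abuse team can treat "trade:decline"-class keys as the rare, reviewable case.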

throwawaykol|4 years ago

> Are the users just blindly approving trades worth thousands without even verifying?

People do. Many years ago I started playing an MMOG and the old timers were all discussing some incredibly rare new item. So I said I had one, and someone said he'd give me 100 million credits for it. For comparison, I'd just spent several hours grinding out about 10 credits. So I sent him a formal offer - some random piece of junk for 100 million credits - and he was so excited he clicked OK without reading what he was getting. He was so angry! He spent weeks spewing venom on the forums.

Of course, this wasn't real money, but in terms of time spent earning it he suffered a significant loss.

mxscho|4 years ago

> Valve has been pretty aggressive about rolling out these kinds of policies compared to the rest of the industry.

True indeed.

> Are you saying that they're polling all the hijacked accounts at a high frequency to detect trades they could intercept?

Yes.

I have to admit, the "milliseconds before" part was just wrong; I oversimplified for attention.

> it's "a trade with foo (whom you've had as a friend for 20 days), where you give a xyzzy and receive a quux". Are the users just blindly approving trades worth thousands without even verifying?

Often, the attackers focus on swapping trade offers that are initiated by a 3rd party, e.g., a trusted middleman marketplace site that requests the item you want to offer (with nothing in return). 3rd party sites take a lot of blame for "stolen items" because people don't even understand how this scam works.

Here, the few seconds are between the 3rd party offering the trade and the compromised user accepting the trade, not between the user accepting the trade in the browser and on his phone. Since the phished user is not aware of the 3rd party site's account in the first place (it is not one of his friends), it is very easy to clone all the observed account details and make a scam bot account look like the one from the 3rd party site. Admittedly, there are characteristics that cannot be spoofed, but an ordinary user, who is not even aware that he was phished and that someone with control over his account can do such things, will not notice this.
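A sketch of that confusion: the display profile can be cloned perfectly, while the SteamID64, the permanent account identifier, cannot (the field names here are illustrative):

```python
# Everything an ordinary user glances at in a trade window can be cloned by
# the scam bot; only the permanent account identifier gives the clone away.
def looks_identical(a: dict, b: dict) -> bool:
    """The spoofable surface: display name and avatar can match exactly."""
    return (a["persona_name"] == b["persona_name"]
            and a["avatar_url"] == b["avatar_url"])

def same_account(a: dict, b: dict) -> bool:
    """The unspoofable part: the SteamID64 differs, if anyone checks it."""
    return a["steamid64"] == b["steamid64"]

legit = {"persona_name": "MarketBot #7", "avatar_url": "a.jpg",
         "steamid64": "76561198000000001"}
clone = {"persona_name": "MarketBot #7", "avatar_url": "a.jpg",
         "steamid64": "76561198999999999"}
```

In practice nobody compares 17-digit identifiers under time pressure, which is exactly what the swap relies on.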

Now, you could argue that preventing 3rd party sites from existing could also solve this issue. However, I see a valid use case in these 3rd party sites. The goal of my suggestion is to counter these attacks with minimal effort, without disabling automated trading capabilities completely.

> A captcha would just be a minor irritation for the attacker, and anyone who can be phished into logging in can be phished into approving the key generation.

I agree that it would only make the attack harder, not impossible, but considering the usual workflow I still see this as an improvement - as a first step.

The phishing is usually done by setting up a "legit" website, e.g., for skin trading, skin gambling, or any other purpose that requires authentication via Steam. This "legit" website then spawns a fake "Login with Steam" OpenID credentials popup, rendered inside (!) the web page. That is, the website itself draws (depending on your OS and browser) a perfectly fine-looking browser popup window inside the otherwise legit-looking page; it basically spoofs the browser UI itself. Laypeople are easily fooled by this; they sometimes do not even question why the window cannot be dragged out of the page, if they even try. These web apps are built to top-tier quality because, obviously, the profit potential is huge. There is probably even a framework being sold for easily recreating such pages at this point.

What I'm trying to say is: Getting the user to login is easy because it's part of the legit workflow. The API key generation - not so much.

Basically, all I'm asking for is to make it hard to automatically transform a normal user account into a bot account used to automate trade offers. I know that there is a valid use case for automated bot accounts and automated trade offers. But automating the action that enables such functionality for an account should be prevented at all costs, and it should have to be explicitly requested by the user, including a warning.
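A minimal sketch of such a confirmation-gated flow (all names are hypothetical; this is a design sketch of the suggestion, not Steam's actual behavior):

```python
# Sketch of the proposed mitigation: requesting a key does NOT return one.
# Instead, a one-time token is sent out of band (e.g., by email), so a script
# holding only a phished session cannot mint a key in one automated step.
import secrets
from typing import Optional

_pending = {}  # confirmation token -> account name

def request_api_key(account: str) -> str:
    """Step 1: issue a confirmation token; in reality it would be emailed,
    never shown inside the (possibly hijacked) browser session."""
    token = secrets.token_urlsafe(16)
    _pending[token] = account
    return token

def confirm_api_key(token: str) -> Optional[str]:
    """Step 2: only whoever clicked the emailed link gets a key; each token
    works exactly once and invalid tokens yield nothing."""
    account = _pending.pop(token, None)
    if account is None:
        return None
    return f"key-for-{account}-{secrets.token_hex(8)}"
```

This doesn't stop a determined attacker who also phishes the mailbox, but it breaks the fully automated phish-to-bot pipeline described above.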

You are probably saying something similar with this statement, with which I agree:

> the bigger problem here is that the API keys are unscoped

TL;DR: Considering the effort required, I think that preventing automated Steam web API key generation is the best short-term way to make the attack a lot harder for the scammers.