top | item 45658257

timdev2|4 months ago

I would normally say "that must be a coincidence," but I had a client account compromised as well. And it was very strange:

Client was a small org, and two very old IAM accounts suddenly had recent (yesterday) console logins and password changes.

I'm investigating the extent of the compromise, but so far it seems all they did was open a ticket to turn on SES production access and increase the daily email limit to 50k.

These were basically dormant IAM users from more than 5 years ago, and it's certainly odd timing that they'd suddenly pop on this particular day.
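Dormant-but-suddenly-active users like these are exactly what the IAM credential report surfaces. A minimal sketch of the triage, assuming you've already downloaded the report CSV (e.g. via `aws iam generate-credential-report` then `aws iam get-credential-report`); the column names match the real report, but the thresholds here are illustrative:

```python
import csv
import io
from datetime import datetime, timedelta, timezone

def flag_dormant_users(report_csv, min_age_days=365, recent_days=7):
    """Flag IAM users created long ago whose console password was
    used very recently -- the pattern described above.

    report_csv: contents of the IAM credential report (CSV text).
    Column names (user, user_creation_time, password_last_used)
    are the real credential-report fields.
    """
    now = datetime.now(timezone.utc)
    suspicious = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        created = datetime.fromisoformat(row["user_creation_time"])
        last_used = row["password_last_used"]
        # Users who never logged in report "no_information" or "N/A".
        if last_used in ("no_information", "N/A", ""):
            continue
        used = datetime.fromisoformat(last_used)
        if (now - created > timedelta(days=min_age_days)
                and now - used < timedelta(days=recent_days)):
            suspicious.append(row["user"])
    return suspicious
```

This only covers console passwords; a fuller sweep would also look at the `access_key_1_last_used_date` / `access_key_2_last_used_date` columns in the same report.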

tcdent|4 months ago

Smells like a phishing attack to me.

Receive an email that says AWS is experiencing an outage. Log into your console to view the status, authenticate through a malicious wrapper, and compromise your account security.

SoftTalker|4 months ago

Good point. Phishers would certainly take advantage of a widely reported outage to send emails related to "recovering your services."

Even cautious people are more vulnerable to phishing when the message aligns with their expectations and they are under pressure because services are down.

Always, always log in through bookmarked links or typing them manually. Never use a link in an email unless it's in direct response to something you initiated and even then examine it carefully.

timdev2|4 months ago

These were accounts that shouldn't have had console access in the first place, and were never used by humans to log in AFAICT. I don't know exactly what they were originally for, but they had names like "foo-robots" and were very old.

At first I thought maybe some previous dev had set passwords for troubleshooting, saved those passwords in a password manager, and then got owned all these years later. But that's really, really unlikely. And the timing is so curious.

highfrequencyy|4 months ago

I second this. My organization got hit with a wave of phishing emails pretty much immediately after the outage.

jbverschoor|4 months ago

Or maybe it wasn't DNS, but they simply pulled the plug because of some breach?

LeonardoTolstoy|4 months ago

Almost this exact thing happened to me about a year ago. Very old account login, SES access with a request to raise the email limit. We were only tipped off quickly because they had to open a ticket to get the limit raised.

If you haven't, check newly made Roles as well. We quashed the compromised users pretty quickly (including my own, which we figured out was the origin), but got a little lucky because I just started cruising the Roles and killing anything less than a month old or with admin access.
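That role sweep is easy to script. A sketch of the "anything less than a month old" filter, written as a pure function over role entries shaped like boto3's `iam.list_roles()` output (keys `RoleName` and `CreateDate` are the real field names; the live-API usage at the bottom is an untested assumption about your credentials):

```python
from datetime import datetime, timedelta, timezone

def recently_created_roles(roles, max_age_days=30):
    """Return names of IAM roles created within the last max_age_days.

    roles: list of dicts shaped like the entries boto3's
    iam.list_roles() returns (keys: RoleName, CreateDate).
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [r["RoleName"] for r in roles if r["CreateDate"] > cutoff]

# Hypothetical usage against the live API (requires credentials):
# import boto3
# iam = boto3.client("iam")
# roles = [r for page in iam.get_paginator("list_roles").paginate()
#          for r in page["Roles"]]
# print(recently_created_roles(roles))
```

Anything this flags still needs a human look; plenty of legitimate roles are also less than a month old.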

To play devil's advocate a bit: in our case we are pretty sure my key actually did get compromised, although we aren't precisely sure how (probably a combination of me being dumb, my org being dumb, and some guy putting two and two together). But we did trace the initial users being created to nearly a month prior to the actual SES request. It is entirely possible whoever did yours had you compromised for a while, and then once AWS went down they decided that was the perfect time to attack, when you might not notice just-another-AWS-thing happening.

timdev2|4 months ago

Thanks for sharing. After digging in, it appears that something very similar happened here, after all. It looks like an access key with admin role leaked some time ago. At first, they just ran a quiet GetCallerIdentity, then sat on it. Then, on outage day, they leveraged it. In our case, they just did the SES thing, and tried to persist access by setting up IAM Identity Center.
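That quiet GetCallerIdentity is the kind of thing a CloudTrail sweep can surface. A sketch that filters already-parsed CloudTrail records for STS recon calls tied to a suspect key; the field names (`eventName`, `eventTime`, `sourceIPAddress`, `userIdentity.accessKeyId`) match real CloudTrail records, while the key ID in the test is purely hypothetical:

```python
def recon_calls(events, suspect_key_ids):
    """Pick out GetCallerIdentity calls made with suspect access keys.

    events: parsed CloudTrail records (dicts), e.g. loaded from the
    JSON files CloudTrail delivers to S3.
    suspect_key_ids: set of access key IDs under investigation.
    Returns (key_id, source_ip, event_time) tuples for each hit.
    """
    hits = []
    for e in events:
        key = e.get("userIdentity", {}).get("accessKeyId")
        if e.get("eventName") == "GetCallerIdentity" and key in suspect_key_ids:
            hits.append((key, e.get("sourceIPAddress"), e.get("eventTime")))
    return hits
```

The same loop extended to SES and `sso:` / identity-center event names would also have caught the persistence step described above.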

orblivion|4 months ago

I wonder if a few cases of compromise right after the outage could also be a coincidence. If we get a lot of reports of the same thing, then it gets interesting.

(The particulars of your case being strange is a separate question though.)