
AWS Built a Security Tool. It Introduced a Security Risk

209 points | simplesort | 10 months ago | token.security

81 comments


yfiapo|10 months ago

I agree this was a security concern, and it was reported and addressed appropriately. With that said, as these things go it is pretty minor; perhaps a medium-severity issue. Information disclosures like this may be leveraged by attackers with existing access to the lower environment, in conjunction with other issues, to escalate their privileges. By itself, or without that existing access, it is not usable.

Moreover, the issue wasn’t that AWS recommended or automatically set up the environment insecurely. Their documentation simply left the commonly known best practice of disallowing trusts from lower to prod environments implicit, rather than explicitly recommending that users follow it when deploying the solution.
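That implicit best practice can be made explicit in the hub role's trust policy. A minimal sketch (the account ID is hypothetical) that trusts only a designated production/security account, rather than any account in the organization:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TrustOnlyProdSecurityAccount",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Scoping the Principal to one trusted account, instead of the whole org, is exactly the lower-to-prod trust restriction the documentation left unstated.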

I don’t think over-hyping smaller issues, handled appropriately, helps anyone.

liquidpele|10 months ago

Sounds like typical hyperbole. Worked at a place once where some “security researcher” trashed the product because they could do bad things on the appliance… if logged in as root.

placardloop|10 months ago

This so-called “security risk” is a role in a nonprod account that can list metadata about things in your production accounts. It can list secret names, bucket names, policy names, and the like.

Listing metadata is hardly a security issue. The entire reason these List* APIs are distinct from Get* APIs is that they don’t give you access to the object itself, just metadata. And if you’re storing secret information in your bucket names, you have bigger problems.
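The List*/Get* split shows up directly in IAM policy actions. A sketch of a metadata-only policy (the wildcard Resource is illustrative), which can enumerate names but includes no s3:GetObject or secretsmanager:GetSecretValue, so object contents and secret values stay out of reach:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MetadataOnly",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "secretsmanager:ListSecrets",
        "iam:ListPolicies"
      ],
      "Resource": "*"
    }
  ]
}
```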

gwbas1c|10 months ago

Depending on what the metadata is, it can be a huge security risk.

For example, some US government agencies consider computer names sensitive, because the computer name can identify who works in what government role, which is very sensitive information. Yet, depending on context, the computer name can be considered "metadata."

voytec|10 months ago

> And if you’re storing secret information in your bucket names, you have bigger problems.

Yeah, but the design should assume that some customers will do stupid things, and protect them.

Not an identical case, but I once bought a Cisco router for a home lab/learning and it turned out to be hardware decommissioned by a European bank, never flashed before being handed over to an asset disposal contractor. It eventually landed on an auction site with the bank's configuration intact. The bank was very meticulous about documenting things like the address of the branch where the device was installed in its config, and the ACL names/descriptions included employees' names and room numbers. You could easily extract the names of people granted extended access to internal systems.

So while I agree with you in principle, even financial institutions do stupid things, lack procedures, or fail to follow the ones they have. A cloud provider's design should assume its customers won't follow best practices.

philipwhiuk|10 months ago

At the end of the day if you deploy a tool that can access production data, you need to treat it like production. That's the reality here.

dangus|10 months ago

As an AWS-focused practitioner, I started doing Google Cloud training and it blew my mind when I found out that the multiple-account/sub-account mess that AWS continues to use just doesn’t exist there. GCP sensibly uses a folder-and-project system that provides a lot of flexibility and IAM control.

It also blew my mind that Google Cloud VPCs and autoscaling groups are global, so that you don’t have to jump through hoops and use the Global Accelerator service to architect a global application.

After learning just those two things I’m downright shocked that Google is still in 3rd place in this market. AWS could really use a refactor at this point.

thinkindie|10 months ago

I think Google scares a lot of people away with their approach of not letting you talk to any human whatsoever unless you spend a lot of money every month.

I've read a lot of horror stories about people getting into trouble with GCP and not being able to reach a human, whereas you would get access to some human presence with AWS.

Things might have changed, but I suspect a lot of people still have this in the back of their minds.

b112|10 months ago

Sort of the same with anything Amazon. Look at their retail website! It used to be the most groundbreaking, impressive product search engine out there.

Now it's weird in a dozen different ways, and it endlessly spews ridiculous results at you. It's like a gorgeous mansion from the 1900s, which received no upkeep. It's junk now.

For example, if I want to find new books by an author I've bought from before, I have to go to: returns & orders, digital orders, find book and click, then author's name, all books, language->english, format->kindle, sort by->publication date.

There's no way to set defaults. No way to abridge the process. Mysteriously, you cannot click on the author's name in "returns & orders". It's simply quite lame.

Every aspect of Amazon is like this now. There are weird workflows throughout the site. It's living on inertia.

sumitkumar|10 months ago

So this is about customer support. Google supports the customer with a better product but minimal manual support when issues come up later.

AWS has an organically evolved bad product, designed by a long line of six-page memos, but offers manual support in case things get too confusing or the customer just needs emotional support.

TheTaytay|10 months ago

I agree completely. Every time I need to do something in AWS I feel like I'm stumbling over footguns in an infinite sea of footguns. Meanwhile, other providers (GCP and Azure) have the ability to group resources under projects/folders. They have sensible default isolation primitives that you can understand…

If you forget to tag a resource in AWS, it’s very difficult to find out what it’s being used by. And yeah, infrastructure as code helps with this, but God help you if you created something via the console.

If AWS had a cloud product that had 10% of the surface area, and a simplistic project/RBAC primitive, I would use it in a heartbeat. Hell, it’s essentially what other companies like Heroku are selling (and charging a premium for).

Even if Cloudflare's R2 cost the same as AWS, I'd use it because the likelihood of one of our engineers getting permissions wrong is GREATLY diminished.

Anyway, just nodding along to your comment and venting a bit.

icedchai|10 months ago

I've worked with both AWS and GCP off and on for 15 years. In general, I find GCP easier to work with: better developer experience, services that are simpler to configure (Cloud Run vs ECS/Fargate), etc. However, AWS is like the new IBM: nobody ever got fired for going with AWS...

wrs|10 months ago

AWS’s account system is nuts. I know it grew historically out of its "just buy S3 storage with your Amazon account" origins, but it's 2025 and they run half the internet now.

Until a few months ago, you couldn’t even be signed in to more than one account at a time in the console. Now you can use…up to five? (If you’re following “best practices” you likely have far more than five.)

For anyone who hasn’t seen GCP’s console, there’s just a simple menu to switch your view to any of the projects you have access to. There’s even a search box in case you have enough to need it.

philipwhiuk|10 months ago

One of the stand-out things at AWS Summit London was the number of talks basically saying:

"Yes, accounts are a mess, but they're what we have."

jiggawatts|10 months ago

Azure has Resource Groups and global visibility across all products in all regions in a single pane of glass.

There are “single IP” global load balancers with regional dynamic routing in Azure too.

People just assume AWS is the best in the same way that Cisco was considered the best even though they were a dinosaur selling over-priced products for the last two decades.

belter|10 months ago

What is the "account sub-account" mess you are referring to? Does it blow your mind that Google Availability Zones are just firewalls within the SAME data center?

https://youtu.be/mDNHK-SzXEM?t=560

candiddevmike|10 months ago

Google's resource management + AWS's IAM + Azure's... nah == best of everything.

soco|10 months ago

I don't know GCP but my experiences with Azure were also way smoother than AWS. It's like the Amazon folks are not even trying to work on less friction...

gwbas1c|10 months ago

My experience with GCP was that the support staff was rude.

bulatb|10 months ago

Someone invented the two-sentence clickline. Now even blogs do it.

y-curious|10 months ago

Hackers Hate Him: The Weird Trick that Keeps Users Clicking in 2025

smallnix|10 months ago

Are HackerNews Comments the Next Victim of this Crazy Trend?

abhisek|10 months ago

IAM is complex, more so with federation and cross-account trust. Not sure every weakness can be considered a vulnerability.

In this case, I was looking for a threat model within which this is a vulnerability, but I was unable to find one.

jonfw|10 months ago

The security industry, unfortunately, is awash with best practice violations masquerading as vulnerabilities

atoav|10 months ago

A fundamental problem that plagues many security solutions can be understood by analogy:

Imagine an incredibly secure castle. There are thick, unclimbable walls, moats, trap rooms, everything compartmentalized; an attacker who gains control of one section hasn't achieved much in terms of the whole castle. The men in each section are carefully vetted and are not allowed to have contact or family relationships with men stationed in other sections, so they cannot easily be bribed or forced to open doors. Everything is fine.

But the king is furious: the attackers shouldn't control any part of the castle! As a matter of principle! The architects reassure the king that everything is fine and there is no need to worry. The king is unconvinced, fires them, and searches for architects who will do his bidding. So the newly hired architects scramble and come up with secret hallways and tunnels connecting all parts of the castle, so the defenders can clear each section of the building. The special guards in charge of this get high privileges, so they can even fight attackers who reach the king's bedroom. The guard is also tasked with keeping in touch with the attackers, so they are extra prepared for when they attack and understand their mindset inside out.

The king is pleased; the castle is safe. One night, one of those guards turns against the king and sneaks the attackers into the castle. The enemy is suddenly everywhere, and they kill the king. A battle that should have been fought in stages going inwards is now fought from the inside out, and the defenders are trapped in the very places that were meant for the enemies they are fighting. The kingdom has fallen.

The problem with many security solutions – including AV solutions – is that you give the part of your system that comes into contact with the "enemy" the keys to your kingdom, usually with full, unchecked privileges (how else to read everything that is going on in the system?). Actual security is the result of strict compartmentalization and careful, continuous vetting of how each section can be abused and leveraged once it has fallen. Just as in mechanical engineering, where each new moving part can add a new failure point, in security each new privileged thing adds a lot of attack surface that wasn't previously there. And if that attack surface gives you the keys to the kingdom, it isn't the security solution; it is the target.

jiggawatts|10 months ago

The difference here is that A/V scanning and security vulnerability scanning can be done from the “outside” using read only privileges.

Many clouds now support scans of snapshots, removing the need for direct access to the read/write internals of a workload.
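A sketch of what that outside-in model can look like in IAM (the wildcard Resource is illustrative): a scanner role limited to snapshot metadata and the EBS direct-read APIs, with no access to the running workload itself:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlySnapshotScanning",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSnapshots",
        "ebs:ListSnapshotBlocks",
        "ebs:GetSnapshotBlock"
      ],
      "Resource": "*"
    }
  ]
}
```

Even if such a scanner role is compromised, the blast radius is read access to snapshot data, not the keys-to-the-kingdom agent privileges the castle analogy describes.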

This is where your analogy falls flat a bit.

18172828286177|10 months ago

This is just incompetence on the part of the person deploying this solution. Just because AWS only say "don't deploy to the management account" doesn't mean you should deploy something with access to all your accounts into a dev account.

ben-dh-kim|9 months ago

This can be a sensitive issue for organisations of a certain size. Depending on how widespread and complex the trust relationship is, it may or may not be a threat. But I think one of the points that everyone is questioning is how the first attack vector can be initiated. I agree with what you're saying about the complexity of IAM and trust relationships in general.

ahoka|10 months ago

The link is ironically blocked by my company's security suite.

MortyWaves|10 months ago

Blocking pseudo-“security research” like this one is probably a safe bet.

MikeIndyson|10 months ago

It depends on your implementation; you should not store sensitive data in metadata.

swisniewski|10 months ago

The article is bullshit.

AWS has a pretty simple model: when you split things into multiple accounts those accounts are 100% separate from each other (+/- provisioning capabilities from the root account).

The only way cross account stuff happens is if you explicitly configure resources in one account to allow access from another account.

If you want to create different subsets of accounts under your org with rules that say one subset (prod) shouldn't be accessed by another subset (dev), then the onus for enforcing those rules is on you.
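A sketch of how such a rule can be enforced: a Service Control Policy attached to the dev OU (the account ID is hypothetical) that denies dev principals the ability to assume roles in a prod account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDevToProdAssumeRole",
      "Effect": "Deny",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::222222222222:role/*"
    }
  ]
}
```

Because SCPs bound what principals inside the org can do, this holds even if someone later misconfigures an individual role's trust policy.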

Those are YOUR abstractions, not AWS abstractions. To them, it's all prod. Your "prod" accounts and your "dev" accounts all have the same prod SLAs and the same prod security requirements.

The article talks about specific text in the AWS instructions:

“Hub stack - Deploy to any member account in your AWS Organization except the Organizations management account.”

They label this as a “major security risk” because the instructions didn’t say “make sure that your hub account doesn’t have any security vulnerabilities in it”.

AWS shouldn’t have to tell you that, and calling it a major security risk is dumb.

Finally, the access given is to be able to enumerate the names (and other minor metadata) of various resources and the contents of IAM policies.

None of those things are secret, and every dev should have access to them anyway. If you are using IaC, like Terraform, all this data will be checked into GitHub and accessible by all devs.

Making it available from the dev account is not a big deal. Yes, it's OK for devs to know the names of IAM roles, the names of encryption key aliases, and the contents of IAM policies. This isn't even an information disclosure vulnerability.

It’s certainly not a “major risk”, and is definitely not a case of “an AWS cross account security tool introducing a cross account security risk”.

This was, at best, a mistake by an engineer that deployed something to “dev” that maybe should have been in “prod” (or even better in a “security tool” environment).

But the actual impact here is tiny.

The set of people with dev access should be limited to your devs, who should have access to source control, which should have all this data in it anyways.

Presumably dev doesn't require multiple approvals for a human to assume a role, and probably doesn't require a bastion (while prod might have those controls), so perhaps someone who compromises a dev machine could get some prod metadata.

However someone who compromises a dev machine also has access to source control, so they could get all this metadata anyways.

The article is just sensationalism.

gitroom|10 months ago

Man, this got me thinking hard about where the line really is between a real risk and hype. Everyone draws it differently. Do you ever catch yourself worrying too much about stuff that probably isn't even a threat?

tasuki|10 months ago

Without reading this article: of course!

Most "security tools" introduce security risks. An antivirus is usually a backdoor to your computer. So are various "endpoint protection" tools.

The whole security industry is a sham.