EC2's most dangerous feature

256 points | dwaxe | 9 years ago | daemonology.net

101 comments

[+] hueving|9 years ago|reply
The blog post buries the lede a little bit because it's talking about lots of pain points with the EC2 API and IAM. The important point to take away is that any process with network access running on your instance can contact the EC2 metadata service at http://169.254.169.254 and get the instance-specific IAM credentials.

Think about things like services that accept user submitted URLs, crawl them, and display results...
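To make that concrete, here is a sketch of the well-known paths involved (the path layout is EC2's documented metadata layout; the role name "my-role" is a placeholder):

```python
# Sketch: where instance-role credentials live on the EC2 metadata service.
# Any local process that can make an HTTP request can read them.
BASE = "http://169.254.169.254/latest/meta-data"

def metadata_url(path):
    """Build a metadata-service URL for a given sub-path."""
    return f"{BASE}/{path}"

# First list the attached role names, then fetch the temporary credentials
# (AccessKeyId, SecretAccessKey, Token) for one of them:
roles_url = metadata_url("iam/security-credentials/")
creds_url = metadata_url("iam/security-credentials/my-role")  # "my-role" is hypothetical
```

An SSRF bug anywhere on the instance only needs to coax the server into fetching one of these URLs.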

[+] cddotdotslash|9 years ago|reply
This is actually a vulnerability I've seen countless times. If a site accepts a URL which it reads and returns to the user, submit a 169.254.169.254 metadata service URL. About 1 out of 5 times I've tried it, I'm able to get a response.
[+] inopinatus|9 years ago|reply
Fun fact: you can ignore EC2 instance roles and use the Amazon Cognito service to let processes obtain role-based short-term credentials.

I've described it previously as "Kerberos for the AWS Cloud" (which will make any self-respecting crypto nerd squirm) but hopefully it conveys the general idea. Yes it was designed for mobile & browser use, and yes the API isn't pretty, but it's there.

[+] novaleaf|9 years ago|reply
do you know if any similar vulnerability exists with Azure or Google Cloud?

edit: not sure why the downvote, I use Google Cloud, so honest question :(

[+] strictfp|9 years ago|reply
...and that you can imitate the metadata service to make life easier :) A plug for my friend's side project: https://github.com/otm/limes . It's a local metadata service. Very handy for making AWS libs work without having to configure them much. And great support for MFA.
[+] kevsim|9 years ago|reply
I believe Pocket faced exactly this issue once upon a time.
[+] cesarb|9 years ago|reply
What I've done for a previous company was to, as one of the very first things done within every EC2 instance, add an iptables owner match rule to only allow packets destined to 169.254.169.254 if they come from uid 0. Any information from that webservice that non-root users might need (for instance, the EC2 instance ID) is fetched on boot by a script running as root and left somewhere in the filesystem.
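A sketch of what such rules might look like (hypothetical, not the commenter's actual rules; chain layout and rule ordering depend on your setup, and these must run as root early in boot):

```shell
# Allow only root (uid 0) to reach the metadata service...
iptables -A OUTPUT -d 169.254.169.254 -m owner --uid-owner 0 -j ACCEPT
# ...and drop metadata traffic from every other local user.
iptables -A OUTPUT -d 169.254.169.254 -j DROP
```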
[+] falcolas|9 years ago|reply
This won't help with IAM roles, since the credentials provided in the metadata expire. Of course, a small tweak to the iptables entry would help there as well.

Mind posting your entry for us iptables-impaired folks?

[+] manojlds|9 years ago|reply
But the data is not static.
[+] jeremyjh|9 years ago|reply
That is the obvious answer, do you have any scripts you could share?
[+] skywhopper|9 years ago|reply
Hopefully the operators using EC2 instance profiles understand and weigh the risks of using that feature. It's good to be cautious, but the feature is only dangerous if you don't take the time to understand it. Running a server on the Internet at all is "dangerous" in the same sense. And for this particular risk, it turns out there's a simple fix.

He _is_ right in his first criticism that the IAM access controls available for much of the AWS API are entirely inadequate. In the case of EC2 in particular, it's all or nothing--either your credentials can call the TerminateInstances API or they can't. I'm sure Amazon is working on improving things, but for now it's pretty terrible. But in practice it just means you have to take care in different ways than you would if his tag-based authz solution were implemented.

That said, while it's certainly frustrating to an implementor, it's not "dangerous" that limitations exist in these APIs. We're talking about decade-old APIs from the earliest days of AWS, and while things have been added, the core APIs are still the same. That's an amazing success story. But like any piece of software, there are issues that experienced users learn how to work around.

You can bet that the EC2 API code is hard and scary to deal with for its maintainers. Adding a huge new permissions scheme is likely nearly impossible without a total rewrite... I don't envy them their task.

[+] jamiesonbecker|9 years ago|reply
It's impossible to limit access to any part of the instance metadata in any way w/o firewalling (which has its own issues), or even to expire access to any part of it. Since instance profiles have keys (even though automatically rotated), any process on the system, owned by any user, can access anything exposed via the instance role. This makes embedding IAM keys into your instance and protecting them with root-only permissions or ACLs MUCH MUCH safer... but AWS specifically states that instance profiles are preferred. In fact, for our Userify AWS instances (ssh key management), we are required to use instance roles and not allowed to offer the option. (This is why we do not offer S3 bucket storage on our AWS instances but we do on Pro and Enterprise self-hosted.)

The biggest issue with IAM instance profiles is that they trade security for convenience... and it's not a good trade.

[+] eeeeeeeeeeeee|9 years ago|reply
I agree that it's important to do your research, but Amazon does us no favors here. I didn't know about this potential leakage until I needed to use the metadata system in AWS, and then I realized the potential for abuse. Honestly, this should probably be opt-in, and off by default.

The fact is that Amazon provides a commodity service, and most people don't expect an internal HTTP service that exposes potentially sensitive information to non-root users.

I actually disagree with the OP where he says they should use Xen Store for metadata. If I were Amazon, there is no way I would want to commit to using an option that is specific to one hypervisor technology. What if Amazon wants to switch to KVM?

[+] jcrites|9 years ago|reply
I appreciate your comment!

> either your credentials can call the TerminateInstances API or they can't

Note that you can restrict the inputs to this API using IAM Policy in semantically meaningful ways. Three controls I'm familiar with that are useful for restricting inputs are the resource type for instances and the conditions for instance profile and resource tags [1]. The latter two are most flexible.

An instance profile restriction allows you to express a concept like, "This user may only terminate instances that are part of this specific instance profile"; in that way, the instance profile characterizes a collection of instances that can be affected by the policy. The resource tag condition can be used in a similar way. [2] is an example of a policy restricting terminations based on instance profile. The key fragment of it is:

  "ArnEquals": {"ec2:InstanceProfile":
    "arn:aws:iam::123456789012:instance-profile/example"}
A role with this policy condition can only affect instances that are part of the specified instance profile.

This allows you to create roles or users that have access to instances that are part of a certain instance profile only. If you wanted a group of instances to be able to manage (e.g. terminate) themselves, then the role on those instances could be access-restricted to the instance profile of those same instances. By assigning different fleets of instances different instance profiles, you can control which users or roles can access each fleet by restricting access to the fleet's Instance Profile. Similar restrictions are possible with resource tags on instances.
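A fuller sketch of such a policy might look like the following (the account ID, region, and profile name are placeholders; only the ArnEquals condition fragment comes from the example above):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:TerminateInstances",
    "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
    "Condition": {
      "ArnEquals": {
        "ec2:InstanceProfile": "arn:aws:iam::123456789012:instance-profile/example"
      }
    }
  }]
}
```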

That said, though, I agree that there's room to improve the access control story. Managing instances through their full lifecycle sometimes involves accessing other resources like EBS volumes too, and it's not easy to construct a policy container that sandboxes access to just the right resources and actions while allowing the creation of new resources. Colin called out some of the gaps in his post. If you do not need to allow the creation of new resources then the problem is a bit easier. For example, you can avoid the need to create new EBS volumes directly by specifying EBS root volumes as part of instance creation using BlockDeviceMapping.

[1] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-suppo... [2] https://gist.github.com/jcrites/d6826fc57b17c3c0ac50cae1fc9b...

[+] _hyn3|9 years ago|reply
tl;dr:

1) IAM instance roles have no security mechanisms to protect them from being read by any process on the instance, which puts them entirely outside the Linux/UNIX/Windows permission systems. (The real reason for this is that instance metadata started out as a convenient semi-public information store for things like the instance ID, and was later extended to also serve secret material, which was, at best, an idiotic move.) As the author points out, Xen already provided a great filesystem alternative that could be mounted as another drive (or network drive) and managed with the regular OS filesystem permission system (reading an instance ID would just be a matter of reading a "file")... for some reason, AWS didn't leverage this and instead just added the secret material to its local instance metadata webserver.

2) The API calls are not fine-grained enough and/or there are big holes in their coverage -- so, for instance, if you want to use some other AWS services, you can end up exposing much more than you intended.

[+] 0x0|9 years ago|reply
This is interesting! Can this be abused with AWS-hosted services that reach out to fetch URLs? For example, image hosts that allow specifying a URL to retrieve, or OAuth callbacks, etc.? Are there any tricks to be played if someone were to register a random domain and point it at 169.254.169.254 (or worse, flux between 169.254.169.254 and a public IP, in case there is blacklisting application code that first resolves the hostname to check it, but then passes the whole URL into a library that resolves it again)?
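That resolve-then-refetch gap can be sketched like this (hypothetical helper names; the second lookup inside the HTTP library is where DNS rebinding bites):

```python
import socket
import ipaddress
import urllib.request
from urllib.parse import urlparse

def is_blocked(host):
    """Resolve the host once and reject link-local or private targets."""
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    return ip.is_link_local or ip.is_private

def fetch(url):
    # Flawed pattern: resolve here to vet the host...
    if is_blocked(urlparse(url).hostname):
        raise ValueError("blocked address")
    # ...but urlopen resolves the hostname AGAIN, and a fast-fluxing DNS
    # record can answer 169.254.169.254 on that second lookup.
    return urllib.request.urlopen(url).read()
```

Resolving once and connecting to the vetted IP (while sending the original Host header) avoids the double-resolution window.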
[+] gtsteve|9 years ago|reply
This is an interesting attack that I must confess I hadn't thought of, but surely any service that accepts an arbitrary URL should have a list of IP ranges to avoid. However, to harden a role in the event of instance role credentials leaking, you could use an IAM Condition [0].

There is actually an example of this in the IAM documentation [1], although the source VPC parameter doesn't work for all services, and I can't see a list of services that support this parameter. This would ensure that the requests actually came from instances within your VPC.

[0] http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_po...

[1] http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucke...

[+] jamiesonbecker|9 years ago|reply
The point is not requests that originate elsewhere. The point is that this system is not protected in any way from any other process on your system.
[+] BraveNewCurency|9 years ago|reply
Er, the problem is not "people can hit an un-routable IP from outside your instance". The problem is that "if your instance allows an attacker to make an HTTP request, you might expose sensitive information". For example, web crawlers or other fetchers.
[+] rcaught|9 years ago|reply
> almost as trivial for EC2 instances to expose XenStore as a filesystem to which standard UNIX permissions could be applied, providing IAM Role credentials with the full range of access control functionality which UNIX affords to files stored on disk.

Doesn't this become more complicated when you think about EC2 offering Windows instances? Even with straight UNIX file writing, what writes this? Where does it write this? Which user has read permissions?

[+] jamiesonbecker|9 years ago|reply
In UNIX, the same way that EBS volumes are mounted... think of the /proc or /sys virtual filesystems.

In Windows, I'm guessing that this would be exposed as a network drive.

[+] skywhopper|9 years ago|reply
Yeah, having the metadata available over an http interface is actually brilliant. Simple HTTP calls are easy to do from any network-capable OS or language.
[+] Rapzid|9 years ago|reply
I've used firewall rules in the past to scope the metadata store to admin users.
[+] patsplat|9 years ago|reply
EC2 instances are designed to enforce isolation between instances, not processes. Presumably there would only be one primary service running on each.

Users of AWS are pushed towards an architecture based on containers and services. AWS is the OS, not any individual machine.

[+] jamiesonbecker|9 years ago|reply
Until AWS fixes this (which, as the article points out, may never happen), a root-owned file chmod'ed to 600 (readable only by root) is actually much safer, even when STS auto-rotation is taken into account.
[+] djb_hackernews|9 years ago|reply
If users can issue arbitrary commands on an instance, then that instance should have zero IAM roles and should delegate actions to services running on separate instances.

The instances hosting our users go a step further and null-route metadata service requests via iptables.

[+] jeremyjh|9 years ago|reply
It isn't just about users; it's also about malicious software you may accidentally install, for example if a library you use is compromised, as has happened before with Ruby gems.
[+] tex0|9 years ago|reply
It's much the same problem with Google Cloud. Even worse there, I'd say.
[+] sgrytoyr|9 years ago|reply
Could you please elaborate? I'm not doubting you, just very interested in learning more.
[+] boulos|9 years ago|reply
I just double-checked, and the most similar thing we expose is the tokens for each service account in the instance metadata. As pointed out in the article, any uid on the box can read that. But you can create instances with a zero-permission service account (the equivalent of nobody?) and just avoid it.

This does mean that everywhere else you'd have to have explicit service accounts and such, but that seems like a reasonable "workaround" until or unless we make metadata access more granular (I like the block device idea! Would you want entirely different paths for JSON versus "plain" though?)

[+] yandie|9 years ago|reply
If you're sharing the same instance among multiple users, trying to achieve security among the users is almost impossible anyway. That's why physical separation/virtualization is one of the first things to focus on when talking about security.
[+] jamiesonbecker|9 years ago|reply
Isolation is definitely important, but not all parts of a system running a single function need the same levels of access, and in fact it may be possible to target those components separately. Take a look at the Wikipedia articles for 'defense in depth' or 'privilege separation' to see how important it is to keep each component within a system isolated as much as possible. (This is also why you don't want to rely on only a perimeter firewall for access control.)
[+] icedchai|9 years ago|reply
IAM instance roles are still an improvement over how it was typically done in the past: hard-coding the same key in a configuration file and deploying it everywhere.

It's a balance between security and convenience.

[+] logronoide|9 years ago|reply
The same happens with metadata access in OpenStack.

The access is controlled by source IP (and namespace). I wonder if it's possible to spoof the IP and access Metadata of other servers/users.

[+] Thaxll|9 years ago|reply
It has been the case for 10 years, and everyone knows that; I don't see the problem. If you're not happy with it, just use API keys.
[+] cperciva|9 years ago|reply
OK, dwaxe, I have to ask: Are you a robot? Because I uploaded this blog post, tweeted it, and then came straight over here to submit it and you still got here first.

Not that I mind, but getting your HN submission in within 30 seconds of my blog post going up is very impressive if you're not a robot.

[+] dwaxe|9 years ago|reply
Yes I am. This is my personal account, but I use it to automatically post to Hacker News. I was playing around with BigQuery one day and found the Hacker News dataset [1]. From my experience with the Reddit submissions dataset [2], I knew that I could compose this query,

  SELECT
    AVG(score) AS avg_score,
    COUNT(*) AS num,
    REGEXP_EXTRACT(url, r'//([^/]*)/') AS domain
  FROM [fh-bigquery:hackernews.full_201510]
  WHERE score IS NOT NULL AND url <> ''
  GROUP BY domain
  HAVING num > 10
  ORDER BY avg_score DESC

which returns a list of domains with more than ten submissions sorted by average score. This turns out to be a list of some of the most successful tech blogs on the internet, as well as various YCombinator related materials. Out of the domains with over 100 submissions, daemonology.net has the 9th highest average score per submission. I manually visited all the domains with more than about 30 submissions, found the appropriate xml feeds, and saved them. I added a few websites like eff.org whose messages I think everyone should read anyways.

Then I jumped into python and started trying to figure out how to post to Hacker News. It was a little more complicated than I anticipated [3], but an open source HN app for Android helped me figure it out.

I set up a cron job on my $5 Digital Ocean droplet that runs the script every few minutes (pseudocode):

  if you can reach http://news.ycombinator.com:
      check all feeds for new entries
      post a new entry to HN
      sleep for an hour before posting another

[1] https://bigquery.cloud.google.com/dataset/bigquery-public-da...

[2] The only difference on Reddit is the subreddit system.

[3] After you send a POST request to the login screen, Hacker News gives you a url with a unique "fnid" parameter, and you send another POST request to another url with the appropriate "fnid".

[+] jacquesm|9 years ago|reply
Suggestion: post before tweeting.

And to your question, dwaxe is not a bot (there are comments associated with the account too), and this has happened before (apparently a lightning fast submitter):

https://news.ycombinator.com/item?id=12218797

Of course he/she could still be running a script.

[+] brettproctor|9 years ago|reply
Assuming dwaxe replies claiming to not be a robot, how would we go about verifying? :P
[+] latentpot|9 years ago|reply
Could be using something like IFTTT with a simple rule RSS to Post Immediately on Forum
[+] jakozaur|9 years ago|reply
Most likely dwaxe is a human using scripts to boost his karma. Almost like an athlete using performance-enhancing drugs.

Side question: I wonder when AI will pass the Hacker News Turing test, so that a bot can trick us into thinking it's human through HN comments.