(no title)
antoncohen | 4 years ago
The metadata server is documented to always be at 169.254.169.254[1]. But Google software (agents and libraries on VMs) resolves it by looking up metadata.google.internal. If metadata.google.internal isn't in /etc/hosts, as can be the case in containers, this can result in actual DNS lookups over the network to get an address that should already be known.
AWS uses the same approach of a fixed link-local address for their metadata server, but accesses it via the IP address rather than a hostname[2].
I've seen Google managed DNS servers (in GKE clusters) fall over under the load of Google libraries querying for the metadata address[3]. I'm guessing Google wants to maintain some flexibility, which is why they are using a hostname, but there are tradeoffs.
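Since the address is fixed, a client can skip the hostname entirely. A minimal sketch (the helper name is mine, not from any Google library; the Metadata-Flavor header is the one the real server requires):

```python
import urllib.request

# The GCE metadata server always lives at this link-local address, so a
# client can avoid the metadata.google.internal DNS lookup entirely.
METADATA_IP = "169.254.169.254"

def metadata_request(path):
    # Hypothetical helper: builds a request to the metadata server by IP.
    # The Metadata-Flavor header is required, as a basic SSRF guard.
    return urllib.request.Request(
        f"http://{METADATA_IP}/computeMetadata/v1/{path}",
        headers={"Metadata-Flavor": "Google"},
    )

# Usage (only resolves on a real GCE VM):
#   with urllib.request.urlopen(metadata_request("instance/hostname")) as r:
#       print(r.read().decode())
```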
[1] https://cloud.google.com/compute/docs/internal-dns
[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance...
[3] This is easily solvable with Kubernetes HostAliases that write /etc/hosts in the containers.
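For anyone curious, the HostAliases fix in [3] looks roughly like this (pod and container names are placeholders); it pins the name in each container's /etc/hosts so no DNS query is ever made:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical pod name
spec:
  hostAliases:
    - ip: "169.254.169.254"
      hostnames:
        - "metadata.google.internal"
  containers:
    - name: app            # hypothetical container
      image: registry.example.com/app:latest
```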
bradfitz | 4 years ago
> Using a fixed IP makes it very difficult to spoof the metadata
https://github.com/googleapis/google-cloud-go/commit/ae56891...
skj | 4 years ago
It was not straightforward. I learned a lot about iptables and docker networking.
antoncohen | 4 years ago
There might be a persistence issue: it seems part of this attack was that the IP persisted in /etc/hosts even after the real DHCP server took over again. But even just writing to /etc/hosts could open the door to redirecting traffic to an attacker-controlled server.
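The reason one written line is enough: with the default "hosts: files dns" order in /etc/nsswitch.conf, the resolver consults /etc/hosts before DNS, so a hosts entry wins every future lookup. A toy model of that file-first lookup (the IP is a TEST-NET example, not a real attacker address):

```python
def hosts_lookup(hosts_text, name):
    """Return the first IP mapped to `name` in /etc/hosts-style text,
    mimicking the resolver's files-before-DNS behavior."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if name in names:
            return ip
    return None  # only now would the resolver fall through to DNS

# One appended line, and every later lookup goes to the attacker:
poisoned = "127.0.0.1 localhost\n203.0.113.7 metadata.google.internal\n"
```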