DNS And Docker Containers

56 points | hunvreus | 11 years ago | wiredcraft.com

21 comments

thedevopsguy | 11 years ago

When I first started working with Docker I'd use the

  docker inspect $CONTAINER | grep -i VAR

pattern a lot, until I discovered that you can use the container name and a Go template:

   docker inspect --format '{{ .NetworkSettings.IPAddress }}' replset1
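For scripting, that same template can be looped over every running container to build a dnsmasq-style hosts file. A minimal sketch, assuming a Docker version where `docker ps -q` and `.Name` in inspect templates behave as below (the helper names are mine, not Docker's):

```shell
# Format one hosts-file line ("IP name") from an IP and a container name.
hosts_entry() {
  printf '%s %s\n' "$1" "$2"
}

# Emit a hosts line for every running container.
# Requires a running Docker daemon; defined but not invoked here.
dump_container_hosts() {
  for id in $(docker ps -q); do
    ip=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$id")
    # .Name comes back with a leading slash, e.g. "/replset1"
    name=$(docker inspect --format '{{ .Name }}' "$id" | sed 's|^/||')
    hosts_entry "$ip" "$name"
  done
}
```

The output can be dropped into a file that dnsmasq reads via --addn-hosts.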

Service discovery with Docker is still a pain point. Serf [1] and etcd [2] are tools that manage a cluster of services and help solve the problem described in the article.

[1] Serf is by the guys behind Vagrant. - http://www.serfdom.io/

[2] etcd - http://coreos.com/using-coreos/etcd/

KirinDave | 11 years ago

It actually seems like Consul might still be an easier way to go than this. Dnsmasq is certainly easy, but Consul isn't exactly challenging, and it's happy to run in its own container when local dev is key.

Why have a different methodology for local dev and deploy when you could have the SAME methodology for both at almost no extra cost?

thu | 11 years ago

I'm using that approach for Reesd. Instead of a single dnsmasq instance on the host, I have one instance (actually a Docker container) for each logical group of containers. Containers can reference db.storage.local, and different groups will talk to their own database. This is pretty cool for integration tests, or for spinning up a new version of a group before promoting it into production.

Before that, I was using SkyDNS, but I was not happy about having to maintain/reset the TTL.

Side note: when you use --dns on a container and commit the result to an image, that image will have the --dns value in its /etc/resolv.conf (meaning you still have to run a DNS server at that address, or pass --dns again to supply a new one).

balou | 11 years ago
Sounds interesting, got anything more detailed about that approach?

Thanks for the --dns tip!

balou | 11 years ago

I'd love to hear about other people's workflows when it comes to addressing this issue. Are you simply re-provisioning the whole shebang when a critical component has to be updated or changed? Relying on etcd/ZooKeeper as mentioned? Or ...
geekbri | 11 years ago

We use Consul. Each instance has its own BIND setup with DNS forwarding: it forwards normal traffic on through the regular DNS servers, and anything in .consul on to the Consul client running locally.
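That split is commonly done with a conditional forward zone. A sketch of the relevant named.conf fragment, assuming the local Consul agent is serving DNS on its default address of 127.0.0.1 port 8600 (everything else falls through to BIND's normal forwarders):

```
zone "consul" IN {
    type forward;
    forward only;
    forwarders { 127.0.0.1 port 8600; };
};
```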
mmmooo | 11 years ago

Just wrote something very similar, but targeting development sandboxes. The host runs a script that uses RubyDNS, with a zone cut to the host for dev.domain.com. RubyDNS then uses docker-api (with a small Redis cache) to respond to DNS requests for *.dev.domain.com. Developers can now reference their sandboxes as foo.dev.domain.com from within the host, or anywhere else, including their own machines. More to it than this, of course, but that's the DNS portion.
aus_ | 11 years ago
SkyDock[0] is another alternative. It uses SkyDNS. It is a little more setup, but it has a few additional features. crosbymichael is a sharp guy.

[0]: https://github.com/crosbymichael/skydock

billsmithaustin | 11 years ago
One shortcoming of the OP's approach is the out-of-band process for updating dnsmasq's hosts file. If you want to use something like fig to start a cluster of containers, you still have to wrap a script around the fig command to update the hosts file. I wonder how hard it would be to modify SkyDock to update dnsmasq (update the hosts file and then kill -HUP dnsmasq).
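A wrapper along those lines could be quite small. A sketch, with the hosts-file path and function names entirely assumed (dnsmasq re-reads /etc/hosts and any --addn-hosts files when sent SIGHUP):

```shell
#!/bin/sh
# Rewrite dnsmasq's extra hosts file and nudge dnsmasq to re-read it.
HOSTS_FILE=${HOSTS_FILE:-/etc/dnsmasq.d/docker.hosts}

# Replace the hosts file with stdin, then SIGHUP dnsmasq.
reload_dnsmasq() {
  cat > "$HOSTS_FILE"
  kill -HUP "$(pidof dnsmasq)"
}

# Intended usage (requires docker, fig and dnsmasq; not run here):
#   fig up -d
#   for id in $(docker ps -q); do
#     docker inspect --format \
#       '{{ .NetworkSettings.IPAddress }} {{ .Name }}' "$id"
#   done | tr -d '/' | reload_dnsmasq
```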
opendais | 11 years ago

Does anyone else just bind Docker containers to specific ports and/or groups of ports on the same interface, thereby not needing to care about what IP address each container thinks it has?
KirinDave | 11 years ago
That only makes sense in small deployments where you aren't multihomed.

What's frustrating is that I don't think Docker currently has an elegant way to make you not care whether the remote service is local or not. What'd be interesting is a special tun interface for that kind of communication: you could bind it to local containers, or just dump the traffic back onto your LAN, NAT'd to a remote host.

KaiserPro | 11 years ago

I'm 100% on board with how Docker deals with virtual ethernet devices, but why would you not want to use DNS/DHCP?
balou | 11 years ago

I'm perfectly fine with the DNS/DHCP approach, just wondering what alternatives are out there. The dnsmasq approach worked like a charm on several of our projects for over a year.

I'm always wondering about the extra round-trip on DNS requests when we could have a hardcoded value in the host. We're talking local network, so the latency ain't much of an issue, but still. Then there is the possibility of a dnsmasq restart while a request occurs; caching à la nscd could work, but then we're in for a lot more trouble when it comes to expiring that cache!

emmelaich | 11 years ago

Small thing, but if you update /etc/hosts you can do a reload of dnsmasq instead of a restart.
balou | 11 years ago
Neat, I'll give it a shot.