Service discovery with Docker is still a pain point. Serf [1] and etcd [2] are tools that manage a cluster of services and help solve the problem described in the article.
It actually seems like Consul might still be an easier way to go than this. Dnsmasq is certainly easy, but Consul isn't exactly challenging either, and it's happy to run in its own container when local dev is key.
Why have a different methodology for local dev and deploy when you could have the SAME methodology for both at almost no extra cost?
I'm using that approach for Reesd. Instead of a single dnsmasq instance on the host, I have one instance (actually a Docker container) for each logical group of containers. Containers can reference db.storage.local, and different groups will actually talk to their own database. This is pretty cool for integration tests, or for spinning up a new version of a group before promoting it to production.
Before that I was using SkyDNS, but I wasn't happy about having to maintain/reset the TTL.
Side note: when you use --dns on a container and commit the result to an image, that image will have the --dns value in its /etc/resolv.conf (meaning you still have to run a DNS server at that address, or pass --dns again to supply a new one).
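A per-group dnsmasq container's config might look something like this minimal sketch (the IP addresses and upstream resolver are made up; only db.storage.local comes from the comment above):

```
# dnsmasq.conf for one logical group (addresses are hypothetical)
no-resolv                                # ignore the host's /etc/resolv.conf
server=8.8.8.8                           # upstream resolver for everything else
address=/db.storage.local/172.17.0.10    # this group's own database container
```

Since `address=/domain/ip` matches the domain and everything under it, each group's dnsmasq can answer the same names with different addresses.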
I'd love to hear about other people's workflows for addressing that issue. Are you simply re-provisioning the whole shebang when a critical component has to be updated or changed? Relying on etcd/ZooKeeper as mentioned? Or something else?
We use Consul. Each instance has its own BIND setup with DNS forwarding: normal traffic is forwarded on through the regular DNS servers, and anything in .consul goes to the Consul client running locally.
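The BIND side of that setup can be sketched as a forward zone (a guess at the config, not the commenter's actual files; 8600 is Consul's default DNS port):

```
// named.conf fragment (sketch): hand anything under .consul to the
// local Consul agent's DNS endpoint; all other names go through the
// normal forwarders configured in the options block.
zone "consul" IN {
    type forward;
    forward only;
    forwarders { 127.0.0.1 port 8600; };   // Consul agent's DNS port
};
```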
Just wrote something very similar, but targeting development sandboxes more. The host runs a script that uses RubyDNS, with a zone cut to the host for dev.domain.com. RubyDNS then uses docker-api (with a small Redis cache) to respond to DNS requests for *.dev.domain.com. Developers can now reference their sandboxes as foo.dev.domain.com from within the host or anywhere else, including their own machines. There's more to it than this of course, but that's the DNS portion.
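The name-mapping step in that design can be sketched as a small shell function (the dev.domain.com zone comes from the comment; the function name is made up, and the real version would answer via RubyDNS after a docker-api lookup):

```shell
# Map a sandbox hostname under *.dev.domain.com to the sandbox name that
# would then be resolved through docker-api (function name is hypothetical).
sandbox_for() {
  case $1 in
    *.dev.domain.com) printf '%s\n' "${1%.dev.domain.com}" ;;
    *)                return 1 ;;  # not in our zone: let upstream DNS answer
  esac
}
```

For example, `sandbox_for foo.dev.domain.com` yields `foo`, while anything outside the zone falls through to the normal resolvers.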
One shortcoming of the OP's approach is the out-of-band process for updating dnsmasq's hosts file. If you want to use something like fig to start a cluster of containers, you still have to wrap a script around the fig command to update the hosts file. I wonder how hard it would be to modify SkyDock [0] to update dnsmasq (update the hosts file and then kill -HUP dnsmasq).
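Such a wrapper might be as small as this sketch (the hosts-file path and function names are assumptions; `docker inspect` prints container names with a leading slash, and dnsmasq re-reads its hosts files on SIGHUP):

```shell
# Regenerate dnsmasq's extra hosts file from the running containers, then
# HUP dnsmasq so it reloads the file. The path is hypothetical and would
# be referenced from dnsmasq.conf via addn-hosts=.
HOSTS_FILE=/etc/dnsmasq-docker.hosts

# Turn "172.17.0.2 /web" (as docker inspect prints it) into "172.17.0.2 web".
hosts_line() {
  printf '%s %s\n' "$1" "${2#/}"
}

refresh_dns() {
  docker ps -q | while read -r id; do
    docker inspect --format '{{.NetworkSettings.IPAddress}} {{.Name}}' "$id"
  done | while read -r ip name; do
    hosts_line "$ip" "$name"
  done > "$HOSTS_FILE"
  pkill -HUP dnsmasq   # dnsmasq reloads /etc/hosts and addn-hosts on SIGHUP
}

# Usage: fig up -d && refresh_dns
```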
Does anyone else just bind Docker containers to specific ports and/or groups of ports on the same interface, thereby not needing to care about what IP address each container thinks it has?
That only makes sense in small deployments where you aren't multihomed.
What's frustrating is that I don't think Docker currently has an elegant way to make you not care if the remote service is local or not. What'd be interesting is if there was a special tun interface for that kind of communication and you could bind that to local containers or just dump the traffic back onto your LAN, NAT'd to a remote host.
I'm perfectly fine with the DNS/DHCP approach; just wondering what the alternatives out there are. The dnsmasq approach worked like a charm on several of our projects for over a year.
I always wonder about the extra round-trip of a DNS request when we could have a hardcoded value on the host. We're talking local network, so latency isn't much of an issue, but still. Then there's the possibility of a dnsmasq restart while a request occurs; caching a la nscd could work, but then we're in for a lot more trouble when it comes to expiring that cache!
thedevopsguy | 11 years ago
[1] Serf is by the guys behind Vagrant. - http://www.serfdom.io/
[2] etcd - http://coreos.com/using-coreos/etcd/
KirinDave | 11 years ago
thu | 11 years ago
balou | 11 years ago
Thanks for the --dns tip!
balou | 11 years ago
geekbri | 11 years ago
mmmooo | 11 years ago
aus_ | 11 years ago
[0] SkyDock - https://github.com/crosbymichael/skydock
billsmithaustin | 11 years ago
opendais | 11 years ago
KirinDave | 11 years ago
KaiserPro | 11 years ago
balou | 11 years ago
emmelaich | 11 years ago
balou | 11 years ago