I've got a strong interest in overlay networking solutions like Weave, but I'm not an expert on Docker or other container solutions.
What's the new thing here? If I understood correctly, it seems that you can connect your Docker host to an overlay network, so your containers can access other containers and resources through it. Am I correct to think this facilitates orchestration of the containers' network?
Disclosure: I am behind https://wormhole.network, which could be seen as a sort of Weave competitor, but it isn't. It covers other use cases, though there is some overlap, e.g. overlay multi-host networking for containers (https://github.com/pjperez/docker-wormhole). It doesn't require changes on the host itself, but it can't be orchestrated.
I haven't come across your project before. Any relation to my (similarly named) project, https://github.com/vishvananda/wormhole ? I always thought that one of the most interesting pieces of my project was easy IPsec tunnel setup. It turns out that setting up IPsec tunnels is pretty tricky.
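For a sense of why manual IPsec setup is tricky, here is a bare-bones sketch using the kernel's `ip xfrm` interface. All addresses, subnets, SPIs, and keys below are made up for illustration; a working tunnel also needs the mirror-image states and policies on the other host, and typically authentication as well.

```shell
# Security association: encrypt traffic from host A (192.0.2.1) to host B (198.51.100.1)
sudo ip xfrm state add src 192.0.2.1 dst 198.51.100.1 \
    proto esp spi 0x1000 mode tunnel \
    enc aes 0x00112233445566778899aabbccddeeff

# ...and the reverse direction, with its own SPI and key
sudo ip xfrm state add src 198.51.100.1 dst 192.0.2.1 \
    proto esp spi 0x1001 mode tunnel \
    enc aes 0xffeeddccbbaa99887766554433221100

# Policy: which traffic the tunnel carries (outbound side shown; inbound needed too)
sudo ip xfrm policy add src 10.0.1.0/24 dst 10.0.2.0/24 dir out \
    tmpl src 192.0.2.1 dst 198.51.100.1 proto esp mode tunnel
```

Get any one of the SPIs, keys, or selectors out of sync between the two hosts and traffic silently drops, which is exactly what makes automated setup attractive.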
I haven't looked at this in detail, but does this work with the standard networking features introduced in Docker 1.9 [1] and 1.10 [2]? Can I still use 'docker network create/connect' and the DNS service discovery features of Docker? Can containers interoperate regardless of the choice of Docker plugin, or will they only work on a plugin based on the weave proxy? The wording in the post leaves that ambiguous.
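To make the question concrete, this is the kind of workflow being asked about (network and container names here are examples, assuming Docker 1.10+ with the embedded DNS server):

```shell
docker network create mynet                       # default bridge driver; --driver selects a plugin
docker run -d --net=mynet --name=db redis         # container joins the network at start
docker run --rm --net=mynet busybox ping -c 1 db  # 'db' resolves via Docker's embedded DNS
docker network connect mynet other-container      # attach an already-running container
```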
Well, sort of. Before Docker gained native networking features, Weave relied on a proxy that intercepts Docker API calls to set up the network before passing requests on to Docker. It was, and is, a workaround for Docker's lack of plugin support; even after network plugin support was added, the proxy still enables functionality that is hard to implement via the plugin mechanism. There is now a Docker network plugin for Weave too.
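The proxy approach looks roughly like this in practice (commands as of Weave 1.x; details may have changed since):

```shell
weave launch                   # start the Weave router and the Docker API proxy on this host
eval $(weave env)              # point DOCKER_HOST at the proxy instead of the Docker daemon
docker run -d --name=web nginx # this call now flows through the proxy, which attaches
                               # the container to the Weave network before Docker sees it
```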
If I've understood correctly, what they've done now is leverage that proxy to intercept Docker API calls: if a call requests a network provided by a CNI plugin, they invoke CNI "on behalf of Docker" and then pass a modified API call on to Docker. The result is that Docker, Kubernetes, and Rocket can sit on the same overlay network.
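For reference, a CNI network is described by a small JSON config that the runtime (or, here, the shim acting on Docker's behalf) hands to the plugin. A minimal sketch, with the file name and `type` value as illustrative assumptions:

```shell
cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
{
    "name": "mynet",
    "type": "weave-net"
}
EOF
```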
> Can containers interoperate regardless of the choice of Docker plugin, or will they only work on a plugin based on the weave proxy?
Containers don't care what network you configure. Basically, Docker will just use a bridge interface, assign each container an IP address on that bridge, and optionally expose ports on the host. Docker's networking support lets it query an external plugin API to obtain the details to use for a container. Kubernetes and Rocket implement a different plugin API for the same purpose, but in both cases all of this happens before the container is started.
Once it's started, the container just sees an interface bound to a suitable IP, so your containers shouldn't need to care.
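To illustrate, here is roughly what the runtime does under the hood, sketched by hand with made-up interface names and addresses (requires root; Docker automates all of this):

```shell
ip link add br0 type bridge && ip link set br0 up      # the bridge (think docker0)
ip link add veth-host type veth peer name veth-ctr     # a veth pair: one end per side
ip link set veth-host master br0 && ip link set veth-host up
ip link set veth-ctr netns $CONTAINER_PID              # move one end into the container's netns
# inside the container's namespace: bring the interface up and give it an IP on the bridge subnet
nsenter -t $CONTAINER_PID -n ip addr add 172.17.0.2/16 dev veth-ctr
nsenter -t $CONTAINER_PID -n ip link set veth-ctr up
```

From inside, the container just sees an ordinary interface with an address, which is why it doesn't matter which plugin produced it.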
Other than IPv6 support, is there a reason to use Calico/Weave over Flannel? We've been very happy with Flannel, especially using the clean CoreOS-Flannel integration.
In the case of Weave: encryption, multicast, fault tolerance (fully distributed through a CRDT mesh), and beautiful container integration (it can be used as a Docker network plugin and provides out-of-the-box service discovery). Moreover, it's stupidly simple to set up and use (zero-conf, no kv-store required).
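The zero-conf claim in concrete terms (peer IP and password are examples; flags as of Weave 1.x):

```shell
# On the first host:
weave launch
# On each additional host, point at any existing peer -- no kv-store, no config files:
weave launch 10.0.0.1
# Encryption is one flag: a shared password, no key distribution infrastructure:
weave launch --password s3cr3t 10.0.0.1
```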
I'm looking forward to setting up some trial deployments of Calico at my workplace.
While bridged and/or overlaid networks are easy to understand, native end-to-end routing between containers with regular IP datagrams, plus container-level addressability, has been on my wishlist for a long time.
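The routed (non-overlay) model can be sketched by hand with made-up names and addresses: each container gets its own /32, and hosts simply hold routes to it, with no bridge and no encapsulation in the data path.

```shell
# Host A: route to a local container via its veth interface
ip route add 10.244.1.2/32 dev cali0 scope link
# Host A: route to the containers hosted on host B (192.168.0.12)
ip route add 10.244.2.0/24 via 192.168.0.12
```

Calico's contribution is distributing exactly these kinds of routes automatically (it uses BGP) rather than hand-editing routing tables on every host.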
Ahh ... had to search for "calico tom denham" to find http://www.projectcalico.org/. OK, that makes more sense than a company dealing with biology ...
[1] https://blog.docker.com/2015/11/docker-multi-host-networking...
[2] https://blog.docker.com/2016/02/docker-1-10/
bboreham | 10 years ago:
Weave Net as a product ships with support for the former; the blog post describes work towards support for the latter.
No, they do not interoperate; there is a detailed argument at http://blog.kubernetes.io/2016/01/why-Kubernetes-doesnt-use-...
DanielDent | 10 years ago:
The developers are friendly, helpful and responsive. But in my testing, Weave simply wasn't adequate. I hope they'll get there.
fons | 10 years ago:
Full disclaimer: I work at Weaveworks.
rconti | 10 years ago:
Occasionally it makes me feel left behind.
Then I remember most of these "new hotness" technologies get abandoned as quickly as they get adopted, so I'm not missing much.