top | item 13592864

Container orchestration: Moving from fleet to Kubernetes

283 points | trojanowski | 9 years ago | coreos.com

73 comments

[+] mevile|9 years ago|reply
I love the suggestion for new users to try minikube. I got started with minikube and kubernetes recently and it was only then that I had an aha moment with containers. I get it now. I know containers have been around a while but with kubernetes the orchestration difficulty has been lowered to the point where I can't imagine going back to the way I was getting things working before. From minikube I moved to kubernetes on GCE, and it mostly just worked. I still use minikube for my local dev environment.
[+] gkop|9 years ago|reply
Yes minikube rocks. It essentially fulfills the dream promised but poorly delivered by Docker Compose - a development environment as similar as possible to production.
[+] zebra9978|9 years ago|reply
What do people generally think about Docker Swarm? The new deployment using .yml files is pretty cool: https://www.infoq.com/news/2017/01/docker-1.13

In fact, IMHO kubernetes has tried to do something similar with .. but it is not engineered from the ground up for simplicity. Which is why it has MULTIPLE tools for this - minikube, kubeadm, kompose - but nothing matching the ease of use of docker and its yml files.

The last survey showed 32% of the polled used Docker Swarm versus Kubernetes' 40% - and this is back when Docker Swarm was highly unstable. https://clusterhq.com/2016/06/16/container-survey/#kubernete...

Are people here using Swarm? What have your experiences been like?
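For reference, the 1.13-era Swarm workflow in question is a Compose v3 file plus `docker stack deploy`. A minimal sketch (image and service names are illustrative):

```yaml
# docker-compose.yml (Compose file format v3, Docker 1.13+)
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2   # honored by 'docker stack deploy'; ignored by plain 'docker-compose up'
```

Deployed against a swarm-mode cluster with `docker stack deploy -c docker-compose.yml mystack`.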

[+] moondev|9 years ago|reply
Kubernetes deployments are done via yml (or json) files too. They are called manifests.

I think you are misunderstanding the tools listed. Minikube sets up a single-node local cluster. Kubeadm sets up a multi-node cluster. No matter how or where your cluster is set up, you still deploy with manifests.
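To make "manifests" concrete, a minimal Deployment manifest looks roughly like this (a sketch; names are illustrative, and the apiVersion for Deployments has moved around between releases - `apps/v1` in current Kubernetes):

```yaml
# deployment.yaml - applied with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web        # the Deployment manages pods carrying this label
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
```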

[+] EtienneK|9 years ago|reply
I would also like to know the answer to this question. Every time I try setting up a Kubernetes cluster, it's an exercise in frustration. Docker Swarm is much easier in comparison.

Add to that the fact that Docker Swarm is adding Enterprise features (such as Secrets in 1.13) and that it has an Enterprisey version (Docker Datacenter) which supports multiple teams - why would I, an Enterprise developer and architect, look at Kubernetes over Docker Swarm?

[+] Axsuul|9 years ago|reply
I've tried both. Kubernetes is really targeted towards large production clusters -- even Kubernetes itself requires quite a bit of resources. A single non-HA cluster initialized by kubeadm, for example, couldn't even schedule itself on an n1-standard-1 machine on GCE.

For an MVP or a small production stack that runs on one server, I would go with Docker Swarm for its simplicity and small footprint. And even if you do end up scaling across many nodes, you still won't need k8s (kubernetes).

[+] EwanToo|9 years ago|reply
A brave decision, but I think it's the right one for both CoreOS, and in the long-run, their customers.

Definitely pretty painful for people who have already adopted fleet, but a year of support is much better than I would expect

[+] sytse|9 years ago|reply
I too salute CoreOS for doing the right thing for their customers and the ecosystem. Kubernetes was hard to predict: it didn't grow organically but was suddenly released by Google.

Right now I believe Kubernetes is the project with the most accepted pull requests per day. This came up in a talk from GitHub at Git Merge 2017. It shows that k8s is on its way to becoming the default container scheduler platform. It will be interesting to see how Docker Swarm and Mesosphere will compete during 2017.

The container scheduler is becoming the next server platform. The fifth one after mainframes, minicomputers, microcomputers, and virtual machines.

While configuring GitLab to run on k8s we learned that much of the work (like Helm Charts) doesn't translate to Docker Swarm and Mesosphere. I think there might be strong network effects similar to the Windows operating system.

[+] simonvdv|9 years ago|reply
Hmm, that's a pity, even though it shouldn't come as a surprise to anyone who's actively using/involved with fleet. I like the simplicity and flexibility of fleet (basically distributed systemd) a lot. I don't necessarily want to switch to a bigger scheduler like Kubernetes. Anyone have any suggestions for/experiences with an alternative simpler scheduler (like Nomad, or an alternative solution like the autopilot stuff from Joyent)?
[+] schmichael|9 years ago|reply
Nomad dev here. We should definitely tick the simplicity box for you. If not, let me know. :)

Nomad is a single executable for the servers, clients, and CLI. Just download[0] & unzip the binary and run:

    nomad agent -dev > out &
    nomad init
    nomad run example.nomad
    nomad status example
And you have an example redis container running locally!

Nomad supports non-Docker drivers too: rkt, lxc templates, exec, raw exec, qemu, java.[1] To use the "exec" driver that doesn't use Docker for containerization you'll need to run nomad as root.

[0] https://www.nomadproject.io/downloads.html

[1] https://www.nomadproject.io/docs/drivers/index.html
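For the curious, the job file that `nomad init` writes looks roughly like this (abbreviated sketch; exact fields vary by Nomad version):

```hcl
# example.nomad - an abbreviated version of the file 'nomad init' generates
job "example" {
  datacenters = ["dc1"]

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
        port_map {
          db = 6379
        }
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
        network {
          port "db" {}
        }
      }
    }
  }
}
```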

[+] wise0wl|9 years ago|reply
We are moving toward container-pilot and it's A+. We have been using an adapted autopilot pattern for some time now with our thick VMs and it's been great. There is no one system that solves all problems and fits all paradigms, but it seems like container-pilot / autopilot as a pattern is very successful at delivering simplicity.

BTW, we are also using Triton (formerly SmartDC) from Joyent and are absolutely loving it. It's not without its rough edges, but it is by far the best public / private cloud option we have found that supports containers and VMs.

[+] vidarh|9 years ago|reply
Same here. What made me like fleet, despite its many problems, is the simplicity and the fact that it is not a container scheduler but a systemd unit scheduler, which makes it far more flexible.

I have projects where Kubernetes is probably the right choice, but I have many more where Kubernetes is massive overkill and where I also need/want the distributed systemd units.

[+] sandGorgon|9 years ago|reply
Docker Swarm - especially 1.13 with the new, simpler yml file based deployment
[+] chad-autry|9 years ago|reply
I've been telling friends and co-workers I think kubernetes has won the orchestration war. But even as I did so I wanted something simpler for my own purposes, and so was using fleet.

Luckily for me, I'd stuck with making all my units global and driving their deployment off of metadata. I think I'll just strip off the [X-fleet] section, and start deploying them straight to systemd with ansible.
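For anyone in a similar spot: a global fleet unit of the sort described is just a systemd unit with one extra section, so the migration really is "delete one section". A sketch with assumed names:

```ini
; example.service - a fleet unit; drop the [X-Fleet] section and it becomes
; a plain systemd unit you can push out with ansible
[Unit]
Description=Example app container

[Service]
ExecStart=/usr/bin/docker run --rm --name example example/image

[X-Fleet]
Global=true                   ; run on every machine...
MachineMetadata=role=worker   ; ...whose metadata matches
```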

[+] mrkurt|9 years ago|reply
This is roughly what we're doing. Ansible to manage specific containers on hosts. It works quite well, and with some IP tables shenanigans we have a lot of power over how we roll new containers out.
[+] Intermernet|9 years ago|reply
As someone who's been working with containers since docker was released, I feel like this is the right decision.

CoreOS are awesome, and I hope that rkt takes off (no pun intended)

K8s has been a fun companion to travel with on the road to stability, but I think they've now got it right. I remember the confusion regarding config file formats, network architecture, persistent storage etc and I'm happy to say they've mostly got it nailed now.

Congrats to thocken and team!

My next experiments are with the smartos docker support and Kubernetes. Hopefully I can get K8s running nicely on solaris zones and get better container isolation happening.

Once again, I think CoreOS have made the right decision here, but that doesn't preclude major changes in K8s itself!

[+] raesene6|9 years ago|reply
I think Kubernetes is a really interesting product and obviously has a lot of momentum. That said, for something that's seeing wide adoption, it still has a lot of rough edges and things that need to be fleshed out.

One I ran across recently was the upgrade process for clusters. Per (https://kubernetes.io/docs/admin/cluster-management/#upgradi...) it seems that unless you're on GCE the best way to upgrade a cluster is by rebuilding it from scratch as the upgrade script is still "experimental", which doesn't seem great.

The other area that I think Kubernetes is lagging Docker quite a bit on is security documentation and tooling. There's no equivalent of the CIS guide for Docker or Docker bench, both of which are useful in understanding the security trade-offs of various configurations and choosing one that suits a given deployment.

[+] eicnix|9 years ago|reply
Building a cluster from scratch is usually not a bad idea: You create a new cluster with the upgraded version, combine both clusters through federation and start moving pods from the old to the new cluster.

Upgrading a cluster in place will come in the future.

[+] cookiecaper|9 years ago|reply
>That said, for something that's seeing wide adoption, it still has a lot of rough edges and things that need to be fleshed out.

Yes, I'm concerned about this not just with k8s, but Docker as well. Both are very immature products, and there's a massive rush to adopt them, attributable almost entirely to social pressures and the insecurities of the people who lead these tech departments.

When things like StatefulSets and persistent storage are still iffy/under development, it should be clear that these things are nowhere near production-ready.

[+] merb|9 years ago|reply
I don't get that move. fleet was extremely well suited to scheduling a highly available kubernetes master. as soon as you have 3 etcd nodes and 3 fleet nodes you could use fleet to bootstrap kubernetes in a way more stable fashion than all of the other available options.

if people remove the low level tools to manage a cluster it will be harder and harder to bootstrap higher level stuff.

but well, what to expect in the container space, stuff changes there just way too often.

[+] puzzle|9 years ago|reply
You can do that kind of bootstrap without fleet. Just use ignition or cloud-config with the right systemd units and a bunch of fixed IP addresses. I think the CoreOS folks worked on a number of ways to simplify and automate bootstrapping of the Kubernetes control plane, so they saw fleet as redundant now. Besides, it took a long time for it to get something resembling a mechanism that updates units in the cluster.
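The fixed-IP cloud-config approach mentioned above might look something like this for the etcd members (a sketch in the CoreOS etcd2 cloud-config style of the time; the IPs and names are illustrative):

```yaml
#cloud-config
coreos:
  etcd2:
    # peer IPs are known up front, so no discovery service is needed
    name: etcd0
    initial-cluster: etcd0=http://10.0.0.10:2380,etcd1=http://10.0.0.11:2380,etcd2=http://10.0.0.12:2380
    initial-advertise-peer-urls: http://10.0.0.10:2380
    advertise-client-urls: http://10.0.0.10:2379
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://10.0.0.10:2380
```

The same file would carry the systemd units for the Kubernetes control-plane components.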

That said, being a lower level tool as you point out, it can be useful during e.g. troubleshooting. Imagine the case where `fleetctl list-machines` returns more nodes than `kubectl get nodes`.

[+] avichalp|9 years ago|reply
I think it is a brave decision which might affect the current users of Fleet for a while but will prove to be a good thing for the community overall.

Think of it from a newcomer's perspective: someone getting started with container orchestration does a lot of research to choose a framework/tool, and presenting them with a lot of suboptimal solutions doesn't really help (I do not mean Fleet is suboptimal, but k8s is already close to becoming a standard). It is always better to have one or two standard solutions for a particular problem. Parallels can be drawn from the javascript world, where an influx of libraries, frameworks and tooling that each do only a few things differently has led to a lot of confusion, especially among beginners; instead of thinking deeply about core concepts, people are often seen chasing the new shiny frameworks.

[+] newsat13|9 years ago|reply
I am sad to see fleet go. Fleet was quite simple to set up, but k8s is a monster. It has so much terminology, and it tries to cover all the cases of cloud orchestration. I think my fallback now is Swarm (I hope it gets more stable though).
[+] rajivm|9 years ago|reply
I was in the same boat of leaning toward the simplicity of Compose/Swarm/Docker Cloud, and even took a look at Rancher which supports Swarm & their own compose/Cattle scheduler. After spending months trying to get these to work effectively, and battling with their continuous changes & instability -- I eventually gave Kubernetes a shot. There's definitely a greater learning curve to understanding what all the terminology is, but for the basic uses of deploying a set of services, it turns out it's actually not as complicated as it first seems.

My shortcut was using https://github.com/kubernetes-incubator/kompose to convert my docker-compose.yml to the equivalent K8S objects. It wasn't as simple as just running it, but it let me see what it would basically take to do the same thing in Kubernetes. It ended up taking just a few days to wrap my head around it all and get it up and running. Probably even easier if you use something like GKE which manages the cluster for you. If you're investing in using containers for the long-haul, I think it's definitely worth the learning overhead.

There are only three key object types you need to understand to start using K8S: Deployments, Pods & Services. Feel free to msg me if you have some questions about getting started.
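To illustrate how those three fit together: a Deployment creates and replaces Pods, and a Service routes traffic to whatever Pods match its label selector. A minimal Service sketch (names are illustrative) that would sit in front of a Deployment whose pods carry the label `app: web`:

```yaml
# service.yaml - gives pods labeled 'app: web' a stable virtual IP and DNS name
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # matches the pod template labels of the Deployment
  ports:
    - port: 80         # port the Service exposes
      targetPort: 8080 # port the container listens on
```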

[+] wstrange|9 years ago|reply
Kubernetes is dead simple to use, but can be a little daunting to set up.

Thankfully, that is changing with things like minikube, kubeadm, kops, and self hosted Kube.

I think the orchestration wars are essentially over. Kube has insane momentum, and is a well architected solution.

[+] adamu__|9 years ago|reply
That's too bad. I quite liked fleet for its simplicity, but maybe it is time to spend more time with Kubernetes.

Having just finished a prototype Redis Cluster pseudo-PaaS built on fleet makes it a bit of a gut punch though.

[+] dpratt|9 years ago|reply
This is interesting, but has a potential problem - what do you use to schedule the control plane?

Right now, we use Fleet to schedule a highly available k8s API server and associated singleton daemons. The API server is required to get anything else scheduled in the cluster.

How are they going to solve this bootstrap problem?

[+] Perceptes|9 years ago|reply
As moondev pointed to, eventually bootkube will handle bootstrapping k8s clusters. At my company we just set everything up using cloud-config. A systemd unit boots the kubelet on each server, and static k8s manifests are loaded by the kubelet to run the rest of the k8s components as pods. This way, the kubelet itself is the only component that is not managed by k8s itself.
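The setup described above boils down to one systemd unit per machine; everything else is static pod manifests. A sketch (flag names are illustrative and have changed across kubelet versions):

```ini
# kubelet.service - the only piece not managed by Kubernetes itself
[Unit]
Description=Kubernetes kubelet
After=docker.service
Requires=docker.service

[Service]
# The kubelet runs any static pod manifests it finds in --pod-manifest-path
# (apiserver, scheduler, controller-manager), then joins the cluster.
ExecStart=/usr/bin/kubelet \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --pod-manifest-path=/etc/kubernetes/manifests
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```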
[+] seenitall|9 years ago|reply
Sensible move, though I hope it's not too disruptive for Fleet users. I don't think they have any option though. The list of easy ways to try K8s should include conjure-up on Ubuntu, for either laptop-scale or large cloud/VMware/bare-metal deploys.