
Comparing K3s with vanilla Kubernetes

74 points | jhoelzel | 3 years ago | hoelzel.it | reply

87 comments

[+] wouldbecouldbe|3 years ago|reply
It was 01:30. About to go to bed. The next day I'd fly with my daughter for the holidays.

I checked the apps of a bunch of clients. None of them loaded. I was like what...

I checked the server. Everything down.

I'd been running Kubernetes on Digital Ocean. And Digital Ocean forced a Kubernetes update overnight that was incompatible with mine.

Took me 8 hours to fix it. No sleep. Ended up moving it back to a good old VPS. And threw away K8s.

Now to be fair, I had been getting warnings with deployments. But I was used to that; Kubernetes has 10 updates per week. I don't have time to update K8s or my helm files every week.

So yeah, it was my fault, but I was used to good old VPS hosting. There is an old PHP application I built 5 years ago with Laravel that never needs anything. I did some updates and patches, but it always just works.

I'm used to running Node on Apache or nginx, and even though it's a bit less stable, it still almost never crashes.

With Kubernetes there is always something. I guess there are reasons to choose it, but stability is not one of them.

I ended up taking the plane, and my daughter was super kind and patient. But no more kubernetes for me.

[+] alyandon|3 years ago|reply
My experience so far is that k8s itself is relatively stable. It's when you start using vendor specific addons/plugins to actually do stuff like provision PVs, modify Citrix LB settings, etc outside of k8s everything quickly becomes a burning tire fire. :-/
[+] davidkuennen|3 years ago|reply
Sounds like Digital Ocean was the problem, not Kubernetes.

Been running managed Kubernetes in GCP for years without any issues.

They are bugging me to update for some time now, but I don't think they would force an update on me.

[+] amq|3 years ago|reply
In my experience, helm is the worst offender. Somehow every second chart update has a breaking change that prevents the upgrade, the defaults don't mind wiping the persistent volume, and the discontinuation of the somewhat consistent 'stable' central repo makes me seriously regret using helm charts for anything that is not ephemeral.
[+] b33j0r|3 years ago|reply
Yep. My problem with a managed cloud is that kubernetes drivers are vendor specific. I hit this landmine too when the ingress I was using got a breaking update (as far as I could tell).

Still worked on my test cluster. It was a throw away site, so I didn’t even bother fixing it. And, I honestly can’t tell you what went wrong. Just deploying docker compose, often with ansible, is the most reliable for me at most scales.

[+] jhoelzel|3 years ago|reply
I feel your pain, and I'd be lying if I said a k8s update never caught me off guard, but there are still policies you can set for the auto-update of k3s, or you can trigger updates manually at will.

Have a look at k3s and maybe you will like Kubernetes more again. There is no magic to it. Have a faulty node? Spin up a new one.

And if your host is kind enough, there will even be APIs with cloud-init for you to do that.

I'm not trying to imply that you need managed k3s with this post, but rather trying to show how easy Kubernetes can be if you leave the big clouds and try not to overcomplicate things.

[+] bithavoc|3 years ago|reply
DigitalOcean also cordons nodes for no reason at random times. Currently moving to EKS exactly for this reason.
[+] whalesalad|3 years ago|reply
Don’t throw the baby out with the bathwater.
[+] woopwoop24|3 years ago|reply
There is a good talk from Kelsey Hightower on when it makes sense to use k8s, and the number was something like 20 servers. With what you are describing, I would not even think about using k8s.
[+] zzyzxd|3 years ago|reply
I tried k3s several times in the past few years but I still can't use it in my homelab:

1. The project claims to be production ready and to support an HA control plane setup, but there's no solution for API load balancing out of the box. How do you bring up a new node (either control plane or worker)? You write down the join token produced by the first control plane node, and hardcode the token and the existing control plane's IP in the new node's systemd unit file. Btw, if you use the official installation script, that file is going to have permission 755 and everyone on the server can just read that token.

2. And how do you bring up the first control plane anyway? The official instruction is to `curl` a bash script and pipe it into a shell. You can probably translate that script into an Ansible playbook, but the whole running-a-bootstrap-script-and-passing-along-secrets approach makes the process difficult to convert into something idempotent.
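(For reference, the closest I got to something reproducible was downloading the script once, pinning the version, and passing the secrets through k3s's documented environment variables instead of baking them into a command line. A rough sketch; the IP, token path, and version here are placeholders, not recommendations:)

```shell
# Fetch and pin the installer instead of piping curl straight into sh
curl -sfL https://get.k3s.io -o /usr/local/bin/k3s-install.sh
chmod +x /usr/local/bin/k3s-install.sh

# Join a worker against an existing control plane; secrets come from
# the environment rather than from hardcoded flags
K3S_URL=https://192.168.1.10:6443 \
K3S_TOKEN="$(cat /root/join-token)" \
INSTALL_K3S_VERSION=v1.24.12+k3s1 \
  /usr/local/bin/k3s-install.sh

# Tighten the env file the installer writes the token into
chmod 600 /etc/systemd/system/k3s-agent.service.env
```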

All these problems can be worked around; in fact I was halfway there, but then I suddenly started thinking: "didn't I choose k3s because I thought it was easy?"

[+] ph4te|3 years ago|reply
I've gone through this a few times recently and have it in my homelab and at the office. What works well for me is using kube-vip for a VIP on the control plane, and then MetalLB to dish out private addresses in the respective networks, or even statically assigned addresses. I have been turning them all up with k3sup, which works like a charm.

Turn up the first node, install kube-vip, switch the config to point to the VIP, turn up all my other master nodes, then turn up my workers, install MetalLB, set up my subnet, install Rancher, expose it with a LB, install Longhorn, then start deploying things. Here is an example of what I use to turn up the first one with k3sup. All of the servers are turned up and configured with Ansible doing minimal updates, users, sudo access, etc.

k3sup install \
  --ip=192.168.1.11 \
  --user=k3s-user \
  --sudo \
  --tls-san=192.168.1.10 \
  --cluster \
  --k3s-channel=stable \
  --k3s-version=v1.24.12+k3s1 \
  --no-extras \
  --k3s-extra-args "--flannel-iface=ens160 --node-ip=192.168.1.11" \
  --merge \
  --local-path $HOME/.kube/config \
  --context=k3s-lab

[+] fellowniusmonk|3 years ago|reply
Thank you for sharing this and saving me time in exploring k3s; it's shocking how common it is that an evangelized tool is impractical to set up and use in even a simple homelab configuration.

Knowing what NOT to investigate because it isn't "ready" can be one of the biggest time sucks.

[+] jhoelzel|3 years ago|reply
Truth be told, yes, that part is tricky, but it can be managed easily with Ansible, for instance.

1) Your main problem would probably be the need for haproxy or BGP to do the load balancing for you. There are other solutions like kube-vip, but they are more a "failover" solution than HA. That would be fine for a homelab and is, for instance, how Rancher Harvester (Kubernetes for virtual machines) does it.

2) You have to pass a parameter called --cluster-init to the first node and then join the other nodes. Once the cluster is running, you don't need any node with that parameter anymore, and it's common practice to create the first node with --cluster-init, join 3 other ones, and then take down the first node.
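As a rough sketch of that flow (the hostname is a placeholder; the token path is where k3s writes it):

```shell
# First node bootstraps the embedded etcd cluster
k3s server --cluster-init

# Grab the join token it generated
cat /var/lib/rancher/k3s/server/node-token

# Additional control plane nodes join against any existing server
k3s server --server https://node1.example.com:6443 --token <node-token>

# Once 3+ servers are up, the original --cluster-init node can be
# drained and removed; no running node needs --cluster-init afterwards
```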

And on a personal note, you sound like you would be happy with Rancher Harvester. Check it out, it's basically turnkey.

[+] MuffinFlavored|3 years ago|reply
> but there's no solution for API load balancing out of box

Why do you need this in a homelab?

[+] iameli|3 years ago|reply
Been using k3s in production for nearly four years now and would recommend it to anyone. Super lightweight and easy to deploy. Opinionated about stuff I don't care about while allowing for customization of network stack, backing database, and ingress controller if you want to do it yourselves. Their embedded etcd is way, way easier to set up than a custom etcd distribution.
[+] jhoelzel|3 years ago|reply
I have built clusters for clients with k3s too and it has always been a charm. More recently I was able to bring up a dual-stack cluster for a VOIP company that can now basically scale endlessly, with GitOps on top.

The fact that you can integrate existing sysadmin teams, because they will understand that a program running a service from a binary and a config is all it takes, is worth its weight in gold.

They know their load balancers and haproxies, as well as how to provision true hardware RAID systems, which almost makes disk failure go away and makes maintenance really schedulable.

[+] jasoneckert|3 years ago|reply
In addition to K3s, I've used managed K8s, custom-rolled K8s, as well as various other K8s distributions. K3s has - by far - provided the least friction for most of my use cases, and is what I incorporate in any initial cloud design.

Of course, other stakeholders and constraints may eventually mean that we adopt something else before it gets implemented, but K3s is what I start with for many of the same reasons outlined in this article.

[+] alexellisuk|3 years ago|reply
I'm a big fan of K3s, however managed Kubernetes from a large cloud vendor with a track record has a lot to offer when it comes to reducing management and the need for an SRE for K8s itself.

Folks might also be interested in two free resources:

1 - K3sup https://github.com/alexellis/k3sup - the author mentions HA K3s - K3sup is an easy way to get that using SSH. It's also a good pairing for K3s with Raspberry Pi

2 - Kubernetes at the Edge with K3s (CNCF / LF course) - I was commissioned to write this and I talk a lot about the differences and also the origin story of K3s and what Darren was aiming for.

Have fun with Kubernetes - whichever flavour you go for.

[+] mindwok|3 years ago|reply
IMO K3s (and distros like it) are the future of self-managed Kubernetes. The same way Linux distributions brought simplification and sane, opinionated defaults to Linux in an era when compiling your own kernel and throwing user space together was the norm, K3s does the same for vanilla Kubernetes. It's a joy to deploy and manage over vanilla.
[+] kamikazechaser|3 years ago|reply
Title is misleading. k3s is a deployment stack/distribution that builds off various Kubernetes modules. It must pass a certain test suite to conform to Kubernetes standards.

What you might be trying to compare is kubeadm which is the official deployment stack provided by Kubernetes.

[+] jhoelzel|3 years ago|reply
Somewhat yes, but no.

I'm not trying to compare it with kubeadm (which is more of a setup script: https://kubernetes.io/docs/reference/setup-tools/kubeadm/ ) but with the fact that vanilla Kubernetes comes with moving parts that have to be configured, maintained, and updated separately.

You can actually set up "kubernetes", which is often referred to as vanilla Kubernetes, without it too. See "Kubernetes the Hard Way" by Kelsey Hightower.

[+] dang|3 years ago|reply
Ok, I've stuffed "default" in the title above. If someone wants to suggest a better (i.e. more accurate and neutral) title, we can change it again.
[+] gerty|3 years ago|reply
Is there any advantage of running k3s if you want to keep etcd? I understand that most k3s performance gains come from etcd being replaced by sqlite but if you still want a HA control plane, sqlite won't cut it.
[+] iameli|3 years ago|reply
We've been using k3s' embedded etcd for as long as it's existed, and it's great. Setting up the etcd cluster is dramatically simplified: let the first node generate a token and feed it to all the other nodes. Tons of other advantages to k3s: the single-binary deploy process, the built-in networking stack (which you can secure with WireGuard out of the box), a built-in ingress controller if you want one.
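(For anyone curious, the WireGuard bit is a one-flag switch on recent k3s releases; older versions spelled the backend name slightly differently, so check the docs for your version:)

```shell
# Run the server with flannel encrypting node-to-node traffic over WireGuard
k3s server --flannel-backend=wireguard-native
```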
[+] jhoelzel|3 years ago|reply
you can easily still use external etcd if you really need to.

But in general k3s can be HA without issue and scales just as well as vanilla k8s. The main advantage is that everything comes neatly packed into a single binary, whereas the alternative would mean having a multitude of services running for cluster provisioning.

Kubernetes in the end is basically an API server with multiple components, and k3s puts a nice bow around all of them.

[+] moondev|3 years ago|reply
I always chuckle to myself when things like k3s, microk8s and so on claim to be "lightweight" Kubernetes. Lightweight compared to what, exactly? Pure, upstream, vanilla Kubernetes (kubeadm) is the lightest possible: it doesn't come with a CNI, ingress, or any of the additional stuff these distros do. Additionally, why make your life harder by adding an extra layer on top of Kubernetes? When troubleshooting, you then get to track down whether a bug is in your distro or actually in Kubernetes itself. Just run the real thing.
[+] wg0|3 years ago|reply
I would lean towards k0s over k3s. The article has one piece of information wrong: K8s no longer comes with Docker. It is containerd, I think.
[+] jhoelzel|3 years ago|reply
The only way I would ever trust a Mirantis product again is if they dissolve their board completely.

The whole "kubernetes-lens" debacle still burns deep.

[+] awinter-py|3 years ago|reply
hmmmmm no mention of pvcs eh
[+] jhoelzel|3 years ago|reply
Longhorn ( https://longhorn.io/ ) has been stable for a long time now and is also a Cloud Native Computing Foundation project =)

You can start with a single storage node without replication and easily go from there to triple-replicated storage.

[+] kobalsky|3 years ago|reply
the title doesn't make sense, it reads like "comparing Ubuntu with Linux, How Ubuntu is often the better choice".

k3s is a kubernetes distribution

[+] ttymck|3 years ago|reply
What is the official kubernetes distribution called?
[+] jhoelzel|3 years ago|reply
While true, writing "vanilla kubernetes" did not have the right feel for me.

What would have been a better title for you?

[+] paddw|3 years ago|reply
K3s seems like a terrible name, given how people will be confused with the numeronym for Kubernetes.
[+] apetresc|3 years ago|reply
That's the joke, though? It's k8s, but "smaller".
[+] tetraodonpuffer|3 years ago|reply
It is a pretty decent name since you can easily search for information about it, unlike say “kind” (which I typically use for development) which is absolutely un-googleable
[+] nailer|3 years ago|reply
Kinda off topic, but what's the actual word for k3s? We have Kubernetes k8s, Andreessen Horowitz a16z, internationalisation i18n, founders f6s. What is k3s?
[+] stonemetal12|3 years ago|reply
It isn't a word. From their documentation:

We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10-letter word stylized as K8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. There is no long form of K3s and no official pronunciation.

[+] cwayne|3 years ago|reply
I like to tell people it's "kates"