item 37845903

K3s – Lightweight Kubernetes

254 points | kristianpaul | 2 years ago | k3s.io

191 comments

[+] intangible|2 years ago|reply
I've been running a 3-NUC (actually Ryzen devices) k3s cluster on SuSE MicroOS https://microos.opensuse.org/ for my homelab for a while, and I really like it. They made some really nice decisions about which parts of k8s to trim down and which networking / LB / ingress components to use.

The option to use sqlite in place of etcd makes it super interesting for even lighter-weight single-node homelab container setups.

I even use it with Longhorn https://longhorn.io/ for shared block storage on the mini cluster.

If anyone uses it with MicroOS, just make sure you switch to kured https://kured.dev/ for the transactional-updates reboot method.
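
For reference, the MicroOS side of that switch is basically a one-line change (a sketch from memory; double-check the key name against the transactional-update man page for your release):

```ini
# /etc/transactional-update.conf
# Let kured handle reboots instead of rebootmgr, so nodes get
# drained and rebooted one at a time after a transactional update.
REBOOT_METHOD=kured
```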

I'd love to compare it against Talos https://www.talos.dev/ but Talos's lack of support for a persistent storage partition (only a separate storage device https://github.com/siderolabs/talos/issues/4041 ) really hurts most of the small home / office use cases I'd want to try.

[+] imiric|2 years ago|reply
Thanks for your perspective.

How has your experience been with Longhorn? Performance, flexibility, issues, maintenance...? I'm interested in moving away from a traditional single-node NAS to a cluster of storage servers. Ceph/Rook seem daunting, and I'd prefer something easy to set up and maintain that's performant, reliable and scales well. Discovering issues once you're fully invested in a storage solution is a nightmare I'd like to avoid. :)

[+] organsnyder|2 years ago|reply
Funny: I've been running a Talos cluster for the past six months, and just today decided to look into k3s. Talos has a lot of really nice things, but I have found that the lack of shell access can be frustrating at times when trying to troubleshoot.
[+] osigurdson|2 years ago|reply
Kudos to you. I feel like setting things up on real hardware is somehow needed in order to make things concrete enough to fully understand. At least for me (I fully admit this may be a personal flaw) working with a VM in the cloud is a little too abstract - even though eventually this is where things will land.
[+] sgarland|2 years ago|reply
Re: Talos persistent storage, why not run it as a VM and pass in block devices from the hypervisor? You also then gain the benefit of templated VMs that you can easily recreate or scale as needed.
[+] diggan|2 years ago|reply
"Lightweight Kubernetes" and then a graph involving 2 different nodes with 10+ services running on them.

Nomad seems to be a truly "lightweight Kubernetes", and for the small amount of time I've been using it, it seems to do its job really well and it's easy to understand all the moving pieces without spending countless hours reading docs and source code.

Although it's hard to recommend Nomad for future use, as it sadly stopped being FOSS :/

[+] OhSoHumble|2 years ago|reply
I've commented on this before in a different k8s thread (one about a k8s outage) but something that bears repeating is that the entire job market is Kubernetes.

My personal experience is that it is very, very hard to find a job right now if your professional experience is primarily non-k8s orchestration systems. Most job positions out there require deep Kubernetes knowledge as well as hands-on experience with different supporting products like ArgoCD and Flux.

I chose Nomad for a large initiative at my current employer, and it is honestly pretty amazing given the use case, but I regret choosing it: I feel unhirable now, given that every devops/SRE/platform engineering position open on the market (each with hundreds of applicants) is heavily Kubernetes-focused.

[+] figmert|2 years ago|reply
Nomad also doesn't have nearly half of the features that Kubernetes does. Need service discovery? Set up a Consul cluster. Need secret management? Install vault. Need vault enterprise? Install a separate Consul cluster! This was a few years ago, maybe it's changed? I dunno.

Anyway, lightweight here means that a whole bunch of external dependencies have been ripped out, e.g. the AWS/GCP/Azure integrations, and I believe other things too.

[+] mfer|2 years ago|reply
You can run k3s with a single node. In that case it uses sqlite instead of etcd which is great for a smaller resource footprint. If you're comparing k8s distros, you'll be hard pressed to find a setup that uses fewer system resources.
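
For anyone curious, the single-node setup really is a one-liner (the install script is the documented path; the database path below is how it's laid out on my machines and may differ between versions):

```shell
# Install a single-node k3s server; with no --datastore-endpoint and
# no extra server nodes, it defaults to sqlite (via kine) instead of etcd.
curl -sfL https://get.k3s.io | sh -

# The embedded datastore is just a plain sqlite file on disk:
ls /var/lib/rancher/k3s/server/db/state.db
```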
[+] _w1tm|2 years ago|reply
> Nomad seems to be a truly "lightweight Kubernetes" and for the small amount of time I've been using it, it seems to do it's job really well and it's easy to understand all the moving pieces without spending countless of hours reading docs and source code.

Does Nomad expose an API that you can extend with controllers running inside the cluster? Because Kubernetes without operators is not Kubernetes.

[+] doctorpangloss|2 years ago|reply
> "Lightweight Kubernetes" and then a graph involving 2 different nodes with 10+ services running on them.

Among its many strengths, running Kubernetes' own core services inside Kubernetes itself is one of its greatest.

> it's easy to understand all the moving pieces

It's legit to leverage your pre-existing knowledge. Nomad + Linux Pets + Other Hashicorp Pets works well.

There are many ways to run an application. Kubernetes + Machine Cattle, in my experience having started from zero, is superior to any other application deployment paradigm I've ever used.

[+] proxysna|2 years ago|reply
Nomad is an alternative to k8s, but it is not really a "lightweight k8s". Nomad and k8s are way too different to call them versions of each other. As for the whole license thing: nothing changed for end users.
[+] q3k|2 years ago|reply
I mean, 2 nodes and 10 services is very light for what Kubernetes is designed to scale to.
[+] cortesoft|2 years ago|reply
K3s uses the exact same API as Kubernetes. Nomad does not.

The idea is to more easily get going with Kubernetes, not to provide an alternative. Nomad and K3s serve a completely different use case.

[+] alanwreath|2 years ago|reply
I think the tagline can be a bit misleading: you can run k3s on devices traditionally considered IoT (like the Raspberry Pi), but it will run on big heavy x86 servers too.
[+] worksonmine|2 years ago|reply
The amount of services running is not an indication of how lightweight the underlying tech is. K3S is for orchestration across nodes, and by design you can't run services on the master for security reasons unless you manually change it.

Are you trying to say that 10 services and 2 nodes would be less on Nomad?

[+] WJW|2 years ago|reply
Nomad is a lightweight container orchestrator, not a lightweight Kubernetes. The whole point of k3s is to keep syntax, terminology, config options, etc. as similar as possible between k8s and k3s.
[+] nonameiguess|2 years ago|reply
If Nomad provides workload scheduling, a runtime, workload networking, service networking, and some sort of overall orchestrator/controller plus an API, then it has these same services. Running them in one process instead of ten doesn't make it more lightweight.

You're free to run either on one node, but other than for toy purposes like demos and learning, why use a cluster compute system when you're not actually running a cluster?

[+] rmelton|2 years ago|reply
K3s is fantastic especially for local development with Kubernetes when orchestrated using k3d. This is what we use for most of our internal K8s testing at OpenC3.
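
For those who haven't tried it, k3d runs k3s nodes inside Docker containers, so a throwaway multi-node cluster is one command (a sketch assuming k3d and Docker are installed; flags per the current k3d CLI):

```shell
# Create a cluster with 1 server and 2 agent nodes inside Docker,
# then point kubectl at the context k3d writes into your kubeconfig.
k3d cluster create demo --agents 2
kubectl --context k3d-demo get nodes

# Throw the whole thing away when you're done testing.
k3d cluster delete demo
```
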
[+] FourSigma|2 years ago|reply
Could someone please explain the difference between K0s[1] and K3s? They seem to both target the same minimalist K8s segment.

[1]https://k0sproject.io/

[+] nunez|2 years ago|reply
There are lighter Kubernetes "distributions" (kind, minikube), but what makes k3s special is that it's (a) packaged as a single binary that provides all of the Kubernetes components in one, and (b) 100% suitable for production use.

Lots of teams are using K3s to run Kubernetes at the edge and in IoT applications, and with good reason. It's a fantastic Kubernetes distribution that's well-maintained, easy to get going with and well-documented.

(Ironically, if you look at the first commits to kubernetes/kubernetes, the Kubernetes components were originally shipped as a single binary. They decided to break them up later to simplify releasing, but the k3s monolith lives on.)

[+] sleepybrett|2 years ago|reply
Every kubernetes project's go.mod is quite robust. Kubernetes is a very large codebase.
[+] pests|2 years ago|reply
My only worry is the 10+ domains (involving 4+ major companies) being used for import paths. I know there are solutions but that would be a mess to figure out 10 years from now when half those links are broken or changed subtly.
[+] koito17|2 years ago|reply
I've been running k3s on my home server and it's been painless to set up compared to other options (e.g. kubeadm) while also being very lightweight. For single-node setups, it defaults to Kine instead of etcd, using SQLite as the database. This removes a significant chunk of overhead for dev clusters and for running on tiny devices.

It also has Traefik set up with sane defaults, and the local path provisioner is also pretty good, too. But recently I've moved to Longhorn since I plan to eventually scale past 1 node. My only complaint about Longhorn is that applications that are write-heavy and delete old data (e.g. Prometheus with short retention) will require aggressive trimming (e.g. trim once a day) to keep the actual size of volumes down. Besides that, Longhorn makes backups to S3-compatible storage very effortless and you get RWX volumes, too!
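
For what it's worth, the aggressive trimming can be automated with Longhorn's RecurringJob CRD (a sketch; I believe the task type is `filesystem-trim` in recent Longhorn releases, but treat the field names as assumptions and check the docs for your version):

```yaml
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: daily-trim
  namespace: longhorn-system
spec:
  task: filesystem-trim   # reclaim space freed by deleted data
  cron: "0 4 * * *"       # run once a day, at 04:00
  groups:
    - default             # apply to volumes in the default group
  retain: 0
  concurrency: 1
```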

Regarding k3s itself, you can persist modifications to the way some components (e.g. Traefik) are installed through a HelmChartConfig CRD. This is what I personally use so I can use Traefik to route SSH traffic for Forgejo. Another nice thing is that although components like kube-proxy are baked into the single k3s binary, you can still scrape metrics with Prometheus provided that you expose their endpoints somewhere on your cluster network.
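
As an example of the HelmChartConfig mechanism, this is roughly what routing an extra TCP port (e.g. SSH for a Git forge) through the bundled Traefik looks like (a sketch; the `ports` value layout depends on the Traefik chart version shipped with your k3s release, so treat the inner fields as assumptions):

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik          # must match the bundled chart's name
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      ssh:
        port: 2222       # entrypoint inside the Traefik pod
        expose: true
        exposedPort: 22  # port on the LoadBalancer service
        protocol: TCP
```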

[+] sidcool|2 years ago|reply
We have been using K3S in production now for 2 years. And it's working like a charm.
[+] rcarmo|2 years ago|reply
If anyone wants a ready-to-go Azure template to play around, here you go:

https://github.com/rcarmo/azure-k3s-cluster

(I tweak this every now and then, since I use it both as a training sample for my customers/peers and as a way to run my own batch processes as cheaply as possible)

[+] inssein|2 years ago|reply
Loving K3S so far. I've got a "homelab" of 4 RPis running it, and so far it has been a pretty seamless experience.
[+] rigelina|2 years ago|reply
This was my introduction to K3s as well. RPis running K3s with enough resources left over to actually do some small tasks. I hosted a small data pipeline that analyzed trading data from an MMO. It was as fun as it was impractical, and I learned quite a bit.
[+] mpsprd|2 years ago|reply
Can this tool help to simplify self-hosting setups? K3s was recommended to me as a replacement for my personal pile of systemd units starting docker compose configs, plus manual reverse proxy configs.

I am completely oblivious to how k8s works.

[+] dewey|2 years ago|reply
Check out https://kamal-deploy.org, it just hit 1.0 and 37signals moved their whole Kubernetes stack to it. I was playing around with it recently for side projects and I think it's a nice fit for simpler products like that.
[+] dinosaurdynasty|2 years ago|reply
systemd units are fine. It's even pretty close to the recommendations for podman.

You can use something like Ansible to make it a bit easier, if that even makes sense in your use case.
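
To make the podman comparison concrete: with podman 4.4+ you can describe a container as a quadlet unit instead of hand-writing a systemd service (a sketch; the image and port are placeholders):

```ini
# ~/.config/containers/systemd/web.container
# Quadlet turns this into web.service at daemon-reload time;
# start it with: systemctl --user daemon-reload && systemctl --user start web
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Install]
WantedBy=default.target
```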

[+] selljamhere|2 years ago|reply
I had a great k3s experience with a silly, over engineered weekend project to automate my fog machine with motion sensors.

I connected motion sensors to battery & wifi enabled RPis, built a remote circuit to control the fog machine, and ran a k3s cluster with NATS to bring it all together. 10/10 would do again.

https://blog.apartment304.com/fog-machine-madness/

[+] ddejohn|2 years ago|reply
No pictures of fog machine and spooky graveyard -- 4/10
[+] praveenhm|2 years ago|reply
I've been closely following the discussion on k3s and Kubernetes in general. I recently acquired an M1 Mac Ultra, and I'm curious about the best options available for running Kubernetes locally on it.

Does anyone have experience or recommendations in this area? I've heard about a few tools but wanted to gather insights from this knowledgeable community.

[+] maxekman|2 years ago|reply
You should try out OrbStack: https://docs.orbstack.dev/kubernetes/

I switched to it completely, it’s very convenient to have both fast (-est on Mac) Docker support and a really smooth VM setup for running occasional Linux tools (such as Yocto in my case).

Edit: added some background info to my recommendation.

[+] mhio|2 years ago|reply
"colima" and its underlying project "lima" are a pretty quick way to get started.

Extremely quick to stand up a single node cluster, or many types of VMs in lima.

https://github.com/lima-vm/lima

    limactl start template://k3s
https://github.com/abiosoft/colima

    colima start --kubernetes
The tools are a bit rough around the edges if you try to do something outside of the happy path with them. Nothing bad as such; the user experience just isn't as seamless when, say, running the VMs on custom, addressable host networks or managing VMs with launchd.
[+] samcat116|2 years ago|reply
K3s is great, but I'll also shout out that RKE2 is almost as simple to install as K3s, but it's full Kubernetes.