I've been running a 3-NUC (actually Ryzen mini-PC) k3s cluster on SUSE MicroOS https://microos.opensuse.org/ for my homelab for a while, and I really like it. They made some really nice decisions about which parts of k8s to trim down and which networking / LB / ingress pieces to use.
The option to use SQLite in place of etcd makes it super interesting for even lighter-weight single-node homelab container setups.
I even use it with Longhorn https://longhorn.io/ for shared block storage on the mini cluster.
If anyone uses it with MicroOS, just make sure you switch to kured https://kured.dev/ for the transactional-updates reboot method.
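A minimal sketch of that swap, assuming the kubereboot Helm chart and that transactional-update signals pending reboots via /var/run/reboot-needed (verify the sentinel path and chart parameter names against your MicroOS and kured versions):

```shell
# Install kured so it cordons/drains and reboots nodes after
# transactional-update stages a new snapshot.
helm repo add kubereboot https://kubereboot.github.io/charts
helm install kured kubereboot/kured \
  --namespace kube-system \
  --set configuration.rebootSentinel=/var/run/reboot-needed

# Then tell transactional-update to defer reboots to kured, e.g. in
# /etc/transactional-update.conf:
#   REBOOT_METHOD=kured
```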
How has your experience been with Longhorn? Performance, flexibility, issues, maintenance...? I'm interested in moving away from a traditional single-node NAS to a cluster of storage servers. Ceph/Rook seem daunting, and I'd prefer something easy to set up and maintain that's performant, reliable, and scales well. Discovering issues once you're fully invested in a storage solution is a nightmare I'd like to avoid. :)
Funny: I've been running a Talos cluster for the past six months, and just today decided to look into k3s. Talos has a lot of really nice things, but I have found that the lack of shell access can be frustrating at times when trying to troubleshoot.
Kudos to you. I feel like setting things up on real hardware is somehow needed to make things concrete enough to fully understand. At least for me (I fully admit this may be a personal flaw), working with a VM in the cloud is a little too abstract - even though eventually this is where things will land.
Re: Talos persistent storage, why not run it as a VM and pass in block devices from the hypervisor? You also then gain the benefit of templated VMs that you can easily recreate or scale as needed.
"Lightweight Kubernetes" and then a graph involving 2 different nodes with 10+ services running on them.
Nomad seems to be a truly "lightweight Kubernetes", and for the small amount of time I've been using it, it seems to do its job really well; it's easy to understand all the moving pieces without spending countless hours reading docs and source code.
Although it's hard to recommend Nomad for future use, as it sadly stopped being FOSS :/
> Nomad seems to be a truly "lightweight Kubernetes"
k3s requirements: 1 node, 512 MB RAM, 1 CPU core (https://docs.k3s.io/installation/requirements)
Nomad requirements: "Nomad servers may need to be run on large machine instances. We suggest having between 4-8+ cores, 16-32 GB+ of memory, 40-80 GB+ of fast disk and significant network bandwidth." (https://developer.hashicorp.com/nomad/docs/install/productio...)
I've commented on this before in a different k8s thread (one about a k8s outage) but something that bears repeating is that the entire job market is Kubernetes.
My personal experience is that it is very, very hard to find a job right now if your professional experience is primarily non-k8s orchestration systems. Most job positions out there require deep Kubernetes knowledge as well as hands-on experience with different supporting products like ArgoCD and Flux.
I chose Nomad for a large initiative at my current employer and it is honestly pretty amazing given the use case but I regret choosing it because I feel like I'm unhirable now given that every devops/SRE/platform engineering position open on the market (each one with hundreds of applicants) is heavily Kubernetes focused.
Nomad also doesn't have nearly half of the features that Kubernetes does. Need service discovery? Set up a Consul cluster. Need secret management? Install vault. Need vault enterprise? Install a separate Consul cluster! This was a few years ago, maybe it's changed? I dunno.
Anyway, lightweight here means that a whole bunch of external dependencies have been ripped out - e.g. the AWS/GCP/Azure cloud-provider integrations, and I believe other things too.
You can run k3s with a single node. In that case it uses sqlite instead of etcd which is great for a smaller resource footprint. If you're comparing k8s distros, you'll be hard pressed to find a setup that uses fewer system resources.
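As a concrete sketch of the single-node case (the install one-liner is the official one; the datastore path is where k3s keeps its state by default):

```shell
# Single-node k3s: no etcd; kine translates the API server's
# etcd calls into SQLite queries.
curl -sfL https://get.k3s.io | sh -

# The SQLite datastore lives on the server node at:
sudo ls -lh /var/lib/rancher/k3s/server/db/state.db

# For an HA setup you'd opt into embedded etcd instead:
#   k3s server --cluster-init
```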
> Nomad seems to be a truly "lightweight Kubernetes" and for the small amount of time I've been using it, it seems to do it's job really well and it's easy to understand all the moving pieces without spending countless of hours reading docs and source code.
Does Nomad expose an API that you can extend with controllers running inside the cluster? Because Kubernetes without operators is not Kubernetes.
> "Lightweight Kubernetes" and then a graph involving 2 different nodes with 10+ services running on them.
Among Kubernetes' many strengths, running its own core services inside Kubernetes itself is one of its greatest.
> it's easy to understand all the moving pieces
It's legit to leverage your pre-existing knowledge. Nomad + Linux Pets + Other HashiCorp Pets works well.
There are many ways to run an application. Kubernetes + Machine Cattle, in my experience having started from zero, is superior to any other application deployment paradigm I've ever used.
Nomad is an alternative to k8s, but it is not really a "lightweight k8s". Nomad and k8s are way too different to call them versions of each other.
And the whole license thing. Nothing changed for end users.
I think the tagline can be a bit misleading: you can run k3s on devices traditionally considered IoT (like the Raspberry Pi), but it will run on big heavy x86 servers too.
The number of services running is not an indication of how lightweight the underlying tech is. k3s is for orchestration across nodes, and by design you can't run services on the master for security reasons unless you manually change it.
Are you trying to say that 10 services and 2 nodes would be less on Nomad?
Nomad is a lightweight container orchestrator, not a lightweight Kubernetes. The whole point of k3s is to keep syntax, terminology, config options, etc. as similar as possible between k8s and k3s.
If Nomad provides workload scheduling, a runtime, workload networking, service networking, and some sort of overall orchestrator/controller plus an API, then it has these same services. Running them in one process instead of ten doesn't make it more lightweight.
You're free to run either on one node, but other than for toy purposes like demos and learning, why use a cluster compute system when you're not actually running a cluster?
Interestingly, SUSE now owns Rancher [0], so k3s has been backed by a large company for some time now. I've never tried k3s, but I have always thought it's probably the most loved-by-its-users version of Kubernetes.
K3s is fantastic especially for local development with Kubernetes when orchestrated using k3d. This is what we use for most of our internal K8s testing at OpenC3.
There are lighter Kubernetes "distributions" (kind, minikube), but what makes k3s special is that it's (a) packaged as two binaries that provide all of the Kubernetes components in one, and (b) it's 100% suitable for production use.
Lots of teams are using K3s to run Kubernetes at the edge and in IoT applications, and with good reason. It's a fantastic Kubernetes distribution that's well-maintained, easy to get going with and well-documented.
(Ironically, if you look at the first commits to kubernetes/kubernetes, the Kubernetes components were originally shipped as a single binary. They decided to break them up later to simplify releasing, but the k3s monolith lives on.)
My only worry is the 10+ domains (involving 4+ major companies) being used for import paths [0]. I know there are solutions, but that would be a mess to untangle 10 years from now when half those links are broken or have changed subtly.
I've been running k3s on my home server and it's been painless to set up compared to other options (e.g. kubeadm) while also being very lightweight. For single-node setups, it defaults to Kine instead of etcd, using SQLite as the database. This removes a significant chunk of overhead for dev clusters and for running on tiny devices.
It also has Traefik set up with sane defaults, and the local path provisioner is also pretty good, too. But recently I've moved to Longhorn since I plan to eventually scale past 1 node. My only complaint about Longhorn is that applications that are write-heavy and delete old data (e.g. Prometheus with short retention) will require aggressive trimming (e.g. trim once a day) to keep the actual size of volumes down. Besides that, Longhorn makes backups to S3-compatible storage very effortless and you get RWX volumes, too!
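For the trimming workaround, recent Longhorn releases (v1.5+, if I recall correctly) can schedule this themselves with a RecurringJob of task type filesystem-trim; a sketch, where the job name and cron schedule are my own example values:

```yaml
# Hypothetical daily trim job -- verify the task name and API version
# against the Longhorn release you run.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: daily-trim
  namespace: longhorn-system
spec:
  task: filesystem-trim   # reclaim space freed by deleted data
  cron: "0 4 * * *"       # once a day, at 04:00
  groups:
  - default               # applies to volumes in the default group
  retain: 0
  concurrency: 1
```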
Regarding k3s itself, you can persist modifications to the way some bundled components (e.g. Traefik) are installed through a HelmChartConfig CRD. That's how I get Traefik to route SSH traffic for Forgejo. Another nice thing is that although components like kube-proxy are baked into the single k3s binary, you can still scrape metrics with Prometheus provided that you expose their endpoints somewhere on your cluster network.
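A sketch of that HelmChartConfig pattern, adding a TCP entrypoint for SSH (the port numbers and entrypoint name are my own example values):

```yaml
# Override values for the Traefik chart that k3s bundles.
# k3s watches HelmChartConfig and re-renders the chart with these values.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      ssh:
        port: 2222         # container port Traefik listens on
        exposedPort: 22    # port exposed on the LoadBalancer service
        protocol: TCP
```

An IngressRouteTCP pointing at the Forgejo SSH service would then bind to that entrypoint.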
(I tweak this every now and then, since I've used it both as a training sample for my customers/peers and as a way to run my own batch processes as cheaply as possible)
This was my introduction to K3s as well. RPis running K3s with enough resources left over to actually do some small tasks. I hosted a small data pipeline that analyzed trading data from an MMO. It was as fun as it was impractical, and I learned quite a bit.
Can this tool help simplify self-hosting setups? K3s was recommended to me as a replacement for my personal pile of systemd units starting docker-compose configs and manual reverse proxy configs.
Check out https://kamal-deploy.org, it just hit 1.0 and 37signals moved their whole Kubernetes stack to it. I was playing around with it recently for side projects and I think it's a nice fit for simpler products like that.
I had a great k3s experience with a silly, over engineered weekend project to automate my fog machine with motion sensors.
I connected motion sensors to battery & wifi enabled RPis, built a remote circuit to control the fog machine, and ran a k3s cluster with NATS to bring it all together. 10/10 would do again.
I've been closely following the discussion on k3s and Kubernetes in general. I recently acquired an M1 Mac Ultra, and I'm curious about the best options available for running Kubernetes locally on it.
Does anyone have experience or recommendations in this area? I've heard about a few tools but wanted to gather insights from this knowledgeable community.
I switched to it completely, it’s very convenient to have both fast (-est on Mac) Docker support and a really smooth VM setup for running occasional Linux tools (such as Yocto in my case).
Edit: added some background info to my recommendation.
The tools are a bit rough around the edges if you try to do something outside of the happy path with them. Nothing bad as such, just the user experience isn't as seamless when, say, running the VMs on custom, addressable host networks or managing VMs with launchd.
hetzner-k3s : This is a CLI tool to quickly create and manage Kubernetes clusters in Hetzner Cloud using the lightweight Kubernetes distribution k3s from Rancher.
https://github.com/vitobotta/hetzner-k3s
I'd love to compare it against Talos https://www.talos.dev/ but Talos's lack of support for a persistent storage partition (only separate storage device https://github.com/siderolabs/talos/issues/4041 ) really hurts most small home / office usage I'd want to try.
To me it seems strange that a systemd unit is used, but I didn't know if I was missing something about the way MicroOS works.
[0]: https://en.opensuse.org/SDB:K3s_cluster_deployment_on_MicroO...
The idea is to more easily get going with Kubernetes, not to provide an alternative. Nomad and k3s serve completely different use cases.
[0] https://www.suse.com/news/suse-completes-rancher-acquisition...
[0] https://github.com/k3s-io/k3s/blob/master/go.mod
https://github.com/rcarmo/azure-k3s-cluster
I am completely oblivious to how k8s works.
You can use something like Ansible to make it a bit easier, if that even makes sense in your use case.
https://blog.apartment304.com/fog-machine-madness/
You can also use k3s; it's hella easy to get started with and it works great.
Extremely quick to stand up a single node cluster, or many types of VMs in lima.
https://github.com/lima-vm/lima
https://github.com/abiosoft/colima
Kubernetes on Hetzner Cloud the easiest way https://vitobotta.com/2023/01/07/kubernetes-on-hetzner-cloud...
https://github.com/alexellis/k3sup