Everything I know about Kubernetes I learned from a cluster of Raspberry Pis

470 points | alexellisuk | 6 years ago | jeffgeerling.com | reply

77 comments

[+] raesene9|6 years ago|reply
A good way to learn how clusters work, and to play with them without spending a lot of time rebuilding when you break things, is kind (https://kind.sigs.k8s.io/).

Each node is a Docker container, but the version of Kubernetes running inside it is vanilla Kubeadm, so it's quite representative of what "real" clusters would look like.

The great thing about it is you can spin up a cluster in < 2 mins locally to try things out, then it's a single command to delete.
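
For reference, a minimal multi-node kind cluster can be described in a small config file (the node roles and count here are just an example layout):

```yaml
# kind-config.yaml: a hypothetical one-control-plane, two-worker cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

Then `kind create cluster --config kind-config.yaml` brings it up, and `kind delete cluster` tears it down again.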

[+] geerlingguy|6 years ago|reply
I use Kind (and Minikube, and a number of other solutions too), but this is kind of my "Kubernetes-the-really-hard-way" fun project.

Note that I maintain a parallel configuration that runs a multi-node cluster on Vagrant for local development [1], as well as a Docker environment built for CI purposes [2], using almost all the same images, with the same Ansible playbooks to configure them all across platforms.

[1] https://github.com/geerlingguy/raspberry-pi-dramble/tree/mas...

[2] https://github.com/geerlingguy/raspberry-pi-dramble/tree/mas...

[+] ricardbejarano|6 years ago|reply
Everything I know about Kubernetes I learned from a single-node (pseudo)clustered refurbished ThinkPad X201.

It cost me 90€ at an IBM refurbished sale. It is downstairs by the router, and it has been hosting everything for me except my email and my blog (which I want to host myself, but I'm not sure about the reliability of my ISP's service; this morning it went down for 5h12m, without prior notice or anything).

It is amazing how much you learn by doing stuff. I'm currently in my 3rd year of university for CS, so I've tried academia-style learning, reading books on my own, and building things on my own. The last is by far the best method.

[+] mpfundstein|6 years ago|reply
I only really learn by doing stuff. I usually read books to get started, but quickly have to turn to building something, then revisit the book as I go. If I only read the book/paper/whatever, I usually ‘think’ I get it, but that's nearly never the case :-)
[+] vyshane|6 years ago|reply
3 years ago, I also wanted a bare metal cluster for my homelab. I wanted x86-64, low power consumption, small footprint, and low cost. I ended up building this 5 node nano ITX tower:

https://vyshane.com/2016/12/19/5-node-nano-itx-kubernetes-to...

I think the exposed boards add to its charm. They don't help with dust, though.

[+] rwmj|6 years ago|reply
Yours is a lot neater than the four-node bare cluster I built a few years ago: https://rwmj.wordpress.com/2014/04/28/caseless-virtualizatio...

One issue with caseless machines is the amount of RF they emit. Makes it hard to listen to any broadcast radio near one and probably disturbs the neighbours similarly.

I'm now using a cluster of NUCs which is considerably easier to deal with although quite a lot more expensive: https://rwmj.wordpress.com/2018/02/06/nuc-cluster-bring-up/

[+] geerlingguy|6 years ago|reply
Very nice; I've considered doing something similar and running some production sites on it from my home, but the limitation has always been my terrible Internet bandwidth through Spectrum.

We almost got Verizon gigabit fiber a few years ago... then AT&T ran fiber to the front of my neighborhood last year, and never ran it to the houses. As it is, I'm stuck with a 10 Mbps uplink, which is not enough to do most of what I'd want to do with a more powerful local cluster.

[+] yumraj|6 years ago|reply
This is very cool. Curious, roughly, how much did this setup cost?
[+] pstadler|6 years ago|reply
Great article! Never stop tinkering.

Here’s how I got to know Kubernetes:

By the end of 2016, the Kubernetes hype was just about to pick up real steam. As somebody who always liked the idea of running my own cluster in the cloud, I attended KubeCon Europe in early 2017. The event was sold out and took place in a venue almost too small for the number of attendees. It was great.

During the event I was just about to finish the Hobby Kube [1] project. Back then there weren’t any guides that addressed all the problems you encounter when running Kubernetes on a budget, using low-cost VPSes from random cloud providers. So I dived into the subject in the second half of 2016 and started writing a guide, including automated provisioning using Terraform. I discovered WireGuard while looking for a way to efficiently secure network traffic between hosts. This still makes me feel like I was an early adopter of something that’s becoming hugely popular.

If somebody would like to add a Terraform module for deploying to Raspberry Pi, please ping me or open a PR here[2].

[1] https://github.com/hobby-kube/guide

[2] https://github.com/hobby-kube/provisioning

[+] SirMonkey|6 years ago|reply
I learned k8s with some NUCs we had lying around at work. Might be easier than Pis, but not as cheap. Some things I used:

https://metallb.universe.tf/ (LoadBalancer)
https://cilium.io/ (Networking)
https://rook.io/ (Persistent Storage)
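
As a point of reference, MetalLB's classic layer 2 mode was configured with a ConfigMap along these lines (the address range below is a placeholder for whatever your LAN can spare):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range, adjust for your network
```

With that in place, any Service of type LoadBalancer gets an IP from the pool instead of sitting in "pending" forever on bare metal.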
[+] salamander014|6 years ago|reply
I also use MetalLB and Rook. Can't say enough good things about them.

I have also used Kube-Router (https://kube-router.io - Digital Ocean's non-virtual networking plugin for bare-metal environments; it puts your containers on your physical network, which is freaking neat) and loved it, but since I started deploying Kubernetes with Rancher, I've found that for dev clusters I don't care which networking is used (currently running Canal).

Not sure what we will decide on when we go to production.

[+] bogomipz|6 years ago|reply
Does Rook give you the equivalent of EBS root volumes for your nodes then? Is that the function you have it providing? Does it offer something beyond using local host storage and minio?

I ask because I've generally been confused about the use case for Rook despite having read the "what is Rook?" paragraph many times on the project home page. My assumption is that it lets you build your own internal cloud provider. Is that correct?

[+] tyingq|6 years ago|reply
Thankfully these days, places like Digital Ocean and Linode have managed K8s where you only pay for the compute nodes, for $5/month each.

So it's fairly cheap and easy to learn on a "real" cluster without having to build one.

[+] gatherhunterer|6 years ago|reply
Building a cluster is a fun and relatively easy project, and you learn much more by having the hardware at your fingertips. You can simulate network failures and power failures, or crash an important daemon. By causing problems you can see how K8s responds and manages itself when it does not have the nodes it expects.

It is important to know these things because, for example, if you create a bare pod instead of a replica set, then the loss of a node means the pod assigned to that node is simply gone; nothing reschedules it. You need to know how a pod and a replica set differ in order to build a self-healing stack. You can learn all of that with the cloud solutions, but the ability to answer your own questions will always be the superior means of learning.
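
To illustrate the difference: a Deployment (which manages a replica set under the hood) declares a desired replica count, so pods lost with a failed node are rescheduled onto surviving nodes, while a bare pod has no controller watching it. A minimal sketch (name and image are arbitrary):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # the controller keeps three pods running somewhere
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
```

Unplug a node running one of these pods and a replacement appears elsewhere; do the same with a bare pod and it stays dead.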

The Agile movement has convinced many that the solution that gets you up and running today is the best. A good engineer is not afraid of working and learning instead of just buying a pre-baked implementation and calling themselves an experienced user. The cluster that I know like the back of my hand is my “real” cluster. The production cluster is someone else’s that I just rent.

[+] jptoto|6 years ago|reply
I LOVE Jeff's work and I own his Ansible book. The setup is pretty awesome. FWIW I think you could do this with local VMs a little more easily and provision with Vagrant. Having said that, a cluster of Pis is super fun!
[+] JeremyNT|6 years ago|reply
Yeah, using local VMs is going to greatly expand access to this kind of learning. A current developer laptop is more than beefy enough, and I'd suggest there aren't many useful lessons from the Pi hardware that can't be learned fully virtualized.

But if you're having fun, of course, more power to you :)

[+] kiseleon|6 years ago|reply
Just an aside: your parts list for the Pi Dramble has four Pi 4Bs but micro-USB power cables.
[+] geerlingguy|6 years ago|reply
Oops! I forgot to update that when I switched everything out for the 4 Bs. I'll update that in a little bit.
[+] Methusalah|6 years ago|reply
This seems like a fun project and a great way to learn Kubernetes, but if I'm dropping this much money on it, I'd like it to have some productive purpose afterwards.

I'm a full-stack web dev primarily using node/react/postgres. I've also got some projects currently hosted on a Linode instance. Ideas on fun/productive uses for this cluster after I've built it and messed around with Kubernetes?

[+] striking|6 years ago|reply
Move the projects to your cluster and see what happens!

Consider the fact that, if you make improvements to the cluster, all of your apps will see that same lift. So if you were to set up backups on the cluster's persistent volumes and its databases, you'd get free backups for all of the projects you've moved. Same with monitoring, autoscale, and so on.

[+] robgibbons|6 years ago|reply
RPis will handle Node surprisingly well, in my experience. A small cluster of them would be well suited to any number of web projects. A few years back, I played around with HAProxy and a few Pis running Node servers. You may be surprised how well they work as servers, as long as you're not expecting Xeon-level speeds.
[+] birdyrooster|6 years ago|reply
The Kubernetes ecosystem is evolving rapidly, you might want to keep it around to play with different CNIs, CSIs, service meshes, operators for clustered software lifecycle, and more.
[+] ljm|6 years ago|reply
I suppose I was lucky dabbling in one of my side-projects because I had money to burn. I put it all in Google Cloud and then learned just how much you have to complicate the stack (beyond the complication of K8S) to lower infra costs.

Suddenly I wasn't using basic Kubernetes any more, I was setting up a new ingress controller so Google wouldn't launch a new load balancer instance for every public service I exposed. Those things aren't cheap.
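
The fan-in pattern described here — one ingress controller, and therefore one cloud load balancer, fronting many services — looks roughly like this (hostnames and service names are made up for the sketch):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-services
spec:
  rules:
  - host: app.example.com       # both hosts share the one load balancer
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 80
```

Each additional service is just another rule on the existing ingress, rather than another billable load balancer.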

It was an amazing way to experience just how much you can suffer in the cloud, and just how far you can go down the rabbit hole with this kind of tech.

[+] Mountain_Skies|6 years ago|reply
Wonder if some old netbooks could be used for this purpose. I doubt I'm the only one with a small pile of them lying around.
[+] ricardbejarano|6 years ago|reply
From experience, anything with 2 GB or more of RAM can be a master node. Workers can even have 1 GB and work just fine.

Be warned, though: in my experience etcd requires reasonably low disk read/write latency, or it's going to fail, and when etcd fails, everything fails. Your changes don't apply, etc.

[+] acd|6 years ago|reply
Thanks Jeff for your great Ansible roles!
[+] madrox|6 years ago|reply
This is also how my alma mater (Cal Poly SLO) teaches Hadoop. Building real-world clusters is expensive, and giving each student their own is difficult. However, small clusters of Raspberry Pis are cheap, and it's also very easy to demonstrate how unplugging one affects the cluster.
[+] pojntfx|6 years ago|reply
@alexellisuk almost all of my interest in Kubernetes is due to your work. Thank you for everything you do!
[+] segmondy|6 years ago|reply
You can also build a cluster on your computer, if it's beefy enough, by running multiple VMs. I bought an HP Z820 workstation, 16 cores, 128 GB of RAM, for $1000 a few years ago, and that's my k8s experiment land.
[+] MuffinFlavored|6 years ago|reply
I don’t fully understand where the line is between “all I need is a Docker container for nginx + Postgres + Redis + my services” and “I need Kubernetes”.

When does one need to go from just Docker containers to container orchestration like k8s?
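
For scale, the "just Docker" end of that line is often a single compose file like this sketch (services and image tags are illustrative); orchestration starts to pay off once you need multiple hosts, rescheduling on node failure, or rolling updates:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example   # placeholder, not a real credential
  cache:
    image: redis:alpine
```

As long as `docker-compose up -d` on one box meets your availability needs, you arguably haven't crossed the line yet.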

[+] tjpd|6 years ago|reply
I can also recommend MagicSandbox (https://www.msb.com/) which provides a lot of learning content alongside a real k8s remote environment.
[+] crb002|6 years ago|reply
For me it was a Blue Gene/L.