bnabholz | 1 year ago
100% to backups. I know we all put off doing it, but you'll rest a lot easier, even with personal data you don't think you care about. It's not only about a hardware failure; even a fluke sysadmin error can accidentally nuke something. I'd recommend getting an account for Backblaze B2 and setting up restic on each Pi to back up the data directories and stuff you care about at least daily. Your GitLab is a bit less risky, since presumably you also have a clone of each repo on some other machine.
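To make that concrete, here's a minimal sketch of what a daily restic-to-B2 job could look like, wrapped in Python so it's easy to cron. The bucket name, repo password, key IDs, and backed-up paths are all hypothetical placeholders; restic itself reads the repository and credentials from the environment variables shown.

```python
import os
import shlex
import subprocess

# Hypothetical values -- substitute your own B2 bucket, keys, and data paths.
env = {
    **os.environ,
    "RESTIC_REPOSITORY": "b2:my-pi-backups:gitlab",  # B2 bucket:path (made up)
    "RESTIC_PASSWORD": "change-me",                  # repo encryption password
    "B2_ACCOUNT_ID": "keyID",                        # B2 application key ID
    "B2_ACCOUNT_KEY": "applicationKey",              # B2 application key
}

# Back up the data directory, then trim old snapshots.
backup_cmd = ["restic", "backup", "/srv/gitlab/data", "--tag", "daily"]
prune_cmd = ["restic", "forget", "--keep-daily", "7", "--keep-weekly", "4", "--prune"]

def run_daily_backup(dry_run: bool = True) -> str:
    """Return the command line that would run; execute it when dry_run=False."""
    line = " && ".join(shlex.join(cmd) for cmd in (backup_cmd, prune_cmd))
    if not dry_run:
        for cmd in (backup_cmd, prune_cmd):
            subprocess.run(cmd, env=env, check=True)
    return line

print(run_daily_backup())
```

Run `restic init` once against the repository first; after that, a cron entry calling this with `dry_run=False` gives you daily encrypted, deduplicated snapshots with a simple retention policy.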
I love that people are building small datacenters out of Pis. I haven't done the math on TCO, but instead of multiple Pis for self-hosting, I have a lonely secondhand Dell Precision with an old 8th-gen Intel CPU (6C/12T), 64GB of RAM, and several TB of NVMe, plus some spinning rust for the long-term stuff. It's just a crazy amount of horsepower. Most trusted workloads run as containers, my other experiments run as VMs, and I have capacity in all the right places (I need disk and RAM more than CPU). Not as exciting as building a cluster, but I have the excess capacity to spin up multiple VMs on that one machine if I want to play with that. It can get very Inception-like: VMs in KubeVirt on top of Kubernetes that is running on a cluster of VMs that are ultimately on a single machine, while delegating whatever extra /64 IPv6 prefixes Comcast gave me to each of the bottom-layer VMs so that each pod still gets a globally routable IPv6 address. Cool times for the homelab stuff, and it helped me understand things like Kubernetes and IPv6 to a much greater depth.
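The prefix-delegation part of that setup is easy to reason about with the standard library: a delegated prefix shorter than /64 carves cleanly into /64s, one per bottom-layer VM. A small sketch, using a documentation prefix and made-up VM names (your actual delegated prefix and size will differ):

```python
import ipaddress

# Hypothetical delegated prefix -- e.g. what an ISP might hand out via DHCPv6-PD.
delegated = ipaddress.ip_network("2001:db8:abcd:10::/60")

# A /60 splits into 2^(64-60) = 16 distinct /64 subnets.
vm_subnets = list(delegated.subnets(new_prefix=64))

# Hand one /64 to each bottom-layer VM; pods inside get addresses from it.
for vm, net in zip(["vm-a", "vm-b", "vm-c"], vm_subnets):
    print(f"{vm}: {net}")
```

Each VM then advertises (or statically assigns) its /64 to the pod network inside it, which is why every pod can end up with a globally routable address instead of sitting behind NAT.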