The one thing I don't understand about these Pi-based mini-racks is why you would build a home lab that's less powerful than your client devices. My 24U rack exists precisely because I want large amounts of on-demand compute/GPU for compiling, transcoding, encrypting, etc., and the cloud is too expensive. If you're going to invest in any kind of home labbing, why gimp yourself with devices that can only run small services you could run on a single old desktop using containers?
> The one thing I don't understand about these Pi based mini-racks is why you would build a home lab that's less powerful than your client devices.
It makes sense because you are unlikely to run production workloads at home.
So you don't really need half a terabyte of RAM and a 220V power supply for the world's most expensive electric space heater.
Instead, people are most often interested in developing infrastructure-as-code, testing deployment strategies, or running drills to see what happens during outages: logging, metrics collection, simulating network failures, simulating software attacks, etc.
In most of those cases, having a number of smaller machines makes more sense than trying to emulate a small datacenter on one or two big ones.
In practice, I think most people end up with 2 or 3 'big machines' for times when they do need the oomph or want a big storage array for their "Linux ISO collections". Then having a number of Pis or HP mini desktops in arrays is just good fun.
If I want to simulate full-blown workloads and do benchmarking, I can just use AWS or Azure for that. It's a lot cheaper to lease virtual servers for an evening or two than to buy big machines and leave them idle 99.8% of the time.
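The outage-drill side of this is easy to sketch even without hardware. Here's a toy version in plain Python; the `Service` class is a stand-in for real nodes and purely illustrative:

```python
import random

class Service:
    """Stand-in for one lab node; a real drill would target actual hosts."""
    def __init__(self, name):
        self.name = name
        self.up = True

def kill_random(nodes, rng):
    """Simulate an outage: take one live node down at random."""
    victim = rng.choice([n for n in nodes if n.up])
    victim.up = False
    return victim

def cluster_healthy(nodes, min_up=2):
    """The 'service' survives as long as min_up nodes are alive."""
    return sum(n.up for n in nodes) >= min_up

nodes = [Service(f"node{i}") for i in range(3)]
rng = random.Random(42)
kill_random(nodes, rng)
print(cluster_healthy(nodes))   # True: 2 of 3 still up
kill_random(nodes, rng)
print(cluster_healthy(nodes))   # False: only 1 of 3 left
```

The real version of this swaps the fake `Service` for SSH'ing into a node and stopping a daemon, then watching what your monitoring and failover actually do.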
I generally agree with this sentiment and have struggled to put it into words. I have a beefy Dell Precision that I bought used for $200. This is where I simulate all the things: networking, container orchestration, services, and more. I have upgraded to 128GB of ECC RAM, PCIe NVMe drives, and a 24GB Quadro card. All in, I have about $800 invested. It's brittle, as it is also my desktop, so I delay updates and whatnot because I don't want to break anything. Not ideal.
So now I am left with building another system, and I need to decide on a form factor. Is this going to be headless, or run a GUI of some kind with a monitor attached? Should I buy a big ole tower case or move to a 6U or 12U rack system? I want more VRAM and I need as much PCIe as possible. One thing for sure is that I don't want it to be Raspberry Pi based. I have two Pi 4s collecting dust that were fun and impressive for what they are.
I saw these mini racks and wondered how they would work with an extended ATX board. Could these be useful as some kind of "open air" or mining-type case where you simply bolt stuff on? Definitely going to investigate, so while the exact application of mini-racking Pis is not my jam, I am thankful that it was brought up.
I have an HP EliteDesk (kind of like an N100) running Home Assistant and a Pi 4 running Pi-hole. A lil mini rack can tidy this all up while also having room for an NVIDIA Digits and maybe a NAS. It's for hobbyists and people without a lot of space. You can have a bunch of low-power/low-noise systems in a tiny little rack. I think it rocks.
In my case, it's because all my client devices are laptops, or locked down like Apple TV. It's nice to have a low-stakes experimentation box that can also be a Jellyfin server.
On the other hand, I don't go to the trouble this guy goes to. I just have a cheap mini PC plugged into Ethernet sitting on top of my router.
It's a low-power device you can leave on all the time, particularly for people who don't leave their clients on all the time. Like a router, but more versatile (although with a better OS on it, an actual router may do the same job).
Just a nit: the mini rack has nothing to do with Pis per se. One of the racks in the post features a Radxa N100 as the primary node, and other builds in the showcase feature various mini PCs (new and quite old), all of which have a lot more oomph than a Pi.
There are also other Arm SBCs that are much faster (and more efficient) than a Pi, like the Orange Pi 5 Max. Many homelab-related apps run just as well, if not better, on those; you just have to make sure to settle on a supported OS distro.
Maybe you want to configure 4 nodes to provide redundant network service.
Or experiment with VLANs, firewalls, QoS, and the like. Sure, transcoding multiple 4K video streams is intensive, but there's plenty to be learned about performance, networking, configuration management, DNS, failover, network storage, rolling upgrades, SSL certs, VPN endpoints, and 100 other network services that run easily on today's SBCs.
There's plenty to be learned with a handful of cheap Linux boxes, and even an RPi 5 or RK3588 is quite capable. Sure, it's not state of the art today, but home lab != hyperscaler, and it just might help get you a job at one.
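Failover in particular is cheap to prototype on a couple of SBCs. A minimal sketch of the client side, with made-up hostnames and an injected probe standing in for a real HTTP health check:

```python
def first_healthy(backends, probe):
    """Return the first backend whose health probe succeeds, else None."""
    for host in backends:
        if probe(host):
            return host
    return None

# Illustrative hostnames; in a real lab the probe would be an HTTP
# request to each node's health endpoint with a short timeout.
backends = ["pi-node-1.lan", "pi-node-2.lan", "pi-node-3.lan"]
up = {"pi-node-2.lan", "pi-node-3.lan"}     # pretend node 1 just died
print(first_healthy(backends, lambda h: h in up))   # pi-node-2.lan
```

Injecting the probe as a function keeps the logic testable without any network at all, which is half the point of practicing this stuff at home.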
The reason I'm considering a Pi cluster is resilience and repeatability. The reason I don't have one yet is because (like you) I'm unconvinced it's the right way to get that.
At least in theory, a Pi cluster has better failure modes than a single machine, even if it's less powerful overall. And yes, I'm currently running on an old laptop, but it's all a bit ad hoc and I really want something more uniform, ideally with at least some of the affordances I enjoy when deploying stuff professionally.
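To make the failure-mode point concrete: clustered systems in the etcd/k3s mold need a majority of nodes alive, so the arithmetic favors odd cluster sizes. A quick sketch:

```python
def quorum(n):
    """Smallest majority of an n-node cluster (what etcd-style systems need)."""
    return n // 2 + 1

def tolerable_failures(n):
    """How many nodes can die while a majority still survives."""
    return n - quorum(n)   # equals (n - 1) // 2

for n in (1, 2, 3, 4, 5):
    print(f"{n} nodes: quorum {quorum(n)}, tolerates {tolerable_failures(n)} failure(s)")
```

Note that 4 nodes tolerate no more failures than 3, which is why 3- or 5-node Pi clusters are the common shapes.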
I don't have a homelab, but I have an old Mac mini (that I put Debian on) as an always-on file server and host for other self-hosted software. I also have a 1L office PC as my main desktop (FreeBSD) with a couple of dev VMs (Alpine Linux). Both draw very little power, which is important for me as I'm on solar.
I have an M1 Mini which is more powerful than any of these, but macOS is not really suitable for tinkering or anything that's not Apple(TM).
A homelab, running something like Proxmox, spends a great deal of its time mostly idling, with the odd thing spiking for basic homelab tasks (storage, sharing, etc.).
Doing GPU-heavy or otherwise intensive tasks is more purpose-driven than a homelab. For that, you can get a NAS with a dedicated GPU for transcoding if you want, etc.
The call for large amounts of compute/GPU makes a lot of sense, and there are a lot of ways to get there depending on what's needed, relative to the electricity bill you're OK with if it ends up idling a lot more than anticipated.
Adding a Mac mini/Studio for crunching certain things might be enough for a single person or household. Adding other demands or users beyond that could change it.
I'm familiar with racks and gear, and had way too much of it when I pulled out of datacenters and went more virtual and cloud. The nice thing now is that a lot of that virtualization can come home, with a bit of the data center alongside (power backups, internet backups, etc.).
I think it's a difference of budget and type of homelab. Some people homelab for production (smart home, NAS, media server) and others for learning and development, and building a small complete network, whether multi-Pi or other low-cost devices, is a great way to get experience.
I feel like this concept will be useful in other situations, too, not just homelabs. I wouldn't be surprised at all to spot one of these controlling a temporary venue.
I have to concur with this, because the whole idea of a Pi-based homelab is just too anemic for my purposes when you can spend as little as 200 bucks and get an older x86-64 quad-core, 32GB-RAM desktop PC from one of Dell's enterprise midtower lines, which will serve as a much more powerful hypervisor. Spend just a little bit more and you're looking at Dell Precision systems with 64 or 128GB of RAM.
There are a few pseudo-standards for "half-width" 1U devices; one of the more notable vendors is MikroTik, which makes devices that can be mounted two abreast in 1U.
I wish there were some kind of firmly defined standard for exactly half of a 1U width, so that different manufacturers' devices could be attached together.
The 10" rack is the de facto standard half-width rack. It uses the same hole spacing as 19" racks. My understanding is that two 8.75" devices fit in a 19" rack with a mounting bracket in the center.
This is cute, but there should be some kind of affordance for wall-mounting or hanging it inside a closet. The emphasis on portability is… OK, I guess, but unlikely ever to be relevant for most people doing homelabs.
This feels kind of like a miniature model train, but for data centers. It's cool and all, but it offers neither storage, compute, nor networking in a way that would make me consider paying that price tag.
Also, I learned about this device from this post and immediately bought one for remote access to my existing home server: https://jetkvm.com/
https://mikrotik.com/product/rmk2_10
looks like this: https://cdn.mikrotik.com/web-assets/rb_images/2242_hi_res.pn...
Holy moly, these are getting expensive. $1k for something that goes in the closet is wild.
This was posted a while back; it has some good resources:
https://loganmarchione.com/2021/01/homelab-10-mini-rack/
It's less than $30 with a coupon, which seems too good to be true.
A look at the Amazon 2-star reviews suggests it has good build quality but can only output 75W to 100W total, not the 260W advertised.
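That gap is exactly the kind of thing a back-of-envelope power budget catches before checkout. A sketch, with ballpark per-device wattages I made up for illustration (not measurements):

```python
# Rough per-device draws in watts -- ballpark assumptions, not measured figures.
devices = {
    "pi5_a": 12,
    "pi5_b": 12,
    "nas_board": 30,
    "switch": 15,
    "mini_pc": 45,
}

def fits(budget_w, loads, headroom=0.8):
    """True if total draw stays under the supply rating, derated for headroom."""
    return sum(loads.values()) <= budget_w * headroom

print(sum(devices.values()))      # 114 W total
print(fits(260, devices))         # True on the advertised rating
print(fits(100, devices))         # False on what it actually delivers
```

A rack that passes the check at the advertised 260W fails it at the 75-100W the reviews report, which is how people end up with brownouts under load.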