
Home Lab Beginners guide

636 points | ashitlerferad | 2 years ago | linuxblog.io

387 comments

[+] buybackoff|2 years ago|reply
The article is good, but it is intimidating by its size and scope. A homelab could be a single NUC on a desk. I have one with 64GB RAM and it could fit so many things. NUCs are very efficient to run 24/7, but become noisy with sustained CPU load. For that, one could grow the setup with OptiPlex or Precision Tower (can haz ECC) SFF from eBay. These Dell SFFs are very good: quite small, yet proper desktops/servers with proper quiet fans; they can fit 10G Mellanox 3 cards (also from eBay for $40), and you can stack one on top of another horizontally. Don't go older than OptiPlexes with 12th-gen CPUs; the electricity and space older units take could become a constraint. The used ones with an i5-12500 are already very cheap. With LGA1700, one could put an i9-14900 (non-K) in there if needed.
[+] cybrox|2 years ago|reply
In addition to that, I don't think people should overthink the rack as an essential component. I like racks as much as the next guy and homelab racks look really cool but for usability and learning, just go with whatever you like.

I personally have 4 boxes stacked in a corner that are connected to a "rack", which is just two rack side panels bolted to the back of a workbench that I screw components into, connected to NAS and multiple Raspberry Pi's on a shelf and I really like the mess and have learned a lot.

Just use what you have and expand as you need. Racks are cool once you're into the hobby enough to care about style points.

[+] znpy|2 years ago|reply
Intel NUCs are so *insanely* good for homelabbers.

I got a few recently, and they're just great, particularly in terms of power usage. I hooked mine to tasmota-powered smart plugs, and they idle at something like 6W ... Granted, I always tune hardware (from bios config) and software (tuned profile) for low power... But long story short, my nucs rarely spike over 30W.

Literally half of one of those old tungsten-based lightbulbs.
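For the curious, the software side of that tuning can be sketched roughly like this. This is a sketch assuming a distro with tuned and powertop installed; the Tasmota hostname is hypothetical:

```shell
# Pick a low-power tuned profile, then let powertop flip the remaining knobs.
sudo tuned-adm profile powersave
sudo powertop --auto-tune
# Read idle draw back from a Tasmota plug over its HTTP API:
curl -s "http://tasmota-plug.local/cm?cmnd=Status%208"
```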

[+] tombert|2 years ago|reply
My first homelab thing was a laptop with a broken monitor. It wasn't terribly hard to get Ubuntu server working on there, it had a gigabit ethernet port built in, and it had USB 3.0 so it wasn't too hard to get reasonably good hard drive speeds to use as a NAS.

You can buy old used laptops (especially ones with broken monitors) for basically nothing (or maybe literally nothing if you already have one, or a friend or family member has one they're willing to donate). If your goal is just to use it for Plex, it will work fine, and the power draw on laptops (particularly when idle) isn't that much more than a NUC's.

I use a proper rack mount server now largely because I thought they were neat, but I think for most people an old laptop is fine.

[+] _peeley|2 years ago|reply
I agree. For most people just starting out, it's a lot more worthwhile to get a single cheapo repurposed desktop or a single Raspberry Pi to run PiHole or something on and then expand from there. My homelab[0] started as a single Pi running PiHole and has expanded to four machines running everything I need from Jellyfin to Calibre to DNS, etc.

That being said, when I finally got around to rackmounting and upgrading some of the other hardware in my lab, this "beginner's" guide was really helpful.

[0] https://blog.janissary.xyz/posts/homelab-0

[+] ryukoposting|2 years ago|reply
I'll second this. My "home lab" is several small computers crammed wherever they can either be totally hidden, or where they can justify their presence. An old Celeron NUC lives under my couch, and it runs Pi-Hole, Syncthing, and some diagnostic stuff. It's extremely useful, and the impact on my electric bill is negligible. A Lenovo mini PC lives behind my TV, and serves double duty as a Syncthing node and an HTPC. I'll probably make it do other stuff eventually.

It's not the most sophisticated setup on earth, but it works great, it's dirt cheap, and it's apartment-friendly.

[+] mech422|2 years ago|reply
Personally, I really like the ODROID H2/H3 units (0). Much cheaper than NUCs, DDR4 SO-DIMMs up to 64GB, very low power draw, and x86-based so no compatibility issues. Nowhere near as powerful as desktops, but more than enough for running basic home lab stuff.

I used to use a half rack full of Xeon based Dell and HP blades - but the blade chassis are huge/heavy and the power usage was HUGE!

0) https://ameridroid.com/products/odroid-h3

[+] bayindirh|2 years ago|reply
My homelab (or infrastructure if you will) consists of 2 ARM SBCs distributed geographically and connected via a VPN & syncthing. They are fanless, powerful enough (8 cores and 8GB of RAM is plenty for what I do), and handle 90% of the automation my old desktop was doing.

I don't need Proxmox, tons of containers or anything. If I do, I can always throw a N95 into the mix with ample RAM and storage.

[+] rpcope1|2 years ago|reply
Unless something has recently changed, the SFF Precisions and Optiplexes use weird smaller PSUs and generally have really poor airflow. The lower-end Precision mid-towers also flow air terribly and get hot and whiny. There's no way I would recommend an SFF system if you can get the MT variant: when the Dell PSU inevitably fails, your options for fixing an MT system are much better than getting a regular ATX PSU, cutting a hole to feed the cords in, and bolting it to the side of the case, which is what I've done repeatedly for SFF Dell systems with blown PSUs.

Additionally, one challenge I have with all of these is that none of them have anything like a BMC. If you have more than one system, moving peripherals around sucks, and KVM technology is kind of iffy these days. It seemed easier to just jam an old Supermicro motherboard that had IPMI enabled in a reasonable ATX case with a lower power Xeon variant, and call it a day.

[+] smallmancontrov|2 years ago|reply
I've been thinking about upgrading my current "homelab" (just an old PC), and I've been waiting for a small form factor with an integrated power supply and a bunch of NVMe slots. Does that exist yet?
[+] vGPU|2 years ago|reply
I actually picked up a refurb desktop from Walmart with a Ryzen 3500 for $400, and it runs basically everything without breaking a sweat: Proxmox running Home Assistant, Docker, my seedbox, media server, etc., and it averages 3% CPU usage.

I did not know just how much heat a 16TB disk can put out until that point, though.

[+] Scene_Cast2|2 years ago|reply
Yep, agreed. When living in a high cost of living area (i.e. where lots of tech professionals tend to live), space is limited.

I'm annoyed at the lack of ECC-capable sub-ITX NAS boards. I have a Helios4, but I've no idea what I'll migrate to when that dies.

[+] fsckboy|2 years ago|reply
> A homelab could be a single NUC on a desk

then a better name than homelab would be your "NUC/NOC-nook".

with regard to TFA, I don't trust "labs" with neat wiring.

[+] NelsonMinar|2 years ago|reply
A bit of a tangent but I want to sing the praises of Proxmox for home servers. I've run some sort of Linux server in my home for 25 years now, always hand-managing a single Ubuntu (or whatever) system. It's a huge pain.

Proxmox makes it very easy to have multiple containers and VMs on a single hardware device. I started by just virtualizing the one big Ubuntu system. Even that has advantages: backups, high availability, etc. But now I'm starting to split off services into their own containers and it's just so tidy!

[+] caconym_|2 years ago|reply
+1, Proxmox is awesome. I set up a cluster with two machines I wasn't using (one almost 10 years old, one fairly modern with a 5950X) and now I don't have to worry about that single Debian box with all my services on it shitting the bed and leaving me with nothing. VMs are useful on their own, but the tools Proxmox gives you for migrating them between machines and doing centralized backup/restore are incredibly freeing. It all works really well, too.

Most recently I set up a Windows VM with PCIe GPU passthrough for streaming (Moonlight/Sunshine) games to the various less-capable machines in my house, and it works so well that my actual gaming PC is sitting in the corner gathering dust.

My only complaint is I wish there was a cheaper paid license. I would love to give back to them for what I'm getting, but > $100/yr/cpu is just too much for what amounts to hobbyist usage. I appreciate the free tier (which is not at all obnoxious about the fact that you aren't paying), but I wish there could be a middle ground.

[+] cmehdy|2 years ago|reply
Proxmox is incredibly helpful, especially for homelab beginners familiar with Linux. You can easily create and destroy various things as you learn. The only thing that's not so easy is storage in general: it's very difficult to truly understand the consequences of various choices, and therefore it's kinda difficult to properly set up the backbone of a NAS unless you're fairly comfortable with ZFS, LVM, lvm-thin, re-partitioning things, etc.
[+] didntcheck|2 years ago|reply
As someone still in monolithic home server tech debt I've been meaning to migrate too. The part I always wonder about is storage - said server is a NAS, serving files over SMB and media over Plex. From what I hear some people mount and export their data array [1] directly on Proxmox and only virtualize things above the storage layer, like Plex, while others PCI-passthrough an HBA [2] to a NAS VM. I suppose one advantage of doing the former is that you might be able to directly bind mount it into an LXC container rather than using loopback SMB/NFS/9p, right?

I also hear some people just go with TrueNAS or Unraid as their bare-metal base and use it for both storage and as a hypervisor, which makes sense. I might have to try the Linux version of TrueNAS now that it's had some time to mature.

I also rely on my Intel processor's built-in Quicksync for HW transcoding. Is there any trouble getting that to run through a VM, except presumably sacrificing the local Proxmox TTY?

[1] As in the one with the files on, not VHDs. I default to keeping those separate in the first place, but it's also necessary since I can't afford to put all data on SSDs nor tolerate OS roots on HDDs. Otherwise a single master ZFS array with datasets for files and zvols or just NFS for VM roots would probably be ideal

[2] Which I hear is considered more reliable than individual SATA passthrough, but has the caveat of being more coarse-grained. I.e. you wouldn't be able to have any non-array disks attached to it due to the VM having exclusive control of all its ports
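For what it's worth, the bind-mount-into-LXC route can be sketched like this on the Proxmox host. The container ID and paths here are hypothetical:

```shell
# Bind-mount a host dataset into container 101 at /mnt/media.
pct set 101 -mp0 /tank/media,mp=/mnt/media
# For an unprivileged container, host uid 0 maps to 100000, so the
# dataset's ownership may need shifting, e.g.:
chown -R 100000:100000 /tank/media
```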

[+] globular-toast|2 years ago|reply
Obviously you could have just run containers on that Ubuntu system. And nothing was stopping you from backing it up either. But you might as well start with a hypervisor on a new server. It gives you more flexibility to add other distros and non-Linux stuff into the mix too (I run an OPNsense router, for example).

I use xcp-ng rather than Proxmox.

[+] AnarchismIsCool|2 years ago|reply
I know it's becoming the new "I use arch btw" meme, but if you go down this route I highly recommend using NixOS as the distro. Ideally you can just leave these systems running once they're working, and using Nix means all your system state is logged in git. No more "wait, how did I fix that thing 6mo ago?" or having to manually put the system back together after an Ubuntu dist-upgrade explodes. Every change you've made, package you've installed, and setting you've configured is in the git log.

I find myself referring to my git repo frequently as a source of documentation on what's installed and how the systems are set up.
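The workflow looks roughly like this. A sketch, assuming NixOS with its config kept in the default /etc/nixos:

```shell
cd /etc/nixos
git init && git add . && git commit -m "initial system state"
# ...edit configuration.nix to add a package or service...
sudo nixos-rebuild switch            # apply; the change now lives in git history
git log --oneline                    # answers "how did I fix that 6mo ago?"
sudo nixos-rebuild switch --rollback # back out the last generation if it breaks
```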

[+] lloeki|2 years ago|reply
> Location

For a few years I put mine inside an IKEA FRIHETEN sofa (the kind that just pulls†).

    Pros:
    - easy access
    - completely invisible save for 1 power wire + 1 fiber (WAN) + 1 eth (LAN)††
    - structure allows easy cable routing out (worst case you punch a hole on the thin low-end)
    - easy layout/cable routing inside
    - noise reduction for free
    - doubles as seat warming in winter (a seated butt is never cold)
    - spouse enjoys zero blinkenlights
    - spouse didn't even notice I bought a UPS + a disk bay

    Cons:
    - a bit uncomfortable to manipulate things inside
    - vibrations (as in, spinning rust may not like it) when sitting/seated/opening/closing
    - heat (was surprisingly OK, not worse than a closet)
    - accidental spillage risks (but mostly OK as design makes it flow around, not inside the container, worst case: put some raisers under hardware)
    - accidental wire pull, e.g. when an unsuspecting spouse is moving furniture around for housecleaning (give 'em some slack)
https://www.ikea.com/us/en/images/products/friheten-sleeper-...

†† that I conveniently routed to a corner in the back then hidden in the wall to the nearest plugs, so really invisible in practice.

[+] neilv|2 years ago|reply
I had a similar thought about the IKEA KIVIK sofa (non-sleeper variants), which has wide boxy armrests that are open to the underside, and could fit towers of little SFF PCs, or maybe rackmount gear sideways. There's also space underneath the seat, where rackmount gear could fit.

One of the reasons I didn't do this was that I didn't want a fire with sofa fuel right above/around it. I wasn't concerned about the servers, but a bit about the UPS. With sheet-metal lining of the area, plus good venting in case the UPS battery type could leak gases, I'd have felt a bit better about it, but that was too much work. So the gear ended up away from upholstery, in rack/shelving, where I could keep an eye on it.

[+] jcul|2 years ago|reply
This is amazing. Do you have an actual photo with your servers inside or anything?

I would be really worried about ventilation and heat / fire risk.

Reminds me of the IKEA LackRack.

https://archive.is/Uf2k3

[+] zer00eyz|2 years ago|reply
The whole home lab scene is great.

Everyone has some set of goals... low power, interesting processors, data ownership, HA, UPS/whole home UPS... and the home is the only common intersection of all these overlapping interests, and bits of software. Even more fascinating is the types of people it attracts, from professionals playing to people far outside the industry.

I have really taken the plunge and it recaptures some of the magic of the early internet (at least for me).

[+] AnarchismIsCool|2 years ago|reply
The community is absolutely amazing. It's incredibly active on Reddit and Lemmy. People are always quick to help you find solutions and give you advice on setting things up using best practices. It's an absolute gem if learning this stuff is interesting to you.
[+] lolinder|2 years ago|reply
As an alternative perspective, this is my home lab:

* Location: Sitting on a shelf in my basement office. Ventilation is okay, the WiFi is fine but not great.

* Hardware: An old PC I picked up at a neighborhood swap meet. I added some RAM taken from another old PC and bought a hard drive and WiFi card.

* Software: Debian stable and podman/podman-compose. All my useful services are just folders with compose files. I use podman-compose to turn them into systemd units.

If the stuff in the article is the kind of thing you're into, that's awesome, go ham! But you absolutely do not need to ever, and you certainly do not need to do it right away. I run a bunch of services that my family uses daily on this thing, and we use less than half the 16GB RAM and never get over 5% CPU usage on this old, ~free PC.
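The compose-folder-to-systemd-unit step can be sketched like this for one service. A rootless sketch; the service name and paths are hypothetical:

```shell
cd ~/services/jellyfin && podman-compose up -d
podman generate systemd --new --files --name jellyfin
mkdir -p ~/.config/systemd/user
mv container-jellyfin.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-jellyfin.service
loginctl enable-linger "$USER"   # keep user services running after logout
```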

[+] isopede|2 years ago|reply
I have a rather extensive homelab, painstakingly set up over time. It works great, I love it. Few questions for you guys:

- My real problem is disaster recovery. It would take me forever to replicate everything, if I could even remember it all. Router configurations, switch configurations, NAS, all the various docker containers scattered across different vlans, etc. I mapped out my network early on but failed to keep it up to date over time. Is there a good tool to draw, document, and keep up-to-date diagrams of my infra?

- Backup and upgrading is also a persistent problem for me. I will often set up a container, come back to it 6 months later, and have no idea what I did. I have dozens of containers scattered across different machines (NUCs, NAS, desktops, servers, etc). Every container service feels like it has its own convention for where the bind mounts need to go, what user it should be run as, what permissions etc it needs. I can't keep track of it all in my head, especially after the fact. I just want to be able to hit backup, restore, and upgrade on some centralized interface. It makes me miss the old cattle days with VM clone/snapshot. I still have a few VMs running on a proxmox machine that is sort of close, but nothing like that for the entire home lab.

I really want to get to a point, or at least move towards a solution where I could in theory, torch my house and do a full disaster recovery restore of my entire setup.

There has to be something simpler than going full kubernetes to manage a home setup. What do you guys use?

[+] bovem|2 years ago|reply
Since last year I’ve been configuring and maintaining my homelab setup and it is just amazing.

I’ve learned so much about containers, virtual machines and networking. Some of the self hosted applications like paperless-ngx [1] and immich [2] are much superior in terms of features than the proprietary cloud solutions.

With the addition of VPN services like tailscale [3] now I can access my homelab from anywhere in the world.

The only thing missing is to setup a low powered machine like NUC or any mini PC so I can offload the services I need 24/7 and save electricity costs.

If you can maintain it and have enough energy on weekends to perform routine maintenance and upgrades, I would 100% recommend setting up your own homelab.

[1] https://docs.paperless-ngx.com/ [2] https://immich.app/ [3] https://tailscale.com/

[+] neilv|2 years ago|reply
If your homelab gear is in a non-tech-nerd living space, also think about noise, lights/displays, and otherwise being discreet.

As an apartment-dweller, for a long time I had it in a closet. Once I moved it to living room, my solutions included:

* For discreet, IKEA CORRAS cabinet that matched my other furniture. I had rackmount posts in it before, but got rid of them because they protruded.

* For noise, going with gear that's either fanless, or can be cooled with a small number of Noctua fans. (I also replace the fans in 1U PSUs with Noctuas, which requires a little soldering and cursing.) I tend to end up with Atom servers that can run fanless in a non-datacenter except for the PSU.

* Also for noise, since my only non-silent server right now is the 3090 GPU server, I make that one spin up on demand. In this case, I have a command I can run from my laptop to Wake-on-LAN. But you could also use IPMI, kludge something with a PDU or IoT power outlet, find a way to spin down the 3090 and fans in software, make Kubernetes automate it, etc.

* For lights, covering too-bright indicator LEDs with white labelmaker tape works well, and looks better than you'd think. Black labelmaker tape for lights you don't need to see.

* For console, I like the discreet slide-out rack consoles, especially the vintage IBM ones with the TrackPoint keyboards. If I was going to have a monitoring display in my living room, I'd at least put the keyboard in a slide-out drawer.

* I also get rid of gear I don't need, or I'd need more than twice the rack space I currently have, and it would be harder to pass as possibly audiophile gear in the living room.

* For an apartment, if you don't want to play with the router right now (just servers), consider a plastic OpenWRT router. It can replace a few rack units of gear (router, switch, patch panel), and maybe you don't even need external WiFi APs and cabling to them.
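On the Wake-on-LAN point above: the magic packet is simple enough to build by hand. A sketch; the MAC address is hypothetical, and actually sending it needs a tool like socat or wakeonlan:

```shell
# A WoL magic packet is 6 bytes of 0xFF followed by the target MAC 16 times.
mac="aa:bb:cc:dd:ee:ff"
packet="$(printf 'f%.0s' {1..12}; printf "${mac//:/}%.0s" {1..16})"
echo "${#packet}"   # 204 hex chars = 102 bytes
# Send it as a UDP broadcast on port 9, e.g.:
# echo -n "$packet" | xxd -r -p | socat - UDP-DATAGRAM:255.255.255.255:9,broadcast
# Or just: wakeonlan aa:bb:cc:dd:ee:ff
```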

[+] hedgehog0|2 years ago|reply
I’m a math grad student and want to play with LLMs as well, so I have been thinking about getting a 3060 or 3090, due to budget. However, I’m using a MacBook Pro from 2011, thus not very convenient for these kinds of things.

What do you think the lowest budget or spending that I should have, given my “requirement”? Or would you say that vast.ai or similar websites would suffice?

[+] globular-toast|2 years ago|reply
I'm older than 30 so I just call this having a home network and more than one computer.

Why do I have a home network? Like many here, I develop network applications. I'm not a network technician, but being able to truly understand how the various components of the internet work, like TCP/IP, DNS etc., is really useful and sets me apart from many developers. I also just like being in control and having the flexibility to do what I want with my own network.

Why do I have multiple computers? Playing with different operating systems etc. isn't really the reason as virtualisation is pretty good these days. It actually comes down to locality of the machines. I want my hard disks to be in a cupboard as they are quite noisy, but I want my screens and keyboard to be on a desk. So I've got a NAS in a cupboard and a (quiet) PC at my desk, plus a (silent) media centre in my living room and other stuff.

One thing I would say is don't be tempted by rack mounted server gear. Yeah, it looks cool and you can get it second hand for not much, but this isn't suitable for your home. Use desktop PC cases with big fans instead. Rack mounted network gear is pretty cool, though.

[+] marcus0x62|2 years ago|reply
> One thing I would say is don't be tempted by rack mounted server gear.

That depends on the server. Super Micro makes some rackmount servers that are shallow enough for a network cabinet/wall-mount rack and fairly low power. They also have a real OOB mgmt system, which can be really helpful. https://www.supermicro.com/en/products/system/iot/1u/sys-510...

I agree that typical deep-rackmount servers are more trouble than they are worth for a home lab.

[+] cqqxo4zV46cp|2 years ago|reply
I built my home server with a 4RU case filled with desktop components. Thankfully 4RU seems tall enough for fans large enough to be quite quiet. *Actual* rack mounted equipment though? I highly recommend that anyone even remotely thinking of bringing this stuff home find some way to hear how bloody loud it is beforehand. I couldn't imagine living with that.
[+] bombcar|2 years ago|reply
Rackmounted gear is fine depending on where it is "racked" - some climates can get it into a garage, basement, etc.

But unless you have a real rack it's gonna be a bit of a pain, because it WILL end up being stacked on other pieces of gear in a rackish tower that requires full dismantling to get to the piece you want to work on.

If you go off-lease rack equipment, go whole hog and get a rack and rails too. Rails can be a bit pricey, check the listings for those that include them - rails often work for much longer than the servers so the big companies that liquidate off-lease equipment don't include them; smaller sellers often do.

[+] gorkish|2 years ago|reply
IMO one of the biggest frustrations I currently have with homelab-scale gear (and edge compute in general) is how everything has massively regressed to only offering 2.5Gbit connectivity at best.

Try to find one of these hundreds of small form factor products that has anything faster than a 2.5Gbit NIC and despair! What good are these machines if you can't even get data in and out of them?!

The list of hardware with 2x 10GbE or better connectivity is amazingly short:

  - Minisforum MS-01
  - GoWin RS86S*, GW-BS-1U-*
  - Qotom Q20331G*
From major manufacturers, only the following:

  - Mac Mini (1x 10GBase-T option, ARM)
  - HPE Z2 Mini* (1x 10GBase-T option)
Please someone, anyone, build an SFF box with the following specs:

  Minimum of 2 NVMe
  Minimum of 2 additional storage devices (SATA or more NVMe)
  Minimum of 2 10GbE
  Dedicated or shared IPMI/OOBM
  Minimum of 8 Core x86 CPU
  Minimum of 64GB RAM
Make sure the NICs have enough PCIe lanes to hit their throughput limits, and give the rest to storage. Stop with the proliferation of USB and Thunderbolt and 5 display outputs on these things, please already!
[+] haunter|2 years ago|reply
>For this article’s purpose, I won’t recommend any specific servers because this will vary a ton depending on what you will be hosting. NAS storage, VMs, web servers, backup servers, mail servers, ad blockers, and all the rest. For my requirements, I purchased a ThinkCentre M73 and ThinkCentre M715q; both used off eBay.

This is the way. HP, Lenovo, and Dell all make nearly identical small form factor PCs that are more than enough for most people at home. They are also very quiet and power efficient.

I have a Lenovo M720 Tiny and a Dell Optiplex 3080 Micro (they are virtually the same). You can change parts, there are ample ports available, and you can run pretty much any OS you want.

[+] Andrex|2 years ago|reply
I've been pretty happy with the EliteDesk G3 I got off eBay a couple months ago. I actually use it for light work (mostly spreadsheets and emails) too.

I think I paid less than $130 including shipping and sometimes it (anecdotally) has better performance than my home PC which is a full tower I bought new 4-5 years back.

[+] laweijfmvo|2 years ago|reply
> For my requirements, I purchased a ThinkCentre M73 and ThinkCentre M715q; both used off eBay.

This deserves a shout out. These things are everywhere on eBay and the Ryzen ones rock.

[+] Andrex|2 years ago|reply
This article has such a focus on hardware, when that's like... 1% of my homelab decision making.

Was hoping this article would delve into hosting your own email, websites and services and the steps to expose them to the public internet (as well as all the requisite security considerations needed for such things).

A homelab is just a normal PC unless you're doing homelab stuff.

[+] tanelpoder|2 years ago|reply
I recently bought an old Mac Pro 2013 (trashcan) with 12 cores/24 threads and 128 GB ECC RAM for my upgraded "always-on" machine - total cost $500. Installed Ubuntu 22.04, works out of the box (23.10 had some issues). Unfortunately it's hard/impossible to completely suspend/disable the two internal AMD Radeon GPUs to lower power consumption. I got it down to ~99W consumption when idle, when using "vgaswitcheroo" to suspend one of the GPUs (and it set the other one to D3hot state). My Intel NUC consumes almost nothing when idle (my UPS reports 0 W output while its running, even with 4 NVMe disks attached via a Thunderbolt enclosure). I don't want a 100W heat generator running 24x7, especially when I'm away from home, so will need to stick with the NUC...
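For reference, the vgaswitcheroo dance described above looks roughly like this. A sketch; it requires root and debugfs, and exact behavior depends on the GPU drivers:

```shell
mount -t debugfs none /sys/kernel/debug 2>/dev/null || true
cat /sys/kernel/debug/vgaswitcheroo/switch    # list GPUs and power states
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch   # power down inactive GPU(s)
```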
[+] MrBuddyCasino|2 years ago|reply
I get an electronics workbench or a woodworking workshop, but I still don't quite understand the purpose of this "home lab". You can just run stuff on your laptop, or a mini computer in some corner if it's supposed to run 24/7?
[+] MrVitaliy|2 years ago|reply
Surprised author didn't mention https://labgopher.com/ for finding used enterprise hardware at low cost on ebay. This hardware is cheap and typically out-of-service for enterprise, but great for home labs.
[+] bombcar|2 years ago|reply
The big problem with off-lease enterprise hardware is the power usage can quickly outrun the original costs - but that may not be an issue for everyone.

The nice thing about off-lease enterprise hardware is there is so much of it, and it's enterprise. You can easily find replacement parts, and working on that stuff is a joy, everything is easily replaceable with no tools.

[+] justinlloyd|2 years ago|reply
The notion of a homelab is really, more of a suggestion. I have, what you might call, a "homeprod."

"Be aware, the video server and sync server will be offline between the hours of 8PM and 10PM on Friday, March 8th for hardware upgrades. Expected downtime should not be more than 20 minutes but please plan around this disruption to essential services. Other services should not be affected during this work."

Like someone else mentioned in this thread: It was really best described as a "home network with more than one computer." But you can't really put that on your C.V. under "experience."

"Improved SLA uptime on client's network to five nines. Upgraded storage controllers in a live environment without disruption to established services. Deployed k8s cluster into a production environment serving thousands of hours of compressed video files with real-time format transcoding."

I will freely admit, my home network probably needs to be pared down a bit. The problem is that salvaged hardware and good deals at the liquidation auctions accumulate faster than I can cart them off to the street corner. There are also far too many laptops floating around the house doing nothing useful that could be donated away. A lot of the homelab is salvage or bought at (in-person) liquidation auctions. There's some new stuff, but I rarely need the latest and greatest for my setup. In my home network diagram I try to document everything that touches the network so I can stay on top of the security patches, and what has access to the outside world and in what way.

https://justinlloyd.li/wp-content/uploads/2023/11/home-netwo...

[+] tivert|2 years ago|reply
> Attic

> Pros: Less noise, easier cable runs.

> Cons: can get hot depending on where you live, roof leaks, humidity/condensation, and creepy at night.

Where do attics not get hot? I live in a northern climate, and ours regularly gets hotter than 140F during sunny summer days.

And I say hotter, because the remote temp sensor I have up there maxes out at 140F. I wouldn't be surprised if it actually gets up to 150F or more.

[+] GabeIsko|2 years ago|reply
If someone made a UPS in the form factor of a laptop battery, we could be off to the races. I thought a laptop battery would be good for this, but they do bad things if you leave them plugged in all the time.

No reason to go crazy with your home server if it is only you using it. You can get away with a crappy laptop that your roommate is done with.

[+] PKop|2 years ago|reply
So what should you do, remove the battery on these laptop servers?
[+] quaffapint|2 years ago|reply
Unless you simply want to play with all the various bits of hardware, a standard PC tower will be more power efficient and save a lot of room. I use that with unRaid and a simple router and switch; it's power efficient, quiet, cheaper, and takes up much less room.
[+] justinclift|2 years ago|reply
> a standard PC tower will be more power efficient

How much does it average in power draw (watts) from the wall?