In the early '90s, my Bride considered the pile of computers around my desk. "Why not just get one big one?" Fast forward a couple of weeks to me shopping at the Lockheed outlet store, where I found a Sun 3/280s and a 19"-wide, 8'-tall rack for $25. She was not amused to have a 'big one' the size of a fridge in our tiny apartment.
That old server has gone through many, many incarnations of hardware.
My wife came into our relationship with not just one, not just two, but THREE NeXTcubes. I negotiated her down to one actually-working one and one decorative one, and gave away the third.
I had a ProLiant 7000 (quad Pentium II Xeon) for a while; it was 16U and as loud as three vacuum cleaners. The spouse was very happy when that disappeared.
A few years ago I bought a switch that could be rack mounted and ended up looking at getting a rack to put most of my stuff in. I quickly abandoned that idea when I saw that rack mount makes things cost significantly more. I remember the UPS pricing being especially nasty. It felt like they were just charging whatever they could get away with, given that it's really only businesses buying rack mount equipment.
Though if I ever buy a house I might end up going Linus-level crazy and put a rack in the basement or a utility room, with fiber optic cabling back to my desk for peripherals to attach.
I've had a bunch of APC SmartUPS 1500 in both tower and rack formats.
I found the batteries in the rack version, where the batteries lie on their sides, reach end of life much sooner than in the tower version, where the batteries are upright.
It's likely down to poorer heat dissipation/ventilation in the rack version, but it may also have to do with battery orientation. The rack version also seems to suffer corrosion damage from battery off-gassing if the batteries reach critical failure and overheat. I haven't seen the same damage on the tower units. Again, probably some combination of orientation and ventilation.
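If you want to catch that decay early, apcupsd exposes the battery numbers, and a few lines of Python can log them over time. A rough sketch, assuming apcupsd's apcaccess CLI is installed; BCHARGE/BATTV/TIMELEFT/BATTDATE are standard apcupsd status fields, but check the `apcaccess status` output on your own unit:

    import subprocess

    def ups_status():
        # `apcaccess status` prints lines like "BCHARGE  : 100.0 Percent"
        out = subprocess.run(["apcaccess", "status"],
                             capture_output=True, text=True, check=True).stdout
        fields = {}
        for line in out.splitlines():
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
        return fields

    if __name__ == "__main__":
        s = ups_status()
        for key in ("BCHARGE", "BATTV", "TIMELEFT", "BATTDATE"):
            print(f"{key:9} {s.get(key, 'n/a')}")

Run it from cron and a battery that's on the way out shows up as runtime and voltage sagging over weeks, well before the self-test fails.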
> I saw that rack mount makes things cost significantly more.
There's a logical reason for this, though: it's a pretty safe assumption that you're in a larger facility if you're rack mounting, and a lot of rack mount chassis come with features like dual PSUs and other redundancy. Redundancy does cost.
> put a rack in the basement or a utility room
They make enclosures[0] that are smaller than a full rack but still allow for rack mounting equipment. You can then just shove this enclosure on a shelf or wherever. Much more convenient than a full-on server rack.
[0] https://www.amazon.com/Wall-Mount-Server-Rack-Cabinet/dp/B07...
> I quickly abandoned that idea when I saw that rack mount makes things cost significantly more.
Second hand (but perfectly usable) parts. Expect more second hand enterprise items to come to market if the economy doesn't start to boom soon. Second hand enterprise gear is usually perfectly fine in terms of longevity. Mind the noise, though.
Get a 3D printer and print some rack adapters yourself. There's a surprising array of adapter designs for all sorts of devices, and they work just fine for stuff that's not too heavy (like most home networking gear); see the hole-spacing sketch below.
If the rack itself is too expensive, you can do an Ikea Lack rack. Full-size racks are easy to find (relatively; it's location dependent). I haven't had much luck with network racks (the largest my wife was comfortable with), so I bought the rack new. I still managed to fit the whole networking gear stack (none of which is racked at the moment; it's sitting on a rack shelf) and an ITX server (a Node 202, which I used as my workstation in the past; it fit perfectly).
You don't really want fiber optic unless you are connecting two different buildings that do not share a ground. Inside the same house, Ethernet works just fine. Fiber is finicky, most home contractors don't know how to deal with it, and transceivers are expensive, usually even more than 10G copper gear.
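One tip if you do go the printed-adapter route mentioned above: the only dimension that really has to be right is the hole spacing. A back-of-envelope sketch of the EIA-310 "universal" pattern (1U = 1.75", holes repeating 0.625"/0.625"/0.5"); measure your own rack before committing plastic, since hole diameter and square-vs-round holes vary:

    IN_TO_MM = 25.4
    U_HEIGHT_IN = 1.75

    def hole_centers_mm(units):
        """Hole-center heights (mm) from the bottom of a span of `units` U."""
        centers = []
        for u in range(units):
            base = u * U_HEIGHT_IN
            # Within each U, holes sit 0.25", 0.875", and 1.5" above its lower
            # edge: 0.625" between holes within a U, 0.5" across the U boundary.
            for offset in (0.25, 0.875, 1.5):
                centers.append(round((base + offset) * IN_TO_MM, 2))
        return centers

    print(hole_centers_mm(2))  # six hole heights for a 2U adapter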
In my personal computing experience, the main value of a rack is still density, as it is in the datacenter. Moving one or two ATX PCs, a UPS, and a switch into a half-height rack isn't that much of a win (compared to the floor and/or basic shelving). But doubling that into a full-height rack is.
You don't strictly need rackmount hardware either. I just have a pair of non-rack SUA1500s sitting next to one another on the solid bottom built into the rack (not even a movable shelf). Their battery type is much more common than the rackmount versions'.
Case-wise, I'd just recommend 4U all the way, for full-size cards, coolers, and fans. Smaller fans make more noise.
Also, just spend the money on cases with hot swap disk bays for any machine you plan to put more than a few drives in. You'll end up wanting them sooner or later, and it's nicer to run the SATA cabling once, given how cramped rack cases can get in places.
A nice compromise is to buy a tower UPS - the kind that is deeper but not as high - and place it on a cantilever shelf in the rack. Mine occupies 3U of rack space, and I can even place my NAS on the shelf in front of the UPS.
On the other hand, surplus rack mount servers are very cheap on eBay. You can certainly stick non-rack UPSes into a rack. I did this for about a year (two APCs sitting at the bottom of the rack) before I got a great deal on some surplus rackmount UPSes.
I have a music studio here in my home, for my own personal use. It's fairly common for high-end studio hardware such as signal processors and synthesisers to be made in rack-mountable format. At some point I figured I could save some desk space by building a studio PC in a 2U rack mounted case. This meant I could keep it directly together with the rack-mounted power conditioner, audio, and MIDI interfaces that it connects to. I figured this was hitting a studio ergonomics home run, as the heart of my studio now fits in 5U of rack space. As the years have gone on, I've found it less and less convenient to build my studio around a bulky 19" rack. These days I'm looking at ways to shrink this setup further. Less and less mid-range hardware is rack mounted these days too. I'm not about to get rid of my setup anytime soon, but I won't be building another one like it when it comes time to upgrade.
In the home studio world this kind of setup is much more common. I think a lot of people are attracted to the aesthetic of rack-mounted studio gear. I know I definitely am. I personally don't find this setup as ergonomic as desktop modules though. I think manufacturers are coming to the same conclusion.
I used to have a rackmounted PC, audio interface, personal server, etc., all in one rack. Over time things shrunk, and it made more sense to put it all in a well-ventilated spot in the garage with a 10 Gbps link to my office.
I could have used wifi, the bandwidth would have been fine, but where's the fun in that ;)
I have a coworker whose family's PCs are all rack mounted in the garage. He has HDMI and USB extensions through the house, so the only things on the desks are a monitor and a powered USB hub. Perfectly silent. It seems very comfy.
But that became kind of limiting for my preferences. For a while it wasn't really practical, unless you could accept a pretty low bandwidth stream.
But now fiber-optic display cables are readily available. My first purchase was a 100ft/30m 32Gbps DisplayPort cable, for $55! To be honest, I couldn't believe it worked; I assumed it was a scam.
It's indeed excessively long... too long! I can make it out my room & up to the roof & compute from there with less than half the cable. It's way harder to unroll & reroll than any cable I've ever had before, but also way longer. I have since gone back and bought shorter cables.
Thankfully active extension cables are pretty good & tend to just work. Newer models have an inline 1-port USB hub every 25ft/7.5m (it used to be every 5m), and then a wall-wart at the end to power the hubs (to be honest that feels like it should be unnecessary; a dc-dc booster mid-span & bus power should be fine, albeit leaving nearly no power available at the end).
Also beware: there's a 7-tier limit from root hub to device, so 5 hubs maximum × 7.5m, even if you only plug in a single thing (a wireless keyboard/mouse dongle).
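The back-of-envelope math on that limit, assuming USB 2.0's 7-tier rule and the hub-per-7.5m segments these cables use (a sketch, with the dongle plugged into the last inline hub):

    TIERS_MAX = 7              # USB spec: root hub ... device = 7 tiers total
    HUBS_MAX = TIERS_MAX - 2   # leaves room for 5 intermediate hubs
    SEGMENT_M = 7.5            # one inline hub per 7.5 m segment on these cables

    # Each segment terminates in its hub, and the device plugs into the last one.
    max_reach_m = HUBS_MAX * SEGMENT_M
    print(f"{HUBS_MAX} hubs max, about {max_reach_m} m of extension")  # 37.5 m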
> I have a coworker whose entire family PCs are rack mounted in the garage.
SunRays (https://en.wikipedia.org/wiki/Sun_Ray for those too young) were really nice. When they came out I realized: why have a noisy workstation under my desk when I could have a massively larger (far noisier) server in the rack and a totally silent SunRay on my desk? That was my setup for a long time.
No longer using the SunRay (sigh), but conceptually I'm not too far from it. I have a Mac mini on my desk, but I do very little compute on it; it's there basically just to drive the monitor. All the machines are on a rack in the garage.
I spent a lot of time playing in bands and doing the home recording thing 10-15 years ago. Had a bunch of rack mount gear and dreamt of building out a mobile recording rig using a DJ case[1] like my dad had (for DJing weddings).
Eventually found someone selling an old 4U (which is about as tall as a standard desktop is wide) ATX server case, locking front panel and all, on craigslist for cheap.
It was super heavy (as was all of the other audio equipment), but having all of my stuff in one case (and only needing to set up a display, keyboard, and mouse) was pretty great. Never had much issue with heat/noise, but wasn't overly concerned at the time either.
[1]: https://www.pssl.com/products/gator-console-rack-10-top-6-bo...
One interesting thing about going the rack route is that most of the hardware optimizes for considerations a data center/large corporation might have.
That can be nice in a lot of ways (hardware is fairly robust, layouts are usually friendly for quick access/maintenance, overall density of machines is pretty good).
But it means almost no one is paying attention to how much noise these things make, because for most of the target audience that's WAY down the list of things they care about.
I have a couple of 1U and 2U machines - not elderly, but not new - and they are LOUD AS FUCK. I used to run my server farm in my office when it was just spare personal computers (under the table I use for hobby projects), but I quickly ended up moving it down to the basement once I was running the rack machines.
They're just too loud for comfort on even a semi-regular basis.
There are a few manufacturers that make silent racks[1]. They're hard to come by, and are much more expensive than regular racks, but there's a small market for them.
Otherwise, manual soundproofing is an option, but then you have heat to deal with.
Most servers aren't very loud unless you're stressing the hardware, and fan settings can usually be tweaked (see the sketch below), or the fans replaced with silent alternatives.
Still, you wouldn't want this kind of equipment humming next to you in the same room, so it's best to dedicate a small server closet, room, or garage, if you have the space.
That said, a custom server build in a rack mounted enclosure is not the same as using enterprise servers. My gaming PC is in a 4U enclosure and is as quiet as any tower PC.
[1]: http://quiet-rack.com/ranges.php
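On the "fan settings can usually be tweaked" point, here is a sketch of driving it over IPMI. The raw bytes below are the ones commonly cited for older Dell PowerEdge iDRACs; other vendors use different OEM commands, and newer firmware may refuse them outright, so verify against your model (and watch temperatures) before trusting it:

    import subprocess

    IPMI = ["ipmitool"]  # remote: ["ipmitool", "-I", "lanplus", "-H", host, ...]

    def set_fan_percent(percent):
        duty = max(0, min(100, percent))
        # 0x30 0x30 0x01 0x00: take fans out of automatic control (Dell OEM)
        subprocess.run(IPMI + ["raw", "0x30", "0x30", "0x01", "0x00"], check=True)
        # 0x30 0x30 0x02 0xff <duty>: set all fans to <duty> percent
        subprocess.run(IPMI + ["raw", "0x30", "0x30", "0x02", "0xff",
                               f"{duty:#04x}"], check=True)

    set_fan_percent(20)  # quiet-ish; keep an eye on your temps afterwards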
Yes! I went down the rackmount rathole about 20 years ago, and for 10 years all my computing equipment was installed in a 44U cabinet.
The problem is that rack mount plus low noise equals very high cost, if it is even achievable. Eventually I could not stand the noise anymore, and got rid of the rack. So did every single one of my colleagues who tried rackmount.
As others have mentioned, rackmount equipment is also expensive, often ridiculously so. I am very happy to leave racks in the data center where they belong.
When craigslist was brand spanking new, I found a ruggedized quarter-height rack that looked like military surplus.
My second stroke of luck was that the hall closet in my apartment had a corner that would exactly fit this rack. The telephone wiring junction box was immediately above the rack, and all of the house wiring was straight Cat5 runs from there, so a few new outlet covers and some cable rejigging later, we had internet everywhere (but only 2 pairs where the phone plugged in), terminated at the rack, which was partly muffled by winter coats. You can't push a lot of BTUs this way, but you can do a few (rough numbers below).
My solution before this was to hide a tower PC behind the couch. For the right kind of couch and the right vent holes, you can eat an awful lot of ambient noise by using major furniture. There's lots of space out there for things like this.
I also tried at one point to build an air filter that would fit under a bed (less noise, no floor space, fewer dust bunnies).
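Rough numbers on that BTU point, a sketch only: 1 W is about 3.412 BTU/hr, and the usual sensible-heat rule of thumb is dT(F) = BTU/hr / (1.08 * CFM). The 250 W load and 30 CFM of door-crack leakage below are made-up illustrative figures:

    def watts_to_btu_hr(watts):
        return watts * 3.412

    def closet_delta_t_f(watts, airflow_cfm):
        # Standard sensible-heat rule of thumb: dT(F) = (BTU/hr) / (1.08 * CFM)
        return watts_to_btu_hr(watts) / (1.08 * airflow_cfm)

    print(f"{watts_to_btu_hr(250):.0f} BTU/hr")                # ~853 BTU/hr
    print(f"{closet_delta_t_f(250, 30):.1f} F above ambient")  # ~26 F: too hot

Which is why a closet works for a modem, a switch, and one low-power box, but not for a loaded rack.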
Get a 3U or 4U case and replace all of the fans that come with it with something designed for desktops. A full size tower is basically a 4U case on its side, so there is identical room for fans &c.
I have a 3U setup where the loudest thing in it is the GPU fan on the 1050Ti.
That's a nice looking case! Seems like it would be especially good for a router or networking box. In the course of my build I found out about PlinkUSA and Sliger, who both have a few decent 2U options for those of us who don't want screaming server fans.
I moved my desktop into a 4U case* about six years ago and never looked back. Then I wanted to build my own NAS and--oh look at that I already have one rackmount chassis... I guess I'll get another and some posts? I did and it was also a great call.
I went through this in the early 2000s, when a) I was exposed to rack equipment in CCNA class in high school/vo-tech and b) dot-com surplus started showing up really cheap on eBay.
Nowadays my desktops are either SFF or mid-tower CAD workstations. We've got some control computers racked with test equipment/tools but they're not really general purpose workstations.
I moved my workstation into a 4U rack mount case. Can highly recommend it. I built a custom rack mount cabinet that looks like a traditional arcade game cabinet, and moved the workstation, along with the rack mount UPS and several other bits of computer hardware, into the arcade cabinet. I fabricated some shallow drawers with over-extension, soft-close, self-locking slide-outs, and installed a separate computer for running the emulators. Lots of noise reduction padding, lots of air flow with large-diameter fans. Then I made another custom rack mount case, side-by-side, again with lots of noise reduction padding, for the Synology NASes, Mac mini, laser printer, cable modem, network switch, and various bits of networking hardware.
My gaming PC now lives in a rack in my basement, in a 4U case. There is a 150ft optical HDMI cable running to the third floor, and a USB-over-Cat6 extender returning control signals. I like how it is entirely silent to me, and it does not warm my room.
- Full-height PCI cards fit (though some GPUs have taller than full-height coolers; if the HS/F protrudes above the mounting bracket, you'll have to measure).
- You can use larger fans, which can help a lot with airflow and/or noise. Just throw away the fans that come with the case as anything designed for a rack assumes you'll be wearing hearing protection while working around it.
- More motherboard options. If you use a power supply designed for rack mount, standard ATX motherboards fit fine. If you want to use a desktop power supply, there are plenty of micro-ATX options.
I'm confused why you'd choose ITX and a chassis that requires half height cards. There are plenty of 2U chassis that accept ITX and allow full height cards rotated 90 degrees using a riser.
Recently I found myself with a mini-ITX motherboard and got a Ghost S1 case for it. The space saved and the convenience of such a small form factor are life changing. I'm never going back to ATX and big towers.
I have a Meshlicious. I can fit a full ATX PSU no problem (I could have more SSDs or 2.5" drives otherwise). Because it's a "sandwich", the GPU and CPU draw air from opposite sides of the case. It's all mesh, so airflow is great. Pretty small too. Not as small as your Ghost, but it can accommodate really large GPUs if need be. Even watercooling would be viable (I've seen some great custom loops; for the CPU, even a two-fan AIO would fit).
I do miss ATX, because ITX boards generally do not have any extra expansion slots. Sure, they come with WiFi, but if I decide to, say, add a 2.5G (or 10G) network card, video capture, or what have you, I'm out of luck.
I do not miss the case sizes. ATX cases are spacious, but usually TOO spacious. Because the mainboard is large, most cases don't seem to optimize for space at all, so you can fill the thing to the brim with all the expansions you can think of and there's still a lot of wasted volume. Were I to go back to ATX, it would be with an unorthodox case (open frame, inside a desk, etc.).
Interestingly, I've gone the opposite way for a couple of years: desk-side cases (full towers) instead of racked equipment. Part of that is that I prefer two-post racks to four-post.
There are so many more convenient alternatives to the traditional massive tower for consumer PCs today, yet they are all severely limited by the same issue: modern GPU form factors. I really wish the GPU makers would get a clue and come up with a more flexible standard. I suspect just making the heat sinks modular/swappable would mostly solve the problem.
> rack mount makes things cost significantly more.
Double-edged sword. Higher prices brand new, but vastly cheaper used if you chase companies' depreciation cycles.
No way, are you telling me that a $180 rackable shelf is a scam?
https://www.rackmountsolutions.net/fixed-writing-shelf/
I run Proxmox, OPNsense (virtualized), and other workloads on it. It's really good and runs almost completely silently because of the Noctua fans.
The build quality of the case is lower end though, with sharp edges!
The downside is that they are 4U/5U. But they are a fucktonne quieter than server hardware.
The difficult part is getting the HDMI/DP extenders to work reliably, with low enough latency, without spending $2k.
Having said all that, Jeff's approach is most likely cheaper, if more fiddly. It's certainly neater and more compact.
*Larger fans running slower for less noise
The only one I've used and liked goes out of stock regularly, though.
Though, I did it in a 2U optical cross box. I paid $30 for it, and replaced some panels with fans and grilles. Not 200 euros, holy moly!
My Ryzen 5800X3D, I think, has issues with C-states, which caused reboots when transitioning from low to high power states.
I thought it was the RTX 3080 and the 600W PSU. I upgraded to an SFX-L PSU, which means losing the rear 120mm fan.
The whole thing really started to cook itself until I disabled the C-states and switched back to the 600W unit.
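For anyone chasing the same problem: on Linux you can at least see which idle states the CPU is using, and turn individual ones off, via the cpuidle sysfs interface. A sketch, assuming a cpuidle-enabled kernel; writing to 'disable' needs root and resets on reboot, and the usual lasting fix is the BIOS "Global C-state Control" toggle or a processor.max_cstate= kernel argument:

    from pathlib import Path

    def idle_states(cpu=0):
        base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle")
        for state in sorted(base.glob("state*")):
            name = (state / "name").read_text().strip()
            disabled = (state / "disable").read_text().strip() == "1"
            yield state.name, name, disabled

    def disable_state(cpu, state):
        # e.g. disable_state(0, "state2") -- root only, resets on reboot
        Path(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle/{state}/disable").write_text("1")

    for idx, name, disabled in idle_states():
        print(idx, name, "(disabled)" if disabled else "")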
(I don't have room for a rack, but I do have a moderately tidy server closet with a UPS, and I remote into these machines with GPU-accelerated RDP.)