
160 Mac Minis, One Rack

214 points | barredo | 13 years ago | hackaday.com | reply

117 comments

[+] jurre|13 years ago|reply
Before the thread becomes cluttered with people suggesting alternatives or questioning why you wouldn't just run <insert manufacturer, OS, etc>, the person that did this replied the following:

simbimbo says: December 9, 2012 at 11:03 am Thanks for the great write up Hack A Day. I would like to answer some of the questions posted. @Geebles these machines all run SSD’s and I ordered them with AppleCare, so I hope to never have to change a drive ;-)

As for the reason I built this.. Well, I guess I just like a challenge ;-), but seriously, the company I work for has a need for large numbers of machines to build and test the software we make.

There were plenty of discussions of virtual environments and other “Bare Motherboard”/Google-datacenter-type solutions, but the fact is, the Apple EULA requires that Mac OS X run on Apple hardware, and since we are a software company we adhere to these rules without exception. These machines all run OS X in a NetBooted environment. We require Mac OS X because the products we make support Windows, Linux and Mac, so we have data centers with thousands of machines configured with all 3 OS’s, running constant build and test operations 24 hours a day, 365 days a year.

As for device failure, we treat these machines like pixels in a very large display, if a few fail, it’s ok, the management software disables them until we can switch them out. This approach allows us to continue our operations regardless of machine failures.
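The "pixels in a very large display" policy could be sketched roughly like this (a hypothetical illustration; the actual management software isn't described, and the class name and thresholds are invented):

```python
# Hypothetical sketch of the "treat failed machines like dead pixels"
# approach: a pool manager that disables unhealthy nodes instead of
# halting the whole farm. Names and thresholds are illustrative.

class NodePool:
    def __init__(self, hostnames):
        # All nodes start enabled; failures just shrink the pool.
        self.enabled = {h: True for h in hostnames}
        self.failures = {h: 0 for h in hostnames}

    def report_failure(self, host, max_failures=3):
        # After repeated failures, pull the node out of rotation
        # until someone can swap it out.
        self.failures[host] += 1
        if self.failures[host] >= max_failures:
            self.enabled[host] = False

    def available(self):
        # Machines still in rotation for build/test jobs.
        return [h for h, ok in self.enabled.items() if ok]

pool = NodePool([f"mini-{i:03d}" for i in range(160)])
for _ in range(3):
    pool.report_failure("mini-042")
print(len(pool.available()))  # 159 machines still in rotation
```

Operations continue against whatever subset of the pool is healthy, which is the whole point of the approach.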

@bitbass I tried the vertical approach, but manufacturing the required plenum to keep the air clean to the rear machines cost too much for this project, but it’s not off the table for the next rack

@Kris Lee When I open the door I can literally watch the machine temps go up, but I can keep it open for 15-20 minutes before the core temps reach 180F

@Adam Ahhh.. Nope, you can’t have my job ;-)

[+] shardling|13 years ago|reply
I believe Mozilla has large racks of Mac Minis for testing Firefox on. (Same issue, I believe -- to support multiple platforms on the same hardware, you need something that's legal to run OSX on.)

e: Ah, according to [this article](http://www.wired.com/wiredenterprise/2012/05/mozillas-new-da...) they have 500 Minis in their data center.

[+] hapless|13 years ago|reply
Someone didn't finish his homework. The legal, supported method of virtualizing OSX is:

Guest: Leopard Server

Host: ESX 5.1

Hardware: Mac Pro

Mac Pro is supported on the VMware HCL. Leopard Server is legally virtualizable on Mac hardware. VMware supports OSX Leopard as a guest on ESX 5.1

Mac Pro towers are going to be less dense, but given his cooling situation, lower density is probably a win. What datacenter wants 8 kW of laptop CPUs stuffed into a rack? Virtualization would also overcome the lack of redundant PSUs.

[+] monochromatic|13 years ago|reply
> these machines all run SSD’s . . . so I hope to never have to change a drive

lol

[+] patangay|13 years ago|reply
We did something similar at Facebook for iOS and OSX automated testing, with a few of the machines doing iOS app builds.

Here is a post that Jay Parikh (VP of Infrastructure) made about it. http://tinyurl.com/cnvss4v

Our density isn't as high (we have 64 minis) because of cooling and cabling that we designed according to our datacenter cooling standards.

@jurre - If you want to chat about our design, message me and I can put you in touch with our hardware designer.

[+] jurre|13 years ago|reply
I just copied that reply from the person that built the actual rack, but I would love to hear more about your iOS testing infrastructure/process!
[+] i386|13 years ago|reply
Would love to hear more about your automated testing!
[+] patangay|13 years ago|reply
For those who showed interest in learning more about our infrastructure/automated testing process, if you could drop me an email? gp at our corp domain fb.com.

I don't work on the team anymore, but I can probably start off a thread with the right people involved from Facebook's side.

[+] simbimbo|13 years ago|reply
I would like to chat about it.
[+] alanctgardner2|13 years ago|reply
It's very cool, but is anyone actually tied to OS X as a server platform? Couldn't they move to FreeBSD and save a ton of money in an application like this? I'm wondering if there's a real business case for this, or it's just a fun hack.

edit: I guess lumped into this is the small market that seems to exist for colocated Mac Minis. Is there something about them that is better than renting commodity x64 hardware?

[+] hugs|13 years ago|reply
> anyone actually tied to OS X as a server platform?

Yes, if you make software that runs on OS X (or iOS and you want to test on iOS Simulators) you need OS X machines for your build and test process. You need lots of machines so you and your fellow developers can speed things up and run tests in parallel.

[+] mminer|13 years ago|reply
It's a niche use case, but Apple's Qmaster software facilitates distributed video rendering and exporting for Final Cut Pro and some of their other pro applications. A server cluster like this would probably be overkill, but when you're working with massive video files, extra (OS X) machines to do the heavy lifting make a huge difference.
[+] chrismealy|13 years ago|reply
In comments the builder said it's for testing.
[+] ahi|13 years ago|reply
One of my old employers has legacy systems built with WebObjects. In theory, WebObjects is Java and should run on anything, but typically it is on OS X Server. I left just as one of my colleagues was trying to figure out what to do about the death of the XServe.
[+] taligent|13 years ago|reply
I am currently using a cluster of Mac Minis as a server platform. Some benefits:

* Launchd is a massive improvement over the equivalent mess on Linux. This can't be overstated if you are managing your own hardware.

* You can develop on the same machine you are deploying to.

* You have exactly the same toolchain as on Linux.

* Lots of remote monitoring options that are unique to OSX, e.g. OSX Server.

* The OS is stable and upgrades are safe enough to enable auto update. I could never do that on CentOS.

But really it comes down to hardware and resale value for me. 2 Mac Minis in 1RU is great value.
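For anyone curious what launchd management looks like, a minimal job definition is just a property list; this is an illustrative sketch (the label and program path are invented):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical job label and program path -->
    <key>Label</key>
    <string>com.example.buildworker</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/buildworker</string>
    </array>
    <!-- Restart the process if it exits: launchd's equivalent
         of a supervised service -->
    <key>KeepAlive</key>
    <true/>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

Dropped into /Library/LaunchDaemons and loaded with launchctl, that's the whole supervision story, which is what the "massive improvement" claim is about.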

[+] rdtsc|13 years ago|reply
Mac Minis are horrible server hardware. We've had a couple running as servers. They fail randomly. Their hard drives fail. They don't rack mount easily. The only reason to have them is if you inherit some old ones, don't want to throw them away, and then don't mind replacing and throwing failed units away pretty often.
[+] simbimbo|13 years ago|reply
I agree, Mac Minis are not good servers, but these are simply for testing software.
[+] sliverstorm|13 years ago|reply
Well, they do provide good machine density when you rack them like he did.
[+] meaty|13 years ago|reply
I'm actually surprised any DC would take that equipment. They, in my experience at least, are very fussy about what you put in the racks, power draw, etc.

Oh, and we get 640 cores in 20U (8 × 4-core Xeons per machine, each 1U), and that leaves enough room for a 32TB SAN, FC switches and a pair of redundant LAN switches.

Regarding splitting the power using the hack described: 160 melted minis and a halon cloud coming up.

Looks pretty though.

[+] jws|13 years ago|reply
> Regarding splitting the power using the hack described, 160 melted minis and a halon cloud coming up.

You should have a talk with your power cord provider. You should be using cables that can handle at least 4 amps in anything with a 110v plug on it. I don't think you can buy one smaller than 18ga and those are good for 10 amps. Remember, you have to handle enough current to blow the breaker if something goes wrong (unless you are British and have your own fuse in the plug).

[+] alexkus|13 years ago|reply
It must be their own DC.

Most rented rackspace (in UK DCs at least) tends to be a maximum of 16A (at 240V) per 42U cabinet, so just under 4kW. By my estimation those Mac Minis will be drawing ~13kW at peak.
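That estimate is easy to sanity-check. A rough sketch, assuming ~85 W peak draw per 2012-era Mac Mini (an illustrative figure, not a measured one):

```python
# Rough power budget for the rack, using an assumed ~85 W peak
# draw per 2012 Mac Mini (illustrative, not measured).
minis = 160
watts_per_mini = 85

peak_watts = minis * watts_per_mini   # total peak draw
peak_amps_240v = peak_watts / 240     # current at 240 V

print(f"peak draw: {peak_watts / 1000:.1f} kW")     # 13.6 kW
print(f"current at 240 V: {peak_amps_240v:.0f} A")  # 57 A
```

Either way it lands far above a typical 16A/240V cabinet feed, consistent with the ~13kW estimate above.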

[+] gonzo|13 years ago|reply
It was when he started talking about using solder on a 220 VAC connection that I lost faith in him knowing how to do it right.
[+] simbimbo|13 years ago|reply
Sorry about the wording of my sentence. It was supposed to imply that I built a prototype cable at home to show to the cable vendors who would have to build them. Also, there was plenty of discussion about the potential of these cables to be misused, the vendor manufactured the cables to handle 15A. I have corrected my Wordpress page to reflect this discussion.
[+] wlesieutre|13 years ago|reply
Out of curiosity, what's the problem with that? I assume he'd have the copper twisted together to make a good connection, with the solder just holding it in place instead of acting as the conductor.
[+] madao|13 years ago|reply
Considering you only really have to pay around the 60 dollar mark for the OS now, I don't think it's much of a big deal. I use one of these at home as a mini fileserver/wiki; it draws sweet FA, makes little to no noise, and has an HDMI connector direct into my TV. I would happily deploy one for our company marketing team or small-scale offices.
[+] dfc|13 years ago|reply
"Draws sweet FA"?
[+] w3pm|13 years ago|reply
I understand the idea of treating them like pixels, so if a fan dies or a NIC card dies, no problem, just stop using that Mini. But what about memory corruption or other issues that are more difficult to detect? Normally server hardware has things like ECC memory to prevent these issues, but in this case a Mini with bad RAM could intermittently corrupt data for some time before it's noticed (if ever).
[+] lallysingh|13 years ago|reply
The machines are for testing. They'll detect those through secondary means. If a machine is faulty, it produces two cases: (1) faulty software registers as faulty; (2) good software registers as faulty. The third case (faulty software marked as good) is really unlikely, and any time it does happen, a later bug report will give a hint.

A test failure will probably bring in an engineer who will track down the issue, and a re-test will inevitably occur. The faulty machine will eventually (hopefully) get labeled flaky and get repaired.

Of course, it may be that nobody cares, and a double-test is simply used to verify that an executable is good.
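The double-test idea could be sketched as dispatching each test to two different machines and trusting only an agreeing verdict (hypothetical logic; `run_test` here is a stand-in for the real dispatcher):

```python
import random

# Hypothetical sketch of the "double-test" idea: run each test on
# two distinct machines and only trust a verdict both agree on.
# Disagreement implicates a flaky machine rather than the software.

def run_test(test, machine):
    # Stand-in for dispatching a test to a machine; here we
    # simulate one flaky machine that randomly reports failures.
    if machine == "mini-042":          # pretend this one is flaky
        return random.random() > 0.5
    return True                        # healthy machines pass

def double_test(test, machines):
    a, b = random.sample(machines, 2)  # two distinct machines
    ra, rb = run_test(test, a), run_test(test, b)
    if ra == rb:
        return ("pass" if ra else "fail", None)
    # Disagreement: flag both machines for inspection, retest later.
    return ("inconclusive", (a, b))
```

With only healthy machines in the pool, both runs agree and the verdict stands; a flaky machine shows up as "inconclusive" results clustered around it.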

[+] georgebarnett|13 years ago|reply
Interestingly, it looks like the front fans blow _into_ the rack. This means that if the door isn't securely closed it'll blow open, being on hinges and having massive fans attached.

It would be better to have the fans on the back and pull air through the rack instead.

That said, DC floor space is cheap compared to power and cooling. I'm surprised they didn't lower the density so as not to have a massive fire risk.

[+] simbimbo|13 years ago|reply
I looked into some rack cooling options, but was unable to find a solution that would provide the amount and coverage of airflow I needed to move air slowly and uniformly through the rack for effective cooling. (I was designing this to be used with the active-cooling rear doors, so I couldn't overwhelm the door with too much air, or it wouldn't cool the air effectively and would raise the ambient temp of the room.) So the fans move a high volume of air through the entire cabinet (including the corners) at low velocity, resulting in very effective cooling of each of the 40 shelves.

The fans are large so they move a high volume of air at a low speed; the door doesn't move even when left unlatched.

Also, can you please explain your "Massive Fire risk" comment? All of the hardware installed in this rack is UL certified and all of the machines will simply shut down if they get too hot.

[+] frozenport|13 years ago|reply
Why not use 2 racks?

This would solve his thermal dissipation problem and probably be easier when compared to getting custom hardware.

[+] simbimbo|13 years ago|reply
Rack "footprints" at the datacenter are expensive, and you pay for 15kW per footprint whether you use it or not. It just made sense to fit it all into one. Having had this rack running at full power for a couple of weeks now, I can say the temps stay lower than our SuperMicro racks in the same row! And the SuperMicro racks can only hold 20 machines before they run out of power in the footprint.
[+] liquidise|13 years ago|reply
I would guess floor real estate constraints. Why do in 2 racks what you can do cost-effectively in 1?
[+] nsxwolf|13 years ago|reply
Curious - what do we call a computer like this? It's obviously not going to make the TOP500, but is it a "supercomputer"? I thought perhaps "minisupercomputer" might be fitting, but according to Wikipedia that is a term for a class of computers that became obsolete in the early 90s.
[+] tzaman|13 years ago|reply
I've been a proud owner of a Mini Server (slightly customised: replaced the memory and the primary disk with an SSD) for over a year. I use it as my main workstation and I love it; so small (and relatively cheap, including the upgrade), yet so powerful.
[+] datums|13 years ago|reply
Definitely a fun challenge. If you're going to invest in the hardware and a custom build, forget the Y cable and figure out a better solution. Rent a half rack next to it to hold the PDUs. +1 on the massive door fans.
[+] dreadsword|13 years ago|reply
Seems like an expensive way to do data center stuff. Why not create a rack full of alienware laptops or something?
[+] ghshephard|13 years ago|reply
The company requires a legitimate test bed for their OS X testing. It wasn't designed for generic data center work.
[+] taligent|13 years ago|reply
Actually it's far cheaper than even an equivalent Supermicro solution, let alone HP/Dell etc. You are getting at minimum 2 Mac Minis in 1RU, which as of today could be an 8-core Core i7 / dual SSDs / 16GB RAM.

Plus if you want to upgrade them then you can put them on eBay and get 75% of the original cost back. Try doing that with a server.

[+] lallysingh|13 years ago|reply
I'm surprised both that the high density worked for consumer devices, and that the rack wasn't prettier.