
Google's Chiller-Less Data Center

78 points | 1SockChuck | 16 years ago | datacenterknowledge.com

20 comments

[+] jonknee | 16 years ago
I love their strategy for when it's too hot: turn it all off and go home. Not a lot of companies have that amount of redundancy, so I don't see this becoming the "next big thing", but it's a nice low-tech hack.
[+] Periodic | 16 years ago
For companies with global data centers, it may prove more profitable to build more data centers around the world and shift workloads between them than to build one monolithic data center in the Midwest (for example).

Hardware costs have fallen enough that other costs are starting to dominate data center planning.
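
A minimal sketch of the workload-shifting Periodic describes, in Python; the site list, the temperatures, and using outside temperature as a proxy for cooling cost are all assumptions for illustration, not anything Google has published:

    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        outside_temp_c: float   # proxy for cooling cost
        spare_capacity: int     # schedulable server slots

    def pick_site(sites, needed_slots):
        """Route a batch workload to the coolest site that can hold it."""
        candidates = [s for s in sites if s.spare_capacity >= needed_slots]
        return min(candidates, key=lambda s: s.outside_temp_c, default=None)

    sites = [
        DataCenter("belgium", 18.0, 400),
        DataCenter("oregon", 24.0, 900),
        DataCenter("singapore", 31.0, 1200),
    ]
    print(pick_site(sites, 500).name)  # -> "oregon"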

[+] Periodic | 16 years ago
It seems like a lot of these "rules" for systems were created decades ago on different hardware. Not only has the hardware design changed since then, but so have the economics.

With a single big-iron mainframe it probably made sense to spend a lot of money on cooling, because any failure was very expensive. With many commodity servers it may be cheaper to let them run hotter and replace whatever fails; an Intel study of free-air cooling in New Mexico found no significant difference in failure rate.

I'm reminded of hot spares in RAID configurations. With today's hard drive sizes it can take so long to rebuild onto a hot spare that you're better off just increasing the RAID level and keeping every drive online.
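
Back-of-envelope arithmetic behind the hot-spare point; the drive size and throughput are illustrative 2009-era assumptions:

    drive_bytes = 2 * 10**12          # 2 TB drive
    rebuild_mb_s = 100                # optimistic sequential throughput
    hours = drive_bytes / (rebuild_mb_s * 10**6) / 3600
    print(f"best-case rebuild: {hours:.1f} h")  # ~5.6 h; far longer under real load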

[+] blasdel | 16 years ago
Really, you're better off not using any block-level mirroring at all! At scale it makes much more sense to store each chunk of data (2^24 to 2^27 bytes) on at least 3 independent servers in each facility that holds a copy of the dataset (see GFS).
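
A minimal sketch of GFS-style chunked replication, assuming 64 MB (2^26-byte) chunks and 3 replicas; the hash-based placement is an illustration, not how GFS's master actually assigns chunkservers:

    import hashlib

    CHUNK_SIZE = 2**26   # 64 MB, within the 2^24..2^27 range blasdel cites
    REPLICAS = 3
    SERVERS = [f"chunkserver-{i:02d}" for i in range(12)]

    def place_chunks(file_size: int, file_id: str):
        """Yield (chunk_index, [servers]) for every chunk of the file."""
        n_chunks = -(-file_size // CHUNK_SIZE)  # ceiling division
        for i in range(n_chunks):
            h = int(hashlib.md5(f"{file_id}:{i}".encode()).hexdigest(), 16)
            start = h % len(SERVERS)
            yield i, [SERVERS[(start + r) % len(SERVERS)] for r in range(REPLICAS)]

    for idx, servers in place_chunks(200 * 2**20, "example-file"):
        print(idx, servers)   # a 200 MB file -> 4 chunks, 3 servers each
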
[+] seldo | 16 years ago
I love the "follow the moon" idea; I picture the globe spinning and sparks of data processing jumping from node to node to stay out of the sun.

It's pretty. Uh, in my head.

[+] pbz | 16 years ago
Except that the latency for those following the sun would suck pretty much all the time.
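
Quick arithmetic behind pbz's point: light in fiber travels at roughly two-thirds of c, so a request served from the night side of the globe pays a physics-imposed round trip. The distances are illustrative:

    C_FIBER_KM_S = 200_000                     # ~2/3 of c in glass fiber
    for dist_km in (1_000, 10_000, 20_000):    # local, cross-ocean, antipodal
        rtt_ms = 2 * dist_km / C_FIBER_KM_S * 1000
        print(f"{dist_km:>6} km -> {rtt_ms:.0f} ms RTT minimum")
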
[+] donaldc | 16 years ago
Interesting article. I wish that it had given an estimate for the magnitude of power savings, however.
[+] datums | 16 years ago
Very interesting. Some DCs measure the cost of running at a higher temperature, weighing the cost of replacement hardware against the strain that downtime places on customer relationships. With a global presence you can do smart DC distribution: based on the traffic each region receives, you can spin down servers at night or during hotter days. I wonder if they reuse the heat (energy) produced.
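
A minimal sketch of the night-time spin-down datums suggests, assuming the active fleet can be sized to regional traffic; the per-server throughput, headroom factor, and diurnal curve are made up for illustration:

    def servers_needed(requests_per_s: float, per_server_rps: float = 500.0,
                       headroom: float = 1.3) -> int:
        """Size the active fleet to current regional traffic plus headroom."""
        return max(1, int(requests_per_s * headroom / per_server_rps) + 1)

    # A crude diurnal traffic curve: nights run at ~20% of the daily peak.
    peak_rps = 100_000
    for hour, load in [(3, 0.2), (12, 1.0), (20, 0.7)]:
        print(f"{hour:02d}:00  {servers_needed(peak_rps * load)} servers active")
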
[+] kimovski | 16 years ago
This actually makes sense; Google's new policy is to work on problems at a scale that no one else can. Too hot in the data center? Ah, just turn it off and redirect the traffic! :)
[+] notaddicted | 16 years ago
Are data centers going to be the junkyards of the information age?
[+] ZachS | 16 years ago
They already are; look at all the worthless Twitter, blog, and forum posts that Google has stored in its data centers.
[+] yannis | 16 years ago
'Free cooling' works well in colder climates such as Belgium's. A larger fan on the server might be a cheaper solution, and directing air into the racks rather than the room can be more effective. Since evening air is normally cooler, it can also be exploited by adding more thermal 'mass' to the room. Has anyone thought of putting these data centers on mountains?
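
Rough numbers behind the mountain idea: ambient temperature falls by about 6.5 C per km of altitude (the standard atmospheric lapse rate), though thinner air also carries less heat per fan. The sea-level baseline is illustrative:

    LAPSE_C_PER_KM = 6.5
    sea_level_c = 30.0
    for alt_km in (0.5, 1.5, 3.0):
        print(f"{alt_km} km: ~{sea_level_c - LAPSE_C_PER_KM * alt_km:.1f} C ambient")
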
[+] fuzzythinker | 16 years ago
I'd like them to go one step further and use the free heated water for warm-water needs.
[+] mhb | 16 years ago
Wouldn't it be better if the boxes were painted white or made reflective?
[+] hs | 16 years ago
The parts that most often fail are hard drives and fans, so eliminate those. Maybe an embedded PC (think router) with a huge SSD + RAM is stable at ambient room temperature and almost never breaks (the AC power adapter will normally break first).

If Google can subsidize internet access with its routers (with huge SSD + RAM), it basically outsources the power and maintenance to local people around the globe.

If such a router can answer queries from its huge SSD + RAM cache (think Squid + LRU?), then search can be faster.
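
A minimal sketch of the Squid-style LRU edge cache hs imagines, assuming the "router" keeps hot query results in RAM/SSD and falls back to the origin on a miss; fetch_from_origin is a hypothetical stand-in for the real search backend:

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.store = OrderedDict()

        def get(self, query: str):
            if query in self.store:
                self.store.move_to_end(query)   # mark as most recently used
                return self.store[query]
            result = fetch_from_origin(query)   # slow path: the data center
            self.store[query] = result
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            return result

    def fetch_from_origin(query: str) -> str:
        return f"results for {query!r}"         # hypothetical origin fetch

    cache = LRUCache(capacity=2)
    cache.get("hacker news"); cache.get("chillers"); cache.get("hacker news")
    cache.get("gfs")                            # evicts "chillers"
    print(list(cache.store))                    # ['hacker news', 'gfs']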