I love their strategy for when it's too hot: turn it all off and go home. Not many companies have that amount of redundancy, so I don't see this being the "next big thing", but it's a nice low-tech "hack".
For companies with global data centers it may prove more profitable to build more data centers around the world and shift workloads between them than to build one monolithic data center in the Midwest (for example).
Costs of hardware have fallen enough that other costs are starting to dominate data center planning.
It seems like a lot of these "rules" for systems were created decades ago with different hardware. Not only has the hardware design changed since then, but the economics have changed as well.
With a single big-iron mainframe it probably made sense to spend a lot of money on cooling because any failure was very expensive. With many commodity servers it may be cheaper to let them run hotter and replace any failures, though an Intel study with free-air cooling in Mexico found no significant difference in failure rate.
I'm reminded of hot spares in RAID configurations. With today's hard drive sizes it can take so long to rebuild onto a hot spare that you're better off just increasing the RAID level and keeping all drives online.
Really, you're better off not using any block-level mirroring at all! At scale it makes much more sense to store each chunk of data (somewhere between 2^24 and 2^27 bytes) on at least 3 independent servers in each facility that has a copy of the dataset (see GFS).
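The placement idea above can be sketched with rendezvous (highest-random-weight) hashing; this is just one simple deterministic placement strategy, not GFS's actual algorithm, which also weighs rack diversity and disk utilization:

```python
import hashlib

def pick_replicas(chunk_id, servers, n=3):
    """Pick n distinct servers for a chunk by hashing (chunk_id, server)
    and taking the servers with the smallest hashes. The mapping is
    deterministic, so any node can recompute where a chunk lives."""
    scored = sorted(
        servers,
        key=lambda s: hashlib.sha256(f"{chunk_id}:{s}".encode()).hexdigest(),
    )
    return scored[:n]

servers = [f"server-{i}" for i in range(10)]
replicas = pick_replicas("chunk-0042", servers)
# the same chunk always maps to the same 3 independent servers
```

Losing any one server then costs only re-replication of its chunks from the surviving copies, rather than a full RAID rebuild.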
Very interesting. Some DCs measure the cost of running at a higher temperature, weighing the cost of purchasing replacement hardware against the strain downtime places on customer relationships. With a global presence you can do smart DC distribution: based on the traffic received from each region, you can spin down servers at night or during hotter days. I wonder if they reuse the heat (energy) produced.
This actually makes sense, Google's new policy is to work on problems on a scale that no one else can. Too hot in the data center? Ah, just turn it off and redirect the traffic! :)
'Free cooling' works well in colder climates such as Belgium's. A larger fan on the server might be a cheaper solution, and directing air into the racks rather than the room can also be more effective. Since air is normally cooler in the evenings, that can be exploited too by adding more thermal 'mass' to the room. Has anyone thought of putting these datacenters on mountains?
The parts that most often fail are hard drives and fans, so eliminate those. Maybe an embedded PC (think router) with a huge SSD+RAM is stable at ambient room temperature and almost never breaks (the AC power adapter will normally break first).
If Google can subsidize internet access with such routers (with huge SSD+RAM), it basically outsources the power and maintenance to local people around the globe.
If such a router can serve queries from its huge SSD+RAM cache (think Squid + LRU?), then search can be faster.
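The caching idea above boils down to an LRU cache, the eviction policy Squid-style caches commonly use. A minimal sketch (the capacity and the query/result strings are illustrative assumptions, not anything Google has described):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: recently served queries stay
    hot in SSD+RAM, the stalest entry is evicted when space runs out."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss: would fall through to the data center
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("query:weather", "result A")
cache.put("query:news", "result B")
cache.get("query:weather")             # touch: "weather" is now most recent
cache.put("query:sports", "result C")  # evicts "news", the LRU entry
```

A hit is served locally at the edge; only misses cross the network, which is where the latency win would come from.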
It's pretty. Uh, in my head.