liquidgecka | 6 months ago

[bri3d pointed out that I missed an element of this: there is a heat exchange between the rack-level coolant loop and the facility-level water, which makes this far less novel than I had initially understood. See their reply below.]

I posted this further down in a reply-to-a-reply, but I should call it out a little closer to the top: The innovation here is not “we are using water for cooling”. The innovation here is that they are direct cooling the servers with chillers that are outside of the facility. Most mainframes will use water cooling to get the heat from the core out to the edges, where it can be picked up by traditional heatsinks and cooling fans. Even home PCs do this by moving the heat to a reservoir that can be more effectively cooled.

What Google is doing is using the huge chillers that would normally be cooling the air in the facility to cool water which is directly pumped into every server. The return water is then cooled in the chiller tower. This eliminates ANY air based transfer besides the chiller tower. This isn't being done on a server or a rack... it's being done on the whole data center all at once.
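
For a rough sense of the plumbing this implies, here is a back-of-envelope flow calculation. It's only a sketch: the per-server heat load and allowed coolant temperature rise are my own guesses, not Google's numbers.

    # Coolant flow needed to carry one server's heat away entirely in liquid.
    # All inputs are assumed values for illustration only.
    SERVER_POWER_W = 2000.0   # assumed heat load per server (W)
    DELTA_T_K = 10.0          # assumed coolant temperature rise (K)
    CP_WATER = 4186.0         # specific heat of water, J/(kg*K)
    RHO_WATER = 997.0         # density of water, kg/m^3

    # Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
    m_dot = SERVER_POWER_W / (CP_WATER * DELTA_T_K)       # kg/s
    liters_per_min = m_dot / RHO_WATER * 1000.0 * 60.0    # L/min
    print(f"{m_dot:.3f} kg/s, about {liters_per_min:.1f} L/min per server")
    # ~0.048 kg/s, roughly 3 L/min per 2 kW server with these numbers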

I am super curious how they handle things like chiller maintenance or pump failures. I am sure they have redundancy but the system for that has to be super impressive because it can’t be offline long before you experience hardware failure!
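
To put a number on the "can't be offline long" part, here is a toy estimate of how long a rack could ride out a total loss of flow using only the water already sitting in its local loop. Every value below is an assumption, not anything from the article.

    CP_WATER = 4186.0          # J/(kg*K)
    rack_heat_w = 80_000.0     # assumed liquid-cooled load per rack (W)
    local_water_kg = 20.0      # assumed water resident in the rack's loop (kg)
    allowed_rise_k = 15.0      # assumed headroom before throttling/shutdown (K)

    # Energy the stagnant water can absorb before hitting the limit,
    # divided by the rate at which heat keeps arriving.
    seconds = local_water_kg * CP_WATER * allowed_rise_k / rack_heat_w
    print(f"~{seconds:.0f} s of headroom")   # ~16 s with these numbers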

[Edit: It was pointed out in another comment that AWS is doing this as well and honestly their pictures make it way clearer what is happening: https://www.aboutamazon.com/news/aws/aws-liquid-cooling-data...]

bri3d|6 months ago

I don't think this comment is accurate based on the article, although you cite personal experience elsewhere so maybe your project wasn't the one that's documented here?

> What Google is doing is using the huge chillers that would normally be cooling the air in the facility to cool water which is directly pumped into every server.

From the article:

> CDUs exchange heat between coolant liquid and the facility-level water supply.

Also, I know from attaching them at some point that plenty of mainframes used this exact same approach (water to water exchange with facility water), not water to air to water like you describe in this comment and others, so I think you may have just not had experience there? https://www.electronics-cooling.com/2005/08/liquid-cooling-i... contains a diagram in Figure 1 of this exact CDU architecture, which it claims was in use in mainframes dating back to 1965 (!).

I also don't think "This eliminates ANY air based transfer besides the chiller tower." is strictly true; looking at the photo of the sled in the article, there are fans. The TPUs are cooled by the liquid loop but the ancillaries are still air cooled. This is typical for water cooling systems in my experience; while I wouldn't be surprised to be wrong (it sure would be more efficient, I'd think!), I've never seen a water cooling system which successfully works without forced air, because there are just too many ancillary components of varying shapes to successfully design a PCB-waterblock combination which does not also demand forced air cooling.
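
For what it's worth, the CDU arrangement the article describes is easy to reason about as two coupled water loops. A toy steady-state balance (every number below is invented for illustration) looks like:

    CP = 4186.0                   # J/(kg*K), water

    rack_heat_w = 80_000.0        # assumed liquid-cooled heat per rack (W)
    rack_flow_kg_s = 2.0          # assumed closed rack-loop flow (kg/s)
    facility_flow_kg_s = 3.0      # assumed facility water flow through the CDU (kg/s)
    facility_supply_c = 25.0      # assumed facility supply temperature (C)

    # In steady state the same heat crosses the plate exchanger, so each
    # loop's temperature rise is just Q / (m_dot * c_p); no air step involved.
    rack_delta_t_k = rack_heat_w / (rack_flow_kg_s * CP)
    facility_return_c = facility_supply_c + rack_heat_w / (facility_flow_kg_s * CP)
    print(f"rack loop dT {rack_delta_t_k:.1f} K, facility return {facility_return_c:.1f} C")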

liquidgecka|6 months ago

> > CDUs exchange heat between coolant liquid and the facility-level water supply.

Oh interesting, I missed that when I went through on the first pass. (I think I space-barred past the image and managed to skip the entire paragraph in between the two images, so that's on me.)

I was running off an informal discussion I had with a hardware ops person several years ago where he mentioned a push to unify cooling and eliminate thermal transfer points since they were one of the major elements of inefficiency in modern cooling solutions. By missing that as I browsed through it I think I leaned too heavily on my assumptions without realizing it!

Also, not all chips can be liquid cooled, so there will always be an element of air cooling; the fans and such are still there for the "everything else" cases, and I doubt anybody will really eliminate that entirely. The comment you quoted was mostly directed towards the idea that the Cray-1 had liquid cooling. It did, but it transferred the heat to air outside of the server, which was an extremely common model for most older mainframe setups. It was rare for the heat to stay in liquid along the whole path.

matt-p|6 months ago

It's interesting because I've never seen mainframes do water to water (though I'm sure that was possible).

The only ones I've ever seen did water to compressor (then gas to the outdoor condenser, obviously).

nitwit005|6 months ago

This was before I was born, so I'm hardly an expert, but I've heard of feeding IBM mainframes chilled water. A quick check of wikipedia found some mention of the idea: https://en.wikipedia.org/wiki/IBM_3090

jauntywundrkind|6 months ago

Having to pre-chill water (via a refrigeration cycle) is radically less efficient than simply collecting heat and then dispersing it; delivering chilled water means spending considerably more energy up front. Gathering the heat and sending it out, dealing with it after it is produced rather than in advance, should be much more energy efficient.
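
A crude way to see the gap between mechanical chilling and warm-water heat rejection (the COP and overhead fraction below are typical-order guesses, nothing measured):

    it_heat_mw = 1.0                # heat to reject (MW)

    chiller_cop = 4.0               # assumed chiller coefficient of performance
    chiller_overhead_mw = it_heat_mw / chiller_cop

    free_cooling_fraction = 0.03    # assumed pump + tower fan power, ~3% of heat
    free_cooling_overhead_mw = it_heat_mw * free_cooling_fraction

    print(f"chiller:      {chiller_overhead_mw:.2f} MW extra per MW of IT heat")
    print(f"free cooling: {free_cooling_overhead_mw:.2f} MW extra per MW of IT heat")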

I don't know what surprises me about it so much, but having these rack-sized CDU heat-exchangers was quite a surprise, quite novel to me. Having a relatively small closed loop versus one big loop that has to go outside seems like a very big tradeoff, with a somewhat material and space intensive demand (a rack with 6x CDUs), but the fine grained control does seem obviously sweet to have. I wish there were a little more justification for the use of heat exchangers!

The way water is distributed within the server is also pretty amazing, with each server having its own "bus bar" of water, and each chip having its own active electro-mechanical valve to control its specific water flow. The TPUv3 design where cooling happens serially, each chip in sequence getting hotter and hotter water, seems common-ish, whereas with TPUv4 there's a fully parallel and controllable design.
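
The difference between the serial and parallel arrangements is easy to see with made-up numbers (flow, chip power, and supply temperature below are all invented, not from the article):

    CP = 4186.0               # J/(kg*K)
    supply_c = 30.0           # assumed coolant supply temperature (C)
    chip_power_w = 400.0      # assumed heat per chip (W)
    chips = 4
    loop_flow_kg_s = 0.05     # assumed flow through a single cooling loop (kg/s)

    # Serial (TPUv3-style): each chip heats the water the next chip receives.
    inlet_c = supply_c
    serial_inlets = []
    for _ in range(chips):
        serial_inlets.append(round(inlet_c, 1))
        inlet_c += chip_power_w / (loop_flow_kg_s * CP)

    # Parallel (TPUv4-style): the flow is split per chip, but every chip's
    # inlet is at supply temperature and its valve can trim its own branch.
    parallel_inlets = [supply_c] * chips

    print("serial inlets:  ", serial_inlets)    # [30.0, 31.9, 33.8, 35.7]
    print("parallel inlets:", parallel_inlets)  # [30.0, 30.0, 30.0, 30.0]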

Also, the switch from lidded chips to bare chips, with a cold plate that comes down to just above the die to channel water, is one of those very detailed, fine-grained optimizations that is just so sweet.

ChuckMcM|6 months ago

When our mainframe sprang a leak in its water cooling jacket in 1978, it took down the main east/west node on IBM's internal network at the time. :-) But that was definitely a different chilling mechanism than the kind Google uses.

ChuckMcM|6 months ago

Much of Google's use of liquid chillers was protected behind NDAs as part of its "hidden advantage" with respect to the rest of the world. It was the secret behind really low PUE numbers.
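
For anyone unfamiliar, PUE is just total facility power divided by IT power, so the arithmetic is simple; the numbers below are purely illustrative, not Google's.

    it_power_mw = 10.0               # power delivered to the IT load
    cooling_power_mw = 0.8           # assumed cooling overhead
    distribution_losses_mw = 0.2     # assumed power-distribution losses

    pue = (it_power_mw + cooling_power_mw + distribution_losses_mw) / it_power_mw
    print(f"PUE = {pue:.2f}")        # 1.10 with these numbers; 1.0 is the ideal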

throwaway2037|6 months ago

Do we know if other hyperscalers also use liquid chillers to achieve very low PUE values? I think I saw photos from xAI's new data center and there was liquid cooling.

jwr|6 months ago

> they are direct cooling the servers with chillers that are outside of the facility

That is exactly what the Cray Y-MP EL that I worked with in the 90s/2000s did.

ambicapter|6 months ago

So every time they plug in a server they also plug in water lines?

liquidgecka|6 months ago

[I am not a current Google employee, so my understanding of this is based on externally written articles and "leap of faith" guesstimation]

Yes. A supply and return line along with power. Though if I had to guess how it's set up, this would be done with some super slick "it just works" kind of mount that lets them just slide the case in and lock it in place. When I was there almost all hardware replacement was made downright trivial, so it's probably more or less slide it into place and walk away.

jayd16|6 months ago

Maybe we can declutter things if they get PWoE (power and water over Ethernet) or just a USB-W standard.

ajb|6 months ago

I remember reading somewhere that they don't operate at the level of servers; if one dies they leave it in place until they're ready to replace the whole rack. Don't know if that's true now, though.

It does sound like connections do involve water lines though. As they are isolating different water circuits, in theory they could have a dry connection between heat exchanger plates, or one made through thermal paste. It doesn't sound like they're doing that though.

jedberg|6 months ago

Looks like it. New server means power, internet, and water.

Hilift|6 months ago

And a 12V battery.