It's possible it never made it into production, but when I was helping to commission a 4-rack "supercomputer" circa 2010 we used APC's in-row cooling (which exchanged heat via glycol to the outside but still maintained the hot/cold aisle), and I distinctly remember reading a whitepaper about racks with built-in water cooling and the problems with pressure loss, dripless connectors, and corrosion. I no longer recall whether the direct cooling loop exited the building or just cycled within the rack to an adjacent secondary heat exchanger. (And I don't remember if it was an APC whitepaper or one from some other integrator.)

There are also all the fun experiments with dunking whole servers in oil, but I'll grant that there, too, I've only seen setups described with secondary cooling loops - probably because of corrosion and wanting to avoid contaminants.
bri3d|6 months ago
I do think Google must be doing something right, as their quoted PUE numbers are very strong, but nothing in the linked chipsandcheese article seems architecturally groundbreaking, just strong micro-optimization. The article talks a lot about good cold plate/thermal interface design, good water flow management, use of active flow control valves, and a ton of iteration at scale to find the optimal CDU-to-hardware ratio, but at the end of the day it's exactly the same thing as the diagram from 1965.
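For context on why the PUE figures matter: PUE is just total facility power divided by IT equipment power, so cooling and distribution overhead shows up directly in the number. A quick back-of-the-envelope sketch (Python, with illustrative made-up loads, not anyone's actual meter data) shows the gap between a "typical" and a "very strong" figure:

    # Power Usage Effectiveness: total facility power / IT equipment power.
    # All numbers below are illustrative assumptions for the arithmetic only.

    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        """PUE = total facility power / IT power; 1.0 is the theoretical floor."""
        return total_facility_kw / it_load_kw

    it_load = 1000.0                            # kW of servers/network/storage (assumed)
    typical = pue(it_load + 500.0, it_load)     # ~1.5: common enterprise-DC overhead
    strong  = pue(it_load + 100.0, it_load)     # ~1.1: the ballpark Google quotes fleet-wide

    print(f"typical PUE = {typical:.2f}, strong PUE = {strong:.2f}")
    print(f"overhead per kW of IT load: {typical - 1:.0%} vs {strong - 1:.0%}")

So going from ~1.5 to ~1.1 means cutting the non-IT overhead from roughly 50% of the IT load down to roughly 10%, which is where all that plate/flow/valve micro-optimization pays off even if the loop topology itself is decades old.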
liquidgecka|6 months ago
[I am still annoyed at how many people are dismissive of Google’s datacenter work simply because “servers have been water cooled before”, which completely misses the point of datacenter-level cooling. I also learned that AWS is already doing this, along with parts of OVH] =)