pmichaud | 7 months ago

I don't have any particular knowledge about Oxide's cooling, but think about how bloated and inefficient literally every part of the compute stack is, from the metal to seeing these words on a screen. If you imagine fixing every part of it to be efficient top to bottom, I think you'll agree that we're not even in the same galaxy as the physical limits of moving electrons around at high speed.

kortilla | 7 months ago

But the majority of the heat is going to come from the CPUs, and this is a product meant to run arbitrary customer workloads.

If the customers leave these things idle, then Oxide is going to shine. But a busy rack is going to be dominated by CPU heat.

throw0101c | 7 months ago

According to Oxide Computer, going from 20mm to 80mm fans dropped their chassis power usage dramatically (fan power scales with roughly the cube of rotational speed, so a larger fan that moves the same air while spinning slower draws far less power): a rack full of 1U servers had 25% of its power going to the fans, and they were able to get that down to 1.2%:

* https://www.youtube.com/watch?v=hTJYY_Y1H9Q
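
A rough sketch of the affinity-law arithmetic behind that claim, with made-up numbers rather than Oxide's actual figures: a fan's power draw scales with the cube of its rotational speed, so a larger fan that can deliver the same airflow at a fraction of the speed draws far less power.

    # Fan affinity law: power drawn scales with the cube of rotational speed.
    def fan_power(rated_power_w: float, speed_fraction: float) -> float:
        """Power at a given fraction of rated speed (P ~ n^3)."""
        return rated_power_w * speed_fraction ** 3

    # Hypothetical numbers, purely illustrative: a small fan running flat out
    # vs. a larger fan that can move the same air at 40% of its rated speed.
    small_fan_w = fan_power(rated_power_w=15.0, speed_fraction=1.0)  # 15.0 W
    large_fan_w = fan_power(rated_power_w=30.0, speed_fraction=0.4)  # ~1.9 W
    print(f"small fan: {small_fan_w:.1f} W, large fan: {large_fan_w:.1f} W")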

From their weblog post:

> Compared to a popular rackmount server vendor, Oxide is able to fill our specialized racks with 32 AMD Milan sleds and highly-available network switches using less than 15kW per rack, doubling the compute density in a typical data center. With just 16 of the alternative 1U servers and equivalent network switches, over 16kW of power is required per rack, leading to only 1,024 CPU cores vs Oxide’s 2,048.

* https://oxide.computer/blog/how-oxide-cuts-data-center-power...
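
Putting the quoted figures on a per-core basis (reading "less than 15kW" and "over 16kW" as roughly 15 kW and 16 kW) works out to about half the power per core:

    # Back-of-the-envelope from the quoted figures above.
    oxide_w_per_core = 15_000 / 2_048    # ~7.3 W per core (32 sleds, 2,048 cores)
    typical_w_per_core = 16_000 / 1_024  # ~15.6 W per core (16 x 1U servers, 1,024 cores)
    print(f"{oxide_w_per_core:.1f} W/core vs {typical_w_per_core:.1f} W/core")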

zozbot234 | 7 months ago

Their rack-scale, from-scratch redesign includes fans big enough that they've reportedly managed to air-cool CPU hardware that was actually designed for water cooling, with no expectation of air cooling (though admittedly, they say they only achieved this just barely, and with a LOT of noise). That seems like something that's going to be objectively verifiable as a step up in efficiency.