dnhkng|2 months ago
This is the story of how I bought enterprise-grade AI hardware designed for liquid-cooled server racks that was converted to air cooling, and then back again, survived multiple near-disasters (including GPUs reporting temperatures of 16 million degrees), and ended up with a desktop that can run 235B parameter models at home. It’s a tale of questionable decisions, creative problem-solving, and what happens when you try to turn datacenter equipment into a daily driver.
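The "16 million degrees" readings are a classic sign of a sensor returning a sentinel value (16,777,215 is 0xFFFFFF) rather than a real measurement. A minimal sketch of filtering such readings when polling `nvidia-smi` — the cutoff value and helper names here are hypothetical, not from the original post:

```python
import subprocess

# Hypothetical cutoff: no GPU runs anywhere near this hot, so anything
# above it is treated as a sensor sentinel (e.g. 0xFFFFFF) or garbage.
SENSOR_SENTINEL_MIN = 1000

def parse_temps(smi_output: str) -> list[int]:
    """Parse one temperature (deg C) per line of nvidia-smi CSV output."""
    return [int(line.strip()) for line in smi_output.splitlines() if line.strip()]

def plausible(temp_c: int) -> bool:
    """Reject sentinel/garbage readings before alerting or fan control."""
    return 0 <= temp_c < SENSOR_SENTINEL_MIN

def read_gpu_temps() -> list[int]:
    """Query real temperatures; temperature.gpu is a standard nvidia-smi field."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [t for t in parse_temps(out) if plausible(t)]
```

For example, `parse_temps("65\n16777215\n")` yields both readings, but `plausible` keeps only the 65 °C one and drops the 0xFFFFFF sentinel.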
amirhirsch|2 months ago
I needed this info, thanks for putting it up. Can this really be an issue for every data center?
dauertewigkeit|2 months ago
How does the seller get these desktops directly from NVIDIA?
And if the seller's business is custom-made desktop boxes, why didn't he just fit the two H100s into a better desktop case?
Ntrails|2 months ago
I expect it was because they were no longer in the sort of condition to sell as new machines. They were clearly well used, and selling "as seen" carries the lowest reputational risk when offloading them.
dnhkng|2 months ago
This thing was too unwieldy to make into a desktop (you can see how much effort it took), and it was in pretty bad condition. I think he just wanted to get rid of it without having to deal with returns. I took a bet on it, and was lucky it paid off.
GPTshop|2 months ago
The H100 PCIe and the GH200 are two very different things. The advantages of Grace Hopper are much higher connection speeds, higher bandwidth, and lower power consumption.