I help run a fleet of GPU servers, and I might see 1 DIMM or SSD failure for every 50-100 GPU failures.
I realize NVIDIA is just cranking them out as fast as they can, but the quality on them is terrible. They overheat, they disappear after you reboot, they fall off the bus, they throw memory errors, and then you mix in all the software crashes your users generate...
Our current server vendor is actually good at replacing them, unlike our previous vendor, but the failure rates are just insane. If any other component failed this much we'd have the vendor buy the servers back.
† “Critical error” refers to an NVIDIA Xid or sXid error that is not recoverable, requiring an application and GPU reset.
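For anyone triaging these, a minimal sketch that pulls Xid codes out of the kernel log. It assumes read access to dmesg and that the driver's usual "NVRM: Xid" message format applies; the exact layout can vary across driver versions.

```python
# Hedged sketch: scan the kernel log for NVIDIA Xid events and extract the
# numeric error code. Assumes the common "NVRM: Xid (<bus>): <code>, ..."
# message layout, which may differ between driver versions.
import re
import subprocess

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
for line in log.splitlines():
    m = re.search(r"NVRM: Xid \(([^)]+)\): (\d+)", line)
    if m:
        bus, code = m.groups()
        print(f"Xid {code} on {bus}")  # e.g. Xid 79 = GPU has fallen off the bus
```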
Only a minority of GPU 'failures' appear to be permanent hardware problems, such as row remapping errors. A lot seem to be, like another comment says, a consequence of operating too close to the operational limit, tipping over it, and then requiring a power cycle.
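To separate permanently damaged cards from ones that just need a reset, the row-remapping counters are queryable over NVML. A minimal sketch, assuming the nvidia-ml-py bindings (where nvmlDeviceGetRemappedRows returns a (correctable, uncorrectable, pending, failure) tuple) and an A100/H100-class GPU; older parts don't expose row remapping.

```python
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    try:
        corr, uncorr, pending, failed = pynvml.nvmlDeviceGetRemappedRows(handle)
        # Nonzero uncorrectable remaps, or a failed remap, point to real,
        # permanent HBM damage rather than a card that just needs a power cycle.
        print(f"GPU {i}: {corr} corr / {uncorr} uncorr remapped rows, "
              f"pending={bool(pending)}, remap_failed={bool(failed)}")
    except pynvml.NVMLError as e:
        print(f"GPU {i}: row remapping not supported ({e})")
pynvml.nvmlShutdown()
```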
Totally matches my experience, and it feels bizarre inside-looking-out that nobody else talks about it. Hardware from 2010-2020 was remarkably stable, and CPUs are still as stable as they were, but we've had this large influx of money spent on these chips that fall over if you look at them funny. I think it leads to a lot of people thinking, "we must be doing something wrong", because it's just outside of their mental model that hardware failures can occur at this rate. But that's just the world we live in.
It's a perfect storm: a lot of companies are doing HPC-style distributed computing for the first time, and lack experience in debugging issues that are unique to it. On top of that, the hardware is moving very fast and they're ill-equipped to update their software and drivers at the rate required to have a good experience. On top of that, the stakes are higher because your cluster is only as strong as its weakest node, which means a single hardware failure can turn the entire multi-million dollar cluster into a paperweight, which adds more pressure and stress to get it all fixed.

Updating your software means taking that same multi-million dollar cluster offline for several hours, which is seen as a cost rather than a good investment of time. And a lot of the experts in HPC-style distributed computing will sell you "supported" software, which is basically just paying for the privilege of using outdated software that lacks the bug fixes that your cards might desperately need. That model made sense in the 2010s, when Linux (kernel and userspace) was less stable and you genuinely needed to lock your dependencies and let the bugs work themselves out. But it's the exact opposite of what you want to be doing in 2026.
You put all of this together, and it's difficult to be confident whether the hardware is bad, or going bad, or whether failures are only manifesting because the cards are exposed to bugs, or maybe both. Yikes, it's no fun.
They're also run far closer to the edge of their operational limits than CPUs, so you're far more likely to get one that barely passes manufacturing tests, then degrades just a tiny bit and stops working.
It's funny, I've been watching all the NVIDIA GTC keynotes from 2012 to now to better understand the ecosystem, and Jensen pretty clearly states a few times, "it's a miracle it works at all". Clearly he intends it as a brag about the defect rate on a 50-billion-transistor chip, but maybe he's more right than he realizes.
It's wild that these are the failure rates for datacenter-grade products. If you were pushing consumer GPU servers all-out, I would expect this kind of variation.
I expect it's not just a problem with Nvidia, though.
In his newsletter, Ed Zitron hammered home the point that GPUs depreciate quickly, but these kinds of reliability issues are shocking to read about. GPUs fail so often that there's a 24/7 Slack channel with customers like Meta (who apparently can't set up a cluster themselves...).
Ed Zitron also called out the business model of GPU-as-a-service middleman companies like Modal as deeply unsustainable, and I also don't see how they can make a profit if they are only reselling public clouds. Assuming they are VC-funded, the VCs need returns for their funds.
Unlike the fiber cable laid during the dot-com boom, the GPUs in use today eventually end up in the trash bin. These GPUs are treated like toilet paper: you use them and throw them away; nothing is left to hand down to the next generation.
Who will be the one to mark down these "assets"? Who is providing the money to buy the next batch of GPUs, now that billions have already been spent?
Maybe we'll see a wave of retirements soon.
> It’s underappreciated how unreliable GPUs are. NVIDIA’s hardware is a marvel, the FLOPs are absurd. But the reliability is a drag. A memorable illustration of how AI/ML development is hampered by reliability comes from Meta’s paper detailing the training process for the LLaMA 3 models: “GPU issues are the largest category, accounting for 58.7% of all unexpected issues.”
> Imagine the future we’ll enjoy when GPUs are as reliable as CPUs. The Llama3 team’s CPUs were the problem only 0.5% of the time. In my time at Modal we can’t remember finding a single degraded CPU core.
> For our Enterprise customers we use a shared private Slack channel with tight SLAs. Slack is connected to Pylon, tracking issues from creation to resolution. Because Modal is built on top of the cloud giants and designed for dynamic compute autoscaling, we can replace bad GPUs pretty fast!
> These GPUs are treated like toilet paper: you use them and throw them away; nothing is left to hand down to the next generation.
I'm guessing this may be highly dependent on what the bathtub curve looks like, and how much the provider wants to spend on cooling.
Of course with Nvidia being a near monopoly here, they might just not give a fuck and will pump out cards/servers with shitty reliability rates simply because people keep buying them and they don't suffer any economic loss or have to sit in front of a judge.
It'd be interesting to see what the error rate per TFLOP (no /s; we're counting operations, not time) is compared to older-generation cards.
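Back-of-the-envelope, that metric is just the inverse of MTBE converted from hours to operations. A sketch with purely illustrative placeholder numbers (nothing below is a measured value):

```python
def errors_per_exaflop(mtbe_hours: float, sustained_tflops: float) -> float:
    """Expected errors per 10^18 operations, given a mean time between
    errors (hours) and sustained throughput (TFLOP/s = 10^12 op/s)."""
    ops_between_errors = mtbe_hours * 3600 * sustained_tflops * 1e12
    return 1e18 / ops_between_errors

# Hypothetical numbers: an older card with a long MTBE but low throughput
# can still accumulate more errors per operation than a newer, faster one.
print(errors_per_exaflop(mtbe_hours=20_000, sustained_tflops=150))  # older, ~9e-5
print(errors_per_exaflop(mtbe_hours=8_000, sustained_tflops=700))   # newer, ~5e-5
```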
> Ed Zitron also called out the business model of GPU-as-a-service middleman companies like Modal as deeply unsustainable, and I also don't see how they can make a profit if they are only reselling public clouds.
You got a link for that? I work on Modal and would be interested in seeing the argument!
We think building a proper software layer for multitenant demand aggregation on top of the public clouds is sufficient value-add to be a sustainable business (cf DBRX and Snowflake).
I suppose NVIDIA could invest in making their GPUs more reliable? But then that'll make everything else even more expensive lol. If only one of the companies in the chain could take one for the team.
> H100 shows 3.2× lower per-GPU mean time between errors (MTBE) compared to A100 for uncorrectable ECC memory errors. The per-GB MTBE of the H100’s HBM3 memory is 24% lower (∼8.5M hours) than the A100’s HBM2e memory (∼11.3M hours). We conjecture that the reduction in memory resilience stems from H100’s higher memory capacity.

> We attribute the decrease in resilience primarily to the higher memory capacity (96 GB vs. 40 GB, a 2.4× increase), which increases the chances of bit flips.

> We additionally hypothesize that H100 memory resilience is worse due to (a) a lower signaling voltage that increases susceptibility to bit flips and (b) an increased number of stacks in the HBM3 memory that makes heat dissipation challenging and degrades the resilience of the memory modules.
Increasing voltage just makes the heat dissipation problem worse, so probably can't just crank that up.
From what I can gather, a typical A100 or H100 is air cooled. Sounds like liquid cooling them might help, or at least allow you to bump up those voltages without thermal issues.
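The capacity conjecture is easy to sanity-check against the quoted figures: if errors scale with capacity, per-GPU MTBE should be roughly per-GB MTBE divided by capacity in GB, and that does reproduce the 3.2× gap.

```python
# Per-GPU MTBE ≈ per-GB MTBE / capacity, using only the numbers quoted above.
a100 = 11.3e6 / 40   # ~282,500 hours per GPU (HBM2e, 40 GB)
h100 = 8.5e6 / 96    # ~88,500 hours per GPU (HBM3, 96 GB)
print(a100 / h100)   # ~3.2, matching the reported per-GPU MTBE ratio
```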
Are the numbers in the H100 PCIe vs SXM table swapped for rows 3 onwards? It looks to me like the PCIe is showing higher GiB/s numbers, which is counter to expectations.
Or am I misunderstanding those benchmarks?
You're not misunderstanding; the PCIe card does indeed outperform on the memory bandwidth tests. But it gets dominated on FLOP/s and real-world application benchmarks.
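For anyone who wants to reproduce the GiB/s comparison, a rough sketch of a device-memory bandwidth probe in PyTorch (my framework choice, not the article's; a serious benchmark would use CUDA events and far more iterations):

```python
import time
import torch

x = torch.empty(1 << 30, dtype=torch.uint8, device="cuda")  # 1 GiB buffer
y = torch.empty_like(x)
torch.cuda.synchronize()
t0 = time.perf_counter()
for _ in range(100):
    y.copy_(x)  # device-to-device copy: 1 GiB read + 1 GiB write per iteration
torch.cuda.synchronize()
dt = time.perf_counter() - t0
print(f"~{100 * 2 / dt:.0f} GiB/s effective D2D bandwidth")
```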
I wonder why H100 H2D and D2H unpinned memcpy bandwidth is *faster* on PCIe with vendor B than on SXM with vendor D. Is resizable BAR available on PCIe but not SXM?
Or could it be a software configuration difference? The documentation for the driver API flag CU_MEMHOSTREGISTER_IOMEMORY notes that host memory being physically contiguous may matter to the driver, in this context for memory-mapped memory. If vendor B has THP enabled or configured differently than vendor D, small allocations up to 2 MiB could be physically contiguous, which may result in higher efficiency and more bytes transferred per request.
At a higher level: unpinned memcpy is a performance antipattern. Perhaps vendor D has fewer clients using unpinned memcpy in their workloads than vendor B, or they decided not to dedicate support to it for this reason. TensorFlow will go to great lengths to copy unpinned memory to a pinned staging buffer if you feed unpinned host memory tensors to a graph.
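To make the antipattern concrete, a small sketch (PyTorch here, though the thread mentions TensorFlow; the effect is the same in any CUDA stack) comparing pageable and pinned host-to-device copies. On most systems the pinned path wins comfortably, since pageable memory must be staged through the driver's internal pinned buffer.

```python
import time
import torch

def h2d_gibps(host: torch.Tensor, iters: int = 50) -> float:
    """Measure host-to-device copy bandwidth in GiB/s for a given host tensor."""
    dev = torch.empty(host.shape, dtype=host.dtype, device="cuda")
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        dev.copy_(host, non_blocking=True)  # async only if `host` is pinned
    torch.cuda.synchronize()
    gib = host.numel() * host.element_size() / (1 << 30)
    return iters * gib / (time.perf_counter() - t0)

pageable = torch.empty(1 << 28, dtype=torch.uint8)             # 256 MiB, unpinned
pinned = torch.empty(1 << 28, dtype=torch.uint8).pin_memory()  # page-locked copy
print(f"pageable: {h2d_gibps(pageable):.1f} GiB/s, pinned: {h2d_gibps(pinned):.1f} GiB/s")
```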
I recently had to route a PCB for an FPGA using DDR3. It took three designs to get the RAM interface right. Don't get me wrong, I have designed such things before, but there are so many external factors. Now think of higher-order DDR. I think they are at the edge of what can be done with today's PCB design.
My experience with GPU failures is that they show up when loading tons of data, suggesting it's also about the stress such a highly performant part puts on the system.
> Today, we’re sharing our GPU reliability system as both a demonstration of our commitment to Modal customers and as a guide for fellow travelers renting hyperscaler or neocloud cards. It’s dangerous to go alone! Take this.
> We’ve chosen not to refer to cloud providers directly, but instead give them anonymized A, B, C, D identifiers. If you want to know who’s who, track the clues or buy us a beer sometime.
Come on, either name names or admit it is pure PR.
Edit: or will someone who can decode the clues weigh in?
A deep dive on why these beastly cards fail so frequently compared to all other common current-day hardware would be fascinating!
Story of Two GPUs: Characterizing the Resilience of Hopper H100 and Ampere A100 GPUs
https://dl.acm.org/doi/10.1145/3712285.3759821
Cloud A: AWS (Amazon Web Services)
Cloud B: Azure (Microsoft Azure)
Cloud C: GCP (Google Cloud Platform)
Cloud D: OCI (Oracle Cloud Infrastructure)
Gemini had some decent evidence for each choice too, but I didn't confirm anything.