Der8auer and Buildzoid on YouTube made nice, informative videos on the subject, and no, it is not "simply a user error". So glad I went with the 7900 XTX - should be all set for a couple of years.
Summary of the Buildzoid video (https://www.youtube.com/watch?v=kb5YzMoVQyw), courtesy of redditors in r/hardware (https://old.reddit.com/r/hardware/comments/1imyzgq/how_nvidi...):
> TL;DW: The 3090 had 3 shunt resistors set up in a way that distributed the power load evenly among the 6 power-bearing conductors. That's why there were no reports of melted 3090s. The 4090/5090 modified the engineering for whatever reason, perhaps to save on manufacturing costs, and the shunt resistors no longer distribute the power load. Therefore, it's possible for 1 conductor to carry far more current than the rest, and that's how it melts.
> The only reason the problem was considered "fixed" (not really, it wasn't) on the 4090 is that, apparently, to skew the load enough to generate the heat needed to melt the connector, the plug would have to be improperly seated. However, with 600W, as seen in der8auer's video, all it takes is one or two cables making slightly better contact than the rest to take up most of the load and, as he measured, reach 23A.
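To make the failure mode in that summary concrete, here's a minimal toy model (Python, not from either video): treat the six 12V pins as parallel contact resistances with no per-pin balancing, and the current splits in proportion to conductance, so whichever pin makes slightly better contact takes a disproportionate share. The resistance values below are invented purely for illustration.

```python
# Toy model: six 12V pins as parallel contact resistances feeding one rail.
# With no per-pin current balancing, current divides in proportion to each
# pin's conductance (1/R).
# NOTE: these resistance values are made up for illustration only.

total_current_a = 600 / 12  # 600 W at 12 V -> 50 A across all pins

# One pin making noticeably better contact than the other five
# (e.g. worn or slightly misaligned terminals elsewhere).
contact_resistance_ohm = [0.0025, 0.010, 0.010, 0.012, 0.012, 0.015]

conductances = [1 / r for r in contact_resistance_ohm]
g_total = sum(conductances)

for i, g in enumerate(conductances, start=1):
    pin_current = total_current_a * g / g_total
    print(f"pin {i}: {pin_current:5.1f} A")
```

With these made-up numbers, one pin carries roughly 24A while the connector as a whole still delivers an unremarkable-looking 50A total - the same ballpark as the 23A der8auer measured on a single wire.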
- Most 12VHPWR connectors are rated to 9.5-10A per pin. 600W / 12V = 50A, split across 6 pin pairs = 8.33A per pin. The spec requires a 10% safety factor, which puts you at 9.17A.
- 12VHPWR connectors are compatible with 18ga or, at best, 16ga cables. For 90C-rated single-core copper wires I've seen maximum allowed amperages of at most 14A for 18ga and 18A for 16ga - less in most sources. Near the connector those wires are bundled so tightly that they can't be treated as single-core for the purposes of heat dissipation. (A quick sanity check of these numbers is sketched below.)
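For reference, a back-of-the-envelope check of the arithmetic in those two bullets (the pin-rating and wire-ampacity figures are simply the ones quoted above, not values verified against a datasheet):

```python
# Back-of-the-envelope check of the per-pin numbers quoted above.
power_w = 600
rail_v = 12
pins = 6  # six 12V/GND pin pairs in a 12VHPWR connector

total_a = power_w / rail_v               # 50.0 A total
per_pin_a = total_a / pins               # ~8.33 A per pin if perfectly balanced
per_pin_with_margin = per_pin_a * 1.10   # ~9.17 A with the 10% safety factor

pin_rating_a = (9.5, 10.0)                   # per-pin rating quoted above
wire_ampacity_a = {"18ga": 14, "16ga": 18}   # 90C single-core figures quoted above

print(f"total: {total_a:.1f} A, per pin: {per_pin_a:.2f} A, "
      f"with 10% margin: {per_pin_with_margin:.2f} A")
print(f"pin rating: {pin_rating_a[0]}-{pin_rating_a[1]} A, "
      f"wire limits: {wire_ampacity_a}")
# Even perfectly balanced, 9.17 A sits just under a 9.5-10 A pin rating;
# a single wire pushed to 23 A is far beyond both the pin rating and the
# 18ga/16ga ampacity figures.
```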
Honestly, at 50A of current we should be using connectors that screw firmly into place and use a single-wipe or single solid-conductor pin style. Multi-pin connectors will always inherently have issues with imbalanced power delivery. With extremely slim engineering margins, this is basically asking for disaster. I stand by what I've said elsewhere: if I were an insurance company, I'd issue a notice that fires caused by this connector will not be covered by any issued policy, as it does not satisfy reasonable engineering margins.
edit: replaced power with current... we're talking amps not watts
Enjoying my 7900 XTX as well. I really don't understand why Nvidia had to pivot to this obscure power connector. It's not like this is a mobile device where that interface is very important - you plug the card in once and forget about it.
> So glad I went with 7900 XTX - should be all set for a couple of years.
Really depends on the use case. For gaming, normal office use, smaller AI/ML, or video work, yeah, it's fine. But if you want the RTX 5090 for the VRAM, then the 24GB of the 7900 XTX won't be enough.
Honestly, the smart play in that case is to buy two 3090s and connect them with NVLink. Or... and hear me out: at this point you could probably just invest your workstation build budget and use the dividends to pay for RunPod instances when you actually want to spin up and do things.
I'm sure there are some use cases for 32GB of VRAM, but most of the cutting-edge models that people are using day to day on local hardware fit in 12 or even 8GB of VRAM. It's been a while since I've seen anything bigger than 24GB but smaller than 70GB.
Yeah, I expect my next card will be AMD. I'm happy with my 3080 for now, but the cards have nearly doubled in price in two generations and I'm not going to support that. I can't abide the prices or the insane power draw. I'm OK with not having DLSS.
It'll probably be fine for years, longer if you can stand looking at AI-generated, upscaled frames. An uplift in raw GPU power is so expensive that we might as well be back in the reign of the 1080. The only thing that'll move the needle will be a new console generation.
Probably reference cards, yeah? I think the common advice is not to buy the reference cards - they rarely cool well enough. I made that mistake with the RX 5700 XT and will never make it again.
It's too bad AMD will stop even aiming for that market. But also, I bought a Sapphire 7900 XTX knowing it'd be in my machine for at least half a decade.
People are acting like this is some long-term position. There's no evidence of that. AMD didn't give up on the high end permanently after the RX 480 / 580 generation.
What AMD does need right now is:
* Don't cannibalize advanced packaging (which big RDNA4 required) from high-margin and high-growth AI chips
* Focus on software features like upscaling tech (which is a big multiplier and allows midrange GPUs to punch far above their weight) and compute drivers (which they badly need to improve to have a real shot at taking AI chip marketshare)
* Focus on a few SKUs and execute as well as possible to build mindshare and a reputation for quality
"Big" consumer GPUs are increasingly pointless. The better upscaling gets, the less raw power you need, and 4K gaming is already passable (1440p gaming very good) on the current gen with no obvious market for going beyond that. Both Intel and Nvidia are independently suffering from this masturbatory obsession with "moar power" causing downstream issues. I'm glad AMD didn't go down that road personally.
If "midrange" RDNA4 is around the same strength as "high-end" RDNA3, but $300 cheaper and with much better ray tracing and an upscaling solution at least on par with DLSS 3, then that's a solid card that should sell well. Especially given how dumb the RTX 5080 looks from a value perspective.
I went 7900 GRE, not even considering Nvidia, because I simply do not trust that connector.