item 42624846

mrybczyn | 1 year ago

Eh? By all indications compute is now evolving SLOWER than ever. Moore's Law is dead, Dennard scaling is over, the latest fab nodes are evolutionary rather than revolutionary.

This isn't the 80s when compute doubled every 9 months, mostly on clock scaling.


sliken | 1 year ago

Indeed, generational improvements are at an all-time low. Most of the "revolutionary" AI and GPU improvements come from reduced precision (fp32 -> fp16 -> fp8 -> fp4) or from ever more fake pixels and fake frames, and now, in the most recent iteration, multiple fake frames per computed frame.
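The precision ladder above can be illustrated by truncating a value's mantissa. This is only a sketch: it ignores exponent ranges and round-to-nearest behavior, and the mantissa widths used (23 for fp32, 10 for fp16, 3 for fp8 E4M3, 1 for fp4 E2M1) are the common format definitions, not anything claimed in the thread.

```python
import struct

def reduce_precision(x: float, mantissa_bits: int) -> float:
    """Truncate a Python float (IEEE-754 double, 52 mantissa bits)
    to the given number of mantissa bits. Illustrative only."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    drop = 52 - mantissa_bits                      # low-order bits to zero out
    mask = ~((1 << drop) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack('>d', struct.pack('>Q', bits & mask))[0]

pi = 3.141592653589793
for name, m in [('fp32', 23), ('fp16', 10), ('fp8 (E4M3)', 3), ('fp4 (E2M1)', 1)]:
    approx = reduce_precision(pi, m)
    print(f'{name}: {approx!r}  error {abs(approx - pi):.2e}')
```

Each halving of precision roughly doubles arithmetic throughput on hardware that supports it, which is where much of the headline speedup comes from.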

I believe Nvidia published some numbers for the 5000 series that showed DLSS-off performance, which allowed a fair comparison to the previous generation (an improvement on the order of 25%), then removed them.

Thankfully the 3rd party benchmarks that use the same settings on old and new hardware should be out soon.

tcdent | 1 year ago

Fab node size is not the only factor in performance. Physical limits were reached, and we're pulling back from the extremely small stuff for the time being. That is the evolutionary part.

Revolutionary developments are happening elsewhere: multi-layer wafer bonding, chiplets (collections of smaller interconnected dies), and backside power delivery. We don't need transistors to keep getting physically smaller; we need more of them, at increased efficiency, and that's exactly what's happening.

dotancohen | 1 year ago

All that comes with a roughly linear increase in heat, but a faster-than-linear increase in the difficulty of dissipating it: heat generation scales with volume while the dissipating surface scales only with area (the square-cube law).
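The square-cube point can be sketched with toy numbers. This is an illustration of the geometric scaling only, not a thermal model of any real package:

```python
# Scale a cube-shaped die stack by a factor s: heat-generating volume
# grows as s**3, while the surface available for cooling grows only as
# s**2, so watts per unit of cooling area grow linearly with s.
for s in (1, 2, 4, 8):
    volume = s ** 3          # proportional to heat generated
    surface = 6 * s ** 2     # proportional to heat dissipated
    print(f'scale {s}: heat per unit of cooling area ~ {volume / surface:.3f}')
```

In practice the problem is even worse for 3D stacking, since inner layers have no exposed surface at all.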

There is still progress being made in hardware, but for most critical components it now looks closer to logarithmic, as we approach the physical limits of the materials.