top | item 36183127


gateorade | 2 years ago

I'm not sure this is obvious. For the consumer market, maybe. But at the end of the day, there are physical limits on how much compute you can squeeze into any given area. If your job requires more parallel compute than can reasonably be squeezed into a package for power/thermal reasons, then distributing the workload off-package is a requirement.

Maybe hardware will morph into a more heterogeneous mix than we have now, with many single-package CPU-GPU nodes working in parallel instead of a few CPUs orchestrating a gigantic sea of GPUs, but maybe not.

I'm more convinced that this is just one stage of the cycle. GPUs in consumer hardware weren't really a thing before the 90s. From the 90s to around 2013, graphics software (and its ability to load the hardware) improved rapidly. Since then it's kind of stagnated, and much more of the focus has shifted to doing the same amount of rendering with less power/less heat/less space. Even if we do see a shift toward SoCs/APUs/whatever in consumer hardware over the next few years, I'd bet on some sort of paradigm shift coming along (fully raytraced rendering?) and swinging the pendulum back toward big discrete GPUs at some point.
