item 33019073

revanx_ | 3 years ago

Imho, if CPU manufacturers figure out how to put a large cache on the same die (something like AMD's 3D V-Cache, but much larger), we may actually see graphics cards become obsolete in favor of software rendering.


api | 3 years ago

Specialized silicon will always beat general purpose silicon.

It is true that a chip like this could probably render pretty decent 3D in software, though. I wonder if combining this with the GPU in a clever way could allow more people to experience real-time ray tracing?

rollcat | 3 years ago

> Specialized silicon will always beat general purpose silicon.

The whole history of PCs is repeatedly proving otherwise. The NES had hardware sprites. Then Carmack & Romero showed up and proved you can have smooth side scrolling in software, on an underpowered CPU. The whole concept of a PPU was thus rendered obsolete. Repeat for discrete FPUs, discrete sound cards, RAID cards (ZFS), and so on.

Specialised silicon will beat general purpose silicon at the given task, until general purpose silicon + software catches up. You need to keep pouring in proportional R&D effort for the specialised silicon to stay ahead.

What keeps GPUs relevant is that they're in fact much more general than what the "G" originally stood for.

mrguyorama | 3 years ago

When the second generation of EPYC came out, Linus ran a "software-rendered" version of Crysis that did all rendering on CPU cores instead of GPU shader units. At 640x480 it ran alright.
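To make "all rendering on CPU cores" concrete, here is a toy barycentric triangle rasterizer, the core inner loop of any software renderer. This is a minimal illustrative sketch, not anything from that demo:

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area of the parallelogram (a->b, a->p);
    # its sign says which side of edge a->b the point p is on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(w, h, tri, color):
    """Fill triangle `tri` ((x, y) vertices, counter-clockwise)
    into a w*h framebuffer of integers."""
    fb = [[0] * w for _ in range(h)]
    (x0, y0), (x1, y1), (x2, y2) = tri
    for y in range(h):
        for x in range(w):
            # A pixel is inside if it lies on the same side of all three edges.
            w0 = edge(x1, y1, x2, y2, x, y)
            w1 = edge(x2, y2, x0, y0, x, y)
            w2 = edge(x0, y0, x1, y1, x, y)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                fb[y][x] = color
    return fb

# Fill a right triangle covering the upper-left half of an 8x8 buffer.
fb = rasterize(8, 8, ((0, 0), (7, 0), (0, 7)), 1)
```

A GPU runs essentially this same test for thousands of pixels in parallel; the point upthread is that with enough cores and cache, brute-forcing it on the CPU becomes tolerable.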

sudosysgen | 3 years ago

Possibly - there are a lot of ray tracing algorithms that don't really work well on GPUs (anything MCMC-based, for instance). But context- and time-aware denoising seems to be able to compensate.
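A minimal sketch of why MCMC-style methods map poorly onto GPUs: each step of a Metropolis chain depends on the previous state and takes a data-dependent accept/reject branch, the sequential, divergent pattern that wide SIMD hardware dislikes. Illustrative Python (a generic random-walk Metropolis sampler, not a renderer):

```python
import math
import random

def metropolis_chain(target, x0, steps, step_size=0.5, seed=0):
    """Random-walk Metropolis sampler over an unnormalized density `target`.

    Every iteration depends on the previous state (no parallelism along the
    chain), and the accept/reject branch is data-dependent (warp divergence
    on a GPU) -- the structural problem alluded to above.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept with probability min(1, target(proposal) / target(x)).
        if rng.random() < min(1.0, target(proposal) / target(x)):
            x = proposal  # branch taken only sometimes
        samples.append(x)
    return samples

# Sample an unnormalized standard normal density; the chain mean should
# settle near 0.
samples = metropolis_chain(lambda x: math.exp(-x * x / 2), x0=0.0, steps=20000)
mean = sum(samples) / len(samples)
```

Metropolis light transport in a renderer has the same shape: one long, branchy, state-dependent chain per mutation strategy, which is why it tends to stay on the CPU.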

bitL | 3 years ago

A GPU is a bandwidth monster; a CPU is a latency monster. You can't have both on the same silicon.

imtringued | 3 years ago

Every console SoC designed by AMD proves you wrong.