msbarnett | 3 years ago
Is it still all that fundamentally different? All of the RDNA parts are tile-based renderers (I think even the Vega series GCN parts made that switch?)
ribit | 3 years ago
Apple (inherited from PowerVR) adds another twist on top: the rasterised pixels are not shaded immediately but instead collected in a buffer. Once all fragments in a tile are rasterised, you basically have an array with visible-triangle information for each pixel. Pixel shading is then simply a compute pass over this array. This can be more efficient, as you only need to shade visible pixels, and it may utilise the SIMD hardware better (you are shading 32x32 blocks containing multiple triangles at once rather than shading triangles separately). It also radically simplifies dealing with pixels: there are never any data races for a given pixel, pixel data write-out is just a block memcpy, and programmable blending is super easy and cheap to do; in fact, I don't believe that Apple even has ROPs.

There are of course disadvantages as well. It's very tricky to get right and requires specialised fixed-function hardware; you need to keep transformed primitive data around in memory until all primitives are processed (because shading is delayed); and there are tons of corner cases you need to handle which can kill your performance (transparency, primitive buffer overflows, etc.). And of course, many modern rendering techniques rely on global memory operations, and there is an increasing trend to do rasterisation in a compute shader, where this rendering architecture doesn't really help.
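The two-pass idea above can be sketched in a few lines of Python. This is purely illustrative (the tile size, data layout, and function names are my own invention, not Apple's actual hardware): pass 1 depth-tests every triangle into a per-pixel visibility buffer without shading anything, and pass 2 is the "compute pass" that shades each visible pixel exactly once, no matter how many triangles overlapped it.

```python
# Toy sketch of tile-based deferred rendering (TBDR), as described above.
# All names and the 8x8 tile size are illustrative, not Apple's hardware.

TILE = 8  # tiny tile for illustration; real hardware uses e.g. 32x32


def edge(ax, ay, bx, by, px, py):
    # Signed-area edge test: >= 0 means (px, py) is inside edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)


def rasterise(triangles):
    # Pass 1: rasterise and depth-test every triangle, recording only the
    # id of the closest triangle per pixel. No shading happens here.
    vis = [[None] * TILE for _ in range(TILE)]
    depth = [[float("inf")] * TILE for _ in range(TILE)]
    for tid, (v0, v1, v2, z) in enumerate(triangles):
        for y in range(TILE):
            for x in range(TILE):
                px, py = x + 0.5, y + 0.5  # pixel centre
                inside = (edge(*v0, *v1, px, py) >= 0 and
                          edge(*v1, *v2, px, py) >= 0 and
                          edge(*v2, *v0, px, py) >= 0)
                if inside and z < depth[y][x]:
                    depth[y][x] = z
                    vis[y][x] = tid
    return vis


def shade(vis, colours):
    # Pass 2: one pass over the visibility array. Each pixel is shaded at
    # most once, so occluded fragments cost zero shading work.
    out = [[(0, 0, 0)] * TILE for _ in range(TILE)]
    shaded = 0
    for y in range(TILE):
        for x in range(TILE):
            tid = vis[y][x]
            if tid is not None:
                out[y][x] = colours[tid]
                shaded += 1
    return out, shaded
```

For example, with a large triangle at depth 1.0 and a smaller, closer triangle at depth 0.5 overlapping it, the shade count equals the number of covered pixels, not the number of rasterised fragments: the occluded part of the far triangle is never shaded.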