kig | 10 years ago
It's not easily solvable by a super-powered compiler either. Sure, you could write your own compute-driven preemptive graphics pipeline and break execution after every X billion simulated instructions and random memory accesses, but you'd also have to redo the analysis on every frame, since the shader inputs keep changing.
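To make the idea concrete, here's a toy sketch of that kind of compiler-inserted preemption: run simulated instructions and hand control back after a fixed budget. All names and the budget size are illustrative, not any real GPU's scheme.

```python
# Toy preemption sketch: execute simulated instructions, yielding
# control back after every PREEMPT_BUDGET instructions. The budget
# is scaled way down from the "X billion" in the comment.
PREEMPT_BUDGET = 1_000

def run_with_preemption(instructions):
    executed = 0
    for instr in instructions:
        instr()  # simulate one instruction
        executed += 1
        if executed % PREEMPT_BUDGET == 0:
            yield executed  # break execution: scheduler regains control
    yield executed  # final partial chunk

# 2500 no-op "instructions" preempt at 1000 and 2000, then finish at 2500.
checkpoints = list(run_with_preemption([lambda: None] * 2500))
print(checkpoints)  # [1000, 2000, 2500]
```

The scheduler driving this generator is where the per-frame reanalysis burden lands: the instruction stream it feeds in changes with the shader inputs every frame.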
The computation done is roughly (number of vertices * vertex shader compute time) + (number of pixels covered by the primitives * fragment shader compute time), where each shader's compute time is the sum of its instruction execution times and data fetch times. Then you have to take early exits, data access patterns and cache sizes into account. Otherwise the compiler will think that, say, a simple greyscaling fragment shader causes a random memory access for every pixel and will take forever to run, spread the shader execution across multiple frames, and kill performance dead.
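The cost model above can be sketched in a few lines. Everything here is a hypothetical illustration, with made-up time units; the point is how badly the estimate swings on the fetch-cost assumption.

```python
# Sketch of the rough per-frame cost model described above.
# All function names and numbers are illustrative, not a real GPU's figures.

def shader_cost(instruction_time, data_fetch_time):
    # Per-invocation cost: instruction execution time plus data fetch time.
    return instruction_time + data_fetch_time

def frame_cost(num_vertices, vertex_shader_cost,
               covered_pixels, fragment_shader_cost):
    # (vertices * vertex shader time) + (covered pixels * fragment shader time)
    return (num_vertices * vertex_shader_cost
            + covered_pixels * fragment_shader_cost)

# Fullscreen greyscaling pass at 1920x1080: two triangles (6 vertices),
# a trivial vertex shader, and a fragment shader doing one texture fetch.
pixels = 1920 * 1080
cached = frame_cost(6, shader_cost(10, 0), pixels, shader_cost(4, 1))
naive = frame_cost(6, shader_cost(10, 0), pixels, shader_cost(4, 100))
print(cached)  # 10368060
print(naive)   # 215654460 -- ~20x worse if every fetch is priced as a miss
```

The second estimate is what a compiler that ignores cache behavior would produce: pricing every texel fetch as a random memory access inflates the cost by an order of magnitude, which is exactly the mispricing that would make it needlessly spread the work across frames.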