yeoyeo42 | 3 months ago
perhaps it's easiest to think about regular image processing, because it uses the same hardware. you can think of each pixel as a particle.
a typical 4k image (3840 x 2160 at 16:9) contains about 8 million pixels. a trivial compute shader that just writes 4 bytes per pixel of some trivial value (e.g. the compute shader thread ids) will take anywhere from roughly 0.05 ms to 0.5 ms on modern-ish GPUs. that's a wide spread because the hardware spread is wide. on current high-end GPUs you will be very close to 0.05 ms, maybe even a bit faster.
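quick sanity check on those numbers (my own back-of-envelope arithmetic, not from any profiler): a 4-bytes-per-pixel write at 4k is ~33 MB, and the timing range above implies a write bandwidth you can compare against published GPU memory bandwidth figures.

```python
# back-of-envelope: trivial 4-bytes-per-pixel compute pass at 4k
pixels = 3840 * 2160            # ~8.3 million pixels
bytes_written = pixels * 4      # 4 bytes per pixel -> ~33 MB per dispatch

fast_time_s = 0.05e-3           # optimistic high-end timing from above
slow_time_s = 0.5e-3            # pessimistic low-end timing

# implied write bandwidth in GB/s for each end of the range
fast_bw = bytes_written / fast_time_s / 1e9
slow_bw = bytes_written / slow_time_s / 1e9

print(f"{pixels / 1e6:.1f} M pixels, {bytes_written / 1e6:.1f} MB written")
print(f"implied write bandwidth: {slow_bw:.0f} - {fast_bw:.0f} GB/s")
```

the fast end works out to roughly 660 GB/s, which is in the ballpark of high-end GPU memory bandwidth, i.e. a pass this trivial is purely bandwidth-bound.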
but real-world programs like video games do a whole lot more than write a trivial value. they read and write far more data (there are usually many passes, so it's not done just once - in the end maybe a few hundred bytes per pixel), and usually run many thousands of instructions per pixel. I work on a video game everyone's probably heard of, and one of the main material shaders is too large to fit into my work GPU's 32 KB instruction cache, to give you an idea of how many instructions are in there (not all executed, of course - there's some branching involved).
and you can still easily do this all at 100+ frames per second on high end GPUs.
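to see why that's plausible, here's the same kind of arithmetic for the realistic case (the 300 bytes/pixel midpoint is my assumption for "a few hundred"):

```python
# rough per-frame memory traffic for a real game workload at 4k
pixels = 3840 * 2160
bytes_per_pixel = 300           # assumed midpoint of "a few hundred bytes per pixel"
fps = 100                       # the 100+ fps figure from above

traffic_per_frame_gb = pixels * bytes_per_pixel / 1e9   # GB touched per frame
sustained_bw = traffic_per_frame_gb * fps               # GB/s sustained

print(f"~{traffic_per_frame_gb:.1f} GB per frame, ~{sustained_bw:.0f} GB/s sustained")
```

~2.5 GB of traffic per frame, ~250 GB/s sustained: comfortably within what high-end GPUs (roughly 1 TB/s class) can do, which is why the frame rate holds up.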
so you can in principle simulate a lot of particles. of course, the algorithmic scaling matters. most of rendering is roughly O(n) in the number of pixels. anything involving physics will probably involve some kind of interaction between objects, which immediately implies at least O(n log n), and usually more.
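a minimal CPU-side sketch of why interactions change the scaling (python rather than shader code, and the names are my own): brute force tests every pair, which is O(n^2); binning particles into a uniform grid with cell size equal to the interaction radius means each particle only checks its 3x3 cell neighborhood, which for evenly spread particles gets you close to linear. the same idea is what GPU particle sims use, just with a parallel sort or hash instead of a python dict.

```python
import random
from collections import defaultdict

def neighbors_brute(points, r):
    """O(n^2): test every pair of 2D points against radius r."""
    r2 = r * r
    pairs = set()
    for i in range(len(points)):
        xi, yi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= r2:
                pairs.add((i, j))
    return pairs

def neighbors_grid(points, r):
    """Near O(n) for well-distributed particles: bin into cells of size r,
    then only compare against the 3x3 neighborhood of each cell."""
    r2 = r * r
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // r), int(y // r))].append(idx)
    pairs = set()
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in grid.get((cx + dx, cy + dy), ()):
                        if j <= i:
                            continue  # count each pair once
                        xi, yi = points[i]
                        xj, yj = points[j]
                        if (xi - xj) ** 2 + (yi - yj) ** 2 <= r2:
                            pairs.add((i, j))
    return pairs

# both approaches find the same neighbor pairs
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(500)]
assert neighbors_grid(pts, 0.05) == neighbors_brute(pts, 0.05)
```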