top | item 46293949

aarroyoc | 2 months ago

Impressive post, so many details. I could only understand some parts of it, but I think this article will probably be a reference for future graphics API.

I think it's fair to say that for most gamers, Vulkan/DX12 hasn't really been a net positive: the PSO (pipeline state object) compilation problem affected many popular games, and while Vulkan has been trying to improve, WebGPU is tricky because it has its roots in the first versions of Vulkan.

Perhaps it was a bad idea to go all-in on a low-level API that exposes so many details when the underlying hardware is evolving so fast. Maybe CUDA, as the post suggests in places, with its more generic compute support, is the right way after all.

erwincoumans | 2 months ago

Yes, an amazing and detailed post; enjoyed all of it. In AI, it is common to use JIT compilers (PyTorch, JAX, Warp, Triton, Taichi, ...) that compile to CUDA (or ROCm, CPU, TPU, ...). You could write renderers like that, rasterizers or raytracers.

For example: https://github.com/StafaH/mujoco_warp/blob/render_context/mu...

(A new simple raytracer that compiles to CUDA, used for robotics reinforcement learning; it renders at up to 1 million fps at low resolution, 64x64, with textures and shadows.)
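To make the idea above concrete: a raytracer written against an array framework is just per-pixel math expressed as whole-array operations, which a JIT (jax.jit, warp.kernel, etc.) can then lower to CUDA. Below is a minimal sketch in plain NumPy (my own illustration, not code from the linked repo); the same array code would run essentially unchanged under jax.numpy.

```python
import numpy as np

def render(res=64):
    # Orthographic camera: one ray per pixel, all pointing down -z,
    # originating on the z = 2 plane over a [-1, 1]^2 image window.
    xs = np.linspace(-1.0, 1.0, res)
    px, py = np.meshgrid(xs, xs)
    origins = np.stack([px, py, np.full_like(px, 2.0)], axis=-1)  # (res, res, 3)
    d = np.array([0.0, 0.0, -1.0])  # shared ray direction (unit length)

    # Intersect every ray with the unit sphere at the origin:
    # |o + t*d|^2 = 1  =>  t^2 + 2(o.d)t + (|o|^2 - 1) = 0
    b = 2.0 * (origins @ d)
    c = np.sum(origins * origins, axis=-1) - 1.0
    disc = b * b - 4.0 * c
    hit = disc >= 0.0
    t = np.where(hit, (-b - np.sqrt(np.maximum(disc, 0.0))) / 2.0, np.inf)

    # Lambertian shading: on a unit sphere at the origin, the surface
    # normal equals the hit point itself.
    p = origins + t[..., None] * d
    light = np.array([0.577, 0.577, 0.577])  # fixed directional light
    shade = np.clip(p @ light, 0.0, 1.0)
    return np.where(hit, shade, 0.0)  # (res, res) grayscale image

img = render(64)  # 64x64, matching the low-res use case mentioned above
```

Everything is branch-free and data-parallel (the `hit` mask replaces control flow), which is exactly the shape of code these JIT compilers turn into efficient GPU kernels; batching thousands of such renders for RL training is then one more array axis.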

qiine | 2 months ago

yeah... let's make NVIDIA control even more things...

m-schuetz | 2 months ago

Problem is that NVIDIA literally makes the only sane graphics/compute APIs, and part of that is keeping the API accessible rather than needlessly overengineered. Either the other vendors start to step up their game, or they'll continue to lose.