Twinklebear | 5 years ago
What's really cool is that compute and rendering with WebGPU can reach near-native performance. So a lot of scientific applications (which typically lean heavily on FLOPs and parallel processing) can be implemented in WebGPU compute without sacrificing much performance. I'm not sure how many simulations would be ported to WebGPU, since they usually end up targeting large-scale HPC systems and CUDA, but for visualization applications I think the use case is pretty compelling, especially for portability and ease of distribution.

On the compute side, I implemented a data-parallel Marching Cubes example: https://github.com/Twinklebear/webgpu-experiments , and found the performance is on par with my native Vulkan version. You can try it out here: https://www.willusher.io/webgpu-experiments/marching_cubes.h... . There is pretty high first-run overhead, but try moving the slider around a bit to see the extraction performance after that.

WebGPU for parallel compute, combined with WebAssembly for serial code (or just for easily porting older native libraries), will make the browser a lot more capable for compute-heavy applications. You could also combine these more capable browser clients with a remote compute server: the server handles the heavier processing, while the client does medium-scale work to reduce latency or operates on representative subsets of the data.
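To make the data-parallel flavor of this concrete, here's a minimal sketch of the kind of WGSL compute kernel such a pipeline starts with: one thread per voxel, flagging cells that the isosurface passes through. The buffer names, the `Params` uniform, and the single-sample test are illustrative assumptions; a real Marching Cubes classification examines all eight corners of each voxel and then compacts the active list in further passes.

```wgsl
// Hypothetical bindings: per-voxel scalar values in, per-voxel active flags out.
@group(0) @binding(0) var<storage, read> values : array<f32>;
@group(0) @binding(1) var<storage, read_write> active : array<u32>;

// Illustrative uniform carrying the isovalue chosen by the UI slider.
struct Params { isovalue : f32 }
@group(0) @binding(2) var<uniform> params : Params;

@compute @workgroup_size(64)
fn classify(@builtin(global_invocation_id) id : vec3<u32>) {
  let i = id.x;
  // Guard against the padded final workgroup.
  if (i >= arrayLength(&values)) { return; }
  // Simplified: mark voxels above the isovalue; a full implementation
  // compares the 8 corner samples to build a case index per cell.
  active[i] = select(0u, 1u, values[i] > params.isovalue);
}
```

On the JavaScript side you would dispatch `ceil(numVoxels / 64)` workgroups and feed the resulting flags into a prefix-sum/compaction pass, which is where most of the data-parallel work in an extraction pipeline lives.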
As for AI, people have started looking at compiling ML tools to WebGPU + WebAssembly: https://tvm.apache.org/2020/05/14/compiling-machine-learning... with promising results, also reaching near-native GPU performance.