item 42516433

Show HN: I've made a Monte-Carlo raytracer for glTF scenes in WebGPU

131 points | lisyarus | 1 year ago | github.com

This is a GPU "software" raytracer (i.e. using manual ray-scene intersections and not RTX) written using the WebGPU API that renders glTF scenes. It supports many materials, textures, material & normal mapping, and heavily relies on multiple importance sampling to speed up convergence.
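For readers unfamiliar with multiple importance sampling: it combines samples from several strategies (e.g. BSDF sampling and light sampling) with weights such as Veach's balance or power heuristic, so each strategy covers the cases it is good at. A minimal sketch of the two standard heuristics (illustrative only, not code from this project):

```cpp
#include <cmath>

// Balance heuristic: MIS weight for a sample drawn from strategy A,
// given the pdfs both strategies assign to that sample.
double balance_heuristic(double pdf_a, double pdf_b) {
    return pdf_a / (pdf_a + pdf_b);
}

// Power heuristic with beta = 2; Veach found it reduces variance
// further when one pdf is much more peaked than the other.
double power_heuristic(double pdf_a, double pdf_b) {
    double a = pdf_a * pdf_a, b = pdf_b * pdf_b;
    return a / (a + b);
}
```

Both heuristics sum to 1 across the two strategies, so the combined estimator stays unbiased.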

37 comments


artemonster|1 year ago

> "GPU "software" raytracer"

> WebGPU

> this project is desktop-only

Boss, I am confused, boss.

lisyarus|1 year ago

I'm using WebGPU as a nice modern graphics API that is at the same time much more user-friendly than e.g. Vulkan. I'm using a desktop implementation of WebGPU called wgpu, via its C bindings called wgpu-native.

My browser doesn't support WebGPU properly yet, so I don't really care about running this thing in browser.

modeless|1 year ago

Do you have a link that runs in the browser?

lisyarus|1 year ago

Nope, this project is desktop-only

omolobo|1 year ago

It's a mega-kernel, so you'll get poor occupancy past the first bounce. A better strategy is to shoot, sort, and repeat, which then also allows you to squeeze in an adaptive sampler in the middle.
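The shoot/sort/repeat structure described above (a wavefront-style tracer) can be sketched as a host-side loop; everything here (the `Ray` fields, the sort key, the shading stand-in) is hypothetical toy code, not the project's implementation:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Toy stand-in for a ray in flight; a real tracer would carry
// origin, direction, and throughput. These fields are placeholders.
struct Ray {
    uint32_t pixel;    // which pixel this path belongs to
    uint32_t material; // sort key stand-in (e.g. hit material ID)
    int bounces_left;  // the path terminates when this reaches zero
};

// One bounce of a wavefront-style loop: sort the live rays so similar
// work lands in adjacent threads, "shade" them (here: just decrement),
// then compact away the terminated paths so later passes stay dense.
void wavefront_bounce(std::vector<Ray>& rays) {
    std::sort(rays.begin(), rays.end(),
              [](const Ray& a, const Ray& b) { return a.material < b.material; });
    for (Ray& r : rays) r.bounces_left -= 1; // stand-in for trace + shade
    rays.erase(std::remove_if(rays.begin(), rays.end(),
                              [](const Ray& r) { return r.bounces_left <= 0; }),
               rays.end());
}
```

The point of the compaction step is exactly the occupancy issue above: a mega-kernel keeps dead paths resident in their warps, while a wavefront loop only launches work for rays that are still alive.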

> // No idea where negative values come from :(

I don't know, but:

> newRay.origin += sign(dot(newRay.direction, geometryNormal)) * geometryNormal * 1e-4;

The new origin should be along the reflected ray, not along the direction of the normal. This line basically adds the normal (with a sign) to the origin (intersection point), which seems odd.
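For concreteness, the two offset strategies being debated look like this in a toy C++ translation (the vector type and function names are made up for illustration):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Variant in the quoted WGSL: push the origin off the surface along the
// geometric normal, flipped to whichever side the new ray leaves from.
// The offset is perpendicular to the surface, so even a grazing ray
// gains clearance from the triangle plane.
Vec3 offset_along_normal(Vec3 p, Vec3 dir, Vec3 n, double eps = 1e-4) {
    return p + (dot(dir, n) < 0.0 ? -eps : eps) * n;
}

// Variant the comment suggests: step along the new ray direction.
// For rays nearly parallel to the surface this gains almost no
// perpendicular clearance from the triangle plane.
Vec3 offset_along_ray(Vec3 p, Vec3 dir, double eps = 1e-4) {
    return p + eps * dir;
}
```

Which variant self-intersects less in practice depends on the scene scale and precision; the reply below reports better results with the normal offset.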

The poor man's way to find where the negatives come from is to max(0, ...) stuff until you find it.

TomClabault|1 year ago

> A better strategy is to shoot, sort, and repeat

Do we have a good sorting strategy whose costs are amortized yet? Meister 2020 (https://meistdan.github.io/publications/raysorting/paper.pdf) shows that the hard part is actually hiding the cost of the sorting.
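A common scheme sorts the bounce rays by a compact key, e.g. direction octant plus a Morton code of the ray origin, so rays that start near each other and head the same way end up in adjacent threads. The key layout below is a hypothetical sketch, not taken from the paper:

```cpp
#include <cstdint>

// Spread the low 10 bits of v so each original bit k lands at
// position 3k (standard Morton-code bit interleaving).
uint32_t expand_bits_10(uint32_t v) {
    v &= 0x3FFu;
    v = (v * 0x00010001u) & 0xFF0000FFu;
    v = (v * 0x00000101u) & 0x0F00F00Fu;
    v = (v * 0x00000011u) & 0xC30C30C3u;
    v = (v * 0x00000005u) & 0x49249249u;
    return v;
}

// Hypothetical ray sort key: a 3-bit direction octant in the high bits,
// then a 30-bit Morton code of the origin. Assumes origin coordinates
// have already been normalized into [0, 1) (e.g. by the scene bounds).
uint64_t ray_sort_key(double ox, double oy, double oz,
                      double dx, double dy, double dz) {
    uint32_t octant = (uint32_t)(dx < 0.0) |
                      ((uint32_t)(dy < 0.0) << 1) |
                      ((uint32_t)(dz < 0.0) << 2);
    auto quantize = [](double v) { return (uint32_t)(v * 1023.0); };
    uint64_t morton = ((uint64_t)expand_bits_10(quantize(ox)) << 2) |
                      ((uint64_t)expand_bits_10(quantize(oy)) << 1) |
                       (uint64_t)expand_bits_10(quantize(oz));
    return ((uint64_t)octant << 30) | morton;
}
```

Computing such a key is cheap; as the paper referenced above discusses, the expensive part is the sort itself, whose cost has to be hidden behind other work to come out ahead.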

> squeeze in an adaptive sampler in the middle

Can you expand on that? How does that work? I only know of adaptive sampling in screen space, where you shoot more or fewer rays at certain pixels based on their estimated variance so far.

lisyarus|1 year ago

> It's a mega-kernel, so you'll get poor occupancy past the first bounce

Sure! If you look into the to-do list, there's a "wavefront path tracer" entry :)

> new origin should be along the reflected ray

I've found that doing it the way I'm doing it works better for preventing self-intersections. Might be worth investigating, though.

crazygringo|1 year ago

This is a complete side question, but I ask because it always astonishes me how "real" raytraced scenes can look in terms of lighting, yet it's too complex/slow for video games.

How far have we gotten in terms of training AI models on raytraced lighting, to simulate it but fast enough for video games? Training an AI not on rendered scenes from any particular viewpoint, but rather on how light and shadows would be "baked into" textures?

Because what raytracing excels at is the overall realism of diffuse light. And it seems like the kind of thing AI would be good at learning?

I've always thought, e.g. when looking at the shadows trees cast, that I couldn't care less whether each leaf shape in the shadow is accurate or entirely hallucinated. The important thing seems to be the overall light diffusion, combined with correct nearby shadow shapes for objects. Which it seems AI would excel at?

jms55|1 year ago

This was a recent presentation from SIGGRAPH 2024 that covered using neural nets to store baked (not dynamic!) lighting https://advances.realtimerendering.com/s2024/#neural_light_g....

Even with the fact that it's static lighting, you can already see a ton of the challenges that they faced. In the end they did get a fairly usable solution that improved on their existing baking tools, but it took what seems like months of experimenting without clear linear progress. They could have just as easily stalled out and been stuck with models that didn't work.

And that was just for static lighting, not even realtime dynamic lighting. ML is going to need a lot of advancements before it can predict lighting wholesale, faster and easier than tracing rays.

On the other hand, ML is really really good at replacing all the mediocre handwritten heuristics 3d rendering has. For lighting, denoising low-signal (0.5-1 rays per pixel) lighting is a big area of research[0], since handwritten heuristics tend to struggle with so little data, along with lighting caches[1], which have to adapt to a wide variety of situations that again make handwritten heuristics struggle.

[0]: https://gpuopen.com/learn/neural_supersampling_and_denoising..., and the references it lists

[1]: https://research.nvidia.com/publication/2021-06_real-time-ne...

Etheryte|1 year ago

At any reasonable quality, AI is even more expensive than raytracing. A simple intuition for this is the fact that you can easily run a raytracer on consumer hardware, even if at low FPS, meanwhile you need a beefy setup to run most AI models and they still take a while.

jampekka|1 year ago

The current approach seems to be ray tracing a limited/feasible number of samples and upsampling/denoising the result using neural networks.

holoduke|1 year ago

It's still hard to do in realtime. You need so much GPU memory that, at least today, a second GPU must be used. The question is what gets achieved quicker: hard calculated simulation or AI post-processing. Or maybe a combination?

lispisok|1 year ago

This is an interesting idea, but please, no more AI graphics generation in video games. Games don't get optimized anymore because devs rely on AI upscaling and frame generation to get playable framerates, and it makes the games look bad and play bad.

pjmlp|1 year ago

WebGPU projects that don't provide browser examples are kind of strange; in that case, better to use Vulkan or whatever.

sspiff|1 year ago

WebGPU is a way nicer HAL if you're not an experienced graphics engineer. So even if you only target desktops, it's a valid choice.

On the web, WebGPU is only supported by Chrome-based browser engines at this point, and a lot of software developers use Firefox (and don't really like encouraging a browser monoculture), so it doesn't make a ton of sense to target browser-based WebGPU for some people at this point.

lisyarus|1 year ago

See my answer to artemonster above.