This is a GPU "software" raytracer (i.e. using manual ray-scene intersections and not RTX) written using the WebGPU API that renders glTF scenes. It supports many materials, textures, material & normal mapping, and heavily relies on multiple importance sampling to speed up convergence.
I'm using WebGPU as a nice modern graphics API that is at the same time much more user-friendly than e.g. Vulkan. I'm using a desktop implementation of WebGPU called wgpu, via its C bindings, wgpu-native.
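For the multiple importance sampling mentioned above, the usual way to combine two sampling techniques (e.g. light sampling and BSDF sampling) is the balance heuristic. A minimal CPU-side sketch, with illustrative names rather than the project's actual shader code:

```python
def balance_heuristic(pdf_this: float, pdf_other: float) -> float:
    """MIS weight for a sample drawn from the technique with pdf_this,
    when one sample is also drawn from the technique with pdf_other."""
    return pdf_this / (pdf_this + pdf_other)

def mis_combine(f_light, pdf_light, pdf_bsdf_at_light,
                f_bsdf, pdf_bsdf, pdf_light_at_bsdf):
    """Single-sample MIS estimate combining one light sample and one
    BSDF sample; each contribution is weighted by the balance heuristic."""
    w_light = balance_heuristic(pdf_light, pdf_bsdf_at_light)
    w_bsdf = balance_heuristic(pdf_bsdf, pdf_light_at_bsdf)
    return w_light * f_light / pdf_light + w_bsdf * f_bsdf / pdf_bsdf
```

The weights for the two techniques always sum to one, which is what keeps the combined estimator unbiased while strongly damping the high-variance technique.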
It's a mega-kernel, so you'll get poor occupancy past the first bounce. A better strategy is to shoot, sort, and repeat, which then also allows you to squeeze in an adaptive sampler in the middle.
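The "shoot, sort, and repeat" structure (a wavefront path tracer) keeps one bounce's worth of rays in flight at a time and regroups the surviving work between kernels. A toy CPU sketch of the per-bounce regrouping step, with illustrative names:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    ray_index: int
    material_id: int  # -1 means the ray missed the scene

def next_bounce(hits: list) -> list:
    # 1. Compact: drop terminated paths (misses), so dead threads
    #    don't occupy GPU lanes on the next bounce.
    alive = [h for h in hits if h.material_id >= 0]
    # 2. Sort: group hits by material so each shading batch runs one
    #    material's code, keeping threads in a warp coherent.
    alive.sort(key=lambda h: h.material_id)
    return alive
```

On a GPU these steps would be stream compaction and a key sort on the hit buffer; the sorted, compacted buffer is what the next shading/ray-generation kernel consumes.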
This is a complete side question, but it always astonishes me how "real" raytraced scenes can look in terms of lighting, while remaining too complex/slow for video games.
At any reasonable quality, AI is even more expensive than raytracing. A simple intuition for this is the fact that you can easily run a raytracer on consumer hardware, even if at low FPS, meanwhile you need a beefy setup to run most AI models and they still take a while.
It's still hard to do in real time. You need so much GPU memory that, at least today, a second GPU must be used. The question is what gets there quicker: hard-calculated simulation, AI post-processing, or maybe a combination?
This is an interesting idea, but please, no more AI graphics generation in video games. Games don't get optimized anymore because devs rely on AI upscaling and frame generation to reach playable framerates, and it makes the games look bad and play bad.
WebGPU is a way nicer HAL if you're not an experienced graphics engineer. So even if you only target desktops, it's a valid choice.
bezdomniy|1 year ago
artemonster|1 year ago
> WebGPU
> this project is desktop-only
Boss, I am confused, boss.
lisyarus|1 year ago
My browser doesn't support WebGPU properly yet, so I don't really care about running this thing in browser.
modeless|1 year ago
lisyarus|1 year ago
omolobo|1 year ago
> // No idea where negative values come from :(
I don't know, but:
> newRay.origin += sign(dot(newRay.direction, geometryNormal)) * geometryNormal * 1e-4;
The new origin should be along the reflected ray, not along the direction of the normal. This line basically adds the normal (with a sign) to the origin (intersection point), which seems odd.
A poor man's way to find where the negatives come from is to max(0, ...) stuff until you find it.
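For reference, the quoted line offsets the new origin along the geometric normal, flipped to whichever side of the surface the outgoing ray continues on; the epsilon pushes the origin off the surface so the next intersection test doesn't re-hit the same triangle. A scalar sketch of that idea (illustrative, not the actual shader code):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def offset_origin(origin, out_dir, geometry_normal, eps=1e-4):
    # Pick the side of the surface the outgoing ray leaves from:
    # +normal for reflection, -normal for transmission.
    s = 1.0 if dot(out_dir, geometry_normal) >= 0.0 else -1.0
    return [o + s * n * eps for o, n in zip(origin, geometry_normal)]
```

Offsetting along the normal (rather than along the outgoing ray) is a common self-intersection fix, since for grazing rays a step along the ray direction barely moves the point away from the surface.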
TomClabault|1 year ago
Do we have a good sorting strategy whose cost is amortized yet? Meister 2020 (https://meistdan.github.io/publications/raysorting/paper.pdf) shows that the hard part is actually hiding the cost of the sorting.
> squeeze in an adaptive sampler in the middle.
Can you expand on that? How does that work? I only know of adaptive sampling in screen space, where you shoot more or fewer rays at certain pixels based on their estimated variance so far.
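The screen-space version described here can be sketched as proportional allocation of an extra per-frame sample budget by each pixel's running variance estimate. Illustrative code (largest-remainder rounding so the budget is spent exactly):

```python
def allocate_samples(variances, extra_budget, base=1):
    """Give every pixel `base` samples, then spread `extra_budget`
    extra samples proportionally to each pixel's estimated variance."""
    total = sum(variances)
    if total == 0.0:
        return [base] * len(variances)  # nothing to adapt to yet
    ideal = [extra_budget * v / total for v in variances]
    counts = [int(x) for x in ideal]
    # Hand leftover samples to the largest fractional parts so the
    # total spent equals base * n + extra_budget exactly.
    leftover = extra_budget - sum(counts)
    order = sorted(range(len(variances)),
                   key=lambda i: ideal[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return [base + c for c in counts]
```

In a wavefront tracer this slots in between bounce batches or frames: accumulate per-pixel mean/variance, then use the allocation to decide how many primary rays each pixel gets next round.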
lisyarus|1 year ago
Sure! If you look into the to-do list, there's a "wavefront path tracer" entry :)
> new origin should be along the reflected ray
I've found that doing it the way I'm doing it works better for preventing self-intersections. Might be worth investigating, though.
unknown|1 year ago
[deleted]
crazygringo|1 year ago
How far have we gotten in terms of training AI models on raytraced lighting, to simulate it but fast enough for video games? Training an AI not on rendered scenes from any particular viewpoint, but rather on how light and shadows would be "baked into" textures?
Because what raytracing excels at is the overall realism of diffuse light. And it seems like the kind of thing AI would be good at learning?
I've always thought, e.g. when looking at the shadows trees cast, that I couldn't care less whether each leaf shape in the shadow is accurate or entirely hallucinated. The important thing seems to be a combination of the overall light diffusion and correct nearby shadow shapes for objects. Which it seems AI would excel at?
jms55|1 year ago
Even though it's just static lighting, you can already see a ton of the challenges they faced. In the end they did get a fairly usable solution that improved on their existing baking tools, but it took what looks like months of experimenting without clear linear progress. They could just as easily have stalled out and been stuck with models that didn't work.
And that was just for static lighting, not even realtime dynamic lighting. ML is going to need a lot of advancements before it can predict lighting wholesale, faster and easier than tracing rays.
On the other hand, ML is really, really good at replacing all the mediocre handwritten heuristics 3D rendering has. For lighting, denoising low-signal (0.5-1 rays per pixel) output is a big area of research[0], since handwritten heuristics tend to struggle with so little data, along with lighting caches[1], which have to adapt to a wide variety of situations that again make handwritten heuristics struggle.
[0]: https://gpuopen.com/learn/neural_supersampling_and_denoising..., and the references it lists
[1]: https://research.nvidia.com/publication/2021-06_real-time-ne...
nuclearsugar|1 year ago
Etheryte|1 year ago
jampekka|1 year ago
holoduke|1 year ago
lispisok|1 year ago
omolobo|1 year ago
Specifically this one, which seems to tackle what you mentioned: https://research.nvidia.com/labs/rtr/publication/hadadan2023...
unknown|1 year ago
[deleted]
bella964|1 year ago
[deleted]
unknown|1 year ago
[deleted]
pjmlp|1 year ago
sspiff|1 year ago
On the web, WebGPU is only supported by Chrome-based browser engines at this point, and a lot of software developers use Firefox (and don't really like encouraging a browser monoculture), so it doesn't make a ton of sense to target browser-based WebGPU for some people right now.
lisyarus|1 year ago