I'm pretty sure the acronym was for "Physically Based Ray Tracing". The original preface to The Book explicitly says that
> pbrt is based on the ray-tracing algorithm
I suspect it's moved to "Rendering" in common usage to avoid getting hung up on distinctions like ray tracing versus path tracing versus hybrid techniques.
PBR shaders are not trivial. It's incredible to me that they are readily available in all the shader languages, for free. Your common gamer thinks that the game engine is responsible for the quality of graphics in a game. This is hugely untrue. The quality is largely down to how well the art assets have been authored to take advantage of PBR. And that tech is, for all practical purposes, free (in both senses of the word).
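For a sense of why these shaders are non-trivial, here is a rough Python sketch of the Cook-Torrance specular term that most real-time PBR implementations build on (GGX normal distribution, Smith geometry, Schlick Fresnel). Real shaders are written in GLSL/HLSL and run per-pixel; the function and parameter names here are illustrative only, not taken from any particular engine.

```python
import math

def ggx_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """Cook-Torrance specular BRDF: D * G * F / (4 * NdotL * NdotV).
    All dot products are assumed clamped to (0, 1]."""
    a = roughness * roughness
    a2 = a * a
    # D: GGX/Trowbridge-Reitz normal distribution
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    d = a2 / (math.pi * denom * denom)
    # G: Smith geometry term with the Schlick-GGX approximation
    k = (roughness + 1.0) ** 2 / 8.0
    g = (n_dot_l / (n_dot_l * (1.0 - k) + k)) * \
        (n_dot_v / (n_dot_v * (1.0 - k) + k))
    # F: Schlick's Fresnel approximation (f0 = reflectance at normal incidence)
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    return (d * g * f) / (4.0 * n_dot_l * n_dot_v)
```

Even this scalar toy version hides plenty of subtlety (energy conservation, clamping, multiple-scattering compensation), which is exactly why having battle-tested versions freely available is such a big deal.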
My only gripe with modern day graphics documentation is the huge gap in any guidance on building a well-performing rendering pipeline. There are hundreds of tutorial series that get you as far as rendering a single, lit, textured, normal mapped model. But managing multiple models, multiple lights, different types of materials, takes a very different design for resource management and is basically ignored.
The current situation is certainly better than it was 20, even 10, years ago. But that last, missing piece is pretty vital.
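To make the gap concrete: one of the first design problems those tutorials skip is ordering draw calls so that expensive state changes (pipeline binds, material binds) are batched rather than thrashed. A minimal, hypothetical sketch of that idea; the class and field names are mine, not from any real engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrawCall:
    pipeline: str  # shader pipeline / PSO to bind
    material: str  # texture set and uniforms to bind
    mesh: str      # geometry to draw

def build_render_queue(draws):
    """Sort draws so that calls sharing a pipeline and material end
    up adjacent: each group can then be submitted with one set of
    binds instead of rebinding state per object."""
    return sorted(draws, key=lambda d: (d.pipeline, d.material))
```

Real engines layer far more on top (sort keys packed into integers, depth sorting for transparency, per-frame allocators), but even this much is absent from most tutorial series.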
I spent a lot of time reading this literature as an outsider to graphics development this summer. I agree that resource management is the real engineering problem; the graphics code itself is the idealized part and relatively small in comparison.
My conclusion is that resource management across different hardware is the secret sauce that lets individual engines push the limits of the current generation. Listening to interviews with developers, they rarely talk about novel lighting formulas. Instead they talk about squeezing in higher-resolution textures or more colors, how they managed so many assets, or how they faked a reflection. I imagine the work is incredibly tedious and makes browser differences look trivial.
Differences between hardware largely come down to how memory can be mapped between CPU, shared, and GPU memory.
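As a toy illustration of that difference: on unified-memory hardware (integrated GPUs, consoles, Apple silicon) the CPU can often write straight into GPU-visible memory, while discrete cards typically need a host-visible staging buffer plus an explicit copy into device-local memory. The flag names below are my own shorthand, loosely inspired by Vulkan memory property flags, not a real API:

```python
def choose_upload_strategy(unified_memory: bool,
                           host_visible_device_local: bool) -> str:
    """Decide how vertex/texture data should reach the GPU.

    unified_memory: CPU and GPU share one physical pool (UMA).
    host_visible_device_local: discrete GPU exposes a heap that is
    both CPU-mappable and fast for the GPU (e.g. a small BAR window).
    """
    if unified_memory or host_visible_device_local:
        return "direct_write"        # map once, write in place
    return "staging_buffer_copy"     # memcpy to staging, then GPU copy
```

An engine ends up with a branch like this (and many subtler ones) per platform, which is a big part of the per-hardware tuning the interviews hint at.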
This is only tangentially related, but if you are into Rust and GPU programming you should check out femtovg (https://github.com/femtovg/femtovg), a Rust port of nanovg.
Right now it supports OpenGL and WebGL; a Metal backend is almost done, and WebGPU is planned.
Related: I've read numerous reviews on Amazon about the poor printing quality of the book. Is that true? Is there a better version of it, or should I hope the updated version that is in-progress will rectify those issues?
I'd love to buy the book, but would hate to shell out money for a crappy printing.
Amazon has a terrible counterfeiting problem, including for books. It's possible that a third-party seller made a counterfeit, low-quality print of the book and set a lower price that meant they got chosen for the “buy button” by the Amazon algorithm (this has happened to many books). Since the reviews don't distinguish between sellers, you get this problem.
If you buy from a legitimate book store, perhaps this can be avoided?
You can run it in the browser: https://tronical.github.io/femtovg/examples/index.html
Join the Discord: https://discord.gg/V69VdVu
This is vital information, of course.
"Support for rendering on GPUs is available on systems that have CUDA and OptiX."
[0]: https://github.com/mmp/pbrt-v4