top | item 35200099

zwkrt | 3 years ago

TIL about Normal Mapping, which to my layman eyes kind of looks like computing a hologram on the flat surface of an object. In the coin example in TFA, even though I now 'know' that the coin is a cylinder, the normal map gives it very convincing coin shape. Cool!

a_e_k | 3 years ago

If you think that's cool, the next level, which really could be considered to behave like a hologram, is "parallax mapping" and its variant, "parallax occlusion mapping".

Wikipedia has a little video showing the effect of parallax mapping in action: https://en.wikipedia.org/wiki/Parallax_mapping

Another good example: https://doc.babylonjs.com/features/featuresDeepDive/material...

And there's a decent explanation here: https://learnopengl.com/Advanced-Lighting/Parallax-Mapping

Some game examples: http://wiki.polycount.com/wiki/Parallax_Map

Some nice game examples, specifically with looking into windows: http://simonschreibt.de/gat/windows-ac-row-ininite/

Basically, in terms of levels of realism via maps, the progression goes

1. Bump mapping: the shader reads a heightfield and estimates the gradients to compute an adjustment to the normals. Provides some bumpiness, but tends to look a little flat.

2. Normal mapping: basically a variant of bump mapping -- the shader reads the adjustment to the normals directly from a two- or three-channel texture.

3. Parallax mapping: the shader offsets the lookups in the texture map by a combination of the heightmap height and the view direction. Small bumps will appear to shift correctly as the camera moves around, but the polygon edges and silhouettes usually give the illusion away.

4. Parallax occlusion mapping: like parallax mapping, but done in a loop where the shader steps across the heightfield looking for where a ray going under the surface would intersect that heightfield. Handles much deeper bumps, but polygon edges and silhouettes still tend to be a giveaway.

5. Displacement mapping: the heightfield map (or vector displacement map) gets turned into actual geometry that gets rendered somewhere further on in the pipeline. Pretty much perfect, but very expensive. Ubiquitous in film (feature animation and VFX) rendering.
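
To make steps 1 and 3 above concrete, here's a minimal Python sketch. The heightfield, the scale factors, and the function names are all illustrative assumptions, not any engine's API; a real shader would do this per-texel on the GPU with filtered texture lookups.

```python
import math

# Hypothetical 5x5 heightfield with values in [0, 1]: a ramp rising along x.
H = [[x / 4.0 for x in range(5)] for _ in range(5)]

def height(ix, iy):
    """Heightfield lookup with clamped indices."""
    ix = max(0, min(4, ix))
    iy = max(0, min(4, iy))
    return H[iy][ix]

def bump_normal(ix, iy, scale=1.0):
    """Step 1 (bump mapping): estimate the surface normal from the
    heightfield gradient via central differences, then normalize."""
    dhdx = (height(ix + 1, iy) - height(ix - 1, iy)) / 2.0
    dhdy = (height(ix, iy + 1) - height(ix, iy - 1)) / 2.0
    n = (-scale * dhdx, -scale * dhdy, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def parallax_offset(u, v, view_dir, height_scale=0.05):
    """Step 3 (parallax mapping): shift the texture coordinate along the
    (tangent-space) view direction, proportional to the sampled height.
    Dividing by view_dir's z component projects the offset onto the surface."""
    h = height(int(u * 4), int(v * 4))
    return (u - view_dir[0] / view_dir[2] * h * height_scale,
            v - view_dir[1] / view_dir[2] * h * height_scale)
```

A normal map (step 2) simply bakes the `bump_normal` result into a texture ahead of time, so the shader skips the gradient estimate and reads the adjusted normal directly.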

dragontamer | 3 years ago

So "classic" rendering, as per DirectX 9 (and earlier), is Vertex Shader -> Hardware stuff -> Pixel Shader -> more Hardware stuff -> Screen. (Of course we're on DirectX 12 Ultimate these days, but the pipeline from 15 years ago is easier to understand, so let's stick with DX9 for this post.)

The "hardware stuff" is automatic and hard-coded. Modern pipelines added more steps / complications (geometry shaders, tessellators, etc.), but the Vertex Shader / Pixel Shader steps have remained key to modern graphics since the early 00s.
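
That ordering can be sketched as a toy Python pipeline (purely illustrative: the "rasterizer" here is faked as one pixel per transformed vertex, just to show where the programmable stages sit relative to the fixed-function hardware):

```python
def render(vertices, vertex_shader, pixel_shader):
    """Toy sketch of the DX9-era pipeline ordering:
    programmable vertex shader -> fixed-function hardware -> programmable pixel shader."""
    transformed = [vertex_shader(v) for v in vertices]  # programmable, per-vertex
    fragments = transformed  # stand-in for the hard-coded rasterization step
    return [pixel_shader(f) for f in fragments]  # programmable, per-pixel
```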

-------------

"Vertex Shader" is a program run on every vertex at the start of the pipeline. This is commonly used to implement wind-effects (ex: moving your vertex left-and-right randomly to simulate wind), among other kinds of effects. You literally move the vertex from its original position to a new one, in a fully customizable way.
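
A wind effect like that might look like the following sketch (function name and parameters are made up for illustration, not a real API). The displacement is weighted by the vertex's height so the base of the object stays anchored, like grass or a tree swaying:

```python
import math

def wind_vertex_shader(position, time, strength=0.1, frequency=2.0):
    """Illustrative vertex shader: displace each vertex horizontally by a
    sine of time, weighted by the vertex's height (y) so the base stays put."""
    x, y, z = position
    sway = math.sin(time * frequency + x) * strength * y
    return (x + sway, y, z)
```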

"Pixel Shader" is a program run on every pixel after the vertices have been figured out (and redundant ones removed). It's one of the last steps as the GPU is calculating the final color of that particular pixel. Normal mapping is just one example of the many kinds of techniques implemented in the Pixel Shading step.
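
As a sketch of what normal mapping looks like inside a pixel shader (again illustrative Python, not shader code): decode a tangent-space normal from a texel, where channels are conventionally stored remapped from [-1, 1] into [0, 1], and shade with simple Lambertian (N · L) lighting:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def pixel_shader(normal_map_texel, light_dir, base_color):
    """Illustrative per-pixel normal mapping: decode the stored normal
    (remapped from [0,1] back to [-1,1]) and apply diffuse N . L lighting."""
    n = normalize(tuple(2.0 * c - 1.0 for c in normal_map_texel))
    l = normalize(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * diffuse for c in base_color)
```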

-------------

So "Pixel Shaders" are the kinds of programs that "compute a hologram on a flat surface". And it's the job of a video game programmer to write pixel shaders to create the many effects you see in video games.

Similarly, Vertex Shaders are the many kinds of programs (wind and other effects) that move vertices around at the start of the pipeline.