
The lost art of 3D rendering without shaders

319 points | mmphosis | 9 years ago | machinethink.net

78 comments

[+] antirez|9 years ago|reply
The year I learned to write C code I was 19, in my second year of university and already willing to drop out, so I started spending time with C and 3D graphics. I was fresh off the math exam, so the 3D matrix transformations for rotations were trivial to implement. I just wrote a function to draw triangles, used a simple z-sorting technique, and did basic shading by calculating the cosine of the angle between the observer and the surface. With just these basic things I ended up with 3D "worlds" similar to the ones I saw in DOS games when I was a child. All the effort was maybe 500 or 1000 lines of code, but building things from scratch, starting only from the ability to draw an RGB pixel, gave me a sense of accomplishment that later shaped everything else I did. I basically continued for the next 20 years to create things from scratch.
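The cosine-based shading described above is easy to sketch. This is a minimal illustration (not antirez's actual code): brightness proportional to the cosine of the angle between the light/view direction and the surface normal, computed as a dot product of unit vectors.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def flat_shade(normal, light_dir):
    """Lambert-style intensity in [0, 1]: cosine of the angle between vectors."""
    n = normalize(normal)
    l = normalize(light_dir)
    cos_angle = sum(a * b for a, b in zip(n, l))
    return max(0.0, cos_angle)  # surfaces facing away get no light

# Triangle facing straight at the light: full brightness.
print(flat_shade((0, 0, 1), (0, 0, 1)))               # 1.0
# Surface at 60 degrees to the light: half brightness.
print(flat_shade((0, 0, 1), (0, math.sqrt(3), 1)))    # ~0.5
```

Multiply the result by the triangle's base color and you have the flat shading that made those DOS-era "worlds" look solid.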
[+] stevoski|9 years ago|reply
My first year university Linear Algebra textbook even had an appendix explaining how to rotate and skew 3D objects in computer graphics using the matrix multiplication I had learned that semester. I loved it.

Then I finished university and got a programming job creating forms to gather user data, put it into a database, and generate reports. Sigh.

[+] _lce0|9 years ago|reply
Thanks for bringing back my own memories!!

I had forgotten about this, but in my first year, while the algebra professor was drawing 3D vectors to explain the lecture, I was thinking "that is a 2D surface, so there should be a linear transformation between the two worlds".

Later at home, I found such a space transformation and built a small 3D world you could walk through, using only vectors and plain triangles :)

[+] Koshkin|9 years ago|reply
It is also worth noting that today's CPUs are faster than 3D graphics chips from 20 years ago, so it seems perfectly reasonable to expect good performance from a basic hand-written 3D rendering library.
[+] pcwalton|9 years ago|reply
This is a good tutorial, but it's important to note that scanline rasterizers are not how GPUs (or even high-performance SIMD software implementations) work. Instead, they use barycentric coordinate sign tests for better parallelism and "free" interpolation.

A good explanation on this is Fabian Giesen's: https://fgiesen.wordpress.com/2013/02/06/the-barycentric-con...
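The approach pcwalton describes can be sketched in a few lines. This is a toy illustration of the sign-test idea (names are mine, not from any real GPU): evaluate an edge function for each of the three edges at every candidate pixel, and a pixel is inside exactly when all three signs agree, which parallelizes trivially.

```python
def edge(a, b, p):
    """Twice the signed area of triangle (a, b, p); the sign tells which side p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    """Cover every pixel whose center lies inside the CCW triangle v0-v1-v2."""
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at pixel centers
            w0 = edge(v1, v2, p)
            w1 = edge(v2, v0, p)
            w2 = edge(v0, v1, p)
            # For a counter-clockwise triangle, all three are >= 0 inside.
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.append((x, y))
    return covered

pixels = rasterize((0.0, 0.0), (8.0, 0.0), (0.0, 8.0), 8, 8)
```

The "free" interpolation comes from the same three values: dividing w0, w1, w2 by their sum gives the barycentric weights for interpolating color, depth, or texture coordinates at that pixel.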

[+] exDM69|9 years ago|reply
You're absolutely right about scanline rasterizers and GPUs, but scanline rasterization is very interesting historically. Most of the 1990s pre-GPU software renderers did scanline rasterization; for example, Quake and Thief had pure software renderers.

Before GPUs, perspective correct texture mapping was the holy grail of 3d graphics because CPUs of the time were not fast enough to do all the divisions for each pixel, and lots of clever tricks were invented to work around this limitation. It's a bit of a shame that this article does not cover it, perhaps there's a part 2 in progress.

Here's a quick write up about Thief's engine https://nothings.org/gamedev/thief_rendering.html

I've seen a similar piece on Quake but I can't find it now.

[+] Hydraulix989|9 years ago|reply
The way I saw it was that the tutorial was more of a "roll your own software renderer" tutorial; it diverges quite heavily from what actual hardware accelerators do.
[+] tluyben2|9 years ago|reply
It still takes a lot of mental pain for me to make that step. I think in old-school 3D terms and then have to somehow derive the shaders from that. It works, but it is a painful process. Then again, for now it is nothing more than a hobby anyway.
[+] ChuckMcM|9 years ago|reply
Takes me back. A long time ago I wrote a simple rendering library for the 3DFx "Glide" library. It didn't do shaders but it would do mipmapped texture rendering which allowed you to have an image (texture) on your triangle. For a while I was stuck on the projection matrix and understanding screen clipping until my Dad gave me his copy of the Kodak Reference Handbook[1] third edition, copyright 1945. And they describe focal length, field of view, fstops, and lens effects very clearly.

[1] https://books.google.com/books?id=6DgYAQAAMAAJ&dq=Kodak%20Re...

[+] alkonaut|9 years ago|reply
Nitpick: this is software rendering. This is how we did it before any kind of 3D API existed. GL/D3D/etc. were all without shaders to begin with. I still maintain a fixed-pipeline (no explicit vertex or fragment shaders) 3D app with DirectX.

One can argue that the fixed pipeline of D3D is using a kind of implicit shader, but it's not the kind of shader we usually mean when we talk about vertex and fragment shaders today.

[+] dahart|9 years ago|reply
> Back in the day — way before we had hardware accelerated 3D graphics cards, let alone programmable GPUs — if you wanted to draw a 3D scene you had to do all that work yourself. In assembly. On a computer with a 7 MHz processor.

7 MHz? That's so fast and modern. Back in the day we were writing 3d fill routines on the 6502 going 1 MHz. With no floating point and no diagonal line support. And in bare machine language, going uphill both directions in the snow! ;)

[+] vidarh|9 years ago|reply
One of the first pieces of 6502 assembly I read, and spent ages deciphering, was an implementation of Bresenham's line algorithm published in some magazine. Who needs floating point...
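The algorithm vidarh spent ages deciphering fits in a handful of lines. Here is an integer-only sketch of Bresenham's line algorithm (the all-octant error-term variant): nothing but additions, subtractions, and comparisons, which is exactly why it suited a 6502 with no floating point.

```python
def bresenham(x0, y0, x1, y1):
    """Return the integer grid points of the line from (x0, y0) to (x1, y1)."""
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # running error term: how far we've drifted off the true line
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:   # stepping in x keeps us closer to the line
            err += dy
            x0 += sx
        if e2 <= dx:   # stepping in y keeps us closer to the line
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 5, 2))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```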
[+] rl3|9 years ago|reply
>The framework then takes these shaders and your 3D data, performs some magic, ...

If we juxtapose that statement with the following in an unrelated introduction[0]:

'WebGL is often thought of as a 3D API. People think "I'll use WebGL and magic I'll get cool 3d". In reality WebGL is just a rasterization engine.'

I suppose when you're writing a software renderer from scratch without the luxury of any API or hardware acceleration, such things are indeed magic.

[0] http://webglfundamentals.org/webgl/lessons/webgl-fundamental...

[+] Retric|9 years ago|reply
When people talk about 'magic' in software they mostly just mean that the implementation details don't impact them. You can have two very different GPUs both implement the same WebGL calls correctly.
[+] vvanders|9 years ago|reply
Kind of a shame they omitted matrices. They're one of the foundational bits of any 3D API and one of the few things that translate well from fixed-function/software raster to modern pipelines.

Still, it's great to know the fundamentals; texture formats, tiling, and other things are also really useful pieces to understand when working with 3D pipelines.

[+] Jasper_|9 years ago|reply
Matrices are just a convenient notational trick for a set of linear algebra expressions. They seem confusing until you realize that an identity matrix represents:

    x' = 1*x + 0*y + 0*z
    y' = 0*x + 1*y + 0*z
    z' = 0*x + 0*y + 1*z
I prefer to teach 3D graphics without matrices because it's really not anything more complicated than a compact notation. Tricks like "invert and transpose to receive the normal matrix" or "take the first column to get your up vector" make no sense unless you work out the algebra of what those things mean.

And don't get me started on homogeneous coordinates, which are a way to put translation in a matrix by shoving a convenient "1" constant into the input vector, or the perspective matrix, which does near/far clipping, the perspective transform, and a depth remap in the same matrix, and isn't easily separable because it steals the "w=1" constant for the depth remap and also adjusts "w" afterwards. It's the equivalent of reusing a local variable because you're short on registers :)
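Both complaints can be made concrete in a few lines. This is a toy sketch (mine, not Jasper_'s): first the homogeneous "w = 1" trick, written out as the plain linear expressions it really is, so a translation hides inside an otherwise linear pattern; then a bare-bones pinhole projection where the divide by a z-derived w produces the foreshortening.

```python
def translate(point, tx, ty, tz):
    """Translation written as the expanded 'matrix' notation with w smuggled in."""
    x, y, z = point
    w = 1.0  # the constant that lets translation fit the linear pattern
    xp = 1 * x + 0 * y + 0 * z + tx * w
    yp = 0 * x + 1 * y + 0 * z + ty * w
    zp = 0 * x + 0 * y + 1 * z + tz * w
    return (xp, yp, zp)

def project(point, near):
    """Minimal pinhole perspective: w is overwritten with z/near, then divided out."""
    x, y, z = point
    w = z / near  # the 'register reuse': w now carries depth, not the constant 1
    return (x / w, y / w)

print(translate((1.0, 2.0, 3.0), 10.0, 0.0, -5.0))  # (11.0, 2.0, -2.0)
print(project((2.0, 4.0, 4.0), 1.0))                # (0.5, 1.0)
```

A real perspective matrix also remaps depth into the near/far range in the same step, which is exactly the entanglement being complained about; this sketch keeps only the divide.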

[+] jsharf|9 years ago|reply
I think once you know this math, the jump to using matrices is pretty trivial. I can understand not wanting to include it... not that I feel strongly about either direction. I just don't think it's necessary for the scope of this super useful article.
[+] air|9 years ago|reply
Minor nitpick

"The green and blue colors, z-position, and normal vector are all interpolated in the same manner. (Texture coordinates behave slightly differently because there you’d also need to take the perspective into account.)"

Colors (c), z, and texture coordinates (t) should all be interpolated the same way because of perspective: you interpolate 1/z, c/z, and t/z linearly in screen space, and then for every pixel do a division, e.g. (c/z) / (1/z) = c.
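The correction above is easy to demonstrate. Here is a sketch (with made-up vertex values) comparing naive screen-space interpolation against the perspective-correct interpolate-over-z scheme for a single attribute along a span:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def perspective_correct(c0, z0, c1, z1, t):
    """Interpolate attribute c between two vertices at screen-space fraction t."""
    inv_z = lerp(1.0 / z0, 1.0 / z1, t)   # 1/z interpolates linearly in screen space
    c_over_z = lerp(c0 / z0, c1 / z1, t)  # and so does c/z
    return c_over_z / inv_z               # per-pixel divide: (c/z) / (1/z) = c

# Vertex 0: c = 0 at z = 1 (near); vertex 1: c = 1 at z = 4 (far).
affine = lerp(0.0, 1.0, 0.5)                            # naive answer: 0.5
correct = perspective_correct(0.0, 1.0, 1.0, 4.0, 0.5)  # 0.2
print(affine, correct)
```

Halfway across the span in screen space is not halfway through the attribute: the correct value (0.2) is pulled toward the near vertex, which is why affine texture mapping swims and why those per-pixel divides were the expensive part on 1990s CPUs.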

[+] fizixer|9 years ago|reply
It may be a lost art for game developers, but far from it for CG grad students and researchers. Quite the contrary: it's part of the rite of passage, heck, an undergrad-level prerequisite, to know these things like the back of your hand, plus a whole lot more, to do graduate-level CG work.

Even if you're not a researcher, but wish to write your own path tracing code for example, you would end up learning this.

So no, not a lost art at all in my opinion.

[+] ykl|9 years ago|reply
This is a great article!

I strongly believe that an understanding of how old school 3D rendering worked is an excellent thing for modern graphics programmers to have, to appreciate and understand where all of our fancy modern graphics APIs and whatnot come from. Back when I helped teach a GPU programming course, one of the assignments I gave was a full-blown software rasterizer implemented entirely in CUDA. Not so much "program in OpenGL" as "program an OpenGL". :)

[+] eriknstr|9 years ago|reply
In a video I saw recently [1], the speaker suggested reading old books about earlier versions of DirectX from the late '90s and early 2000s, around DirectX 9, even if one never intends to use DirectX, because, he said, most graphics engines are built on the concepts of those versions.

[1]: https://youtube.com/watch?v=06zp5GMe2rI&t=4m11s

[+] drinkjuice|9 years ago|reply
More importantly, with this knowledge one now can write pixel shaders that display 3D scenes :P
[+] linuxhansl|9 years ago|reply
Ahh, the days. I remember, before I had learned about linear algebra, seeing somebody render molecules as 3D wireframes. I had an Amiga back then with its "Blitter" (it could draw lines in hardware, as long as you told it which of the eight octants the line's angle fell into).

Then, being the geek I was, I sat down every day until I had figured out perspective transformation and rotation (later I found out I had just reinvented matrix multiplication). Of course I never thought of homogeneous coordinates, so translation was an extra step to be done for each point.

Even worked out "real" red-green 3D. Oh the days when I had time for this stuff. Fond memories.

[+] bhouston|9 years ago|reply
I used to write my own triangle fill algorithms with their own shaders back in the 1990s. Fun times: https://github.com/bhouston/3DMaskDemo1997

Here is the optimized triangle fill code with embedded asm pixel shaders: https://github.com/bhouston/3DMaskDemo1997/blob/master/src/N...

[+] Radim|9 years ago|reply
Ah, the 80s & early 90s, when you had to implement everything yourself and every byte and instruction counted :)

The demo scene these days feels somehow less satisfying. The demos definitely look better, but they can plug into such a vast ecosystem of system libraries that 64KB feels like cheating.

ASM mode 13h nostalgia.

[+] jlarocco|9 years ago|reply
A while back I created a small project for drawing 3D wireframe graphics using the Common Lisp LTK interface to Tk.

It's slow (uses inefficient matrix algorithms, uses Tk, etc.) but it's "fast enough" for some simple 3D scenes. Not very practical for real-life use, but it was fun.

https://github.com/jl2/ltk3d

FWIW, it's not doing hidden line removal, IIRC I was careful to pick a viewing location that made it look good.

[+] paulddraper|9 years ago|reply
Great, great stuff. Terrific article.

---

It does seem to perpetuate -- or at least not make clear -- a misconception.

> 3D rendering without shaders

> We won’t use any 3D APIs at all

Those are two independent statements.

Metal, OpenGL, WebGL, and Vulkan are not 3D APIs. They are (2D) rasterization APIs that use shaders. Any 3D-ness of the math is external to them. In contrast, OGRE, Java 3D, and three.js are 3D rendering APIs.

Two independent choices yield four ways of doing 3D rendering. E.g., in the browser they could be

                     |         3D API         |       no 3D API        |
      ---------------|------------------------|------------------------|
        GPU shaders  | three.js, using WebGL  |         WebGL          |
      ---------------|------------------------|------------------------|
      no GPU shaders | three.js, using canvas |         canvas         |
      
This article fits in the bottom-right corner.

I take notice when I hear the oft-repeated claim that OpenGL/WebGL are 3D rendering APIs. At www.lucidchart.com, in 2015 we chose to use WebGL when available to improve rendering performance for (2D) diagramming. If WebGL were made only for 3D stuff, that would be a weird choice, but WebGL is for high-performance rasterization of all kinds.

http://webglfundamentals.org/webgl/lessons/webgl-2d-vs-3d-li...

[+] c0ffe|9 years ago|reply
Great article! Reading the title, I thought it was about the "tricks" that games used when the best thing available was the fixed pipeline.

I still remember how amazed I was when I learned about the good balance between performance cost and image quality you get from using textures for static lighting (lightmaps).

[+] a_c|9 years ago|reply
This is the kind of article that I enjoy reading a lot. Most tools available today mask away fundamental concepts, and many aspiring young engineers learn to use "tools". While the ability to use various tools is of paramount importance, the most valuable skill an engineer can possibly possess, in my opinion, is the ability to create new tools/concepts/whatever from first-ish principles.
[+] hellofunk|9 years ago|reply
I have a question about the rasterization step. When creating the scanlines, would this be a possible entry point for anti-aliasing, by giving the lines a subtle gradient that fades to near-zero alpha at the left and right edges (and maybe also at the top and bottom for the lines at the top and bottom of the stack)? There are many ways to do anti-aliasing, and this seems like one possibility to me.
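One way to realize the idea in this question (a sketch of coverage-based edge anti-aliasing, not something from the article): give a scanline span fractional endpoints and emit partial alpha for the pixels the span only partly covers.

```python
import math

def span_coverage(x_start, x_end):
    """Per-pixel alpha for a horizontal span with fractional endpoints.

    Interior pixels get 1.0; edge pixels get the fraction of the pixel
    the span actually covers, which is the 'subtle gradient' at the edges.
    """
    pixels = []
    first = int(math.floor(x_start))
    last = int(math.ceil(x_end)) - 1
    for x in range(first, last + 1):
        left = max(x_start, x)       # clip the span to this pixel's cell
        right = min(x_end, x + 1)
        pixels.append((x, max(0.0, right - left)))
    return pixels

print(span_coverage(1.25, 4.5))
# [(1, 0.75), (2, 1.0), (3, 1.0), (4, 0.5)]
```

This only anti-aliases the left/right silhouette of each span; the top and bottom edges of the triangle would need the same treatment vertically, which is roughly what the question's "top and bottom of the stack" suggests.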
[+] Waterluvian|9 years ago|reply
Any suggestions on a good primer for what shaders are and how they work? For years I've always thought "shaders" are just effects you can layer onto a rendered scene. Say, to get an 80s effect, or bloom, or a cel shading effect, etc. I never really thought of it as a way to actually do the base scene rendering.