item 12461531

Le_SDT | 9 years ago

I'm curious. What do you mean by "Human vision is completely overlooked"? Are you talking about color perception? 3d rendering?

Thanks :)

sillysaurus3|9 years ago

There's pretty much no way to avoid writing a large comment in response to this. So, with apologies...

It was my focus for a long time to achieve perfectly photorealistic rendering. I come from a gamedev background, and I've been fascinated since around age ten by how to get a computer to paint pictures. By 17 I was writing game engines roughly equivalent to Quake, and used this portfolio to get into the industry. The next three years were spent getting as close as possible to the bleeding edge of real-time graphics development.

I remember having long debates with a colleague about what high-dynamic range lighting "meant." "If we spin around in our office chair, our brains do not suddenly change the overall brightness of this office. Why should we be programming games to do this? Why is everyone doing things that way?"
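The auto-exposure systems that debate was about typically work something like this, a minimal sketch; the function name and the time constant are my own illustration, not from any particular engine:

```python
import math

def adapt_exposure(current, scene_luminance, dt, tau=0.5):
    """Move the camera's exposure target exponentially toward the
    scene's average luminance, imitating the eye adapting to a bright
    window. tau is an assumed adaptation time constant in seconds."""
    return current + (scene_luminance - current) * (1.0 - math.exp(-dt / tau))
```

Spinning around in the office chair re-triggers this adaptation every frame, which is exactly the behavior being questioned.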

My concerns were of a more fundamental nature as well. What is a diffuse texture? A diffuse texture is a well-understood concept in realtime graphics. Anyone with a basic knowledge of shaders should immediately recognize:

  color = lighting * diffuse + ambient
Trouble is, it doesn't correspond to reality even slightly. It's not even roughly close. It happens to look good to humans, and that's why we use it. But the trouble was, the further I tried to probe the mystery of realism in computer graphics, the more I ran against this phenomenon of "We use X technique because it looks good."
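Spelled out, that formula is just a clamped Lambert term plus a constant, applied per color channel. A minimal sketch (all names here are illustrative, not from any particular engine):

```python
def shade(normal, to_light, light_color, diffuse_color, ambient_color):
    """Classic 'lighting * diffuse + ambient' shading for one pixel.
    normal and to_light are unit 3-vectors; colors are RGB tuples."""
    # Lambertian term: cosine of the angle between N and L, clamped at 0.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return tuple(
        lc * n_dot_l * dc + ac
        for lc, dc, ac in zip(light_color, diffuse_color, ambient_color)
    )
```

A surface facing away from the light falls back to the flat ambient color, which is one of the ways the model visibly departs from reality.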

So I turned to research papers. Books. The medical field. Everywhere that was remotely related to possible breakthroughs in photorealistic rendering. Research papers are excellent for assembling techniques, but not results. The books on human vision and color science were more promising, yet most of the industry seemed (and still seems) to pay little attention to them. Compare a book about color perception to, say, http://www.pbrt.org/ and you'll see a stark difference. Flip through the table of contents and you get transformations, shapes, primitives, color and radiometry, sampling, reflection, materials, texture, scattering, light sources, Monte Carlo integration...

And for what? We know that these techniques simply do not produce computer-generated videos that a human will identify as a real-life image. It's not for lack of processing power. There is a disconnect between the old rules and those that will ultimately result in real-time realism, and you won't find it in that table of contents.

Now, the trouble with writing all of this is that if I knew how to do it, I'd have done it already. It's a life-long search, and it's not so easy to refute an entire industry without being (rightly) dismissed. But if you wish to know what I suspect is the way forward, it's this: get a camera. Take photos. Compare those photos to the results of the algorithms you write. Iterate on your algorithms until they produce results that match something that already captures nature, not our beliefs about how we ought to be able to capture nature. "Just throw in physical models and presto!" has not thus far proven true.
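The inner loop of that compare-against-a-photo process can start as simply as an error metric between the two images. A sketch, assuming both images are already aligned and given as flat pixel sequences on the same scale:

```python
import math

def rmse(rendered, photo):
    """Root-mean-square error between a rendered frame and a reference
    photo, each a flat sequence of pixel values in the same order."""
    assert len(rendered) == len(photo)
    return math.sqrt(
        sum((r - p) ** 2 for r, p in zip(rendered, photo)) / len(rendered)
    )
```

A pixel-wise metric like this is only a crude stand-in for "looks real to a human", but it makes the iterate-against-nature loop concrete and automatable.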

You'll notice, for example, that computer graphics in videogames have plateaued. They get more impressive with each generation, but that impressiveness does not get them progressively closer to looking real. Nor should it. A computer game tells a story. The closer it looks to real life, the more restricted the artists are, along with the rest of the design of the game.

So we turn to the movie industry for hope. But it's restricted in exactly the same way. The research papers are all along the lines of new techniques to try, or studies of existing pipelines and how to deal with their complexities. It's not fundamental research.

As someone who has spent his life in pursuit of realism in computer-generated video, my recommendation is this: read da Vinci's journals. Pay attention to what each page is saying. He had to discover from first principles what makes a painting look real, and why. You'll notice that he spends most of his time talking about human vision and our perception of color.

If someone is going to make this development happen, it's not going to come from the game industry, and it won't come from the movie industry. That leaves you. Hopefully this will encourage some of you to pursue this. Once you accept that most of the computer graphics industry isn't actually focused on achieving realism, you'll start to develop your own techniques. My hope is that this will eventually lead to a breakthrough.

web007|9 years ago

2 points:

1. "Take a picture and make it look right" is exactly what people have done for things like POVRay since at least the 90s. I can't find it now, but at one point someone set up a glass ball on a checkerboard and used a point light and a camera to confirm the diffraction and distortion models were correct because someone claimed they didn't look right or were doing the wrong thing. The math that's there for rays, shapes, diffraction, diffusion, caustics, etc. is accurate, and necessary but not sufficient. Which brings me to

2. CG realism has generally hit the Uncanny Valley by now. It's so close to real that we think "that's pretty good", but it's far enough away that we still know "something's wrong". It's the difference between a dummy, a corpse and a live person.

A couple of examples I remember from the past decade are laser + milk and better skin rendering on one of the NVIDIA demos a while back. There wasn't (isn't?) a good model to simulate the diffraction and subsequent diffusion of a laser shining into a glass of milk. Actual lasers with actual milk don't do the things we expect of modeled lasers in simulated milk. Some component is missing, but all the existing math is right for lots of other cases. The NVIDIA skin thing was adding 3 or 4 layers to an existing model to simulate subsurface scattering and reflection that happens in skin, vs old models that treat skin as paint. The old stuff was right, just not enough.
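For the skin case, one common way to express that layered subsurface scattering (not necessarily the exact model in the NVIDIA demo) is a diffusion profile built as a sum of Gaussians; the weights and variances below are invented purely for illustration:

```python
import math

# Hypothetical (weight, variance) pairs for a sum-of-Gaussians
# diffusion profile; real skin models fit these to measured data.
PROFILE = [(0.25, 0.0064), (0.45, 0.0484), (0.30, 0.187)]

def diffusion(r):
    """Relative amount of light re-emitted at distance r from the point
    where it entered the surface. Wider Gaussians model light that
    traveled through deeper layers before scattering back out."""
    return sum(
        w * math.exp(-r * r / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
        for w, v in PROFILE
    )
```

The "paint" models being criticized correspond to a profile that is zero everywhere except at r = 0: no light ever re-emerges away from where it landed.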

All of that aside, there are decent photorealistic rendering options for some materials today, but at the cost of CPU hours of render time. If you can do better then please do, even if it's just for one material or one physics action.

formula1|9 years ago

I appreciate that you probably put a lot of effort in, but your post largely amounts to "They are wrong, I've done extracurricular research to prove it, and I have no alternative."

While noble, I can't help wanting to understand what alternatives you were leading yourself toward. As an example, light can arguably be simplified to intersecting cylinders and spheres that bounce off surfaces to create new 3d shapes. Each shape would also have an origin 2d shape based on what's reflecting it. An "eye" reads shape intersections with itself and can also filter those intersections with respect to the origin shape. After each bounce, the new shape takes the bouncing light's color multiplied by the color of the bounced object. In low-light situations, subtle luminosity differences can be enhanced.
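The color-accumulation part of that description can at least be written down directly, as a per-channel multiply at each bounce. A sketch with made-up names:

```python
def bounce_color(light_color, surface_colors):
    """Multiply a light's RGB color by the color of each surface it
    bounces off, channel by channel, in bounce order."""
    color = list(light_color)
    for surface in surface_colors:
        color = [c * s for c, s in zip(color, surface)]
    return color
```

So a white light bouncing off a reddish then a cyan-ish surface ends up dimmed toward gray, since each bounce can only remove energy per channel.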

What I did was offer an example. Perhaps you'll one day be successful, but I got the impression you are some kind of renegade with a mission. While I can certainly relate to that, I view science and building the future as quite far from renegade status. And in the meantime, you gave me a sob story with no algorithms or solutions except for "take real pictures and compare them". As a lazy programmer, walking outside and discovering the world doesn't interest me too much.

adrianm|9 years ago

I thought you were going to go with a different angle on this. If all you care about is photorealism, well, we'll have to agree to disagree that the current state of the art is incapable of that (given enough time).

What I thought your original point was going to be is that 3D rendering ONLY cares about photorealism. Physically based renderers have been greatly influenced by photography (both still and film). Think light probes, camera lenses, etc. Much of the post-processing you see in a scene is also derived from what a camera would see, from focusing, to blur, to HDR now, and of course the infamous lens flare!

So I think a physically based renderer which uses the human eye as its camera would be interesting to see more of.

hypertexthero|9 years ago

The recommendation to get a camera is a very good one. A camera used for an extended period of time is a very good teacher, and not only of seeing.

To go even further, learn to draw. Use your hands and other senses instead of only thinking. Read Drawing on the Right Side of the Brain by Betty Edwards and The Hand by Frank R. Wilson.

I'm curious what your favorite games are, visually, @sillysaurus3?

A couple of mine are Far Cry 2 and [Elite: Dangerous][1] Horizons. I also love the sound in both of these, which is to me a seemingly inseparable element of great screen work.

[1]: http://simongriffee.com/notebook/elite-dangerous-education/

JabavuAdams|9 years ago

> And for what? We know that these techniques simply do not produce computer-generated videos that a human will identify as a real-life image.

This is actually testable. Some archviz images are indistinguishable from reality for non-graphics experts.

> Get a camera. Take photos. Compare these photos to the results of the algorithms you write. Iterate on your algorithms until they are producing results that match something that already captures nature, not our beliefs about how we ought to be able to capture nature.

Yes, very important. Art teachers try to reinforce this by saying "seek reference". It's true for graphics as well, and it's possible to be more empirical.

marcosdumay|9 years ago

> "We use X technique because it looks good."

Replaces "looks good" with a more generic "gets the right answer" and you'll notice that phrase repeated on any simulation context where computers are not powerful yet to work from first principles.

If you want a safe place to look, look at chemistry. They have simulations that vary in complexity by a huge number of orders of magnitude. You'll see those non-natural techniques that "look good" get progressively applied as the simulations get slower and slower. It works really well, but once computers are fast enough, everybody just throws them out the window.

dharma1|9 years ago

Maybe foveation? Dynamic range perception?