top | item 8793904

sray | 11 years ago

I liked the article, but, as a game developer who does not specialize in graphics, I really liked one of the comments:

Joe Kilner - One extra issue with games is that you are outputting an image sampled from a single point in time, whereas a frame of film / TV footage is typically an integration of a set of images over some non-infinitesimal time.

This is something that, once stated, is blatantly obvious to me, but it's something I simply never thought deeply about. What it's saying is that when you render a frame in a game, say the frame at t=1.0 in a game running at 60 FPS, you are capturing and displaying the visual state of the world at a discrete point in time (i.e. t=1.0). Doing the analogous operation with a physical video camera means you are capturing and compositing the "set of images" between t=1.0 and t=1.016667, because the physical camera doesn't capture a discrete point in time, but rather opens its shutter for 1/60th of a second (0.016667 seconds) and captures for that entire interval. This is why physical cameras have motion blur, but virtual cameras do not (without additional processing, anyway).

This is obvious to anyone with knowledge of 3D graphics or real-world cameras, but it was a cool little revelation for me. In fact, it's sparked my interest enough to start getting more familiar with the subject. I love it when that happens!
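
A sketch of the distinction above (illustrative, not from the article): a game frame samples the world at one instant, while a film frame integrates, here approximated by averaging, the world over the shutter interval.

```python
# Point-sampling (game frame) vs. shutter integration (film frame)
# for an object moving at constant velocity. All names are illustrative.

def position(t, velocity=100.0):
    """Position of an object moving at `velocity` units per second."""
    return velocity * t

def game_sample(t):
    """A rendered game frame: the world state at a single instant."""
    return position(t)

def camera_sample(t, shutter=1 / 60, steps=1000):
    """A film frame: average the state over the open-shutter interval."""
    dt = shutter / steps
    return sum(position(t + i * dt) for i in range(steps)) / steps

game_sample(1.0)    # exactly 100.0: a crisp instant
camera_sample(1.0)  # ~100.83: the midpoint of the blur streak
```

The camera "sees" the object smeared across the shutter interval, centred roughly half a shutter period ahead of the instantaneous sample, which is exactly the motion blur the game frame lacks.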

agumonkey|11 years ago

Game renderers do sample time too, for per-object motion blur [1] and sometimes full-scene blur or AA. To push the idea further, there has been research into 'frameless' renderers, where you never render a complete frame but instead sample ~randomly at successive times and accumulate the results into the framebuffer. At low resolution it feels weird but very natural, a kind of computed persistence of vision: https://www.youtube.com/watch?v=ycSpSSt-yVs . I love how, even at low res, you get valuable perception.

[1] Some renderers even take advantage of this to increase performance, since rendering less precisely can still give a more human-oriented feel.
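
A toy sketch of the "frameless" idea (my illustration, not the linked paper's actual algorithm): instead of redrawing every pixel each frame, each update shades a random subset of pixels at the current time and writes them into a persistent framebuffer, so the image becomes a mosaic of samples from slightly different moments.

```python
import random

W, H = 8, 8

def shade(x, y, t):
    """Stand-in shader: a vertical bar sweeping right, one column per time unit."""
    return 1.0 if x == int(t) % W else 0.0

def frameless_update(framebuffer, t, samples=16):
    """Shade a random subset of pixels at time t; the rest keep older samples."""
    for _ in range(samples):
        x, y = random.randrange(W), random.randrange(H)
        framebuffer[y][x] = shade(x, y, t)
    return framebuffer

fb = [[0.0] * W for _ in range(H)]
for step in range(64):
    frameless_update(fb, t=step * 0.25)  # pixels now carry mixed timestamps
```

After the loop, neighbouring pixels were shaded at different times, which is the "computed persistence of vision" effect the video shows.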

barium|11 years ago

Speaking of "frameless" rendering, I noticed during Carmack's Oculus keynote (https://www.youtube.com/watch?v=gn8m5d74fk8#t=764), he talks about trying to persuade Samsung to integrate programmable interlacing into their displays in order to give dynamic per-frame control over which lines are being scanned.

This would give you the same "adaptive per-pixel updating" seen in your link, though primarily to tackle the problems with HMDs (low-persistence at high frame-rates).

mietek|11 years ago

Fantastic technique. Can’t believe it’s been almost 10 years since this video. Do you know if there is any follow-up research being done?

Retra|11 years ago

I always use this fact as a kind of analogue to explain position-momentum uncertainty in physics. From a blurry photo of an object, you can easily measure the speed, but the position is uncertain due to the blur. From a very crisp photo, you can tell exactly where it is, but you can't tell how fast it is moving because it is a still photo.

It's a good way to start building an intuition about state dependencies.

carlob|11 years ago

Another way of saying this is that a drum beat has no definite pitch because it's too short. It's exactly the same property of Fourier transforms that underlies the uncertainty principle.
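
A stdlib-only illustration of that point (the specific numbers are my own picks): a short burst of a 50 Hz tone has a much broader spectrum, i.e. a less definite pitch, than a long burst of the same tone.

```python
import cmath
import math

N = 256     # total samples analysed
RATE = 256  # samples per second, so DFT bin k corresponds to k Hz

def burst(freq_hz, length):
    """A sine tone lasting `length` samples, zero-padded out to N."""
    return [math.sin(2 * math.pi * freq_hz * n / RATE) if n < length else 0.0
            for n in range(N)]

def spectrum_width(signal):
    """RMS spread of the magnitude spectrum around its mean frequency."""
    mags = [abs(sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]
    total = sum(mags)
    mean = sum(k * m for k, m in enumerate(mags)) / total
    var = sum((k - mean) ** 2 * m for k, m in enumerate(mags)) / total
    return var ** 0.5

drum_like = spectrum_width(burst(50, 8))  # 8-sample "hit": broad spectrum
sustained = spectrum_width(burst(50, N))  # full-length tone: narrow spectrum
```

The short burst's energy spreads over many frequency bins while the sustained tone concentrates at bin 50, which is the time/frequency trade-off in action.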

danieltillett|11 years ago

If you know the direction of motion of a blurry object, isn't the location of the object at one of the leading edges of the blur? I thought the problem was more that you have no idea of the features of the object?

sp332|11 years ago

Most movies use a 180-degree shutter angle, which means the shutter is open for half of each frame time. http://www.red.com/learn/red-101/shutter-angle-tutorial So you get motion blur for half the frame time, and no light on the film for the other half. The Hobbit movies (at least the first one) were shot at 48 fps with a 270-degree shutter angle, so even with half the frame time, each frame got 3/4 as much motion blur (a 1/64 s exposure vs. the conventional 1/48 s) as a normal movie. That might contribute to the odd feeling viewers had. http://www.fxguide.com/featured/the-hobbit-weta/
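
The shutter-angle arithmetic above reduces to one formula: exposure per frame = (angle / 360) × frame time, with frame time = 1 / fps.

```python
def exposure_time(shutter_angle_deg, fps):
    """Seconds the shutter is open during each frame."""
    return (shutter_angle_deg / 360.0) / fps

conventional = exposure_time(180, 24)  # 1/48 s of blur per frame
hobbit = exposure_time(270, 48)        # 1/64 s: 3/4 of the conventional blur
```

That 3/4 ratio (1/64 vs. 1/48) is exactly the "3/4 as much motion blur in each frame" figure above.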

tigeba|11 years ago

I believe PJ did this for a couple of reasons. 1) Practical: increasing the shutter speed means increasing the amount of light on the scenes, using faster film stock (or a higher ISO on your RED camera), or some combination of both. 2) This shutter speed was a compromise between the exposure times of a 180-degree shutter at 24 vs. 48 fps, and it would still retain some of the blur so that 24 fps screenings would appear relatively 'normal'.

foolrush|11 years ago

This.

The article wanders on and on, but is simply grasping at the much more learned aesthetic repulsion of motion blur.

24 and 25 fps (i.e. 1/48th and 1/50th shutter speeds) motion blur have defined the cinematic look for nearly a century.

Video? 1/60th. Why the aesthetic revulsion? While I am certain this is a complex sociological construct, there certainly is an overlap with lower budget video soap operas of the early 80's. Much like oak veneer, the aesthetic becomes imbued with greater meaning.

The Hobbit made a bit of a curious choice for their 1/48th presentation in choosing a 270° shutter. An electronic shutter can operate at 360°, which would have delivered the historical 1/48th shutter motion blur.

Instead, the shutter ended up being 1/64th, triggering those all-too-unfortunate cultural aesthetic associations with the dreaded world of low-budget video.

It should be noted that there are some significant minds that believe in HFR motion pictures, such as Pixar's Rick Sayre. However, a disproportionate number of DPs have been against it, almost exclusively due to the motion blur aesthetic it brings, and the technical challenges of delivering to the established aesthetic within the constraints of HFR shooting.

blakecaldwell|11 years ago

Not sure about how movies are filmed, but you don't have to shoot video frames at 1/FPS. That's just the slowest you can shoot. If you're shooting in broad daylight, each frame could be as quick as 1/8000, for example.

Shooting at the slowest shutter speed possible should make the most fluid video.

elinchrome|11 years ago

Worth noting that directors use high speed film to portray a feeling of confusion. The lack of motion blur gives that sense to the scene. E.g. the opening scene of Saving Private Ryan uses this effect.

baddox|11 years ago

I'm pretty sure that 24 FPS footage with a 1/24 second shutter speed would be completely unusable except as an extreme blur effect.

samatman|11 years ago

The corollary is that it should be possible to produce a movie-like quality in games, by over-framing and compositing a blur between frames. The result would have actual motion blur and update at say 30 fps, but without the jerkiness we normally associate with that frame rate.
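
The over-framing idea can be sketched as follows (illustrative only; `render` is a hypothetical stand-in producing a 1-D "image" of a moving dot): render several temporal sub-frames per displayed frame and average them, approximating a shutter held open for the whole frame interval.

```python
def render(t, width=16):
    """Hypothetical renderer: one bright pixel moving at 64 px/s."""
    img = [0.0] * width
    img[int(64 * t) % width] = 1.0
    return img

def composite_frame(t, fps=30, subframes=4):
    """Average `subframes` renders spread evenly across one display frame."""
    dt = (1.0 / fps) / subframes
    imgs = [render(t + i * dt) for i in range(subframes)]
    return [sum(img[x] for img in imgs) / subframes
            for x in range(len(imgs[0]))]

frame = composite_frame(1.0)  # the dot's energy is smeared across 2 pixels
```

The averaged frame carries real motion blur: the dot's energy is split between the pixels it crossed during the frame interval, instead of sitting crisply in one.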

Arelius|11 years ago

Sure, if you can render at, say, 120 Hz or more and composite the 4 frames together into your single output frame, you will get a single improved frame. But even at 4 renders per frame you'll still get artifacts; I imagine you'd need at least double that to make it worthwhile. Even then, that only gives you ~8 ms to time-step and render the entire scene, minus compositing and any other full-frame post effects. And hitting the ~16 ms required for 60 fps is already pretty difficult.

Now, in video games we do have methods to simulate inter-frame motion blur. The most commonly used is to build a buffer of inter-frame motion vectors; this is usually generated from game state, but it can also be re-derived by analysing the current and previous frames, to some effect. Then you do a per-pixel blur based on the direction and magnitude of the motion vectors, which often works well.
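
The motion-vector technique described above, reduced to 1-D (illustrative only; real engines do this per pixel in a shader): blur each pixel by averaging samples taken backwards along its motion vector.

```python
def motion_blur_1d(image, motion, taps=4):
    """Average `taps` samples stepped back along each pixel's motion
    vector (given in pixels per frame)."""
    w = len(image)
    out = []
    for x in range(w):
        acc = 0.0
        for i in range(taps):
            sx = int(x - motion[x] * i / taps) % w  # step back along the vector
            acc += image[sx]
        out.append(acc / taps)
    return out

img = [0.0] * 8
img[4] = 1.0           # one bright dot
vel = [2.0] * 8        # everything moving right at 2 px/frame
blurred = motion_blur_1d(img, vel)  # dot smeared over pixels 4, 5, 6
```

The dot's energy spreads along the motion direction (0.25 / 0.5 / 0.25 here) while the total brightness is conserved, which is why this post-process reads as blur rather than ghosting.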

teamonkey|11 years ago

Each game frame is a snapshot taken with an infinitesimally short shutter duration but displayed for 1/30 s or 1/60 s (vs. one movie frame, which has a shutter duration of, e.g., 1/48 s and is displayed for 1/24 s).

So over-framing game frames will not produce motion blur, it'll simply merge two still images together. You need to simulate motion blur (usually as a post-process). This of course takes more time to render, potentially lengthening the frame times.

simias|11 years ago

That's not entirely accurate. It's true that a camera captures the image over a certain interval of time rather than at a definite point in time (obviously), but the length of that exposure time is not necessarily tied to the framerate.

For instance, if you have a digital camera where you can select the framerate (pretty common these days), and if the exposure time were simply the frame period, it would mean that an image at 30 fps would be exposed twice as long as one at 60 fps, and the resulting picture would look very different.

Of course you can mitigate that by changing the aperture and other parameters, but in my experience you can in practice select the exposure time and framerate independently on digital cameras. With a very sensitive sensor and/or good lighting you can achieve exposure times much shorter than the frame period. If you're filming something that moves, and unless you want blur on purpose, you probably want to reduce the exposure time as much as possible in order to get a clean image, just like in video games.

stan_rogers|11 years ago

...and it will look very awkward if there is motion in the frame and you're not shooting very close to 1/(2 * framerate). There is a very small tolerance window, outside of which the picture will look mushy (if your camera lets you shoot very close to 1/framerate) or jerky (< 1/(3 * framerate)). Controlling exposure, if you want to maintain a constant aperture for depth-of-field reasons, is done using neutral-density filters (including "variable ND" crossed polarizers) and by adjusting the sensitivity/gain/ISO, not the shutter speed.
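
That rule of thumb can be encoded roughly as follows. The 1/(3 * framerate) bound comes from the comment above; the "mushy" threshold is my own illustrative pick for "very close to 1/framerate".

```python
def motion_feel(shutter_s, fps):
    """Classify how motion will read at a given shutter speed (rule of thumb)."""
    if shutter_s > 1 / (1.2 * fps):   # near the full frame period: smeary
        return "mushy"
    if shutter_s < 1 / (3 * fps):     # far below the 180-degree ideal: strobing
        return "jerky"
    return "natural"                  # near the 1/(2 * fps) sweet spot

motion_feel(1 / 48, 24)   # "natural": the classic 180-degree shutter
motion_feel(1 / 100, 24)  # "jerky": much faster than 1/(3 * framerate)
```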