sray | 11 years ago
Joe Kilner - One extra issue with games is that you are outputting an image sampled from a single point in time, whereas a frame of film / TV footage is typically an integration of a set of images over some non-infinitesimal time.
This is something that, once stated, is blatantly obvious to me, but it's something I simply never thought deeply about. What it's saying is that when you render a frame in a game, say the frame at t=1.0 in a game running at 60 FPS, you're capturing and displaying the visual state of the world at a discrete point in time (i.e. t=1.0). The analogous operation with a physical video camera captures and composites the "set of images" between t=1.0 and t=1.016667, because the physical camera doesn't capture a discrete point in time, but rather opens its shutter for 1/60th of a second (0.016667 seconds) and captures for that entire interval. This is why physical cameras have motion blur, but virtual cameras do not (without additional processing, anyway).
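One way to see the difference concretely: a camera frame can be approximated by averaging many instantaneous renders across the shutter interval (an accumulation-buffer-style sketch; `render`, the shutter length, and the toy one-object scene are all assumptions for illustration, not from the comment):

```python
def render(t):
    # Toy "scene": the position of an object moving at 1 unit/second.
    return t * 1.0

def game_frame(t):
    # A game frame: one discrete sample at a single instant.
    return render(t)

def camera_frame(t, shutter=1/60, samples=8):
    # A camera frame: integrate (here, average) sub-frame samples
    # taken while the shutter is open over [t, t + shutter).
    return sum(render(t + shutter * i / samples) for i in range(samples)) / samples
```

The averaged frame smears the object's motion across the shutter interval, which is exactly the motion blur a discrete sample lacks.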
This is obvious to anyone with knowledge of 3D graphics or real-world cameras, but it was a cool little revelation for me. In fact, it's sparked my interest enough to start getting more familiar with the subject. I love it when that happens!
agumonkey|11 years ago
[1] some renderers even take advantage of that to increase performance, since you get a more human-oriented feel by rendering less precisely.
barium|11 years ago
This would give you the same "adaptive per-pixel updating" seen in your link, though primarily to tackle the problems with HMDs (low-persistence at high frame-rates).
mietek|11 years ago
Retra|11 years ago
It's a good way to start building an intuition about state dependencies.
shiven|11 years ago
[0] http://en.m.wikipedia.org/wiki/Uncertainty_principle
carlob|11 years ago
danieltillett|11 years ago
sp332|11 years ago
tigeba|11 years ago
foolrush|11 years ago
The article wanders on and on, but is simply grasping at the much more learned aesthetic repulsion of motion blur.
24 and 25 fps (i.e. 1/48th and 1/50th of a second of exposure with the standard 180° shutter) motion blur have defined the cinematic world for over a century.
Video? 1/60th. Why the aesthetic revulsion? While I am certain this is a complex sociological construct, there certainly is an overlap with lower budget video soap operas of the early 80's. Much like oak veneer, the aesthetic becomes imbued with greater meaning.
The Hobbit made a bit of a curious choice for its 48 fps presentation in choosing a 270° shutter. An electronic shutter can operate at 360°, which would have delivered the historical 1/48th-shutter motion blur.
Instead, the shutter ended up being 1/64th, triggering those all-too-unfortunate cultural aesthetic associations with the dreaded world of low-budget video.
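The shutter-angle arithmetic behind those numbers can be sketched as follows (a hypothetical helper; exposure time = (angle / 360°) × frame period):

```python
def exposure_time(fps, shutter_angle_deg):
    # Fraction of each frame period the shutter stays open.
    return (shutter_angle_deg / 360.0) / fps

# The Hobbit: 48 fps with a 270-degree shutter -> 1/64 s exposure.
# A full 360-degree electronic shutter at 48 fps -> the "historical" 1/48 s.
# Traditional film: 24 fps with a 180-degree shutter -> also 1/48 s.
```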
It should be noted that there are some significant minds that believe in HFR motion pictures, such as Pixar's Rick Sayre. However, a disproportionate number of DPs have been against it, almost exclusively due to the motion blur aesthetic it brings, and the technical challenges of delivering to the established aesthetic within the constraints of HFR shooting.
blakecaldwell|11 years ago
Shooting at the slowest shutter speed possible should make the most fluid video.
elinchrome|11 years ago
Sharlin|11 years ago
baddox|11 years ago
samatman|11 years ago
Arelius|11 years ago
Now, in video games we do have methods to simulate inter-frame motion blur. The most commonly used is to build a buffer of inter-frame motion vectors. This is often generated from game state, but can also be re-derived by analysing the current and previous frames, to some effect. Then you do a per-pixel blur based on the direction and magnitude of the motion vectors, which often works well.
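A minimal sketch of that per-pixel idea, reduced to a 1D grayscale buffer for clarity (all names and the tap count are assumptions; real implementations run on the GPU against a proper velocity buffer):

```python
def motion_blur_1d(color, motion, taps=4):
    # color:  list of pixel intensities
    # motion: per-pixel motion vector, in pixels, along the 1D axis
    # For each pixel, sample the colour buffer at evenly spaced taps
    # along that pixel's motion vector and average the samples.
    n = len(color)
    out = []
    for x in range(n):
        acc = 0.0
        for i in range(taps):
            t = i / (taps - 1) if taps > 1 else 0.0  # 0..1 along the vector
            s = min(max(int(round(x + motion[x] * t)), 0), n - 1)
            acc += color[s]
        out.append(acc / taps)
    return out
```

For a single bright pixel with a nonzero motion vector, its averaged result dims, since its colour is mixed with samples taken along the motion path.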
teamonkey|11 years ago
So over-framing game frames will not produce motion blur, it'll simply merge two still images together. You need to simulate motion blur (usually as a post-process). This of course takes more time to render, potentially lengthening the frame times.
simias|11 years ago
For instance, if you have a digital camera where you can select the frame rate (pretty common these days) and the exposure time were simply the frame period, the image at 30fps would be exposed twice as long as at 60fps and the resulting picture would look very different.
Of course you can mitigate that by changing the aperture and other parameters, but in my experience you can select the exposure time and frame rate independently on digital cameras. With a very sensitive sensor and/or good lighting you can achieve very short exposure times, much shorter than the frame period. If you're filming something that moves, then unless you want blur on purpose, you probably want to reduce the exposure time as much as possible in order to get a clean image, just like in video games.
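A toy back-of-the-envelope for why decoupling exposure from frame period matters (the speeds and times are illustrative assumptions):

```python
def max_exposure(fps):
    # Longest possible exposure: the whole frame period.
    return 1.0 / fps

def motion_smear(speed_px_per_s, exposure_s):
    # Distance (in pixels) an object travels while the shutter is open;
    # this is roughly the length of its motion-blur streak.
    return speed_px_per_s * exposure_s

# At 30 fps with a full-period exposure, an object moving 300 px/s
# smears 10 px per frame; cutting the exposure to 1/240 s reduces
# that to 1.25 px while the frame rate stays 30 fps.
```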
stan_rogers|11 years ago