Nice! Whenever I see rolling shutter photos on flickr/etc I always think about this old page where a fellow built a long-distance camera from a flatbed scanner to get the effect intentionally: http://www.sentex.net/~mwandel/tech/scanner.html
(There's a great image of a garage door opening & closing about 2/3 of the way down the page if you don't feel like reading the whole thing.)
This is so fascinating. It makes me long for the days when browsing somebody's homepage on the web felt like they were actually inviting you into their home like a friend.
It's a neat analysis from a mathematical perspective, but (especially for a rotating component like this) wouldn't the lighting be all wrong for the remapped pixels? The slow-speed scanning examples use a fixed image (note the highlight doesn't change) so it's likely not usable for real-world digital photography without updates to account for lighting.
Why would it be wrong? The pixels have the correct lighting (assuming the overall ambient light didn't change during the rotation), just the wrong position.
The questions he investigated: "Can we figure out the rate at which a propellor is spinning by analyzing this kind of photo? And can we figure out the real number of propellor blades in the photo?"
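(For intuition, here's a minimal simulation sketch, entirely my own and not the article's code: it renders a spinning propeller row by row, advancing the rotation between rows the way a rolling shutter advances its readout. Blade count, speed, and sizes are made-up parameters.)

    import numpy as np

    def rolling_shutter_prop(size=400, blades=2, rev_per_scan=1.5,
                             half_width=0.05):
        # Toy rolling-shutter photo of a spinning propeller.
        # Row y is read out at time t = y/size, by which point
        # the prop has advanced rev_per_scan * t revolutions.
        img = np.zeros((size, size), dtype=np.uint8)
        c = size / 2.0
        spoke = 2 * np.pi / blades               # angular spacing of blades
        for y in range(size):
            phase = 2 * np.pi * rev_per_scan * (y / size)
            for x in range(size):
                r = np.hypot(x - c, y - c)
                if not (8 < r < c):              # skip the hub and corners
                    continue
                ang = (np.arctan2(y - c, x - c) - phase) % spoke
                if min(ang, spoke - ang) < half_width:
                    img[y, x] = 255              # pixel lies on a blade
        return img

With rev_per_scan around 1-2 you get the curved, seemingly multiplied blades the article analyzes; vary it and blades to see how the apparent blade count changes.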
You can play a more complex version of that game using video of the vibrating strings on a musical instrument. Here's an especially good example: http://vimeo.com/4041788
Sony is making steady advances in global shutter for CMOS sensors. It's a bit harder on DSLRs with larger sensors and more pixels to read, but the smaller sensors with fewer megapixels already have them [1].
So it's a matter of time until most CMOS-based video will be free of rolling shutter, starting with higher-end video cameras that have sensors with just enough pixels to cover 2K-4K video [2].
[1] http://www.sony.net/Products/SC-HP/new_pro/december_2013/imx...
[2] http://www.newsshooter.com/2014/09/11/io-industries-4k-super...
Somewhat relatedly, check out this awesome new camera technology which essentially captures a rolling diff of the image rather than the image itself, with impressive results: https://www.youtube.com/watch?v=LauQ6LWTkxM
> The rolling shutter is also why stills from gopro videos never quite live up to how clear the videos look in motion.
I've thought about this as well. I always assumed the lack of clarity of single frames extracted from video material is because the eye/brain combines several images shown in quick succession into one whole image. So when only a single frame is shown, there's not as much information in it as in, say, 10 frames shown quickly in succession.
A very cool article indeed, but I believe he uses the term "exposure" incorrectly.
Exposure is the total time our whole light-sensitive area is exposed to the light coming from our scene. You can think of it as an integral of the sensor (or film) area exposed as a function of time, divided by the total sensor area.
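In symbols (my notation, just restating the sentence above, for a scan of duration T_scan over N rows):

    E = \frac{1}{A_{\mathrm{total}}} \int_0^{T_{\mathrm{scan}}} A(t)\,\mathrm{d}t
    % rolling shutter: each of the N rows is exposed for t_row, so
    \int_0^{T_{\mathrm{scan}}} A(t)\,\mathrm{d}t
        = N \cdot \frac{A_{\mathrm{total}}}{N} \cdot t_{\mathrm{row}}
        = A_{\mathrm{total}} \cdot t_{\mathrm{row}}
    \quad\Rightarrow\quad E = t_{\mathrm{row}}

So the effective exposure works out to the per-row sample time, however long the whole scan takes.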
In the examples he uses the term exposure to describe the total scantime of the sensor, whilst it seems that his actual exposure (which is equal to the time each row of pixels samples the scene) is much smaller.
It may sound like a small difference, but if one wants to reproduce the effect, one essentially needs to match two parameters: exposure and scantime. While exposure is easy to set, scantime is pretty much hardcoded and depends on the physical characteristics of the camera. Even an analog shutter has a scantime at short exposure times.
If I understand this correctly, it is effectively doing what a photo-finish camera does at race sports events, except that the slit moves across the scene, rather than the scene moving past the slit.
And the performance of those photo-finish systems is impressive. I would be grateful if someone could explain how the software is able to almost instantaneously identify the runners who are frequently not in lanes and also often have missing numbers. For one example, see FinishLynx (http://www.finishlynx.com/)
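(Not an answer to the recognition question, but for anyone curious how the finish-line image itself is formed, here's a minimal sketch of the slit-stacking idea, using OpenCV and hypothetical file names: it takes the same one-pixel column from every video frame and stacks the columns side by side, so the horizontal axis becomes time.)

    import cv2
    import numpy as np

    def photo_finish(video_path, column=None):
        # Build a photo-finish-style strip: one fixed pixel column
        # per frame, stacked left to right so x becomes time.
        cap = cv2.VideoCapture(video_path)
        slits = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if column is None:
                column = frame.shape[1] // 2   # default: center column
            slits.append(frame[:, column])     # the "finish line" slice
        cap.release()
        return np.stack(slits, axis=1)         # height x time x channels

    # Hypothetical usage:
    # cv2.imwrite("finish_strip.png", photo_finish("race.mp4"))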
There are two categories of artifacts caused by rolling shutter: those from objects moving in the scene (such as the propeller in this example), and those that result from movement of the camera relative to the scene (especially pans, tilts, dolly movements, etc).
Those caused by camera movement (often resulting in an image that looks skewed) are somewhat easier to "fix" as you say with post processing since a correcting transformation can sometimes be applied to the entire image uniformly. Existing tools can do this with some success, but there are still some camera moves that prove to be more difficult (zooms, irregular movement, etc).
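(As a toy illustration of that uniform case, my own sketch rather than what any particular tool does: for a constant-speed horizontal pan, each row has drifted in proportion to its readout delay, so shifting every row back by that amount undoes the skew. The drift rate px_per_row is an assumed input that would have to be estimated.)

    import numpy as np

    def unskew_pan(img, px_per_row):
        # Undo rolling-shutter skew from a constant-speed horizontal pan.
        # px_per_row: apparent horizontal drift in pixels per scanned row,
        # e.g. estimated from pan speed and sensor readout time.
        out = np.empty_like(img)
        for y in range(img.shape[0]):
            shift = int(round(y * px_per_row))        # later rows drift further
            out[y] = np.roll(img[y], -shift, axis=0)  # shift the row back
        return out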
Artifacts caused by objects moving in the scene are often much more difficult to remove, at least when it comes to providing a generic solution, because "reconstruction" of the image requires fairly accurate information as to how those objects were moving. In the case of the rotating prop or wheel, it may be somewhat simple (the algorithm would still probably require user input of things like the center of rotation and speed), but in other cases, the motion may be quite complex (e.g., multiple wheels rotating in different directions/speeds, linear vs angular motion, etc). And that doesn't even account for the fact that in most cases, there will be occlusion in the source image from artifacts of the rolling shutter. That is, in the propeller example, you have patches of the background that are covered up in the source, but wouldn't be in a "fixed" image, so they need to be filled somehow.
What I'm saying is that sophisticated software may be able to do a lot in helping to correct for rolling shutter artifacts, but I don't think there will be an automatic, fix-all solution from post processing software any time soon.
For 2D-like pictures (fan blades photographed from directly below), the fix might be possible. For 3D (propeller blades with every blade having a right and a left face), it will not be.
The software will need to choose the final position of the blades in the rendered picture. There is no ideal position, since the blades have been moving throughout the scan time. Whichever position we choose, there will be information missing for one or more blades. Say the left face of blade X needs to be rendered, but the camera only captured its right face. Maybe assuming that all blades have the same shape, so information from one blade can be used to render another, would fix the problem.
The missing background will also have to be reconstructed; that's another issue.
I seem to recall that Adobe Premiere Pro and/or Adobe After Effects contain a tool for mitigating rolling shutter artifacts, though I am not certain if it uses the method described in the article.
Well, that's certainly interesting. I was about to say, it reminds me strongly of a zeta function, in a way. But it turns out Dedekind's eta function was what I was thinking of [1].
[1] https://en.wikipedia.org/wiki/Dedekind_eta_function#mediavie...
Rolling shutters were also used by traditional cameras; this effect is really old-school stuff. A rolling shutter provides better exposure than a circular shutter. I remember that most professional photographs taken in the 80s also used a rolling shutter.
http://camillomiller.com/sc
That scanner site was something of an inspiration; great to see it's still online!
It reminds me of the slitscan special effects technique that was used to create the stargate sequence at the end of 2001: A Space Odyssey.
http://filmmakeriq.com/lessons/slit-scan-recreating-the-star...
http://scolton.blogspot.co.uk/2014/05/grasshopper3-mobile-se...
If it were possible to get this in a consumer-grade video product I would be very happy. Unfortunately these sensors are $1295.
There are a few cameras out there with global shutter; RED isn't one of them.
That, and the colour reproduction, is one of the many reasons why the Arri Alexa is popular despite having "less resolution".
The rolling shutter is also why stills from gopro videos never quite live up to how clear the videos look in motion.
The cover photo from this month's Parachutist magazine is a great example:
http://parachutistonline.com/sites/all/files/images/cover201...
Notice the right leg of the jumpsuit; it's flapping in the wind as the shutter rolls over the scene.
When people use the slow-mo feature for gopro videos, everything kind of morphs rather than moving naturally. I've always found it to be a cool effect:
https://www.youtube.com/watch?v=dUSF6xmmqJg&t=46s
Photo-finish shots also end up looking pretty weird: http://coachdeanhebert.files.wordpress.com/2007/08/100-photo...
http://www.youtube.com/watch?v=Zt0u9hsPuZY
This effect was exploited to extract more information for this: http://newsoffice.mit.edu/2014/algorithm-recovers-speech-fro...