top | item 3097235

Photoshop 'unblur' leaves MAX audience gasping for air

564 points | suivix | 14 years ago | 9to5mac.com

123 comments

[+] snikolov|14 years ago|reply
If you are wondering how they might be doing it, here is one approach that I saw in a computer vision class (no idea if they are doing anything similar to this):

slides: http://cs.nyu.edu/~fergus/presentations/fergus_deblurring.pd... (~60 MB ppt); paper: http://cs.nyu.edu/~fergus/papers/deblur_fergus.pdf (~10 MB pdf)

The basic idea is that you have an unknown original image and it is convolved with an unknown blurring kernel to produce the observed image. It turns out this problem is ill-posed: you could have a bizarre original image blurred with just the right bizarre blurring kernel to produce the observed image. So to estimate both the original image and the kernel, you have to minimize the reconstruction error with respect to the observed image while penalizing unlikely blurring kernels or original images. If you extract enough statistics from a dataset of natural images, you can tell whether an image is likely by comparing its statistics to those of the dataset. Similarly, simple blurring kernels are favored over complex ones (think "short arc of motion" vs. "tracing the letters of a word with your camera").
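To make the ill-posedness concrete, here is a minimal 1-D toy sketch (my own illustration, not the Fergus method or Adobe's algorithm): blurring is multiplication in the frequency domain, naively dividing by the kernel explodes at frequencies the kernel nearly kills, and even a crude quadratic penalty stabilizes the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(256)                     # unknown original "image" (1-D for brevity)
k = np.zeros(256)
k[:8] = 1.0 / 8.0                       # unknown 8-pixel motion-blur kernel

X, K = np.fft.fft(x), np.fft.fft(k)
y = np.real(np.fft.ifft(X * K)) + 0.001 * rng.standard_normal(256)  # observed
Y = np.fft.fft(y)

# Naive inversion divides by the near-zero frequencies of K: ill-posed.
naive = np.real(np.fft.ifft(Y / K))

# Penalizing unlikely solutions (here a crude quadratic penalty lam,
# standing in for the natural-image prior) tames the inversion.
lam = 1e-3
reg = np.real(np.fft.ifft(Y * np.conj(K) / (np.abs(K) ** 2 + lam)))

err_naive = np.linalg.norm(naive - x) / np.linalg.norm(x)
err_reg = np.linalg.norm(reg - x) / np.linalg.norm(x)
print(err_naive, err_reg)   # naive error explodes; regularized stays small
```

The real methods also have to estimate the kernel itself, which is the genuinely hard part; this only shows why a prior is needed even when the kernel is known.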

[+] Jach|14 years ago|reply
I recently discovered two neat papers on deconvolution from a Bayesian perspective, written by Kevin Knuth.

http://knuthlab.rit.albany.edu/papers/knuth-ica99.pdf

Abstract: The problem of source separation is by its very nature an inductive inference problem. There is not enough information to deduce the solution, so one must use any available information to infer the most probable solution. We demonstrate that source separation problems are well-suited for the Bayesian approach which provides a natural and logically consistent method by which one can incorporate prior knowledge to estimate the most probable solution given that knowledge. We derive the Bell-Sejnowski ICA algorithm from first principles, i.e. Bayes' Theorem and demonstrate how the Bayesian methodology makes explicit the underlying assumptions. We then further demonstrate the power of the Bayesian approach by deriving two separation algorithms that incorporate additional prior information. One algorithm separates signals that are known a priori to be decorrelated and the other utilizes information about the signal propagation through the medium from the sources to the detectors.

http://knuthlab.rit.albany.edu/papers/knuth-eusipco05-final....

Abstract: Source separation problems are ubiquitous in the physical sciences; any situation where signals are superimposed calls for source separation to estimate the original signals. In this tutorial I will discuss the Bayesian approach to the source separation problem. This approach has a specific advantage in that it requires the designer to explicitly describe the signal model in addition to any other information or assumptions that go into the problem description. This leads naturally to the idea of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.

[+] salem|14 years ago|reply
Yes, and it seems that their algorithm needs hints for natural images versus text
[+] chopsueyar|14 years ago|reply
I assume the RedLaser barcode app uses something similar from captured video frames combined with data from the accelerometer, no?
[+] Geee|14 years ago|reply
It's called blind deconvolution. "Blind" means that they first have to estimate the original convolution/blur kernel, and in the second phase apply the deconvolution. If there's an acceleration sensor on the camera, you can use its data to estimate the blur kernel.
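The sensor idea can be sketched like this (a hypothetical trajectory, not real sensor data): each sample of the camera's path during the exposure marks where light was smeared, so accumulating the samples on a grid and normalizing gives a blur kernel (point-spread function).

```python
import numpy as np

def kernel_from_trajectory(traj, size=15):
    """traj: (N, 2) array of (x, y) camera offsets in pixels during exposure."""
    psf = np.zeros((size, size))
    c = size // 2
    for x, y in traj:
        ix, iy = int(round(x)) + c, int(round(y)) + c
        if 0 <= ix < size and 0 <= iy < size:
            psf[iy, ix] += 1.0      # each sample = an equal slice of exposure time
    return psf / psf.sum()          # normalize so the kernel preserves brightness

# Example: a short curved shake, the kind a gyroscope trace might record
t = np.linspace(0, 1, 200)
traj = np.column_stack([4 * t, 2 * t ** 2])   # made-up path for illustration
psf = kernel_from_trajectory(traj)
print(psf.sum())   # ~1.0: a valid, energy-preserving blur kernel
```

With a kernel like this in hand, the problem reduces to ordinary (non-blind) deconvolution.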

It's nothing new really, but algorithms for it have advanced tremendously. For example, here are some results from 2009: http://www.youtube.com/watch?v=uqMW3OleLM4

Teuobk on HN also made a startup/app based on this, but it seems to be down now: http://news.ycombinator.com/item?id=2460887

[+] wickedchicken|14 years ago|reply
One key problem with deconvolution is it's very susceptible to noise. I'm guessing they developed a way to ramp up the coefficients so you see an increase in clarity while keeping the noise below visible levels. So much of image (and audio!) processing is about getting away with noise the person can't detect :)
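That trade-off can be illustrated with a toy model (my own sketch, nothing to do with Photoshop's actual coefficients): a Wiener-style inverse filter with a regularization weight `lam`, where a larger weight suppresses noise amplification at the cost of less aggressive deblurring.

```python
import numpy as np

n = 512
k = np.zeros(n)
k[:7] = 1.0 / 7.0                 # 7-pixel motion-blur kernel
K = np.fft.fft(k)

def noise_gain(lam):
    # Wiener-style inverse filter H = conj(K) / (|K|^2 + lam).
    # White sensor noise passes through with RMS gain sqrt(mean |H|^2).
    H = np.conj(K) / (np.abs(K) ** 2 + lam)
    return float(np.sqrt(np.mean(np.abs(H) ** 2)))

for lam in (1e-6, 1e-3, 1e-1):
    print(lam, noise_gain(lam))   # the noise gain drops as lam grows
```

"Ramping up the coefficients" in the demo could plausibly correspond to sweeping a weight like `lam` downward until the amplified noise becomes objectionable.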

On a side note: does anybody know of workable deconvolution algorithms that vary the kernel over the image? The example would be compensating for a bad lens.

[+] teuobk|14 years ago|reply
Indeed, I gave it a go with a blind deconvolution product for the consumer market. In the end, I decided to kill it. Here's my blog post describing why I pulled the plug: http://www.keacher.com/?p=872
[+] sfvisser|14 years ago|reply
I'm no expert in this so correct me if I'm wrong, but isn't a blur convolution only ambiguously reversible? You still have to make smart guesses about how to undo the averaging, right?
[+] mturmon|14 years ago|reply
Blind deconvolution is easier in the time domain, and for digital signals (because you know both the input and output can have only certain levels, thus constraining the problem). It's been used routinely in modems for years.
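Here's a sketch of how that constraint helps (an illustrative decision-directed LMS equalizer, with a made-up channel, not any particular modem standard): because the transmitted symbols can only be ±1, the receiver can use its own hard decisions as the training signal and adapt an inverse filter for an unknown channel.

```python
import numpy as np

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=5000)        # transmitted bits
channel = np.array([1.0, 0.4, -0.2])                # unknown to the receiver
received = np.convolve(symbols, channel)[:len(symbols)]

taps = np.zeros(7)
taps[0] = 1.0                # equalizer starts as a pass-through filter
mu = 0.01                    # LMS step size
buf = np.zeros(7)            # shift register of recent received samples
errors = 0
for i, r in enumerate(received):
    buf = np.roll(buf, 1)
    buf[0] = r
    y = taps @ buf                         # equalizer output
    d = 1.0 if y >= 0 else -1.0            # hard decision: snap to nearest level
    taps += mu * (d - y) * buf             # adapt toward the decided symbol
    if i >= 1000 and d != symbols[i]:      # count errors after settling
        errors += 1
print(errors)
```

The known symbol alphabet is doing the work a natural-image prior does in the photo case: it tells the algorithm what a "plausible" original looks like.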
[+] mey|14 years ago|reply
Does this algorithm lend itself to video?

Probably would take a massive cloud of systems to correct a high-def video signal, but it would be impressive for many applications (news broadcasts, sports or any live event, security, or remote robots)

Granted, to achieve performance on the order of near-realtime DSP, it would require an impressive hardware system. Then again, when I can spend the price of a coffee and get access to a cloud of CPUs...

[+] whinybastard|14 years ago|reply
I remember doing some blind source separation for audio when I was at school, which lets you discriminate multiple voices in a noisy environment. I wonder what the set of output signals would be in the case of images... maybe the image illuminated by different sources?
[+] Kliment|14 years ago|reply
The random anti-intellectual comments from the guys in the wheely chairs were extremely annoying and unfunny. This guy is there, showing something truly amazing, and they're all "What's an algorithm? Haha!". And they'll get away with it too.
[+] artursapek|14 years ago|reply
Rainn Wilson is actually a really smart guy. He was playing the classic "stupid guy wowed by a genius" card; it was an actor's way of complimenting him.
[+] gridspy|14 years ago|reply
He made a good point. The presenter might as well have said "it works by being a computer program."

It was a pretty funny way of asking for more details imho.

[+] unreal37|14 years ago|reply
It was funny. "This should be in the next version of Photoshop. Will I pay for it? No."
[+] dlsspy|14 years ago|reply
Let me load the specially constructed set of parameters specific to this image so that when I do the next step you get a really clear image.

That was a little too hand-wavy. I'm a little dubious until I see what went into that phase.

[+] stan_rogers|14 years ago|reply
To an extent, this is already available -- for example in the Topaz Labs InFocus Photoshop plugin. There are some params to play with that make it easier to find the blur trajectory when the blur is motion-related (although if you leave it in "figure it out for yourself" mode, it gets it right often enough). InFocus (the current version) will only do linear trajectories, though -- it can't handle curves as well as this Photoshop sneak does.

The parameter preload isn't cheating -- if they're anything like the InFocus params, they're pretty obvious but somewhat tedious. They're things like telling it that you're trying to correct motion blur rather than focus blur, what level of artifacting you're willing to put up with (for forensics or text recovery, you can put up with a lot of noise in the uninteresting part of the picture), the desired hardness of recovered edges, that sort of thing. It would have just been a time-waster for the demo (and, like in the demo, InFocus allows you to save the params as a preset).

[+] jczhang|14 years ago|reply
He used the same parameter profile each time.
[+] ck2|14 years ago|reply
I'm more impressed with that overhead display - seems impossible?

How does it disappear at the end - or is that a virtual digital overlay?

Wait, is the entire background rear projected, like a borderless movie theater screen? Must be massive resolution ?!

[+] nknight|14 years ago|reply
I'm inclined to say much or all of the background is indeed all rear-projected. If you look at the top-right around 5:06 or 5:07, you see what looks a lot like it might be light-emitting text floating in midair. (EDIT: Correction! It's not just floating there, it scrolls left just as the camera is coming down, I missed that on the first viewing. So it's definitely not just on a physical banner.)

As for "massive resolution", slicing up a framebuffer and shooting out the components to multiple projectors wouldn't be a new idea, and I'll bet that's what was done here.

[+] benwerd|14 years ago|reply
Forensic police drama writers everywhere: vindicated.

This is seriously cool technology.

[+] cubicle67|14 years ago|reply
not really, because it's not the same thing

what CSI does is add information that wasn't initially there, whereas this is just unscrambling the information. This only works for photos where the camera moved during the exposure, so it should be great for low-light shots (indoors etc.) where you need a shutter speed a bit slower than optimal, say 1/5 second

[+] ookblah|14 years ago|reply
hahaha i was just thinking the same thing.

"zoom in on that. good. now....ENHANCE."

[+] po|14 years ago|reply
Does this work with just motion blur or also with aperture blur? It seems like they are calculating the motion of the camera so perhaps just the former.
[+] klodolph|14 years ago|reply
Defocus (or "aperture blur") cannot be corrected by the methods they mention in the video. However, there are other kinds of blur you can correct.
[+] waitwhat|14 years ago|reply
How is this different from what FocusMagic http://www.focusmagic.com/ has been offering for over a decade?
[+] teuobk|14 years ago|reply
FocusMagic handles only focus blur or linear motion blur, and either way, it requires a high level of user interaction to direct the deblurring. In effect, it is "non-blind" deconvolution.

The Adobe approach, on the other hand, handles complex (non-linear) motion blur and does so in a so-called "blind" way.

[+] ck2|14 years ago|reply
We're obviously going to need many more independent samples to compare both.
[+] shazam|14 years ago|reply
Wish they applied that algorithm on the video...
[+] hopeless|14 years ago|reply
Indeed. And held the camera steady. I have motion sickness now, and I'm sitting at my desk :(
[+] alanh|14 years ago|reply
Been hoping for this for a while! The information is there, it’s just distorted. Great to see Adobe keep pushing this kind of photo editing magic forward. I bet the maths are crazy.
[+] mturmon|14 years ago|reply
The information is not really there, because the phase is not captured by the sensor. All you have is the intensity of the light.
[+] bartwe|14 years ago|reply
To me it seems the magic is in getting the blur kernel in the first place. How do you get that?
[+] kondro|14 years ago|reply
Now all Photoshop needs is an unCrash feature.
[+] kstenerud|14 years ago|reply
Well, it KINDA looked like stuff was being unblurred, but it's really hard to tell with the camera panning around out of focus. The only part I could really be sure was actually unblurred was the phone number.
[+] nethsix|14 years ago|reply
I suppose this is more image sharpening than reconstruction. Is this very different from the technology on cameras/phones that tries to reduce photo blurriness due to unsteady hands?
[+] klodolph|14 years ago|reply
What makes you suppose that? On an abstract level, you can model blur and camera shake with a convolution kernel. You can then invert the kernel and get back the original image. As an analogy, imagine that someone gives you an audio file with an echo. You can subtract the echo with a filter. Camera shake is harder because of the extra dimension. (Of course, you only get back the exact original in the world of mathematics)
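The echo analogy can be made concrete (my own numbers for the delay and gain, chosen arbitrarily): an echo is a convolution, y[n] = x[n] + a·x[n−d], and when you know a and d (the "non-blind" case) a simple feedback filter undoes it exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(1000)           # original "audio"
a, d = 0.5, 37                          # echo gain and delay (known here)

y = x.copy()
y[d:] += a * x[:-d]                     # add the echo (the "blur")

xh = np.empty_like(y)                   # invert it sample by sample:
for n in range(len(y)):                 # xh[n] = y[n] - a * xh[n - d]
    xh[n] = y[n] - (a * xh[n - d] if n >= d else 0.0)

print(np.max(np.abs(xh - x)))           # ~0: exact recovery in math-land
```

Camera shake is the 2-D version of this, with the extra difficulty that a and d (the whole kernel) must themselves be estimated from the blurry image.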
[+] beagle3|14 years ago|reply
What's the difference between "sharpening" and "reconstruction"? Both take an input image and apply some filter that is the best estimate of the inverse of the filter originally applied (e.g., wrong focus).

The technique for unsteady-hand artifacts often has access to motion data, which this thing does not.

[+] ibuildthings|14 years ago|reply
One of the tricky parts of this method is extracting/guessing the camera motion path purely from image measurements. The better the estimate, the better the deblur kernel will be. What might be cool is if they could extract meta-information using inertial and gyroscope-like sensors (which are fast becoming standard in phones and cameras) to supplement the motion-path computation algorithm.
[+] TelmoMenezes|14 years ago|reply
So now we need blurring algorithms that cause actual information loss (I'm sure they already exist, but now there's suddenly a bigger market for them).
[+] hopeless|14 years ago|reply
We always did need those algorithms. There was a paper a few years ago on decoding Gaussian-blurred documents to reveal the redacted passages. The only safe way is to completely remove the original pixels, e.g. by drawing a black box over the text.
[+] splicer|14 years ago|reply
Just use content-aware fill in CS5.
[+] KevinEldon|14 years ago|reply
I've read a few of the very technical responses and they are great, but, for me, the takeaway was the audience response. It's exactly what I look for when I write software. I want that gasp, that moment where someone realizes they can do a hard thing much more easily. Where they realize that they just got a few moments of time back.