One of those annoying things is that the name "super resolution" stuck here.
Originally, super-resolution was a hardware technique, not "guessing". If you can control the imager's position with finer precision than the sensor's resolution, you can take multiple images and reconstruct a higher-resolution image in a principled way, for, say, a 2x resolution gain (cf. super-resolution microscopy, and some telescope systems). Some modern photographic systems actually do this directly on the sensor, possibly with piezo motors.
Of course this only works if what you are imaging is reasonably static over the time needed to take all the images.
You can do an approximate version of this with video, with caveats because you don't control the motion. The key thing is, though, you actually have more data to work with.
For a while, this idea ran in parallel with image-processing people attempting to estimate higher resolution from a single image, and unfortunately the terminology stuck in image processing as well. Something like "resolution extrapolation" would probably be better, but that ship sailed ages ago.
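To make the multi-frame idea concrete, here is a minimal sketch of the idealized case: four captures offset by exactly half a pixel in x, y, and both, interleaved onto a grid of twice the resolution. The function name and the perfect half-pixel shifts are assumptions for illustration; real systems must estimate the shifts and deal with pixel-area integration and noise.

```python
import numpy as np

def shift_and_add_2x(frames):
    """Interleave four low-res frames, offset by half a sensor pixel at
    (0, 0), (0, 0.5), (0.5, 0), and (0.5, 0.5), onto a 2x-resolution
    grid. Every output pixel comes from a real measurement."""
    f00, f01, f10, f11 = frames
    h, w = f00.shape
    hi = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    hi[0::2, 0::2] = f00  # no shift
    hi[0::2, 1::2] = f01  # half-pixel shift in x
    hi[1::2, 0::2] = f10  # half-pixel shift in y
    hi[1::2, 1::2] = f11  # half-pixel shift in both axes
    return hi
```

Sensor-shift ("pixel shift") modes in some cameras are essentially this, with the half-pixel offsets produced mechanically.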
There are forms of super-resolution that certainly aren't guessing. For example, you can take a video of a subject and integrate over time, so that the motion of the subject over the sensor allows you to infer sub-pixel detail.
Unfortunately, someone is going to wrap up super-resolution for critical tasks and sell it, likely causing many people harm, or at least inconvenience. I have already tried to talk some companies out of using it for police/surveillance work. People who do not understand the technology are determined to use it, and someone is going to.
Lots of things are "only guessing." Auto color correction is only guessing. Unsharp mask is only guessing. Smart selection is only guessing. Content-aware fill is only guessing.
They're still useful tools to have in your toolbox as a photographer or designer, even for critical tasks, and I don't really see how this is different. There may be certain failure cases, but everything has failure cases.
Is there any proof that Ryan Gosling's face (or perhaps a photograph of Ryan Gosling's face) was in fact not there when the original photo was taken? :)
Machine learning is educated guessing based on previously seen data. As mentioned by others, there are ways to do super-resolution that use only the data available. I can't think of any that can upscale a single image, although I have vague memories of seeing something about using moiré patterns to infer the higher-resolution texture of some features.
I feel like it's also ok for critical tasks if you're willing to accept that it isn't perfect. If all you have is a grainy photo, you'll only be able to make guesses yourself; why not have a superhuman guess too? (Because the people putting it to use would be morons about it, I know, let me dream)
That reminds me of the old CSI episodes where they'd have grainy footage from a CCTV running at around SIF/240p, and the lead investigator would say "Enhance... Enhance... Enhance... There!" And the face of the killer would be clear as day, from inside a moving car a block away.
Is this single example of someone using different software (by Topaz Labs) relevant to this article specifically? Or just every article about enhancement?
"Guessing", I think, is too strong a way to phrase what super-resolution does. The broader concept of, for example, regularized solving of inverse problems is widely used in things like CT and MRI, where the reconstructed imagery is used for analysis. The regularization is effectively the part you're calling guessing, but I would phrase it as enforcing assumptions about the data. Neural-network-based approaches are similarly learning the distribution of the output data.
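As a toy illustration of that framing, here is a minimal Tikhonov-regularized reconstruction with a made-up measurement model (each low-res sample averages two neighbouring high-res samples). The `lam` term is precisely the "enforced assumption" that makes the under-determined inversion well-posed; none of this reflects any particular scanner or product.

```python
import numpy as np

def tikhonov_upscale(y, A, lam=0.1):
    """Closed-form minimizer of ||A x - y||^2 + lam * ||x||^2.
    A models the measurement (blur/downsample); lam * I encodes the
    prior that regularizes the otherwise under-determined problem."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Hypothetical measurement model: 4 low-res samples of an 8-sample
# signal, each the average of two neighbouring high-res samples.
A = np.zeros((4, 8))
for i in range(4):
    A[i, 2 * i] = A[i, 2 * i + 1] = 0.5
```

With a small `lam`, the reconstruction reproduces the measurements almost exactly while picking one plausible high-resolution signal among the many that fit.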
I just wish Adobe CC wasn't the buggiest piece of software I've ever used.
I've had a number of issues over the years, but my current issue is that when I try to open CC the interface elements all freeze and are unclickable (even though the window is still scrollable – very strange behavior). So I went to uninstall it, but I can't because Photoshop is installed. So I went to uninstall Photoshop, but you guessed it, I can only uninstall PS through CC, which is unresponsive.
Note that this isn't the same as uninstalling everything via the official process since this leaves behind stuff like Adobe's Genuine client which verifies you're not using pirated software.
People are doing this to themselves. Stop buying their shitty software, and see how quickly they start to fix it.
I don't understand why people are so obsessed with Adobe, since their software nowadays isn't that good. There are tons of alternatives out there that work better and do the same thing, if not more.
Is it just laziness/reluctance to learn something new?
It's not only Creative Cloud itself but all the newer versions of their apps I recently used.
I needed to update my CV recently and expected to spend 1h in InDesign. I spent 6h in the end.
– InDesign crashes while saving and destroys my document. 1h lost.
– InDesign crashes while exporting a PDF of my document (9 pages). I hadn't saved. 1h lost.
– InDesign crashes (reproducible) when adding/inserting a page (mind you, that's page 10).
First time this happened I hadn't saved for half an hour. I was really considering changing the text because I couldn't solve this.
Then I found [1]. Quote:
> [...] after speaking to Adobe chat help, they asked me to send my file to them. They sent it back to me and everything went back to normal. [...] "File was corrupted , we recovered it by using scripts and then saved as IDML."
– Because of the above, I had the idea of exporting to IDML. Re-importing then allowed me to add the page, but I got subtle formatting errors: on lines whose font was changed via a character style, the last character before a tab or a newline had the wrong style. Fixing this: 1h.
– When I re-arranged parts of the CV via copy & paste, entire sections I copied lost the small-caps/italic styles they had assigned (acronyms/names). Going through the entire document to fix this: 1.5h.
I should have known better. Less than two years ago I helped a friend do a snail mail mass mailing where we used a CSV file with addresses to create hundreds of (two page) letters. All in InDesign. Everything worked until we tried to export as PDF, for printing. The solution was to export as 'interactive' PDF and only export about ~100 pages at a time.
I had already bought Affinity Publisher when the thing with the letters happened. But I naively believed updating my CV would be quick in InDesign.
In retrospect typesetting the CV from scratch in Publisher would have been the better choice.
Last week I helped a friend with a commercial that was mostly 3D plus some motion graphics done in After Effects (Ae). We couldn't get it to render in After Effects 2019. It would run out of memory and then just not render the frame, or crash.
In the end we exported the project for an older version and went back to an Ae CC version from six years before. That worked without any issues.
All this is just shocking. I used InDesign from 1.0, and it was not this bad a decade ago. Ae: the same. See above.
As of a recent update, Acrobat Reader (free version) refuses to let me open any document w/o signing into CC first. Another wtf.
Pixelmator Pro (happy customer here) has great superresolution without all the cloud subscription baggage. I think it’s fair to make comparisons, which I will leave to those who have CC subscriptions, but anyone doing so should realize that Adobe is being compared to a moving target as outside options and even DIY options are only getting better.
Yes, we're not talking about accuracy here, just perceived resolution; no need to hammer on that.
This sounds like the same approach as Gigapixel AI from Topaz Labs.
I haven't tried Gigapixel but I have used Topaz' Video Enhance AI, which is phenomenal. I've been using it to upscale old TV shows which never got an HD remaster, to UHD.
Right now it's running through the first episode of Firefly, converting from 540p to 2160p. (The Blu-ray rip was basically a 1080p upscale of the original 540p production, so I first converted it back to 540p in Handbrake, with no noticeable loss in quality since I used a near-lossless compression factor; this gives better upscaling results.)
When it's done I'll run it through Flowframes for framerate interpolation. Then maybe another pass in Handbrake to figure out an optimal size for the end file.
Then I'll run through the rest of the season using the same settings I tested with this first episode.
A great misconception about photography is that it's an objective medium. Any combination of lenses and film stocks (or the digital equivalent) yields but a flat, skewed representation of the three-dimensional world; the scene has been interpreted before anyone performs any processing, computational or analog.
Susan Sontag's "On Photography" is a great read on this topic for anyone marginally interested in not just photography, but art in general.
Back in the day, when they first had their sub model, it was soooo much cheaper than it is now :(. Here's hoping some guy who writes super-resolution ML models on the side is a big fan of Gimp.
I’ve used this a few times already and it works perfectly about 80% of the time. If your picture has a lot of grain or low-level noise it’s just going to make the noise worse, then you throw in a median to de-noise and you’ve lost the benefit. But it’s otherwise a nice tool to have, especially for older low-res photos (like 800x600 stuff you want to print).
I think longer-term, stuff like neural rendering will make super-resolution less relevant. If you can re-create a 3D scene from a single photo, or otherwise reconstruct the photo in a less resolution-dependent way, then playing the super-resolution game is less interesting (for users and researchers alike).
If you're having an existential crisis over interpolated/extrapolated/hallucinated images, and have been assuming that every stage of a camera throws away bits instead of interpolating, here is a list of stages in most camera pipelines that already try to interpolate information:
* demosaicing: interpolates color from nearby pixels. Each pixel gets just one of the three color components. The other two are interpolated.
* decompressing jpeg: tries to guess information the compressor lost.
* black field correction: adjusts the brightness at every pixel to compensate for the different sensitivity of each pixel.
* de-vignetting: compensates for the borders of the image being darker than the center.
* auto white balance: compensates for the fact that your eye's color constancy doesn't work as it would in a natural setting. This is a complicated way to get you to see the colors you would have seen had you viewed the full scene.
All of these try to recover some aspect of the signal that was irretrievably lost by a previous step. They do this by making plausible guesses.
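To make the demosaicing bullet concrete, here is a sketch of the bilinear step for the green channel (illustrative only; real pipelines use smarter, edge-aware variants). `interpolate_green` and its arguments are names made up for this example:

```python
import numpy as np

def interpolate_green(mosaic, green_mask):
    """Estimate the green value at pixels that measured red or blue by
    averaging the four cross neighbours, which in a Bayer pattern are
    all green sites. Measured green pixels pass through unchanged."""
    padded = np.pad(mosaic.astype(float), 1, mode="edge")
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    estimate = (up + down + left + right) / 4.0
    return np.where(green_mask, mosaic, estimate)
```

Two thirds of the color values in a typical camera JPEG come from a step like this: plausible guesses, not measurements.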
One thing I've always wanted was an AI algorithm for extracting additional detail from multiple RAW camera frames of the same scene. Many photographers will typically take 10-100 shots of a subject to ensure that at least one picture is a "keeper".
Keeping one frame and discarding the rest is a bit wasteful in a sense. The other frames have useful information that could be extracted by a well-trained AI to provide super-resolution, increased DoF, additional blur or shake reduction, etc...
I've deliberately kept all of my RAW frames, even the not-so-sharp or slightly shaky ones, because I foresee that at some point in the future this will be an automatic thing that tools like Adobe Lightroom will do to maximise the available image quality.
Storage is cheap, but I can never go back in time and photograph my memorable occasions with a better camera from the future...
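Even before any learned model enters the picture, the discarded frames carry real signal: averaging N aligned exposures of a static scene cuts independent sensor noise by roughly a factor of sqrt(N). A minimal sketch, assuming alignment has already been done:

```python
import numpy as np

def stack_frames(frames):
    """Average N aligned exposures of the same static scene. Because
    the noise is independent across frames, the standard deviation of
    the averaged result drops by about sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)
```

A learned multi-frame model could go further (handling motion, exploiting sub-pixel shifts), but this is the extra information it would be drawing on.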
When I see software enhanced photography like this, and look at the relatively primitive processing that is happening on my DSLR and my high priced mirrorless cameras, I realize that despite their huge sensors and amazing glass (which I paid a small fortune for), they will soon be outclassed by the simple smartphone in my pocket.
My wife routinely shoots photos on her Pixel 3 that get a better response on our family whatsapp group than the painstakingly post-processed DSLR shots I create and post.
This could be an indictment of my failures as a photographer. Or perhaps my family has no taste in photos. But it's also entirely possible that a Pixel 3 is all the camera you really need for family documentary work ... and I've wasted so much money on unnecessary hobby gear.
I would have liked it if they had more comparisons to ground-truth images instead of resampled ones. The foliage and bear comparisons also look like the "super resolution" images had their contrast boosted, which is either an awkward artifact of the scaling or misleading pre/post-processing.
arnaudsm | 5 years ago
Super-resolution is only guessing. It's ok for art, not for critical tasks.
ska | 5 years ago
jonplackett | 5 years ago
https://www.youtube.com/watch?v=2aINa6tg3fo
timthorn | 5 years ago
https://www.cs.huji.ac.il/~peleg/papers/icpr90-SuperResoluti...
Datenstrom | 5 years ago
danShumway | 5 years ago
I worry this is going to be a case where the marketing is at direct odds with public education efforts.
jonas21 | 5 years ago
porphyra | 5 years ago
tshaddox | 5 years ago
savant_penguin | 5 years ago
cycrutchfield | 5 years ago
sorenjan | 5 years ago
ravi-delia | 5 years ago
nojokes | 5 years ago
Notably, some cameras can do this deliberately via sensor shifting.
They should not call it super-resolution; at best, "emulated super-resolution" or "artificial super-resolution".
ineedasername | 5 years ago
natemo | 5 years ago
Who knows how this evolves and what new applications people may devise? For today, I agree: it's just art.
ryanwhitney | 5 years ago
>To be clear, this isn’t a knock on the Gigapixel software. Vaarakallio tells PetaPixel that the software is “amazing” and he uses it all the time.
aktuel | 5 years ago
rom1v | 5 years ago
greggturkington | 5 years ago
dirtyid | 5 years ago
kthartic | 5 years ago
>you may want to uncheck detect faces… unless you want Ryan Gosling popping up all over the place.
Sooo not really a case against super-resolution, just a funny result of having used the wrong settings.
oivey | 5 years ago
nnmg | 5 years ago
- https://en.wikipedia.org/wiki/Super-resolution_microscopy
- Stimulated emission depletion microscopy (STED): https://en.wikipedia.org/wiki/STED_microscopy
- stochastic optical reconstruction (PALM/STORM)
- structured illumination microscopy (SIM)
Here is one of my favorite STED imaging papers, looking at the skeleton of neurons: https://www.sciencedirect.com/science/article/pii/S221112471...
fastball | 5 years ago
Smh.
judge2020 | 5 years ago
https://helpx.adobe.com/creative-cloud/kb/cc-cleaner-tool-in...
bogwog | 5 years ago
virtualritz | 5 years ago
What a friend of mine replied when he heard about my InDesign adventure:
> I'm on CS6 for anything Adobe. Just junk now.
[1] https://community.adobe.com/t5/indesign/indesign-crashes-whe...
medicineman | 5 years ago
[deleted]
taisalie | 5 years ago
[deleted]
natch | 5 years ago
endlessvoid94 | 5 years ago
I hear it’s just monstrously fast on the M1 too.
heroprotagonist | 5 years ago
https://i.imgur.com/hcRYM5n.jpg
psychomugs | 5 years ago
BiteCode_dev | 5 years ago
At what point do we go from picture to painting?
sbarre | 5 years ago
https://news.ycombinator.com/item?id=26448986
beyondcompute | 5 years ago
rhacker | 5 years ago
choppaface | 5 years ago
TwoBit | 5 years ago
sctgrhm | 5 years ago
turrini | 5 years ago
https://topazlabs.com/gigapixel-ai/
rahimiali | 5 years ago
jiggawatts | 5 years ago
TheMagicHorsey | 5 years ago
ISL | 5 years ago
unknown | 5 years ago
[deleted]
bryanlarsen | 5 years ago
zokier | 5 years ago
Faaak | 5 years ago
And then the jury decides to use "Super resolution" to "enhance" the picture, and the ML model decides that what it saw was a gun instead of a rose.