top | item 12619413

Subpixel: A subpixel convolutional neural network implementation with Tensorflow

210 points | jgoldsmith | 9 years ago | github.com | reply

44 comments

[+] maxander|9 years ago|reply
So, basically, this is the thing in a crime detective movie where the forensic analyst is looking at a terrible pixelated surveillance camera still and says "enhance," and the computer magically increases the resolution to reveal the culprit's face.

Just another entry on the "things that are supposed to be impossible that convolutional nets can do now."

[+] Eliezer|9 years ago|reply
And that's how that guy whose face appeared a few times in ImageNet became the world's most wanted terrorist, on the run for thousands of crimes.
[+] eejr|9 years ago|reply
Yup, up to a point! There are information-theoretic limits, though. You can fill in information, but the result will be biased, in this case by the training dataset. If the "enhance" is too strong, we should be careful about what we do with the results in forensics.

But man, it can make your internet pics look smooth! :) Thanks for the comment!

[+] duaneb|9 years ago|reply
I imagine you'd want PII erased from the training set, but the danger stands.
[+] joelthelion|9 years ago|reply
Except it might unblur to the face of someone else.
[+] anotheryou|9 years ago|reply
I think it's always problematic to compare against images upscaled via nearest-neighbor. The big pixels are hard for our brains to parse; we fixate on all the blocky edges.

A good content-unaware upscaling (one of the default Photoshop algorithms) would be a fairer baseline.

I also wonder what they used for the downscaling. I see 4x4 pixel blocks, but also some with 3 px or 7 px edge lengths.

This looks pixelated, and it's supposed to be a source file?: https://raw.githubusercontent.com/Tetrachrome/subpixel/d2e28...

[+] anotheryou|9 years ago|reply
from trustswz' comment:

https://arxiv.org/abs/1609.04802

The picture with the boat on page 13 is interesting. In the SRGAN version I would take the shore for some sort of cliff, while the original shows separate boulders.

[+] Roboprog|9 years ago|reply
Interesting image "upscale" algorithm.

I'm not familiar enough with the field to understand how the "neural net" part feeds in, other than doing parallel computation on the x-pos, y-pos, (RGB) color-type-intensity tensor, interpolated/weighted into a larger/finer tensor.

(linear algebra speak for upscaling my old DVD to HD, that sort of thing)

At the risk of exposing my ignorance, this has nothing to do with "AI", right? It's "just" parallel computation?

[+] eejr|9 years ago|reply
Yeah, no AI. It's low-level computer vision; there is no implicit understanding of the scene being enhanced here. We show the neural net several examples of low- and high-quality images, and it learns a function that makes the low-quality ones look more like the high-quality ones.

This may be disappointing for now, but in the write-up we are also pitching this same module for use in generative networks and other models that do build an understanding of the scene. Let's see what the community (and we ourselves) can do next...
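As a mechanical aside (a hedged sketch of the general "subpixel convolution" idea, not necessarily this exact repo's code): the network outputs r²·C channels at low resolution, and a periodic-shuffle step rearranges each group of r² channels into an r×r block of output pixels, yielding a C-channel image r times larger in each dimension. A minimal NumPy version, with function and variable names of my own choosing:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange an (H, W, r*r*C) tensor into (r*H, r*W, C).

    Each group of r*r channels at a low-res location becomes an
    r x r block of pixels in the upscaled output (the 'periodic
    shuffling' / depth-to-space operation)."""
    h, w, c = x.shape
    assert c % (r * r) == 0
    c_out = c // (r * r)
    # split channels into (r, r, c_out), then interleave into space
    x = x.reshape(h, w, r, r, c_out)
    x = x.transpose(0, 2, 1, 3, 4)        # (h, r, w, r, c_out)
    return x.reshape(h * r, w * r, c_out)

# a 2x2 'image' with 4 channels becomes a 4x4 single-channel image
lowres = np.arange(2 * 2 * 4).reshape(2, 2, 4)
print(pixel_shuffle(lowres, 2).shape)  # (4, 4, 1)
```

Because the shuffle is just a fixed reindexing, the convolutions before it all run at low resolution, which is the efficiency argument for this layer over upscaling first and convolving after.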

[+] tree_of_item|9 years ago|reply
Everything that we understand how to do is "not really AI". It's only "AI" when it's still a mystery. At least that's the way people act.
[+] zokier|9 years ago|reply
I'm not sure, but there seems to be something wonky in the input images. They are very blocky, so I thought that they would be just pixel doubled (or quadrupled) from low-res pictures, but the blockiness lacks the regularity I'd expect from pixel-doubled images.

How were the input images prepared?
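For what it's worth, the usual preparation in super-resolution work (I can't confirm what this repo actually did) is to take high-res images, downscale them by the target factor, and train the net to recover the originals. A hedged NumPy sketch of an integer-factor box-filter downscale, names my own:

```python
import numpy as np

def box_downscale(img, r):
    """Downscale an (H, W, C) image by integer factor r with a
    box filter (average over each r x r block). SR pipelines more
    often use bicubic, but the idea is the same: the low-res
    input is derived from the high-res training target."""
    h, w, c = img.shape
    h, w = h - h % r, w - w % r          # crop to a multiple of r
    img = img[:h, :w]
    return img.reshape(h // r, r, w // r, r, c).mean(axis=(1, 3))

hires = np.random.rand(9, 10, 3)
print(box_downscale(hires, 4).shape)   # (2, 2, 3)
```

Note the crop step: if the high-res images aren't exact multiples of the scale factor, cropping (or padding) choices like this could produce the irregular block sizes people are noticing in the examples.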

[+] anotheryou|9 years ago|reply
Super wonky indeed. It should also be compared against something like Photoshop's bicubic enlargement, or the original size, because the brain gets stuck on the pixel edges.
[+] Keyframe|9 years ago|reply
This is impressive! But I'll be really impressed once this 'new thing' brings us roto masks in motion, that is, isolating objects from the background of a movie with pixel-perfect accuracy. It will put a lot of people out of a job and make a lot of people happy at the same time.
[+] trobertson|9 years ago|reply
Considering motion blur, "pixel-perfect" is a difficult requirement.
[+] robertkrahn01|9 years ago|reply
And I always wondered how those photo enhancers in Blade Runner worked...!
[+] imaginenore|9 years ago|reply
The problem with subpixel images is that there are RGB and BGR monitors. Not only that, there are horizontal and vertical variations, and there's no way to tell which one the user has on the web. And that's not even counting all the mobile layouts like PenTile.

It's still useful, though; browsers, for instance, could use it for displaying downscaled images.

[+] mappu|9 years ago|reply
This project uses 'subpixel' to refer not to monitor subpixels but to the lost information between existing pixels in an image.

You're right, though, and that's why chroma hinting for subpixel AA has fallen out of favor. It also doesn't work on mobile, where the screen can rotate from RGB-horizontal to RGB-vertical at a moment's notice. This was changed for ClearType in Windows 8 (DirectWrite never did chroma hinting).

[+] eejr|9 years ago|reply
This is meant to be used in the data-processing step: you load your image from JPEG (or your video using ffmpeg), enhance the images, and then pass them to the next step where color rendering is done. You can do that in the browser or on mobile just fine.