jiggawatts|11 months ago
The learned frequency banks reminded me of a notion I had: Instead of learning upscaling or image generation in pixel space, why not reuse the decades of effort that has gone into lossy image compression by generating output in a psychovisually optimal space?
Perhaps frequency space (discrete cosine transform) with a perceptually uniform color space like UCS. This would allow models to be optimised so that they spend more of their compute budget outputting detail that's relevant to human vision. Color spaces that split brightness from chroma would allow increased contrast detail and lower color detail. This is basically what JPG does.
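For intuition, here's a minimal numpy sketch of the energy-compaction idea: a naive orthonormal DCT-II, not the fast factorized version real codecs use, applied to one 8x8 tile the way JPEG does. A smooth block survives losing three quarters of its coefficients almost unchanged, which is exactly the budget-shifting described above.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis: row k is the k-th cosine frequency."""
    x = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x[None, :] + 1) * x[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def dct2(block):
    """2-D DCT of a square block (JPEG applies this per 8x8 tile)."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeffs):
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C

# A smooth gradient block puts nearly all its energy into low frequencies,
# so discarding the high-frequency three quarters of the coefficients barely hurts.
block = np.linspace(0.0, 1.0, 64).reshape(8, 8)
coeffs = dct2(block)
roundtrip_err = float(np.abs(idct2(coeffs) - block).max())
coeffs[4:, :] = 0.0
coeffs[:, 4:] = 0.0
truncation_err = float(np.abs(idct2(coeffs) - block).max())
```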
mturnshek|11 months ago
You may already know this, but image generators like Stable Diffusion and Flux already do this in the form of “latent diffusion”.
Rather than operate on pixel space directly, they learn to operate on images that have been encoded by a VAE (latents). To generate an image with them, you run the reverse diffusion (actually flow in the case of flux) process they’ve learned and then decode the result using the VAE.
These VAE encoded latent images are 8x smaller in width/height and have 4 channels in the case of Stable Diffusion and 16 in the case of Flux.
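As rough arithmetic on those sizes (assuming a 512x512 RGB input; the helper name is mine, not any library's API), the 8x spatial downscale means the diffusion model manipulates far fewer values than pixel space:

```python
def latent_numel(h, w, channels, downscale=8):
    """Number of values in a VAE latent for an h x w image, per the 8x figure above."""
    return (h // downscale) * (w // downscale) * channels

pixels = 512 * 512 * 3                      # raw RGB values for a 512x512 image
sd = latent_numel(512, 512, channels=4)     # Stable Diffusion latent: 64*64*4
flux = latent_numel(512, 512, channels=16)  # Flux latent: 64*64*16

sd_ratio = pixels / sd      # 48x fewer values than pixel space
flux_ratio = pixels / flux  # 12x fewer
```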
I do think it would be more useful if it worked more like you said, though - if the channels weren’t encoded arbitrarily but some of them had pretty clear, useful human meaning like lightness, it would be another hook to control image generation.
To some extent, you can control the existing VAE channels, but it is pretty finicky.
crazygringo|11 months ago
> by generating output in a psychovisually optimal space? Perhaps frequency space (discrete cosine transform)
I've never understood the DCT to be psychovisually optimal at all. At lower bitrates, it degrades into ringing and blockiness that don't match a "simplified perception" at all.
The frequency domain models our auditory space well, because our ears literally process frequencies. Bringing that over to the visual side has never been about "psychovisual modeling" but about existing mathematical techniques that happen to work well, despite their glaring "psychovisual" flaws.
On the other hand, yes, an HSV color space could make more sense than RGB, for example. But I'm not sure it's going to provide significant savings? I'd certainly be curious. It also might create problems though, because hue is undefined when saturation is zero, saturation is undefined when brightness is zero, etc. It's not smooth and continuous at the edges the way RGB is. And while something like CIELAB doesn't have that problem, you have the problem of keeping valid value combinations "in bounds".
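The hue discontinuity is easy to demonstrate with the standard library's colorsys: two RGB values within 1e-4 of the same grey land a third of the hue wheel apart.

```python
import colorsys

grey = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)         # hue is arbitrarily reported as 0
greenish = colorsys.rgb_to_hsv(0.5, 0.5001, 0.5)  # hue snaps to ~1/3 (green)
bluish = colorsys.rgb_to_hsv(0.5, 0.5, 0.5001)    # hue snaps to ~2/3 (blue)

hue_gap = abs(greenish[0] - bluish[0])  # a third of the hue circle from a ~1e-4 RGB nudge
```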
nullc|11 months ago
Lossy image compression has mostly targeted an entirely different performance envelope.
E.g. in the image you can see a diagonal bands basis function. Image codecs don't generally have those-- not because they wouldn't be useful but because codec developers favor separable transforms that have fast factorizations for significant performance improvements.
I don't think we know, and we can't really make good comparisons between traditional tools and ML-powered compression because of this. We just don't have decades of effort where the engineers were allowed a million multiplies and a thousand memory accesses per pixel.
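Some back-of-envelope numbers for that budget gap (the layer sizes here are illustrative, not any particular model):

```python
def conv_mults_per_pixel(kernel, c_in, c_out):
    """Multiply count per output pixel for one dense 2-D convolution layer."""
    return kernel * kernel * c_in * c_out

one_layer = conv_mults_per_pixel(3, 64, 64)  # a single 3x3, 64->64 conv: 36,864 mults/pixel
stack = 28 * one_layer                       # a modest 28-layer backbone crosses a million
```

A fast factorized 8x8 DCT, by contrast, needs only a handful of multiplies per pixel.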
dahart|11 months ago
Interesting thoughts! First thing to mention is that if you look at the code, it uses SSIM, which is a perceptual image metric. Second is that it may be using sRGB, which isn’t a perceptually uniform color space, but is closer to one than linear RGB. I say that simply because most images these days are sRGB encoded. Whether Thera is depends on the dataset.
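For reference, here is a stripped-down SSIM sketch (one global window; the metric as actually used averages many local, usually Gaussian-weighted windows, and the constants below are the common defaults for unit dynamic range):

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """SSIM over a single global window, for images with values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
identical = float(ssim_global(img, img))  # 1.0 by construction
degraded = float(ssim_global(img, np.clip(img + 0.1 * rng.standard_normal((32, 32)), 0.0, 1.0)))
```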
Aren’t Thera’s frequency banks pretty darn close to a DCT or Fourier transform already? This is a frequency space decomposition & reconstruction, and their goal is similar to JPG in that it aims to capture the low frequencies accurately and skimp on the frequencies that matter less, either because they’re less visible or because they lead to error (aliasing artifacts). It doesn’t seem entirely accurate to frame this paper as learning in pixel space.
As far as perceptual color spaces, yeah that might be worth trying. It’s not clear exactly what the goal is or how it would help, but it might. Thera does use the same color spaces that JPG encoding uses: RGB and YCbCr, which are famously bad. Perceptual color spaces save some bits in the file format, and like frequency space, they are convenient and help with perceptual decisions, but it’s less common to see them used to save work, at least outside of research. Notably, image generation often needs to work in linear color space anyway, and convert to a perceptual color space at the end. For example, CG rendering is all done in linear space, even when using a perceptual color metric to guide adaptive sampling.
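The linear-vs-perceptual gap mentioned here is just the sRGB transfer function; a sketch of the standard piecewise curves:

```python
def srgb_to_linear(c):
    """sRGB decoding (EOTF) for a single channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse: linear light back to an sRGB-encoded value."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

mid_grey = srgb_to_linear(0.5)  # ~0.214: a half-coded sRGB value is far below half the light
```

An sRGB value of 0.5 corresponds to only about 21% of the light of white, which is why rendering and blending are done on linear values and converted at the end.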
Another question worth asking is whether in general a neural network already learns the perceptual factors. When it comes to black box training, if the data and loss function capture what a viewer needs to see, then the network will likely learn what it needs and use its own notion of perceptual metrics in its latent space. In that case, it may not help to use inputs and outputs that are encoded in a perceptual space, and we might be making incorrect assumptions.
In this case with Thera, the paper’s goal may be difficult to pin down perceptually. Doesn’t the arbitrary in ‘arbitrary-scale super resolution’ toss viewing conditions and the notion of an ideal viewer out the window? If we don’t even want to know what the solid angle of a pixel is, we can’t know very much about how they’re perceived.
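To make the viewing-conditions point concrete, here's a toy angular-density calculation (the screen sizes and distances are made-up examples): the same image covers wildly different visual angles per pixel depending on the display.

```python
import math

def pixels_per_degree(px_across, screen_width_m, view_dist_m):
    """Approximate angular pixel density at the center of a flat screen."""
    fov_deg = 2 * math.degrees(math.atan(screen_width_m / (2 * view_dist_m)))
    return px_across / fov_deg

desktop = pixels_per_degree(1920, 0.60, 0.60)  # ~36 px/deg
phone = pixels_per_degree(1080, 0.065, 0.30)   # ~87 px/deg: same pixels, different budget
```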
Seems like a nice result but wouldn’t have hurt for them to give a few performance benchmarks. I understand that the point of the paper was a quality improvement, but it’s always nice to reference a baseline for practicality.
Not disagreeing, but the number of parameters is listed in the single-digit millions (which surprised me). So, I would expect this to be very fast on modern hardware.
As others have mentioned, this model just puts emphasis on pixels and compression artifacts, so it's not of much use for improving old or low-quality images.
grumbel|11 months ago
I tried doing some pixelart->HD conversion with Gemini2.0Flash instead and the results look quite promising:
* https://imgur.com/a/t9F94F1
The images are, however, all over the place, as it doesn't seem to stick very close to the prompt. Trying to fine-tune the image with further chatting often leads to overexposed-looking pictures.
All the results are done with prompts along the lines of "here is a pixelart image, convert it into a photo" or some variation thereof. No img2img, LoRA or anything here; all plain Gemini chat.
Hi, this is a complete rework, though the core idea remains the same. Results are now much better due to improved engineering, and we compare to recent SOTA methods up until 2025. Also we have some new experiments and worked a lot on figures and presentation :)
Hi, author here :) It shouldn’t be OOD, unless it’s too noisy, maybe? And what scaling factor did you use? Single image SR is a highly ill-posed problem, so at higher upscaling factors it just becomes really difficult…
You were imagining something where you give it one grey pixel, then zoom in infinitely and read the Magna Carta? Where did you imagine it would get the information from?
Instead of training on vast amounts of arbitrary data that may lead to hallucinations, wouldn't it be better to train on high-resolution images of the specific subject we want to upscale? For example, using high-resolution modern photos of a building to enhance an old photo of the same building, or using a family album of a person to upscale an old image of that person. Does such an approach exist?
0x12A|11 months ago
Author here -- Generally in single image super-resolution, we want to learn a prior over natural high-resolution images, and for that a large and diverse training set is beneficial. Your suggestion sounds interesting, though it's more reminiscent of multi-image super-resolution, where additional images contribute additional information, which has to be registered appropriately.
That said, our approach is actually trained on a (by modern standards) rather small dataset, consisting only of 800 images. :)
MereInterest|11 months ago
Not a data scientist, but my understanding is that restricting the set of training data for the initial training run often results in poorer inference due to a smaller data set. If you’re training early layers of a model, you’re often recognizing rather abstract features, such as boundaries between different colors.
That said, there is a benefit to fine-tuning a model on a reduced data set after the initial training. The initial training with the larger dataset means that it doesn’t get entirely lost in the smaller dataset.
cma|11 months ago
https://arxiv.org/abs/1907.11503
https://arxiv.org/abs/2308.09110
With generative AI they tend to have a learned compressed representation instead (VAE).
littlestymaar|11 months ago
I've been wondering exactly this for a while, if somebody more knowledgeable knows why we're not doing that I'd be happy to hear it.
i5heu|11 months ago
Sadly this model really does not like noisy images that have codec compression artifacts, at least with my few test images.
mrybczyn|11 months ago
That said, your examples are promising, and thank you for posting a HF space to try it out!
nthingtohide|11 months ago
DLSS 3 vs DLSS 4 (Transformer)
https://www.youtube.com/watch?v=CMBpGbUCgm4
throwaway2562|11 months ago
It wouldn’t be more funny ha-ha, just more funny strange.
crazygringo|11 months ago
But it's extremely time-consuming and currently expensive.