top | item 39926004


nigeltao | 1 year ago

> all decoders will render the same pixels

Not true. Even just within libjpeg, there are three different IDCT implementations (jidctflt.c, jidctfst.c, jidctint.c) and they produce different pixels (it's a classic speed vs quality trade-off). It's spec-compliant to choose any of those.
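To see why the float/fixed-point families can disagree, here is a toy 1-D sketch (not libjpeg's actual code; jidctint.c uses a more elaborate scaled-integer factorization) comparing a double-precision 8-point IDCT against a fixed-point one with an 11-bit fractional part. The two can round the same coefficients to different pixel values:

```python
import math

def idct8_float(F):
    # Textbook 1-D 8-point inverse DCT in double precision.
    out = []
    for x in range(8):
        s = 0.0
        for u in range(8):
            cu = 1.0 / math.sqrt(2) if u == 0 else 1.0
            s += 0.5 * cu * F[u] * math.cos((2 * x + 1) * u * math.pi / 16)
        out.append(s)
    return out

SCALE = 1 << 11  # 11 fractional bits for the fixed-point table
TABLE = [[round(0.5 * (1.0 / math.sqrt(2) if u == 0 else 1.0)
                * math.cos((2 * x + 1) * u * math.pi / 16) * SCALE)
          for u in range(8)] for x in range(8)]

def idct8_fixed(F):
    # Integer multiply-accumulate, then round back down by the scale factor.
    return [(sum(F[u] * TABLE[x][u] for u in range(8)) + SCALE // 2) >> 11
            for x in range(8)]
```

The fixed-point result stays within one code value of the rounded float result, but "within one" is exactly where two spec-compliant decoders can emit different pixels.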

A few years ago, in libjpeg-turbo, they changed the smoothing kernel used for decoding (incomplete) progressive JPEGs, from a 3x3 window to 5x5. This meant the decoder produced different pixels, but again, that's still valid:

https://github.com/libjpeg-turbo/libjpeg-turbo/commit/6d91e9...

JyrkiAlakuijala | 1 year ago

Moritz, the author of that improvement, implemented the same for jpegli.

I believe the standard does not specify what the intermediate progressive renderings should look like.

I originally developed that interpolation mechanism for Pik. Moritz was able to formulate it directly in DCT space, so the smoothing doesn't require going to pixels at all; instead it is computed from a few of the low-frequency DCT coefficients.

nigeltao | 1 year ago

> I believe the standard does not specify what the intermediate progressive renderings should look like.

This is possibly getting too academic, but IIUC, for a progressive JPEG (e.g. one encoded by cjpeg with 10 0xDA Start Of Scan markers), it's actually legitimate to post-process the file, truncating it to fewer scans (and re-appending the 0xD9 End Of Image marker). The shorter file is still a valid JPEG, and so still relevant when discussing whether all decoders will render the same pixels.

I might be wrong about validity, though. It's been a while since I've studied the JPEG spec.
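The truncation itself is mechanical. A sketch, assuming a well-formed stream without exotic markers (the function name and the synthetic marker walk are mine, not from any library): walk the marker segments by their declared lengths, count Start Of Scan markers, and cut before the (N+1)th, relying on the rule that inside entropy-coded data a 0xFF byte is always followed by a 0x00 stuffing byte or an RSTn marker:

```python
def truncate_scans(jpeg: bytes, keep_scans: int) -> bytes:
    """Drop every scan after the first keep_scans and re-append EOI."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    i = 2
    scans_seen = 0
    while i + 1 < len(jpeg):
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: Start Of Scan
            if scans_seen == keep_scans:
                return jpeg[:i] + b"\xff\xd9"  # truncate + End Of Image
            scans_seen += 1
            i += 2 + ((jpeg[i + 2] << 8) | jpeg[i + 3])  # skip SOS header
            # Skip entropy-coded data: within a scan, 0xFF is always
            # followed by 0x00 stuffing or a restart marker (0xD0-0xD7).
            while i + 1 < len(jpeg):
                if (jpeg[i] == 0xFF and jpeg[i + 1] != 0x00
                        and not 0xD0 <= jpeg[i + 1] <= 0xD7):
                    break
                i += 1
        elif marker == 0xD9:  # EOI: End Of Image
            break
        elif 0xD0 <= marker <= 0xD7 or marker == 0x01:  # standalone markers
            i += 2
        else:  # every other marker segment carries a 2-byte length field
            i += 2 + ((jpeg[i + 2] << 8) | jpeg[i + 3])
    return jpeg
```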

andrewla | 1 year ago

I was not aware of that; I thought that it was pretty deterministic.

Nonetheless, for this particular case, comparing jpegs decoded into lossless formats is unnecessary -- you can simply compare the two jpegs directly based on the default renderer in your browser.

iggldiggl | 1 year ago

And nowadays, for subsampled images, libjpeg post classic version 6 insists on doing the chroma upscaling using the DCT where possible. So for classic 4:2:0 subsampled images (i.e. chroma resolution is half the luma resolution both horizontally and vertically), each subsampled 8x8 chroma block is now upscaled individually to 16x16 for the final image, which can and does introduce additional artefacts at the boundaries between those 16x16 px blocks. But the current libjpeg maintainer insists on the new algorithm because it is mathematically more beautiful…

Granted, the introduced artefacts aren't massive, but under certain circumstances they are noticeable, which is how I stumbled across that topic in the first place.

Thankfully, most software that isn't still stuck on libjpeg 6 has switched to libjpeg-turbo or some other library that continues to use a more sensible algorithm for chroma upscaling.
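The key difference is that the traditional "fancy" (triangle-filter) upsampler reads the neighbouring chroma sample, so it smooths across block boundaries, whereas per-block DCT upscaling cannot look outside its own 8x8 block. A 1-D sketch of the triangle filter (3/4 nearest + 1/4 next-nearest, edges replicated; exact rounding in libjpeg-turbo's real code differs slightly):

```python
def fancy_upsample_row(chroma):
    """2x horizontal chroma upsample with a triangle filter.

    Each output pixel is 3/4 of its nearest chroma sample plus 1/4 of the
    next-nearest one, so the interpolation crosses block boundaries.
    """
    n = len(chroma)
    out = []
    for i in range(n):
        prev = chroma[i - 1] if i > 0 else chroma[i]  # replicate at edges
        nxt = chroma[i + 1] if i < n - 1 else chroma[i]
        out.append((3 * chroma[i] + prev + 2) >> 2)
        out.append((3 * chroma[i] + nxt + 2) >> 2)
    return out
```

For a hard chroma edge like [0, 0, 0, 0, 100, 100, 100, 100], the filter produces a graded transition (…, 0, 25, 75, 100, …) right across the sample boundary; a per-block scheme that upscales each side independently leaves the step as a visible seam.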