I process photos in ProPhoto RGB, and I’m switching my workflow to always publish images to the web as Display P3, which works just fine in JPEG and WebP by attaching a color profile.
Display P3 is moderately larger than the older sRGB standard; you trade some color resolution in the “mainstream” region for more saturated greens and reds.
4K TVs use Rec. 2020, which has a huge color gamut. Because it covers a much bigger space, 8-bit color is not enough; you need 10-bit, 12-bit, or more (I process in 16 bits), and neither JPEG nor WebP can handle that. AVIF can, but so can JPEG XL.
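To put rough numbers on those gamut sizes, here is a small sketch comparing the area of each gamut's primary triangle on the CIE 1931 xy diagram (the chromaticities below are the standard published primaries; xy-area is a crude proxy, since perceptual comparisons usually use u'v', but the ordering holds):

```python
# Rough gamut-size comparison: area of each gamut's RGB primary triangle
# on the CIE 1931 xy chromaticity diagram.

def triangle_area(primaries):
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Standard published R, G, B chromaticities for each color space.
SRGB    = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
P3      = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

for name, prims in (("sRGB", SRGB), ("Display P3", P3), ("Rec. 2020", REC2020)):
    print(f"{name:<10} xy-area = {triangle_area(prims):.4f}")
```

By this crude measure Display P3 is roughly a third larger than sRGB, while Rec. 2020 is nearly twice sRGB's size, which is why the extra code values of 10-bit and up matter there.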
I know people doing synthetic tests (instead of looking at the image, they run a program that estimates how bad the compression artifacts are) are impressed with AVIF, but I’ve done some shootouts with JPEG/WebP/AVIF/JPEG XL where I look at the images with my own eyes.
For pictures at moderate-to-low quality (say, images for a blog) I think AVIF does very well. But I want to publish pictures I took with my mirrorless, where I work really hard to get them “tack sharp” (e.g. sometimes a 4000x6000 image with my Sony looks almost like pixel art when you blow it up), and I want people to see something consistent with that on the web. And my experience is that AVIF falls down at that, it does not really save bits compared to JPEG and WebP at high quality. JPEG XL gives superior compression at high quality, it supports high color depths, and it’s an option I’d really like to have.
> And my experience is that AVIF falls down at that, it does not really save bits compared to JPEG and WebP at high quality.
In all the comparisons I've seen, it's not even a contest.
"I picked this image because it's a photo with a mixture of low frequency detail (the road) and high frequency detail (parts of the car livery). Also, there are some pretty sharp changes of colour between the red and blue. And I like F1.
Roughly speaking, at an acceptable quality, the WebP is almost half the size of JPEG, and AVIF is under half the size of WebP. I find it incredible that AVIF can do a good job of the image in just 18 kB."

https://jakearchibald.com/2020/avif-has-landed/
It'd be interesting to see file-size comparisons of lossless AVIF vs. JPEG's "almost lossless" quality-100 setting, but I haven't run across any yet.
Yes, the unfortunate thing is that Google is not interested in a higher quality Web so much as they are in a Web that is cheaper to index and serve.
So it's unsurprising that they have pushed the format optimized for "as few bits as we can get away with before things look too terrible" rather than actually improving quality and extending capabilities.
It seems AVIF has better compression at lower bit rates; at high bit rates they seem similar. AVIF especially shines for pictures with large homogeneous surfaces, like the sky.
However, AVIF is missing some important features, such as progressive image loading. The maximum resolution is apparently also quite limited.
Can you share some examples of such images/fragments?
Is Rec. 2020 defined by some standard as a requirement for being declared "4K", or is it just what seems to be happening because all or most of the panel makers just threw it in?
> I want people to see something consistent with that on the web.
Don't get too hung up on picking a file format then. All sorts of middleboxes, CDNs, and edge network acceleration systems can potentially "right-size" your image for what the requesting device can handle optimally.
So, all it takes to consider a small community requested change in Chromium is a massive protest from thousands of users, small businesses, and Fortune 500 companies for almost a year...
Or maybe they are just trying to keep feature parity with Safari.
Of course Apple adding support in Safari is far more important than Internet outrage!
At this point adding new {image, audio, video, compression} codecs to browsers is probably a net negative, unless there's a good chance they get deployed across the entire browser ecosystem. Safari is generally the browser that's most conservative about implementing anything new, so their support makes a huge difference in the viability of getting the format universally supported.
Yeah, Google's top three revenue sources are ads, ads, and ads. (Respectively search, network, and YouTube.) Their customers are advertisers. Chrome's job (and Android's) is to make sure they retain control of sufficient surface area to place ads. Chrome user opinion to them is important to their business in about the same way meatpackers care about what cattle think of the design of the feeding stations. As long as they keep coming to eat, it's just mooing.
I don't think anything has changed. JPEG XL being supported by Apple would only cover about 20% of users worldwide, assuming everyone uses it. According to the initial Google thread, this is likely not considered high enough interest.
With Google's study [1], by Google's own engineer, JPEG XL is nowhere near good enough compared to AVIF.
None of the above facts have changed since Google Chrome's decision on JPEG XL.
/S

[1] https://storage.googleapis.com/avif-comparison/index.html
Ignore the codec information (fascinating though that branch of comp.sci. is), what's interesting here is exactly how much Google is in control of Chromium, and by extension the web.
The fact that we have to get on our knees and plead for their consideration versus just fork and ship should make you ill. No compression without representation or some such.
I get that most bandwidth goes to video, but it would still be nice to have a great modern standard for images.
Scanning the comments here and I don't see anyone addressing the elephant in the room: PATENTS.
After a bit of searching, it's unclear what degree of "patent risk" comes with JPEG XL. JPEG historically was subject to patent troll lawsuits until the patent expired in 2006.
Please note that it's not enough for there to be a "royalty-free reference implementation" of JPEG XL, even if it's licensed with Apache 2.0, because you can't be sure from a glance that the Apache license patent grant includes all relevant patents. If you care about open source and free formats, you should look for two things: a comprehensive patent pool transferred to the standards body AND a royalty-free patent license to anyone with no strings attached.
The game here is that companies with potential claims over some techniques used within codecs have an incentive to withhold their patents from the official pool until years after adoption. Then they sue the biggest users of the codec (like Google) for obscene sums of money. That's why ALL of the patents used in a codec must be assigned to the standards body for open licensing, and you have to be SURE that none are withheld. This is difficult.
AVIF (and its standards body, the AOM) was created in part (I believe) to solve this very problem. All the major tech companies are members, and they've effectively agreed to a patent truce with regard to codecs.
This is arguably the most important commercial concern in distributing a browser for free that includes codecs. If you ship unlicensed codecs, some random company can crawl out of the woodwork five years later and sue you for a billion dollars.
In my view, AVIF only needs to be competitive on compression and quality. Its patent risk is so low that it is the obvious choice. AVIF is truly open, there are multiple implementations, and its reason for existing is to solve the codec patent problem.
Source: I was near the activities within Netflix that helped found the AOM.
Disclaimer: I'm not a lawyer and this isn't legal advice; also I'm several years out of date w.r.t. JPEG XL specifically, so I'd be happy to be corrected about the relevant patent risk. Maybe someone has better info?
Did Google cite patents as one of the reasons they removed support initially? I thought it was all about the lack of benefits and the difficulty of maintenance.
Apparently Microsoft was granted this patent on rANS in early 2022 (https://patents.google.com/patent/US11234023B2/en) and Google deprecated JPEG XL support late 2022. JPEG XL uses rANS, so I think there's some likelihood that this motivated Google to change their focus. Google didn't mention anything about this in their reasoning, but would they have mentioned patent issues publicly if that were the real reason? Google isn't obligated to tell us everything and the reasons they gave always felt weak and weirdly dismissive.
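For the curious, the symbol-coding step at the heart of rANS is remarkably small. Here is a toy big-integer sketch of it (illustrative only: production coders renormalize the state into a fixed-width register and stream bits out, which this deliberately skips):

```python
# Toy rANS (range Asymmetric Numeral System) coder using Python big ints.
# Real implementations renormalize the state and emit bits as they go;
# this shows only the core encode/decode symbol step.

def build_cum(freqs):
    """Cumulative frequency table and total frequency (the 'M' of rANS)."""
    cum, c = {}, 0
    for sym in sorted(freqs):
        cum[sym] = c
        c += freqs[sym]
    return cum, c

def rans_encode(symbols, freqs):
    cum, M = build_cum(freqs)
    x = 1  # initial state
    for s in symbols:
        f, c = freqs[s], cum[s]
        x = (x // f) * M + c + (x % f)
    return x

def rans_decode(x, count, freqs):
    cum, M = build_cum(freqs)
    out = []
    for _ in range(count):
        slot = x % M
        s = next(sym for sym in freqs if cum[sym] <= slot < cum[sym] + freqs[sym])
        out.append(s)
        x = freqs[s] * (x // M) + slot - cum[s]
    out.reverse()  # rANS is last-in-first-out
    return out

freqs = {"a": 5, "b": 2, "c": 1, "d": 1, "r": 2}
state = rans_encode(list("abracadabra"), freqs)
print(state, rans_decode(state, 11, freqs))
```

Encoding and decoding are exact inverses, with the quirk that the decoder emits symbols in reverse encoding order.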
Although I can sympathize, I don't really understand the point of opening a new issue when all the same information has already been left in comments on the old closed issue.
If the new issue gets closed, then it just reaffirms that the Chromium team doesn't care about this feature request. If the new issue somehow convinces the team to do something about it, then it shows that the team is utterly dysfunctional because their decision-making is more influenced by whether you say "pretty pretty please" in the right way than by the content of the discussion.
I still need to sit down and convert my personal Linux computer over to using JPEG XL for picture archival and figure out what tools need to change or be updated.
Using it on the web is one thing, but getting better compression for my family photos would also probably be a win, and I suspect it would be possible to build a pipeline for viewing/editing that would be fairly transparent.
Combined with GNU parallel, I did this: JPEGs get losslessly recompressed to JPEG XL, and PNGs and (lossless) WebPs get converted to lossless JPEG XL.
This isn't a good article, because of how biased it is against Google. It ignores that there is an added cost to Google and their partners in supporting the format, and it ignores the recommendation to use a WASM decoder.
I'm just reading through the wikipedia page on this for the first time.
Does JPEG XL allow encoders to switch between the DCT and modular modes on a per-macroblock basis, or is it just on a per-channel basis?
If it's the former then I can see this offering a lot of utility over other image formats because you'd be able to disable the DCT on high-contrast macroblocks and finally be done with all those god-awful "checkerboard" artifacts around the edges of objects.
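The connection between frequency transforms and those artifacts can be seen with a toy experiment (a 1-D analogue, not JPEG XL's actual transform set): take a hard step edge, keep only the lowest few DCT coefficients as coarse quantization effectively does, and the reconstruction wiggles around the edge:

```python
# Gibbs-style ringing demo: an 8-sample step edge reconstructed from only
# its three lowest-frequency coefficients (orthonormal DCT-II / DCT-III).
import math

N = 8
step = [0.0] * 4 + [1.0] * 4  # a hard edge, like an object boundary

def scale(k):
    return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

def dct(x):  # orthonormal DCT-II
    return [scale(k) * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                           for n in range(N)) for k in range(N)]

def idct(X):  # orthonormal DCT-III (the inverse)
    return [sum(scale(k) * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(N)) for n in range(N)]

coeffs = dct(step)
kept = coeffs[:3] + [0.0] * (N - 3)  # crude stand-in for quantization
recon = idct(kept)
print("min:", round(min(recon), 3), "max:", round(max(recon), 3))
```

The reconstruction overshoots 1.0 and undershoots 0.0 near the edge; that over/undershoot is the halo around high-contrast boundaries, and it is exactly what a purely spatial (non-frequency) mode avoids.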
But if it's merely on a per-channel basis then I'm not sure I see the point, since I can already use a different format when I need lossless encoding; if anything, JXL would become an annoyance because I can't tell whether a JXL image is lossless based on the file's extension.
It was discussed here a few weeks ago: https://news.ycombinator.com/item?id=36801448
> Does JPEG XL allow encoders to switch between the DCT and modular modes on a per-macroblock basis, or is it just on a per-channel basis?
Tricky, but it can indeed be done on a per-macroblock basis. The encoding itself is fixed per frame, but JPEG XL mandates that zero-duration frames be merged with the prior frame, so multiple frames with different encodings can be used for that. In fact I believe patches already work like this.
JPEG XL has 10 8x8 transforms and 9 larger transforms (IIRC).
Two of the 8x8 transforms are extremely local. One is called IDENTITY and the other DCT2x2. It is very difficult to produce ringing artefacts when using these transforms.
When going to higher quality settings in libjxl, it tends to favor the DCT2x2 quite a bit.
This is in VarDCT -- not modular coding.
Can someone summarize the issue with JPEG XL? Is this something that really matters? I've seen it mentioned a couple of times in the last few days, but I don't see what the big deal is. Is it really that necessary?
JPEG is 30 years old, so we need something more modern (better compression, fewer visual artifacts, web-optimized, etc.). There was already a plan to replace it with JPEG 2000, but that failed, obviously, as we still use JPEGs.
Now several formats are competing, most notably AVIF (which is basically a single compressed AV1 video frame) and JPEG XL. JPEG XL might be slightly better in some cases (as AVIF is based on a video codec), and most importantly it's backwards compatible with JPEG, so we can re-encode 30 years of JPEGs to JPEG XL without image degradation. Wide support would help immensely in making the format a standard, as otherwise everybody will just continue to use JPEGs. Google is somewhat against this, as they already support AV1 and thus see no need to maintain a separate codec for JPEG XL.
Please can we have browsers not advertise support via Accept headers or <picture> tag support this time until they actually support all features, so that those don't become useless for progressive enhancement of anything that isn't a static lossy image.
How much work is actually involved in adding support for this format? Like is it just plugging an existing implementation into the abstractions they already have for other image formats? or is there more to it?
Integration of a new decoder is not all that complicated code wise. What is complicated is the effects of the change and ongoing support.
1. Binary size cost: in my experience working on Firefox, this is in the 100s of KiB range when adding a new decoder.
2. Ongoing costs: increased compile times, new integration tests, functional tests, and so forth, plus keeping those tests passing and non-flaky.
3. Once something is accepted into the web ecosystem, the intention is to support it for tens of years if not forever. Web feature deprecation is quite slow (e.g. <keygen> and <blink>), and the web has never deprecated a primary image format.
4. Security: a "new" binary format is a place for security vulnerabilities, crashes, and hangs. The web is an actively hostile place for web browsers.
For a browser, it means permanent, forever support for the format and continued maintenance and security patching for the library. Any CVE, any issue that might make the browser insecure, will be blamed on the browser, and the developers will have to make sure any codec they ship stays safe forever.
That's the cost for the maintainers. Codecs are historically one of the most problematic sources of security issues (they're complex code that handles malicious downloaded files) and supporting a new one is a rather big maintenance burden for everyone involved.
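One standard mitigation is to treat the decoder itself as untrusted and run it out-of-process with a hard time budget, so a crafted file that makes the codec spin cannot hang the host. A minimal sketch, where `decode_cmd` is a hypothetical stand-in that simply sleeps (a real caller might exec something like a `djxl` invocation instead):

```python
# Sketch: run an image decoder as an untrusted child process with a
# wall-clock timeout, killing it if it exceeds its budget.
import subprocess
import sys

def decode_untrusted(path, timeout_s=2.0):
    # Hypothetical stand-in for a real decoder command; this child hangs.
    decode_cmd = [sys.executable, "-c", "import time; time.sleep(60)"]
    try:
        subprocess.run(decode_cmd, timeout=timeout_s, check=True)
        return "decoded"
    except subprocess.TimeoutExpired:
        return "killed: decoder exceeded its time budget"

print(decode_untrusted("crafted.jxl", timeout_s=0.5))
```

Real browsers go much further (seccomp/sandbox process isolation, memory limits), but the principle is the same: the codec never runs with the host's full privileges.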
And if Chrome gets backdoored by a JXL library security hole, everyone will blame Google for it.
If, by any chance, supporting JXL becomes too much of a burden, everyone will again blame Google for being evil if they ever remove it from Chrome.
At this point this just seems like one of those internet religious wars instead of anything actually technically usable.
I bet that after (re)introduction, most of the people yelling for it won't actually convert their JPEGs to XL. Just like almost no one whining about Reader actually uses or pays for any of the alternatives.
> I bet that after (re)introduction, most of the people yelling for it won't actually convert their JPEGs to XL.
The idea is converting workflows to JPEG XL (and particularly to enable uses for which JPEG isn’t suitable and even AVIF is supposedly less optimal), not converting existing JPEGs, mainly.
Personally, I've got no great love for these new image formats.
It's always a pain in the ass when you discover your phone has actually been saving your photos as HEIC or WebP or AVIF or whatever, and hardly anything will open them.
I could understand wanting to improve JPEG in the age of dial-up and 1.44MB floppy disks - 60% smaller images could have been a great benefit in those days. But today, even if I'm taking 30 photos every day at 4k resolution, it'd take 20 years to fill up a $50 1TB disk.
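That back-of-the-envelope math holds up. A quick sketch, assuming roughly 4 MB per photo (my assumption, not the commenter's; a high-quality ~8 MP "4K" JPEG typically lands somewhere in the 2-6 MB range):

```python
# Sanity-checking the claim: 30 photos/day vs. a 1 TB disk.
MB = 1024 ** 2
photo_bytes = 4 * MB
photos_per_day = 30
disk_bytes = 10 ** 12  # 1 TB as marketed (decimal)

days_to_fill = disk_bytes / (photos_per_day * photo_bytes)
years_to_fill = days_to_fill / 365.25
print(f"~{years_to_fill:.0f} years to fill the disk")  # prints: ~22 years to fill the disk
```

Even halving the per-photo size only stretches that to four decades, so the "storage is cheap now" point stands either way.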
The other benefits of the format might be great for some specialist applications, but options like billion-pixel-wide images, 32 bits per channel and 4099 channels ready for medical imaging only get a shrug from me. I doubt my browser is going to start displaying 4099 channel images.
I just wish we could get rid of HEIC, WebP, and AVIF at the same time.