From what I can tell, the author is focusing a bit too much on numbers, while being unsure why these alternatives haven't gained widespread adoption.
The comparison photo isn't great. Even my pedestrian eyes can tell that the WebP and AVIF conversions result in poorer photos (the most glaring example is the tree in the background), so immediately I'd not want my photos created, converted, or displayed in those formats.
But numbers aside, the main reason is that JPG is 'good enough'. It's the same reason old protocols like FTP and SMTP still hang around, and why customers still want CSVs/Excels over Parquets. If a thing is good enough, it will hang around for a very long time because there's no compelling reason to move away from it. Considering the bloat that websites already present to the user, the general lack of attention to bandwidth savings during development, AND the existence of 'workarounds' like CDNs, even the development teams have little incentive to look for savings here (for now).
There will be hundreds of workflows built around JPG's capabilities as well, from how cameras take photos and embed metadata into them, to how tools read that metadata. Think of embedded devices and webcams that produce images, which will be running 'in the field' for decades.
Additionally, it's not just about browser support, which is a very limited way of considering it. People working with those image types will want to know whether it's compatible on all desktop OSes and with tooling like GIMP, Photoshop, Affinity, exiftool, ffmpeg, ImageMagick, etc.
It'll probably be a good number of years before there is widespread adoption that enables those workflows, at which point we (hopefully) no longer have to care whether it's a .webp or a .jpg.
There are also plenty of websites that either won't accept WebP images or don't handle them very well. For example, Facebook won't accept WebP on mobile, and on desktop it usually thinks it's a GIF, while JPEG works everywhere.
I encourage everyone to click the images and zoom in on them. The test itself is pretty flawed since the images probably all came from the jpeg in the first place (and if they didn't, then that's pretty damning to the non-jpeg options, in my opinion).
The place I found the most interesting is the dark top of the screen. Zoomed to 240% and looking at the top area, especially where the power lines go across, there's a very, very clear difference in quality between even the 95% JPEG and the WebP, and in my opinion the JPEG wins for being more honest. That difference is even starker at the 65% compression option.
Is that difference worth the larger size? That's for each of us to decide as we choose our technology; but to my eye, those images are very different.
Caveat: I used Firefox to render them, which may give different results than Chrome.
This isn't what any lossy formats are designed for.
How much do the lossy spots stick out when viewed in a cursory glance without zooming from a normal viewing distance? This is what lossy formats are designed for.
Yeah, WebP was long criticized for its loop filter, for example, which was a good fit for video but not so much for still images. In comparison, JPEG compression artifacts are so well understood that better encoders which minimize visual artifacts are available. I'm not even sure whether there is an original uncompressed image: that JPEG file might even come directly from a camera.
Well, there are many things at play here... JPEG leaves it to the codec author to interpret the compressed information. You can get different JPEG implementations that will result in visually different images (on screen) when displaying the same file. Some JPEG viewers will even allow you to control some of the parameters of the render (typically, the amount of blur and the size of the "swatch" with which the blur is applied).
This is less noticeable with stills, but it's a special form of art / a matter of professional pride for teams working on video codecs: how to make pictures produced from the same file look "better".
Similarly for compression. It's far from given that two different JPEG implementations will create byte-for-byte identical files given raw image input and all the same compression settings. There's special art in figuring out what parts of the picture will compress better, what parts can use wider "swatches" etc.
It's also true that different codecs can perform better on different kinds of pictures, either by producing smaller files or by producing a more visually appealing picture. So anyone trying to establish the compression behavior of competing codecs needs to try them on a carefully selected set of images, which should include high and low brightness, sharp and blurred subjects, pastel colors and neon glow, as well as pictures of things we are often interested in seeing, such as portraits or medical images.
There usually won't be an all-winning codec.
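The kind of corpus comparison described above can be sketched as a small harness. This is only an illustration: PSNR here stands in for fancier perceptual metrics (SSIM, butteraugli), and the decoded pixel data and file sizes would come from running the actual encoders, which this sketch assumes you have done elsewhere.

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized 8-bit pixel sequences."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    e = mse(a, b)
    return math.inf if e == 0 else 10 * math.log10(peak * peak / e)

def rank_codecs(original, candidates):
    """candidates: {codec_name: (decoded_pixels, file_size_bytes)}.
    Ranks codecs by PSNR per kilobyte, a crude quality-for-size score."""
    scored = {name: psnr(original, px) / (size / 1024)
              for name, (px, size) in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

As the comment says, any such ranking has to be run over a varied image set (dark and bright, sharp and blurred, portraits), because the winner routinely flips from one image to the next.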
Also, as for the images compared in the OP: does the OP know whether there's any metadata written into those images? The answer could be as simple as discovering that the JPEG images included an embedded thumbnail, which would make all that measuring worthless...
Why? Isn't this the entire point? 99.99999% of visitors to my website never click on a photo and zoom to 240%. Why serve all that extra data so that a once-in-a-lifetime pixel-peeping weirdo doesn't get upset? If you're running a photography portfolio or something akin to it, then sure, serve big images. But for most of the web there's no reason to.
"According to my benchmarks, JPEG XL also outperforms AVIF at quality settings very much suitable for the web, especially if you compare them at same-cpu-effort encoder settings. If you have data that says something else, could you please share it?"
I really hope JPEG XL wins the next generation lossy image codec war, for the killer feature of being able to losslessly convert my back catalog of JPEG photos to JPEG XL photos.
JPEG XL still has too little browser support to be taken seriously, although I hope Safari's move in its most recent version will put more pressure on Google.
Seconding this. It feels like every other day I download an image, realize it's WebP and start bashing my head against the desk because it means opening it in GIMP to convert it to JPEG once again.
I've got several hundreds of GB of unused SSD storage, and I'm downloading it over a 1Gbps connection. Honestly, I really could not care less about those 150KB I'm saving.
I tried to use it for a screenshot in a README on GitHub, and raw.githubusercontent.com couldn't serve the right MIME type. I was surprised. PNG it is, then.
So not only browser support, but server support. (Which is really easy to support.)
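To illustrate how small the server-side fix usually is: Python's stdlib `mimetypes` table (which `http.server` consults) may lack the newer image formats depending on the OS and Python version, and registering them is a few lines. The `image/jxl` registration is my assumption of the JPEG XL MIME type; the rest is standard.

```python
import mimetypes

# Newer image formats can be missing from older OS/Python MIME tables,
# so register them explicitly before serving files.
mimetypes.add_type("image/webp", ".webp")
mimetypes.add_type("image/avif", ".avif")
mimetypes.add_type("image/jxl", ".jxl")

for name in ("photo.webp", "photo.avif", "photo.jxl"):
    print(name, "->", mimetypes.guess_type(name)[0])
```

The equivalent in nginx would be adding the types to its `types { ... }` block; the point is that "server support" is typically one line of configuration per format.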
We need a website like caniuse.com that tracks support for file formats on operating systems and software. I would be interested to see the support of WebP out of the box on Windows 11, macOS, Ubuntu, stock Android, iOS, the GIMP, Adobe Photoshop, and so on.
Just for fun, I downloaded the webp in the article to my desktop to check non-web compatibility and found something strange. For usability reasons, I still use Windows Photo Viewer from the XP days on my modern Windows 11 computer. After loading it up there, I noticed that the webp image is viewable but decodes very differently than in other applications. For comparison, I included the image as Firefox sees it, as the native Windows 11 Photos app sees it, and as the XP-era Photo Viewer sees it. Notice that Photo Viewer is darker, with what looks like messed-up HSL saturation values. Why would it do this, instead of just not displaying the file at all?
I give some leeway before passing judgement, but this article is not very good. It uses one single photo, and a bad one at that, picks some arbitrary percentages that don't mean the same thing for all formats, and in the end the only message is that "progress is happening".
There are many better comparisons on the internet, with much better examples and metrics.
It would be nice if the author would add mozjpeg[1] to the comparison. At certain image resolutions, it can produce smaller file sizes than WebP, and because it is still a jpeg, it has a much better compatibility story, which the author alluded to.
Great advice, but... it really should have picked another picture. A night shot on a phone camera is going to be heavily postprocessed and grainy... just not the best demonstration of the tech.
As Aurélien points out, if you fixate on a bunch of metrics without actually caring about the professional applications, the outcome will look... amateurish.
Jpegli lifts traditional JPEG compression density by 25–30% and supports 10+ bit HDR within the backward-compatible 8-bit formalism. Jpegli is based on porting JPEG XL encoding strategies back to old JPEG. It is a whole new implementation, rather than a tweak of old libraries such as libjpeg, libjpeg-turbo, or mozjpeg. Jpegli is API- and ABI-compatible with its usual alternatives, like the three mentioned before.
Try opening Google Maps, or any Google product webpage for that matter. In Maps, just go to a chosen place and repeatedly zoom in to the maximum, then zoom out to see a few countries (or US states). It's best to click the + and - buttons so you're sure the area and zoom ratio are the same. Observe the network requests. Many responses are in the range of 1-20 bytes, but they send 800 bytes in request headers and cookies. Cookies? Really? For a static image (a map tile) or a supporting JSON? Do they have to be that long? Are those requests really necessary?
Also look at the length of the URLs. Is it really necessary to send that much crap? And there are thousands of those requests, only some of which get cached. And there's a grid that blinks now and then, especially around the "restaurants", "hotels", etc. buttons.
Compare that to OpenStreetMap, which is way leaner and smoother (and, after Google's map color scheme change, much nicer and more professional-looking), and works flawlessly with Firefox, too.
Google could substantially reduce the load on both the Maps servers and the network, but their "top talent" programmers made it heavyweight and ugly by design. They go against all the web best practices they require others to follow. Is all that crap required to spy on users, or is it because their programmers are way overrated?
Look at the enormous number of requests to www.google.com/log204 and /gen_204. There can be several of them for one display of the map at a specific place and zoom level. Each of them is about 680 bytes, of which 500 is the GET request and the rest is headers (plus cookies, of course!).
And I should mention that my mobile data plan gets depleted much faster than it would if this product were properly developed (yes, I often use Maps on my laptop over a mobile internet plan). Not everybody sits in a colorful office with a 1 or 10 Gbps fiber connection and a nicely stuffed microkitchen.
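A back-of-the-envelope calculation using the figures from this comment (treat them as illustrative, not measured) shows why the headers dominate:

```python
# Rough per-request cost of a tiny tile/log request where headers dominate.
payload = 20             # bytes of useful response body (upper end of the 1-20 range)
request_overhead = 800   # bytes of request headers + cookies per request
requests = 1000          # a zoom-in/zoom-out session fires thousands of these

useful = payload * requests
wasted = request_overhead * requests
print(f"useful: {useful / 1024:.1f} KiB, overhead: {wasted / 1024:.1f} KiB "
      f"({wasted / (useful + wasted):.0%} of traffic)")
```

With those numbers, well over 97% of the bytes on the wire are protocol overhead, which is the comment's underlying point: image-format savings can be drowned out entirely by cookie-laden request headers.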
> In fairness, it wasn’t practical to use even with a fallback until around 2015 and only became usable without a fallback in the last 2-3 (ish?) years.
And because of that school of thought, about 20% of all sites I visit are currently broken for me. I'm still on macOS Catalina, which is the last somewhat bearable version of macOS, but which has no webp support in Safari.
Even pages which pretend to specify JPG fallbacks via srcset and the like do not, because the JPG endpoints return webp anyway.
And some image CDNs seem to be configured to ignore an explicit request for JPEG via their image format parameter (you can change it to all sorts of different formats that then actually get delivered; only JPEG stubbornly continues to return a WebP image!) unless you also spoof the user agent to some older browser.
I never see the point of making an argument for replacement, even just in a title, on the basis of age.
It's good that there are functional arguments inside, but the title should reflect that too.
Age is never the prime characteristic of a technology, only incidental in many circumstances, though far from all. Emphasizing modernness or freshness is a bit superficial and childish.
Worth mentioning that it's only been "the future" for a little less than 3 years.
Safari on iOS added support for webp in iOS 14, in late 2020. Before that version reached widespread adoption you would have needed a jpeg fallback for your webp images.
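The fallback pattern being discussed is the standard `<picture>` element: the browser walks the `<source>` list in order and takes the first type it supports, so an older Safari falls through to the JPEG. The file names and dimensions here are placeholders:

```html
<picture>
  <!-- Tried in order; a browser without AVIF/WebP support skips these -->
  <source srcset="photo.avif" type="image/avif">
  <source srcset="photo.webp" type="image/webp">
  <!-- Universal fallback; also what browsers without <picture> support render -->
  <img src="photo.jpg" alt="Night-time street scene" width="1200" height="800">
</picture>
```

As noted elsewhere in the thread, this only helps if the JPEG URL really serves JPEG, rather than a CDN content-negotiating it back into WebP.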
> Regardless of whether AVIF ends up being better than WebP, it’s clear that there are viable alternatives you can use today that are massive improvements over JPEG.
Friendly reminder that there is JPEG-XL which is arguably better for all cases than WebP and AVIF (and also supports progressive decoding!). Unfortunately Google (who have a vested interest in WebP and AVIF), are actively hostile towards supporting it and have outright lied about their reasoning (stating lack of interest despite thousands of developers and market-leading corps saying otherwise).
The story lost me at the subtitle (I'll wait until there's one). Seriously though, if a very large part of your company's cost or user experience depends on efficiently rendering quality images, then this should be on your reading list.
For the vast majority, choosing the nearly-right image dimensions and compression level is probably going to do a lot more than choosing any format over jpeg.
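A rough sanity check of that claim, under the simplifying assumption that file size scales with pixel count at a fixed quality setting (not exact for real encoders, but directionally right; the numbers are made up for illustration):

```python
def approx_bytes(width, height, bytes_per_pixel=0.5):
    """Very rough JPEG size estimate at a mid quality setting (assumed ratio)."""
    return int(width * height * bytes_per_pixel)

# Serving a 4000x3000 camera original vs. resizing to the 800x600 slot it occupies:
original = approx_bytes(4000, 3000)
resized = approx_bytes(800, 600)
print(f"resize saves {1 - resized / original:.0%}")        # ~96% smaller

# Switching that same oversized original to a codec ~30% more efficient:
format_switch = round(original * 0.7)
print(f"format switch saves {1 - format_switch / original:.0%}")  # ~30% smaller
```

Right-sizing the dimensions dwarfs the 25–30% a newer codec buys you, which is the parent's point.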
Years ago when I was tight on disk space I converted my photos all to jpeg2000 to save space. I've regretted it ever since, as I've permanently lost resolution of my personal photos of friends and family, and some turn out to be completely lost.
For web pages, you can just turn up the jpeg compression on photos, and use pngs for constructed images with hard edges.
Apart from being a single sample of a low-quality photo to start with, the WebP seems noticeably softer at 100% viewing, especially the textures on the wall and some faces.
I doubt Microsoft would give such a project the passion it deserves in terms of optimization and support of new and future formats. I think this is better left for a third party like ImageSharp, Magick.NET, or libvips with the .NET bindings.
WebP is perfect for serving images, but not perfect as a source. JPEG at 100% quality is less lossy than lossy WebP. WebP lossless is fine, but then not very small. Taking photos at 100% JPEG can be a good middle ground for that reason, unless you really don't care about the quality.
froh|2 years ago
Thanks for this gem.
I'd add to "good enough": a viable ecosystem.
Perl lived on for a surprisingly long while despite the Parrot disaster and competition from Python and Ruby.
And while Rust has amazing momentum, C++ and C have a plethora of existing production-quality libraries.
apichat|2 years ago
Jon Sneyers (main JPEG XL designer):
https://twitter.com/jonsneyers/status/1666062661585367042
vsnf|2 years ago
https://imgur.com/a/inom1D0
[1]https://github.com/mozilla/mozjpeg
https://eng.aurelienpierre.com/2021/10/webp-is-so-great-exce...
gambiting|2 years ago
I'm glad I'm not the only one who finds the new colour scheme for Google maps horrendous.
swiftcoder|2 years ago
AVIF is still not universally supported.