top | item 46817781

blopker | 1 month ago

This post greatly oversimplifies the issues that come up when optimizing images. Also, the GitHub workflow doesn't commit the optimized images back to git, so it would have to run before packaging, and on every image, not just newly added ones.

Ideally, images are compressed _before_ getting committed to git. The other issue is that compression can leave images looking broken. Any compressed image should be verified before deploying. Using lossless encoders is safer. However, even then, many optimizers will strip ICC profile data, which makes colors look off or washed out (especially if the source is HDR).
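The "verify before deploying" step above can be automated. A minimal sketch using Pillow (my choice of tooling, not something from the post): re-decode the compressed bytes and measure the largest per-channel pixel difference against the source. For a lossless encode it should be exactly zero; for lossy encodes you'd pick a tolerance.

```python
from io import BytesIO

from PIL import Image, ImageChops


def max_pixel_diff(original: Image.Image, compressed: bytes) -> int:
    """Largest per-channel difference between the original image and
    the re-decoded compressed bytes (0 means pixel-identical)."""
    recoded = Image.open(BytesIO(compressed)).convert(original.mode)
    diff = ImageChops.difference(original, recoded)
    return max(band.getextrema()[1] for band in diff.split())


# Demo: a lossless format round-trips with zero difference.
src = Image.new("RGB", (64, 64), (200, 30, 90))
buf = BytesIO()
src.save(buf, "PNG")
assert max_pixel_diff(src, buf.getvalue()) == 0
```

This only catches pixel damage, not stripped metadata, so you'd still want to check that the ICC profile survived separately.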

Finally, use webp. It's supported everywhere and doesn't have all the downsides of png and jpg. It's not worth it to deploy these older formats anymore. JPEG XL is even better, but support will take a while.
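A minimal sketch of the webp conversion being recommended, again with Pillow (an assumption, not the tool from the post), passing the source's ICC profile through so colors don't get washed out:

```python
from io import BytesIO

from PIL import Image


def to_webp(img: Image.Image, lossless: bool = False, quality: int = 82) -> bytes:
    """Encode an image as WebP, carrying any ICC profile along."""
    buf = BytesIO()
    img.save(
        buf,
        "WEBP",
        lossless=lossless,
        quality=quality,
        # None is fine here; Pillow just omits the profile if absent.
        icc_profile=img.info.get("icc_profile"),
    )
    return buf.getvalue()


# Demo: a lossless WebP round-trips the exact pixel values.
img = Image.new("RGB", (32, 32), (10, 120, 200))
data = to_webp(img, lossless=True)
assert Image.open(BytesIO(data)).convert("RGB").getpixel((0, 0)) == (10, 120, 200)
```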

Anyway, I made an ImageOptim clone that supports webp encoding a while ago[0]. I usually just chuck any images in there first, then commit them.

[0]: https://github.com/blopker/alic

butvacuum | 1 month ago

There's one thing JPEG has the edge on: true progressive loading.

If you're clever, you can use fetch requests to render a thumbnail from the actual image by manually parsing the JPEG and stopping after some amount of detail. I'm more than a little surprised that no self-hosted photo solution uses this in any capacity (at least when I last checked).
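The trick described above can be sketched in Python with Pillow (my own illustration, assuming the prefix bytes came from something like an HTTP Range request): decode only the first portion of a progressive JPEG and render whatever scans made it through.

```python
from io import BytesIO

from PIL import Image, ImageFile

# Let Pillow render whatever scans are present instead of erroring out
# on the truncated tail.
ImageFile.LOAD_TRUNCATED_IMAGES = True


def preview_from_prefix(jpeg_prefix: bytes) -> Image.Image:
    """Decode a low-detail preview from the leading bytes of a
    progressive JPEG (e.g. fetched with an HTTP Range request)."""
    img = Image.open(BytesIO(jpeg_prefix))
    img.load()  # decodes only the scans contained in the prefix
    return img


# Demo: build a progressive JPEG, then decode just the first 60% of it.
src = Image.radial_gradient("L").convert("RGB")  # 256x256 test image
buf = BytesIO()
src.save(buf, "JPEG", progressive=True, quality=85)
full = buf.getvalue()
preview = preview_from_prefix(full[: int(len(full) * 0.6)])
assert preview.size == src.size
```

A real implementation would stop at a scan (SOS) boundary rather than an arbitrary byte count, but the principle is the same: the early scans already contain a usable full-size approximation.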

blopker | 1 month ago

Google answers this question in the FAQ: https://developers.google.com/speed/webp/faq#does_webp_suppo...

But in my experience, webp is better enough that the whole file loads around the same time the jpg progressive loading kicks in. Given that progressive jpgs are larger than non-progressive ones (so not a 'free' feature), jpg is just a waste of bandwidth at this point.

tatersolid | 1 month ago

Where do you store high quality original images in case future edits or recompression with better codecs are needed? Generation loss is a thing.

I view the high-quality originals as “source” and resized+optimized images as “compiled” binaries. You generally want source in Git, not your compiled binaries.

blopker | 1 month ago

As always, it really depends on what the source is. Often images are created with software like Photoshop; would you commit the PSD file to git? If you're a photographer, would you commit 20 MB+ raw image files? It might make sense for a few images, but git is just not the right solution for binary data in general. Every modification has to duplicate the entire file. This makes working with the repo unpleasant very quickly.

In general, I recommend people back up binary files to cloud storage, like S3, and only commit optimized, deployment-ready assets to git. There's also Git LFS, but it's clunky to use.

ramijames | 1 month ago

I like this take. I tend to agree.