blopker|1 month ago
Ideally, images are compressed _before_ getting committed to git. The other issue is that compression can leave images looking broken. Any compressed image should be verified before deploying. Using lossless encoders is safer. However, even then, many optimizers will strip ICC profile data, which will make colors look off or washed out (especially if the source is HDR).
Finally, use webp. It's supported everywhere and doesn't have all the downsides of png and jpg. It's not worth it to deploy these older formats anymore. JPEG XL is even better, but support will take a while.
Anyway, I made an ImageOptim clone that supports webp encoding a while ago[0]. I usually just chuck any images in there first, then commit them.
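To illustrate the ICC point, here's a minimal stdlib-only sketch (my own, not from the tool above) that checks whether a JPEG still carries its ICC profile after an optimizer has run, by walking the marker segments and looking for an APP2 segment tagged `ICC_PROFILE`:

```python
# Hedged sketch: detect an embedded ICC profile in a JPEG by walking
# its marker segments. An APP2 (0xFFE2) segment whose payload starts
# with "ICC_PROFILE\0" carries profile data. A real pipeline might use
# Pillow's img.info.get("icc_profile") instead.
import struct

def jpeg_has_icc_profile(data: bytes) -> bool:
    if not data.startswith(b"\xFF\xD8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded data follows, stop
            break
        # segment length field includes its own two bytes
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xE2 and segment.startswith(b"ICC_PROFILE\x00"):
            return True
        i += 2 + length
    return False
```

Run it on images before and after optimization; if it flips from True to False, your optimizer stripped the profile.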
butvacuum|1 month ago
If you're clever you can use fetch requests to render a thumbnail based on the actual image by manually parsing the JPEG and stopping after some amount of detail. I'm more than a little surprised that no self-hosted photo solution uses this in any capacity (at least when I last checked).
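For anyone curious, the trick can be sketched in a few lines (mine, a simplification): each scan of a progressive JPEG starts with an SOS marker (0xFF 0xDA), and byte stuffing guarantees that sequence can't occur inside entropy-coded data, so cutting after the first few scans and appending an EOI yields a decodable coarse preview. Over HTTP you'd fetch just the prefix with a Range request instead of reading the whole file.

```python
# Hedged sketch: truncate a progressive JPEG after its first few scans
# so the prefix decodes as a low-detail preview. Naive byte scanning is
# safe in entropy-coded data (0xFF is always stuffed as 0xFF 0x00), but
# could false-positive inside APPn payloads such as embedded EXIF
# thumbnails; a robust version would walk segments properly.
def truncate_after_scans(data: bytes, max_scans: int) -> bytes:
    scans = 0
    i = 2  # skip SOI
    while i + 1 < len(data):
        if data[i] == 0xFF and data[i + 1] == 0xDA:  # SOS: new scan
            scans += 1
            if scans > max_scans:
                # cut before this scan and close with an EOI marker
                return data[:i] + b"\xFF\xD9"
        i += 1
    return data
```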
blopker|1 month ago
But in my experience, webp is better enough that the whole file loads around the same time the jpg progressive loading kicks in. Given that progressive jpgs are larger than non-progressive ones (so not a 'free' feature), jpg is just a waste of bandwidth at this point.
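If you want to check which of your JPEGs are progressive before comparing sizes, a quick stdlib-only sketch (mine): the frame header marker distinguishes them, SOF2 (0xFFC2) meaning progressive DCT and SOF0/SOF1 meaning baseline/extended sequential.

```python
# Hedged sketch: classify a JPEG as progressive or baseline by reading
# its start-of-frame marker. Assumes no standalone markers appear
# before the frame header, which holds for typical files.
def jpeg_is_progressive(data: bytes) -> bool:
    i = 2  # skip SOI
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xC2:  # SOF2: progressive DCT
            return True
        if marker in (0xC0, 0xC1, 0xDA):  # baseline frame or start of scan
            return False
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length
    return False
```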
tatersolid|1 month ago
I view the high-quality originals as “source” and resized+optimized images as “compiled” binaries. You generally want source in Git, not your compiled binaries.
blopker|1 month ago
In general, I recommend people back up binary files to cloud storage, like S3, and only commit optimized, deployment-ready assets to git. There's also Git LFS, but it's clunky to use.
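For what it's worth, if you do go the LFS route, a typical `.gitattributes` fragment looks like this (the file patterns are just examples; running `git lfs track "*.psd"` generates these lines for you):

```
# route large binary originals through Git LFS instead of plain git
*.psd  filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
```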
ramijames|1 month ago