gforce_de | 4 months ago
#!/bin/sh
# originally from https://jpegxl.info/images/precision-machinery-shapes-golden-substance-with-robotic-exactitude.jpg
URL1="http://intercity-vpn.de/files/2025-10-04/upload/precision-machinery-shapes-golden-substance-with-robotic-exactitude.png"
URL2="http://intercity-vpn.de/files/2025-10-04/upload/image-png-all-pngquant-q13.png"
curl "$URL1" -so test.png
curl "$URL2" -so distorted.png
# https://github.com/cloudinary/ssimulacra2/tree/main
ssimulacra2 test.png distorted.png
# 5.90462597
# https://github.com/gianni-rosato/fssimu2
fssimu2 test.png distorted.png
# 2.17616860
computerbuster | 4 months ago
If you run the `validate.py` script available in the repo, you should see correlation numbers similar to those I've pre-tested and published in the README: fssimu2 achieves 99.97% linear correlation with the reference implementation's scores.
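To illustrate what a linear-correlation check like this measures, here is a minimal sketch that computes the Pearson correlation coefficient over paired scores. The score pairs below are made up for illustration; they are not real output of either implementation, and this is not the actual `validate.py` code:

```python
import math

# Hypothetical (reference, fssimu2) score pairs -- illustrative only,
# not actual output of either implementation.
pairs = [(5.90, 2.18), (31.5, 30.9), (54.2, 53.8), (72.1, 71.9), (88.0, 87.7)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

refs, fast = zip(*pairs)
print(f"linear correlation: {pearson(refs, fast):.4f}")
```

Note that a correlation near 1.0 only means the two implementations rank and scale images consistently; individual scores can still differ in absolute value, as the 5.90 vs. 2.18 pair above shows.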
fssimu2 is still missing some functionality (like ICC profile reading), but the goal was to produce a production-oriented implementation that is just as useful while being much faster: the lower memory footprint and speed improvements make fssimu2 far more practical in a target quality loop, for example. For research-oriented use cases where the exact SSIMULACRA2 score is required, the reference implementation is the better choice. It is worth evaluating which of these is your use case: an implementation that is 99.97% accurate is likely just as useful to you if you are doing quality benchmarks, target quality encoding, or anything else where SSIMULACRA2's correlation to subjective human ratings matters more than bit-exact agreement with the reference.
gforce_de | 4 months ago