top | item 42358701

Linux EFI Zboot Abandoning "Compression Library Museum", Focusing on Gzip, ZSTD

70 points | rbanffy | 1 year ago | phoronix.com

19 comments

[+] fn-mote|1 year ago|reply
The article gives reasons, but it sounds to me like gzip was retained mainly because it is in use. I don’t have a horse in this race, but if only Zstd were left I would feel like even more simplification was accomplished.

Everybody has their own tiny preference for one or the other, but really… if it’s not embedded, what is the impact?

> - GZIP is tried and tested, and is still one of the fastest at decompression time, although the compression ratio is not very high; moreover, Fedora is already shipping EFI zboot kernels for arm64 that use GZIP, and QEMU implements direct support for it when booting a kernel without firmware loaded;

> - ZSTD has a very high compression ratio (although not the highest), and is almost as fast as GZIP at decompression time.
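A rough way to see the tradeoff the quoted bullets describe (a sketch only, assuming the gzip and zstd CLIs are installed; `kernel.img` is a hypothetical stand-in for an uncompressed image, not a real kernel):

```shell
# Generate some compressible stand-in data.
seq 1 300000 > kernel.img

# gzip at max effort vs. zstd at a high level; -k/-f keep the input
# and force overwrite, -q silences zstd's progress output.
gzip -9 -k -f kernel.img                      # -> kernel.img.gz
zstd -19 -q -f kernel.img -o kernel.img.zst   # -> kernel.img.zst

# Compare sizes; zstd's ratio should be noticeably better here.
wc -c kernel.img kernel.img.gz kernel.img.zst
```

Decompression speed can be eyeballed the same way with `time gzip -d` versus `time zstd -d`, though real boot-time numbers depend on the firmware environment rather than a desktop shell.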

[+] bawolff|1 year ago|reply
> but it sounds to me like gzip was retained mainly because it is in use.

Isn't that what the article is literally saying?

They aren't trying to get rid of badly performing compression algos, just ones that are effectively unused.

[+] klysm|1 year ago|reply
Yeah, I generally agree that leaving just zstd would simplify things, but I also see advantages in keeping two. It seems plausible that a better algorithm will come along in the future, and it’s nice to keep the abstraction in place that enables supporting more than one algo.
[+] theandrewbailey|1 year ago|reply
> - ZSTD has a very high compression ratio (although not the highest), and is almost as fast as GZIP at decompression time.

If you're using zstd and its decompression speed isn't at least twice as fast as gzip's, you're using it wrong.

[+] consp|1 year ago|reply
Yes, everyone tunes their compression settings. /s

It's a generalized statement of 'good enough'.

[+] walrus01|1 year ago|reply
zstandard, with the correct flags and more CPU/time-intensive options when creating the compressed file, can get almost as good a ratio as xz.
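A sketch of the kind of high-effort invocation the parent means (assumes the zstd and xz CLIs are installed; file names are hypothetical):

```shell
# Compressible sample input.
seq 1 200000 > sample.txt

# zstd at a slow, high-ratio setting; --long enlarges the match window.
zstd -19 --long=27 -q -f sample.txt -o sample.zst

# xz at max level with the "extreme" preset, keeping the input (-k).
xz -9 -e -k -f sample.txt      # -> sample.txt.xz

# Compare the resulting sizes.
wc -c sample.txt sample.zst sample.txt.xz
```

With `--ultra -22` zstd can be pushed further still, at the cost of more memory on both the compression and decompression side.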
[+] alkh|1 year ago|reply
As far as I can tell, one of zstandard's main benefits is compression/decompression speed. Is the ratio still worse than xz though?
[+] user20180120|1 year ago|reply
Long Range Zip tends to give faster decompression. Anyone know which command-line switches give optimum results?
[+] lousken|1 year ago|reply
Why not keep lz4 as well? Shouldn't it have the smallest performance hit compared to those two?
[+] Ufvasl|1 year ago|reply
This can't be answered with a simple yes/no without benchmarking this specific use case on specific hardware.

For other use cases I've personally never really found lz4 very useful (nor gzip). Usually zstd at a low or medium level can be both faster (or at least fast enough) and better compressed.
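The level tradeoff described above is easy to eyeball (a sketch, assuming the zstd CLI is installed; file names are hypothetical):

```shell
# Generate some compressible input.
seq 1 500000 > data.txt

# Compress at a low, medium, and high level and report the sizes;
# higher levels trade CPU time for a (usually) smaller output.
for lvl in 1 9 19; do
  zstd -$lvl -q -f data.txt -o data.$lvl.zst
  printf 'level %s: %s bytes\n' "$lvl" "$(wc -c < data.$lvl.zst)"
done
```

On most inputs the low levels already land close to the high ones while compressing far faster, which is the "good enough" point being made upthread.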