greypowerOz | 4 years ago
I'd not really thought of that aspect before... My old brain is hard-coded to save CPU cycles... Time to change my ways :)
kzrdude | 4 years ago
http://fastcompression.blogspot.com/2015/01/zstd-stronger-co...
Taken from the fastcompression blog - where one could follow ZSTD's author since before ZSTD was even conceived.
"Conveniently" enough, the author of the blog has written both ZSTD and LZ4, which top the chart in their respective link-speed domains. (That's 2015 data - both ZSTD and the others have improved since then.)
Someone | 4 years ago
In that case, there typically isn't additional explicit compression (1). The main gain is in decreasing the number of HTTP requests.
(1) The image itself may have inherent compression, which may improve when images with similar content are combined, and the web server may be configured to apply its own compression; but the first is typically not a big win, and the second is independent of this strategy.
magicalhippo | 4 years ago
[1]: https://www.anandtech.com/bench/SSD21/3017
1_player | 4 years ago
In fact, if you install Fedora 35 on btrfs, zstd:1 compression is enabled by default. Filesystem-level heuristics decide when and when not to compress, which reduces write amplification on SSDs and gains some space for free with negligible performance impact, which is nice.
My 8 GB ~/src directory on encrypted btrfs on NVMe uses 6 GB on disk, and I can easily saturate the link while reading from it. Computers are plenty fast.
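For anyone wanting to try this on a distro that doesn't enable it out of the box, a minimal sketch of how btrfs zstd compression is typically set up (the mount point, subvolume, and UUID here are illustrative placeholders; `compsize` comes from a separate package of the same name):

```shell
# /etc/fstab entry enabling zstd level-1 compression on a btrfs filesystem
# (UUID and mount point are placeholders - adjust to your layout)
# UUID=xxxx-xxxx  /home  btrfs  compress=zstd:1,noatime  0 0

# Or remount an already-mounted filesystem with compression enabled;
# only data written from this point on gets compressed.
sudo mount -o remount,compress=zstd:1 /home

# Optionally recompress existing files in place (btrfs-progs):
sudo btrfs filesystem defragment -r -czstd /home/src

# Inspect the actual on-disk footprint vs. uncompressed size:
sudo compsize /home/src
```

Note that `compress=zstd:1` still lets btrfs skip files its heuristics judge incompressible; use `compress-force` to override that, at the cost of wasted CPU on already-compressed data.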