Meta: This post is yet another victim of the HN verbatim title rule despite the verbatim title making little sense as one of many headlines on a news page.
How is "Now using Zstandard instead of xz for package compression" followed by the minuscule low-contrast grey "(archlinux.org)" better than "Arch Linux now using Zstandard instead of xz for package compression" like it was when I originally read this a few hours ago?
Saying it's "yet another victim" seems slightly too emotive to me.
If people can't read the source site's domain after the headline then I agree there wouldn't be much context, but equally, if they can't read that, surely their best solution is to adjust the zoom level in the browser.
It's clear you won't get complete context from the headline list plus domain, but a hint of it is provided and if you want more you click the link. Maybe I'm being a little uncharitable but I don't see a big problem here.
I just woke up to this and was surprised the title was edited as well. I looked up the guidelines and it looks like I violated the "If the title includes the name of the site, please take it out, because the site name will be displayed after the link." guideline.
Yes, this is a real problem: verbatim titles are often far from the "optimal" title. In some cases the original title provides almost no information about the content.
The question is what's better than a strict "no editorialization" rule.
The window title of the submission is "Arch Linux - News: Now using Zstandard instead of xz for package compression". There is no need to invent a new title.
Earlier last year I was doing some research that involved repeatedly grepping through over a terabyte of data, most of which were tiny text files that I had to un-zip/7zip/rar/tar and it was painful (maybe I needed a better laptop).
With Zstd I was able to re-compress the whole thing down to a few hundred gigs and use ripgrep which solved the problem beautifully.
Out of curiosity I tested compression with (single-threaded) lz4 and found that multi-threaded zstd was pretty close. It was an unscientific and maybe unfair test but I found it amazing that I could get lz4-ish compression speeds at the cost of more CPU but with much better compression ratios.
tar automatically detects and supports unpacking zstd-compressed archives (as well as other compression types); there's no need to combine -x with an explicit compression flag.
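For example (assuming GNU tar >= 1.31, which added the dedicated flag, and a zstd binary on PATH; filenames are hypothetical):

```shell
# Extracting: tar sniffs the compression format, no flag needed
tar -xf foo.pkg.tar.zst

# Creating: the dedicated flag (GNU tar >= 1.31)...
tar --zstd -cf foo.tar.zst somedir/
# ...or pipe through any compressor via -I / --use-compress-program
tar -I 'zstd -T0 -19' -cf foo.tar.zst somedir/
```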
Package installations are quite a bit faster, and while I don't have any numbers, I expect that ISO image compose times are faster too, since the compose process performs an installation from RPMs to create each of the images.
Hopefully in the near future the squashfs image on those ISOs will use zstd as well, not only for the client-side speed boost during boot and install, but because it cuts the CPU cost of decompression by a lot compared to lzma (more than 50%).
https://pagure.io/releng/issue/8581
BTW, Fedora recently switched to zstd compression for its packages as well, for basically the same reasons: much better overall de/compression speed while keeping the result about the same size.
One more benefit of zstd compression that is not widely noted: a zstd file compressed with multiple threads is byte-identical to the same file compressed with a single thread. So you can use multi-threaded compression and still end up with the same file checksum, which is very important for package signing.
On the other hand xz, which was used before, produces a different binary depending on whether it was compressed with a single thread or multiple threads. This basically precludes multi-threaded compression at package build time, as the compressed file checksums would not match if the package were rebuilt with a different number of compression threads. (The unpacked payload will always be the same, but the compressed xz file will differ byte for byte.)
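That property is easy to check from the shell. A sketch, assuming the zstd CLI is installed and relying on the comment's claim that thread count does not change the output:

```shell
# Make some compressible test data
yes "pacman loves zstd" | head -c 4M > payload

# Same input, different thread counts -> byte-identical output
zstd -q -T1 -c payload > one-thread.zst
zstd -q -T4 -c payload > four-threads.zst
sha256sum one-thread.zst four-threads.zst   # checksums match
```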
Zstd has an enormous advantage in compression and, especially, decompression speed. It often doesn't compress quite as much, but we don't care as much as we once did. We rebuild packages more than we once did.
This looks like a very good move. Debian should follow suit.
I build packages periodically from the AUR, and compression is the longest part of the process much of the time. For a while, I disabled compression on AUR packages because it was becoming enough of a problem for me to look into solutions. If it's annoying for me, I can imagine it's especially problematic for package maintainers. I can only imagine how much CPU time switching the compression tool will save.
> Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup.
Impressive. As an AUR package maintainer, though, I am also wondering how the compression speed compares.
While the speedup is nice, pacman still seems to operate sequentially, i.e. download, then decompress one package at a time. Decompressing while downloading, or decompressing in parallel, seems like low-hanging fruit that hasn't been plucked yet and wouldn't have needed any changes to the compressor.
Since most people are interested in the time taken to compress/decompress rather than the speed at which it happens, it seems to me a better metric would be:
"... decompression time dropped to 14% of what it was..." (s/14/actual_value)
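The conversion itself is trivial; reading "1300% speedup" as a 1300% increase (i.e. 14x the speed) is my assumption here, since the post elsewhere quotes 13x:

```shell
# "N% speedup" => speed multiplier m = 1 + N/100; time falls to 100/m percent
awk -v N=1300 'BEGIN {
  m = 1 + N / 100
  printf "time drops to %.1f%% of what it was\n", 100 / m
}'
# prints: time drops to 7.1% of what it was
```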
I learned about this one the hard way when I went to update a really crufty (~ 1 year since last update) Arch system I use infrequently the other day. I had failed to update my libarchive version prior to the change and the package manager could not process the new format.
Luckily updating libarchive manually with an intermediate version resolved my issue and everything proceeded fine.
This is a good change, but it's a reminder to pay attention to the Arch Linux news feed, because every now and then something important will change. The maintainers provided ample warning about this change there (and indeed I had updated my other systems in response), so we procrastinators really had no excuse :)
I used zstd for on-the-fly compression of game data for p2p multiplayer synchronization, and got 2-5x as much data (depends on the payload type) in each TCP packet. Sad that it still doesn't get much adoption in the industry.
I'd love to see Zstandard accepted in other places where the current option is only the venerable zlib. E.g., git packing, ssh -C. It's got more breadth and is better (ratio / cpu) than zlib at every point in the curve where zlib even participates.
AUR users -- the default settings in /etc/makepkg.conf (delivered by the pacman package as of 5.2.1-1) still use xz; you must manually edit your local config:
PKGEXT='.pkg.tar.zst'
The largest package I always wait on, perfect for this scenario, is `google-cloud-sdk` (the re-compression is a killer -- `zoom` is another one in the AUR that's a beast), so I used it as a test on my laptop here in "real world conditions" (browsers running, music playing, etc.). It's an old Dell m4600 (i7-2760QM, rotating disk), nothing special. The upshot: using default xz, compression takes twice as long and appears to drive the CPU harder. With xz my fans always kick in for a bit (normal behaviour); testing zst here did not kick the fans on the same way.
After warming up all my caches with a few pre-builds to try and keep it fair by reducing disk I/O, here's a sampling of the results:
xz defaults - Size: 33649964
real 2m23.016s
user 1m49.340s
sys 0m35.132s
----
zst defaults - Size: 47521947
real 1m5.904s
user 0m30.971s
sys 0m34.021s
----
zst mpthread - Size: 47521114
real 1m3.943s
user 0m30.905s
sys 0m33.355s
I can re-run them and get a pretty consistent return (so that's good, we're "fair" to a degree); there's disk activity building this package (seds, etc.) so it's not pure compression only. It's a scenario I live every time this AUR package (google-cloud-sdk) is refreshed and we get to upgrade. Trying to stick with real world, not synthetic benchmarks. :)
I did not seem to notice any appreciable difference in adding the `--threads=0` to `COMPRESSZST=` (from the Arch wiki), they both consistently gave me right around what you see above. This was compression only testing which is where my wait time is when upgrading these packages, huge improvement with zst seen here...
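Putting that together, the relevant local overrides look roughly like this (flag set taken from this thread and the Arch wiki, so treat it as a sketch rather than the shipped defaults):

```shell
# /etc/makepkg.conf -- have makepkg build zstd packages instead of xz
PKGEXT='.pkg.tar.zst'
# zstd invocation used by makepkg; --threads=0 uses all cores
COMPRESSZST=(zstd -c -z -q --threads=0 -)
```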
It should be noted that the makepkg.conf file distributed with pacman does not contain the same compression settings as the one used to build official packages.
I’ve used LZ4 and Snappy in production for compressing cache/mq payloads, on a service serving billions of clicks a day. So far I'm very happy with the results. I know zstd requires more CPU than LZ4 or Snappy on average, but has anyone used it under heavy traffic loads on web services? I am really interested in trying it out, but at the same time held back by “don’t fix it if it ain’t broken”.
Use Lz4 where latency matters, Zstd if you can afford some CPU.
I have a server that spools off the entire New York stock and options market every day, plus Chicago futures, using Lz4. But when we copy to archive, we recompress it with Zstd, in parallel using all the cores that were tied up all day.
There is not much size benefit to more than compression level 3: I would never use more than 6. And, there's not much CPU benefit for less than 1, even though it will go into negative numbers; switch to Lz4 instead.
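The flattening of the size/level curve is easy to measure yourself (filenames hypothetical; assumes the zstd CLI is installed):

```shell
# Compare output sizes across compression levels on the same input
for lvl in 1 3 6 19; do
  zstd -q -$lvl -T0 -c ticks.csv > ticks.$lvl.zst
done
ls -l ticks.*.zst   # gains past -3 are usually modest
```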
For those who want a TL;DR:
The trade-off is a ~0.8% increase in package size for a ~1300% increase in decompression speed.
Those numbers come from a sample of 542 packages.
> If you nevertheless haven't updated libarchive since 2018, all hope is not lost! Binary builds of pacman-static are available from Eli Schwartz' personal repository, signed with their Trusted User keys, with which you can perform the update.
I am a little shocked that they bothered; Arch is rolling release and explicitly does not support partial upgrades (https://wiki.archlinux.org/index.php/System_maintenance#Part...). So to hit this means that you didn't update a rather important library for over a year, which officially implies that you didn't update at all for over a year, which... is unlikely to be sensible.
Arch is actually surprisingly stable and even with infrequent updates on the order of months still upgrades cleanly most of the time. The caveats to this were the great period of instability when switching to systemd, changing the /usr/lib layout, etc but those changes are now pretty far in the past.
That's only not sensible if you continued to use that computer for the year. You might have just not used it for a year, which doesn't seem unlikely. In fact I just updated my Arch desktop, which I had indeed not used for more than a year :)
pacman-static existed already, and can be used to fix some of the most broken systems in a variety of circumstances. So, they didn't make it just for this, might as well mention it as the right tool to fix the problem should it occur.
A little-known fact is that parallel xz compresses worse than single-threaded xz!
I measured pixz as being approximately ~2% worse than xz.
That's because input is split into independent chunks.
In comparison, the 0.8% of zstd looks like a bargain.
Most of the results published show very little positive or negative speed in decompression, where is all this -1300% coming from?
edit: Sorry, my fault that was decompression RAM I was thinking about, not speed, although I was influenced by my test that without measuring both xz and zstd seemed instant.
I couldn't care less about decompression speed, because the bottleneck is the network, which means that I want my packages as small as possible. Smaller packages mean faster installation; at 54 MB/s or faster decompression rate of xz, I couldn't care less about a few milliseconds saved during decompression. For me, this decision is dumbass stupid.
Per the post, the speedup on decompress is _13x_ while the size is 1.008x.
For those figures, this will be better total time for you if your computer network connection is faster than about 1.25mbit/sec. For a slow arm computer with an XZ decompress speed of 3MB/s the bandwidth threshold for a speedup drops to _dialup_ speeds.
And no matter how slow your network connection is and how fast your computer is you'll never take more than 0.8% longer with this change.
For many realistic setups it will be faster, in some cases quite a bit. Your 54MB/s xz host should be about 3% faster if you're on a 6mbit/sec link -- assuming your disk can keep up. A slow host that decompresses xz at 3MB/s on a 6mbit link would be a whopping 40% faster.
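One way to sanity-check figures like these is a toy model where total time = size/bandwidth + size/decompression speed. The constants below (1.008x size ratio, 13x decompression speedup, 54 MB/s xz decompression) come from this thread; the model itself is my assumption:

```shell
# Break-even bandwidth B where zstd's extra 0.8% download cost
# exactly cancels its decompression savings:
#   1/B + 1/D = r/B + r/(s*D)  =>  B = (r - 1) * D / (1 - r/s)
awk -v D=54 'BEGIN {
  r = 1.008   # zstd size / xz size
  s = 13      # zstd decompression speedup over xz
  B = (r - 1) * D / (1 - r / s)
  printf "break-even: %.2f MB/s (above this, zstd wins in this model)\n", B
}'
# prints a threshold of roughly 0.47 MB/s (~3.7 Mbit/s) with these constants
```

Different assumed decompression speeds move the threshold around, which is presumably where the lower figures quoted above come from.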
Why do you care so much about the few extra milliseconds spent downloading, then? (A 0.8% size increase is ~0.) Also don't forget that Arch can be used on machines with very slow CPUs but very fast network connections, like many VPSs; I think this will make a tangible difference on mine. This is also a big improvement for package maintainers and anyone building their own packages without bothering to modify the makepkg defaults, e.g. most people using an AUR helper.
There are nice plots [1] showing the transfer+decompression speedup depending on the network bandwidth.
This is for html web compression, but the results are similar for other datasets. For internet transfer more compression is better than more decompression speed.
You can make your own experiments incl. the plots with turbobench [2]
hinkley|6 years ago
should be allowed. But I'm not sure that it is.
throwGuardian|6 years ago
Unless one uses a link shortener. Are shorteners permitted on HN?
Dylan16807|6 years ago
If it's not easy to read, then the problem is between the css and your screen. Not the title rules.
WinonaRyder|6 years ago
EDIT: Btw, I use arch :) - yes, on servers too.
bufferoverflow|6 years ago
http://pages.di.unipi.it/farruggia/dcb/
Looks like Snappy beats both LZ4 and Zstd in compression speed and compression ratio, by a huge margin.
LZ4 is ahead of Snappy in decompression speed.
filereaper|6 years ago
Hopefully there's another option added to tar that simplifies this if this compression becomes mainstream.
chungy|6 years ago
For compression, you can use "-c -I zstd"
ncmncm|6 years ago
But if latency matters you might be better off with lz4.
ncmncm|6 years ago
Also lz4, of course.
ncmncm|6 years ago
Also, parallel zstd must have some way to split up the work, that you could maybe use too.
Foxboron|6 years ago
pacman: https://git.archlinux.org/svntogit/packages.git/tree/trunk/m...
devtools: https://github.com/archlinux/devtools/blob/master/makepkg-x8...
maxpert|6 years ago
jonathonf|6 years ago
That sort of attention to detail is what continues to impress me about the Arch methodology.
dhsysusbsjsi|6 years ago
https://github.com/lzfse/lzfse
[1] https://sites.google.com/site/powturbo/home/web-compression [2] https://github.com/powturbo/TurboBench
snvzz|6 years ago
If this was netbsd m68k, you'd probably easily understand.