Dissecting the gzip format (2011)

113 points | solannou | 1 year ago | infinitepartitions.com

55 comments

Filligree|1 year ago

One of my favorite gzip party tricks is that (ungzip (cat (gzip a) (gzip b))) == (cat a b). That is to say, the concatenation of two gzip streams is still a valid gzip file.
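The trick can be checked in a couple of lines of Python: `gzip.decompress` handles multi-member streams, so two concatenated gzip files decode to the two plaintexts concatenated.

```python
import gzip

# Concatenating gzip members yields a valid gzip stream whose
# decompression is the concatenation of the original inputs.
part_a = gzip.compress(b"hello ")
part_b = gzip.compress(b"world")
assert gzip.decompress(part_a + part_b) == b"hello world"
```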

This hasn't ever been practically useful, but it means you can trivially create a 19-layer gzip file containing more prayer strips than there are atoms in the universe, providing a theological superweapon. All you need to do is write it to a USB-stick, then drop the USB-stick in a river, and you will instantly cause a heavenly crisis of hyperinflation.

mbreese|1 year ago

In bioinformatics we use a modified gzip format called bgzip that exploits this fact heavily. The entire file is made of concatenated gzip chunks. Each chunk then contains the size of the chunk (stored in the gzip header). This lets you do random access inside the compressed blocks more efficiently.
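A sketch of where that per-chunk size lives: BGZF writers store the block size in the gzip header's FEXTRA field, in a subfield with ID "BC" (per the SAM/BGZF spec; offsets and field names below follow RFC 1952).

```python
import struct

def bgzf_block_size(block: bytes) -> int:
    """Read the total compressed block size from a BGZF gzip header."""
    # gzip magic bytes, plus the FEXTRA bit in the FLG byte
    assert block[:2] == b"\x1f\x8b" and block[3] & 0x04
    (xlen,) = struct.unpack_from("<H", block, 10)  # extra-field length
    pos, end = 12, 12 + xlen
    while pos < end:
        si1, si2, slen = struct.unpack_from("<BBH", block, pos)
        if (si1, si2) == (ord("B"), ord("C")):  # the BGZF "BC" subfield
            (bsize_minus_1,) = struct.unpack_from("<H", block, pos + 4)
            return bsize_minus_1 + 1  # spec stores BSIZE - 1
        pos += 4 + slen
    raise ValueError("no BC subfield; not a BGZF block")
```

Given the sizes, a reader can skip from block to block without decompressing anything, which is what makes the format indexable.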

Sadly, the authors hard-coded the expected headers, so it's not fully gzip-compatible (you can't add your own arbitrary headers). For example, I wanted to add a chunk hash and optional encryption via my own header elements, but since the original tooling all expects a fixed header, that can't be done in the existing format.

But overall it is easily indexed and makes reading compressed data pretty easy.

So, there you go - a practical use for a gzip party trick!

noirscape|1 year ago

Is that by any chance related to how the TAR format was developed?

Considering the big thing with TAR is that you can also concatenate it (the format is quite literally just file header + content, ad infinitum; it was designed for tape storage). It's also the best concatenation format if you need to send an absolute truckload of files to a different computer/drive, since the tar utility doesn't need to index anything beforehand. Making gzip capable of the same logic, but with compression, seems like a logical follow-through.

fwip|1 year ago

We use it at $dayjob to concatenate multi-GB gzipped files. We could decompress, cat, and compress again, but why spend the cycles?

Joker_vD|1 year ago

> This hasn't ever been practically useful,

I used it a couple of times to merge chunks of gzipped CSV together, you know, like "cat 2024-Jan.csv.gz 2024-Feb.csv.gz 2024-Mar.csv.gz > 2024-Q1.csv.gz". Of course, it only works when there are no column headers.

Hakkin|1 year ago

This is true for a few different compression formats, it works for bzip2 too. I've processed a few TBs of text via `curl | tar -xOf - | bzip2 -dc | grep` for tar files with lots of individually compressed bz2 files inside.

hansvm|1 year ago

It's kind of nice whenever you find yourself in an environment where, for whatever reason, you need to split a payload before sending it. You just `ungzip cat *` the gzipped files you've collected on the other end.
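The split-then-reassemble workflow can be sketched like this (file names are made up for illustration; `gunzip` decodes a multi-member stream as one file):

```shell
# Split a payload, gzip each piece independently, then recover the
# original on the receiving end by concatenating and decompressing.
printf 'a payload too big to send at once' > payload
split -b 8 payload chunk_            # chunk_aa, chunk_ab, ...
gzip chunk_*                         # each chunk becomes its own gzip member
cat chunk_*.gz | gunzip > recovered  # concatenated members decode in order
cmp -s payload recovered && echo OK
```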

blibble|1 year ago

I've used this trick to build docker images from a collection of layers on the fly without de/recompression

kajaktum|1 year ago

I think ZSTD is also like this.

userbinator|1 year ago

Besides a persistent off-by-one error, and the use of actual trees instead of table lookup for canonical Huffman, this is a pretty good summary of the LZ+Huffman process in general; and in the 80s through the mid 90s, this combination was widely used for data compression, before the much more resource-intensive adaptive arithmetic schemes started becoming popular. It's worth noting that the specifics of DEFLATE were designed by Phil Katz, of PKZIP fame. Meanwhile, a competing format, the Japanese LZH, was chosen by several large BIOS companies for compressing logos and runtime code.

> Note that real-world GZIP decoders (such as the GNU GZIP program) skip this step and opt to create a much more efficient lookup table structure. However, representing the Huffman tree literally as shown in listing 10 makes the subsequent decoding code much easier to understand.

Is it? I found the classic tree-based approach to become much clearer and simpler when expressed as a table lookup --- along with the realisation that the canonical Huffman codes are nothing more than binary numbers.
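The "binary numbers" observation is RFC 1951's own code-assignment algorithm: within each bit length, canonical codes are just consecutive integers, starting where the previous length left off. A minimal sketch (symbol lengths taken from the RFC's worked example):

```python
# RFC 1951 canonical Huffman code assignment: codes of each bit length
# are consecutive binary numbers; no tree is needed to enumerate them.
def canonical_codes(lengths):
    # lengths[sym] = code length in bits (0 = symbol unused)
    max_len = max(lengths)
    bl_count = [0] * (max_len + 1)          # how many codes of each length
    for l in lengths:
        if l:
            bl_count[l] += 1
    next_code = [0] * (max_len + 1)         # first code of each length
    code = 0
    for bits in range(1, max_len + 1):
        code = (code + bl_count[bits - 1]) << 1
        next_code[bits] = code
    codes = {}
    for sym, l in enumerate(lengths):       # assign in symbol order
        if l:
            codes[sym] = (next_code[l], l)  # (code value, bit length)
            next_code[l] += 1
    return codes

# RFC 1951's example: lengths (2, 1, 3, 3) yield codes 10, 0, 110, 111.
```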

082349872349872|1 year ago

I'd agree with TFA that canonical Huffman, although interesting, would be yet another thing to explain, and better left out of scope, but it does raise a question:

In what other areas (there must be many) do we use trees in principle but sequences in practice?

(eg code: we think of it as a tree, yet we store source as a string and run executables which —at least when statically linked— are also stored as strings)

yyyk|1 year ago

>before the much more resource-intensive adaptive arithmetic schemes started becoming popular

The biggest problem was software-patent stuff nobody wanted to risk before they expired.

lifthrasiir|1 year ago

The lookup table here would be indexed by a fixed number of lookahead bits, so it for example would duplicate shorter codes and put longer codes into side tables. So a tree structure, either represented as an array or a pointer-chasing structure would be much simpler.
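A toy version of that table, assuming every code fits in the lookahead width (a real decoder spills longer codes into the side tables described above):

```python
# Decode table indexed by a fixed number of lookahead bits; entries for
# codes shorter than the lookahead are duplicated across all bit suffixes.
LOOKAHEAD = 3

def build_table(codes):
    # codes: {sym: (code value, bit length)}, all lengths <= LOOKAHEAD here
    table = [None] * (1 << LOOKAHEAD)
    for sym, (code, length) in codes.items():
        pad = LOOKAHEAD - length
        for fill in range(1 << pad):            # duplicate short codes
            table[(code << pad) | fill] = (sym, length)
    return table

# Codes 0, 10, 110, 111 for symbols B, A, C, D:
table = build_table({"B": (0, 1), "A": (2, 2), "C": (6, 3), "D": (7, 3)})
# Peeking 3 bits "101..." hits entry 0b101 -> ("A", 2): emit A, consume 2 bits.
```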

hk1337|1 year ago

Why do people use gzip more often than bzip2? There must be some benefit, but I don't really see it: you can split and join two bzipped files too (presumably CSV, so you can see the extra rows), and bzip2 seems to compress better than gzip as well.

jcranmer|1 year ago

Using gzip as a baseline, bzip2 provides only modest benefits: about a 25% improvement in compression ratio, with somewhat more expensive compression (2-3×) and horrifically slow decompression (>5×). xz offers a more compelling compression ratio (about 40-50% better), at the cost of extremely expensive compression (around 20×), but decompression time comparable to gzip. zstd, the newest kid on the block, achieves a slighter improvement in compression ratio (~10%) at the same compression/decompression time as gzip, but it's also tunable to give results as good as xz's (at speeds as slow as xz's).

What it comes down to is, if you care about compression time, gzip is the winner; if you care about compression ratio, then go with xz; if you care about tuning compression time/compression ratio, go with zstd. bzip2 just isn't compelling in either metric anymore.

rwmj|1 year ago

Faster than most alternatives, good enough, but most importantly very widely available. Zstd is better on most axes (than bzip as well), except you can't be sure it's always there on every machine and in every language and runtime. zlib/gzip is ubiquitous.

We use xz/lzma when we need a format where you can seek within the compressed data.

duskwuff|1 year ago

bzip2 is substantially slower to compress and decompress, and uses more memory.

It does achieve higher compression ratios on many inputs than gzip, but xz and zstd are even better, and run faster.

zie|1 year ago

Muscle memory. We've been doing gzip for decades and we are too lazy to remember the zstd commands to tar, assuming the installed version of tar has been updated.

oneshtein|1 year ago

gzip is fast (pigz is even faster), supported everywhere (even in DOS), uses low amounts of memory, and compresses well enough for practical needs.

bzip2 is too slow.

xz is too complex (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1068024 ), designed to compress .exe files.

lzip is good, but less popular.

zstd is good and fast, but less popular.

082349872349872|1 year ago

Has anyone taken the coding-as-compression metaphor seriously? (When you create repeated behaviour, you stuff it in the dictionary by creating a function; switching frameworks is changing initial dicts; etc.)

commandlinefan|1 year ago

Sounds like LZW compression to me - is what you're thinking of different than that?