item 46392382

Streaming compression beats framed compression

36 points | bouk | 2 months ago | bou.ke

16 comments


duskwuff|2 months ago

Before you get too excited, keep two things in mind:

1) Using a single compression context for the whole stream means you have to keep that context active on the client and server while the connection is active. This may have a nontrivial memory cost, especially at high compression levels. (Don't set the compression window any larger than it needs to be!)

2) Using a single context also means that you can't decompress one frame without having read the whole stream that led up to it. This rules out some useful optimizations if you're "fanning out" messages to many recipients - if you're compressing each message individually, you can compress it once and send the same compressed bytes to every recipient.
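Both trade-offs show up in a small sketch with Python's stdlib `zlib` (standing in here for whatever codec the stream actually uses; the sample messages are invented):

```python
import zlib

messages = [b'{"user": 42, "event": "click", "page": "/home"}'] * 20

# Framed: each message compressed independently. You can compress once and
# fan the same bytes out to every recipient, but no cross-message
# redundancy is exploited.
framed = [zlib.compress(m) for m in messages]
framed_size = sum(len(c) for c in framed)

# Streaming: one shared context for the whole connection. Later messages
# back-reference earlier ones, but decoding any frame requires having
# decoded everything before it.
ctx = zlib.compressobj()
streamed = b"".join(ctx.compress(m) + ctx.flush(zlib.Z_SYNC_FLUSH)
                    for m in messages)

print(framed_size, len(streamed))  # streaming should be far smaller here
```

With repetitive messages like these, the streaming variant collapses each repeat into a short back-reference, while the framed variant pays the full cost per message - which is exactly the trade against fan-out described above.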

adzm|2 months ago

The analogy to h264 in the original post is very relevant. You can fix some of the downsides by using the equivalent of keyframes, basically. Still a longer context than a single message, but one that can be broken up for recovery and the like.

yellow_lead|2 months ago

> This may have a nontrivial memory cost, especially at high compression levels. (Don't set the compression window any larger than it needs to be!)

It sounds like these contexts should be cleared when they reach a certain memory limit, or maybe reset periodically, e.g. every N messages. Is there another way to manage the memory cost?
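Periodic resets are essentially the "keyframe" idea from the earlier comment. A minimal sketch of that, assuming the protocol has some way to mark reset points (the `reset_every` interval and the `(is_reset, chunk)` framing here are hypothetical, not part of any real protocol):

```python
import zlib

def compress_stream(messages, reset_every=3):
    """Yield (is_reset, chunk) pairs. A fresh zlib context is opened every
    `reset_every` messages, bounding the history both sides must retain
    and letting the decoder resync from any reset point."""
    ctx = None
    for i, msg in enumerate(messages):
        is_reset = i % reset_every == 0
        if is_reset:
            ctx = zlib.compressobj()  # abandon old context, start fresh
        yield is_reset, ctx.compress(msg) + ctx.flush(zlib.Z_SYNC_FLUSH)

def decompress_stream(chunks):
    ctx = None
    for is_reset, chunk in chunks:
        if is_reset:
            ctx = zlib.decompressobj()  # mirror the reset on the receiver
        yield ctx.decompress(chunk)
```

Each reset sacrifices some compression ratio (the new context starts with no history) in exchange for a hard cap on per-connection state.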

lambdaloop|2 months ago

Does streaming compression work if some packets are lost or arrive in a different order? Seems like the compression context may end up different on the encoding and decoding sides... or is that handled somehow?

dgoldstein0|2 months ago

I think the underlying protocol would have to guarantee in-order delivery - either via TCP (for HTTP/1, HTTP/2, or SPDY) or, in HTTP/3, within a single stream.

duskwuff|2 months ago

It sounds as though the data is being transferred over HTTP, so packet loss/reordering is all handled by TCP.

efitz|2 months ago

When I worked at Microsoft years ago, my team (a developer and a tester) and I built a high-volume log collector.

We used a streaming compression format that was originally designed for IBM tape drives.

It was fast as hell, worked really well, was gentle on CPU, and made it easy to control memory usage.

In the early 2000s, on a modest 2-proc AMD64 machine, we ran out of Fast Ethernet bandwidth well before we felt CPU pressure.

We got hit by the SOAP mafia during Longhorn; we couldn’t convince the web services to adopt it; instead they made us enshittify our “2 bytes length, 2 bytes msgtype, structs-on-the-wire” speed demon with their XML crap.

vlovich123|2 months ago

Using zstd with a tuned small file custom dictionary probably gets you most of the benefit without giving up independence of compression.
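The same idea can be sketched with stdlib `zlib`'s preset-dictionary support standing in for a trained zstd dictionary (the dictionary contents below are invented for illustration):

```python
import zlib

# Byte strings expected to recur across messages. With zstd this would be
# a dictionary trained on sample messages; zlib's zdict is a stand-in.
shared_dict = b'{"user": "event": "click", "page": "timestamp":'

def compress_msg(msg: bytes) -> bytes:
    # Each message is compressed independently (so fan-out still works),
    # but borrows the shared dictionary for back-references, recovering
    # much of the ratio that small standalone messages lose.
    c = zlib.compressobj(zdict=shared_dict)
    return c.compress(msg) + c.flush()

def decompress_msg(data: bytes) -> bytes:
    d = zlib.decompressobj(zdict=shared_dict)
    return d.decompress(data)
```

The key property is that both sides agree on the dictionary out of band, so no per-connection compression state has to live for the lifetime of the stream.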

masklinn|2 months ago

Surely that is obvious to anyone who has compared zip and tgz?

skulk|2 months ago

MUD clients and servers use MCCP which is essentially keeping a zlib stream open, adding text to it, and flushing it whenever something is received. I think this has been around since 2000.

https://tintin.mudhalla.net/protocols/mccp/
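The MCCP pattern described above - one long-lived zlib stream, sync-flushed after each piece of output - can be sketched like this (the sample MUD output lines are invented):

```python
import zlib

# Server keeps a single zlib stream open for the whole session and
# sync-flushes after each piece of output, so the client can decompress
# each network chunk as soon as it arrives.
server = zlib.compressobj()
client = zlib.decompressobj()

for line in [b"You enter the tavern.\r\n",
             b"The tavern is crowded and noisy.\r\n"]:
    chunk = server.compress(line) + server.flush(zlib.Z_SYNC_FLUSH)
    # The sync flush aligns the stream to a byte boundary, so this chunk
    # is decodable immediately without waiting for more data.
    print(client.decompress(chunk).decode(), end="")
```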