top | item 42552606

leijurv | 1 year ago

Here's the submission that won the Hutter Prize in 2021: https://github.com/amargaritov/starlit It uses an LSTM to predict the next token lossily, then uses https://en.wikipedia.org/wiki/Arithmetic_coding to convert that into lossless compression. Lossless compression can definitely leverage a lossy compressor, such as via arithmetic coding.

Also see: https://en.wikipedia.org/wiki/Context-adaptive_binary_arithm... which has a simple "Example" section. Imagine that if the top prediction made by your neural network is correct, you emit "0"; if the 2nd is correct, you emit "10"; if the 3rd, "110"; if the 4th, "1110". As you can see, this is lossless, but the fundamental prediction is lossy, and the better that prediction is, the better the compression. (In actuality, you wouldn't waste your 1 bits like this; you'd use arithmetic coding instead.)
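That rank-based scheme can be sketched in a few lines of Python. This is a toy, not the starlit code: the frequency-count "predictor" stands in for the LSTM, and the fixed four-letter alphabet is my own assumption. The key property is that encoder and decoder derive the same ranking from the shared history, so the rank codes decode unambiguously.

```python
from collections import Counter

ALPHABET = list("abcd")  # assumed toy alphabet

def ranked(history):
    """Rank the whole alphabet by frequency in the history so far
    (ties broken by alphabet order) -- a crude, deterministic 'predictor'."""
    counts = Counter(history)
    return sorted(ALPHABET, key=lambda t: (-counts[t], ALPHABET.index(t)))

def encode(text):
    out, history = [], []
    for tok in text:
        r = ranked(history).index(tok)
        out.append("1" * r + "0")  # rank 0 -> "0", rank 1 -> "10", rank 2 -> "110"
        history.append(tok)
    return "".join(out)

def decode(bits):
    tokens, history, i = [], [], 0
    while i < len(bits):
        r = 0
        while bits[i] == "1":  # count the leading 1s to recover the rank
            r += 1
            i += 1
        i += 1                 # consume the terminating "0"
        tok = ranked(history)[r]
        tokens.append(tok)
        history.append(tok)
    return "".join(tokens)
```

A round trip like `decode(encode("abacad"))` returns the input exactly, and text the predictor ranks well costs fewer bits than text it ranks poorly: the prediction is lossy, the coding is lossless.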

willvarfar | 1 year ago

Yes, one way to think about arithmetic coding is as encoding the difference between the prediction and reality.

This isn't normally what people mean by lossy compression, though. In lossy compression (e.g. mainstream media compression like JPEG) you work out what the user doesn't value and throw it away.

vlovich123 | 1 year ago

That’s a stretch to call it lossy. To my eye the purpose of the LSTM seems indistinguishable from a traditional compression dictionary.

And that still doesn't show how lossless compression is tied to intelligence. The example I always like to give is: what's more intelligent, reciting the US War of Independence Wikipedia page verbatim every time, or being able to synthesize a useful summary in your own words and provide relevant context, such as its role in the French Revolution?

tshaddox | 1 year ago

These lossless compression algorithms don’t just recall a fixed string of text. Obviously that can be accomplished trivially by simply storing the text directly.

These lossless compression algorithms compress a large corpus of English text from an encyclopedia. The idea is that you can compress this text more if you know more about English grammar, the subject matter of the text, logic, etc.

I think you’re distracted by the lossless part. The only difference here between lossy and lossless compression is that the lossy algorithm also needs to generate the diff between its initial output and the real target text. Clearly a lossy algorithm with lower error needs to waste fewer bits representing that error.

canjobear | 1 year ago

This is standard lossless compression. None of the concepts particular to lossy compression (like rate-distortion theory) are used.