
thargor90 | 2 years ago

Nobody is arguing that the use cases are the same. In the end you can't even chat with gzip (although you could with its predictor).

The thing is that building the predictor is almost the same task for compression and for an LLM. Of course the goals and the tradeoffs taken are different. The paper shows this analogy.

ChatGPT et al use structured prediction to simulate intelligence. Building the predictor is fancy lossy compression.
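The predictor-compressor link above is just Shannon's source coding theorem: an arithmetic coder driven by any probability model spends about -log2 p(next symbol) bits per symbol, so a better predictor means a smaller file. A minimal sketch (the `predictor` interface and the toy uniform model are illustrative, not gzip's or any LLM's actual API):

```python
import math

def code_length_bits(text, predictor):
    """Total bits an ideal arithmetic coder would need for `text`,
    given a model `predictor(prefix) -> {next_char: probability}`."""
    total = 0.0
    for i, ch in enumerate(text):
        p = predictor(text[:i]).get(ch, 1e-12)  # tiny floor for unseen chars
        total += -math.log2(p)
    return total

def uniform_over(alphabet):
    """Toy order-0 model: every character in `alphabet` is equally likely."""
    def predict(prefix):
        return {c: 1.0 / len(alphabet) for c in alphabet}
    return predict

# 4 characters at 1 bit each under a uniform 2-symbol model:
bits = code_length_bits("abab", uniform_over("ab"))  # 4.0 bits
```

Swap in a smarter `predictor` (an n-gram model, or an LLM's next-token distribution) and the same loop yields a shorter code length; training the LLM to minimize cross-entropy loss is literally minimizing this quantity.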

Questions arise about whether lossy compression of copyrighted works is legal or not. If I encode a lossless recording as an MP3, we currently consider the result still subject to the original's copyright. With LLMs this is not entirely clear yet.

Arnt | 2 years ago

As it happens, I know zip. I ported it to Linux back in the 0.95 days; I think the code was called Info-ZIP then. Chatting with its predictor is a fanciful description, to say the least.

I've also read the transformers paper mentioned in the tweet.

Of course an LLM is on some level similar to a compression system, and on some level it's also just high and low voltages on some integrated circuits. But saying it's "just" glorified compression isn't something I'll believe without good arguments.