
LLMLingua: Compressing Prompts for Faster Inferencing

149 points| TarqDirtyToMe | 2 years ago |github.com

47 comments


thebeardisred|2 years ago

Wild. If I'm reading this correctly, it's effectively a sort of "zip" algorithm for both the inputs and outputs of a prompt-based model. It lets a user compress a request down to the minimal token count that retains the same semantics, and in effect pack a denser set of tokens into the original request.

Does that sound about right?

refulgentis|2 years ago

Yes you're correct -- it's a really interesting thing, in that it reminds me of early 2023 when people would "compress" prompts by having ChatGPT rewrite it itself into something smaller.

There's really no substantive difference between that and what they're doing here, other than they're purposefully using a crappier model than GPT 3.5/ChatGPT to increase the cost savings.

For example, the first set of graphics demonstrates replacing a long question with 5 Q/A examples ("5-shot", in the literature) with ~4 sentences that paraphrase the question and keep one or two very brief examples without reasoning.

That's all well and good if you're confident the model is so amazing that it answers as well with 1-shot as it does with 5-shot, but it is very, very, very likely that is not the case. Additionally, you're now adding an odd layer between the user's input and OpenAI that will easily be "felt".

TarqDirtyToMe|2 years ago

Sounds right to me. I think it's fun that this may be the only compression algorithm whose output is still human-understandable.

It reads like a slightly garbled version of what someone writing down bullet point notes of a lecture might write.

It's so rare that the human-optimized and machine-optimized versions of an input are so similar.

iofu728|2 years ago

In fact, it can be seen as semantic communication, as defined by Shannon.

TarqDirtyToMe|2 years ago

LLMLingua uses a well-trained small language model after alignment, such as GPT2-small or LLaMA-7B, to detect the unimportant tokens in the prompt and enable inference with the compressed prompt in black-box LLMs, achieving up to 20x compression with minimal performance loss.
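The core ranking idea described above can be sketched without a neural network at all. This is a toy illustration, not LLMLingua's actual method: where LLMLingua scores token importance with a small causal LM's perplexity, this sketch substitutes a unigram frequency model, keeping only the most "surprising" (informative) words. The corpus, threshold, and function names are my own stand-ins.

```python
import math
from collections import Counter

def compress_prompt(prompt: str, corpus: str, keep_ratio: float = 0.6) -> str:
    """Toy sketch of perplexity-based pruning: drop the most predictable tokens.

    A unigram model over `corpus` stands in for the small LM; real LLMLingua
    uses token-level perplexity from a model like GPT-2 or LLaMA-7B.
    """
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())

    def surprisal(word: str) -> float:
        # Surprisal = -log P(word), with add-one smoothing; rare words score high.
        p = (counts.get(word.lower(), 0) + 1) / (total + len(counts) + 1)
        return -math.log(p)

    words = prompt.split()
    n_keep = max(1, round(len(words) * keep_ratio))
    # Keep the n_keep most surprising words, preserving their original order.
    keep = set(sorted(range(len(words)), key=lambda i: -surprisal(words[i]))[:n_keep])
    return " ".join(w for i, w in enumerate(words) if i in keep)
```

With a corpus where "the" and "of" are common, a call at `keep_ratio=0.5` strips the function words and keeps the rare, content-bearing ones.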

cyanydeez|2 years ago

“Why waste time say lot word when few word do trick” -Kevin Malone

sroussey|2 years ago

What would happen if instead of the long prompt, you just sent the mean of the embeddings of the prompt tokens?
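One concrete answer to this question: the mean of the token embeddings is order-invariant, so the model would lose word order entirely. A minimal sketch, using a random matrix as a stand-in for a real model's embedding table (the vocabulary and dimensions are illustrative):

```python
import numpy as np

# Hypothetical 8-dim embedding table for a tiny vocabulary; random values
# stand in for a real model's learned embeddings.
rng = np.random.default_rng(0)
vocab = {"compress": 0, "this": 1, "prompt": 2, "please": 3}
emb = rng.normal(size=(len(vocab), 8))

def mean_embedding(tokens: list[str]) -> np.ndarray:
    # Collapse the whole prompt into a single averaged vector.
    return emb[[vocab[t] for t in tokens]].mean(axis=0)

# The mean is invariant under permutation: "compress this prompt please"
# and "please prompt this compress" collapse to the same vector, so the
# model could not recover the original ordering (or much else).
original = mean_embedding(["compress", "this", "prompt", "please"])
shuffled = mean_embedding(["please", "prompt", "this", "compress"])
```

So a single mean vector throws away too much; at best it could serve as a coarse topic signal, not a substitute for the prompt.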

behnamoh|2 years ago

Came here to mention this. Whenever I hear "alignment" I immediately say "No way am I going to use that shit". Seriously, there's alignment and then there's censorship—the AI creators are using the former when they actually mean the latter. This needs to stop.

avereveard|2 years ago

I wonder if this could also be useful in reverse: you'd have a large, expensive LLM producing a few tokens per sentence of the answer, then an expansion LLM forming full sentences out of them.

PoignardAzur|2 years ago

Some teams have researched ways to do this.

For instance, you can have a smaller model generate ten tokens in sequence, and then ask the larger model "given these N tokens, what is token N+1" ten times in parallel.

If the large and small models agree on, say, the first 7 tokens, you keep those, throw the next 3 away, and start over. So you still have to run the large model for each token, but you can at least batch the computations (which is a lot more efficient, because loading layer weights is the bottleneck, not the matrix ops).
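The draft-and-verify loop described above (commonly called speculative decoding) can be sketched with plain functions standing in for the two models. This is an illustration of the control flow only; a real implementation would batch the large model's verification calls, and the toy "models" here are my own stand-ins:

```python
def speculative_step(large, small, context, n_draft=10):
    """One round of speculative decoding with deterministic (greedy) models.

    `large` and `small` are functions mapping a token list to the next token.
    """
    # 1. Small model drafts n_draft tokens autoregressively.
    draft, ctx = [], list(context)
    for _ in range(n_draft):
        tok = small(ctx)
        draft.append(tok)
        ctx.append(tok)

    # 2. Large model checks each drafted position. Position i depends only on
    #    context + draft[:i], so in a real system these run as one batch.
    accepted = []
    for i, tok in enumerate(draft):
        verified = large(list(context) + draft[:i])
        if verified == tok:
            accepted.append(tok)
        else:
            # First disagreement: keep the large model's token and stop.
            accepted.append(verified)
            break
    return list(context) + accepted
```

If the models agree on the first k drafted tokens, one round yields k+1 tokens that are exactly what the large model alone would have produced, at a fraction of the sequential cost.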

cuuupid|2 years ago

The expansion LLM would need a pretty good model of language, so it would likely have to be in the 7B realm. Still, that could be useful, given we're almost at the point where 7B models can run ubiquitously on most consumer hardware.

swyx|2 years ago

the text-to-image community has upscalers like this… I wonder if that would be useful here

wklm|2 years ago

I copied all the text from this thread and compressed it; the result is:

``` {'compressed_prompt': '\t | submit\twout\nLLMLing byqtyTo\n\n. ". which\nq1 that only down human\nnextaccount many examples,pressed\nq4\n\n as semantic\n" having noreings of isating withoutre this and\n31] a\n\n the0 of, to workaroundqTo\ning after and in loss\n\n time say -. Word a\nb-leep big\n\nsr the\namshipIqToMy hear alignment to its andics this only target\n will tokensq be: The the usinging\n\nbeamIt" mying\na large expensive am\n\n generate larger"3 loading).\n\nThe expansionB run has this if\nhas] agents think it\n\npyinstall into game in " promptter\nos\n particular (. ( == transformations to given smaller) ownups\n\n this better [] thewithout\n\n. is -Error medium\n\n<\n decode\n\r\nbehnamoh 1 day ago | prev [3 more]\r\n\r\n\r\n\r\n\r\n\r\nGuidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact\r\n\r\nSearch: \r\n', 'origin_tokens': 2863, 'compressed_tokens': 217, 'ratio': '13.2x', 'saving': ', Saving $0.2 in GPT-4.'} ```

ChatGPT (GPT-4) doesn't know what to do with it: https://chat.openai.com/share/73bc7b96-4453-4a6e-944d-d9d4c5...

TarqDirtyToMe|2 years ago

I would think you'd need to unescape the newlines and tabs, and give the model a task to perform with it.

Maybe try prefixing it with “summarize the following text” before compression.

Otherwise I'm not sure how it would judge what's important. Honestly, I'm not sure what ChatGPT would do if you copied the text from this page uncompressed without asking it to do something.

Edit: pasting uncompressed it summarizes the discussion.

I think this solution isn't well suited to this kind of task. It seems like you'd want to compress instructions, system prompts, and memory. With a big block of text and no prior context, you're essentially relying on the smaller model to decide what's important without enough information to judge.

Worth some more experimentation for sure

icanhasjonas|2 years ago

Very interesting. We've started on an approach to let LLM agents communicate and share context in their own language, but I think calling it compression is actually more intuitive. I love this.

pyinstallwoes|2 years ago

Intelligence is compressing information into an irreducible representation.

mbb70|2 years ago

This always seemed like the endgame, vs. getting a degree in prompt engineering.

If you get enough data on "initial prompt attempt" -> "final successful prompt", the whole thing can be replaced by a fine tuned model.

You would just select a "prompt rewriter LLM" that optimizes for accuracy, cost, alignment, etc.

pseudosavant|2 years ago

GPT on top of GPT. It is turtles all the way down.

MrYellowP|2 years ago

I was working on the same thing months ago, and it works, but it was a purely trial-and-error way of doing it, and the compressed prompts naturally wouldn't necessarily carry over to different LLMs.

I am not actually convinced this is a good idea, though. This path eventually leads to a "prompt compiler" that compiles prompts into byte code for a future "more efficient" LLM to understand.

Oh, and it definitely didn't require its own language model. All it required was finding how many letters one can remove from a word, and which words can be omitted completely.
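The letter-removal approach described above can be sketched in a few lines. The stopword list and the "strip interior vowels" rule are my own illustrative stand-ins for whatever the commenter arrived at by trial and error:

```python
# Illustrative stopword list; a real one would be tuned per target model.
STOPWORDS = {"the", "a", "an", "of", "to", "is", "and", "that", "it"}

def shorten(text: str) -> str:
    """Omit stopwords entirely; strip interior vowels from longer words."""
    out = []
    for word in text.split():
        if word.lower() in STOPWORDS:
            continue  # this word can be completely omitted
        if len(word) > 3:
            # Keep the first and last letters, drop interior vowels.
            interior = "".join(c for c in word[1:-1] if c.lower() not in "aeiou")
            word = word[0] + interior + word[-1]
        out.append(word)
    return " ".join(out)
```

Disemvoweled English like this is often still readable to both humans and LLMs, which is presumably why the trial-and-error approach worked at all, though as noted, how much you can strip varies by model.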

the_omegist|2 years ago

Made me think of Speedtalk by Heinlein [0].

One way to increase the effective context window, I thought, would be to teach the LLM a compressed language based on abbreviations etc., and have a compressing/uncompressing script do the translating with the LLM. That would allow longer prompts too.

Not as sophisticated as this LLMLingua but good enough for basic users.

[0] https://en.wikipedia.org/wiki/Speedtalk
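The abbreviation scheme suggested above is easy to sketch as a reversible table shared between the script and the LLM's instructions. The table entries here are illustrative placeholders, not a proposed standard:

```python
# Shared abbreviation table: the script compresses prompts before sending
# and expands the model's replies afterwards. Entries are illustrative.
ABBREV = {
    "because": "bc",
    "without": "w/o",
    "approximately": "approx.",
    "language model": "LM",
}
EXPAND = {short: long for long, short in ABBREV.items()}

def compress(text: str) -> str:
    for long, short in ABBREV.items():
        text = text.replace(long, short)
    return text

def expand(text: str) -> str:
    for short, long in EXPAND.items():
        text = text.replace(short, long)
    return text
```

Unlike LLMLingua's lossy pruning, this round-trips exactly, but the savings are bounded by how much of the prompt the table happens to cover (and naive substring replacement would need word-boundary handling in practice).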

fabmilo|2 years ago

This suggests we need some new form of preprocessing of the data before training LLMs, beyond simple text. Running this compressor and then trying to decompress back to the full text could give better supervised fine-tuning results. I wonder how to deploy this right away. Probably with its own optimized Triton inference server?

baq|2 years ago

Redundancy is resiliency. I wonder if there's still enough error correction in the compressed language?

samus|2 years ago

Error correction is not really required here, since there is no lossy communication medium.

joelthelion|2 years ago

I wonder if, as humans, we could benefit from this. Could we learn to read this compressed lingo?

samus|2 years ago

The model doing the compression is trained on a human-language corpus, and this is a generic procedure for feeding another model trained on a similar corpus. Therefore, I'd not expect the compressing model to do anything exotic.

Btw., humans are quite good at compressing as well. SMS used to be billed per 160 characters. And any slang or technical jargon is an attempt at compression. These are how people push the limits of expressiveness and contribute to language evolution.

phh|2 years ago

Are we going to do encode => LLM => decode architectures? That would be ironic

samus|2 years ago

It's turtles^H^H^H^H^H^H^H encoders/decoders all the way down!

behnamoh|2 years ago

[deleted]

TarqDirtyToMe|2 years ago

I think "alignment" in this context refers to distribution alignment: "aligning" the small model so its outputs more efficiently steer the target model with fewer tokens.

m1sta_|2 years ago

Stop writing the same inane comment. Nobody cares that you don't understand alignment, nor that, because of that misunderstanding and your political beliefs, you see it as the devil.