item 35866861

Beltiras | 2 years ago

I'm working on something where I basically need to add on the order of 150,000 tokens to the knowledge base of an LLM. I'm slowly finding out that I need to delve into training a whole-ass LLM to do it. Sigh.

akvadrako | 2 years ago

Can't you use fine-tuning for this?

Another option is to ask GPT to compress your tokens into a shorter prompt for itself.
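For the fine-tuning route, most of the work is reshaping the corpus into training examples. A minimal sketch of that preparation step, assuming naive fixed-size chunking and the chat-style JSONL layout used by OpenAI's fine-tuning API (the system/user prompts and chunk size here are placeholders, not a recommendation):

```python
import json

def chunk_text(text, max_chars=2000):
    # Naive fixed-size chunking. A real pipeline would split on
    # token counts (e.g. with tiktoken) and on semantic boundaries.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def to_jsonl_records(chunks):
    # One training example per chunk, in the chat-format JSONL
    # shape expected by OpenAI's fine-tuning endpoint.
    records = []
    for chunk in chunks:
        records.append({
            "messages": [
                {"role": "system", "content": "You are a domain assistant."},
                {"role": "user", "content": "Recall the reference material."},
                {"role": "assistant", "content": chunk},
            ]
        })
    return [json.dumps(r) for r in records]

corpus = "x" * 5000  # stand-in for the ~150k-token knowledge base
lines = to_jsonl_records(chunk_text(corpus))
```

Whether fine-tuning actually makes the model *recall* facts (as opposed to imitating style) is a separate question; retrieval over the same chunks is the usual alternative when verbatim recall matters.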