Beltiras | 2 years ago
I'm working on something where I need to add on the order of 150,000 tokens into the knowledge base of an LLM. Slowly finding out that I need to delve into training a whole-ass LLM to do it. Sigh.

v3ss0n | 2 years ago
https://deepai.org/publication/scaling-transformer-to-1m-tok...
Can this be implemented in current open-source models?

akvadrako | 2 years ago
Can't you use fine-tuning for this? Another option is to ask GPT to compress your tokens into a shorter prompt for itself.

RhodesianHunter | 2 years ago
Or, at this rate, just wait 6 months.

Zetice | 2 years ago
I don't think this rate is sustainable. [0]
[0] https://www.theverge.com/2023/4/14/23683084/openai-gpt-5-rum...

Beltiras | 2 years ago
When I would have had to add another 2 batches of ~150,000 tokens...
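(For illustration of akvadrako's second suggestion, here is a minimal prompt-compression sketch, assuming the OpenAI Python client and API access. The model name, system prompt, and chunking scheme are illustrative assumptions, not something anyone in the thread specified.)

```python
# Sketch of "ask GPT to compress your tokens into a shorter prompt for itself".
# Assumes the OpenAI Python client (>= 1.0) and OPENAI_API_KEY in the environment;
# model name and prompt wording are hypothetical choices for this example.
from openai import OpenAI

client = OpenAI()

def compress_chunk(text: str, model: str = "gpt-4") -> str:
    """Ask the model to rewrite one chunk as a dense summary that keeps the facts."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Compress the user's text into the shortest form that "
                        "preserves every fact, name, and number. Output only the "
                        "compressed text."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

def compress_knowledge_base(chunks: list[str]) -> str:
    """Compress each chunk independently, then join the summaries into one prompt."""
    return "\n\n".join(compress_chunk(c) for c in chunks)
```

Whether the compressed prompt fits depends on the model's context window; for 150,000 tokens it would likely still need aggressive chunking or retrieval on top of this.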