sheepscreek | 4 days ago

I think in the long run we may need something like a batch job that compresses the context from an LLM's last N conversations and applies it as an update to the weights: a looser form of delayed, automated reinforcement learning.

Or make something like LoRA mainstream for everyone (it probably scales better for general-purpose models shared by many users).
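A minimal numpy sketch of the low-rank idea behind LoRA that the comment alludes to: the base weights stay frozen, and a small adapter (B @ A) carries the per-user update. The dimensions, the update step, and the idea of feeding it compressed conversation context are all illustrative assumptions, not a real implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 16, 16, 2  # toy sizes; rank << d_in in practice

W = rng.standard_normal((d_out, d_in))  # frozen base weights (shared by everyone)

# LoRA adapter: low-rank update B @ A. B starts at zero, so the
# adapted model is initially identical to the base model.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def forward(x, scale=1.0):
    # Effective weight is W + scale * (B @ A); only A and B would be trained.
    return x @ (W + scale * (B @ A)).T

x = rng.standard_normal((4, d_in))
# Before any update the adapter is a no-op.
assert np.allclose(forward(x), x @ W.T)

# Stand-in for one delayed "batch job" update: in the commenter's scheme
# this gradient-like step would come from fine-tuning on compressed
# context from the last N conversations.
B += 0.1 * rng.standard_normal((d_out, rank))
```

The storage argument for sharing a base model is visible in the sizes: the adapter holds rank * (d_in + d_out) numbers per user versus d_in * d_out for a full weight copy.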
