top | item 35259234


eggie5 | 2 years ago

To add state (memory), you can either:

* inject the running chat log into the prompt
* inject a summary of the chat into the prompt
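The first option can be sketched in a few lines. This is a minimal illustration, not any particular library's API; the function and variable names (`build_prompt`, `history`) are made up for the example.

```python
# Option 1: keep the full chat log and inject it into every prompt.
# Names here are illustrative, not from a specific framework.

def build_prompt(system: str, history: list[tuple[str, str]], user_msg: str) -> str:
    """Concatenate the running chat log into a single prompt string."""
    lines = [f"System: {system}"]
    for role, text in history:
        lines.append(f"{role.capitalize()}: {text}")
    lines.append(f"User: {user_msg}")
    lines.append("Assistant:")  # leave the model to continue from here
    return "\n".join(lines)

history = [("user", "Hi, I'm Ana."), ("assistant", "Hello Ana!")]
print(build_prompt("You are a helpful assistant.", history, "What's my name?"))
```

The obvious downside is that the prompt grows with every turn, which is exactly why the summarization variants below come up.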


tlrobinson | 2 years ago

Or perhaps a progressive summary, where the most recent messages are full fidelity, and older messages get “compressed” into a summary.
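A toy sketch of that progressive scheme, assuming a `summarize` stand-in for the actual LLM summarization call: the most recent `keep_recent` messages stay at full fidelity, and everything older gets compressed.

```python
# Progressive summarization sketch: recent messages verbatim,
# older ones collapsed into a summary. `summarize` is a placeholder
# for a real "summarize this text: ..." model call.

def summarize(messages: list[str]) -> str:
    # Placeholder: a real system would ask the model to compress these.
    return f"Summary of {len(messages)} earlier messages."

def progressive_context(messages: list[str], keep_recent: int = 3) -> list[str]:
    if len(messages) <= keep_recent:
        return list(messages)
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(older)] + recent

msgs = [f"message {i}" for i in range(10)]
print(progressive_context(msgs))
# → ['Summary of 7 earlier messages.', 'message 7', 'message 8', 'message 9']
```

A fuller version could recurse, summarizing old summaries again as they age, which is what makes it "progressive."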

You can also fine-tune the model to incorporate larger amounts of data, but that may be more expensive (and slower).

This kind of sounds like human short-term and long-term memory. Maybe "fine-tuning" is analogous to what happens to our memory when we sleep.

underlines | 2 years ago

You just explained how human memory works, and I thought about implementing that in a future model that allows more max input tokens: the further back the text, the more it goes through a "summarize this text: ..." prompt. GPT-4 has a 28k token limit, so it has the brain of a cat maybe, but future models will have more max tokens and might be able to have a human-like memory that gets worse the older the memory is.

Alternatives are maybe architectures using LangChain or Toolformer to retrieve "memories" from a database by smart fuzzy search. But that's worse, because reasoning would only be done on the retrieved context, instead of on all the memories it ever had.
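The retrieval alternative described above can be illustrated with a toy scorer. Real systems (LangChain-style retrievers) typically rank by embedding similarity; this sketch substitutes simple word overlap so it stays self-contained, and the example memories are invented.

```python
import re

# Retrieval-based memory sketch: store "memories" as strings and pull
# only the top-k most relevant ones into the prompt context.
# Word-overlap scoring stands in for embedding similarity.

def tokens(s: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def score(query: str, memory: str) -> int:
    return len(tokens(query) & tokens(memory))

def retrieve(query: str, memories: list[str], k: int = 2) -> list[str]:
    ranked = sorted(memories, key=lambda m: score(query, m), reverse=True)
    return ranked[:k]

memories = [
    "User's name is Ana.",
    "User prefers Python over Java.",
    "User asked about GPT token limits.",
]
print(retrieve("which language does the user prefer python or java", memories, k=1))
# → ['User prefers Python over Java.']
```

This makes the comment's trade-off concrete: the model only ever reasons over the k retrieved snippets, so anything the scorer misses is invisible to it, whereas the summarization approaches keep (a compressed view of) everything in context.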