jmacc93 | 2 years ago
I'm like 70% confident that an LLM that constantly condenses its own memory (its immediate memory / input, plus its condensed memory) via internal prompting, and conditions its output on both that condensation and its immediate memory, would have effective long-term memory. You could probably make a model that does this on multiple scales, with condensations on top of condensations, and so on
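A minimal sketch of the multi-scale idea, assuming a cascade of fixed-size levels where each full level gets condensed into the one above it. The `condense` function here is a trivial stand-in for the internal summarization prompt the comment describes, so the control flow is runnable:

```python
def condense(chunks):
    # Stand-in for an LLM summarization prompt ("condense these notes");
    # here we just join and truncate so the example runs without a model.
    return " / ".join(chunks)[:200]

class HierarchicalMemory:
    def __init__(self, fanout=3, levels=3):
        self.fanout = fanout                        # items a level holds before condensing
        self.levels = [[] for _ in range(levels)]   # levels[0] = immediate memory

    def observe(self, item):
        self.levels[0].append(item)
        # Cascade upward: when a level fills, condense it into the next level,
        # giving condensations on top of condensations.
        for i in range(len(self.levels) - 1):
            if len(self.levels[i]) >= self.fanout:
                summary = condense(self.levels[i])
                self.levels[i].clear()
                self.levels[i + 1].append(summary)

    def context(self):
        # Condition output on every scale: coarsest condensations first,
        # then progressively more immediate memory.
        return [item for level in reversed(self.levels) for item in level]
```

With `fanout=2`, observing five items leaves one item in immediate memory and a two-level-deep condensation of the first four, and `context()` returns both for conditioning the next generation.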
pocketarc | 2 years ago
I'm currently investigating a long-term storage system using embeddings, and having GPT output a "remember" command whenever it decides it should remember something. There's lots of work to be done to get it just right, but this is an incredibly exciting future, for sure.
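A rough sketch of that loop, under stated assumptions: `embed` is a toy bag-of-words embedding standing in for a real embeddings model, and the `remember:` line format is an illustrative guess at the command the model would be prompted to emit:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words counts. A real system would call an
    # embeddings model here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    def __init__(self):
        self.entries = []  # list of (embedding, original text)

    def handle_output(self, model_output):
        # Scan the model's output for lines like
        #   remember: the user prefers dark mode
        # and store them, embedded, for later retrieval.
        for line in model_output.splitlines():
            if line.startswith("remember:"):
                fact = line[len("remember:"):].strip()
                self.entries.append((embed(fact), fact))

    def recall(self, query, k=3):
        # Retrieve the k stored facts most similar to the query,
        # to be injected back into the prompt.
        q = embed(query)
        scored = sorted(self.entries,
                        key=lambda entry: cosine(q, entry[0]),
                        reverse=True)
        return [text for _, text in scored[:k]]
```

The "lots of work to get it just right" mostly lives in when the model decides to emit `remember:` and how recalled facts are folded back into the prompt; the store itself is the easy part.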