item 35851918


jmacc93 | 2 years ago

I've discovered that when role-playing (or really doing anything that extends over many messages) with ChatGPT, it's a good idea to add a header to every message that describes what you're doing, gives a synopsis of the conversation so far (you can use ChatGPT to generate this), and states what you want ChatGPT to keep doing. ChatGPT (GPT-3.5) seems to lose context really, really fast, so this technique helps keep it focused on what you actually want.
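As a rough illustration, building that header can be mechanical. This is just a sketch; the function name, bracket format, and the three parts (`activity`, `synopsis`, `instruction`) are hypothetical labels for what the comment above describes:

```python
# Hypothetical helper that prepends the three-part header described above
# to each outgoing message. The format is illustrative, not prescribed.
def with_header(activity, synopsis, instruction, message):
    header = (
        f"[We are: {activity}]\n"
        f"[So far: {synopsis}]\n"
        f"[Please continue: {instruction}]\n"
    )
    return header + message
```

You'd regenerate the synopsis every few turns (e.g. by asking the model to summarize the transcript) and keep reusing it until it drifts.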

I'm like 70% confident that an LLM that constantly condenses its own memory (both its immediate memory/input and its already-condensed memory) using internal prompting, and conditions its output on both that condensation and its immediate memory, would have effective long-term memory. You could probably make a model that does this on multiple scales, with condensations of condensations, and so on.
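A minimal sketch of that multi-scale idea: each level holds a few items, and when a level fills up it gets condensed into one item at the next level. Everything here is an assumption for illustration; in particular `condense()` would really be an LLM summarization prompt, and the truncating join below is only a stand-in so the sketch runs:

```python
# Multi-scale memory condensation sketch: levels[0] holds raw messages,
# levels[1] holds summaries of message batches, levels[2] summaries of
# summaries, and so on.

def condense(chunks):
    """Stand-in for an LLM call like 'Summarize these notes briefly: ...'."""
    return " | ".join(chunks)[:200]  # crude placeholder, not a real summary

class LayeredMemory:
    def __init__(self, span=4, depth=3):
        self.span = span                          # items per level before condensing
        self.levels = [[] for _ in range(depth)]  # levels[0] = raw messages

    def add(self, message):
        self.levels[0].append(message)
        # Cascade: when a level fills up, condense it into the next level.
        for i in range(len(self.levels) - 1):
            if len(self.levels[i]) >= self.span:
                summary = condense(self.levels[i])
                self.levels[i].clear()
                self.levels[i + 1].append(summary)

    def context(self):
        """Most-condensed (oldest) memories first, recent raw messages last."""
        return [item for level in reversed(self.levels) for item in level]
```

The `context()` output is what you'd prepend to the model's prompt: a little highly-condensed history plus the latest raw messages, so the total stays bounded.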

pocketarc | 2 years ago

I've been investigating this condensation theory, and at first I did what you suggested, but the problem becomes the size of the context. Either you limit the number of things condensed (reducing the usefulness of the long-term memory because you'll lose the ability to say things like "remember when X?"), or you let it grow until it breaks the model's limit.

I'm currently investigating a long-term storage system using embeddings, and having GPT output a "remember" command whenever it decides it should remember something. There's lots of work to be done to get it just right, but this is an incredibly exciting future, for sure.
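A toy version of that design might look like the following. To keep it self-contained, `embed()` is a bag-of-words stand-in for a real embedding model, and the `REMEMBER:` line format is a made-up convention for the "remember" command; the actual system presumably uses a real embedding API and its own output format:

```python
# Toy sketch of embedding-backed long-term memory with a "remember" command.
import math
from collections import Counter

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class MemoryStore:
    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def handle_model_output(self, output):
        # Hypothetical convention: the model emits lines like "REMEMBER: <fact>"
        # when it decides something is worth keeping.
        for line in output.splitlines():
            if line.startswith("REMEMBER:"):
                fact = line[len("REMEMBER:"):].strip()
                self.items.append((fact, embed(fact)))

    def recall(self, query, k=1):
        """Return the k stored facts most similar to the query."""
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

On each turn you'd call `recall()` with the user's message and splice the top hits into the prompt, which sidesteps the ever-growing-summary problem: storage grows, but the per-turn context stays fixed at k retrieved facts.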