
EigenLord | 4 months ago

I think there's a critical flaw in Anthropic's approach to memory, which is that they seem to hide it behind a tool call. This creates a circularity issue: the agent needs to "remember to remember." Think how screwed you would be if you were consciously responsible for knowing when you had to remember something. It's almost a contradiction in terms. Recollection is unconscious and automatic; there's a constant auto-associative loop running in the background at all times. I get the idea of wanting to make LLMs more instrumental and leave it to the user to invoke or decide certain events: that's definitely the right idea in 90% of cases. But for memory it's not the right fit. In contrast, OpenAI's approach, which seems to resemble more generic semantic search, leaves things wanting for other reasons. It's too lossy.
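The distinction can be sketched in code. This is a toy illustration, not either vendor's actual implementation: a keyword-overlap retriever stands in for real embedding search, and all names (`MemoryStore`, `build_prompt_tool_gated`, `build_prompt_automatic`) are made up for the example.

```python
class MemoryStore:
    """Toy memory store; real systems would use embeddings, not word overlap."""

    def __init__(self):
        self.entries = []

    def add(self, text):
        self.entries.append(text)

    def search(self, query, k=1):
        # Stand-in relevance score: count of words shared with the query.
        qwords = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(qwords & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]


# Pattern 1: tool-gated recall ("remember to remember").
# Memories reach the context only if the model chose to emit a recall call.
def build_prompt_tool_gated(store, user_msg, model_called_recall):
    memories = store.search(user_msg) if model_called_recall else []
    return {"memories": memories, "user": user_msg}


# Pattern 2: always-on recall, akin to generic semantic search.
# Retrieval runs on every turn, independent of the model's judgment.
def build_prompt_automatic(store, user_msg):
    return {"memories": store.search(user_msg), "user": user_msg}


store = MemoryStore()
store.add("user prefers concise answers")
store.add("user is allergic to peanuts")

# If the model never decides to invoke recall, the memory is silently lost:
print(build_prompt_tool_gated(store, "are peanuts a safe snack?", model_called_recall=False))
# The automatic loop surfaces the relevant entry regardless:
print(build_prompt_automatic(store, "are peanuts a safe snack?"))
```

The failure mode the comment describes is the first pattern's single point of failure: if the model doesn't judge the moment memory-worthy, no retrieval happens at all, whereas the second pattern trades that risk for the lossiness of a similarity search that may surface the wrong entries.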
