item 47045203

Comparing how 3 AI assistants implement memory

1 point | gdad | 13 days ago | maximem.ai

1 comment


gdad|13 days ago

Author here. I've been digging into how ChatGPT, Claude, and OpenClaw handle persistent memory: what actually gets stored, how it's retrieved, and so on.

Quick summary of the findings:

- ChatGPT pre-computes lightweight summaries of your ~15 most recent conversations and injects them into every prompt. No vector search, no RAG. Simpler than I expected.

- Claude takes an on-demand approach: it has search tools it can invoke to query your past conversations, but only fires them when it judges the context is relevant. More flexible, less consistent.

- OpenClaw stores memory as plain Markdown on your local machine with hybrid search (semantic + BM25, 70/30 weighting). Fully transparent, but single-platform.
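To make the 70/30 blend concrete, here's a minimal sketch of hybrid retrieval over an in-memory corpus. This is illustrative only, not OpenClaw's actual code: the document names, the toy bag-of-words stand-in for real embeddings, and the min-free normalization are all my assumptions; only the BM25 + semantic blend with 0.7/0.3 weights comes from the comparison above.

```python
# Hypothetical sketch of hybrid search: blend normalized semantic similarity
# with normalized BM25, weighted 70/30. Embeddings are faked with word-count
# cosine similarity to keep the example dependency-free.
import math
from collections import Counter

DOCS = [
    "User prefers short answers and Python examples",
    "Project notes: memory stored as plain Markdown files",
    "Meeting summary: discussed ranking signals and recency",
]

def tokenize(text):
    return text.lower().split()

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Classic Okapi BM25 over a tiny corpus (the lexical signal)."""
    tokenized = [tokenize(d) for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in tokenize(query):
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            freq = tf[term]
            score += idf * freq * (k1 + 1) / (
                freq + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(score)
    return scores

def cosine_scores(query, docs):
    """Stand-in for embedding similarity: cosine over word counts."""
    q = Counter(tokenize(query))
    out = []
    for d in docs:
        v = Counter(tokenize(d))
        dot = sum(q[w] * v[w] for w in q)
        norm = (math.sqrt(sum(c * c for c in q.values()))
                * math.sqrt(sum(c * c for c in v.values())))
        out.append(dot / norm if norm else 0.0)
    return out

def normalize(xs):
    """Scale scores to [0, 1] so the two signals are comparable."""
    hi = max(xs)
    return [x / hi if hi else 0.0 for x in xs]

def hybrid_rank(query, docs, w_sem=0.7, w_bm25=0.3):
    """Return document indices ranked by the 70/30 blended score."""
    sem = normalize(cosine_scores(query, docs))
    lex = normalize(bm25_scores(query, docs))
    blended = [w_sem * s + w_bm25 * l for s, l in zip(sem, lex)]
    return sorted(range(len(docs)), key=lambda i: blended[i], reverse=True)

ranking = hybrid_rank("markdown memory files", DOCS)
```

The interesting design choice is the normalization step: BM25 and cosine scores live on different scales, so some rescaling (max-scaling here; real systems often use reciprocal rank fusion instead) is needed before a fixed 70/30 weighting means anything.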

Full disclosure: I'm building in this space (Maximem Vity — a private, secure vault for cross-LLM memory). The comparison stands on its own, but that context motivated the research.

Happy to discuss the architectural differences or answer questions.