clawdbot has been getting a lot of attention recently, so I tried it out and went through part of the codebase.
The experience is impressive in terms of proactivity — a 24/7 agent that actually does things for you feels like a glimpse of the future.
But when it comes to *memory*, I was a bit underwhelmed.
From what I can see, the current memory mechanism is still quite shallow. It behaves more like lightweight state or short-term context than a real long-term, evolving memory system. There’s no clear structure, no visibility, and limited ability to reason over past experiences in a durable way.
For an always-on agent, memory is not a nice-to-have — it is the core capability.
A 24/7 proactive agent should be able to:
- remember past conversations and user preferences
- accumulate knowledge over weeks or months
- learn from previous actions and outcomes
- evolve instead of restarting from zero every session
This made me think: clawdbot (and agents like it) might benefit from a dedicated memory layer rather than ad-hoc storage.
We’ve been building an open-source memory framework called memU that stores agent memory as structured, inspectable files and supports both retrieval and direct LLM file reading. My intuition is that plugging something like this into clawdbot could turn it from a “smart automaton” into an agent that truly grows over time.
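To make the idea concrete, here is a minimal sketch of the file-backed pattern described above: memories land in plain, greppable files that a human can inspect and an LLM can read whole, with naive keyword retrieval on top. This is not memU's actual API — the class and method names (`FileMemory`, `remember`, `recall`, `dump`) are hypothetical, chosen just to illustrate the shape of such a layer.

```python
import json
import time
from pathlib import Path


class FileMemory:
    """Toy file-backed memory store: one JSON-lines file per topic.

    Files are human-inspectable and greppable; `dump` returns a whole
    topic file, e.g. for direct reading by an LLM.
    """

    def __init__(self, root: str = "memory"):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def remember(self, topic: str, text: str) -> None:
        # Append one timestamped entry to the topic's .jsonl file.
        entry = {"ts": time.time(), "text": text}
        with open(self.root / f"{topic}.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    def recall(self, query: str, limit: int = 5) -> list[str]:
        # Naive retrieval: rank entries by overlap with the query's words.
        # A real system would use embeddings; word overlap keeps this runnable.
        terms = set(query.lower().split())
        scored = []
        for path in self.root.glob("*.jsonl"):
            for line in path.read_text(encoding="utf-8").splitlines():
                entry = json.loads(line)
                score = len(terms & set(entry["text"].lower().split()))
                if score:
                    scored.append((score, entry["ts"], entry["text"]))
        scored.sort(reverse=True)  # highest overlap, then most recent, first
        return [text for _, _, text in scored[:limit]]

    def dump(self, topic: str) -> str:
        # Whole-file read — the "direct LLM file reading" path.
        path = self.root / f"{topic}.jsonl"
        return path.read_text(encoding="utf-8") if path.exists() else ""
```

Even a sketch this small shows the appeal over ad-hoc storage: the agent's memory is a directory you can open, diff, and audit, rather than opaque state buried in a session.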