matiasmolinas | 3 months ago
Inspired by the recent Google Research paper "Nested Learning: The Illusion of Deep Learning Architectures", we've implemented a practical version of its "Continuum Memory System" (CMS) in our open-source agent framework, LLMunix.
https://research.google/blog/introducing-nested-learning-a-n...
The idea is to create a memory hierarchy with different update frequencies, analogous to brain waves, where memories "cool down" and become more stable over time.
Our implementation is entirely file-based and uses Markdown with YAML frontmatter (no databases):
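To make the file format concrete, here is a minimal sketch of one memory entry and a parser for it. The frontmatter field names (`tier`, `created`, `task`) are illustrative assumptions; the actual schema lives in system/infrastructure/memory_schema.md.

```python
# Hypothetical memory entry: Markdown body with YAML frontmatter.
# Field names here are illustrative, not the project's real schema.
ENTRY = """---
tier: short_term
created: 2024-06-01
task: summarize-report
---
Agent ran the summarize workflow; output accepted on first pass.
"""

def parse_entry(text):
    """Split a Markdown memory file into (metadata dict, body text)."""
    # The file starts with "---", so the first split piece is empty.
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_entry(ENTRY)
print(meta["tier"])  # short_term
```

Because every entry is plain text like this, `git diff` on the memory directories doubles as an audit log of what the system learned.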
- High-Frequency Memory (Gamma): raw agent interaction logs and workspace state from every execution. Highly volatile, short retention. (/projects/{ProjectName}/memory/short_term/)
- Mid-Frequency Memory (Beta): successful, deterministic workflows distilled into execution_trace.md files, created by a consolidation agent when a novel task is solved effectively. Much more stable. (/projects/{ProjectName}/memory/long_term/)
- Low-Frequency Memory (Alpha): core patterns proven reliable across many contexts and projects, stored in system-wide logs and libraries. (/system/memory_log.md)
- Ultra-Low-Frequency Memory (Delta): foundational knowledge that forms the system's identity. (/system/SmartLibrary.md)
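The tier hierarchy above can be written down as plain data. The paths follow the post; the `update` descriptions are paraphrases, not values read from the repo.

```python
# The four CMS tiers, fastest to slowest. Paths come from the post;
# the "update" labels are informal descriptions for illustration.
TIERS = {
    "gamma": {"path": "projects/{project}/memory/short_term/", "update": "every run"},
    "beta":  {"path": "projects/{project}/memory/long_term/",  "update": "on consolidation"},
    "alpha": {"path": "system/memory_log.md",                  "update": "cross-project promotion"},
    "delta": {"path": "system/SmartLibrary.md",                "update": "rare, foundational"},
}

def tier_path(tier, project="Demo"):
    """Resolve the storage path for a tier; project only matters
    for the project-scoped gamma and beta tiers."""
    return TIERS[tier]["path"].format(project=project)

print(tier_path("gamma"))  # projects/Demo/memory/short_term/
```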
A new ContinuumMemoryAgent orchestrates this process, automatically analyzing high-frequency memories and deciding what gets promoted to a more stable, lower-frequency tier.
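A hedged sketch of that promotion step: an entry graduates to the next, slower tier once its pattern has proven itself. The success-count threshold and the `successes` field are assumptions made up for this example; the real decision logic is defined in system/agents/ContinuumMemoryAgent.md.

```python
# Illustrative promotion logic, not the actual agent's rules.
PROMOTION_ORDER = ["short_term", "long_term", "system_log", "smart_library"]

def next_tier(current):
    """Return the next, more stable tier, or None at the top."""
    i = PROMOTION_ORDER.index(current)
    return PROMOTION_ORDER[i + 1] if i + 1 < len(PROMOTION_ORDER) else None

def should_promote(entry, threshold=3):
    """Assumed rule: promote once a pattern has repeatedly
    succeeded within its current tier."""
    return entry.get("successes", 0) >= threshold

entry = {"tier": "short_term", "successes": 4}
if should_promote(entry):
    entry["tier"] = next_tier(entry["tier"])
print(entry["tier"])  # long_term
```

Promotion only ever copies upward; nothing in a slower tier is rewritten by a single new experience, which is what gives the lower tiers their stability.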
This enables:
- Continual learning: the system gets better and more efficient at tasks without retraining, as successful patterns are identified and hardened into reusable traces.
- Resistance to catastrophic forgetting: proven, stable knowledge in the low-frequency tiers isn't overwritten by new, transient experiences.
- Full explainability: the entire learning process is human-readable and version-controllable in Git, since it's all just Markdown files.

The idea was originally sparked by a discussion with Ismael Faro about how to build systems that truly learn from doing.
We'd love to get your feedback on this architectural approach to agent memory and learning.
GitHub Repo: https://github.com/EvolvingAgentsLabs/llmunix
Key files for this new architecture:
- The orchestrator agent: system/agents/ContinuumMemoryAgent.md
- The memory schema: system/infrastructure/memory_schema.md
- The overall system design: CLAUDE.md (which now includes the CMS theory)