MaxNardit | 4 days ago
How it works: SQLite-backed knowledge graph with scoped entities, relations, and slots. An MCP server exposes 9 tools so the agent proactively saves facts as you work. At session start, an LLM summarizes the relevant facts into a structured briefing instead of dumping raw data.
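To make the storage model concrete, here is a minimal sketch of the idea — not agent-recall's actual schema, just an illustration of entities, slots, and relations in SQLite:

```python
import sqlite3

# Illustrative schema (names are mine, not agent-recall's):
# entities live in a scope, slots hold key/value facts about an
# entity, and relations link two entities.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entities (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    scope TEXT NOT NULL          -- e.g. 'global' or 'project:acme'
);
CREATE TABLE slots (
    entity_id INTEGER REFERENCES entities(id),
    key TEXT NOT NULL,
    value TEXT NOT NULL
);
CREATE TABLE relations (
    src INTEGER REFERENCES entities(id),
    kind TEXT NOT NULL,          -- e.g. 'works_on'
    dst INTEGER REFERENCES entities(id)
);
""")

conn.execute("INSERT INTO entities VALUES (1, 'Alice', 'global')")
conn.execute("INSERT INTO entities VALUES (2, 'acme-api', 'project:acme')")
conn.execute("INSERT INTO slots VALUES (1, 'role', 'backend lead')")
conn.execute("INSERT INTO relations VALUES (1, 'works_on', 2)")

# A saved fact can later be walked as a graph edge.
row = conn.execute("""
    SELECT e.name, r.kind, d.name
    FROM relations r
    JOIN entities e ON e.id = r.src
    JOIN entities d ON d.id = r.dst
""").fetchone()
print(row)  # ('Alice', 'works_on', 'acme-api')
```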
What makes it different from a context file: Scope chains with inheritance (same person, different roles per project), bitemporal history (old facts archived, not deleted), and AI briefings that scale beyond what you'd maintain by hand.
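The two ideas above — scope-chain lookup and archive-instead-of-delete — can be sketched in a few lines. This is a hypothetical illustration; the function names and fact shape are mine, not the project's API:

```python
from datetime import datetime, timezone

# Each fact carries a validity window; superseding a fact closes
# its window instead of deleting the row (bitemporal-style history).
facts = []  # each: {key, value, scope, valid_from, valid_to}

def save_fact(key, value, scope):
    now = datetime.now(timezone.utc)
    for f in facts:
        if f["key"] == key and f["scope"] == scope and f["valid_to"] is None:
            f["valid_to"] = now          # archive, don't delete
    facts.append({"key": key, "value": value, "scope": scope,
                  "valid_from": now, "valid_to": None})

# Scope chain: look up the most specific scope first, falling back
# to inherited scopes ("same person, different roles per project").
def lookup(key, chain):
    for scope in chain:                  # most specific first
        for f in facts:
            if f["key"] == key and f["scope"] == scope and f["valid_to"] is None:
                return f["value"]
    return None

save_fact("alice.role", "backend lead", "global")
save_fact("alice.role", "reviewer", "project:acme")
save_fact("alice.role", "maintainer", "project:acme")  # 'reviewer' archived

print(lookup("alice.role", ["project:acme", "global"]))   # maintainer
print(lookup("alice.role", ["project:other", "global"]))  # backend lead
```

The payoff of keeping closed windows around is that "what did the agent believe last week?" stays answerable, which a flat context file can't do.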
Where I need help:
If you use Cursor, Windsurf, or Cline — try the MCP config and tell me what breaks
PRs adding other LLM backends (Ollama, local models) are welcome
pip install 'agent-recall[mcp]'
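For reference, MCP clients like Cursor and Cline register servers with a config shaped roughly like the one below. The server name and launch command here are placeholders — check the project's README for the actual command:

```json
{
  "mcpServers": {
    "agent-recall": {
      "command": "agent-recall-mcp-placeholder"
    }
  }
}
```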