Good observation. Agent Recall doesn't do semantic search at all — deliberately.
Instead of query → rank → top-k, it loads all entities/slots/observations within the agent's scope chain at session start, then an LLM summarizes them into a structured briefing. Priority is scope relevance (your project > your org > global) and data type (people and active tasks first, historical logs last), with a token budget that truncates lower-priority sections.
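To make the loading strategy concrete, here's a rough sketch of what that briefing assembly could look like. The scope/type rankings, the entity shape, and the chars-per-token estimate are all my assumptions, not Agent Recall's actual internals:

```python
# Hypothetical sketch: order everything by scope, then data type,
# and cut when the token budget runs out. Ranks are illustrative.
SCOPE_RANK = {"project": 0, "org": 1, "global": 2}    # your project > your org > global
TYPE_RANK = {"person": 0, "task": 1, "decision": 2, "log": 3}  # people/tasks first, logs last

def build_briefing(entities, token_budget=2000):
    """Sort entities by (scope, type) priority, truncate at the budget."""
    ordered = sorted(
        entities,
        key=lambda e: (SCOPE_RANK[e["scope"]], TYPE_RANK[e["type"]]),
    )
    briefing, used = [], 0
    for e in ordered:
        cost = len(e["text"]) // 4     # crude token estimate: ~4 chars/token
        if used + cost > token_budget:
            break                      # lower-priority sections get dropped here
        briefing.append(e["text"])
        used += cost
    return "\n".join(briefing)
```

In the real package an LLM does the summarization on top of this ordering; the sketch only shows the prioritize-then-truncate part.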
For in-session recall, there's search_nodes — keyword matching, not embeddings. Less powerful but perfectly adequate for structured facts like "who works on project X" or "what did we decide about auth."
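For intuition, plain keyword matching over structured facts is roughly this (a toy sketch, not the package's actual `search_nodes` implementation):

```python
def search_nodes(nodes, query):
    """Return nodes containing every query term, case-insensitively.
    Pure substring matching — no embeddings, no ranking model."""
    terms = query.lower().split()
    return [n for n in nodes if all(t in n.lower() for t in terms)]
```

That's enough for "who works on project X" because the fact was saved with those exact words; it falls over exactly where the tradeoff below says it does.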
Cold start: first session has no briefing, but the package auto-discovers project files (CLAUDE.md, README.md) and includes them in context, so the agent isn't completely blind. The MCP tools come with proactive-saving instructions, so memory builds organically. After 2-3 sessions the briefing is already useful.
The tradeoff is explicit: optimized for structured scoped facts (people, decisions, roles), not fuzzy semantic recall. For a coding agent that needs "Alice is the lead on project X, we decided to use REST" — keyword search + scope filtering works. For "find me something vaguely related to that auth discussion" — you'd want embeddings, and that's not what this does.
MaxNardit|4 days ago