dabaja
|
1 month ago
We hit something similar building document processing agents. What helped was treating memory as typed knowledge rather than flat text.
Three buckets: (1) constraints -- hard rules that should always apply, (2) decisions -- past choices with context on why, (3) heuristics -- soft preferences that can be overridden.
Retrieval then becomes: constraints always injected, decisions pulled by similarity to current task, heuristics only when ambiguity is high. Still experimenting with how to detect "ambiguity" reliably -- right now it's a cheap classifier on the task description.
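Roughly, in runnable Python (names are mine, and the similarity function here is a toy word-overlap stand-in for whatever embedding you'd actually use):

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    CONSTRAINT = "constraint"  # hard rules, always injected
    DECISION = "decision"      # past choices, pulled by similarity
    HEURISTIC = "heuristic"    # soft preferences, only under high ambiguity

@dataclass
class MemoryItem:
    kind: Kind
    text: str
    topic: str

def similarity(a: str, b: str) -> float:
    # toy Jaccard overlap; swap in embedding cosine similarity in practice
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve(memory, task: str, ambiguity: float, k: int = 3):
    # constraints are unconditional
    out = [m for m in memory if m.kind is Kind.CONSTRAINT]
    # decisions: top-k by similarity to the current task
    decisions = sorted(
        (m for m in memory if m.kind is Kind.DECISION),
        key=lambda m: similarity(m.text, task),
        reverse=True,
    )
    out += decisions[:k]
    # heuristics only when the ambiguity classifier fires (threshold is arbitrary)
    if ambiguity > 0.5:
        out += [m for m in memory if m.kind is Kind.HEURISTIC]
    return out
```

The nice part of this shape is that the injection policy lives in one place, so changing the ambiguity gate doesn't touch storage at all.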
The deduplication problem is real. We ended up hashing on (topic, decision_type) and forcing manual review whenever a collision is detected. Brutal but necessary. TBH we're not home yet, but that's the path we're walking.
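The dedup side is even simpler — something like this (again, my names, and the "manual review" here is just a queue you'd surface in a UI):

```python
import hashlib

def dedup_key(topic: str, decision_type: str) -> str:
    # stable key: a collision means we already have a decision of this
    # type on this topic, so the new one needs a human to reconcile
    return hashlib.sha256(f"{topic}|{decision_type}".encode()).hexdigest()

class MemoryStore:
    def __init__(self):
        self.items: dict[str, str] = {}
        self.review_queue: list[tuple[str, str, str]] = []

    def add(self, topic: str, decision_type: str, text: str):
        key = dedup_key(topic, decision_type)
        if key in self.items and self.items[key] != text:
            # collision with different content: don't overwrite, flag it
            self.review_queue.append((key, self.items[key], text))
        else:
            self.items[key] = text
```

Never auto-merging on collision is the whole point — silently overwriting a past decision is how the agent "forgets" why it did something.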