JosephjackJR | 3 days ago
This started from debugging agent workflows that behaved differently after restarts. In several cases the memory layer relied on embeddings and approximate search through a vector database. It worked, but recall was not deterministic and restarting the process sometimes changed behaviour in subtle ways.
I wanted something simpler and more predictable.
So I built a restart-persistent local memory engine that behaves more like SQLite than a cloud vector database. It runs as a single binary, stores data locally, and retrieval is deterministic. If you kill the process and restart it, the same query returns the same IDs in the same order.
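To make the determinism claim concrete, here is a minimal sketch of the property using Python's sqlite3 module. This is not the engine's actual API; the table name, schema, and `recall` function are illustrative. The point is the total ordering on the sort key: with ties broken by a unique id, the same query returns the same IDs in the same order on every run, including after a restart.

```python
import sqlite3

def open_store(path):
    # Illustrative schema, not the engine's real one.
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS memory ("
        "id INTEGER PRIMARY KEY, topic TEXT, score REAL, body TEXT)"
    )
    return db

def recall(db, topic, k=3):
    # Total ordering: score descending, then id ascending as a tie-break.
    # Because the sort key is total, ranking has no ambiguity for the
    # database to resolve differently between runs.
    rows = db.execute(
        "SELECT id FROM memory WHERE topic = ? "
        "ORDER BY score DESC, id ASC LIMIT ?",
        (topic, k),
    ).fetchall()
    return [r[0] for r in rows]

db = open_store(":memory:")
db.executemany(
    "INSERT INTO memory (id, topic, score, body) VALUES (?, ?, ?, ?)",
    [(1, "infra", 0.9, "a"), (2, "infra", 0.9, "b"), (3, "misc", 0.5, "c")],
)
print(recall(db, "infra"))  # [1, 2] -- the tie at 0.9 is broken by id
```

Contrast this with approximate nearest-neighbour indexes, where insertion order, index rebuilds, or graph traversal randomness can reorder results between runs.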
It is not an LLM, not an agent framework, and not a SaaS product. It is meant to sit underneath those systems as a low-level memory primitive.
This is not a replacement for semantic search in every case. If you genuinely need approximate similarity over unstructured text, embeddings make sense. But I have a suspicion that in many structured agent and infra workflows, deterministic storage would be simpler and cheaper.
I would really appreciate feedback from people building ML or data infrastructure. In what cases is approximate search actually required, and where has it simply become the default?