
jamiecode | 4 days ago

[dead]


__parallaxis | 4 days ago

Good call on WAL: we do use it. The DB layer explicitly enables WAL on every connection, so we get concurrent readers and a single writer without blocking, which matters when the MCP server is serving search while a sync is running in the same process. The concurrency model is one process (one ctx serve mcp, or one ctx sync at a time) with a small connection pool. Inside that process, reads (search, get) and writes (sync, ingest) can overlap; WAL keeps readers from blocking the writer and the writer from blocking readers. So it's built for single-agent/single-server use in the sense of one Context Harness process, but that process can handle many concurrent MCP clients (all reading) plus the occasional sync (one writer).
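For the curious, the per-connection setup is roughly this shape. This is a minimal Python sketch, not the actual Context Harness code; `open_conn` and the `docs` schema are made up for illustration:

```python
import os
import sqlite3
import tempfile

def open_conn(db_path):
    # Every connection enables WAL plus a busy timeout. journal_mode=WAL is
    # persistent in the database file, but re-issuing the pragma is cheap.
    conn = sqlite3.connect(db_path, timeout=5.0, isolation_level=None)  # autocommit
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL
    return conn

db = os.path.join(tempfile.mkdtemp(), "ctx.db")

writer = open_conn(db)
writer.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
writer.execute("BEGIN")  # open a write transaction and leave it in flight
writer.execute("INSERT INTO docs (body) VALUES ('hello')")

# Under WAL the reader is not blocked by the in-flight writer; it just
# sees the last committed snapshot (zero rows) until the writer commits.
reader = open_conn(db)
rows_before = reader.execute("SELECT COUNT(*) FROM docs").fetchone()[0]
writer.execute("COMMIT")
rows_after = reader.execute("SELECT COUNT(*) FROM docs").fetchone()[0]
print(rows_before, rows_after)  # 0 1
```

The same snapshot behavior is what lets MCP search queries keep returning results mid-sync: readers see the pre-sync state until the write transaction commits.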

If you run multiple processes (e.g. a cron job that runs ctx sync while another sync or a long-lived ctx serve mcp is also writing), you still only get one writer at a time at the SQLite level. We recommend not overlapping writers across processes: for example, a cron job that runs sync every N hours and doesn't start the next run until the previous one has finished (or use a lockfile). Our deployment doc says: if you see "database is locked", make sure only one ctx sync runs at a time. So WAL is on and does what you'd expect, and multi-process write contention is avoided by design (one sync at a time, no overlapping cron invocations, etc.).
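The "don't start the next run until the previous one has finished" part is easy with a non-blocking lockfile. A rough POSIX-only sketch; the lock path is made up, and the wrapper is an illustration rather than anything ctx ships:

```python
import fcntl
import subprocess
import sys

LOCK_PATH = "/tmp/ctx-sync.lock"  # illustrative path, not a real ctx default

def run_sync_exclusively(cmd):
    """Run cmd unless a previous invocation still holds the lock."""
    lock = open(LOCK_PATH, "w")
    try:
        # Non-blocking: if another process holds the lock, bail out
        # instead of starting a second SQLite writer.
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        print("previous sync still running; skipping this run")
        return False
    try:
        subprocess.run(cmd, check=True)
        return True
    finally:
        lock.close()  # closing the file descriptor releases the flock

# e.g. invoked from cron: run_sync_exclusively(["ctx", "sync"])
```

On Linux you can get the same effect straight from crontab with flock(1)'s -n flag, no wrapper script needed.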

I might add configurable vector storage later (e.g. plug in something else for the embedding index), but I’m still not sure I need or want it. I like keeping the stack opinionated toward SQLite — one file, one binary, no extra services — so that’s the default for the foreseeable future.