
Show HN: Context Protocol, a sovereign-first workflow for thinking with LLMs

1 point | zpusmani | 1 month ago

I shared DeckBuilder (my local-first slide editor) here last month (https://news.ycombinator.com/item?id=46365332). While using it with LLMs, I kept hitting a bigger problem: context loss, hallucinated instructions, and ideas I'd rejected coming back from the dead.

I realized I was expecting AI to be a partner with memory, when it's really more of a stateless CPU. Even the latest models reset between sessions.

So I built a protocol where:

- Your files are the memory (plain markdown)

- Git is the version control

- You inject context at session start

- AI proposes, you ratify, files record
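That loop can be sketched in a few lines. This is my own illustration, not code from the repo; the file names (`DECISIONS.md`, `REJECTED.md`, `CONSTRAINTS.md`) are assumptions:

```python
import subprocess
from pathlib import Path

# Hypothetical memory files; the protocol's actual file layout may differ.
MEMORY_FILES = ("DECISIONS.md", "REJECTED.md", "CONSTRAINTS.md")

def build_preamble(project_dir: Path) -> str:
    """Concatenate the project's markdown memory files into one
    context block to paste at the start of an LLM session."""
    parts = []
    for name in MEMORY_FILES:
        f = project_dir / name
        if f.exists():
            parts.append(f"## {name}\n{f.read_text()}")
    return "\n\n".join(parts)

def ratify(project_dir: Path, decision: str) -> None:
    """A human-approved decision is appended to the file and committed,
    so git history becomes the audit trail."""
    log = project_dir / "DECISIONS.md"
    with log.open("a") as fh:
        fh.write(f"- {decision}\n")
    subprocess.run(["git", "-C", str(project_dir), "add", "DECISIONS.md"], check=True)
    subprocess.run(["git", "-C", str(project_dir), "commit", "-m", f"ratify: {decision}"], check=True)
```

The point is that nothing here talks to a model API: the "memory" is just files you paste in at session start, and git diffs show exactly when each decision entered the record.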

The system has:

- 5 commands: CHECKPOINT, SCOPE LOCK, HARD STOP, MODE STRATEGY, MODE EXPLORATION

- 3 constraint tags for locked decisions, rejected ideas, and hard constraints

- Cross-platform behavior: the same files work the same way on Claude, ChatGPT, and Gemini

- No vector DBs, LangChain, or cloud dependencies
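To make the constraint tags concrete, here's a minimal sketch of how tagged lines in a markdown file could be parsed and used to flag a proposal that resurrects a rejected idea. The `[LOCKED]` / `[REJECTED]` / `[CONSTRAINT]` syntax is my invention for illustration; the repo defines its own three tags:

```python
import re

# Hypothetical tag syntax: "[TAG] description" on a line of the memory file.
TAG_RE = re.compile(r"\[(LOCKED|REJECTED|CONSTRAINT)\]\s*(.+)")

def parse_tags(markdown: str) -> dict[str, list[str]]:
    """Collect tagged lines from a memory file into per-tag buckets."""
    tags: dict[str, list[str]] = {"LOCKED": [], "REJECTED": [], "CONSTRAINT": []}
    for m in TAG_RE.finditer(markdown):
        tags[m.group(1)].append(m.group(2).strip())
    return tags

def flags_rejected(proposal: str, tags: dict[str, list[str]]) -> list[str]:
    """Return any rejected ideas the new proposal appears to bring back.
    Naive substring match; a real check would be fuzzier."""
    lowered = proposal.lower()
    return [r for r in tags["REJECTED"] if r.lower() in lowered]
```

Because enforcement is just text matching over files you control, the same check behaves identically regardless of which model is on the other side of the session.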

The core insight: LLMs are excellent stateless processors whereas decision memory, auditability, and long-horizon state are human responsibilities. This protocol makes that division explicit.

I tested it across the latest versions of all three platforms and it passed constraint enforcement, rejected idea protection, scope lock compliance, and checkpoint format consistency.

This is intentionally manual and opinionated. It's not for fully autonomous workflows. Friction is the feature.

Repo: https://github.com/zohaibus/context-protocol

Would love feedback, especially from anyone who's tried managing context across long projects with LLMs. What's worked for you? What's failed?
