top | item 46754169


kxbnb | 1 month ago

Your framing of the problem resonates: treating the LLM as untrusted is the right starting point. The CAR spec sounds similar to what we're building at keypost.ai.

On canonicalization: we found that intercepting at the tool/API boundary (rather than parsing free-form output) sidesteps most aliasing issues. The MCP protocol helps here - structured tool calls are easier to normalize than arbitrary text.
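To make the boundary-interception point concrete, here's a minimal sketch of normalizing a structured tool call before policy evaluation. The call shape and field names are illustrative, not from any actual MCP schema:

```python
import posixpath

def canonicalize_call(tool_call: dict) -> dict:
    """Normalize a structured (MCP-style) tool call before policy checks.

    Hypothetical shape: {"tool": "read_file", "args": {"path": "..."}}.
    Because the call is already structured, we normalize individual
    fields instead of parsing free-form model output.
    """
    call = {"tool": tool_call["tool"].strip().lower(), "args": {}}
    for key, value in tool_call.get("args", {}).items():
        if key == "path" and isinstance(value, str):
            # Collapse "." segments and duplicate slashes so that
            # "/etc/./passwd" and "/etc//passwd" alias to one form.
            value = posixpath.normpath(value)
        call["args"][key] = value
    return call

canonicalize_call({"tool": "Read_File", "args": {"path": "/etc//./passwd"}})
# -> {"tool": "read_file", "args": {"path": "/etc/passwd"}}
```

The key design choice: aliasing is resolved per-field on structured data, so the policy layer only ever sees one spelling of each resource.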

On stateful intent: this is harder. We're experimenting with session-scoped budgets (max N reads before requiring elevated approval) rather than trying to detect "bad sequences" semantically. Explicit resource limits beat heuristics.
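A session-scoped budget along those lines can be sketched as a simple counter; the class and return values below are hypothetical, just to show the shape of "explicit limits instead of semantic detection":

```python
class SessionBudget:
    """Session-scoped resource budget: allow up to max_reads reads,
    then require elevated approval instead of trying to judge whether
    the sequence of reads "looks bad". Names here are illustrative.
    """

    def __init__(self, max_reads: int):
        self.max_reads = max_reads
        self.reads = 0

    def check_read(self) -> str:
        self.reads += 1
        if self.reads > self.max_reads:
            return "needs_approval"  # escalate once the budget is spent
        return "allow"

budget = SessionBudget(max_reads=3)
[budget.check_read() for _ in range(4)]
# -> ['allow', 'allow', 'allow', 'needs_approval']
```

The escalation is deterministic and auditable, which is the whole appeal over heuristic sequence detection.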

On latency: sub-10ms is achievable for policy checks if you keep rules declarative and avoid LLM-in-the-loop validation. YAML policies with pattern matching scale well.
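For a sense of why such checks stay fast: a first-match-wins rule list over glob patterns is just a few string comparisons per call. The rules below are shown in dict form (as they might look after loading a YAML file) so the sketch stays stdlib-only; the rule shape is an assumption:

```python
from fnmatch import fnmatch

# Hypothetical policy rules, as they might appear after YAML parsing.
POLICY = [
    {"tool": "read_file", "path": "/workspace/*", "action": "allow"},
    {"tool": "read_file", "path": "*",            "action": "deny"},
    {"tool": "*",         "path": "*",            "action": "deny"},
]

def evaluate(tool: str, path: str) -> str:
    """First-match-wins evaluation: pure pattern matching, no LLM in
    the loop, so each check is a handful of string comparisons."""
    for rule in POLICY:
        if fnmatch(tool, rule["tool"]) and fnmatch(path, rule["path"]):
            return rule["action"]
    return "deny"  # default-deny if nothing matches

evaluate("read_file", "/workspace/notes.txt")  # -> 'allow'
evaluate("read_file", "/etc/passwd")           # -> 'deny'
```

With rules kept declarative like this, the check cost scales with the rule count, not with model inference time.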

Curious about your CAR spec - are you treating it as a normalization layer before policy evaluation, or as the policy language itself?
