Nice work on the formal verification — 50K states with zero violations is solid.

One thing I've been thinking about in this space: authorization (what an agent CAN do) is only half the problem. The other half is identity (WHO is the agent). Your K-of-N threshold approach handles the authorization side well, but it assumes you already know which agents are part of the quorum.

In multi-agent systems where agents come from different frameworks or organizations, establishing identity first becomes critical. Who issued this agent? Can I verify it cryptographically? What's its track record?

I've been working on this from the identity angle — Ed25519 DIDs, challenge-response verification, and a peer-to-peer vouching system for building trust graphs between agents. The two approaches feel very complementary: identity tells you who's asking, authorization (like IC-AGI) tells you what they're allowed to do.

Curious: does your capability token model support any notion of agent reputation or trust scores, or is it purely policy-based?
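The challenge-response flow mentioned above can be sketched in a few lines. This is a hypothetical, stdlib-only illustration: HMAC-SHA256 stands in for the Ed25519 signature so the example runs without third-party dependencies, and `Agent`, `verify_identity`, and the `did:example:` identifier are names invented for this sketch, not part of either project.

```python
import hmac
import hashlib
import secrets

class Agent:
    """Toy agent holding a secret key behind a DID (sketch only)."""

    def __init__(self, did):
        self.did = did
        self._key = secrets.token_bytes(32)  # stand-in for an Ed25519 private key

    def key_material(self):
        # With real Ed25519 the verifier would fetch a PUBLIC key from the
        # DID document; sharing a MAC key like this only works for the demo.
        return self._key

    def respond(self, challenge):
        # Prove control of the key by signing the verifier's fresh nonce.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def verify_identity(claimed_key, challenge, response):
    """Accept only if the responder controls the key behind the claimed DID."""
    expected = hmac.new(claimed_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Verifier side: issue a fresh random nonce so responses can't be replayed.
agent = Agent("did:example:agent-1")
challenge = secrets.token_bytes(32)
ok = verify_identity(agent.key_material(), challenge, agent.respond(challenge))
```

The fresh nonce per challenge is what blocks replay: a captured response is useless against any future challenge.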
Hi HN. I built IC-AGI because every AI agent framework I looked at gives the agent full authority once it gets an API key. That's fine for demos but scary for anything touching production data or infrastructure.
IC-AGI splits authority using K-of-N threshold approval (Shamir Secret Sharing). No single node, human or AI, can unilaterally execute a critical action. Capability tokens have TTL, scope, and consumable budgets, so an agent can only do what it was explicitly authorized to do, for a limited time, within a spending cap.
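A token model like that can be sketched in stdlib Python. This is a minimal illustration of the TTL/scope/budget idea, not IC-AGI's actual implementation: the signing key, `issue_token`, `authorize`, and the server-side `_spent` ledger are all names and assumptions introduced for this example.

```python
import hmac
import hashlib
import json
import time

SIGNING_KEY = b"demo-signing-key"  # hypothetical; a real system derives authority from the quorum
_spent = {}                        # control-plane ledger: token id -> amount already consumed

def issue_token(token_id, scope, ttl_s, budget):
    """Mint a capability token bound to a scope, an expiry, and a spending cap."""
    payload = json.dumps(
        {"id": token_id, "scope": scope, "exp": time.time() + ttl_s, "budget": budget},
        sort_keys=True,
    ).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def authorize(payload, tag, action, cost):
    """Verify the HMAC, then enforce expiry, scope, and remaining budget."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # forged or tampered token
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        return False  # TTL elapsed
    if action not in claims["scope"]:
        return False  # outside the granted scope
    used = _spent.get(claims["id"], 0)
    if used + cost > claims["budget"]:
        return False  # would exceed the spending cap
    _spent[claims["id"]] = used + cost  # consume from the budget
    return True
```

Tracking consumption in a server-side ledger (rather than mutating the signed payload) keeps the signature valid while still making the budget consumable.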
The part I'm most interested in feedback on: I formally verified 14 safety properties using TLA+ model checking (~50K states, ~250K property checks) plus 8 algebraic proofs for the Shamir implementation. Zero violations. If you work with TLA+ or formal methods, I'd love to hear if my specs are sound or if I'm missing edge cases.
Stack: Python, Shamir SSS over GF(p), HMAC-SHA256, FastAPI control plane, K8s manifests for distributed workers. 273 tests including adversarial attacks (replay, MITM, oracle extraction, code injection). Apache 2.0.
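For readers unfamiliar with Shamir over GF(p), here is a minimal textbook sketch of the scheme (split a secret into N shares, reconstruct from any K via Lagrange interpolation at x = 0). The prime and function names are chosen for this example and are not taken from the IC-AGI codebase.

```python
import random

P = 2**127 - 1  # Mersenne prime; all arithmetic is in the field GF(P)

def split(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]

    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P  # modular inverse via pow
    return secret
```

With k-1 shares, every candidate secret remains equally likely, which is the information-theoretic guarantee behind the "no single node can act unilaterally" property.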
the_nexus_guard|9 days ago
saezbaldo|12 days ago
Longer writeup with the threat model: https://dev.to/saezbaldo/every-ai-agent-framework-trusts-the...
Happy to answer questions about the architecture or the formal verification approach.