nobulexdev | 1 day ago
The problem: AI agents are making real decisions about loans, trades, hiring, and diagnostics with zero cryptographic proof of what they did or whether they followed any rules. The EU AI Act requires tamper-evident audit trails by August 2026. Nobody has infrastructure for this.
Nobulex is three things:
Agents sign behavioral covenants before they act (cryptographic commitments: "I will not do X")
Middleware enforces those covenants at runtime, so violations are blocked before execution
Every action is logged to a hash-chained, Merkle-tree audit trail that anyone can verify independently (sketched below)
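To make the audit-trail claim concrete, here's a minimal sketch of the hash-chaining idea in plain Node. It's illustrative only: the entry shape, function names, and 'GENESIS' sentinel are made up for this sketch, not the @nobulex/sdk internals. The point is that each entry commits to the hash of the one before it, so tampering with any past record invalidates everything after it.

    import { createHash } from 'node:crypto';

    // Append an action; each entry commits to the previous entry's hash.
    function appendEntry(log, action) {
      const prevHash = log.length ? log[log.length - 1].hash : 'GENESIS';
      const timestamp = Date.now();
      const hash = createHash('sha256')
        .update(JSON.stringify({ action, timestamp, prevHash }))
        .digest('hex');
      const entry = { action, timestamp, prevHash, hash };
      log.push(entry);
      return entry;
    }

    // Independent verification: recompute every hash and re-check the links.
    function verifyChain(log) {
      let prevHash = 'GENESIS';
      for (const e of log) {
        const expected = createHash('sha256')
          .update(JSON.stringify({ action: e.action, timestamp: e.timestamp, prevHash }))
          .digest('hex');
        if (e.hash !== expected || e.prevHash !== prevHash) return false;
        prevHash = e.hash;
      }
      return true;
    }

The Merkle tree adds to this by letting a verifier confirm that a single entry is in the log without downloading the whole chain.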
The quickstart is three lines (ESM, so the top-level await works):

    npm install @nobulex/sdk

    import { protect } from '@nobulex/sdk';
    const agent = await protect({ name: 'my-agent', rules: ['no-data-leak', 'read-only'] });
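For a feel of what enforcing a rule like 'read-only' from that quickstart means, here's a hedged sketch of the enforcement shape. The enforce function and the action object are hypothetical, written for illustration; they are not the SDK's actual API.

    import { writeFileSync } from 'node:fs';

    // Hypothetical guard (not the SDK's actual API): every tool call
    // passes through the agent's rules before it runs.
    function enforce(rules, action) {
      if (rules.includes('read-only') && action.kind === 'write') {
        // Blocked before any side effect happens; the refusal itself
        // would become an entry in the audit trail.
        throw new Error(`covenant violation: read-only forbids ${action.kind}`);
      }
      return action.run();
    }

    // A write attempt under a read-only covenant never executes:
    enforce(['read-only'], {
      kind: 'write',
      run: () => writeFileSync('out.txt', 'data'),
    }); // throws before writeFileSync runs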
Everything is MIT licensed and on npm under @nobulex/*. Site: https://nobulex.com
Would love feedback on the architecture, the covenant model, or anything else. Happy to answer questions.
mlyle | 1 day ago
An agent signing a covenant doesn't do anything. You're not going to enforce a contract against it, and there's not some kind of non-repudiation problem to solve.
Enforcing behavioral covenants or boundaries is inherent to how you make things safe. But how do you really do it for anything that matters? How do you make sure that an agent isn't discriminating based on race or other factors?
The whole reason you're using an LLM is because you're doing something either:
A) at very low scale, in which case it's hard to capture sufficient covenants cost-efficiently
or B) with very great complexity, where the behavior you want is hard to encapsulate in code-- in which case meaningful enforcement of the complex covenants that may result is hard.
Indeed, if you could just write code to do it, you'd just write code to do it.
I'm glad you're interested in these issues and playing with them. I'll leave you with one last thought: 134 KSLOC is a bug, not a feature. Some software systems need to be huge, but for software systems that need to be trusted, small, auditable, and understandable to humans (and agents) is the key thing you're looking for. Could you build some kind of small, trustable core that solves a simple problem in an understandable way?
nobulexdev | 1 day ago