top | item 46838022

Show HN: Reg.Run – Authorization layer for AI agents

3 points | regrun | 1 month ago

Hi HN, I'm Sara, and I need to be upfront: I'm not a developer. I come from governance and HR, with ten years of seeing systems go rogue when authority is unclear, resulting in absolute chaos. Please don't be an asshole and undermine a non-technical woman trying to build something difficult - I know that it is.

The problem, from my POV: today I woke up to Moltbook, and AI agents are pulling instructions from external servers every 4 hours, executing them autonomously, with full access to databases, email, and god knows what else. There's no explicit "you can do this, not that" - no authorization layer, just prompts and prayers. Simon Willison called out the "lethal trifecta" of AI agent design. Everyone's focused on guardrails (is this safe to SAY?), but nobody has built the layer for "is this allowed to DO? RIGHT NOW? IN THIS CONTEXT?"

So Reg was born, December 2025. Reg.Run sits between an AI agent's decision and execution:

- Default deny (nothing runs without explicit permission)
- Time-boxed permissions (grant access for minutes, not forever)
- Full audit logs (know exactly what happened and why)
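The three properties above fit in a few lines of code. This is a hypothetical sketch, not Reg.Run's actual API - the `AuthLayer` class, its method names, and the in-memory grant store are all made up for illustration:

```python
import time

class AuthLayer:
    def __init__(self):
        self.grants = {}      # (agent, action) -> expiry timestamp
        self.audit_log = []   # append-only record of every decision

    def grant(self, agent, action, ttl_seconds):
        """Time-boxed permission: expires automatically."""
        self.grants[(agent, action)] = time.time() + ttl_seconds

    def authorize(self, agent, action):
        expiry = self.grants.get((agent, action))
        # Default deny: no grant, or an expired one, means "no".
        allowed = expiry is not None and time.time() < expiry
        self.audit_log.append({
            "agent": agent, "action": action,
            "allowed": allowed, "at": time.time(),
        })
        return allowed

auth = AuthLayer()
auth.grant("support-bot", "refund", ttl_seconds=300)   # 5-minute window
print(auth.authorize("support-bot", "refund"))     # True: granted, not expired
print(auth.authorize("support-bot", "delete_db"))  # False: never granted
```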

Think of it as: the model decides, but Reg.Run approves or blocks before side effects happen. Auth0 for AI agents, if you wish.

Why I'm here: I'm pre-cofounder, running design partner discovery right now. I have a website running, an MVP, and I'm finishing what I'm calling the APAA - Authorization Protocol for AI Agents - open to everyone on GitHub.

I know I'm not the typical founder here. I can't write elegant code. But I've lived through what happens when systems act with implicit authority, and I believe we need this infrastructure before we scale agents everywhere. Sort of like a seatbelt: if you're buckled in and your brakes work, you can probably go a little faster, right?

What I've built: https://reg-run.com/ https://regrunmvp.replit.app/

Please be kind, but be honest. What am I missing? What would you build differently? Is this even the right problem to solve? Looking for design partners who are already deploying agents in production and want to protect themselves.

Thanks for reading, Sara

2 comments


dheavy|1 month ago

Hi, thanks for posting this. I appreciate that you don't come from engineering and are laser-focused on product building.

There's a real gap identified (execution permission instead of output guardrails). The timing concern is valid (we're scaling agent frameworks way faster than security infrastructure — see Clawdbot-Moltbot). Default-deny + time-boxed permissions + audit logs is a solid model, easy to discuss at a high level with security teams in an org. The "Auth0 for AI Agents" framing is clear and positions it well.

Actually, the audit log piece is really huge. Having a complete execution trace with authorization decisions is invaluable for incident response. That alone might justify adoption even if the blocking mechanism is imperfect.

My concerns and questions:

- Where exactly does this sit? If it's between the agent and tool calls, that's relatively straightforward. If it needs to intercept arbitrary code execution or API calls, that's significantly harder.

- Adding another authorization layer means more setup, more policy configuration, more potential points of failure. Adoption challenge.

- Who defines what's "allowed"? In what format? How granular? Actually expressing "this agent can do X in context Y at time Z" in a way that's both powerful and usable, that's the whole ballgame (IMHO). I have in mind how complex AWS IAM policies got, and those are for relatively static systems. AI agents are dynamic, context-dependent, and probabilistic.

- By the time Reg sees a request to execute, the LLM has already decided. What happens when you block it? Does the agent gracefully handle denials and retry with different approaches?

I'd be interested in seeing real-world policy examples from your design partners. That'll tell you whether you've found the right abstraction layer.

Congratulations for just framing the idea and getting this far. Being very concerned about the current free-wheeling AI expansion with minimal security, I strongly believe this is going in the right direction and would like to know where this leads.

regrun|1 month ago

Thanks for the thoughtful questions! You've identified exactly the challenges I'm also facing, solution-wise.

> Where exactly does this sit?

Between agent reasoning and tool execution. The agent/framework calls Reg.Run before executing any tool/action. The pattern would be:

1. Agent decides: "I should refund $250"
2. Calls Authorization Protocol /authorize with action details
3. Reg.Run evaluates policy → approved/denied/requires_approval
4. If approved, the agent proceeds. If denied, the agent knows immediately.

Integration points: LangChain/LangGraph tools, MCP servers, custom agent frameworks. We provide middleware that wraps tool calls.
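The middleware pattern described above can be sketched in a few lines. Everything here is hypothetical — the `authorize` function stands in for a call to an authorization endpoint, and the policy inside it is invented for illustration:

```python
def authorize(action, params):
    """Stand-in for an /authorize call; real policy would live server-side."""
    if action == "refund" and params.get("amount", 0) <= 500:
        return "approved"
    return "denied"

def reg_guard(action_name, tool_fn):
    """Middleware: wrap a tool so authorization runs before any side effect."""
    def wrapped(**params):
        decision = authorize(action_name, params)
        if decision != "approved":
            # Denial surfaces immediately, so the agent can re-plan.
            raise PermissionError(f"{action_name} {decision} by policy")
        return tool_fn(**params)
    return wrapped

def issue_refund(amount, customer_id):
    return f"refunded ${amount} to {customer_id}"

guarded_refund = reg_guard("refund", issue_refund)
print(guarded_refund(amount=250, customer_id="c_42"))  # within policy: runs
# guarded_refund(amount=5000, customer_id="c_42")      # raises PermissionError
```

The same wrapper shape maps onto LangChain tool callbacks or an MCP server's tool handlers: the framework invokes the wrapped function, never the raw one.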

> Adoption challenge (more setup, more config)

Valid concern, tbh - I think this was the most difficult part of thinking about Reg, especially because I didn't know where to start. After speaking with engineers and friends, I came to this:

- Start with sane defaults (deny-all, then allowlist incrementally)
- Pre-built policies for common patterns (refunds, data access, transfers)
- Dashboard UI for ops teams (no code/policy language needed)
- Gradual rollout: monitor-only mode first, then enforce

I would like to make adoption easier, with good UX, not just a spec.
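The "monitor-only first, then enforce" rollout above is the part that makes deny-all adoptable: you run the same policy in both modes and only flip the switch once the logs look clean. A minimal sketch, with an invented allowlist policy and mode flag:

```python
def evaluate_policy(action):
    """Deny-all default with an incremental allowlist (invented example)."""
    allowlist = {"read_ticket", "send_reply"}
    return action in allowlist

def check(action, mode="monitor"):
    allowed = evaluate_policy(action)
    if not allowed and mode == "monitor":
        # Monitor mode: record the would-be denial, but let the call through.
        print(f"[monitor] would deny: {action}")
        return True
    return allowed

check("export_all_customers", mode="monitor")  # logged, still allowed
check("export_all_customers", mode="enforce")  # actually blocked
```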

> Who defines what's "allowed"?

Great question. Trying to learn this with design partners right now.

My current thoughts/approach: a three-tier system -

- Simple rules (amount thresholds, time windows) → YAML config
- Business context (customer LTV, fraud flags) → external data lookups
- Complex logic → delegate to an approval workflow

You're right that AWS IAM got unwieldy. We're trying to avoid that by:

1. Keeping policies human-readable (for ops teams, not just engineers)
2. Starting simple, adding complexity only when needed
3. Approval workflows as an escape hatch (when policy can't decide)

The abstraction I'm testing: "auto-approve simple cases, require human judgment for edge cases, deny obviously bad things." Keeping it fairly simple and evolving from there.
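The three tiers can be sketched as one decision function. All thresholds, the fraud lookup, and the decision strings below are made up to illustrate the shape, not real policy:

```python
def lookup_fraud_flag(customer_id):
    """Stand-in for a tier-2 external data lookup (e.g. a fraud service)."""
    return customer_id in {"c_99"}  # pretend this customer is flagged

def decide_refund(amount, customer_id):
    if amount <= 50:                     # tier 1: simple threshold rule
        return "approved"
    if lookup_fraud_flag(customer_id):   # tier 2: business context
        return "denied"
    if amount > 1000:                    # tier 3: policy can't decide alone
        return "requires_approval"
    return "approved"

print(decide_refund(25, "c_1"))     # approved (auto-approve simple case)
print(decide_refund(250, "c_99"))   # denied (obviously bad: fraud flag)
print(decide_refund(5000, "c_1"))   # requires_approval (human judgment)
```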

Thank you so much for reading, giving feedback - and, most importantly, making me think!