
Ask HN: How are you enforcing permissions for AI agent tool calls in production?

3 points | amjadfatmi1 | 1 month ago

I’m seeing more teams ship agentic systems that can call real tools (DB writes, deploys, email, billing, internal APIs). Most of the safety patterns I hear are prompt rules + basic validation + “human-in-the-loop for risky stuff.”

My question: in a real production environment, what’s your enforcement point that the agent cannot bypass? Like, what actually guarantees the tool call isn’t executed unless it passes policy?

Some specific things I’m curious about:

Are you enforcing permissions inside each tool wrapper, at a gateway/proxy, or via centralized policy service?

How do you handle identity + authorization when agents act on behalf of users?

Do you log decisions separately from execution logs (so you can answer “why was this allowed?” later)?

How do you roll out enforcement safely (audit-only/shadow mode -> enforcement)?

What failure modes hurt most: policy bugs, agent hallucinations, prompt injection, or tool misuse?
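To make the rollout question concrete, here's roughly the shape I mean by "audit-only/shadow mode -> enforcement", with decision logging separate from execution logs. This is a hypothetical Python sketch; all the names (`gate_tool_call`, `evaluate_policy`, the `prod_*` rule) are illustrative, not any real system's API.

```python
# Sketch of a shadow-mode -> enforcement gate for agent tool calls.
# All names here are hypothetical and for illustration only.
import logging
from dataclasses import dataclass
from enum import Enum

log = logging.getLogger("policy")

class Mode(Enum):
    SHADOW = "shadow"    # log the verdict, always allow
    ENFORCE = "enforce"  # block on deny

@dataclass
class Decision:
    allowed: bool
    reason: str

def evaluate_policy(tool: str, args: dict) -> Decision:
    # Placeholder deterministic rule: block writes to prod_* tables.
    if tool == "db.write" and args.get("table", "").startswith("prod_"):
        return Decision(False, "writes to prod_* tables require approval")
    return Decision(True, "no matching deny rule")

def gate_tool_call(tool: str, args: dict, mode: Mode) -> bool:
    """Return True if the tool call may proceed."""
    decision = evaluate_policy(tool, args)
    # Decision log is written regardless of outcome, separately from
    # execution logs, so "why was this allowed?" is answerable later.
    log.info("policy_decision tool=%s allowed=%s reason=%s mode=%s",
             tool, decision.allowed, decision.reason, mode.value)
    if mode is Mode.SHADOW:
        return True  # audit-only: record the verdict but never block
    return decision.allowed
```

Running in SHADOW first lets you diff the decision log against real traffic and catch overly strict policies before flipping to ENFORCE.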

Would love to hear how people are doing this in practice (platform/security/infra teams especially).

2 comments


jcmartinezdev | 25 days ago

I've seen solutions implement authorization in multiple ways. Some still rely on the underlying services that the tools map to, guaranteeing the access token sent to those services is acting on behalf of the user.

Others do checks at the tool level; systems like OpenFGA can help make that easier by centralizing the authorization policies.
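The tool-level pattern looks something like this. Below is a minimal Python sketch with an in-memory stand-in for an OpenFGA-style relationship store (user, relation, object tuples); the names (`RelationStore`, `update_doc`) are illustrative, not OpenFGA's actual SDK.

```python
# Illustrative in-memory stand-in for a centralized relationship store,
# checked inside each tool wrapper. Not a real OpenFGA client.
from typing import Set, Tuple

class RelationStore:
    def __init__(self) -> None:
        self._tuples: Set[Tuple[str, str, str]] = set()

    def write(self, user: str, relation: str, obj: str) -> None:
        self._tuples.add((user, relation, obj))

    def check(self, user: str, relation: str, obj: str) -> bool:
        return (user, relation, obj) in self._tuples

store = RelationStore()
store.write("user:alice", "editor", "doc:roadmap")

def update_doc(acting_user: str, doc_id: str, body: str) -> str:
    # The check lives in the tool wrapper, outside the agent's control;
    # the agent supplies arguments but cannot skip this branch.
    if not store.check(acting_user, "editor", f"doc:{doc_id}"):
        raise PermissionError(f"{acting_user} is not an editor of doc:{doc_id}")
    return f"updated doc:{doc_id}"
```

The upside of centralizing the tuples is that every tool wrapper asks the same question the same way, instead of each one growing its own ad-hoc checks.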

kxbnb | 1 month ago

We're building this at keypost.ai - the enforcement point is a proxy that sits between the agent and MCP servers. Tool calls go through the proxy, get evaluated against policy, and either pass or get blocked before reaching the actual tool.

The key insight: policy evaluation has to happen outside the agent's context. If the agent can reason about or around the policy, it's not really enforcement. So we treat it like a firewall - deterministic rules, no LLM in the decision path.

What we've found works:

- Argument-level rules, not just tool-level ("github.delete_branch is fine, but only for feature/* branches")
- Rate limits that reset on different windows (per-minute for burst, per-day for cost)
- Explicit rule priority for when constraints conflict
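To show what I mean by argument-level rules plus explicit priority, here's a stripped-down Python sketch (rule names and structure are illustrative, not our actual engine). Note there's no LLM anywhere in the decision path; it's deterministic pattern matching:

```python
# Firewall-style policy: argument-level predicates with explicit
# priority, default deny. Illustrative sketch only.
import fnmatch
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    priority: int                       # lower number wins on conflict
    tool: str                           # tool name the rule applies to
    predicate: Callable[[Dict], bool]   # argument-level condition
    allow: bool

RULES: List[Rule] = [
    # delete_branch is fine, but only for feature/* branches...
    Rule(10, "github.delete_branch",
         lambda a: fnmatch.fnmatch(a.get("branch", ""), "feature/*"), True),
    # ...and explicitly denied for everything else.
    Rule(20, "github.delete_branch", lambda a: True, False),
]

def evaluate(tool: str, args: Dict) -> bool:
    """First matching rule by priority decides; no match means deny."""
    for rule in sorted(RULES, key=lambda r: r.priority):
        if rule.tool == tool and rule.predicate(args):
            return rule.allow
    return False  # default deny for unknown tools or arguments
```

Rate limiting layers on top of this the same way: another deterministic check before the call reaches the tool, just keyed on counters per window instead of argument patterns.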

The audit trail piece is critical too. Being able to answer "why was this blocked?" after the fact builds trust with teams rolling this out.

Curious what failure modes people have actually hit - is it more "agent tried something it shouldn't" or "policy was too restrictive and blocked legitimate work"?