
Show HN: AI-runtime-guard – Policy enforcement layer for MCP AI agents

2 points | JimmyRacheta | 4 days ago | github.com

I built this after realizing that AI agents with filesystem and shell access can delete files, leak credentials, or execute destructive commands — and there's no enforcement layer stopping them at the execution level.

ai-runtime-guard is an MCP server that sits between your AI agent and your system. It enforces a policy layer before any file or shell action takes effect. No retraining, no prompt engineering, no changes to your agent or workflow.

Your agent can say anything. It can only do what policy allows.

What it does:

- Blocks dangerous commands (rm -rf, dd, shutdown, privilege escalation) before execution
- Gates risky commands behind human approval via a web GUI
- Simulates blast radius for wildcard operations before they run
- Creates automatic backups before destructive actions
- Keeps a full audit trail of everything the agent does
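To illustrate the first bullet, here is a minimal sketch of a pre-execution deny check. The pattern list and the `check_command` function are hypothetical stand-ins, not ai-runtime-guard's actual rule engine; they only show the shape of a policy gate that runs before a shell action takes effect.

```python
import re

# Hypothetical deny patterns for the dangerous-command classes
# mentioned above; the real project ships its own rule set.
DENY_PATTERNS = [
    r"\brm\s+-[a-zA-Z]*(rf|fr)\b",   # recursive force-delete variants
    r"\bdd\b.*\bof=/dev/",           # raw writes to block devices
    r"\b(shutdown|reboot)\b",        # power control
    r"\b(sudo|su)\b",                # privilege escalation
]

def check_command(cmd: str) -> str:
    """Return 'block' if any deny pattern matches, else 'allow'."""
    for pattern in DENY_PATTERNS:
        if re.search(pattern, cmd):
            return "block"
    return "allow"

print(check_command("rm -rf /tmp/build"))  # block
print(check_command("ls -la"))             # allow
```

A real enforcement layer would sit at the MCP tool-call boundary rather than pattern-match strings, and would route "risky but not forbidden" commands to the human-approval queue instead of blocking outright.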

Works with Claude Desktop, Cursor, Codex, and any stdio MCP-compatible client. Default profile is basic protection out of the box — advanced tiers are opt-in.
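For Claude Desktop, stdio MCP servers are registered in `claude_desktop_config.json`. The entry below is a hypothetical sketch (the actual command and arguments come from the project's README, not from this post):

```json
{
  "mcpServers": {
    "ai-runtime-guard": {
      "command": "ai-runtime-guard",
      "args": ["--profile", "default"]
    }
  }
}
```

Other stdio MCP clients (Cursor, Codex) use analogous per-client configuration to launch the server.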

Validated on macOS Apple Silicon. Linux expected to work, formal validation coming in v1.1.

Would love feedback from anyone running AI agents with filesystem access.

1 comment


entrustai | 4 days ago

[deleted]

JimmyRacheta | 3 days ago

Thank you. Development and testing were fascinating: it's amazing to watch the models learn in real time that there's a protection layer. Sometimes they say they expect a command to fail because of it even before it runs. Amazing, but scary at the same time. The tool still has plenty of room for improvement, and I'm looking forward to adding features as quickly as I can.