
1 points| sidk24 | 22 hours ago


sidk24 | 22 hours ago

Author here. I work on authorization infrastructure (OpenFGA maintainer, CNCF Incubating) and have been building agent security tooling.

The Check Point disclosure this week (CVE-2025-59536, CVE-2026-21852) showed that malicious repo configs could execute shell commands and steal API keys before the trust prompt even appeared. Anthropic patched the specific bugs. But the underlying problem is architectural.

Claude Code gives you two options: approve every mkdir and npm test individually, or pass "--dangerously-skip-permissions" and give the agent unrestricted access to your filesystem, network, and shell. Most devs end up on the second option within a week.
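There is a partial middle ground in Claude Code's settings-based allowlist. A sketch, assuming the documented .claude/settings.json permissions format (the specific rules here are illustrative, not a recommendation):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm test)",
      "Bash(mkdir *)",
      "Read(src/**)"
    ],
    "deny": [
      "Bash(curl *)",
      "Read(.env)"
    ]
  }
}
```

This helps with the approval fatigue, but it is still a flat allowlist per project: no relationship-based scoping, no audit trail.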

We solved this for CI/CD and service accounts decades ago. Declarative policies, scoped permissions, audit trails. None of that exists for AI agents yet.
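For comparison, CI systems already do this declaratively. In GitHub Actions, for example, once a workflow declares a permissions block, its token gets only the listed scopes and everything else defaults to none:

```yaml
# GitHub Actions workflow token scoping: declared scopes only,
# all unlisted scopes (packages, deployments, ...) default to none.
permissions:
  contents: read        # can check out code
  pull-requests: write  # can comment on PRs
```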

The post lays out what a real permission model would look like: declarative policy files per project, relationship-based scoping (so a feature branch agent gets different access than a production hotfix agent), and structured audit logs by default.
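To make the scoping idea concrete, here is a minimal sketch in Python. Everything in it (the POLICY table, AuditLog, check) is a hypothetical illustration of relationship-based scoping with audit logging by default, not OpenFGA's API or the post's actual design:

```python
# Hypothetical sketch: an agent's context (e.g. which kind of branch it is
# working on) determines which (resource, action) pairs it may use, and
# every decision is written to a structured audit log.
import time
from dataclasses import dataclass, field

# Illustrative policy: a feature-branch agent gets broader access
# than a production-hotfix agent.
POLICY = {
    "feature-branch": {("fs", "read"), ("fs", "write"), ("shell", "npm test")},
    "production-hotfix": {("fs", "read"), ("shell", "npm test")},
}


@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, agent, resource, action, allowed):
        # Structured entry: who asked for what, and the decision.
        self.entries.append({
            "ts": time.time(),
            "agent": agent,
            "resource": resource,
            "action": action,
            "allowed": allowed,
        })


def check(agent_context, resource, action, log):
    """Allow the action only if the agent's context grants it; always log."""
    allowed = (resource, action) in POLICY.get(agent_context, set())
    log.record(agent_context, resource, action, allowed)
    return allowed


log = AuditLog()
print(check("feature-branch", "fs", "write", log))     # True
print(check("production-hotfix", "fs", "write", log))  # False
```

The point of the sketch is the shape, not the details: the decision function never consults the user interactively, and the audit trail exists whether or not anyone asked for it.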

Happy to answer questions about the auth patterns or the Check Point findings.