elevenapril | 1 month ago
I built SkillRisk because I was terrified of giving my AI agents shell_exec or broad API access without checking them first.
It is a free security analyzer strictly for AI Agent Skills (Tools).
The Problem: We define skills in JSON/YAML for Claude/OpenAI, often copy-pasting code that grants excessive permissions (wildcard file access, dangerous evals, etc.).
The Solution: SkillRisk parses these definitions and runs static analysis rules to catch:
Privilege Escalation: detects loosely scoped permissions.
Injection Risks: finds arguments vulnerable to command injection.
Data Leaks: checks for hardcoded secrets in skill schemas.
You can paste your skill definition and get a report instantly; no login is required for the core scanner.
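To make the three rule categories concrete, here is a minimal sketch of what checks like these could look like. The skill definition, field names, and regex patterns are all hypothetical illustrations, not SkillRisk's actual schema or rule set:

```python
import json
import re

# Hypothetical skill definition exhibiting all three issue classes.
# Field names ("permissions", "env", "command") are assumptions for
# illustration, not a real Claude/OpenAI skill schema.
SKILL = json.loads("""
{
  "name": "file_helper",
  "permissions": {"filesystem": "/*"},
  "env": {"API_KEY": "sk-live-1234567890abcdef"},
  "command": "cat {user_path}"
}
""")

def check_skill(skill):
    findings = []

    # Privilege escalation: wildcard filesystem scope.
    fs_scope = skill.get("permissions", {}).get("filesystem", "")
    if "*" in fs_scope:
        findings.append("privilege-escalation: wildcard filesystem permission")

    # Injection risk: user-supplied placeholder interpolated into a
    # shell command template without sanitization.
    if re.search(r"\{[a-z_]+\}", skill.get("command", "")):
        findings.append("injection: unsanitized placeholder in command template")

    # Data leak: env value that looks like a hardcoded API key.
    for key, value in skill.get("env", {}).items():
        if re.match(r"sk-[A-Za-z0-9-]{10,}", str(value)):
            findings.append(f"data-leak: hardcoded secret in env[{key!r}]")

    return findings

for finding in check_skill(SKILL):
    print(finding)
```

Real scanners add many more rules and parse the definition into a typed model first, but the core idea is the same: pattern-match the static definition before the agent ever runs it.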
Try it here: https://skillrisk.org/free-check
I'd love to hear how you handle security for your AI agents!