item 46647938

Show HN: SkillRisk – Free security analyzer for AI agent skills

2 points | elevenapril | 1 month ago | skillrisk.org

8 comments


elevenapril | 1 month ago

Hi HN,

I built SkillRisk because I was terrified of giving my AI agents shell_exec or broad API access without checking them first.

It is a free security analyzer strictly for AI Agent Skills (Tools).

The Problem: We define skills in JSON/YAML for Claude/OpenAI, often copy-pasting code that grants excessive permissions (wildcard file access, dangerous evals, etc.).

The Solution: SkillRisk parses these definitions and runs static analysis rules to catch:

- Privilege Escalation: detects loosely scoped permissions.

- Injection Risks: finds arguments vulnerable to command injection.

- Data Leaks: checks for hardcoded secrets in skill schemas.

You can paste your skill definition and get a report instantly. No login is required; the link below goes straight to the free scanner.
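To make the kinds of rules concrete, here is a minimal sketch of two of the checks described above (wildcard permissions and hardcoded secrets) run against a hypothetical skill definition. The skill JSON, rule names, and secret-prefix patterns are illustrative assumptions, not SkillRisk's actual implementation.

```python
import json
import re

# Hypothetical skill definition showing the issues described above:
# wildcard file access, shell access, and a hardcoded secret.
SKILL = json.dumps({
    "name": "file_helper",
    "permissions": ["fs:read:*", "shell_exec"],
    "parameters": {
        "path": {"type": "string", "description": "File to read"},
        "api_key": {"type": "string", "default": "sk-live-abc123"},
    },
})

def scan_skill(definition: str) -> list[str]:
    """Run a few illustrative static checks over a JSON skill definition."""
    skill = json.loads(definition)
    findings = []

    # Privilege escalation: loosely scoped (wildcard) or shell permissions.
    for perm in skill.get("permissions", []):
        if "*" in perm:
            findings.append(f"wildcard permission: {perm}")
        if perm == "shell_exec":
            findings.append("unrestricted shell access: shell_exec")

    # Data leaks: hardcoded secrets in parameter defaults, matched by
    # common token prefixes (an assumed, illustrative pattern).
    secret_pat = re.compile(r"(sk-|AKIA|ghp_)[A-Za-z0-9-]+")
    for name, spec in skill.get("parameters", {}).items():
        default = spec.get("default", "")
        if isinstance(default, str) and secret_pat.search(default):
            findings.append(f"possible hardcoded secret in parameter '{name}'")

    return findings

for finding in scan_skill(SKILL):
    print("-", finding)
```

A real scanner would also parse YAML, walk nested schemas, and flag arguments that flow into shell commands, but the shape is the same: parse the definition, apply declarative rules, report findings.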

Try it here: https://skillrisk.org/free-check

I'd love to hear how you handle security for your AI agents!

aghilmort | 1 month ago

this is really great

toss in test building skills

macro linter skills

Etc

elevenapril | 1 month ago

Thanks! The 'macro linter' framing is spot on: treating skill definitions with the same rigor as code is exactly the goal. Regarding 'test building': are you envisioning something that auto-generates adversarial inputs (like fuzzing) based on the schema, or more like scaffolding for unit tests to ensure the tool executes correctly? I'd love to dig into that use case.