Show HN: GitHub Action that scans AI/LLM endpoints for security vulnerabilities

xsourcesec | 2 months ago

Hi HN! I built a GitHub Action that automatically scans AI/LLM endpoints for security vulnerabilities on every push/PR.

Why? Most teams ship AI features without any security testing. This action catches prompt injection, jailbreaks, and data leakage before they reach production.

How it works:

- Add 5 lines of YAML to your workflow
- Scans run automatically on push/PR
- PRs get blocked if critical vulns are found
- Full report with remediation steps
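For reference, a minimal workflow of that shape might look like the sketch below. The action name (`xsourcesec/ai-security-scan`), its version tag, and its inputs are placeholders I've assumed for illustration, not the project's actual identifiers:

```yaml
# Hypothetical example — action name and inputs are assumptions, not the real package.
name: AI Security Scan
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: xsourcesec/ai-security-scan@v1          # placeholder action reference
        with:
          endpoint: ${{ secrets.AI_ENDPOINT_URL }}    # LLM endpoint to probe
          fail-on: critical                           # block the PR on critical findings
```

Storing the endpoint URL in a repository secret (rather than inline) keeps it out of the workflow file and logs.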

Free tier: 5 scans/month. 650+ attack vectors.

I built this because I do AI red teaming professionally (OSCP+, C-AI/MLPen). Happy to answer questions!
