danenania | 2 months ago
I'm an engineer at Promptfoo (open source evals and red teaming for AI). We're launching a tool that scans GitHub PRs for common LLM-related vulnerabilities. This post goes into detail on how it was built and the kinds of vulnerabilities that LLM apps are most prone to.
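To give a rough idea of the shape of such a scanner, here's a minimal hedged sketch of one kind of check: walking the added lines of a unified PR diff and flagging patterns that often signal LLM vulnerabilities (user input interpolated into a prompt, `eval` on model output). The pattern names and heuristics here are purely illustrative, not Promptfoo's actual rules.

```python
import re

# Illustrative heuristics only -- a real scanner would use far richer
# analysis (AST parsing, taint tracking, model-assisted review).
INJECTION_PATTERNS = [
    (re.compile(r'f".*\{request\.'), "user input interpolated into prompt"),
    (re.compile(r"eval\("), "eval on model output"),
]

def scan_diff(diff: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for added lines in a unified diff."""
    findings = []
    for i, line in enumerate(diff.splitlines(), 1):
        if not line.startswith("+"):
            continue  # only inspect lines the PR adds
        for pattern, message in INJECTION_PATTERNS:
            if pattern.search(line):
                findings.append((i, message))
    return findings
```

Running it on a toy diff, `scan_diff("+result = eval(model_output)")` would flag line 1; lines the PR didn't add are skipped entirely.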
It includes a few real CVEs in open source projects that we reproduced as PRs so we could test the scanner.
I'd love to hear your thoughts.