Show HN: Compliant-LLM: Audit AI Agents for Compliance with NIST AI RMF
11 points| kaushik92 | 9 months ago |github.com
Infosec and compliance teams are now responsible for tracking security and compliance risks of a growing number of AI agents across external and internal apps and third-party vendors.
compliant-llm gives you a way to:
- Define and run comprehensive red-teaming tests for AI agents - Maps test outcomes to compliance frameworks like NIST AI RMF - Generate detailed audit logs and documentation - Integrate with Azure, OpenAI, Anthropic, or wherever you host your models - With an open-source, self-hosted solution
Install and launch the red-teaming dashboard locally:
pip install compliant-llm
compliant-llm dashboard
This opens an interactive UI for running AI compliance checks and analyzing results.We’re at v0.1, and would love your feedback. Tell us about the compliance or AI risk issues you’re facing, and we’ll prioritize what matters most.
aavci|9 months ago
praveenkumarnew|9 months ago
nikhil896|9 months ago
andrewski77|9 months ago