item 47062684

Show HN: Axon – Agentic AI with mandatory user approval and audit logging

1 point | NeuroVexon | 13 days ago | github.com

Hey HN,

I built AXON because I wanted AI agents that can actually do things — but with real security controls.

Every tool call (file ops, web search, shell commands, email, code execution) requires explicit user approval before execution. Parameters and risk level are shown; you approve or deny. Everything is logged.
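The gist of that approval flow can be sketched in a few lines of Python (names like `ToolCall` and `approval_gate` are hypothetical, not AXON's actual API):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolCall:
    tool: str               # e.g. "shell", "email", "web_search"
    params: dict[str, Any]  # shown to the user before execution
    risk: str               # e.g. "low", "medium", "high"

def approval_gate(call: ToolCall,
                  ask_user: Callable[[ToolCall], bool],
                  audit_log: list[dict]) -> bool:
    """Require explicit approval for a tool call and log the decision."""
    approved = ask_user(call)  # user sees params + risk, approves or denies
    audit_log.append({
        "tool": call.tool,
        "params": call.params,
        "risk": call.risk,
        "approved": approved,
    })
    return approved

# Example: an operator policy that denies high-risk calls.
log: list[dict] = []
call = ToolCall("shell", {"cmd": "rm -rf /tmp/scratch"}, "high")
ok = approval_gate(call, ask_user=lambda c: c.risk != "high", audit_log=log)
# ok is False, and the denial is still recorded in the audit log.
```

The key property is that the denial itself is written to the audit trail, so reviewers see attempted calls, not just executed ones.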

Key features:

- Multi-agent system (different roles, models, permissions per agent)
- Multi-LLM: Ollama (fully local), Claude, OpenAI, Gemini, Groq, OpenRouter
- 100% on-premise, no cloud needed, GDPR-compliant
- Docker-based code sandbox with network isolation
- MCP server (works as tool provider for Claude Desktop, Cursor)
- Encrypted API key storage (Fernet)
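On the last point: Fernet (from the `cryptography` package) provides authenticated symmetric encryption, so provider keys are ciphertext at rest. A minimal sketch of that pattern (illustrative only, not AXON's actual code; in practice the master key would live outside the store, e.g. in an env var or OS keyring):

```python
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()  # kept separate from the encrypted store
fernet = Fernet(master_key)

def store_api_key(store: dict[str, bytes], provider: str, api_key: str) -> None:
    """Encrypt an API key before persisting it."""
    store[provider] = fernet.encrypt(api_key.encode())

def load_api_key(store: dict[str, bytes], provider: str) -> str:
    """Decrypt an API key on demand; raises InvalidToken if tampered with."""
    return fernet.decrypt(store[provider]).decode()

store: dict[str, bytes] = {}
store_api_key(store, "openai", "sk-example-123")  # hypothetical key
assert store["openai"] != b"sk-example-123"       # ciphertext at rest
assert load_api_key(store, "openai") == "sk-example-123"
```

Because Fernet tokens are authenticated (HMAC-SHA256 over the ciphertext), tampering with the stored blob raises an error on decrypt rather than returning garbage.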

Stack: Python 3.11+, FastAPI, React 18, TypeScript, Docker

Apache 2.0 license. Made in Germany.

Happy to answer questions about the architecture or security model.

2 comments


YaraDori | 8 days ago

I love the “mandatory approval + audit log” stance — feels like the only sane way to run agents near real accounts.

Curious how you think about the reliability layer for UI work:

- do you store any per-step evidence (DOM snapshot/screenshot) in the audit trail?
- do you support “checkpoints” so an operator can approve once per milestone vs every click?
- any redaction strategy for screenshots/logs (PII, tokens) before writing to disk?
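(For the last question, one common strategy is a pattern-based scrub pass over log text before it hits disk; a toy sketch with illustrative, far-from-exhaustive patterns:)

```python
import re

# Shapes of common secrets (illustrative only; real redaction needs
# broader coverage and should also handle structured fields, not just text).
PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{8,}"),       # API-key-like tokens
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def redact(text: str) -> str:
    """Replace recognizable secrets with a placeholder before logging."""
    for pat in PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

entry = "calling email tool for alice@example.com with key sk-abc123456789"
print(redact(entry))
# → calling email tool for [REDACTED] with key [REDACTED]
```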

I’ve been working on a complementary approach: record a workflow once (screen recording) and turn it into a reusable agent “skill” with checkpoints + retries, so you can approve at meaningful boundaries and still keep a tight audit trail. Would love to compare notes.

https://skillforge.expert