I like that you’ve added secret detection and multi-provider support — that’s something most LLM commit tools miss.
Have you benchmarked latency differences between local models (like Ollama) and OpenAI/Anthropic? Would be interesting to see a speed comparison.
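For what it's worth, a comparison like that could be scripted with a small timing harness. This is just a minimal sketch: `fake_provider_call` is a hypothetical stand-in, and you'd swap in the actual Ollama/OpenAI/Anthropic client calls to get real numbers.

```python
import time
import statistics

def benchmark(call, n=5):
    """Time `call` n times; return median and p95 latency in seconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        call()  # one round-trip to the provider
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "median": statistics.median(samples),
        # index of the 95th-percentile sample, clamped to the last element
        "p95": samples[min(len(samples) - 1, int(0.95 * len(samples)))],
    }

# Hypothetical stand-in for a real provider request; replace with
# e.g. an Ollama HTTP call or an OpenAI/Anthropic SDK call.
def fake_provider_call():
    time.sleep(0.01)

stats = benchmark(fake_provider_call, n=5)
print(f"median={stats['median']:.3f}s  p95={stats['p95']:.3f}s")
```

Running that once per provider (same prompt, same diff) would give a fair median/p95 comparison rather than a single noisy sample.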