sgk284 | 2 months ago
We open-sourced our impl just this week: https://github.com/with-logic/intent
We use Groq with gpt-oss-20b, which gives great results and only adds ~250ms to the processing pipeline.
If you use the mini/flash models from OpenAI or Gemini, expect 2.5-3s of overhead instead.
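For reference, here is a minimal sketch of calling gpt-oss-20b through Groq's OpenAI-compatible chat completions endpoint. This is not the linked repo's code; the model ID, prompt, and env var name are my assumptions, and it uses only the stdlib so there's nothing to install:

```python
# Hedged sketch: one chat completion against Groq's OpenAI-compatible API.
# Endpoint, model ID, and the intent prompt below are assumptions, not
# taken from the with-logic/intent repo.
import json
import os
import urllib.request

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_payload(user_text: str) -> dict:
    """Build a minimal intent-classification request body (illustrative)."""
    return {
        "model": "openai/gpt-oss-20b",  # assumed Groq model ID
        "messages": [
            {"role": "system", "content": "Classify the user's intent in one word."},
            {"role": "user", "content": user_text},
        ],
    }


def classify(user_text: str) -> str:
    """POST the payload and return the model's reply text."""
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(build_payload(user_text)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, swapping in an OpenAI or Gemini model to compare latency is mostly a matter of changing the URL, key, and model string.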