top | item 46749810

deckar01 | 1 month ago

I got your fork working (also on a 3090). I was not impressed with the latency or the recommended LLM’s quality.

nsbk | 1 month ago

Make sure you’re using the nemotron-speech ASR model. I added support for Spanish via Canary models, but those have roughly 10x the latency: ~160 ms on nemotron-speech vs ~1.5 s on Canary.
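If you want to check which model you actually have wired up, the latency gap is big enough to measure with a tiny timing harness around whatever transcribe callable the fork exposes (the `transcribe` name here is hypothetical; a no-op stub stands in for a real ASR model):

```python
import time


def measure_latency_ms(transcribe, audio, runs=5):
    """Return the median end-to-end latency in milliseconds of a
    transcribe(audio) callable over several runs."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        transcribe(audio)  # real code would capture the transcript here
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]


if __name__ == "__main__":
    # Stub standing in for a real ASR call: pretend ~10 ms of inference.
    stub = lambda audio: time.sleep(0.01)
    print(f"median latency: {measure_latency_ms(stub, b'')::.0f} ms"
          .replace("::", ":"))
```

At ~160 ms vs ~1.5 s the median over even five runs separates the two models unambiguously.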

For the LLM I’m currently using Mistral-Small-3.2-24B-Instruct instead of Nemotron 3, and it works well for my use case.
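Swapping the LLM is mostly a matter of pointing your inference server at a different checkpoint. A sketch assuming a vLLM-based, OpenAI-compatible setup (the serving stack and the exact Hugging Face repo id are assumptions, not something the fork prescribes):

```shell
# Serve Mistral-Small-3.2-24B-Instruct behind an OpenAI-compatible API.
# The repo id below is an assumption; check the exact name on Hugging Face.
vllm serve mistralai/Mistral-Small-3.2-24B-Instruct-2506 \
  --max-model-len 8192 \
  --port 8000
```

Any client already speaking the OpenAI chat API then only needs its base URL and model name changed.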