
nsbk | 1 month ago

I actually forked the repo and modified the Dockerfile and build/run scripts to target Ampere, and the whole setup runs seamlessly on my 3090. Magpie is running fine using under 3 GB of memory, with ~2 GB for the Nemotron STT model and ~18 GB for Nemotron Nano 30B. Latencies are great and the turn detection works really well!
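For anyone curious what "targeting Ampere" typically means in practice, here's a minimal sketch. The `TORCH_CUDA_ARCH_LIST` / `CMAKE_CUDA_ARCHITECTURES` variables are real PyTorch/CMake knobs for compute capability 8.6 (the 3090's arch); the image tag and the `CUDA_ARCH` build arg are placeholders, not taken from the actual fork:

```shell
# Restrict CUDA kernel compilation to Ampere (sm_86, e.g. RTX 3090).
# These env vars are respected by PyTorch extension builds and CMake
# respectively; adjust to match whatever the fork's Dockerfile reads.
export TORCH_CUDA_ARCH_LIST="8.6"
export CMAKE_CUDA_ARCHITECTURES="86"

# Hypothetical build invocation -- the image name and build arg are
# illustrative, not the fork's actual script.
docker build --build-arg CUDA_ARCH=86 -t voice-agent:ampere .
```

Limiting the arch list like this also shortens build times and shrinks the image, since fatbins for other GPU generations aren't compiled in.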

I'm going to use this setup as the base for a language-learning app for my gf :)


deckar01|1 month ago

I got your fork working (also on a 3090). I was not impressed with the latency or the recommended LLM’s quality.

nsbk|1 month ago

Make sure you're using the nemotron-speech ASR model. I added support for Spanish via the Canary models, but those have roughly 10x the latency: ~160 ms with nemotron-speech vs ~1.5 s with Canary.

For the LLM I'm currently using Mistral-Small-3.2-24B-Instruct instead of Nemotron 3, and it works well for my use case.