
Rust implementation of Mistral's Voxtral Mini 4B Realtime runs in your browser

401 points | Curiositry | 20 days ago | github.com

69 comments


HorizonXP|20 days ago

If folks are interested, @antirez has open-sourced a C implementation of Voxtral Mini 4B here: https://github.com/antirez/voxtral.c

I have my own fork here: https://github.com/HorizonXP/voxtral.c where I’m working on a CUDA implementation, plus some other niceties. It’s working quite well so far, but I haven’t got it to match Mistral AI’s API endpoint speed just yet.

kingreflex|20 days ago

Hey,

How does someone get started with doing things like this (writing inference code, CUDA, etc.)? Any guidance is appreciated. I understand one doesn't just directly write these things and that it would require some kind of reading. Would be great to receive some pointers.

simonw|20 days ago

I tried the demo and it looks like you have to click Mic, then record your audio, then click "Stop and transcribe" in order to see the result.

Is it possible to rig this up so it really is realtime, displaying the transcription within a second or two of the user saying something out loud?

The Hugging Face server-side demo at https://huggingface.co/spaces/mistralai/Voxtral-Mini-Realtim... manages that, but it's using a much larger (~8.5GB) server-side model running on GPUs.

refulgentis|20 days ago

It's not fast enough to be realtime, though with a more advanced UI and a ring buffer you could have it work as you describe. (e.g. I do this with Whisper in Flutter, and also run GGUF inference in llama.cpp via Dart)

This isn't even close to realtime on M4 Max. Whisper's ~realtime on any device post-2022 with an ONNX implementation. The extra inference cost isn't worth the WER decrease on consumer hardware, or at least, wouldn't be worth the time implementing.
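
A minimal sketch of that ring-buffer idea in Rust, for anyone curious: keep the last few seconds of microphone audio in a fixed-size buffer and re-run the model on the trailing window every second or so, overwriting the displayed hypothesis each time. The mic source and transcribe function here are hypothetical stand-ins, not anything from the repo.

    // Sketch only: keep the last WINDOW_SECS of mic audio and re-transcribe
    // the trailing window every HOP_SECS, overwriting the shown hypothesis.
    use std::collections::VecDeque;

    const SAMPLE_RATE: usize = 16_000;
    const WINDOW_SECS: usize = 8; // trailing audio kept in the buffer
    const HOP_SECS: usize = 1;    // how often the model is re-run

    struct RingBuffer {
        samples: VecDeque<f32>,
        capacity: usize,
    }

    impl RingBuffer {
        fn new(capacity: usize) -> Self {
            Self { samples: VecDeque::with_capacity(capacity), capacity }
        }

        // Append new mic samples, dropping the oldest once the buffer is full.
        fn push(&mut self, chunk: &[f32]) {
            for &s in chunk {
                if self.samples.len() == self.capacity {
                    self.samples.pop_front();
                }
                self.samples.push_back(s);
            }
        }

        fn window(&self) -> Vec<f32> {
            self.samples.iter().copied().collect()
        }
    }

    // Hypothetical stand-ins for the mic callback and the STT backend.
    fn mic_chunks() -> impl Iterator<Item = Vec<f32>> {
        std::iter::empty()
    }
    fn transcribe(_audio: &[f32]) -> String {
        String::new()
    }

    fn main() {
        let mut ring = RingBuffer::new(SAMPLE_RATE * WINDOW_SECS);
        let mut pending = 0usize;
        for chunk in mic_chunks() {
            pending += chunk.len();
            ring.push(&chunk);
            if pending >= SAMPLE_RATE * HOP_SECS {
                pending = 0;
                let text = transcribe(&ring.window());
                print!("\r{text}"); // overwrite the running hypothesis
            }
        }
    }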

adefa|18 days ago

Hello, I pushed up and merged a PR that greatly improves performance on CUDA, Metal, and in WASM.

Depending on your hardware, the model is definitely real time (able to transcribe audio faster than the length of the audio).
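
For reference, "real time" here is usually expressed as a real-time factor: processing time divided by audio duration, where anything under 1.0 means transcription keeps up with the audio. A trivial illustration (not from the repo):

    // Real-time factor: processing time divided by audio duration.
    // RTF < 1.0 means the model transcribes faster than the audio plays.
    fn rtf(processing_secs: f64, audio_secs: f64) -> f64 {
        processing_secs / audio_secs
    }

    fn main() {
        // e.g. 10 s of audio transcribed in 4 s gives RTF 0.4
        println!("RTF = {:.2}", rtf(4.0, 10.0));
    }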

mentalgear|20 days ago

Kudos, this is where it's at: open models running on-premise. Preferred by users and businesses. Glad Mistral's got that figured out.

another_twist|20 days ago

Mistral can really end up having its RedHat moment. I think open models will only get more interesting from here.

Jayakumark|20 days ago

Awesome work! Would be good to have it work with handy.computer. Also, are there plans to support streaming?

JLO64|20 days ago

Just tried out Handy. This is a much better and more lightweight UI than the previous solutions I've tried out! I know it wasn't your intention, but thank you for the recommendation!

That said, I now agree with your original statement and really want Voxtral support...

sipjca|20 days ago

I'm looking into porting this into transcribe-rs so handy can use it.

The first cut will probably not be a streaming implementation

zaptheimpaler|20 days ago

I don't know anything about these models, but I've been trying Nvidia's Parakeet and it works great. For a model like this, where the full version is 9 GB, do you have to keep it loaded into GPU memory at all times for it to really work in realtime? Or what's the delay like to load all the weights each time you want to use it?

d4rkp4ttern|20 days ago

Same here. I haven’t found an ASR/STT/transcription setup that beats Parakeet V3 on the speed/accuracy tradeoff spectrum: transcription is extremely fast (near instant for a couple sentences, 1-3 seconds for long ramblings), and the slight accuracy drop relative to heavier/slower models is immaterial for the use case of talking to AIs that can “read between the lines” (terminal coding agents etc).

I use Parakeet V3 in the excellent Handy [1] open source app. I tried incorporating the C-language implementation mentioned by others into Handy, but it was significantly slower. Speed is absolutely critical for good UX in STT.

[1] https://github.com/cjpais/Handy

1dom|20 days ago

Personally I run an ollama server. Models load pretty quickly.

There's a distinction between tokens per second and time to first token.

Delays come for me when I have to load a new model, or if I'm swapping in a particularly large context.

Most of the time, since the model is already loaded and I'm starting with a small context that builds over time, tokens per second is the biggest factor.

It's worth noting I don't do much fancy stuff beyond a tiny bit of agent work; I mainly use qwen-coder 30a3b or qwen2.5 code instruct/base 7b.

I'm finding more complex agent setups, where multiple agents are used, can really slow things down if they're swapping large contexts. ik_llama has prompt caching, which helps speed this up when swapping between agent contexts, up to a point.

tldr: loading weights each time isn't much of a problem, unless you're having to switch between models and contexts a lot, which modern agent stuff is starting to require.
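
To make the tokens-per-second vs. time-to-first-token distinction above concrete, here's a toy measurement loop; the token stream is a hypothetical stand-in for whatever client you actually use:

    use std::time::Instant;

    fn main() {
        // Hypothetical token stream; in practice this comes from the server/model.
        let stream = vec!["Hello".to_string(), ",".to_string(), " world".to_string()];

        let start = Instant::now();
        let mut first_token_at = None;
        let mut count = 0usize;

        for _tok in stream {
            if first_token_at.is_none() {
                // Time to first token: dominated by model load + prompt processing.
                first_token_at = Some(start.elapsed());
            }
            count += 1;
        }

        let total = start.elapsed().as_secs_f64();
        if let Some(ttft) = first_token_at {
            println!("time to first token: {:.3}s", ttft.as_secs_f64());
        }
        // Tokens per second reflects steady-state decode speed once generation starts.
        println!("tokens/sec: {:.1}", count as f64 / total.max(1e-9));
    }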

Retr0id|20 days ago

hm, seems broken on my machine (Firefox, Asahi Linux, M1 Pro). I said hello into the mic, and it churned for a minute or so before giving me:

panorama panorama panorama panorama panorama panorama panorama panorama� molest rist moundothe exh� Invothe molest Yan artist��������� Yan Yan Yan Yan Yanothe Yan Yan Yan Yan Yan Yan Yan

adefa|18 days ago

Please try again. The model weights are unchanged, but the inference code is improved.

arkensaw|20 days ago

Look, I think it's great that it runs in the browser and all, but I don't want to live in a world where it's normal for a website to download 2.5 GB in the background to run something.

michaelbuckbee|20 days ago

I recently dug into this as I was trying to benchmark the possibility of using Gemini Nano (Chrome's built in AI model) vs a server side solution for a sideproject.

Nano's stored in localstorage with shared access across sites (because Google), so users only need to download it once. Which I don't think is the case with Mistral, etc.

There are some other production stats around adoption, availability, and performance that were interesting as well:

https://sendcheckit.com/blog/ai-powered-subject-line-alterna...

freakynit|20 days ago

You have already gotten used to loading multiple megabytes of data just to display a static landing page. You'll get used to this as well... just give it some time :-D

BHSPitMonkey|20 days ago

It's obviously not something you'd want to happen _passively_ when visiting a web page, but if the alternative is installing an executable / using a package manager / etc., why not? At least the browser is a more secure, sandboxed environment for running untrusted code than most people's native OS.

radarsat1|20 days ago

It's cool but do I really want a single browser tab downloading 2.5 GB of data and then just leaving it to be ephemerally deleted? I know the internet is fast now and disk space is cheap but I have trouble bringing myself around to this way of doing things. It feels so inefficient. I do like the idea of client-side compute, but I feel like a model (or anything) this big belongs on the server.

tyushk|20 days ago

I don't think local inference in the browser, as it stands, will take off, simply because of the lead time of downloading the model, but a new web API for LLMs could change that. Some standard API to communicate with the user's preferred model, abstracting over local inference (like what Chrome does with Gemini Nano (?)) and remote inference (LM Studio or calling out to a provider). This way, every site that wants a language model just has to ask the browser for it, and they'd share weights on disk across sites.
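
One way to picture that abstraction, sketched in Rust rather than as an actual browser API; all names here are hypothetical and nothing like this exists today:

    // The page asks for "the user's preferred model" and doesn't care whether
    // inference runs locally (shared on-disk weights) or against a remote endpoint.
    trait LanguageModel {
        fn generate(&self, prompt: &str) -> Result<String, String>;
    }

    struct LocalModel;                       // e.g. weights shared across sites
    struct RemoteModel { endpoint: String }  // e.g. LM Studio or a hosted provider

    impl LanguageModel for LocalModel {
        fn generate(&self, prompt: &str) -> Result<String, String> {
            Ok(format!("[local inference for: {prompt}]"))
        }
    }

    impl LanguageModel for RemoteModel {
        fn generate(&self, prompt: &str) -> Result<String, String> {
            Ok(format!("[would call {} with: {prompt}]", self.endpoint))
        }
    }

    // The "browser" hands back whichever backend the user configured.
    fn preferred_model(use_local: bool) -> Box<dyn LanguageModel> {
        if use_local {
            Box::new(LocalModel)
        } else {
            Box::new(RemoteModel { endpoint: "http://localhost:1234/v1".to_string() })
        }
    }

    fn main() {
        let model = preferred_model(true);
        println!("{}", model.generate("summarise this page").unwrap());
    }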

xandrius|20 days ago

There will always be someone unhappy with literally any aspect of something new. If 2.5 GB for a local LLM is problematic in 2026, I really cannot think what would count as acceptable anymore.

We went from impossible, to centralised, to local in a couple of years, and the "cost" is 2.5 GB of hard drive space.

mikebelanger|20 days ago

Neat, and neat to see the burn framework getting used. I tried this on latest Chromium, but my system froze until my OS killed Chromium. My VPN connection died right after downloading the model too. (it doesn't have a bandwidth cap either, so I'm not sure what's happening)

adefa|18 days ago

This should be fixed now. There were a number of bugs that kept the model from working correctly in different environments. Please let me know if you test again. :)

scronkfinkle|20 days ago

Nice!

I'm interested in your cubecl-wgpu patches. I've been struggling to get lower-than-FP32 safetensor models working on burn. Did you write the patches to cubecl-wgpu to get around this restriction, to add support for GGUF files, or both?

I've been working on something similar, but for whisper and as a library for other projects: https://github.com/Scronkfinkle/quiet-crab

adefa|18 days ago

The cubecl-wgpu patches were only needed to reduce the number of kernel workgroups; otherwise I was getting errors in WASM.

boutell|20 days ago

This stuff is cool. So is whisper. But I keep hoping for something that can run close to real time on a Raspberry Pi Zero 2 with a reasonable English vocabulary.

Right now everything is either archaic or requires too much RAM. CPU isn't as big of an issue as you'd think, because the Pi Zero 2 is comparable to a Pi 3.

antonvs|18 days ago

Why be so unambitious? I want a model that can run on a circa-1977 Apple II with a 6502 processor and 4 KB of RAM.

rglover|20 days ago

Naive, semi-related question: what is the state of stuff like Mistral when compared to OpenAI, Anthropic, etc?

Could I reasonably use this to get LLM-capability privately on a machine (and get decent output), or is it still in the "yeah it does the thing, but not as well as the commercial stuff" category?

ubixar|20 days ago

For those exploring browser STT, this sits in an interesting space between Whisper.wasm and the Deepgram KC client. The 2.5GB quantized footprint is notably smaller than most Whisper variants — any thoughts on accuracy tradeoffs compared to Whisper base/small?

fusslo|20 days ago

I wonder if there's a metric or measure of how much jargon goes into a README or other document.

Reading the first three sentences of this README: out of 43 words, I would consider 15 terms to be jargon incomprehensible to the layman.

sofixa|20 days ago

> Streaming speech recognition running natively and in the browser. A pure Rust implementation of Mistral's Voxtral Mini 4B Realtime model using the Burn ML framework.

> The Q4 GGUF quantized path (2.5 GB) runs entirely client-side in a browser tab via WASM + WebGPU. Try it live.

Excluding names (Mistral's Voxtral Mini 4B Realtime), you have 1 pretty normal sentence introducing what this is (Streaming speech recognition running natively and in the browser) and the rest is technical details.

It's like complaining that a car description would contain engine size and output in the third sentence.

explosion-s|20 days ago

Just curious, is there any smaller version of this model capable of running on edge devices? Even my Mac M1 with 8 GB of RAM couldn't run the C version.

sofixa|20 days ago

https://kyutai.org/stt has an implementation for MLX and mentions iPhones, so it should work on edge devices, Macs and iPhones.

adefa|18 days ago

I'm curious to see whether you're able to run the model from the CLI now.

TZubiri|20 days ago

Impressive, but to state the obvious, this is not yet practical for browser use due to its (at least) 2.5 GB memory footprint.

jszymborski|20 days ago

Man, I'd love to fine-tune this, but alas the huggingface implementation isn't out as far as I can tell.

sergiotapia|20 days ago

>init failed: Worker error: Uncaught RuntimeError: unreachable

Anything I can do to fix/try it on Brave?

burky|20 days ago

I had to enable the following chrome flag for this to load.

chrome://flags/#enable-unsafe-webgpu

mp3geek|20 days ago

Does disabling shields help?

another_twist|20 days ago

Uggh. I had just started working on this. Congratulations to the author!

misiek08|20 days ago

(no speech detected)

or, when I'm not talking at all, it generates random German sentences.

refulgentis|20 days ago

Notably, this isn't even close to realtime on an M4 Max.

adefa|18 days ago

True :)

After some performance improvements, it is realtime on my DGX Spark with an RTF of 0.416, now getting ~19.5 tokens per second. Check it out and see if it's better for you.

Nathanba|20 days ago

I just tried it. I said "what's up buddy, hey hey stop" and it transcribed this for me: " وطبعا هاي هاي هاي ستوب". No, I'm not in any Arabic or Middle Eastern country. The second test was better; it detected English.

7moose|20 days ago

FWIW, that is the right-ish transliteration into Arabic. It just picked the wrong language to transcribe to lol