Show HN: Chirp – Local Windows dictation with ParakeetV3, no executable required
34 points | whamp | 3 months ago | github.com
To scratch that itch, I built Chirp, a Windows dictation app that runs fully locally, uses NVIDIA’s ParakeetV3 model, and is managed end‑to‑end with `uv`. If you can run Python on your machine, you should be able to run Chirp—no additional executables required.
Under the hood, Chirp uses the Parakeet TDT 0.6B v3 ONNX bundle. ParakeetV3 has accuracy in the same ballpark as Whisper‑large‑v3 (multilingual WER ~4.9 vs ~5.0 on the Open ASR Leaderboard), but it's much faster and happy on CPU.
The flow is:

- One‑time setup that downloads and prepares the ONNX model: `uv run python -m chirp.setup`
- A long‑running CLI process: `uv run python -m chirp.main`
- A global hotkey that starts/stops recording and injects text into the active window.
A few details that might be interesting technically:
- Local‑only STT: Everything runs on your machine using ONNX Runtime; by default it uses CPU providers, with optional GPU providers if your environment allows.
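As a sketch of how CPU‑first provider selection with an optional GPU fallback can work (a hypothetical helper for illustration, not Chirp's actual code): ONNX Runtime reports which execution providers are available in the current environment, and a session accepts an ordered preference list.

```python
# Hypothetical sketch of CPU-first provider selection with optional GPU.
# Not Chirp's actual implementation.

def pick_providers(available, allow_gpu=False):
    """Return an ordered provider list: GPU first if allowed and present,
    always falling back to CPU."""
    preferred = []
    if allow_gpu and "CUDAExecutionProvider" in available:
        preferred.append("CUDAExecutionProvider")
    preferred.append("CPUExecutionProvider")
    return preferred

# With onnxruntime installed, this would be passed straight through:
#   import onnxruntime as ort
#   sess = ort.InferenceSession(
#       "parakeet.onnx",
#       providers=pick_providers(ort.get_available_providers(), allow_gpu=True),
#   )
```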
- Config‑driven behavior: A `config.toml` file controls the global hotkey, model choice, quantization (`int8` option), language, ONNX providers, and threading. There’s also a simple `[word_overrides]` map so you can fix tokens that the model consistently mishears.
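To illustrate what such a config might look like (the key names below are illustrative guesses, not necessarily Chirp's exact schema; the `"Jason" = "JSON"` override mirrors a mishearing mentioned elsewhere in this thread):

```toml
# Illustrative example only; check the repo's config.toml for the real keys.
hotkey = "ctrl+alt+space"
model = "parakeet-tdt-0.6b-v3"
quantization = "int8"
language = "en"
providers = ["CPUExecutionProvider"]
threads = 4

[word_overrides]
"Jason" = "JSON"
```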
- Post‑processing pipeline: After recognition, there’s an optional “style guide” step where you can specify prompts like “sentence case” or “prepend: >>” for the final text.
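A minimal sketch of what a pass like this could look like (hypothetical, assuming simple string-level rules; not Chirp's actual pipeline):

```python
# Hypothetical post-processing sketch: word overrides, then a style rule.
# Not Chirp's actual implementation.
import re

def apply_overrides(text, overrides):
    """Replace whole-word tokens the model consistently mishears."""
    for wrong, right in overrides.items():
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text)
    return text

def apply_style(text, style):
    """Apply a named style-guide rule to the final text."""
    if style == "sentence case":
        return text[:1].upper() + text[1:] if text else text
    if style.startswith("prepend:"):
        return style.split(":", 1)[1].strip() + " " + text
    return text

raw = "jason is easy to parse"
fixed = apply_style(apply_overrides(raw, {"jason": "JSON"}), "prepend: >>")
# fixed == ">> JSON is easy to parse"
```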
- No clipboard gymnastics required on Windows: The app types directly into the focused window; there are options for clipboard‑based pasting and cleanup behavior for platforms where that makes more sense.
- Audio feedback: Start/stop sounds (configurable) let you know when the mic is actually recording.
So far I’ve mainly tested this on my own Windows machines with English dictation and CPU‑only setups. There are probably plenty of rough edges (different keyboard layouts, language settings, corporate IT policies, etc.), and I’d love feedback from people who:
- Work in restricted corporate environments and need local dictation.
- Have experience with Parakeet/Whisper or ONNX Runtime and see obvious ways to improve performance or robustness.
- Want specific features (e.g., better multi‑language support, more advanced post‑processing, or integrations with their editor/IDE).
Repo is here: `https://github.com/Whamp/chirp`
If you try it, I’d be very interested in:
- CPU usage and latency on your hardware,
- How well it behaves with your keyboard layout and applications,
- Any weird failure cases or usability annoyances you run into.
Happy to answer questions and dig into technical details in the comments.
lxe|3 months ago
clueless|3 months ago
hamza_q_|3 months ago
https://github.com/FluidInference/FluidAudio
https://github.com/FluidInference/eddy-audio
whamp|3 months ago
Accuracy (average WER): Whisper-large-v3 4.91 vs Parakeet V3 5.05
Speed (RTFx): Whisper-large-v3 126 vs Parakeet V3 2154
~17x faster
https://huggingface.co/spaces/hf-audio/open_asr_leaderboard
feynmanquest|3 months ago
https://github.com/Code-and-Sorts/chirp-ai-note-app
zahlman|3 months ago
> NVIDIA’s ParakeetV3 model
You can't install .exe's, but you can connect to the Internet, download and install approximately two hundred wheels (judging by uv.lock), many of which contain opaque binary blobs, including an AI model?
Why does your organization think this makes any sense?
whamp|3 months ago
hebelehubele|3 months ago
My use case is generating subtitles for YouTube videos (downloaded using yt-dlp). Word-level accuracy is also nice to have, because I also translate them using LLMs and edit the subtitles to better fit the translation.
redrove|3 months ago
[1] https://goodsnooze.gumroad.com/l/macwhisper
whamp|3 months ago
hastamelo|3 months ago
I'm using that to dictate prompts, it struggles with technical terms: JSON becomes Jason, but otherwise is fine
lxe|3 months ago
(this was transcribed using whisper.cpp with no edits. took less than a second on a 5090)
whamp|3 months ago
I loved Whisper, but it was insanely slow on CPU, and even then that was with a smaller Whisper model that isn't as accurate as Parakeet.
My Windows environment locks down the built-in Windows option, so I don't have a way to test it. I've heard it's pretty good if you're allowed to use it, but your inputs don't stay local, which is why I needed to create this project.