seyz | 13 days ago

The moat here is local inference. Whisper.cpp with Metal acceleration gives you sub-500ms latency on an M1 with the small model, with no API costs and no privacy concerns. Ship that and you've got something the paid tools can't match. The UI is already solid; the edge is in going fully offline.
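For anyone who wants to try the offline path, a rough sketch of the steps (exact binary name and paths vary by whisper.cpp version; the model name and sample file here are just the ones shipped in the repo):

```shell
# Clone and build whisper.cpp; Metal support is enabled by default
# in recent builds on Apple Silicon.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
cmake -B build && cmake --build build -j

# Fetch the "small" ggml model once; everything after this is offline.
./models/download-ggml-model.sh small

# Transcribe locally -- no network calls, no per-request cost.
# (Older builds name the binary ./main instead of whisper-cli.)
./build/bin/whisper-cli -m models/ggml-small.bin -f samples/jfk.wav
```

That last command is where the latency claim gets tested: the small model on an M1 should return a short clip well under a second after warmup.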

No comments yet.