sburud|3 months ago
That’s cool! Slight fear of replicating the Dropbox comment here, but all you really need to do is run whisper (or some other speech2text), then once the user stops talking jam the transcript through an LLM to force it into JSON or some other sensible structure.
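A minimal sketch of the pipeline described above. The transcription step is assumed to come from some speech2text model (e.g. whisper); here only the structuring step is shown, with a hypothetical prompt and a small helper that pulls JSON out of a possibly chatty LLM reply. The key names and prompt wording are illustrative, not from the original post.

```python
import json

# Hypothetical prompt that forces the LLM to emit a fixed JSON shape.
STRUCTURING_PROMPT = (
    'Convert the following spoken request into JSON with keys '
    '"action" and "item". Reply with JSON only.\n\n'
    'Transcript: {transcript}'
)

def build_prompt(transcript: str) -> str:
    """Wrap the speech2text transcript in the structuring prompt."""
    return STRUCTURING_PROMPT.format(transcript=transcript)

def extract_json(llm_reply: str) -> dict:
    """Pull the first JSON object out of an LLM reply, even if it
    is surrounded by conversational filler."""
    start = llm_reply.index("{")
    end = llm_reply.rindex("}") + 1
    return json.loads(llm_reply[start:end])

# Example: a reply the model might produce for "add milk to my list".
reply = 'Sure! {"action": "add", "item": "milk"}'
print(extract_json(reply))  # {'action': 'add', 'item': 'milk'}
```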
raybb|3 months ago
However, it's still wild to me how fast and responsive it is. I can talk for 10 seconds and then in ~500ms I see the updates. Perhaps it doesn't even transcribe, and instead feeds the audio to a multimodal LLM along with whatever tasks it already knows about? Or maybe it's transcribing live as you talk, and when you stop it sends the transcript to the LLM.
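The second hypothesis (transcribe live, send on pause) can be sketched as a simple end-of-speech detector: buffer audio while the user talks, and fire once the last few frames fall below an energy threshold. The threshold and frame count below are illustrative values, not anything the product is known to use.

```python
def speech_ended(frame_energies, threshold=0.01, silent_frames=5):
    """Return True once the most recent `silent_frames` frames all
    have energy below `threshold`, i.e. the user has stopped talking
    and the buffered transcript can be sent to the LLM."""
    if len(frame_energies) < silent_frames:
        return False
    return all(e < threshold for e in frame_energies[-silent_frames:])

# Loud speech followed by a run of near-silent frames.
energies = [0.3, 0.5, 0.4, 0.005, 0.004, 0.003, 0.002, 0.001]
print(speech_ended(energies))  # True
```

In this design the ~500ms perceived latency would be dominated by the LLM call itself, since transcription has already happened incrementally during speech.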
Anyone have a sense of what model they might be using?
makingstuffs|3 months ago
I want to say 300ms, which would be consistent with your ~500ms observation
SteveMorin|3 months ago
LLM to types and done