top | item 46256087


bogtog | 2 months ago

Using voice transcription is nice for fully expressing what you want, so the model doesn't need to make guesses. I'm often voicing 500-word prompts. If you talk in a winding way that would look awkward as text, that's fine; the model will almost certainly be able to tell what you mean. Using voice-to-text is my biggest suggestion for people who want to use AI for programming.

(I'm not a particularly slow typer. I can go 70-90 WPM on a typing test. However, this speed drops quickly when I need to also think about what I'm saying. Typing that fast is also kinda tiring, whereas talking/thinking at 100-120 WPM feels comfortable. In general, I think just this lowered friction makes me much more willing to fully describe what I want)

You can also ask it, "do you have any questions?" I find that saying "if you have any questions, ask me, otherwise go ahead and build this" rarely produces questions for me. However, if I say "Make a plan and ask me any questions you may have" then it usually has a few questions

I've also found a lot of success when I tell Claude Code to emulate some specific piece of code I've previously written, either within the same project or something I've pasted in.

Marsymars|2 months ago

> I'm not a particularly slow typer. I can go 70-90 WPM on a typing test. However, this speed drops quickly when I need to also think about what I'm saying. Typing that fast is also kinda tiring, whereas talking/thinking at 100-120 WPM feels comfortable.

This doesn't feel relatable at all to me. If my writing speed is bottlenecked by thinking about what I'm writing, and my talking speed is significantly faster, that just means I've removed the bottleneck by not thinking about what I'm saying.

eucyclos|2 months ago

It's often better to segregate creative and inhibitive systems even if you need the inhibitive systems to produce a finished work. There's a (probably apocryphal) conversation between George RR Martin and Stephen King that goes something like:

GRRM: How do you write so many books?... Don't you ever spend hours staring at the page, agonizing over which of two words to use, and asking 'am I actually any good at this?'

SK: Of course! But not when I'm writing.

bogtog|2 months ago

That's fair. I sometimes find myself pausing or just talking in circles as I'm deciding what I want. I think when I'm speaking, I feel freer to use less precise/formal descriptions, but the model can still correctly interpret the technical meaning

In either case, different strokes for different folks, and what ultimately matters is whether you get good results. I think the upside is high, so I broadly suggest people try it out

hexaga|2 months ago

Alternatively: some people are just better at / more comfortable thinking in auditory mode than visual mode & vice versa.

In principle I don't see why they should have different amounts of thought. That'd be bounded by how much time it takes to produce the message, I think. Typing permits backtracking via editing, but speaking permits 'semantic backtracking' which isn't equivalent but definitely can do similar things. Language is powerful.

And importantly, to backtrack in visual media I tend to need to re-saccade through the text with physical eye motions, whereas with audio my brain just has an internal buffer I can replay at the speed of thought.

Typed messages might have higher _density_ of thought per token, though how valuable is that really, in LLM contexts? There are diminishing returns on how perfect you can get a prompt.

Also, audio permits a higher bandwidth mode: one can scan and speak at the same time.

mattmanser|2 months ago

It's kind of the point. If you start writing it, you'll start correcting it and moving things around and adding context and fiddling and more and more.

And your 5-minute prompt just turned into a half hour of typing.

With voice you get on with it, and then start iterating, getting Claude to plan with you.

Not been impressed with agentic coding myself so far, but I did notice that using voice works a lot better imo, keeping me focused on getting on with letting the agent do the work.

I've also found it good for stopping me doing the same thing in Slack messages. I ramble my general essay to ChatGPT/Claude, then get them to summarize and rewrite it into a few lines in my own voice. Stops me spending an hour crafting a Slack message, and tends to soften it.

buu700|2 months ago

I prefer writing myself, but I could see the appeal of producing a first draft of a prompt by dumping a verbal stream of consciousness into ChatGPT. That might actually be kind of fun to try while going on a walk or something.

dyauspitr|2 months ago

I don’t feel restricted by my typing speed; speaking is just so much easier and more convenient. The vast majority of my ChatGPT usage is on my phone, and that makes s2t a no-brainer.

cjflog|2 months ago

100% this, I built laboratory.love almost entirely with my voice and (now-outdated) Claude models

My go-to prompt finisher, which I have mapped to a hotkey due to frequent use, is "Before writing any code, first analyze the problem and requirements and identify any ambiguities, contradictions, or issues. Ask me to clarify any questions you have, and then we'll proceed to writing the code"

Applejinx|2 months ago

It's an AI. You might do better by phrasing it, 'Make a plan, and have questions'. There's nobody there, but if it's specifically directed to 'have questions' you might find they are good questions! Why are you asking, if you figure it'd be better to get questions? Just say to have questions, and it will.

It's like a reasoning model. Don't ask, prompt 'and here is where you come up with apropos questions' and you shall have them, possibly even in a useful way.

dominotw|2 months ago

Surprised AI companies are not making this workflow possible, instead of leaving it up to users to figure out how to get voice-to-text into the prompt.

alwillis|2 months ago

> Surprised AI companies are not making this workflow possible, instead of leaving it up to users to figure out how to get voice-to-text into the prompt.

Claude on macOS and iOS has native voice-to-text transcription. I haven't tried it, but since you can access Claude Code from the apps now, I wonder if you can use the Claude app's transcription for input into Claude Code.

dyauspitr|2 months ago

All the mobile apps make this very easy.

johnfn|2 months ago

That's a fun idea. How do you get the transcript into Claude Code (or whatever you use)? What transcription service do you use?

hn_throw2025|2 months ago

I'm not the person you're replying to, but I use Whispering connected to the whisper-large-v3-turbo model on Groq.

It's incredibly cheap and works reliably for me.

I have got it to paste my voice transcriptions into Chrome (Gemini, Claude, ChatGPT) as well as Cursor.

https://github.com/EpicenterHQ/epicenter
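For a sense of what the Whispering + Groq setup is doing under the hood: Groq exposes an OpenAI-compatible transcription endpoint that accepts an audio file and a model name. A minimal stdlib-only sketch (this is an illustration of a direct API call, not the Whispering app's actual code; the `transcribe`/`build_multipart` names and the `GROQ_API_KEY` environment variable are assumptions):

```python
import json
import os
import urllib.request
import uuid

# OpenAI-compatible transcription endpoint on Groq (hedged: check current docs).
GROQ_URL = "https://api.groq.com/openai/v1/audio/transcriptions"

def build_multipart(fields: dict, file_field: str, filename: str, data: bytes):
    """Assemble a multipart/form-data body by hand (the stdlib has no helper)."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f"{value}\r\n"
        )
    parts.append(
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{file_field}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    )
    body = "".join(parts).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def transcribe(path: str) -> str:
    """POST an audio file and return the transcript text."""
    with open(path, "rb") as f:
        audio = f.read()
    body, content_type = build_multipart(
        {"model": "whisper-large-v3-turbo"}, "file", os.path.basename(path), audio
    )
    req = urllib.request.Request(
        GROQ_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": content_type,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]
```

A hotkey tool is essentially wrapping a call like `transcribe("recording.wav")` and pasting the result into the focused text box.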

quinncom|2 months ago

I use Spokenly with local Parakeet 0.6B v3 model + Cerebras gpt-oss-120b for post-processing (cleaning up transcription errors and fixing technical mondegreens, e.g., `no JS` → `Node.js`). Almost imperceptible transcription and processing delay. Trigger transcription with right ⌥ key.
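The mondegreen-fixing step above uses an LLM with full context; a much cruder local approximation of the same idea is a substitution table over known technical mis-hearings. A minimal sketch (the table entries beyond `no JS` → `Node.js` are hypothetical examples, not from Spokenly's pipeline):

```python
import re

# Hypothetical correction table of spoken-word mis-hearings of technical
# terms. An LLM post-processor (like the Cerebras gpt-oss-120b step in the
# comment above) handles these with context; a fixed map is the naive version.
MONDEGREENS = {
    r"\bno js\b": "Node.js",
    r"\bget hub\b": "GitHub",
    r"\bpie thon\b": "Python",
}

def fix_mondegreens(text: str) -> str:
    """Replace known transcription mis-hearings with the intended term."""
    for pattern, replacement in MONDEGREENS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

# fix_mondegreens("install no js and push to get hub")
# → "install Node.js and push to GitHub"
```

The fixed map obviously can't disambiguate ("no JS" might genuinely mean "no JavaScript"), which is why an LLM pass with surrounding context works better.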

hurturue|2 months ago

Your OS might have a built-in dictation feature. Google for that and try it before online services.

thehours|2 months ago

I use Raycast + Whisper Dictation. I don't think there is anything novel about it, but it integrates nicely into my workflow.

My main gripe is when the recording window loses focus, I haven't found a way to bring it back and continue the recorded session. So occasionally I have to start from scratch, which is particularly annoying if it happens during a long-winded brain dump.

primaprashant|2 months ago

I built my own open-source tool to do exactly this, so I can run something like `claude $(hns)` in my terminal and then start speaking; after I'm done, claude receives the transcript and starts working. See the workflow here: https://hns-cli.dev/docs/drive-coding-agents/

bogtog|2 months ago

There are a few apps nowadays for voice transcription. I've used Wispr Flow and Superwhisper, and both seem good. You can map some hotkey (e.g., ctrl + windows) to start recording, then when you press it again to stop, it'll get pasted into whatever text box you have open

Superwhisper offers some AI post-processing of the text (e.g., formatting into bullets or fixing grammar), but this doesn't seem necessary and just makes things a bit slower.

victorbjorklund|2 months ago

I do the same. On Mac I use macwhisper. The transcription does not have to be correct. Lots of times it writes the wrong word when talking about technical stuff but Claude understands which word I mean from context

singhrac|2 months ago

I use VoiceInk (needed some patches to get it to compile but Claude figured it out) and the Parakeet V3 model. It’s really good!

d4rkp4ttern|2 months ago

> if you talk in a winding way …

My regular workflow is to talk (I use VoiceInk for transcription) and then say “tell me what you understood”: this puts your words into a well-structured format, and you can also make sure the CLI agent got it; expressing it explicitly likely also helps it stay on track.

listic|2 months ago

Thanks for the advice! Could you please share how you enabled voice transcription in your setup and what it actually is?

binocarlos|2 months ago

I use https://github.com/braden-w/whispering with an OpenAI api key.

I use a keyboard shortcut to start and stop recording and it will put the transcription into the clipboard so I can paste into any app.

It's a huge productivity boost. OP is correct about not overthinking how coherent you sound; the models are very good at knowing what you mean (Opus 4.5 with Claude Code in my case).

mattmanser|2 months ago

Aquavoice, a YC company, really good. Got it after doing a bit of research on here; there's something for Mac that's supposed to be good too.

If you want local transcription, locally running models aren't quite good enough yet.

They use right-ctrl as their trigger. I've set mine to double tap and then I can talk with long pauses/thinking and it just keeps listening till I tap to finish.

bogtog|2 months ago

I'm using Wispr flow, but I've also tried Superwhisper. Both are fine. I have a convenient hotkey to start/end recording with one hand. Having it just need one hand is nice. I'm using this with the Claude Code vscode extension in Cursor. If you go down this route, the Claude Code instance should be moved into a separate window outside your main editor or else it'll flicker a lot

kapnap|2 months ago

For me, on Mac, VoiceInk has been top notch. Got tired of Superwhisper.

lukax|2 months ago

Spokenly on macOS with Soniox model.

j45|2 months ago

Speech also uses a different part of the brain, and maybe less finger coordination.

journal|2 months ago

Voice transcription is silly when someone can hear you talking to something that isn't exactly human; imagine explaining that you were talking to an AI. Still, when it's more than one sentence, I use voice too.