parliament32|3 days ago
Here we are a few decades later, and we don't see business units using Word's built-in dictation feature to write documents, right? Funny how that tech seems to have barely improved in all that time. And despite dictation being far faster than typing, it's not used all that often because the error rate is still too high for it to be useful: errors in speech-to-text are a fundamentally unsolvable problem (you can only get so far with background-noise filtering, accounting for accents, etc.).
I see the parallel in how LLM hallucinations are a fundamentally unsolvable component of transformer-based models, and I suspect LLM usage in 20 years will be around the level of speech-to-text today: ubiquitous in the background, used here and there to set a timer or talk to a device, but ultimately not useful for any serious work.
prescriptivist|3 days ago
Prior to three weeks ago, I had used speech-to-text to accomplish approximately 0% of the work I've done in my 20 years of coding. In the last three weeks, well over half of the direction that I've given to Claude Code has been done with speech-to-text.
dweinus|3 days ago
LLMs create a new workflow wherever they are employed. Even when they are capable, that new workflow is not always a more desirable or efficient experience.
buzzerbetrayed|3 days ago
In my world AI is already far more influential than text to speech.
People on here act like we don’t know if AI will be useful. And I’m sitting over here puzzled because of how fucking useful it is.
Very strange.
prescriptivist|3 days ago
Yes, it's very strange to read AI threads here because the general tone is so different than, say, at the company I work at, where hundreds of engineers are given enormous monthly token budgets and are being pushed to have the LLMs write as much code as possible. They're not forced to, and no one is reprimanded for not adopting Claude Code or Codex or Cursor. But there's been a strong tonal shift in technology leadership in the last month that basically implies that this is how it is going to be done in the future whether one likes it or not.
As for me, I've been writing all of my code via Claude for a while now, and I don't think I will ever go back to working in an editor writing code the way I did for most of my career. Nor do I want to.
SchemaLoad|3 days ago
Spoken language is very different to written language, which is why for example you can easily tell when an article is transcribing a spoken interview.
jamilton|3 days ago
Similarly, raw LLM/chat interfaces are usually not the best option.
selridge|3 days ago
https://arxiv.org/abs/2510.14928
Was Gemini worse than no tool at all there?
alpaca128|3 days ago
In the context of AI, most people I know tend to mean wrong output in general, not just hallucinations in the literal sense of the word, or things you cannot catch in an automated way.
bojan|3 days ago
Granted, it fixed the problem in the very next prompt.
bogzz|3 days ago
I encounter stuff like this every week, I don't know how you don't. I suppose a well-structured codebase in a statically typed language might not provide as much of a surface for hallucinations to present themselves? But like you say, logical problems of course still occur.
gambiting|3 days ago
I literally just went on Gemini, latest and best model, and asked it "hey, can you give me the best prices for 12TB hard drives available from the British retailer CeX?" and it went "sure, I just checked their live stock and here they are:". Every single one was made up. I pointed it out; it said sorry, I just checked again, here they are, definitely 100% correct now. Again, all of them were made up. This repeated a few times, I accused it of lying, and then it went "you're right, I don't actually have the ability to check, so I just used products and values closest to what they should have in stock".
So yeah, hallucinations are still very much there and still very much feeding people garbage.
Not to mention I'm part of multiple FB groups for car enthusiasts, and the amount of AI misinformation that we have to correct daily is just staggering. I'm not talking political stuff, just people copy-pasting responses from AI that confidently state that feature X exists or works in a certain way, when in reality it has never existed at all.