top | item 38307498

erostrate | 2 years ago

The main problem with Alexa / Google Assistant / Siri is that the tech was not ready when they launched. We didn't have models that could understand non-trivial user requests, generate non-trivial actions or keep track of context properly. Now we do.

Amazon, Apple, and Google are all working on incorporating LLMs, but why is it taking so long? Why are these assistants still so bad? ChatGPT has been available for a year, and the GPT-3 API for three years. I suspect part of it is legacy tech and legacy researchers from the pre-LLM era.

shmatt | 2 years ago

How often would ChatGPT hit the exact API you expect, with the exact request you need, in one shot? People I know still very often try five prompts before they get what they want from an LLM. That doesn't work in a Siri world (or it's just as frustrating).
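To make the point concrete: a chat UI lets the human retry, but an assistant has to validate and retry internally before it acts. Here's a minimal sketch of that loop. `call_llm` and the command schema are hypothetical stand-ins, not any real assistant's API; the point is the validate-then-reprompt structure.

```python
import json

# Hypothetical command schema the assistant expects from the model.
EXPECTED_KEYS = {"intent", "device", "action"}

def parse_command(raw: str):
    """Return the parsed command dict, or None if it fails validation."""
    try:
        cmd = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(cmd, dict) or set(cmd) != EXPECTED_KEYS:
        return None
    return cmd

def robust_request(call_llm, prompt: str, max_attempts: int = 5):
    """Re-prompt until the model emits a well-formed command."""
    for attempt in range(1, max_attempts + 1):
        cmd = parse_command(call_llm(prompt))
        if cmd is not None:
            return cmd, attempt
    raise ValueError("no valid command after %d attempts" % max_attempts)

# Simulated model that only produces a valid command on the third try:
replies = iter(['oops', '{"intent": 1}',
                '{"intent": "lights", "device": "kitchen", "action": "on"}'])
cmd, attempts = robust_request(lambda p: next(replies),
                               "turn on the kitchen lights")
# attempts == 3: two silent retries the user never sees in a chat UI,
# but which add latency a voice assistant can't hide.
```

Every retry is latency the user sits through, which is exactly why the hit rate matters more for Siri than for a chat window.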

mamp | 2 years ago

With ChatGPT, some of the limitations of the tech are handled by the user, e.g. starting a new chat when you want to discuss a new topic. An assistant has to detect changes in user context somehow. I also think it would be harder to know what to inject into the prompt, since conversations are more like context-based RAG than topic-based (embedding-based) retrieval.
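One plausible shape for that context-change detection: compare the new utterance against the last few turns and reset when similarity drops. This is only a sketch; the bag-of-words embedding below is a toy stand-in for a real sentence embedding, and the threshold is arbitrary, but the structure of the test is the same.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy word-count "embedding"; a real system would use a
    # learned sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_topic_change(history: list, utterance: str,
                    threshold: float = 0.2) -> bool:
    """True if the new utterance looks unrelated to recent turns."""
    if not history:
        return False
    recent = embed(" ".join(history[-3:]))  # last few turns as context
    return cosine(recent, embed(utterance)) < threshold

history = ["set a timer for ten minutes", "cancel the timer"]
is_topic_change(history, "add milk to the timer list")  # → False
is_topic_change(history, "play some jazz music")        # → True
```

The user does this job manually in ChatGPT by opening a new chat; an assistant has to get the threshold right silently, and a false positive throws away context the user still needed.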

Then you have all the usual generative issues: hallucinations, alignment, staying within guardrails, no repeatable testing, drift. The potential for errors at that scale is pretty staggering.

Sakos | 2 years ago

No, the main problem is that these projects were grossly mismanaged and never had a concrete purpose. It would have been possible to build something incredibly useful without LLMs; I just don't know why they didn't. Though I feel the top comment answers that question quite clearly: these assistants were never developed to be useful.