
How Siri, Alexa and Google Assistant Lost the A.I. Race

20 points | fortran77 | 3 years ago | nytimes.com

48 comments

[+] khazhoux|3 years ago|reply
Fundamental misunderstanding of what chatgpt does versus virtual assistants, coupled with a bizarre declaration that ChatGPT has “won” even though it has yet to establish itself in any actual daily-use scenarios.
[+] Idiot_in_Vain|3 years ago|reply
ChatGPT demonstrates huge potential. It will take time for companies to integrate it into their products and for it to start impacting the everyday lives of large numbers of people.
[+] redox99|3 years ago|reply
A lot of people (including me) use ChatGPT daily.
[+] corobo|3 years ago|reply
I ask it silly things like "is it defence or defense" all the time, multiple times a day! Love my `ai` cli script, haha. Only thing missing is speech recognition and tts output for the basic knowledge queries.

Also looking into writing a prompt that turns "timer one hour 30" into set_timer(5400) and other functions but not got the bandwidth just yet.

I imagine I'll ask GPT for an intent-in-json-format and go from there, I'll need to do the programming for things like timers and todos. I just need it to convert natural language into function names and parameters really.

I now only use Siri to set a timer when I put a load of washing in, and it manages to get that wrong sometimes. Always appreciate my one hour thirty timer being interpreted as "set an alarm named Timer for 01:30am" and making me jump when I'm trying to sleep haha, daft robot. It wouldn't take much to do better.
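The intent-in-JSON idea above can be sketched without any LLM plumbing. Assume a hypothetical model call that, prompted with "timer one hour 30", returns something like `{"name": "set_timer", "args": {"seconds": 5400}}`; the app side is then just JSON parsing plus a dispatch table. All function and field names here are made up for illustration:

```python
import json

# Hypothetical handlers -- the app-side functions the LLM is allowed to invoke.
def set_timer(seconds):
    return f"timer set for {seconds} seconds"

def add_todo(text):
    return f"added todo: {text}"

HANDLERS = {"set_timer": set_timer, "add_todo": add_todo}

def dispatch(intent_json):
    """Parse an intent JSON string (as the model would return it) and
    route it to the matching handler, rejecting unknown function names."""
    intent = json.loads(intent_json)
    if intent["name"] not in HANDLERS:
        raise ValueError(f"unknown intent: {intent['name']}")
    return HANDLERS[intent["name"]](**intent["args"])

# What the model would be prompted to emit for "timer one hour 30"
# (one hour thirty = 5400 seconds):
print(dispatch('{"name": "set_timer", "args": {"seconds": 5400}}'))
```

Rejecting unknown names matters here: the model only ever produces text, so the dispatch table is the whitelist that decides what it can actually do.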

[+] flashgordon|3 years ago|reply
... and in a way that is cost effective or cost attractive?
[+] guestbest|3 years ago|reply
I wouldn’t say the race is won or lost at this point, just that we are on the edge of a bubble that is revealing decent, more affordable technology, like in the dot-com era. But for me at least, GitHub Copilot and GPT-3 have been entirely underwhelming except for things I use Fiverr for. I don’t understand what is causing all the hype. Would someone please show me something technically impressive, or explain the steps to make it usable, like the prompt engineering? It is just too deep a rabbit hole for me to go into casually and get a good answer. It requires a setup that I must be overlooking. Also, I am mostly unimpressed by Hugging Face. I skimmed the article and it didn’t stand out. It takes a lot of inputs to get one good output, which requires a lot of expensive equipment and electricity.
[+] tensor|3 years ago|reply
What a weird article. Given that there are as yet no devices like Siri that use these new language models, it's fairly nonsensical to compare a "chatbot" like ChatGPT to a device like Siri/Alexa/Google Assistant that actually takes verbal commands and performs real actions on them.

NYT quality is really awful these days.

[+] quitit|3 years ago|reply
I've used Siri around 10 times today, I haven't used ChatGPT 10 times to date.

It's about being the right tool for the job. I don't catch a flight to do my groceries, even though a car needs me to drive it, only fits five people, and doesn't go thousands of miles on a single tank. Yet an analogous headline would be HOW AEROPLANES BEAT CARS.

[+] carlycue|3 years ago|reply
I watched the “Introducing GPT-4” video on the OpenAI YouTube channel (https://youtu.be/TxkJMX0KyS0) and you see top-of-the-line MacBook Pros and Apple Watches all over their offices. I bet most of them have iPhones too.

The moral of the story is, all Apple cares about is selling you hardware. The fact that Siri is awful has made no dent in their business or brand.

[+] Idiot_in_Vain|3 years ago|reply
It hasn't made a dent because people have had very limited use for AI assistants so far. With the progress that ChatGPT is showing, that can rapidly change, and Apple can become old news, same as Nokia and Motorola, which once dominated the mobile market.
[+] mark_l_watson|3 years ago|reply
Well, I lost confidence in the NYT quite a while ago but I read this article anyway.

I disagree about Apple: Siri on my Apple Watch does everything I want it to: using the on-board Apple Silicon neural cores, it accurately processes my speech input to set timers, open email or messages, and do many other things I want, even with no real keyboard input.

I think that Google also effectively uses hardware tensor support for the same kind of thing, at least on their phones.

I also think that the AI assistant requirements are different between watch vs. phone vs. tablet vs. laptop vs. smart car.

I would hope that even though they use some common code and deep learning models, that each platform is treated individually.

Now, on my laptop, absolutely Bing/ChatGPT search, the ChatGPT Plus web app, and custom applications written with LLMs + LangChain + Llama-Index, etc. are relevant to developing AI assistants for laptops.

EDIT: and a few years ago Apple did a fantastic job integrating deep learning (and other ML) into CoreML making it easy for developers to integrate this tech into their apps.

[+] slowmovintarget|3 years ago|reply
Siri, Alexa, and Google Assistant "lost" because they aren't assistants. They are voice user interfaces. They have discrete modes, requiring discrete vocal commands for controlling timers, controlling media players, creating schedule entries, or ordering products.

An assistant would be able to take an instruction like "In twenty minutes, play Lullabye by Billy Joel, and turn the lights off when it's done." That combines functions in useful ways that none of the previous generation of assistants can manage.

ChatGPT can't do that either, as it cannot control things. It can generate text, but there's very little likelihood we'd want to allow LLMs to manipulate systems until they're far more reliable.

Where it will be useful is as supercharged autocomplete. I fully believe LLMs will pop up all over the place where content generation is the task (like Copilot).
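For contrast with single-command interfaces, the combined instruction above is really a small timed plan. Here is a minimal sketch of what an assistant would need to derive and execute, with made-up action names and an assumed roughly three-minute song length (a real implementation would hand each step to a scheduler rather than just sorting a list):

```python
# Hypothetical plan derived from: "In twenty minutes, play Lullabye by
# Billy Joel, and turn the lights off when it's done."
# Delays are seconds from "now"; 23 minutes assumes a ~3-minute song,
# purely for illustration.
plan = [
    (23 * 60, "lights_off", {}),
    (20 * 60, "play", {"track": "Lullabye", "artist": "Billy Joel"}),
]

def schedule(plan):
    """Return plan steps in execution order. A real assistant would
    register each (delay, action, args) tuple with a timer instead."""
    return sorted(plan, key=lambda step: step[0])

for delay, action, args in schedule(plan):
    print(delay, action, args)
```

The hard part is not executing such a plan but reliably producing it from free-form speech, which is exactly where the old slot-filling assistants fall down and where an LLM front end could help.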

[+] fdgsdfogijq|3 years ago|reply
There's a simple equation that I think is true:

Time to integrate LLM into existing home assistant >= Time to build home assistant from scratch using LLM.

They lost it in a very simple way:

Research scientists and managers refusing to let their importance be diminished. Existing NLP paradigms preserved at all costs; conservative technical decision-making. Building something that can be delivered in 4 months, as opposed to pursuing a long-term technical vision. The people running the important NLP orgs in Alexa have been there since Alexa started. There is no fresh air, just constant pushing of feature upon feature. No room for bottom-up innovation; road maps are built and then delegated to worker-bee coders.

I have first-hand knowledge that in Alexa right now there are year-long-plus initiatives to productionize ML models that are completely obsolete compared to GPT-4. Models that are still being figured out are already borderline worthless.

The only moat is the number of devices already in people's homes. A startup armed with a ChatGPT-class model could take down Alexa very quickly if it could distribute speakers to homes.

Let's not forget the panic and the existential crisis for careers built on old NLP knowledge, in both academia and industry. It turns out the solution to NLP is just "scale". All of the complicated heuristics for getting old-school NLP solutions to work will be thrown away.

[+] rcpt|3 years ago|reply
How would GPT be useful for Alexa? I don't actually want to chat with the speaker; I just want it to find my keys and set timers.
[+] quitit|3 years ago|reply
Editor: Here's a racy headline, do you think you can make an article out of it?
[+] josefresco|3 years ago|reply
Lost? These companies may have lost the AI software race, but they have quite a moat. They created a new hardware category, a large 3rd party ecosystem (in the case of Alexa), sold a shitload of mostly useless "smart" devices and now they just need new software and suddenly they're... not mostly useless! Who else can sell hardware right now with "new" AI tech?

If GPT-4 were hardware-exclusive, it would be a different story.

[+] Idiot_in_Vain|3 years ago|reply
Amazon lost a ton of money on the Alexa effort. They hoped people would use Alexa to order stuff on Amazon, which did not materialize. A few months ago the Alexa department was downsized. With the huge progress in chatbots that ChatGPT has demonstrated, Alexa's future is uncertain.
[+] mataug|3 years ago|reply
I doubt the race is over; it's only a matter of time before Apple, Amazon, and Google integrate large language models into their assistants.

Most people I know have some home assistant. They are genuinely useful for some tasks around the home, such as playing music or the radio, checking the weather, setting timers, and controlling smart-home devices.

[+] goatlover|3 years ago|reply
The race is over?
[+] ProAm|3 years ago|reply
For Alexa it is. None of the others have shown any improvement or added value in years. I'd say they have a long uphill battle, and these companies are already looking to trim as much fat as possible. This is a new epoch and I don't think the old gatekeepers stand a chance.
[+] adamrezich|3 years ago|reply
I just assumed that nobody wanted "their AI voice" to be recorded saying any kind of wrongthink or anything else uncouth, until they sufficiently lock that stuff down (if such a thing is even 100% possible).
[+] syntaxing|3 years ago|reply
Kinda extreme to declare a winner when the race just started. But I really really really hope this forces Apple to make a Siri + LLM feature. Even a 7B model would work better than Siri today.
[+] melling|3 years ago|reply
Which public and private companies are doing well with AI? I'm thinking more along the lines of a long-term investor.
[+] senectus1|3 years ago|reply
pfft sure.

Google/Apple/Amazon etc. can plug their assistants into ChatGPT with minimal effort.

They haven't yet because they need to tweak it for profit.