item 14516127

paul9290 | 8 years ago

I have used Siri faithfully from 2011 until about the start of this year, when I bought a Google Home.

Comparatively, Siri is flat-out dumb and frustrating to use: it understands maybe 85 percent of my queries versus Google Home's 97 percent.

It seems to me Apple has not taken their hoards of cash and used them to up their AI game. They look solely focused on the present while competitors build the future, a future Apple will then foolishly try to catch up with.

Like at last week's WWDC: they debuted ARKit but didn't bother to create an amazing/innovative AR app to showcase it and get the masses excited about AR and their possible upcoming AR gear.

k-mcgrady|8 years ago

> Like at last week's WWDC: they debuted ARKit but didn't bother to create an amazing/innovative AR app to showcase it and get the masses excited about AR and their possible upcoming AR gear.

Why would they announce an app for a supposed upcoming AR product before announcing the actual product? You announce the developer kit six months ahead, get developers working on apps, and then when you announce your AR product there are lots of apps ready to go, not just an in-house one. Even if an AR-specific product doesn't launch soon, they've brought a good AR API to devs a few months before iOS 11 - and when it does launch (if developers take advantage of the APIs), millions of people will be using AR daily on their iOS devices. Right now I know absolutely nobody using AR on any platform. That's going to change very quickly, and an in-house app won't make a bit of difference.

JimDabell|8 years ago

> Like at last week's WWDC: they debuted ARKit but didn't bother to create an amazing/innovative AR app to showcase it and get the masses excited about AR and their possible upcoming AR gear.

It's a developer conference, not a product launch, so of course it focused on the code rather than on "getting the masses excited". The point is that developers can now spend the next few months creating those AR applications to impress people before iOS 11 with ARKit launches to the public.

paul9290|8 years ago

Yeah, but they showed new features in iMessage, the camera app, etc. - all copycat ideas.

They could have built an AR feature into the camera app to show they are still innovators, not just followers of other companies that innovate - and, further, to showcase AR and get the masses excited about it.

abritinthebay|8 years ago

I feel this fundamentally misunderstands how each of these AIs works.

You're describing an interface problem. Google's AI is extremely good at recognizing speech compared to Siri, but that's a wholly different thing from deep understanding.

Siri is a more advanced AI in many ways due to its understanding of intents; Google's is much more command-based under the hood. However, Apple's speech recognition is letting it down here. Intents are a much more sophisticated way to interact with an AI than Google's command-based structure, but they are correspondingly more complex to integrate, and you realistically need to infer more from the user (which Apple needs to get better at).
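A rough way to picture the distinction (purely a hypothetical sketch of the two styles - not either company's actual implementation, and the device/phrase names are made up):

```python
# Command-based style: the utterance must match a known phrase.
COMMANDS = {
    "turn on the lights": ("lights", "on"),
    "turn off the lights": ("lights", "off"),
}

def handle_command(utterance):
    """Look up an exact command; unknown phrasings simply fail."""
    return COMMANDS.get(utterance.lower())

# Intent-based style: classify what the user wants, then infer the
# missing details (device, state) from the utterance or from context.
def handle_intent(utterance, context=None):
    words = set(utterance.lower().split())
    state = "on" if "on" in words else "off" if "off" in words else None
    if state is None:
        return None
    if "lights" in words:
        return {"intent": "SetLightState", "device": "lights", "state": state}
    # No explicit device: an intent system can fall back on context
    # (e.g. the last device mentioned) instead of failing outright.
    if context:
        return {"intent": "SetLightState",
                "device": context.get("last_device"),
                "state": state}
    return None
```

The command lookup breaks on any rephrasing, while the intent handler tolerates "could you turn the lights on please" and can lean on context for "turn it off" - which is also why the intent style is harder to build and needs more inference.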

I don't disagree that Apple needs to step up its game with how people interact with Siri, but it's a perceptual issue with the interface, not with the underlying AI.

(Source: I have discussed these exact issues with one of the founders of Nuance.)

wstrange|8 years ago

This runs contrary to the experience of most people.

See http://www.businessinsider.com/siri-vs-google-assistant-cort...

Siri is dead last. Surprisingly, Cortana did quite well (I didn't realize Microsoft was catching up in this space).

Those responses require a lot more than just voice recognition - they require an understanding of context and "intent".

yincrash|8 years ago

This morning I told my Google Home "Can you turn your volume all the way up?" and it did it. It maxed the volume, not just turned it up a little bit. To me, this indicates that the Google Assistant is actually pretty good at understanding intent and not just repeating exact commands. Or maybe _I'm_ just misunderstanding your definition of 'commands' and 'intents'.

mtthwmtthw|8 years ago

I don't really understand your distinction between intents and commands. I've created apps that leverage a variety of bot frameworks, and most of them seem to fall under your "command" criticisms but are labeled as intents. I think I understand the heart of your argument, which is akin to saying Google handles intents/actions as if it were filling inputs on a web form that ultimately goes to an API for response generation.
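That web-form reading of "intents" can be sketched like this (a hypothetical slot-filling toy - the intent name, slots, and parsing are all made up for illustration, not any framework's real API):

```python
# An intent is treated like a form: required fields ("slots") are
# filled from the utterance, and the assistant re-prompts for any
# field still missing before it can act.

REQUIRED_SLOTS = {"SetAlarm": ["time"]}

def parse_slots(utterance):
    """Naive slot extraction; a real system would use an NLU model."""
    slots = {}
    for word in utterance.lower().split():
        if word.endswith("am") or word.endswith("pm"):
            slots["time"] = word
    return slots

def fulfill(intent, slots):
    missing = [s for s in REQUIRED_SLOTS[intent] if s not in slots]
    if missing:
        # Form incomplete: prompt for the first missing field.
        return f"What {missing[0]} should I use?"
    return f"Alarm set for {slots['time']}."
```

"Set an alarm for 7am" fills the form and executes; "set an alarm" leaves the time slot empty and triggers a follow-up question - which is exactly the fill-inputs-then-submit pattern described above.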

Having said that, I don't know how you can say that Siri's backend is much better from an intent perspective when you can't leverage it properly because of the UI's shortcomings. From what I've seen, it doesn't even handle context well. Now it sounds like Siri will be used to do proactive things, which is certainly new and different from Google Assistant. Yet I suspect that logic is just being branded as Siri because there is a push to label Siri as your intelligent assistant, as opposed to the weird robot thing you can use to check the weather.

kartD|8 years ago

So what would be the advantage of Apple's system (assuming they improve speech recognition), and how would that contrast with Google's approach?

Eridrus|8 years ago

[citation needed]