It's not so much about AI as it is about putting all services behind an Apple interface. "Users will soon be able to use Slack, Uber, or Skype, by talking directly to Siri." That doesn't mean launching the vendor's app. It means bypassing it.
Apple is taking control of the user experience with third parties. It's the next generation of the "portal" concept. Expect to see Apple standards on what your API needs to look like.
iOS apps have always been behind an Apple interface. I fail to see how registering a voice command is different from registering, say, a mailto: handler. There'll be guidelines (hopefully) regarding preferred sentence structure, because it's a basic need of such an interface.
Users may often not see the app – that's kinda the purpose of voice commands. But the interface isn't replaced by an Apple interface, its surface is simply reduced.
You'll say 'Facebook message mom' or something like it, and it's going to message mom. You won't see the FB logo, because you won't see any logo. You may see more of your mom, though.
In fact, third-party apps will profit, because they are no longer second-class citizens compared to Apple-provided apps.
There are certain tasks that are predestined for Siri – such as starting a Nike+/Runkeeper/etc. run: easy one-shot commands that replace unlocking, navigating, opening menus, etc. I'd suspect that if this were a major conspiracy, Apple would have opened the API earlier.
Voice is a new interface that replaces the GUI, simple as that. It's more similar to a CLI. Either way, it's an Apple interface, but so is the iPhone screen.
Voice (and messaging) are constrained interfaces, so you need to put more horsepower behind them to help them work and become interactive.
As I understand it, those Slack/Uber/Skype Siri widgets are opt-in, and it is the developer who needs to write them. The same goes today for sharing extensions and Notification Center widgets. So Apple is not bypassing the vendor's app per se; it just provides a way not to launch the whole app UI when it's not required, and it's the developer who is still in control.
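The opt-in model described above can be sketched as a toy registry: apps register handlers for the intents they choose to support, and the platform routes a parsed command to the owning app's handler. All names here are hypothetical, not Apple's actual SiriKit API – this only illustrates why the developer stays in control.

```python
# Toy sketch of opt-in intent handling: the platform parses the utterance,
# but only apps that registered a handler for that intent type ever see it.
# All names are illustrative, not Apple's real SiriKit API.

handlers = {}  # (app, intent_type) -> handler function

def register(app, intent_type, handler):
    """An app opts in by registering a handler for one intent type."""
    handlers[(app, intent_type)] = handler

def dispatch(app, intent_type, **slots):
    """The system routes the parsed command; the app's code does the work."""
    handler = handlers.get((app, intent_type))
    if handler is None:
        return f"{app} does not support {intent_type}"
    return handler(**slots)

# The messaging app, not the platform, decides how a message is sent.
register("Facebook", "send_message", lambda recipient, text:
         f"Facebook sent {text!r} to {recipient}")

print(dispatch("Facebook", "send_message", recipient="mom", text="hi"))
print(dispatch("Uber", "send_message", recipient="mom", text="hi"))
```

Note that an app that never calls `register` simply never receives the intent – the "bypass" only exists where the developer built it.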
> Such technology can make the phone or other device appear smarter because it anticipates the types of activities people want to do.
Currently, all the attempts I've seen to anticipate what I want make applications far more annoying ("do you want to send this email to your mother as well?"). It's kind of like an uncanny valley. An application that could truly anticipate my needs would be good, but when it tries to anticipate and gets it wrong, it becomes worse than the application that doesn't try and silently waits for me to give it instructions.
Imagine a human PA who knows you really well. You'll still have to clarify what you want from him/her a good fraction of the time.
I suspect AI lives in a special kind of uncanny valley where we're less tolerant of mistakes and imperfections than we would be if we were telling humans to attempt the same tasks.
I'm not sure why this is, but it could be because in spite of all the bugs and failures, we still expect computers to be far more predictable and reliable than humans.
If AI doesn't match the expectation, it's perceived as more frustrating and less useful than perhaps it really is.
Google put AI so central at their last conference – as the next big thing – that I can't help but think Apple was forced to follow that path. So they used the terminology of "AI" and "deep learning" in their presentation. Yet it didn't make me confident that they are really up to it – that they really have the expertise to make this into something that actually works (even on the phone itself! All the other deep-learning algorithms need huge frameworks with GPUs!).
Myself, I wasn't able to experience the promised virtues of AI from Google itself. So I wouldn't go so far as to call it a bluff from Google. But definitely from Apple, as of now.
I see it more as a back-and-forth. If anything Apple fired the first shot in anger in the platform AI wars by integrating Siri into iOS. Before then voice control was just simple vocal commands and dictation.
It seemed for a while though that Apple was falling behind. Google Now and Cortana overtook them in capability and now they're fighting for the lead again.
I wonder what happened. Perhaps the original Siri team didn't properly gel into the Apple corporate culture? I know the founders left after a few years. Anyway, it looks like Apple now have a solid internal Siri team able to push the platform forwards.
> For example, Apple will now scan your photos using facial recognition to cluster people together in your photo collection.
Apple's OS X photo apps have done facial recognition since 2009. This new thing is more advanced as well as a first for iOS, but I'd hardly call it a "big shift".
The first big shift is doing it on a mobile device, locally, without sending data to the cloud. That's pretty big, and it has only become possible recently with the performance increases in mobile SoCs.
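The kind of on-device face clustering described above can be sketched as a greedy pass over face-embedding vectors: each face joins the first cluster it is similar enough to, otherwise it starts a new cluster. This is a minimal toy with made-up 2-D "embeddings"; real systems use high-dimensional vectors produced by a neural network, which is exactly the part that taxes a mobile SoC.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster_faces(embeddings, threshold=0.95):
    """Greedy clustering: join the first cluster whose first member is
    similar enough, otherwise start a new cluster of one."""
    clusters = []  # each cluster is a list of embeddings
    for emb in embeddings:
        for cluster in clusters:
            if cosine(emb, cluster[0]) >= threshold:
                cluster.append(emb)
                break
        else:
            clusters.append([emb])
    return clusters

# Toy 2-D "embeddings": two faces of one person, one face of another.
faces = [(1.0, 0.1), (1.0, 0.12), (0.1, 1.0)]
print(len(cluster_faces(faces)))  # two people found
```

Since nothing here leaves the device, the privacy argument reduces to whether the embedding model itself can run locally.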
Apple led with graphical user interfaces for the better part of 30 years: first with the original Mac bringing Xerox technology to the masses, then with the NeXTStep-revived iMac, the clever iPod, and the smartphone.
Now they struggle in the post-GUI era of voice and AI.
Another company going to do what it doesn't have any clue how to do. Google+, Bing, ... It seems like we now have 4-5 companies copying each other in a silly way all the time: Apple and MS simply can't do cloud properly, Google barely keeps up with Amazon, Amazon's AI is horrible compared to Google's, Apple is copying MS's Surface and design (!), Google+ can't do Facebook at all, Facebook can't do ads properly, MS can't do search... Looks wonderful for Apple's AI.
Facebook can't do ads properly? I don't have any ties to FB, but man, Facebook does ads extremely well. I'm not sure what argument you're basing this on.
I thought MS's cloud is doing fine in comparison to Google's – MS is second while Google is third. However, I totally agree with you on the others. Particularly Apple, with its closed culture: how many AI researchers can Apple hire while researchers are in such high demand? They can go to other companies that are more open and let them publish their work.
Besides the fact that I disagree with some of your observations, it's not really clear what your point is. Are you saying these companies shouldn't try to compete if it isn't in an area where they've already been successful?
IMO, it doesn't matter whether they can do it, because they must do it.
If they don't somehow learn to do it, they will be eaten by Google and Facebook; the money is not in the hardware anymore. A decade from now, it certainly won't be.
No it's not; Apple has had facial recognition on the iPhone for a decade, voice recognition in Siri (with a bit of AI to generate the answers), and handwriting recognition too.
You must be a millennial. Those challenges have been pushing AI research for decades (and still are). In case you haven't noticed, "build an AI" consists of quite a few different problem domains -- not just chatbots and fully formed humanoid robots.
You're confusing AI and AGI (artificial general intelligence). AGI is a subset of AI focused on introspective systems that are resourceful and don't get stuck. Like humans. Face recognition and picture processing are not AGI.
All of those features sound awful to me. I don't want AI scanning my text messages and "anticipating" what I want to do. That's just way too creepy. The photo stuff is irrelevant to me: I realized long ago that I never look at my photos later, so I stopped taking them. But if I did, I don't think I'd want Apple slurping them all up to run recognition algorithms.
I've been jealous of Android adding those features, but I never considered using them because of the privacy implications. What excites me about Apple doing it is that they seem to really focus on maintaining privacy. I'd love more 'AI' on my phone if they can pull that off!
That's nothing. For years, iPhones have had this creepy technology which scans what you're typing and anticipates which word you're trying to type. It's Steve Jobs' greatest coup: getting millions of people to willingly let their phone AIs spy on every letter they type.
As an Android dev familiar with Intent-style interoperability, I simultaneously detest and envy Apple's hand-built APIs.
The golden jail is getting a new door.
Paraphrasing: "Apple puts the user in a prison... but it's a very beautiful prison."
Sort of like BigData and Web 2.0, vague terms used for almost everything.
I'm sure this makes photography irrelevant for everybody /s
Viva la vida! Down with the robot revolution!
Just wait till you have old enough kids (and, later, grandkids).