(no title)
mtthwmtthw | 8 years ago
Another example is how, as an Alexa skill maker, you have to provide utterance / intent mappings. Are those just used to accurately classify intents / entities? Or are they also used to identify which skill to pass a user utterance to as part of the Alexa skill service, since there could be very little variation between skill names, or between inquiries Amazon is supposed to fulfill itself when a person is talking?
sib | 8 years ago
Yes, you are right, in that there are ways to blend the two (use data from one to improve the other). However, at the end of the day, the better the system can determine which words were spoken (using whatever technology), the better it can determine the meaning and the intent, and then decide what to do about it...
> Another example is how as an Alexa skill maker, you have to provide utterance / intent mappings. Is that just used to accurately classify intent / entities? Or is it also used to identify which skill to pass a user utterance to as a part of the alexa skill service
It is primarily used for the former (classifying utterances into intents, for bootstrapping). Over time, with ML, the goal would be to help understand which "skills" apply to which intents. Unfortunately, today, that's not the case (in Alexa), and that's why the skill-specific keywords or names are still needed. From the user's point of view, it would be preferable to be able to say "Alexa, I need a ride to the airport" and have Alexa figure out whether Uber, Lyft, or the light rail service with a station two blocks from your house is the "best" option for you right now, based on price, availability, and your explicit and implicit preferences. Of course, the system would also have to allow you to specifically request a Lyft, if that's what you want.
(And, of course, it should ultimately be proactive and just offer to get you to the airport when it sees a flight in your calendar, or from having scanned the flight purchase confirmation in your email...)