The author's particular gripe is the Watson advertisements showing someone sitting down and talking to "Watson." They bother me as well (and did so when I was working at IBM in the Watson group) because they portray a capability that nothing in IBM can provide. Nobody can provide it (again, to the author's point) because dialog systems (those which interact with a user through conversational speech) don't exist outside specific, tightly constrained decision trees (like voice mail or customer support prompts).
If SpaceX were to advertise like that, they would have famous people sitting in their living room on Mars, talking about what they liked about the Martian way of life. In that case I believe most people would understand that SpaceX wasn't already hosting people on Mars.
Unfortunately many, many people think that talking to your computer is actually already possible; they just haven't experienced it yet. Not sure how we fix that.
It all goes back to how the Watson that played Jeopardy! - what the vast majority think of when they hear the word "Watson" - was a really cool research experiment and amazing advertising.
A lot of the people that pay for "Watson" probably think they're paying for something really similar to the Watson that beat Ken Jennings at Jeopardy! and cracked jokes on TV. They're paying for something that might use some of the same algorithms and software, but they're not actually getting something that seems as sentient and "clever" as what was on TV.
To me, the whole "Watson" division does seem like false advertising.
> Unfortunately many, many people think that talking to your computer is actually already possible; they just haven't experienced it yet.
Given how often another person can't correctly infer meaning when people talk, I doubt it will ever be how people imagine it.
Initially, I imagine it will be a lot of people trying to talk normally and a lot of AI responses asking how your poorly worded request should be carried out: choice A, or choice B? You'll be annoyed, because why can't the system just do the thing that's right 90% of the time? The problem is that 10% of the time you're just not explaining yourself well at all, and 10% is really quite a lot of the time; it probably ranges from a few requests to dozens of requests a day. And while being asked what you meant is annoying, having the wrong thing done can be catastrophic.
As humans, we get around this to some degree by modeling the person we are conversing with and asking "what would they want to do right now?" That is another class of problem for AIs, I assume, so I think we may not get human-level interaction until AIs can fairly closely model humans themselves.
I imagine we'll probably develop some AI pidgin language that provides more formality; as long as we follow it, AI voice interactions become less annoying (but you have to spend a bit of time beforehand formulating it).
Then again, maybe I'm way off base. You sound like you would know much better than me. :)
What's bad about it is that it leads to a "market for lemons" for anyone working in the A.I. field in either engineering or sales. People see so much bullshit that they come to the conclusion that it is all bullshit.
This is not just Watson and IBM. Many, many people in AI make grandiose claims and throw around big words like "Natural Language Understanding," "scene understanding," or "object recognition," etc.
And it is a very old problem, at least from the time of Drew McDermott and "Artificial Intelligence meets Natural Stupidity":
https://homepage.univie.ac.at/nicole.rossmanith/concepts/pap...
From which I quote:
However, in AI, our programs to a great degree are problems rather than solutions. If a researcher tries to write an "understanding" program, it isn't because he has thought of a better way of implementing this well-understood task, but because he thinks he can come closer to writing the _first_ implementation. If he calls the main loop of his program UNDERSTAND, he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself, and enrage a lot of others.
This is Marketing 101 though. It's easier to sell someone a dream or some emotional state than it is to sell an actual product/thing that you have. I'd give people a little more credit. Adults know what advertisements are and that they're all phony.
There was a time when I'd introduce people to 'Eliza'-type programs. Those programs stashed phrases typed in by users. When new users typed in a phrase, the programs would parrot back the stashed phrases ... based on really crude algorithms, or even random selection.
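The whole trick fits in about a dozen lines. A minimal sketch of that stash-and-parrot scheme in Python (the seed phrases and the word-overlap rule are my own illustration, not taken from any particular Eliza clone):

    import random

    stash = ["Why do you say that?", "Tell me more."]  # seed phrases

    def respond(user_phrase):
        # Crude "algorithm": prefer a stashed phrase that shares a word
        # with the input; otherwise fall back to pure random selection.
        words = set(user_phrase.lower().split())
        matches = [p for p in stash if words & set(p.lower().split())]
        reply = random.choice(matches or stash)
        stash.append(user_phrase)  # stash the phrase to parrot at later users
        return reply

    while True:
        line = input("> ")
        if not line:
            break
        print(respond(line))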
Nonetheless, I watched people get really worked up about what the computer was 'saying'. Partly because of the sarcastic stuff people would say, partly because of their expectations about 'cybernetic brains'.
Now the 'Cyc'le is back. And people actually working on this really hard problem are not helped by dumb marketing.
The most ingenious trick that the IBM marketing department pulled was to get non-technical people (and probably even technical people, judging by this thread) to think that Watson is some kind of singular thing. Like it's a single big neural network with different APIs on it, or something. I honestly think that's what most people think Watson refers to.
Watson is like Google Cloud Platform. It’s just a name for a platform with a bunch of technologies.
E.g. Watson Natural Language Understanding was previously AlchemyLanguage. It was just rebranded.
It’s very clever though, I’ll give them that. Use a human name so it has all the anthropomorphic connotations and let people think it’s some kind of AI learning things.
I briefly worked with a Watson team on a cool idea to map a person's 'knowledge space' (or probable knowledge space given their background) against Watson's knowledge space and guide them to relevant learning materials and journal articles and the like.
The idea was to save people time so they aren't rehashing stuff they know down pat or jumping ahead into material they cannot understand but, instead, find that next step into what they almost know. The idea from there would be to let them specify where they want to go and guide them, step by step, exposure by exposure, to that summit.
In a few days, it turned into Just Another News Article Recommendation Engine based on interest and similar profiles with other clients. Yawn.
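For what it's worth, the core of the original idea fits in a few lines. A toy sketch (the topic sets and the scoring rule are entirely my invention for illustration, not anything IBM built):

    # Recommend the item just beyond what the reader already knows:
    # mostly familiar prerequisites, one or two genuinely new ideas.
    known = {"regression", "gradient descent", "linear algebra"}

    articles = {
        "Linear regression recap": {"regression", "linear algebra"},
        "Intro to backprop": {"gradient descent", "chain rule"},
        "Transformers from scratch": {"attention", "embeddings", "chain rule"},
    }

    def frontier_score(topics):
        new, familiar = len(topics - known), len(topics & known)
        return familiar - 2 * abs(new - 1)  # penalize rehashes and big leaps

    for title in sorted(articles, key=lambda t: frontier_score(articles[t]),
                        reverse=True):
        print(title, frontier_score(articles[title]))
    # "Intro to backprop" ranks first: one new concept on familiar footing.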
Watson is the IBM marketing department going mad about ways in which IBM can continue to remain relevant in a world that increasingly doesn't care about what hardware a particular computer program runs on.
If there is going to be a 'second AI winter' I fully expect Watson and other such efforts to be the cause.
IBM hasn't been about the hardware for a long time; instead it has been about the consulting services contract. And when we, as a startup, first engaged with the Watson folks, it was clearly a sales funnel for their consulting services.
That said, IBM has a tremendous amount of research they have done in AI over the years. It is not that they don't have a lot of interesting technology they can throw at different business problems; it's that they seem to have a hard time getting invited to the party if they don't track the same hype buzz that the current ML/AI craze has embraced.
Here I am doing a reasonably good impression of Watson, by totally missing your point and just adding that we have already had (at least) two AI winters; this would be (at least) the third coming up, if it does.
One thing about Watson that I remember is this presentation by a very senior guy. He had just come back from the US and was presenting what he learned there about Watson Healthcare (IIRC, that's what it was called), which I assumed was a division of the Watson team that was focused on cancer and stuff like that.
I'm paraphrasing, but during the presentation he said something like: "The project was not originally called Watson Healthcare, it was called X (I can't remember exactly), but potential customers were like 'No, no, leave X, we want Watson', so we had to change the name to Watson Healthcare for the sake of our customers. Watson Healthcare actually doesn't have anything to do with Watson."
I couldn't believe, at the time, how much respect I lost for IBM in about 20 seconds. First of all, he thought we were idiots. You have to be brain dead to believe that he renamed X to Watson Healthcare in order to help customers. They just wanted to ride the hype train of the Watson brand and were lying to everybody about it.
When your customer is an executive who needs to sell their decision to a board or C-level then pitching "IBM Whatever Health Stuff" is a bit harder than "Watson Healthcare" because of marketing.
The way IBM talks about it is completely BS. However, this round of AI is definitely better than the last one. Specifically, what's different this time around is that previously, expert systems and many machine learning techniques required that you specifically hand-code things like:
1. Parsing the input dataset and providing it as 'features'
2. Hand-coding the logic and rules for many different cases (expert systems)
Now, it has become easier to train a model such as a neural net where you can provide much 'rawer' data; you just provide it a 'goal' in the form of a loss function, which it tries to optimize over the dataset.
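To make that shift concrete, here is a toy example in the new style: no hand-coded features or rules, just raw (x, y) pairs and a loss to drive the fit (the data, model, and learning rate are all made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100)
    y = 3.0 * x + 1.0 + 0.1 * rng.standard_normal(100)  # hidden "rule"

    w, b = 0.0, 0.0
    for _ in range(500):
        pred = w * x + b
        # The 'goal': minimize mean squared error over the dataset.
        grad_w = 2 * np.mean((pred - y) * x)
        grad_b = 2 * np.mean(pred - y)
        w, b = w - 0.1 * grad_w, b - 0.1 * grad_b

    print(w, b)  # recovers roughly 3.0 and 1.0 with no hand-written rules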
By 'true' AI, I think most people mean 'how a human learns' - which is actually a very biased thing, since we humans have built-in goals like the need to survive, etc. I do believe it would be possible to encode these as goals, although doing that properly and more generically seems a little bit in the future.
Having had some level of access inside IBM, I can say the whole cognitive initiative has just been this bizarre, self-feeding marketing-and-sales escalation where the real engineering has to 'bring the cognitive' in the most Dilbert pointy-haired-boss sort of way.
In a graduate NLP class at Berkeley a few years back, Dan Klein shared an AAAI article on Watson from 2010 (when it actually was a distinct technology stack and not just marketing nonsense). At that time IBM was focused on question answering in Jeopardy!. It was pretty clearly incremental rather than novel. Dan used the example to show that 1) ensemble techniques can be effective if done properly, 2) hyperparameters matter, a lot, and 3) there's human intelligence and then there's Ken Jennings intelligence: looking at precision and percent answered, he's in his own separate league. It made me think a lot about individual differences in terms of declarative knowledge.
https://www.aaai.org/Magazine/Watson/watson.php
It was also unclear to me, when they did the contest, whether Watson only had access to the analog audio and/or image of the questions asked. That is, did it have to parse the questions the same way Ken did?
Also, it was clearly optimized for a specific use case. If the questions were reworded with more clues that were puns or needed inference, I think Ken would have done about the same but Watson would have fared much more poorly.
"Machine learning" was a pretty good buzz word, but "Artificial Intelligence" is even better. And in a way, ML is part of AI so it isn't really lying.
IBM tries to sell into c-suites of companies that are less technically-adept than the average HN reader. Their marketing seems to be pretty effective, at least in getting proof of concept projects signed with big names.
Watson is simply IBM's ML product, but they call it AI and wrap it in marketing for all the reasons every AI startup does the same thing.
I disagree that companies implementing "AI" are less technically adept than the average HN reader. This sort of comment comes up in other similar discussions.
It is untrue. They are very technically adept, with teams of people who are also aware of their problem spaces and technology.
I understand there are some cases where no technical team is making these sorts of decisions, but it isn't the norm, and even smaller companies often have incredibly technical teams.
I don't understand where this idea comes from. I've done consulting and implementation of these types of projects for most of my life. My experience says it's false. Where does the feeling that this is true come from?
Is it from people who aren't a part of the process theorizing that some unknown force must not be as intelligent as they are? Is it from looking at the decision making in general (why did they buy an ERP?) and drawing correlations?
I'm honestly unsure how there's this widespread idea that there aren't brilliant people everywhere doing the same work they are. Yes, there are problems and challenges all over the place, but I find I am amazed, all across the country and the world, at the level of expertise in companies.
Yep, matches my experience. We invited IBM to our company to pitch Watson; there was very little that was impressive about it. "Watson" is mostly just a coding-services integration business that will assign a team to add some basic NLP to your web services. Someone with a free weekend and a book on TensorFlow or NLTK can replicate most of what the IBM sales engineers pitch for Watson.
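To put a number on "free weekend": the basic-NLP tier of such a pitch is roughly this much NLTK (the sample sentence is mine; how crude, and how little code, this is, is rather the point):

    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    for pkg in ["punkt", "averaged_perceptron_tagger",
                "maxent_ne_chunker", "words", "vader_lexicon"]:
        nltk.download(pkg, quiet=True)  # one-time model downloads

    text = "IBM pitched Watson to our team in Austin, and it went poorly."

    tokens = nltk.word_tokenize(text)
    entities = nltk.ne_chunk(nltk.pos_tag(tokens))  # crude NER over POS tags
    sentiment = SentimentIntensityAnalyzer().polarity_scores(text)

    print(entities)   # a tree with IBM / Watson / Austin marked as entities
    print(sentiment)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}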
Oh wow, Roger Schank [1]! Haven't heard that name in a while - he was quite famous in the early days of AI. I wonder if he has figured out a good way to marry ML to his theory of Conceptual Dependency (CD) [2], because that could be ground-breaking for hard NLP problems.
Interestingly, I started reading the article without paying much attention to who the author is. A few lines in I began to wonder if this was going to be an unproductive rant, and whether the author had heard of things like CD etc ... It became funny right about then, because that's also when I happened to glance at the URL and saw Schank's name.
[1] https://en.wikipedia.org/wiki/Roger_Schank
[2] https://en.wikipedia.org/wiki/Conceptual_dependency_theory
In the 1980s I spent too much time trying to use Conceptual Dependency in a few small R&D projects. Looked promising, but I had little success with it.
I'd like to make a plug for my company (http://www.cyc.com) whose "AI" is not machine-learning based, does actual cognition and generalized symbolic reasoning, and lived through the AI winter of the 80s. We've gotten some contracts as a direct result of companies being disenchanted with Watson's capabilities.
I would like to ask you a question: over the years I experimented quite a bit with OpenCyc, which you stopped distributing last year. Is ResearchCyc reasonable to experiment with on a small server or powerful laptop? Is an OWL version available?
OT from the headline, but I take issue with the author's claim that Bob Dylan's work doesn't relate to the theme "love fades". Dylan has had a vast career beyond his protest song days, and I'd argue that one of his best albums, "Blood on the Tracks" would be accurately summed up as "love fades".
Dear engineers, merit is useless when you're trying to sell something. Authority is king, and people remember emotion and hyperbole. Marketing and sales is almost always about representing authority regardless of merit. The only thing that matters after a Watson sale is if Watson can help solve the problems the customers have.
I heard someone say that A.I. is just what we call technology that doesn't work yet. Once it works, we give it a specific name, like "natural speech recognition".
Personally I define AI as software that you "train" rather than "program". In the sense that neural nets and other ML tools function as black boxes rather than explicit logic.
By that definition, AI is a real thing—it's built on top of programming that uses compilers and languages and ones and zeros—but it's different and it's valuable.
To say it's all bullshit, I feel, is to cut yourself off from new skills. Kind of like "compilers are all bullshit—it's opcodes at the bottom anyway."
Not sure what you mean by that. Is there some industry standard around the term "Artificial Intelligence"? I agree that it's become a bit of a buzzword, but I'm not sure that it's being misused.
As someone who is currently working on a successful commercial product with a strong expert system component, I agree with the sentiment. The funny thing about this project is that neither the product owners nor the marketers, nor even the coders, ever use the term "expert system". It just doesn't sell any licenses nor garner any attention.
In 2017, Intuit, Inc., owners of Quicken, posted revenue of about $5 billion.
Not too shabby.
My view is that expert systems are a ubiquitous part of many products to the degree it's hard to even recognize them as such. They're not the main focus of anyone's marketing budget, because that makes about as much sense as promoting your "revolutionary axle technology" to sell a car.
Heh. Of course that's a space with a pretty constrained and well-defined set of rules. Until they aren't, of course. Which is why we still have accountants.
It's a pretty open secret in the community that what IBM pitches that Watson can do, versus what it (or any state-of-the-art system) really does, is pretty much bunk. This author calls it fraud, but a more charitable interpretation would be extreme marketing. We've seen a lot of failures with Watson, particularly in the medical space - MD Anderson's cancer work, for example (https://www.forbes.com/sites/matthewherper/2017/02/19/md-and...), where MD Anderson paid around $40 million (on an original contract deal somewhere near $4 million) and eventually abandoned it.
I do think Watson may be a "fake it until you make it" thing - in particular, they still have access to an incredible amount of data, and data determines destiny in a lot of AI.
But their message is aimed outside the community, so I don't think they deserve any charitable interpretation; they lost that privilege a long time ago. I have to say I'm biased (I wish I could see them disappear soon), but I think I have the right motivations.
Had IBM sales people present on Watson as a security solution recently. The stench of BS was so bad I nearly had to leave the room. It wouldn't bother me if they kept things generic, but they deliberately sprinkle the presentations with specific terms referencing hyped technology (deep learning, etc), with the clear objective of deceiving the audience into thinking they are using those technologies when they clearly aren't. It was unethical IMHO.
Chomsky calls it a sophisticated form of computational behaviorism. Just as the research program of behaviorism died out, this eventually will too. There are other respectable criticisms of AI, like Hubert Dreyfus's 'What Computers Can't Do'.
Neither Chomsky nor Dreyfus claimed that machine learning and/or AI won't solve any problems, but rather that the kinds of problems these solve are not relevant to the aspiration of human-like intelligence.
I'm no expert in the field of AI or machine learning, so I have a question for the experts here. Has there been any theoretical breakthrough in AI in recent years? I know we have had neural networks and different types of classifiers, etc. for more than a few decades now, so apart from better marketing, has there been a significant breakthrough that explains this sudden surge in the interest in AI?
There have been a number of breakthroughs in specific fields recently. In fact it seems like there is a paper every week that pushes the state of the art forward in some branch of AI/ML. I think the big one that triggered the current excitement was the success of convolutional networks in image recognition tasks. You can read about that one (in 2012) here: https://www.technologyreview.com/s/530561/the-revolutionary-...
EDIT: This paper refers to the algorithm as SuperVision, which was the team name, but it is more commonly called AlexNet. Here is another article discussing it:
https://qz.com/1034972/the-data-that-changed-the-direction-o...
Before 2015, experts estimated that an AI that could beat humans at Go was still many years away. In October 2015 AlphaGo came out, and in March 2016 it beat everyone. In 2017 AlphaGo Zero came out, which was trained only on self-play. It beats all previous versions.
But it is not so much these big breakthroughs that cause the current optimism as the large amount of continuous improvement and refinement.
From a commercial perspective, the recent breakthroughs in ML are superhuman performance in some computer vision tasks and end-to-end methods beating traditional ones in machine translation and speech recognition.
However, these breakthroughs don't have much to do with a general notion of intelligence, as they are very limited to specific tasks. In the media, the term AI is used in a very inflationary way these days.
> has there been a significant breakthrough that explains this sudden surge in the interest in AI?
Yes. Some tasks like language translation, face recognition, etc. have reached human-level performance. Why has this happened? Large data and an increase in computing power (thanks, gamers and GPUs!).
However:
- MCMC for scalable Bayesian reasoning (a toy sketch below).
- Answer Sets for reasoning in FOL.
- Big data sets for training lots of classifiers, including deep networks. A big breakthrough has been crowdsourcing the labelling.
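Of those, MCMC is the easiest to show in miniature. A toy Metropolis-Hastings sampler (the target density and step size are arbitrary stand-ins, purely for illustration):

    import math, random

    def log_target(x):
        return -0.5 * x * x  # unnormalized log density (standard normal)

    x, samples = 0.0, []
    for _ in range(10000):
        proposal = x + random.gauss(0.0, 1.0)  # random-walk proposal
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)

    print(sum(samples) / len(samples))  # ~0.0, the target's mean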