This is a good, balanced article that gets a lot of things right. We should take a forgiving approach when we talk about AI systems. As the author points out, the problem is not that AI systems don't have understanding yet; the problem is the hype, which leads many to believe that we are close to building systems that can understand us.
That said, I have a small problem with the examples presented to say that already machines understand us :)
The article says: "For example, when I tell Siri 'Call Carol' and it dials the correct number, you will have a hard time convincing me that Siri did not understand my request."
Let me take a shot at explaining why Siri did not "understand" your request.
Siri was waiting for a command and executed the best command that matched: make a phone call.
It did not understand what you meant, because it did not take the whole environment into consideration. What if Carol was just in the other room? A human might just shout "hey Carol, Thomas is asking you to come" instead of making a phone call.
If listening to a request and executing a command is understanding, then computers have been understanding us for a long time, even without the latest advances in AI.
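To make that point concrete, here is a toy sketch of that kind of command matching (the command list and the matching rule are invented; this is nothing like any real assistant's pipeline). It maps an utterance to the closest known command verb, with no model of the environment at all:

```python
# Toy command dispatcher: pick the closest known command verb and treat the
# rest of the utterance as its argument. No model of the world is involved.
import difflib

COMMANDS = ["call", "text", "play", "remind"]

def dispatch(utterance: str):
    verb, _, arg = utterance.partition(" ")
    match = difflib.get_close_matches(verb.lower(), COMMANDS, n=1)
    return (match[0], arg) if match else (None, utterance)

print(dispatch("Call Carol"))  # ('call', 'Carol')
```

Whether Carol is in the next room never enters into it; the dispatcher succeeds or fails purely on string similarity.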
> It did not understand what you meant because it did not take the whole environment into consideration.
This is the crux of the matter. These voice recognition agents are trained with the goal of accurately modelling a function that converts recorded sound into a series of words, and then acting on those words to perform the most appropriate action. They are NOT trained to model the entire world, an incredibly complex task that no one has yet been able to formulate as a problem computers can solve. Humans, on the other hand, have a machine that is extremely well equipped to do just that: the brain. And that is exactly why humans are able to "understand" things while we feel that machines are not, under our definition of "understand".
In the far distant future, if and when we do figure out a way to model the entire world, come up with a suitable objective function, and solve it on a computer, there's no reason why that machine should be any less capable of understanding things than the average human.
I think this is partly down to us humans defining "intelligence" as "like us".
We have a very specific set of evolved traits that define our understanding of the universe. A lot of that is social. So our "understanding" of the phrase "call Carol" includes a wide range of social cues about what that means, and your example is perfect: "call Carol" means that I want to talk to her, and that would be better done in person if possible. But that "if possible" has a more-or-less specific range of "if she's within earshot so I can yell for her", which is limited to the range of a human voice (not the maximum range, like screaming, but a normal yelling range). Which is less if the door is closed, or there's music playing, or Kevin is trying to nap in the other room. And not at all if we're in a library, or a concert, or even a public space where yelling would draw attention. If "call Carol" has to include all of these to qualify for "understanding", then I think I know some people who fail at this test.
My go-to thought experiment on this is dolphins. Dolphins are intelligent, have language, etc. But their understanding of the world must be so different. Trying to explain to a dolphin what "tripping someone up" means is going to be tricky. They may understand the words, but they'll never understand the concept.
We swim in a sea of social cues and non-verbal communication. We can program an AI to imitate more and more of this, and be aware of more of it, but it's like teaching dolphins about long-distance running. It's never going to come naturally. And they're never going to evolve that understanding naturally (like we do as children) because it's not in their nature. We anthropomorphise our machines a lot, and we assume that they'll grow (like children) to grok all of our social cues eventually, because our only experience of similar situations is, well, children. But they're just machines, designed for a single purpose. They're never going to grok this. They're never going to be "like us" and really understand all the social ramifications of "call Carol". At some point I think we're going to have to accept this, and say that the machine understands the phrase "call Carol" enough. TFA draws the line at the machine calling Carol, and that seems reasonable.
So the next version of Siri can locate Carol's phone in the next room and will just beep her phone to tell her to see you. Of course that's still not understanding.
> Siri was waiting for a command and executed the best command that matched. Which is, make a phone call.
ISTM there's no more "understanding" involved in this than when I touch the Contacts icon on my screen, then "C", "A", "R", etc until Carol's entry is displayed, and then I touch the Phone icon to initiate a call.
The fact that the interface used was sound-waves that the device recognised as matching the keyword "call" and the contact-list entry "Carol", rather than my finger touching specific areas of the screen, may be a handy feature. Of course it's a triumph of signal processing, fuzzy recognition, etc. But there's no more "understanding" involved than in the touch-screen version of the action, or in typing a command and parameter into a terminal window.
> If listening to a request and executing a command is understanding, then computers have been understanding us for a long time. Even without the latest advances in AI.
I think this is a reasonable thing to say, in the limited way he has defined 'understanding'. People forget what a titanic achievement user interfaces that allow us to communicate our intentions to a computer and receive a relevant response actually are, whether it's using a voice or clicking a button.
I have a straightforward definition of "understand". To understand means to be able to give a (representative) example of the (intensionally) given set. Though it is harder than it seems, as it usually means solving a constraint satisfaction problem.
For example, take the classical AI knowledge-base fragment, "a bird is an animal that flies". If I ask for an example of a bird and it can say "eagle", it exhibits some understanding. We can then probe further and ask for a bird which is not an eagle. If it says "bat" or "balloon", it exhibits that it still doesn't understand birds quite right.
In particular, if the description is nonsensical and thus impossible to understand, we cannot give any examples.
This idea was really inspired by a study where they asked people to distinguish nonsensical sentences from profound ones describing a certain situation. The profound ones are those for which you can create a concrete instance of the situation. (Edit: I think this is the study: http://journal.sjdm.org/15/15923a/jdm15923a.pdf)
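A minimal sketch of this definition (the knowledge base and property names here are made up): "understanding" an intensional description means being able to enumerate members of the set it describes, which is a small constraint satisfaction problem solved by brute-force enumeration. It also reproduces the failure above, since the naive definition "animal that flies" admits bats:

```python
# Tiny invented knowledge base; "understanding by exemplification" is then
# just constraint satisfaction by enumeration over it.
KB = {
    "eagle":   {"animal": True,  "flies": True},
    "ostrich": {"animal": True,  "flies": False},
    "bat":     {"animal": True,  "flies": True},   # satisfies "animal that flies"!
    "balloon": {"animal": False, "flies": True},
}

def exemplify(constraints):
    """Return all entities satisfying every given property constraint."""
    return [name for name, props in KB.items()
            if all(props.get(k) == v for k, v in constraints.items())]

print(exemplify({"animal": True, "flies": True}))  # ['eagle', 'bat']
```

The probe question "a bird which is not an eagle" exposes the gap: the description is satisfiable by a non-bird, so the system's "understanding" of birds is wrong even though every answer it gives is consistent with its own definition.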
You've rigged this up to operationalize it for current digital machines.
"Understanding", "intelligence", etc. are features of animals in their environment. We need to begin there; and that is what we are talking about.
We "understand" how to drive as a dog "understands" how to play fetch. Understanding is not ever going to be a trivial rule that some digital system may instantiate.
It will always require direct causal contact with an environment. In my view "understanding" is "competent play in a changing environment" -- i.e., the ability to modify the environment as it changes, in accordance with your goals.
This rough definition is inspired by work in animals to understand the role of the neocortex, and animal learning, and the role of consciousness therein. Roughly: consciousness is "perceptual and cognitive intelligence grappling with environmental change".
Very good observation, although I'd say this is still just understanding at the micro level. A lot of what is going on in communication between people depends just as much on what hasn't been said, what would normally be said in this situation, having an idea of what the situation is in the first place, what was said recently or the last time you interacted with this person (which could potentially be a very long time ago), etc. I do believe that a lot or all of this can be posed as constraint satisfaction problems (CSPs), though.
On my reading list is "The Proper Treatment of Events", a book which "studies the semantics of tense and aspect" within a formal framework of constraint logic programming[1]. There is other similar work in this area, like "Good-enough parsing, whenever-possible interpretation: a constraint-based model of sentence comprehension"[2].
[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.... [2] https://hal.archives-ouvertes.fr/hal-01907632/file/CSLP-Blac...
Question: What is an example of a bird?
Answer: An egret.
Question: What is another example?
Answer: Canaries.
Seems to do fine. I don't really have a stop condition though, so it goes on making up new questions on its own. Make of it what you will. Very few of the answers are correct, or even coherent enough to be correct:
https://hastebin.com/agululiqif.txt
I do like this one though:
Question: Who is the inventor of the English ham?
Answer: Poor old Francis Bacon.
Also: to be able to manipulate and use the conceptual realisation? "I could use this twig (but not that twig) as a hook for ants if I strip the bark off and turn it when I hold it in the hole; then I can eat the ants."
On the one hand the quote by Edsger Dijkstra comes to mind. "The question of whether machines can think is about as relevant as the question of whether submarines can swim." We are hardwired to attribute great significance to what happens both in our own head and that of other people.
On the other hand, machines still perform actions that one could call 'stupid'. When AlphaGo was losing the fourth match against Lee Sedol, it would play 'stupid' moves. These were, for instance, trivial threats that any somewhat accomplished amateur go player would recognize in an instant and answer correctly.
Humans, and also animals, have a hierarchy in their understanding of things. This maps onto brain structure too: evolution has added layers to the brain while keeping the existing structure. In this layered structure the lower parts are faster and more accurate, but not as sophisticated.

Stupidity arises from a lack of layeredness: when the goal of winning the game is thwarted, the top layer doesn't have anything useful to do anymore, and the system falls back on the layer behind it. For AlphaGo, pretty much the only layer behind its very strong go engine is the rules of go. So even when it is losing it will never play an illegal move, but it will do otherwise trivially stupid things. For humans there is a layer between these two that prevents them from doing useless stuff. For living entities this is essential for survival: you can forget your dentist appointment, but it is not possible to forget to let your heart beat. It seems this problem could be mended by putting layers between the top-level algorithm and the most basic hardware level, such that stupid stuff is preempted.
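The layering idea can be sketched as a chain of controllers in which each layer may abstain and control falls through to the next one down (all names here are invented for illustration). AlphaGo, on this account, is missing the middle layer:

```python
# Layered control: try each layer from most to least sophisticated; a layer
# returns None to abstain. The bottom "rules" layer always yields a legal action.
def engine_layer(state):
    return state.get("winning_move")           # None once no winning line exists

def futility_layer(state):
    # the "middle layer" humans have: if the game is lost, stop doing useless things
    return "resign" if state.get("lost") else None

def rules_layer(state):
    return state["legal_moves"][0]             # always legal, possibly pointless

def act(state, layers):
    for layer in layers:
        action = layer(state)
        if action is not None:
            return action

lost = {"winning_move": None, "lost": True, "legal_moves": ["trivial threat"]}
print(act(lost, [engine_layer, rules_layer]))                  # trivial threat
print(act(lost, [engine_layer, futility_layer, rules_layer]))  # resign
```

Without the middle layer, a lost position falls straight through to a legal-but-pointless move, which is roughly the behavior described above.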
> When AlphaGo was losing the fourth match against Lee Sedol, it would play 'stupid' moves. These were, for instance, trivial threats that any somewhat accomplished amateur go player would recognize in an instant and answer correctly.
I think this behavior is less 'stupid' than it appears. When human beings play Go, the points matter even to the loser, and everyone goes home when it is over. There is life outside of Go. To AlphaGo, Go is its entire universe. Part of the way it was trained was competing against other instances of itself, a sort of Thunderdome where the loser doesn't get to continue existing and doesn't contribute to future generations. To AlphaGo, defeat is death. The behavior we observe when losing is nigh-certain has a human equivalent: we call it desperation. AlphaGo is trying moves that can only possibly work if the opponent makes a catastrophic blunder, which is incredibly unlikely, but it's the only shot it has.
> When I ask Google “Who did IBM’s Deep Blue system defeat?” and it gives me an infobox with the answer “Kasparov” in big letters, it has correctly understood my question. Of course this understanding is limited. If I follow up my question to Google with “When?”, it gives me the dictionary definition of “when” — it doesn’t interpret my question as part of a dialogue.
Google Search doesn't, but Google Assistant does. I posed the exact queries suggested by the article and the second query of simply the word "when" did give the correct answer (May 11 1997).
I remember that when my friend got a Google Home almost 2 years ago, I was asking it some questions to explore the limitations. I asked about a certain restaurant chain, and it gave me the information, but then I asked "is there one near me". It listed all places with "one" in the name near me.
I wonder if now it would correctly take the previous context into account. Google has been working a lot on improving their search and assistants to be "conversational". [1] looks like one of the results of this endeavour.
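A crude sketch of what "taking the previous context into account" means mechanically (everything here is invented, including the restaurant name; real assistants keep far richer dialogue state): remember the last named entity and substitute it for anaphora like "one":

```python
# Toy dialogue state: remember the last named entity and resolve "one" to it.
class Dialogue:
    def __init__(self):
        self.last_entity = None

    def ask(self, query: str) -> str:
        words = query.split()
        if "one" in words and self.last_entity:
            # naive anaphora resolution; would also mangle words containing "one"
            query = query.replace("one", self.last_entity)
        capitalized = [w for w in query.split() if w[:1].isupper()]
        if capitalized:
            self.last_entity = capitalized[-1]  # crude "entity" heuristic
        return query

d = Dialogue()
print(d.ask("where is the nearest Chipotle"))  # where is the nearest Chipotle
print(d.ask("is there one near me"))           # is there Chipotle near me
```

The failure described above ("places with 'one' in the name") is exactly what happens when this substitution step is missing and "one" is treated as a literal search term.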
That example seems pretty unrelated to what I would think of as “understanding.” That’s more just a feature request for Siri.
It’s like saying “my calculator lets me type ’1 + 2 =’ and gives me the answer ‘3,’ so it seems to understand that question, but when I look at the calculator I see there’s no ‘sqrt’ button that would show me the square root of 3.”
The fact that my basic calculator doesn’t have a “sqrt” button is pretty irrelevant to how well it “understands” how to add two numbers together.
I don't remember where I first saw it, but the best definition of "understanding" I've seen is "being able to encode and compress".
For example, imagine a system that has as input the picture of a human face in RAW format. If the system runs the picture through JPEG compression, for example, and returns something substantially smaller, it has shown some understanding of the input (color, spatial repetition, etc).
A more advanced system, with more understanding, may recognize it as a human face, and convert it to a template like the ones used for facial recognition. It doesn't care about individual pixels anymore, or the lighting, just general features of faces. It understands faces.
An even more advanced system may recognize the specific person and compress the whole thing to a few bits.
I would say that an OCR scanner understands the alphabet and how text is laid out, GPT-2 understands the relationship between words and how text is written. And a physics simulator understands basic physics because it can approximately compress a sequence of object movements into only initial conditions and small corrections.
Lossy compression makes this concept non-trivial to measure, but it's still worlds away from the usual philosophical arguments.
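A toy way to make "understanding as compression" measurable: a general-purpose compressor is a weak stand-in for the systems described above, but it already separates structured input from noise. (The data here is made up for illustration.)

```python
import hashlib
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower = more structure captured."""
    return len(zlib.compress(data, level=9)) / len(data)

structured = b"the cat sat on the mat. " * 200   # highly regular input
# pseudo-random bytes: no structure a generic model can exploit
noise = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(150))

print(round(compression_ratio(structured), 3))   # small: lots of "understanding"
print(round(compression_ratio(noise), 3))        # near 1.0: nothing to understand
```

A face-recognition template or a physics simulator is, in this framing, just a far better domain-specific compressor than DEFLATE.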
> Speaking as a psychologist, I’m flabbergasted by claims that the decisions of algorithms are opaque while the decisions of people are transparent. I’ve spent half my life at it and I still have limited success understanding human decisions. - Jean-François Bonnefon’s tweet (as quoted in https://p.migdal.pl/2019/07/15/human-machine-learning-motiva...)
The advantage of humans is that we have a built-in bullshit generator.
If someone asks why you like ice cream, you can tell a nice story about the hot summers during your childhood, but the reality is that sugar and fat are very useful.
If the autopilot of a Tesla hits someone, the error report is "Fatal error 0xDEADBEEF: coefficient 742 > 812".
If a person hits someone, the explanation is: "It was dark and near a curve. I was texting, which is totally safe. I got distracted by a reindeer nearby. And I snoozed and was thinking about reaching for a handkerchief."
To be gadflyish: do humans even truly understand, or do they just claim they do because they have the observations roughly encoded from what they have been taught? Teachings which themselves often include unfounded assumptions or outright superstition.
Human understanding has been wrong often enough, missing enough crucial context to be dangerously, hilariously wrong, even amongst the "experts" of the day who came closest.
This isn't epistemological nihilism, but a point that understanding is incomplete for everyone, and just because a given intelligence's subset doesn't match our assumptions doesn't mean it is wrong - although it also isn't always right.
I think getting near human-level NLP understanding means being able to visualize and combine all of the dynamic systems that language represents. I mean, it's obvious that you can get pretty far just by processing a lot of text, but there is a limit. Some information about the way things work just is not encoded very well in text the way it is in video input. So you need to be able to do a sort of physics simulation, for starters. Except it can't just be physics, because there are a lot of patterns that occur that you need to be able to call up and manipulate or combine that are not just plain physics. These patterns are not represented in text.
There are projects doing video and text understanding. I think the trick to efficient generalization is to have the representations properly factored out somehow. Maybe things like capsule networks will help. Although my guess is that to get really componentized, efficient understanding, neural networks are not going to be the most effective way.
The proposal in the article is to define "understanding" and work towards testable satisfaction of the definition.
This sounds a bit like studying for a test. What if we made a definition and then worked successfully to reach the state where, according to this definition, the system "understands"? Can we expect to be satisfied with the result in general, outside of the definition?
The definition of understanding could be tricky, as history suggests. Other than "to understand is to translate into a form which is suitable for some use", there could be many definitions. The article itself gives examples of chess playing and truck driving, which were considered good indicators, yet failed to satisfy us in some ways.
Maybe we should just keep redefining "understanding" as well as we can today, changing it if needed, and work on creating a system that is "good", not necessarily one that "passes the test"?
OK, wow, the old guard sure knows how to write sensibly. This is a great article.
But I have to disagree with this (because of course I do):
> For example, when I tell Siri "Call Carol" and it dials the correct number, you will have a hard time convincing me that Siri did not understand my request.
That is a very common-sense and down-to-earth non-definition of intelligence: how can an entity that is answering a question correctly not "understand" the question? I am going to quote Richard Feynman, who encountered an example of this "how":
> After a lot of investigation, I finally figured out that the students had memorized everything, but they didn't know what anything meant. When they heard "light that is reflected from a medium with an index," they didn't know that it meant a material such as water. They didn't know that the "direction of the light" is the direction in which you see something when you're looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, "What is Brewster's Angle?" I'm going into the computer with the right keywords. But if I say, "Look at the water," nothing happens – they don't have anything under "Look at the water"!
In this (in)famous passage Feynman is arguing that the students of physics he met in Brazil didn't know physics, even though they had memorised physics textbooks.
Feynman doesn't talk about "understanding". Rather, he talks about "knowing" a subject. But his is also a very straightforward definition of knowing: you can tell that someone doesn't know a subject if you ask them many questions from different angles and find that they can only answer the questions asked from one single angle.
So if I follow up "Siri, call Carol" with "Siri, what is a call?" and Siri answers by calling Carol, I know that Siri doesn't know what a call is, probably doesn't know what a Carol is, or what a call-Carol is, and so that Siri doesn't have any understanding from a very common-sense point of view.
Not sure if this goes beyond the Chinese room argument, though. Perhaps I'm just on a different side of it than Thomas Dietterich.
I think the key ingredient is 'being in the game': having a body, being in an environment, having a purpose. Humans are by default playing this game called 'life'; we have to understand, otherwise we perish, or our genes perish.
It's not about symbolic vs connectionist, or qualia, or self consciousness. It's about being in the world, acting and observing the effects of actions, and having something to win or lose as a consequence of acting. This doesn't happen when training a neural net to recognise objects in images or doing translation. It's just a static dataset, a 'dead' world.
AI until now has had a hard time simulating agents or creating real robotic bodies - it's expensive, and the system learns slowly, and it's unstable. But progress happens. Until our AI agents get real hands and feet and a purpose they can't be in the world and develop true understanding, they are more like subsystems of the brain than the whole brain. We need to close the loop with the environment for true understanding.
It certainly doesn’t understand Go as a board game humans invented as a stimulating mental exercise that became competitive enough to see whether human programmers could come up with a program that could beat any human. And whatever cultural history went along with playing Go. Certainly chess playing has been used as an analogy in the west for many activities involving strategy. This is something no computer currently understands.
I'm with John Searle on the Chinese room [1] opinion, i.e. that a machine cannot be said to "understand" language even if it is able to pass the Turing Test. That is because when we say "understand", we are referring to a particular kind of human experience (qualia?) that a machine simply doesn't seem to have, but animals, for example, do.
Unfortunately, you have no way of determining whether a machine or an animal has this particular experience/qualia. You can't even determine whether other people besides yourself have it, which gives rise to solipsism.
It's like saying that red-headed people don't have souls - there is no way to disprove that assertion.
I can say that you don't have qualia and you can't prove me wrong.
Does that seem dangerous to anyone else?
I also don't see any distinction between "qualia" and "soul" other than spelling, but perhaps it's because I don't have one.
Finally, I have this question for Searle: Say you understand English. Does any specific neuron in your brain understand English? No, the larger system of neurons+neuronal connections does, so why doesn't the system of grad student+book understand Chinese?
See my other comment in this discussion. What you experience as "understanding" is that a particular constraint satisfaction problem (roughly, the logical fragment that is supposed to be understood) has a solution and you are able to construct it.
I don't think it's possible for machines to understand. Numbers are meaningless, our human actions give them a useful function. All of the meaning a computer appears to provide is the preassigned values of layers and layers of programming work done by humans. Even today AI has a lot of human tagging and categorization that makes it useful.
The idea that a new, self-sustaining meaning generation can arise out of the interlocking mechanisms of a computer is an interesting one. As we watch self-driving car CEOs describe some of the most advanced systems we have, which require controlled environments and balk at the infinite complexity of real life, are we really building computer systems that are anything more than an incredibly sophisticated loop?
Well, what does it mean for humans to "understand"? Don't humans understand things by altering the state and connections of neurons in the brain? You could make the argument that the brain is also an "incredibly sophisticated loop".
My point is that humans are also highly-sophisticated, biological machines, so if you say machines cannot "understand", you are making the same claim for humans as well.
netsharc|6 years ago
The classic analogue is of course the Chinese room argument: https://en.m.wikipedia.org/wiki/Chinese_room
mellosouls|6 years ago
The problem with the hype is that we are nowhere close to building systems that understand anything.
All we've built are calculators on steroids so far.
russdill|6 years ago
Question: What is an example of a bird? Answer: An egret. Question: What is another example? Answer: Canaries.
Seems to do fine. I didn't really give it a stopping point though, so it goes on making up new questions on its own. Make of it what you will. Very few of the answers are correct, or even coherent enough to be correct: https://hastebin.com/agululiqif.txt
I do like this one though:
Question: Who is the inventor of the English ham? Answer: Poor old Francis Bacon.
yamrzou|6 years ago
Edit: nvm, I think I found it : http://journal.sjdm.org/15/15923a/jdm15923a.pdf
stared|6 years ago
> The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth. - Niels Bohr
And, in fact, that is my rule-of-thumb test for whether something is a profound truth.
cjfd|6 years ago
On the other hand, machines still perform actions that one could call 'stupid'. When AlphaGo was losing the fourth game against Lee Sedol, it would play 'stupid' moves. These were, for instance, trivial threats that any somewhat accomplished amateur go player would recognize in an instant and answer correctly.
Humans, and also animals, have a hierarchy in their understanding of things. This maps onto brain structure too: evolution has added layers to the brain while keeping the existing structure. In this layered structure the lower parts are faster and more reliable, but not as sophisticated.

Stupidity arises from a lack of layeredness: when the goal of winning the game is thwarted, the top layer doesn't have anything useful to do anymore, and control falls back to the layer behind it. For AlphaGo, pretty much the only layer behind its very strong go engine is the rules of go. So even when it is losing it will never play an illegal move, but it will do otherwise trivially stupid things. For humans there is a layer between these extremes that prevents them from doing useless stuff.

For living entities this is essential for survival: you can forget your dentist appointment, but it is not possible to forget to let your heart beat. It seems this problem could be mended by putting layers between the top-level algorithm and the most basic hardware level, such that stupid stuff is preempted.
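A rough sketch of this layering idea (my own construction, with hypothetical layer names): each layer either proposes an action or defers downward. Without a middle "sanity" layer, a thwarted top layer drops straight to the rules layer, which happily emits legal-but-pointless moves.

```python
def strategy_layer(position):
    """Top layer: a strong engine; returns None once winning is impossible."""
    return None if position == "lost" else "strong move"

def sanity_layer(position):
    """Middle layer (the one the comment argues AlphaGo lacks):
    a human-like response once the goal is unreachable."""
    return "resign"

def rules_layer(position):
    """Bottom layer: any legal move, however pointless -- never illegal."""
    return "legal but stupid move"

def act(position, layers):
    # Try the most sophisticated layer first; fall back on None.
    for layer in layers:
        move = layer(position)
        if move is not None:
            return move

# Without the middle layer, a losing position falls to the rules layer:
print(act("lost", [strategy_layer, rules_layer]))                # legal but stupid move
print(act("lost", [strategy_layer, sanity_layer, rules_layer]))  # resign
```

The point of the sketch is only structural: "stupidity" here is not a bug in any single layer, but the absence of an intermediate layer to catch the fall.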
AnIdiotOnTheNet|6 years ago
I think this behavior is less 'stupid' than it appears. When human beings play Go, the points matter even to the loser, and everyone goes home when it is over. There is life outside of Go. To AlphaGo, Go is its entire universe. Part of the way it was trained was competing against other instances of itself, a sort of Thunderdome where the loser doesn't get to continue existing and doesn't contribute to future generations. To AlphaGo, defeat is death. The behavior we observe when it is losing almost certainly has a human equivalent: we call it desperation. AlphaGo is trying moves that can only possibly work if the opponent makes a catastrophic blunder, which is incredibly unlikely, but it's the only shot it has.
modeless|6 years ago
Google Search doesn't, but Google Assistant does. I posed the exact queries suggested by the article and the second query of simply the word "when" did give the correct answer (May 11 1997).
cdirkx|6 years ago
I wonder if it would now correctly take the previous context into account. Google has been working a lot on making their search and assistants "conversational"; [1] looks like one of the results of this endeavour.
[1] https://cloud.google.com/dialogflow/docs/contexts-overview
baddox|6 years ago
It’s like saying “my calculator lets me type ’1 + 2 =’ and gives me the answer ‘3,’ so it seems to understand that question, but when I look at the calculator I see there’s no ‘sqrt’ button that would show me the square root of 3.”
The fact that my basic calculator doesn’t have a “sqrt” button is pretty irrelevant to how well it “understands” how to add two numbers together.
BoppreH|6 years ago
For example, imagine a system that has as input the picture of a human face in RAW format. If the system runs the picture through JPEG compression, for example, and returns something substantially smaller, it has shown some understanding of the input (color, spatial repetition, etc).
A more advanced system, with more understanding, may recognize it as a human face, and convert it to a template like the ones used for facial recognition. It doesn't care about individual pixels anymore, or the lighting, just general features of faces. It understands faces.
An even more advanced system may recognize the specific person and compress the whole thing to a few bits.
I would say that an OCR scanner understands the alphabet and how text is laid out, GPT-2 understands the relationship between words and how text is written. And a physics simulator understands basic physics because it can approximately compress a sequence of object movements into only initial conditions and small corrections.
Lossy compression makes this concept non-trivial to measure, but it's still a world's away from the normal philosophical arguments.
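One crude way to make the parent's point measurable (my own illustration, not from the thread): a compressor that captures the input's regularities shrinks it more, so compression ratio acts as a rough proxy for how much structure the system "understands".

```python
import os
import zlib

structured = b"face face face " * 100  # heavy repetition, like pixels of a face
incompressible = os.urandom(1500)      # no structure for the model to exploit

def ratio(data):
    """Compressed size over original size: lower = more regularity captured."""
    return len(zlib.compress(data)) / len(data)

print(ratio(structured) < ratio(incompressible))  # True: structure compresses
```

zlib only "understands" byte-level repetition, of course; the face-template and person-recognition systems described above sit further along the same axis, compressing by exploiting ever higher-level structure.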
gus_massa|6 years ago
If someone asks why you like ice cream, you can tell a nice story about the hot summers of your childhood, but the reality is that sugar and fat are very useful.
If the autopilot of a Tesla hits someone, the error report is "Fatal error 0xDEADBEEF: coefficient 742 > 812".
If a person hits someone, the explanation is "It was dark and near a curve. I was texting, which is totally safe. I got distracted by a reindeer nearby. And I snoozed and was thinking about reaching for a handkerchief".
Nasrudith|6 years ago
Human understanding has been wrong often enough, missing enough crucial context to be dangerously, hilariously wrong, even amongst the "experts" of the day who came closest.
This isn't some epistemological nihilism, but a reminder that understanding is incomplete for everyone, and that just because a given intelligence's subset of understanding doesn't match our assumptions doesn't mean it is wrong - although it also isn't always right.
ilaksh|6 years ago
There are projects doing video and text understanding. I think the trick to efficient generalization is to have the representations properly factored out somehow. Maybe things like capsule networks will help. Although my guess is that, to get really componentized, efficient understanding, neural networks are not going to be the most effective way.
avmich|6 years ago
This sounds a bit like studying for a test. What if we made a definition and then worked successfully to reach the state where, according to this definition, the system "understands"? Can we expect to be satisfied with the result in general, outside of the definition?
The definition of understanding could be tricky, as history suggests. Other than "to understand is to translate into a form which is suitable for some use", there could be many definitions. The article itself brings examples of chess playing or truck driving, which were considered good indicators, yet failed to satisfy us in some ways.
Maybe we should just keep redefining "understanding" as well as we can today, changing it if needed, and work on trying to create a system that is "good", not necessarily one that is "passing the test"?
YeGoblynQueenne|6 years ago
But I have to disagree with this (because of course I do):
>> For example, when I tell Siri “Call Carol” and it dials the correct number, you will have a hard time convincing me that Siri did not understand my request.
That is a very common-sense and down-to-earth non-definition of intelligence: how can an entity that is answering a question correctly not "understand" the question?
I am going to quote Richard Feynman who encountered an example of this "how":
After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens – they don’t have anything under “Look at the water”!
https://v.cx/2010/04/feynman-brazil-education
In this (in)famous passage Feynman is arguing that the students of physics he met in Brazil didn't know physics, even though they had memorised physics textbooks.
Feynman doesn't talk about "understanding". Rather he talks about "knowing" a subject. But his is also a very straightforward definition of knowing: you can tell that someone doesn't really know a subject if you ask them questions from many different angles and find that they can only answer the questions asked from one single angle.
So if I follow up "Siri, call Carol" with "Siri, what is a call" and Siri answers by calling Carol, I know that Siri doesn't know what a call is, probably doesn't know what a Carol is, or what a call-Carol is, and so that Siri doesn't have any understanding from a very common-sense point of view.
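The failure mode being described can be sketched concretely (my own toy example; the intent names are hypothetical, not Siri's actual architecture): a keyword intent matcher fires the "call" action on any utterance containing "call", so a question *about* calls triggers a call.

```python
# Hypothetical intent table: keyword -> action name.
INTENTS = {"call": "dial_contact", "weather": "show_forecast"}

def match_intent(utterance):
    # Naive matching: first keyword found in the utterance wins.
    for keyword, action in INTENTS.items():
        if keyword in utterance.lower():
            return action
    return "fallback"

print(match_intent("Call Carol"))       # dial_contact
print(match_intent("What is a call?"))  # dial_contact -- the probe fails
```

Both utterances look identical to the matcher, which is precisely the single-angle competence the probing question is designed to expose.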
Not sure if this goes beyond the Chinese room argument though. Perhaps I'm just on a different side of it than Thomas Dietterich.
visarga|6 years ago
I think the key ingredient is 'being in the game', that means, having a body, being in an environment with a purpose. Humans are by default playing this game called 'life', we have to understand otherwise we perish, or our genes perish.
It's not about symbolic vs connectionist, or qualia, or self consciousness. It's about being in the world, acting and observing the effects of actions, and having something to win or lose as a consequence of acting. This doesn't happen when training a neural net to recognise objects in images or doing translation. It's just a static dataset, a 'dead' world.
AI until now has had a hard time simulating agents or creating real robotic bodies - it's expensive, the system learns slowly, and it's unstable. But progress happens. Until our AI agents get real hands and feet and a purpose, they can't be in the world and develop true understanding; they are more like subsystems of the brain than the whole brain. We need to close the loop with the environment for true understanding.
_xnmw|6 years ago
[1] https://en.wikipedia.org/wiki/Chinese_room
goto11|6 years ago
It's like saying that red-headed people don't have a soul - there is no way to disprove that assertion.
msla|6 years ago
Does that seem dangerous to anyone else?
I also don't see any distinction between "qualia" and "soul" other than spelling, but perhaps it's because I don't have one.
Finally, I have this question for Searle: Say you understand English. Does any specific neuron in your brain understand English? No, the larger system of neurons+neuronal connections does, so why doesn't the system of grad student+book understand Chinese?
baddox|6 years ago
js8|6 years ago
I have also somewhat responded before to Chinese room argument with this comment: https://news.ycombinator.com/item?id=20864005
friendlybus|6 years ago
The idea that new self-sustaining meaning generation can arise out of the interlocking mechanisms of a computer is an interesting one. As we watch self-driving car CEOs describe some of the most advanced systems we have, which must be run in controlled environments and which balk at the infinite complexity of real life, are we really building computer systems that are anything more than an incredibly sophisticated loop?
prvnsmpth|6 years ago
My point is that humans are also highly-sophisticated, biological machines, so if you say machines cannot "understand", you are making the same claim for humans as well.