In the 1980s, when everybody was trying to do AI with some flavor of predicate calculus, he extended that to probabilistic predicate calculus. That helped. But it didn't lead to common sense reasoning. The field is so stuck that few people are even trying.
Working on common sense, defined as predicting what happens next from observations of the current state, is a classic AI problem on which little progress has been made. I used to remark that most of life is avoiding big mistakes in the next 30 seconds. If you can't do that, life will go very badly. Solving that problem is "common sense". It's not an abstraction.
The other classic problem where the field is stuck is robotic manipulation in unstructured situations. McCarthy once thought, in the 1960s, that it was a summer project to do that.
He wanted a robot to assemble a Heathkit TV set kit. No way. (The TV set kit was actually purchased, sat around for years, and finally somebody assembled it and put it in a student lounge at Stanford.) 50 years later, unstructured manipulation still works very badly. Watch the DARPA Humanoid Challenge or the DARPA Manipulation Challenge videos from a few years ago.
Great PhD thesis topics for really good people. High-risk; you'll probably fail. Succeed, even partially, and you have a good career ahead.
I worked on AI-via-predicate-calculus, as a successor to Cyc, and I think the main thing I learned is that people are incredibly bad at predicate calculus. Even when we behave "logically", it's an after-the-fact rationalization for a conclusion we arrived at much faster with heuristics.
When we think-about-thinking, or talk-about-thinking, we do so in the language of language, which quickly leads to logic. And that leads us to think that the logic is the thinking. But in fact it's a rare, specialized mode of thought. The primary mode of thought -- the one that keeps us from making big mistakes for a half-minute at a time -- is that irrational one that's very easy to fool if you put effort into it, but which actually gets it right for most of reality (which isn't, generally, trying to trick you).
Do you think the challenge in unstructured manipulation tasks is more related to problems in AI, or more related to the incredibly primitive actuators we have at our disposal?
Just going off the Wikipedia page, this seems like a really hairy problem because of the definition.
We'd expect some kind of AI-human parity in avoiding obstacles while driving, common sense like "don't hit that, or you'll have a wreck", but we don't really expect the car AI to see a bad collision between two other cars on a perpendicular road and call emergency services (as would be common sense for humans).
But if the cars involved in that collision have a detection system to automatically call 911 (any kind of OnStar variant), why should an AI concern itself with that, knowing there is a system to handle that task? Would it be common sense for the AI to act as a parallel system and make sure the primary didn't fail to call 911? A human's common sense might be to act as if that system didn't exist, because it might have failed, and just call anyway (knowing that there's really no penalty for calling twice just to make sure).
I wouldn't say that robotic manipulation in unstructured environments is exactly stuck, just progressing very slowly. There has been some progress since the DARPA Humanoid Challenge and Manipulation Challenge. Robots seem to be getting decent at picking things up [0][1], although manipulation is more than just picking things up. Still, robots struggle with even very basic tasks like motion planning.
[0]https://www.youtube.com/watch?v=geub-Nuu-Vw
[1]https://spectrum.ieee.org/automaton/robotics/artificial-inte...
Huh. I wonder if setting up a board game might be an easier unstructured problem. If you've got experience with board games, you can usually figure out most of the rules by just looking at the pieces; where do the same symbols appear, and what other symbols appear with them?
Still sounds super hard, but might be easier than the same (matching corresponding shapes) in 3D.
While the article is a nice Q&A with Pearl about his new book, The Book of Why, there is a very detailed technical tutorial from 2014 at http://research.microsoft.com/apps/video/default.aspx?id=206... that provides a very in-depth explanation of causal calculus, counterfactuals, etc., and how these tools should be used.
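For anyone who wants a concrete taste of what the causal-calculus machinery buys you before diving into the tutorial, here is a minimal sketch of backdoor adjustment in plain Python. Everything here is invented for illustration (the toy data-generating process, the variable names, the coefficients); it just shows the shape of the computation: a naive conditional contrast is biased by a confounder, while averaging over the confounder's strata recovers the causal effect.

```python
import random

random.seed(0)

# Invented toy model: confounder Z raises both treatment X and outcome Y.
# The true causal effect of X on Y is +1.0 by construction.
data = []
for _ in range(100_000):
    z = random.random() < 0.5                        # confounder
    x = random.random() < (0.8 if z else 0.2)        # treatment depends on Z
    y = 1.0 * x + 2.0 * z + random.gauss(0.0, 0.1)   # outcome
    data.append((z, x, y))

def mean(values):
    return sum(values) / len(values)

# Naive associational contrast E[Y|X=1] - E[Y|X=0]: biased upward,
# because high-Z individuals are both more likely to be treated and score higher.
naive = (mean([y for z, x, y in data if x])
         - mean([y for z, x, y in data if not x]))   # ~2.2, not 1.0

# Backdoor adjustment: E[Y|do(X=x)] = sum_z E[Y|X=x, Z=z] * P(Z=z).
def adjusted_mean(x_val):
    total = 0.0
    for z_val in (False, True):
        stratum = [y for z, x, y in data if z == z_val and x == x_val]
        p_z = sum(1 for z, _, _ in data if z == z_val) / len(data)
        total += mean(stratum) * p_z
    return total

causal = adjusted_mean(True) - adjusted_mean(False)  # ~1.0, the true effect
print(naive, causal)
```

Real applications use proper libraries and an identified causal graph rather than a hand-rolled toy like this, but the stratify-and-reweight step is the core of the backdoor formula.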
Simulating a mind is not the same as simulating mind processes.
I doubt that you can create a mind that's similar to a human mind without the relevant elements that are taken for granted when we think of a human being: senses, perception, pain, pleasure, fear, volition... a body! And the real-time feedback loop that connects us to our environment and our peers.
The same could be said about animals' minds. That's why it's still impossible to make even a mosquito brain. It's a question of texture. Making a decision for a human involves a complex cloud of subsystems working in unstable equilibrium, more of a boiling cauldron than an algorithmic checklist. When you're scared, you're not just thinking that something is dangerous and that you'd rather avoid it; you are feeling something very uncomfortable and you want it to stop.
What if you want to advance in creating some kind of simpler mind now, when you don't yet have the means to build a complete organism? That's an interesting problem. Would immersing programs in a virtual world be useful? Or would it be better to make robots face the real world directly? I believe that you need, as a minimum, a system that integrates sight with hearing and touch sensors, and some kind of incentive system.
After some results, maybe using machine learning, the emergent organization could be applied as a building block for more complex robots. Meanwhile, trying to teach machines some human capabilities will not lead to generalized AI, but to more of the same we have now: very useful, just not quite qualifying for the label.
>senses, perception, pain, pleasure, fear, volition... a body!
Yes! Almost all neural networks have no self-model and thus no self-awareness because they cannot perceive themselves. They only see the inputs. They do not see the result of their actions.
This makes developing a self-model impossible. They cannot develop a model of their "boundary of influence": a differentiation between internal and external causes.
They are trained and then used: immutable, no longer learning after training. Even if they could perceive their outputs during training and/or evaluation, they cannot otherwise perceive themselves, making it practically impossible for them to deduce what they even are. They can't inspect themselves.
The causal loop needs to be closed for all of this to happen.
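A toy illustration of closing that loop (everything here is invented): an agent with one actuator observes two signals, one echoing its own action and one driven by the environment, and discovers its "boundary of influence" by acting at random and correlating observations with its own actions.

```python
import random

random.seed(3)

# Two observed channels: channel 0 echoes the agent's own action (internal
# cause), channel 1 is driven by the environment (external cause). The agent
# does not know this mapping; it discovers it by acting and watching.
def step(action):
    ch0 = action + random.gauss(0.0, 0.1)                   # influenced by the agent
    ch1 = random.choice([-1, 1]) + random.gauss(0.0, 0.1)   # not influenced
    return ch0, ch1

n = 2000
actions, obs = [], []
for _ in range(n):
    a = random.choice([-1, 1])   # motor babbling
    actions.append(a)
    obs.append(step(a))

def corr(xs, ys):
    """Pearson correlation, hand-rolled to stay dependency-free."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Channels the agent's actions strongly predict lie inside its boundary
# of influence; the rest are attributed to external causes.
influence = [abs(corr(actions, [o[i] for o in obs])) for i in range(2)]
internal = [i for i, c in enumerate(influence) if c > 0.5]
print(influence, internal)  # channel 0 is self-caused, channel 1 is not
```

This is of course far short of a self-model, but it is the smallest version of the point: without the action-to-observation loop, the internal/external distinction is not learnable at all.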
Causal inference is the next big leap in AI. Once the relatively(!) low-hanging fruit of pattern recognition are picked to exhaustion, and once we can get more comfortable with symbolic reasoning with respect to theorem proving / hypothesis testing / counterfactuals, "real" reasoning machines will arise.
There are a number of "things" that we should "teach" machines to create ones that are "truly intelligent". Besides this kind of "cause / effect reasoning", one could argue that an intelligent machine needs some baseline levels of what you might call "intuitive metaphysics", and "intuitive epistemology".
You could probably argue that the cause/effect stuff is subsumed by one of these at a certain level of abstraction, but I think it makes sense to treat them as separate.
Related to the idea of "cause/effect", and possibly falling under the overall rubric of "intuitive metaphysics", is some notion of the passage of time. That is, in human experience we link things as "causal" when they happen in a certain sequence, and within a certain degree of temporal proximity.
E.g., "I touched the hot burner and instantaneously felt excruciating pain" is an experience that we learn from. "I walked through the door and four days later I felt pain in my knee" probably is not.
Our machines probably also need baseline levels of some sort of intuitive versions of Temporal Logic and Modal Logic as well.

https://en.wikipedia.org/wiki/Metaphysics
https://en.wikipedia.org/wiki/Epistemology
https://en.wikipedia.org/wiki/Temporal_logic
https://en.wikipedia.org/wiki/Modal_logic
I'd agree with that, and I think Winograd schemas make this very obvious. Take, for example:
(1) John took the water bottle out of the backpack so that it would be lighter.
(2) John took the water bottle out of the backpack so that it would be handy.
What does "it" refer to in each sentence? It's very obvious that a machine that solves this must understand physics, have a rudimentary ontology of objects, share human intuitions, and so on.
I think it's straight-up sad how little progress there has been on these very fundamental problems which articulate what common sense and intelligent agents are about.
"Teach them cause and effect" ... yep, that's pretty much what everyone's been trying to do since the 60's. The problem is that nobody will touch the core issues of consciousness because it's inherently political. It requires confronting some of the biggest taboos in science: anthropomorphizing animals in biology, discussing consciousness seriously in physics, and looking at how economics and information interact with a skeptical eye toward the standard economic narrative.
I'm not sure that's necessary. Early humans didn't know why a lot of things happened, such as why rubbing sticks makes fire; they just learned to use them from trial and error.
The physics of it were beyond them. I see it more as goal-oriented: "I want fire, how can I get it?".
I suppose that's cause-and-effect in a loose sense, but one doesn't have to view everything as C&E to get similar results. It seems more powerful to think of it as relationships instead of just C&E because then you get a more general relationship processing engine out of it instead of a single-purpose thing. Make C&E a sub-set of relationship processing. If the rest doesn't work, then you still have a C&E engine from it by shutting off some features.
They understood cause and effect. They didn't know the causal chain in depth, but they did know that rubbing sticks together caused fire. They also knew that dumping water on the ground did not cause it to rain. Thus they could distinguish between correlation and causation.
>I suppose that's cause-and-effect in a loose sense, but one doesn't have to view everything as C&E to get similar results. It seems more powerful to think of it as relationships instead of just C&E because then you get a more general relationship processing engine out of it instead of a single-purpose thing. Make C&E a sub-set of relationship processing. If the rest doesn't work, then you still have a C&E engine from it by shutting off some features.
I may be wrong (heck, I'm probably wrong), but I can't help but feel that you're abstracting things out too much. Yes, a "cause / effect relationship" IS-A "relationship", but sometimes the distinctions actually matter. I'd argue that a "cause/effect relationship" (and the associated reasoning) is markedly different in at least one important sense, and that is that it includes time in two senses: direction, and duration. There's a difference between knowing that Thing A and Thing B are "somehow" related, and knowing that "Doing Thing A causes Thing B to happen shortly afterwards" or whatever.
To my way of thinking, this is something like what Pearl is talking about here:
>The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever.
That said, I do like your idea of trying to build the processing engine in such a way that you can turn features on and off, because I don't necessarily hold that "cause/effect" is the only kind of reasoning we need.
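Pearl's fever/malaria example can be made concrete with a tiny simulation (all probabilities invented for illustration). In a toy model where malaria causes fever, conditioning is symmetric, but interventions are not: forcing fever off tells you nothing about malaria, while preventing malaria removes most fever. That asymmetry is exactly the "direction" that a bare relationship doesn't capture.

```python
import random

random.seed(1)

def simulate(n=100_000, do_malaria=None, do_fever=None):
    """Sample from a toy causal chain: malaria -> fever.

    Passing do_malaria / do_fever forces a variable to a value
    (Pearl's do-operator), cutting the arrows that normally feed it.
    """
    records = []
    for _ in range(n):
        malaria = (random.random() < 0.10) if do_malaria is None else do_malaria
        fever = (random.random() < (0.90 if malaria else 0.05)) if do_fever is None else do_fever
        records.append((malaria, fever))
    return records

def prob(records, event, given=lambda r: True):
    sub = [r for r in records if given(r)]
    return sum(1 for r in sub if event(r)) / len(sub)

obs = simulate()
# Association runs both ways: fever is strong evidence of malaria.
p_obs = prob(obs, event=lambda r: r[0], given=lambda r: r[1])          # ~0.67

# Intervening on the effect does nothing to the cause...
p_do_fever = prob(simulate(do_fever=False), event=lambda r: r[0])      # ~0.10
# ...but intervening on the cause changes the effect.
p_do_malaria = prob(simulate(do_malaria=False), event=lambda r: r[1])  # ~0.05

print(p_obs, p_do_fever, p_do_malaria)
```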
To me, learning cause and effect is a non-trivial process: recognizing a relationship often comes first and triggers a "formal" reasoning process afterwards (if any).
The more formal the later process, the closer we get to real "knowledge". So your suggestion is quite practical, in the sense that we can start from that and figure out how to push the engine toward the more formal end of the spectrum.
I recently joined a team that does a lot of causal analysis, mostly marketing related, and was wondering what the best resources are to get more familiar with this subject (books, lectures, online courses etc.). I am picking up the author's other book, Causality: Models, Reasoning and Inference, but wondering what other sources people recommend.
It's funny that I knew just from the title who this would be by or interviewing. I like Judea Pearl and a lot of his ideas, but at the same time I think he overstates their importance and hypes them up more than he should.
I can build an AI with common sense reasoning in about 9 months. The problem has already been solved. Why do we care so much about making computers more like people? Isn't that excessively cruel? Part of the utility of computing is that computers don't have needs for fulfillment, companionship, communion. We deploy them in awful conditions to do the most horrible, tedious time-waster jobs. Why do such minds need to be human?
AI researchers have been attempting to build common sense into software for decades and have largely failed. So if you think you can do it in 9 months then you're naive or uninformed. But hey if you've figured out a better approach then go ahead and do it, then show us running code. Talk is cheap.
I wonder how it could even be considered "cruel"? Cruel to other living human beings, perhaps. To the machine or its simulation software? No.
Any human-like AI is still a "fake" - any notion of emotion, pain, empathy etc. we attribute to them is only a simulation. It simply doesn't matter. It amazes and amuses me to think that people might actually give a damn what the machine is "feeling". I think people who truly believe this are out of touch with reality and frankly, with other human beings. The machine doesn't really care about us, it's a bunch of ones and zeroes no matter how you slice and dice it.
Even after training them on cause and effect, they still don't care. I don't buy the "if it looks like a human, sounds like a human, it's human" argument at all.
What is cruel about that? Has anyone actually ever implemented true pain in a computer AI? Does attaching a speaker with prerecorded screams to a Roomba count as pain? Does a number that keeps increasing count as pain? Is a boolean value pain/no pain enough to implement pain? What does it even mean for a computer to be in pain?
Scaling to meet the workload using machines may be as simple as paying the AWS bill. Scaling up people, even at profitable well-functioning companies, is hardly a solved problem.
No, it's a representation of which actions lead to good outcomes given a set of input data. There is no explicit symbolic reasoning about causal factors or their outcomes involved in classic RL, and it's very unlikely that any such symbolic representation evolves implicitly under the hood. A neural net in an RL system is just a souped-up version of the tabular data used in the earliest RL systems.
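To make the point concrete, here is a minimal tabular Q-learning sketch on an invented toy problem (a five-state walk with a reward at one end). The learned "knowledge" is literally a table of numbers mapping state-action pairs to values; nothing in it represents why stepping right works.

```python
import random

random.seed(2)

# Invented toy task: states 0..4 on a line, start at 2, reaching 4 pays +1.
N_STATES, GOAL, START = 5, 4, 2
ACTIONS = (-1, +1)  # step left, step right

# All the agent ever "knows" is this table of numbers.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):
    s = START
    while s != GOAL:
        # Epsilon-greedy action choice over the table.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the entry toward reward + discounted future value.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The greedy policy heads right toward the goal, but the table encodes
# "this action scored well here", not "moving right causes progress".
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES)}
print(policy)
```

Swapping the dictionary for a neural net changes how the values are stored and generalized, not what kind of thing is being learned, which is the parent comment's point.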
Most people don't even know how to run an experiment to verify causation. They chant the mantra: "correlation does not equal causation" then go back to correlating everything they see in the world.
Do you have an indication that they are all that much different? Meaning, would the techniques or strategies used to develop augmented intelligence be that much different than what is going on in AI?
A “truly intelligent machine” is a contradiction in terms. They cannot have intelligence like humans (AGI or whatever the current buzzword is). Humans are not solely material.
Seriously. Extraordinary claims etc. etc. If you want to claim humans are not solely material, you need to give some sort of evidence of a phenomenon beyond the physical. You can't use intelligence per se as your evidence, as then your argument is circular.
If you compare these to Boston Dynamics' videos, it is hard to watch. But to be fair, BD's robots are remote controlled.
How is "predicting what happens next" different from predicting, say, the next word in the sentence using modern DL models?