The article's logic is a soup sandwich of two huge misunderstandings:
First, "Theory of Mind" is something experienced at a high level directly. We do reason about our own and other's actions, (both successfully and unsuccessfully of course). We still have much to learn about how this works and is implemented, but I don't think it is controversial that each of us do develop theories about our own minds and others and use that along with other information.
Second, "Theory of Mind" does not mean "a scientific theory of the mind" it means that each of us develops internal theories of our own and other's minds. In one case "Theory" means something in our heads, in the other "theory" means a scientific body of knowledge.
This seems to have confused the author greatly.
So "Theory of Mind" has nothing to say about how neural circuits record information about locations in a maze and whether that recorded information can be considered "meaning" or not.
"Of course you could argue that what Nobel Prize winning research shows about rats is irrelevant to humans."
Wow! This really underlines that second confusion. Of course rats' brains can tell us something about "theories of minds", but since we already know rats don't model each other's internal actions with the detail we do, it would be problematic to view them as reliable sources for investigating the particular theory of human minds called "Theory of Mind".
I might be wrong, but I think I've seen the term "theory of mind" used in the scientific context. For example, Kurzweil's "Pattern Recognition Theory of Mind" and Heylighen's "Anticipation Control Theory of Mind".
Rats' brains are used to study mechanisms presumed to be conserved in humans, e.g. aspects of molecular biology. That doesn't mean that rat social behavior couldn't be used to model some aspects of human social behavior.
From a skim read this article is pretty sloppy. It argues that the human OEM theory of mind is in some way unique or unreproducible. This conclusion is completely unsupported by the text, which is just a sophisticated word salad.
> What does all this mean? Watson may beat us at Jeopardy, but we are convinced we have something AI will always lack: We are agents in the world, whose decisions, choices, actions are made meaningful by the content of the belief/desire pairings that bring them about. But what if the theory of mind that underwrites our distinctiveness is build on sand, is just another useful illusion foisted upon us by the Darwinian processes that got us here? Then it will turn out that neuroscience is a far greater threat to human distinctiveness than AI will ever be.
I'm not sure how someone could believe that AI would always lack these things without also believing in some form of essentialism, basically believing that major parts of the human experience are "magic" somehow rather than derived from physical processes.
Anyway the author's major concern seems to be that neuroscience has demonstrated that desires and beliefs are not directly real in the same way that, say, your brain is a particular collection of atoms restricted to the physical confines of your skull. I suppose this could be very disturbing if you have the sense that your life is significantly more meaningful if all of your desires and beliefs are as real as the ground you walk on rather than being some illusory projection of the brain.
The problem actually arises from the associations that we have with the word "illusion". As soon as we start to think of our beliefs and desires as "illusions", it's natural to leap to terms like "fake", "false", "worthless", "deceit", and so on. In other words, there are all kinds of negative connotations that we associate with the word "illusion", so when we use that to describe core parts of what we think of as the human experience, it makes us feel bad.
But this is simply a limit of our language: what are the illusions of "beliefs" and "desires" anyway? Desires correspond to our experiences; if we want food, that corresponds to a very real physical state of our physical bodies. Beliefs are a lot shakier because they tend to be built on complex layers of ideas, but they correspond to physical reality as well, whether or not they're particularly accurate. So these things are very useful and important despite being "illusions". And there's a huge difference between these "illusions" and direct falsehoods. So I think the real limitation here is that we don't have language that's particularly well adapted to discussing and thinking about these things.
"But what if the theory of mind that underwrites our distinctiveness is build on sand, is just another useful illusion foisted upon us by the Darwinian processes that got us here? Then it will turn out that neuroscience is a far greater threat to human distinctiveness than AI will ever be."
Ah, I get it.
It's that journalist twist. The threats supposedly presented by AI include some version of robot rebellion ("the singularity"), a crisis where jobs are still necessary to live but where few people have any sellable skills, and a blow to the human ego concerning human uniqueness.
What are journalists concerned with? The blow to the human ego? It's like the hyperbole of, say, one boxer being described as "destroying" another by insulting them, when the actual match will show who destroys whom.
I mean, I personally think it is good to put to sleep the concept of agency promulgated by certain views of philosophy. Neuroscience is useful here. But what a terrible way to start framing things.
I wonder if what you are referring to as illusions could be not just realities of our emergent physical needs for individual, short-term survival, but also for collective, long-term survival. I guess it's possible that some human illusions, such as some of our concepts of right and wrong, morality, or natural rights, would actually also be emergent for AI in the long run, if these illusions are necessary for, or consistent with, survival or existence in our universe.
The article seems to take the position that intelligent life on earth is zero-sum and there is no victor. I'm not sure that is the case.
Then the rest of the article details the concept of theory of mind in philosophy and how that is not congruent with how neuroscience understands the brain. I don't see where the conflict is here unless you want to believe in the theory of mind.
The title is really ham-fisted. It means "threat" as in challenge.
Like a better understanding of the brain and mind might destroy all of our previous misconceptions about how it works, radically altering our perceptions of ourselves.
This is just as how AI is providing insight into how some cognitive processes work by serving as an experimental foundation.
"Besides its economic threat, the advance of AI seems to pose a cultural threat: if physical systems can do what we do without thought to give meaning to their achievements, the conscious human mind will be displaced from its unique role in the universe as a creative, responsible, rational agent."
That's a bit of an aggrandizing characterization for the gang of monkeys that climbed down from the trees a couple of ice ages ago and almost nuked itself over a disagreement about the best way of distributing bananas, roughly speaking.
"We are agents in the world, whose decisions, choices, actions are made meaningful by the content of the belief/desire pairings that bring them about. But what if the theory of mind that underwrites our distinctiveness is build on sand, is just another useful illusion foisted upon us by the Darwinian processes that got us here? Then it will turn out that neuroscience is a far greater threat to human distinctiveness than AI will ever be."
Oh my, a threat to our distinctiveness! Next thing those scientists will tell us that the earth is not the center of the universe, that the stars in the sky aren't fixed and that indeed Santa Claus himself is not a real person running a sweatshop at the north pole.
OMG, your last paragraph brought tears to my eyes. Did your parents not tell you? (Can someone else, ... I cannot do it!)
Seriously though, humans have been replaced every few years since we showed up. As long as AIs don't annihilate each other, I think we should just be proud parents and stop worrying that they are going to grow up and want the car keys.
As we start to understand neurology sufficiently to simulate it, the two fields are converging. Eventually (soon?) they'll be seen as sub-fields of the study of intelligence.
(As for your fusing neural organoids to computers, there was a sci-fi story I read years ago that used this theme. There were human-level AIs that needed some integrated organic grey matter to perform some of their higher level functions. Wish I could remember what it was - although that description probably covers half a dozen stories by now.)
The title was very misleading, especially since AI's "threat" usually means a paperclip maximiser or something. I ended up skimming through the whole article to try and figure out what was being threatened. Until the very end I assumed it was our ability to function, knowing how meaningless consciousness is.
The weird part is that he specifically mentions behaviorism:
> Except for a brief period when psychologists embraced a wrongheaded behaviorism, the theory of mind everyone shares drove the 20th century research programs of child psychology and psychiatry, cognitive science, evolutionary anthropology, and neuroscience.
But then by the end goes ahead and suggests the premises of behaviorism as a conclusion...?
> By colonizing consciousness spoken language turned it into a monologue of silent speech, tricking us that the meaning of spoken words is given by thoughts’ content when its just silent sounds passing through consciousness. Neuroscience shows that that in our brains the neural circuits neither have nor need content to do their jobs.
Maybe I'm misunderstanding something somewhere? Is this guy really the one in charge of the philosophy department at Duke?
That's the closest thing to a thesis I could get out of it. But the real question is, can you scale up the rat-in-a-maze monitoring of physical activity based on simple inputs involving binary choices in a solitary setting into anything meaningful about the much more intricate system of a human brain making complex decisions about abstract concepts in a complicated social mesh? I'm not sure the author's willingness to make the leap that human language is just meaningless noise is justified based on being able to interpret the physical meaning of a rat's brain activity.
I was lost as well. Was the author arguing that the actual encoding of information in the brain is incapable of producing a mind in accordance with our current model of mind? (Seemingly in favor of a more behaviorist model)
Combine the two and then you get those rat brained robot things. Gives you a nice little shortcut to true awareness in AI, by cheating and using meat to tell the silicon what to do. https://www.technologyreview.com/s/401756/rat-brained-robot/
I think that 'artificial intelligence' being used as a curtain for the Ozs of the world to hide behind is a much greater threat than either of these things.
Humans have this seemingly innate or nearly innate theory of mind, where the beliefs and desires of other people can be reasoned about in an effort to understand or predict their behavior.
A few key neuroscience results suggest that the brain does not really work how the "theory of mind" has us think it should. The concepts of belief and desire that we use in our theory of mind may actually just be crude heuristics that were useful at some point in our evolutionary past, but don't actually have explanatory power in the way most humans naturally try to apply them today.
I don't really know how strong this idea is. I think it is fairly speculative in these early days of research.
My summary may have a bit of extra color. I listened to Sean Carroll's podcast with the article author, where this same topic was a big part of the conversation.
My favorite parts touch on how thoughts projected out of neural firings do not describe the cause of those firings. I think.
I imagine planting bread crumbs after already walking a trail, as a best guess narrative of where I walked and why.
> What makes some neural firings [semantically significant] is just their place in the causal chain, the pathway to further behavior. Rats choose among alternative pathways as a result of neural firings produced by previous experience. But it’s not because these neuron circuits contain statements about anything.
> By colonizing consciousness, spoken language turned [consciousness] into a monologue of silent speech, tricking us that the meaning of spoken words is given by thoughts’ content when its just silent sounds passing through consciousness. Neuroscience shows that that in our brains the neural circuits neither have nor need content to do their jobs.
China will create designer babies with extremely high IQ (150+). There will be no way for the rest of the world to compete using good old-fashioned nature, so they either become irrelevant, become serfs, or also start using in-vitro CRISPR editing. This will lead to AGI in a single generation (so about 30-40 years).
arbitrarily concludes that a mechanistic understanding of the brain is "a threat" to our "distinctiveness" i.e. we are not unicorns anymore. zero-information garbage if you ask me
ex3xu (7 years ago):
Kurzweil: https://en.wikipedia.org/wiki/How_to_Create_a_Mind#Pattern_R...
Heylighen: http://pespmc1.vub.ac.be/Papers/AnticipationControl.pdf
Though I maybe should mention I generally agree with the use of "soup sandwich" as a description for the article.
evdev (7 years ago):
a) Platonic realism is the only way our semantic categories/abstractions could be meaningful.
b) The world is physical (platonism is not true).
c) All our categories are meaningless! Our abstractions are unreal!
You can see it all follows from a), and there are serious questions about whether this is actually a physicalist philosophy...
FlowNote (7 years ago):
https://news.ycombinator.com/item?id=18425194
You have no idea how much more powerful neurology is than machine learning.
SubiculumCode (7 years ago):
Not a neuroscientist. Not a cognitive psychologist. He gives short shrift to the way different levels of description can inform each other.
carapace (7 years ago):
https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
lashkari (7 years ago):
In my own anecdotal experience over the past couple years, this no hypothesis has been right about 90% of the time.