John Searle: Consciousness in Artificial Intelligence [video]

71 points | nolantait | 10 years ago | youtube.com

59 comments

[+] bhickey|10 years ago|reply
There isn't much new here. Skip ahead to the first audience question from Ray Kurzweil (http://www.youtube.com/watch?v=rHKwIYsPXLg&t=38m51s).

Kurzweil, in summary, asks "You say that a machine manipulating symbols can't have consciousness. Why is this different from consciousness arising from neurons manipulating neurotransmitter concentrations?" Searle gives a non-answer: "My dog has consciousness because I can look at it and conclude that it has consciousness."

[+] chubot|10 years ago|reply
Yeah honestly I don't get what he is really contributing (and I'm sort of an AI skeptic). In 2000 in undergrad, I recall checking out some of his books from the library because people said he was important, and I learned about the "Chinese Room" argument [1] in class.

How is it even an argument? It doesn't illuminate anything, and it's not even clever. It seems like the most facile, wrong-headed stab at refutation, one that begs the question. As far as I can tell, the argument is, "well, you can make this room that manipulates symbols like a computer, and of course it's not conscious, so a computer can't be either." There are so many problems with this argument I don't even know where to begin.

The fact that he appears to think that changing a "computer" to a "room" has persuasive power just makes it all the more antiquated. As if people can't understand the idea that computers "just" manipulate symbols? Changing it to a "room" adds nothing.

[1] http://plato.stanford.edu/entries/chinese-room/
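
(To make the structure concrete, here is a minimal, purely illustrative sketch of the room as a rule table in Python. The rules are toy ones I made up; the point is just that nothing in the system understands anything, it only matches symbols to symbols.)

    # The "room" as a rule table: input squiggles map to output squiggles.
    # Toy rules, invented for illustration; no understanding anywhere.
    RULES = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
    }

    def room(symbols: str) -> str:
        # The operator blindly looks up the input; the lookup is all there is.
        return RULES.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room("你好吗？"))  # 我很好，谢谢。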

[+] cscurmudgeon|10 years ago|reply
I think we are missing the gist of the Chinese room argument here.

The correct question to ask is: How is a machine manipulating symbols (that someone claims is conscious) different from any other complex physical system? Is New York City's complex sewer system conscious? What about the entire world's sewer and plumbing system?

Does a machine have to compute some special function to be conscious? Does the speed of computation matter? If so, who measures the speed? (Let's not bring in general relativity, since the speed of computation can differ between observers.)

Kurzweil et al's definition of consciousness is exactly as silly as Searle saying "My dog has consciousness because I can look at it and conclude that it has consciousness."

[+] dekhn|10 years ago|reply
So basically, the way to convince Searle (not that that is a real goal) is to build a robot automaton that crosses the uncanny valley: very responsive eyes, a collection of tricks, clever responses.

Searle would look at that and conclude it had consciousness.

[+] DonaldFisk|10 years ago|reply
I think Searle's mostly correct and Kurzweil's completely wrong on this. It took me a long time to understand Searle's argument, because Searle conflates consciousness and intelligence and this confuses matters. Understanding Chinese is a difficult problem requiring intelligence, but I don't think it requires consciousness.

It is important to distinguish between "understanding Chinese" and "knowing what it's like to understand Chinese". We immediately have a problem: knowing what it's like to understand Chinese involves various qualia, none of which is unique to Chinese speakers.

So I'll simplify the argument. Instead of a room with a book containing rules about Chinese and a person inside who doesn't know Chinese, we have a room with some coloured filters and a person who can't see any colours at all (i.e. who has achromatopsia). Such people (e.g. http://www.achromatopsia.info/knut-nordby-achromatopsia-p/) will confirm they have no idea what it's like to see colours. If you shove a sheet of coloured paper under the door, the person in the room will place the different filters on top of the sheet in turn and, by seeing how dark the paper then looks, be able to determine its colour, which he'll write on the paper and pass back to the person outside. The person outside thinks the person inside can distinguish colours, but the person inside will confirm that not only can he not, he doesn't even know what it's like. Nothing else in the room is obviously conscious.
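
(A hedged sketch of that procedure in Python, with made-up transmission values: the program names colours correctly by measuring darkness through each filter, while nothing in it experiences colour.)

    # Toy transmission values: how bright a coloured sheet looks through each
    # filter. Invented numbers, purely for illustration.
    FILTERS = {
        "red":   {"red": 0.9, "green": 0.1, "blue": 0.1},
        "green": {"red": 0.1, "green": 0.9, "blue": 0.1},
        "blue":  {"red": 0.1, "green": 0.1, "blue": 0.9},
    }

    def name_colour(sheet: str) -> str:
        # The person inside only ever sees these brightness levels (greys),
        # yet names the colour by picking the filter that passes most light.
        brightness = {f: trans[sheet] for f, trans in FILTERS.items()}
        return max(brightness, key=brightness.get)

    print(name_colour("green"))  # "green": named correctly, never experienced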

Apropos of the dog: this is the other-minds problem. It's entirely possible that I'm the only conscious being in the universe and everyone else (and their pets) is a zombie. But we think that people, dogs, etc. are conscious because they are similar to us in important ways. Kurzweil presumably considers computers to be conscious too. Computers can be intelligent, and maybe in a few years or decades will be able to pass themselves off over the Internet as Chinese speakers, but there's no reason to believe computers have qualia (i.e. know what anything is like), and given the above argument, every reason to believe that they don't.

[+] TheOtherHobbes|10 years ago|reply
This is basically just the Hard Problem of consciousness. It's been a hard problem for decades, and we're no closer to having an answer.

>But we think that people, dogs, etc. are conscious because they are similar to us in important ways.

Specifically, mammals have mirror neurones. More complex mammals also seem to have common hard-wired links between emotions and facial expressions - so emotional expression is somewhat recognisable across species.

I'm finding the AI debates vastly frustrating. There are basic features of being a sentient mammal - like having a body with a complicated sensory net, and an endocrine system with goal/avoidance sensations and emotions, and awareness of social hierarchy and other forms of bonding - that are being ignored in superficial arguments about paperclip factories.

It's possible that a lot of what we experience as consciousness happens at all of those levels. The ability to write code or find patterns or play chess floats along on top, often in a very distracted way.

So the idea that an abstract symbol processing machine can be conscious in any way we understand seems wrong-headed. Perhaps recognisable consciousness is more likely to appear on top of a system that models the senses, emotions, and social awareness first, topped by a symbolic abstraction layer that includes a self-model to "experience" those lower levels, recursively.

[+] leafee|10 years ago|reply
> conflates consciousness and intelligence and this confuses matters

I think this is an excellent point. I like your example with colors, which shows that there is a difference between seeing (i.e. experiencing) colors and producing symbols which give the impression that an entity can see colors.

I don't follow any argument that proposes that computers can be conscious but other machines (e.g. car engines) cannot. In the end, symbols don't really exist in physical reality; all that exists is physical 'stuff' (atoms, electrons, photons, etc.) interacting with each other. So how can we say that one ball of stuff is conscious but another is not? And why isn't all of the stuff together also conscious? Why not just admit we don't know yet?

Consciousness may be hard to define, but let's take something simpler: experience, or even more specifically, pain. I can feel pain. While I can't be 100% sure, I believe other humans feel pain as well. However, I don't believe my laptop has the capacity to feel pain, irrespective of how many times and in how many languages it can say 'I feel pain'.

Perhaps the ability to experience is the defining characteristic of consciousness?

[+] redwood|10 years ago|reply
I disagree completely. Over time, the person using the colour filters will start to associate various concepts, feelings, and images with the various colours. This association is what starts giving the colours themselves meaning, even if he can't see the colours the same way that you and I can. There's no way to prove that we all see colours the same way anyway, but that doesn't mean we don't believe we're conscious.

I take you to be saying that we can't make any claims about others, only about how we ourselves feel. But I think the room example is misleading in this respect. Another way of thinking about it: our brain associates things, and it's those clusters of associations that give things meaning. The experience of a colour only matters because that colour has a web of other associated experiences it reminds us of. So extend the room experiment to a baby who, throughout its entire life, sees colours (or the filtered version of those colours) at various moments and associates them with various things. We can imagine that the baby will in fact come to associate, say, blue with that great unknown half of our outside ceiling that we see during the day, and then blue will take on something more, though it is admittedly difficult to explain.
[+] nova|10 years ago|reply
I can only recommend reading this paper: http://www.scottaaronson.com/papers/philos.pdf

It really lives up to its title. Suddenly computational complexity is not just a highly technical CS matter anymore, and the Chinese Room paradox is explained away successfully, at least for me.

[+] amoruso|10 years ago|reply
Searle makes two assertions:

1) Syntax without semantics is not understanding.

2) Simulation is not duplication.

Claim 1 is a criticism of old-style Symbolic AI that was in fashion when he first formulated his argument. This is obviously right, but we're already moving past this. For example, word2vec or the recent progress in generating image descriptions with neural nets. The semantic associations are not nearly as complex as those of a human child, but we're past the point of just manipulating empty symbols.
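
(To illustrate what "more than empty symbols" means here, a toy sketch with invented vectors; real word2vec embeddings are learned from co-occurrence statistics, but the idea is the same: similarity between words becomes measurable geometry rather than bare token identity.)

    import numpy as np

    # Invented 3-d "embeddings", purely illustrative; real ones are learned.
    vec = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "queen": np.array([0.9, 0.9, 0.2]),
        "apple": np.array([0.1, 0.2, 0.9]),
    }

    def cosine(a, b):
        # Cosine similarity: near 1.0 for similar directions, near 0 for unrelated.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(vec["king"], vec["queen"]))  # high: related meanings
    print(cosine(vec["king"], vec["apple"]))  # low: unrelated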

Claim 2 is an assertion about the hard problem of consciousness. In other words, about what kinds of information processing systems would have subjective conscious experiences. No one actually has an answer for this yet, just intuitions. I can't really see why a physical instantiation of a certain process in meat should be different from a mathematically equivalent instantiation on a Turing machine. He has a different intuition. But neither one of us can prove anything, so there's nothing else to say.

[+] mtrimpe|10 years ago|reply
I think Claim 1 is actually more about determinism: that if knowing all the inputs lets you reliably predict the outputs, then what you have isn't consciousness.

Neural nets are starting to escape that dynamic, but there still isn't a neural net that reliably pulls in a continuous stream of randomness to generate meaningful behaviour the way our consciousness does.

Now, to be honest, I'm not entirely sure John Searle would agree that that is consciousness when we do get there, but I do agree with him that a deterministic consciousness is essentially a contradictio in terminis.

[+] DonaldFisk|10 years ago|reply
I wouldn't be so critical of GOFAI. Much high-level reasoning either does or can involve symbol manipulation. There are some impressive systems, such as Cyc, which do precisely that. It isn't useful for low-level tasks like vision or walking, so other approaches are needed to complement it.

> but we're past the point of just manipulating empty symbols.

We've now reached the point where we can manipulate large matrices containing floating point numbers. I don't see how this makes systems any more conscious.
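
(Concretely, a single neural-net layer is nothing but the following; random weights here, just to show the operation.)

    import numpy as np

    # One "layer": a matrix of floats, a multiply, a nonlinearity. That's it.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 3))  # weights (random, for illustration)
    x = rng.standard_normal(3)       # input vector

    hidden = np.maximum(0.0, W @ x)  # ReLU(W x)
    print(hidden)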

[+] ttctciyf|10 years ago|reply
Regarding claim 2, Searle repeats the phrase "specific causal properties of the brain" quite a few times without spelling out just what he's referring to, but from other remarks he makes it seems clear he means actual electrochemical interactions, rather than generic information processing capabilities. I think his view is that consciousness (most likely) doesn't arise out of "information processing", which he would probably class as "observer-relative", but out of some as yet not understood chemistry/physics which takes place in actual physical brains.

So the question, to Searle, is not "about what kinds of information processing systems would have subjective conscious experiences", but "what kinds of electrochemical interactions would cause conscious experiences".

The intuition/assumption of his questioners seems to be that whatever electrochemical interactions are relevant for consciousness, they are relevant only in virtue of their being a physical implementation of some computational features, but plainly he does not share this assumption and favours the possibility that the electrochemical interactions are relevant because they physically (I think he'd have to say) produce subjective experience - and that any computational features we attribute to them are most likely orthogonal to this. Hence his example of the uselessness of feeding an actual physical pizza to a computer simulation of digestion. His point is that the biochemistry (he assumes) required for consciousness isn't present in a computer any more than that required for digestion is.

Another example might be: you wouldn't expect a compass needle to be affected by a computer simulating electron spin in an assemblage of atoms exhibiting ferromagnetism any more than it would be by a simulation of a non-ferromagnetic assemblage.

To someone making the assumption that computation is fundamental for explanations of consciousness, these examples seem to entirely miss the point, because it's not the physical properties of the implementation (the actual goings-on in the CPU and whatnot) that matter, but the information-processing features of the model that are the relevant causal properties (for them).

But to Searle, I think, these people are just failing to grok his position, because they don't seem to even understand that he's saying the physical goings on are primary. You can almost hear the mental "WHOOSH!" as he sees his argument pass over their heads. In an observer-relative way, of course.

As you imply, until someone can show at least a working theory of how either information processing or biochemistry can cause subjective experience the jury will be out and the arguments can continue. I won't be surprised if it takes a long time.

(Edited to add the magnetic example and subsequent 2 paragraphs.)

[+] cromwellian|10 years ago|reply
The systems response is pretty much the right answer. You can put yourself at any level of reductionism of a complex system and ask how in the hell the system accomplishes anything. If you imagine yourself running a simulation of the universe's physics on paper, you may ask yourself: how does this simulation create jellyfish?

I think people fall for Searle's argument the same way people fall for creationist arguments that make evolution seem absurd. Complex systems that evolve over long periods of time have enormous logical-depth complexity and exhibit emergent properties that really can't be computed analytically, but only by running the simulation and observing macroscopic patterns.

If I run a cellular automaton that computes the sound wave frequencies of a symphony playing one of Mozart's compositions, and it takes trillions of steps before even the first second of sound is output, you can rightly ask, at any state, how is this thing creating music?
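
(The Mozart automaton is hypothetical, but the point is easy to demonstrate with any sufficiently rich cellular automaton. A minimal Rule 110 sketch: the only way to see what pattern emerges is to run it, step by step.)

    # Rule 110: each cell's next state depends on (left, self, right).
    # The rule number's bits encode the output for all 8 neighbourhoods.
    def step(cells, rule=110):
        n = len(cells)
        return [
            (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 63 + [1]  # start from a single live cell
    for _ in range(30):
        print("".join("#" if c else "." for c in row))
        row = step(row)  # no shortcut: you have to compute every step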

[+] spooningtamarin|10 years ago|reply
Consciousness and understanding are human-created symbolism. Talking about them seriously is a waste of time.

I could be an empty shell imitating a human perfectly; all other humans would buy it despite my lack of consciousness, and nothing would be different. From their perspective I exist; from mine, I don't.

How does one know that I really understand something? Maybe I can answer all the questions to convince them?

[+] kriro|10 years ago|reply
It's pretty frustrating to watch. Feels like an endless repetition of "well humans and dogs are conscious because that's self evident". There's no sufficient demarcation criterion other than "I know it when I see it" that he seems to apply. [I guess having a semantics is his criterion but he doesn't elaborate on a criterion for that]

The audience question about intelligent design summed up my frustration nicely (or rather the amoeba evolving part of it).

[+] sethev|10 years ago|reply
I think what it boils down to is that Searle believes consciousness is a real thing that exists in the universe. A simulation of a thing isn't the same as the thing itself, no matter how accurate the outputs. The Chinese Room argument just amplifies that intuition (my guess is that the idea of a room was inspired by the Turing Test).

I think studying the brain (as opposed to philosophical arguments) is the thing that will eventually answer these kinds of questions, though.

[+] pbw|10 years ago|reply
I think the argument about consciousness is vacuous. Searle admits we might create an AI which acts 100% like a human in every way.

Nothing Searle says stands in the way of creating intelligent or super-intelligent entities. All Searle is saying is those entities won't be conscious.

No one can prove this claim today. But more significantly, I think it's extremely likely no one will ever prove it. Consciousness is a private subjective experience; I think it's likely you simply cannot prove it exists or doesn't exist.

Mankind will create human-level robots, and we'll watch them think and create and love and cry, and we'll simply not know what their conscious experience is.

Even if we did prove it one way or the other, the popular opinion would be unaffected.

Some big chunk of people will insist robots are conscious entities who feel pain and have rights. And some big chunk of people will insist they are not conscious.

It might be our final big debate. An abstruse proof is not going to change anyone's mind. Look at how social policies are debated today. Proof is not a factor.

[+] orblivion|10 years ago|reply
So, supposing there's any chance that it has consciousness, is there any sort of movement doing all it can to put the brakes on AI research? If it's true, it's literally the precursor to the worst realistic (or hypothetical, really) outcome I can fathom, which has been discussed before on HN (simulated hell, etc). I'm not sure why more people aren't concerned about it. Or is it just that there's "no way to stop progress" as they say, and this is just something we're going to learn to live with, the way we live with, say, the mistreatment of animals?
[+] adrianN|10 years ago|reply
We are sufficiently far away from creating machines that humans would consider conscious that it's not really a problem so far. Eventually we'll probably have to think about robot rights, but I guess we still have a few decades until they're sufficiently advanced. But judging from how we treat, e.g., great apes, who are so very similar to us, I wouldn't want to be a robot capable of suffering.
[+] nnq|10 years ago|reply
This guy is so smart but at the same time such an idiot. SYNTAX and SEMANTICS are essentially the SAME THING. It's only a context-dependent difference, and the difference is quantitative, even if we still don't have a good enough definition of the quantitative variables underlying them. You must have a really "fractured" mind not to instantly "get it". And "INTRINSIC" is simply a void concept: nothing is intrinsic; everything (the universe and all) is obviously observer-dependent. It may just be that the observer is a "huge entity" that some people choose to personalize and call God.

It's amazing to me that people with such a pathological disconnect between mind and intuition can get so far in life. He's incredibly smart, has a great intuition, but when exposed to some problems he simply can't CONNECT his REASON with his INTUITION. This is a MENTAL ILLNESS and we should invest in developing ways to treat it, seriously!

Of course the "room + person + books + rule books + scratch paper" can be self-conscious. You can ask the room questions about "itself" and it will answer, proving that it has a model of itself, even if that model is not explicitly encoded anywhere. It's just like mathematics: if you have a procedural definition of the set of all natural numbers (i.e. a definition that can be executed to generate the first and then each next natural number), you "have" the entire set of natural numbers, even if you don't have them all written down on a piece of paper. In the same way, if you have the processes for consciousness, you have consciousness, even if you can't pinpoint exactly where in space and time it is. Consciousness is closer to a concept like "prime numbers" than to a physical thing like "a rock": you don't need a space and time for the concept of prime numbers to exist in; it just is.
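
(The natural-numbers point, as runnable Python: nothing is written down anywhere, yet the generator determines every element of the infinite set.)

    from itertools import islice

    # A procedural definition of the natural numbers: the whole infinite set
    # "exists" in this finite rule, without being stored anywhere.
    def naturals():
        n = 0
        while True:
            yield n
            n += 1

    print(list(islice(naturals(), 10)))  # [0, 1, 2, ..., 9]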

His way of "depersonalizing" conscious "machines" is akin to Hitler's way of depersonalizing Jews, and this "mental disease" will probably lead to similar genocides, even if the victims will not be "human" ...at least in the first phase, because you'll obviously get a HUGE retaliation in reply to any such stupidity, and my bet is that such a retaliation will be what ends the human race.

Now, of course the Chinese room discussion is stupid: you can't have "human-like consciousness" with one Chinese room. You'd need a network of Chinese rooms that talk to each other and also operate under constraints that make their survival dependent on their ability to model themselves and their neighbors, in order to generate "human-like consciousness".