BadThink6655321 | 4 months ago

A ridiculous argument. Turing machines don't know anything about the program they are executing. In fact, Turing machines don't "know" anything. Turing machines don't know how to fly a plane, translate a language, or play chess. The program does. And Searle puts the man in the room in the place of the Turing machine.

wk_end | 4 months ago

So what, in the analogy, would be the program? Surely it's not the printed rules, so I think you're making the "systems reply" - that the program that knows Chinese is some sort of metaphysical "system" that arises from the man using the rules - which is the first thing Searle tries to rebut.

> let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.

In other words, even if you put the man in place of everything, there's still a gap between mechanically manipulating symbols and actual understanding.

mannykannot | 4 months ago

People are doing things they personally do not understand, just by following the rules, all the time. One does not need to understand why celestial navigation works in order to do it, for example. Heck, most kids can learn arithmetic (and perform it in their heads) without being able to explain why it works, and many (including their teachers, sometimes) never achieve that understanding. Searle’s failure to recognize this very real possibility amounts to tacit question-begging.

rcxdude | 4 months ago

In that case you've basically just created a split-brain situation (I mean the actual phenomenon of someone who has had the main connection between the two hemispheres of the brain severed). There's one system, the man together with the rules he has internalized, and there's what the man himself consciously understands, and there's no reason the two are necessarily communicating in any deeper way, in much the same way that a split-brain patient may be able to point to something they see in one side of their vision when asked but be unable to say what it is.

(Also, IMO, the question of whether the program understands Chinese mainly depends on whether you would describe an unconscious person as understanding anything.)

I also can't help but think of this sketch when this topic comes up (even though, importantly, it is not quite the same thing): https://www.youtube.com/watch?v=6vgoEhsJORU

BadThink6655321 | 4 months ago

Only because "actual understanding" is ambiguously defined. Meaning is an association of A with B. Our brains hold a large associative array in which the symbols for the sound "dog" are associated with the image of "dog", which is associated with the behavior of "dog", which is associated with the feel of "dog", and so on. We associate the symbols for the word "hamburger" with the symbols for the taste of "hamburger", and so on. We understand something when our past associations match current inputs and can predict future inputs.
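
A minimal sketch of that picture, treating the associative store as a plain Python dict (every name here is illustrative, not any real API):

    from collections import defaultdict

    class Associator:
        """Toy model: meaning as an association of A with B."""
        def __init__(self):
            # symbol -> set of symbols it has co-occurred with
            self.links = defaultdict(set)

        def observe(self, a, b):
            # Associate A with B (and B with A).
            self.links[a].add(b)
            self.links[b].add(a)

        def understands(self, symbol, current_inputs):
            # "Understanding" here: past associations match current inputs.
            return bool(self.links[symbol] & set(current_inputs))

    brain = Associator()
    brain.observe("sound:dog", "image:dog")
    brain.observe("image:dog", "behavior:dog")
    brain.observe("word:hamburger", "taste:hamburger")

    print(brain.understands("sound:dog", ["image:dog"]))       # True
    print(brain.understands("word:hamburger", ["image:dog"]))  # False

On this toy definition, "understanding" reduces to whether stored associations match (and could predict) the current inputs, with no extra ingredient left over for Searle's intuition to point at.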

glyco | 4 months ago

You and Searle both seem not to understand a simple, obvious fact about the world, which is that (inhomogeneous) things don't have the same thing inside. A chicken pie, for example, doesn't have any chicken pie inside. There's chicken inside, but that's not chicken pie. There's sauce, vegetables and pastry, but those aren't chicken pie either. Even all these things together may not make a chicken pie. The 'chickenpieness' of the pie is an additional fact, not derivable from any facts about its components.

As with pie, so with 'understanding'. A system which understands can be expected not to contain anything which understands. So if you find a system which contains nothing that understands, that tells you nothing about whether the system itself understands[0].

Somehow both you and Searle have managed to find this simple fact about pie to be 'the grip of an ideology' and 'metaphysical'. But it really isn't.

[0] And vice versa, as in Searle's pointlessly overcomplicated example of a system which understands Chinese, containing one which doesn't, containing one which does.