top | item 32347272


jrlocke | 3 years ago

The Chinese room argument itself isn't very compelling. Surely the constituent parts of the brain are fundamentally governed solely by physics, surely thought arises solely from the physical brain, and surely the constituent parts (and thus thought) could be described by a sufficiently complex discrete computation.

Are we not conscious?


mannykannot | 3 years ago

The argument you make here is a reasonable one (IMHO) for the plausibility in principle of what Searle calls “strong AI”, but he claims that his “Chinese Room” argument proves that it must be mistaken. One can simply ignore him, but to refute him takes a little more effort.

It turns out that when one looks at the argument in detail, and in particular at Searle’s responses to various objections (such as the Systems and Virtual Mind replies), it becomes clear that he is essentially begging the question, and that his ultimate argument, “a model is not the thing modeled”, is a non sequitur.

TheOtherHobbes | 3 years ago

The argument is essentially that there are no qualia of Chinese comprehension in an automaton or in any system that uses an equivalent algorithm, whether or not run by a human.
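Such an automaton can be sketched as a toy lookup table (the rules below are hypothetical stand-ins; Searle's thought experiment imagines a rule book vast enough to cover any conversation). Symbols go in, symbols come out, and nothing in the process involves understanding either side:

```python
# Toy Chinese Room: a rule book mapping input symbols to output symbols.
# These two rules are hypothetical placeholders for Searle's vastly
# larger rule book.
RULES = {
    "你好吗?": "我很好。",        # "How are you?" -> "I'm fine."
    "你是谁?": "我是一个房间。",   # "Who are you?" -> "I am a room."
}

def chinese_room(symbols: str) -> str:
    """Emit the prescribed output for the input symbols.

    The operator (human or machine) running this never needs to know
    what any symbol means -- it is pure symbol manipulation.
    """
    return RULES.get(symbols, "请再说一遍。")  # fallback: "Say that again."
```

Whether anything in that process (or a scaled-up version of it) could have qualia is exactly what the argument disputes.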

It's a sound argument to the extent that qualia clearly exist, but no one has any idea what they are, and even less of an idea how to (dis)prove that they exist in external entities.

It's the materialists who are begging the question, because their approach to qualia is "Well obviously qualia are something that just happens and so what?"

Unfortunately arguments based on "Well obviously..." have a habit of being embarrassingly unscientific.

And besides - written language skills are a poor indicator of human sentience. Human sentience relies at least as much on empathy; emotional reading of body language, expression, and linguistic subtexts; shared introspection; awareness of social relationships and behavioural codes; contextual cues from the physical and social environment which define and illuminate relationships; and all kinds of other skills which humans perform effortlessly and machines... don't.

Turing Tests and game AI are fundamentally a nerd's view of human intelligence and interaction. They're so impoverished they're not remotely plausible.

So as long as DALL-E has no obvious qualia, it cannot be described as sentient. It has no introspection and no emotional responses, no subjective internal state (as opposed to mechanical objective state), and no way to communicate that state even if it existed.

And it also has no clue about 3D geometry. It doesn't know what a sphere is, only what sphere-like shading looks like. Generally it knows the texture of everything and the geometry of nothing.

Essentially it's a style transfer engine connected to an image search system which performs keyword searches and smushes them together - a nice enough thing, but still light years from AGI, never mind sentience.

salawat | 3 years ago

> “A model is not the thing modeled” is a non sequitur.

Have you ever used a map?

If not, I'd like you to get one, and I want you to point out your home on it. Can you take a magnifying glass and look at the map hard enough to see yourself looking over a tinier map? Ad infinitum?

That is what Searle means. A map is a model of the world, but the model is not, in fact, in any way equivalent to or interchangeable with the thing it models. It is merely a distilled representation, a facsimile representative enough to be useful. So too would be any attempt at modeling consciousness.

indigo945 | 3 years ago

I think a considerable subset of the people who do make use of the Chinese room argument also subscribe to some form of mind-body dualism, in which consciousness does not arise, or does not arise completely, from physical processes.

notahacker | 3 years ago

I see it the other way round.

The Chinese Room and the brain of a Chinese-speaking person are completely different physical processes. Looked at on an atomic level, they have almost nothing in common. Mind-body dualists may or may not agree that the room is not "conscious" in the way a human is, but if consciousness is purely a material process, I can't see how the materialist can possibly conclude all the relevant properties of the completely dissimilar room and person are the same.

Those that would argue the Chinese Room is "conscious" in the same way as the Chinese person are essentially arguing that the dissimilarity of the physical processes is irrelevant: the "consciousness" of the Chinese person doesn't arise from molecules bouncing around their brain in very specific ways, but exists at some higher level of abstraction shared with the constituent molecules of pieces of paper with instructions written in English and outputs written in Chinese.

The idea that our consciousness exists in some abstract sense which transcends the physics of the brain is not a new one, of course. Historically we called such abstractions souls...

jrlocke | 3 years ago

To be more explicit, I'm saying I find it weird to hear so much about the Chinese room argument from a crowd of (presumably) materialists.

rekrsiv | 3 years ago

Of course! But if the universe is the result of all the quantum field interactions, what if there's a quantum field that, on its own, gives rise to the consciousness interaction, and it manifests in ways that are computationally prohibitive for a process built from atomic-scale logic gates to replicate believably?

What if there's just no way to build consciousness from the building blocks that are within our reach?

astrange | 3 years ago

Penrose thinks this (that the brain requires quantum computing), but it doesn't seem like anyone agrees with him or that it makes much sense. If I'm a quantum computer, why can't I do Shor's algorithm in my head?
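The contrast can be made concrete: the step Shor's algorithm accelerates is order-finding, which a classical computer (or a human working by hand) can only do by brute force. A minimal classical sketch, using the conventional textbook example of factoring 15 with base 7:

```python
# Classical sketch of the order-finding step that Shor's algorithm
# speeds up on a quantum computer. To factor N with a coprime base a:
# find the order r of a mod N, then gcd(a^(r/2) ± 1, N) yields factors.
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod n), found by brute force.

    This loop is the part a quantum computer does exponentially faster;
    classically (or mentally) there is no known shortcut.
    """
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int) -> tuple:
    """Recover two factors of n from the order of a mod n."""
    r = order(a, n)
    if r % 2:
        raise ValueError("odd order; pick another base")
    half = pow(a, r // 2)
    return gcd(half - 1, n), gcd(half + 1, n)

# order(7, 15) == 4, so gcd(7**2 - 1, 15) == 3 and gcd(7**2 + 1, 15) == 5.
```

Nobody factors numbers this way in their head, which is the point of the objection.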

Of course parts of regular computers involve "quantum stuff", like details of how transistors and hard drives work, but that doesn't mean they're magic.