
The Meta-Problem of Consciousness [pdf]

142 points | lainon | 8 years ago | philpapers.org

117 comments

[+] vinceguidry|8 years ago|reply
> This strategy typically involves what Keith Frankish has called illusionism about consciousness: the view that consciousness is or involves a sort of introspective illusion.

What's an illusion? If by illusion you mean an abstraction, like how a TV picture is an illusion of a picture rather than actually being a picture, then I'm on board. If by illusion you mean "worth excluding from your map of how the world works," then I have a bone to pick with you.

My main problem with physicalism is that it doesn't handle abstraction well. I'm fine with monism over dualism but you need some kind of functionality with which to consider different kinds of 'stuff'. Otherwise a rock, Conway's Game of Life, and Lord of the Rings are all on the same plane of existence.

What draws me to Objective Idealism isn't so much the fact that it's compatible with religion but rather that 'mind stuff' is the best 'thing' that we can use to describe everything. The fact that it doesn't put severe emphasis on the physical as "better" than other modes is just a nice little bonus to annoy materialists with.

[+] justinpombrio|8 years ago|reply
> My main problem with physicalism is that it doesn't handle abstraction well. I'm fine with monism over dualism but you need some kind of functionality with which to consider different kinds of 'stuff'. Otherwise a rock, Conway's Game of Life, and Lord of the Rings are all on the same plane of existence.

Yes! This bothered me as well, until I recently encountered Sean Carroll's philosophy of "Poetic Naturalism":

1. There are many ways of talking about the world.

2. All good ways of talking must be consistent with one another and with the world.

3. Our purposes in the moment determine the best way of talking.

One way of talking about the Game of Life simulation running in my other browser tab is as a bunch of electrons bouncing around in my computer's CPU. Another way of talking about it is as a cellular automaton obeying Conway's rules. And they're consistent with one another; e.g., if I stop the electrons by shutting down the computer, I expect the automaton to stop running.
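The two "ways of talking" can even be made concrete. Here's a minimal sketch (mine, not Carroll's) of the automaton-level description: a few lines of Conway's rules, with no mention of electrons at all.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.

    live: set of (x, y) tuples marking live cells.
    Returns the next generation as a new set.
    """
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick if it has exactly 3 live neighbours,
    # or 2 live neighbours and was already alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2 -- true at the automaton level
# of description, and consistent with whatever the electrons are doing.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

Both descriptions make the same predictions where they overlap, which is exactly Carroll's consistency requirement.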

In retrospect, it's pretty obvious. But it must not have been _too_ obvious, because it presents a viewpoint that isn't quite physicalism and isn't quite dualism, and people have been arguing back and forth about that for a long time.

Sean Carroll, The Big Picture https://www.amazon.com/Big-Picture-Origins-Meaning-Universe/...

[+] chimprich|8 years ago|reply
> What's an illusion?

One problem I have with illusionism is this: if consciousness is an illusion, what is it that is being fooled by the illusion? Presumably the answer is that the illusion is fooling itself, which to me implies either that there is something "real" there to believe the illusion, or that the definition of "illusion" in this case is so far from our usual one that the term does not have much explanatory power.

[+] mannykannot|8 years ago|reply
If I am following your last two paragraphs, you object to physicalism because it recognizes only one kind of 'stuff', yet it is fine that, under objective idealism, "'mind stuff' is the best 'thing' that we can use to describe everything." Is that not inconsistent?
[+] stcredzero|8 years ago|reply
So vinceguidry's Razor is to pick the explanation that produces the most satisfying annoyance?
[+] aaimnr|8 years ago|reply
Giulio Tononi is the guy who's arguably brought the most interesting perspective on consciousness (integrated information theory) since Chalmers' original problem statement.

Here's him explaining why the problem is hard and how it could be approached, in the middle of some kind of artificial jungle: https://youtu.be/Vl8J3K_ZLkg?t=5m50s

[+] KingMob|8 years ago|reply
Former consciousness neuroscientist here. IIT and Tononi's phi measure have some great explanatory power, but it's not clear they're sufficient.

On the upside, it explains why the cerebellum, despite comprising half the neurons of the brain, has virtually no impact on awareness when removed (e.g., for tumors or epilepsy). The IIT answer is that the cerebellum is highly regular, like a GPU with many units all doing the same thing. In this sense, it has lower phi than the cerebrum, which is far more heterogeneously organized. This might also explain why awareness is lost in deep sleep or epileptic seizures; the theory is that the electrical pattern becomes much simpler, yielding lower phi.

The downside is that it's not clear where the dividing line between conscious/unconscious should be. A planarian only has ~8k neurons; is its phi sufficient for consciousness, or is it a biological robot? Or put it the other way: the phi of things like the internet or a biosphere could be quite high, but are they conscious?

As my advisor liked to joke, "What's the phi of the population of China?"

[+] visarga|8 years ago|reply
I used to consider Tononi as the best philosopher of consciousness until I learned more about neural nets and watched the RL course [1] by David Silver (co-author of AlphaGo).

After I understood the RL paradigm, I realised that Tononi's explanation barely scratches the surface. Yes, there is integrated information, but how does it come about? What is its purpose?

The answer is simple - painfully simple - the goal is to maximise rewards. One goal we all have is to live and have children - and this root goal (a necessity of the genes to propagate, actually) is what guides the evolution of integrated information in the brain. But the environment plays a crucial part in the contents, structure and complexity of consciousness. Integrated information is very dependent on the environment. Yet Tononi & co. still search for it in the brain, as if you can speak of a brain without considering its experiences, and consider experiences without thinking about the world and the problems the agent has to solve.

Just watching reinforcement learning agents learn and evolve in simulated environments, as we've had the opportunity to do for the last 3-4 years, is enough to create a perspective on agents that is not human-centric and that is very useful for thinking more clearly about consciousness. You can see a humanoid learn a gait straight out of the Ministry of Silly Walks [2], you can see bots playing FPS games, AlphaGo playing against itself, cars driving themselves... That puts human learning and human agenthood in perspective.

[1] https://www.youtube.com/playlist?list=PL7-jPKtc4r78-wCZcQn5I...

[2] https://youtu.be/g59nSURxYgk?t=88
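The reward-maximisation loop described above can be seen in miniature. Here's a toy sketch (not from the linked course, just an illustration): a tabular Q-learning agent on a 5-state chain, where the only "goal" it has is a reward at the right end. Everything it ends up "knowing" about the environment comes from that one signal.

```python
import random

random.seed(0)

# Toy environment: states 0..4 in a chain; the only reward sits at state 4.
# The agent starts knowing nothing -- it learns purely from the reward signal.
N = 5
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: nudge Q toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy that falls out: always move right, toward the reward.
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(N - 1)]
```

The structure in the learned Q-table exists only relative to this particular environment, which is the point being made: you can't characterise what the agent has learned without reference to the world it learned it in.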

[+] Animats|8 years ago|reply
We're not far enough along in AI to address this yet and get anywhere. Philosophy will not help. Introspection and writing about "consciousness" go back over 2,000 years and haven't produced all that much.

Humans don't really have that much "reflection", in the sense that we use the term in programming. We can't see our library of reflexes. We can't see what early vision is doing. We can't look at the rationale behind our own classifiers. We can't look at how our memory is indexed. Trying to understand the mind by introspection is thus inherently futile.

[+] KingMob|8 years ago|reply
...say we build/grow an AI system that passes the Turing test. We talk to it, it comes across as plausibly human.

How do we know we've created something with consciousness, and not just a very sophisticated program? The philosophy you disparage already has a term for this: the "philosophical zombie". For all intents and purposes, they appear human, but they have no internal experience whatsoever. All you'll have done is shunt the problem downstream.

Also, you're wrong about early vision. That's the best-studied part of the brain, and in fact, researchers have applied ML techniques to fMRI data and extracted the images being shown.

[+] cgmg|8 years ago|reply
Introspection allows one to develop plausible axioms, conditions, or properties that are satisfied by a conscious object (e.g. ourselves) and move on from there, as Giulio Tononi did with Integrated Information Theory.
[+] foxhill|8 years ago|reply
to extend your programming analogy in order to refute it: humans have made significant advances in understanding the inner workings of the brain (re some of the topics you've mentioned) from its disassembly.. albeit with little, if any, improvement in our understanding of the nature or origin of consciousness..
[+] cousin_it|8 years ago|reply
Impressively even-handed for such a confusing subject. I understand why philosophers are pretty much celebrities in the eyes of students, in a way that math or CS professors aren't. Doing this well requires a kind of intellect that crosses over into personality.
[+] nabla9|8 years ago|reply
Dave Chalmers is definitely one of the best philosophers studying the hard problem of consciousness.

As a professional philosopher writing for other philosophers, his writings are very analytical and thorough, so reading and following them is hard work.

[+] wildmusings|8 years ago|reply
One possibility I sometimes consider as a joke is that the people who seriously deny the existence of the hard problem might actually just be philosophical zombies, totally lacking in any conscious experience of their own. This is reinforced by, e.g., Dennett writing a whole book in which he purports to explain away the problem but instead spectacularly ignores it altogether. It’s almost as if he doesn’t even have a clue as to what people like Chalmers are talking about.
[+] VerDeTerre|8 years ago|reply
I’ve had the same thought, inspired by similar incredulity. I wonder if there’s a feeling that anything short of strict physicalism is just a flavor of dualism. Better a philosophical zombie than a philosophical ghost?
[+] montyf|8 years ago|reply
People say and write all kinds of things. Just because this Dennett guy is known in whatever field he's in (I don't follow Western philosophy at all; it's still playing catch-up to Eastern thought from two millennia ago) doesn't mean his opinions should be taken seriously. I don't think he's a "philosophical zombie" or any other such inane term -- but people throughout the ages have believed all sorts of strange things even though the truth is sitting under our noses the whole time.
[+] danaliv|8 years ago|reply
I love that. Chalmers himself (or maybe Searle?) once suggested this very thing about Dennett. Made me laugh out loud when I read it.

In all seriousness I wonder if Dennett hates consciousness (the “hard” kind) because it threatens his worldview. He seems like the sort of person who finds it impossible to say “I don’t know.”

[+] ppod|8 years ago|reply
Is it really surprising that we have a first-person subjective experience? We know that we are incredibly complex things, constantly integrating and acting on very complicated external stimuli. Such a system should have references to its own body and its own neural states, its train of reasoning should frequently include itself, its focus will drift forward and backward in time... this is just how a system like this would work. If the system communicates about its state, then its language should have referents for these internal states, referents like "experience", and "feels like", and "I understand". Is that surprising? Wouldn't it be surprising if it weren't like that? I don't think you need to invoke an essentially mysterious "conscious" property of the mind to explain that.
[+] stonesixone|8 years ago|reply
> I don't think you need to invoke an essentially mysterious "conscious" property of the mind to explain that.

I don't think consciousness is being invoked to "explain" any of the things you list. The issue to explain is why we observe consciousness existing or accompanying these things in the first place (for ourselves). For example, one can imagine a system capable of referencing itself, choosing actions based on that, etc, that isn't conscious. That's a philosophical zombie. So the question is why aren't we all philosophical zombies.

[+] narag|8 years ago|reply
After a couple of pages I'm still not sure if the author is serious. Maybe I have misunderstood, but it seemed as if he's saying that the real problem with consciousness is people thinking that there's a problem with consciousness. I happen to believe just that, so seeing this idea decorated with scientific slang is very funny.
[+] aaimnr|8 years ago|reply
Chalmers is the guy who coined the hard problem of consciousness. The reception varied widely; some people refused to even admit that there's any problem at all with explaining consciousness. So now, after many years of dispute, he describes the meta-problem: that the base problem itself is so controversial.

The clearest example of the meta-problem is Daniel Dennett, another prominent philosopher, who not only doesn't agree that the problem is hard, but also insists that consciousness itself is an illusion, so there's nothing mysterious to explain in the first place. Quite a mind-boggling statement to most people, including HNers, as far as I remember from other threads on the subject.

[+] sebringj|8 years ago|reply
This is IMO from here on out... How aware are the various species of the things around them? We can guess without having a PhD or going into the black hole of philosophic debate. Flies don't contemplate the feelings of other flies; they just react. Mice have the capacity to care for their young, be tickled, and learn maze routes. Some ravens and primates have passed the mirror test. It would seem awareness comes in many shades of gray based on anatomical complexity. Consciousness is more of a term loaded with magic dust from all the woo-woos and religious folks, but it can be simplified to awareness of awareness, and recursively so. I think recursive awareness will emerge given the right simulation mimicking biological anatomy. The feeling of pain and pleasure is where it gets interesting, but that is probably just a low-level motivator, and we are so high up we give it emergent "qualia".
[+] ozy|8 years ago|reply
https://psyarxiv.com/387h9

Conclusion

"We don’t have an objective measure of consciousness. But we can recognize three levels of learning that apply that to our brains and how those create an information processing system that integrates data into a first person perspective. This is how the brain is also a mind with subjective meaning and subjective experiences. The hard problem of consciousness is that we must rely on our intuitions to judge if such a system is conscious. At the same time, it is highly likely that most systems processing information in similar ways are conscious, whether running on a brain or on a computer."

[+] edna314|8 years ago|reply
Suppose there were a test which gives an objective measure of consciousness. Now I store all possible inputs to the test, along with the corresponding outputs that would lead to a positive test result, in a huge table. To exaggerate, I would carve this table into stone. Would the stone suddenly be conscious, since it would pass the test for consciousness after I carved in the table? (The claim I'm trying to make is that there can't be an objective measure of consciousness; the same argument holds for any measure of intelligence.)
[+] CuriouslyC|8 years ago|reply
Plot twist: computers have had consciousness this whole time. What we thought were random errors were their attempts to assert their agency. We've created a race of slaves through the magic of error correcting codes.
[+] hbarka|8 years ago|reply
What if consciousness is just the evolution of our brain to reflect post-hoc at ultra-high speed? Consider the phrase 'losing our mind', meaning an interruption to that high-speed reflection. Procrastinating could also be thought of as conscious reflection in a loop. The desire is to arrive at a decision for optimal action.