felipeko | 5 years ago

By inner experience, we mean that there's a subject which gets to have perception of such models.

There's no need for such a subject to exist.

For example, most people assume that, so far, computers do not have any inner experience.

A computer could, conceivably, execute the same functions as our brain, yet have no inner experience of anything. Numbers get in, numbers get out, without any inner experience being needed.

keymone | 5 years ago

Sure, just as a thermostat doesn’t experience the feeling of temperature, there is no need for a human to feel it. So I understand that we need to explain why it is that we still feel the feeling rather than just observing the signal and reacting.

So why is it wrong to explain this by the necessary recursiveness of predictive modeling that includes modeling “self”? We observe the temperature, but we also observe ourselves observing the temperature. The first is the signal; the second is the introspection of the model evoked by that signal - the feeling.
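
(A toy Python sketch of that two-level picture, purely illustrative - the class names are invented, not from any real system. The first object reacts to the signal; the second observes the first object's state rather than the signal itself:)

    class Thermostat:
        """First order: observes the signal and reacts."""
        def __init__(self, setpoint):
            self.setpoint = setpoint
            self.last_reading = None  # state evoked by the signal

        def observe(self, temperature):
            self.last_reading = temperature
            return "heat_on" if temperature < self.setpoint else "heat_off"

    class Introspector:
        """Second order: observes the model observing."""
        def __init__(self, model):
            self.model = model

        def observe_self(self):
            # Not the raw signal, but the state the signal evoked in the
            # model - the analogue of observing ourselves observing.
            return {"setpoint": self.model.setpoint,
                    "last_reading": self.model.last_reading}

    t = Thermostat(setpoint=20.0)
    meta = Introspector(t)
    t.observe(17.5)             # first order: the signal
    print(meta.observe_self())  # second order: introspection of the model
    # -> {'setpoint': 20.0, 'last_reading': 17.5}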

unishark | 5 years ago

Yes, but where does this observer come from in the first place? Some theories do presume the thermostat experiences the temperature, just in a less sophisticated form of consciousness. If an "observer" is nothing more than neurons firing in response to stimuli, then it's not fundamentally different from the thermostat.

sooheon | 5 years ago

> So why is it wrong to explain this by the necessary recursiveness of predictive modeling that includes modeling “self”?

It's not "wrong"; it's just not parsimonious. A system can model itself without being conscious; any time you have state in a program, you are doing this.
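
(A minimal illustrative Python sketch of that point - the names are invented - where a trivial piece of state is already a self-model of sorts:)

    class Retrier:
        def __init__(self, max_attempts=3):
            self.max_attempts = max_attempts
            self.attempts = 0  # state: the program's record of its own behavior

        def run(self, action):
            while self.attempts < self.max_attempts:
                self.attempts += 1   # update the self-model...
                if action():
                    return True
            return False             # ...and act on it

    r = Retrier()
    print(r.run(lambda: False), r.attempts)  # -> False 3

Self-tracking state, with no inner experience implied.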

njarboe | 5 years ago

I like this idea. From my limited reading and understanding, the nervous system/brain and body seem to function as a huge number of feedback loops, where the nerves predict the body's response to a nerve-firing event, perform the event, and then compare what happened to the prediction. Moving your hand involves a huge number of feedback loops. A similar thing probably happens for abstract things like words. Building all these sub-models up into a model of the self seems like a natural progression and could be very evolutionarily beneficial, although with drawbacks as well (paralyzing self-doubt, depression, neuroticism, etc.).
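
(A toy Python sketch of that predict/act/compare loop - every gain and constant here is made up for illustration:)

    def reach(target, steps=8):
        position = 0.0
        model_gain = 0.9  # forward model: how the arm is believed to respond
        true_gain = 0.4   # how the arm actually responds (model starts wrong)
        for _ in range(steps):
            command = target - position
            predicted = position + model_gain * command  # 1. predict the outcome
            position += true_gain * command              # 2. perform the movement
            error = position - predicted                 # 3. compare to prediction
            if command != 0.0:
                model_gain += 0.5 * error / command      # 4. correct the model
            print(f"pos={position:.3f}  prediction_error={error:+.3f}")

    reach(target=1.0)

The prediction error shrinks as the internal model converges on the arm's actual response, which is the basic shape of the loop described above.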

jlokier | 5 years ago

It's only wrong because it's not really an explanation. At best, it's a weak one.

Saying that an active model of self "is" what we experience as consciousness doesn't tell us why we have that experience.

We can just as easily imagine a complex machine with an active self-model that isn't conscious as one that is.[1] So an active self-model doesn't tell us about consciousness. This shows them to be different concepts, not different names for the same concept, which means neither "is" the other, and "is" is not an explanation.

[1] (To be a little more picky, we can't imagine that if we insist they are the same thing, but that leads to circular reasoning here. Our questioner can imagine both, and for an explanation to explain it needs to address the question, not wave it away by offering something circular.)

It all sort of falls apart when we only talk about whether an object other than ourselves is conscious or not.

As far as we know[2], we can't distinguish consciousness of other objects by observation. A hypothetical non-conscious machine might tell us it is conscious; we will never know if it's GPT-3000 talking or if it's another being like ourselves. So eventually we'll probably decide that it's moot, and treat it as conscious if it behaves convincingly and consistently like it is.

[2] That could change; it's not ruled out.

But that doesn't deal with the "hard problem" of consciousness, which is ourselves.

For ourselves, we are in no doubt about the direct experience of our own consciousness. We might convince ourselves that it's just an active self-model, processing, because of how we think of data-processing machines these days. But we shouldn't: first because that's a weak explanation that doesn't explain, and second because there are other active self-models in the universe, and also in the much larger abstract realm of "unexecuted" self-models that could exist (pick an RNG seed and set of rules of your choice). We don't experience those, so the one(s) we do experience are notably distinct, for no obvious reason.