top | item 16370680

aaimnr | 8 years ago

What does learning have to do with consciousness? These are orthogonal issues. That's the whole point of Chalmers argument.


visarga | 8 years ago

Well, it has a lot to do with it. We're not born with fully functional minds; we learn our mental skills as we grow. Learning shapes the very concepts we use for representing sensations and thinking. Consciousness is not something 'secreted' in the brain; it's the loop made of 'agent + environment', where the purpose is to maximise rewards. There is no consciousness in itself, just consciousness of something. Learning is what ties together agent and environment; it's the building force of consciousness.

And Chalmers is a dualist who believes there are two realms that can't be explained, and that's ridiculous in this day and age. He's the worst philosopher of consciousness because he led a generation astray with sterile dualist concepts. And where has he led philosophy? Nowhere. There was no insight, no discovery after the "hard problem" because, darn, it's "hard", which is just another word for dualism today.

I take Tononi and Dennett over Chalmers any day, but I prefer Reinforcement Learning over all of them as my intuition pump with regard to consciousness. Philosophy is mired in a swamp of bad concepts that are almost useless; it should just use learning-based terminology, which is so much more effective. Engineers and experimental scientists create bots that beat humans at Go, a game that can't be brute-forced, and they don't realise they've been outrun in their 2000-year marathon by a hundred-year-old concrete approach. The difference is that RL has the right concepts, while philosophy uses extremely refined but ultimately useless ones. They've realised words don't mean anything in the absolute sense (they all rely on each other, cyclic referential) and are just part of a game, but are still neck deep in useless words instead of using evolutionary and RL concepts to concretely model consciousness and the game it plays.
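The agent-plus-environment loop described above is the standard RL setup: the agent acts, the environment answers with an observation and a reward, and learning nudges the agent's value estimates toward higher reward. A minimal self-contained sketch using tabular Q-learning on a toy corridor world (all names here, like `GridEnv` and `train`, are illustrative, not from any library the commenters mention):

```python
import random

class GridEnv:
    """A toy 1-D corridor: the agent starts at position 0 and gets
    reward +1 for reaching position 3, which ends the episode."""
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action is -1 (left) or +1 (right); walls clamp the position to [0, 3]
        self.pos = max(0, min(3, self.pos + action))
        reward = 1.0 if self.pos == 3 else 0.0
        return self.pos, reward, self.pos == 3

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    actions = (1, -1)  # listed right-first so greedy ties break toward 'right'
    q = {(s, a): 0.0 for s in range(4) for a in actions}  # tabular Q-values
    env = GridEnv()
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy: usually exploit learned values, sometimes explore
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)
            # Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted best value of the next state
            best_next = max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
```

After training, the learned values prefer moving right at every position (e.g. `q[(0, 1)] > q[(0, -1)]`), which is the "agent shaped by its environment's rewards" idea in its simplest form.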

aaimnr | 8 years ago

"They've realised words don't mean anything in the absolute sense (they all rely on each other, cyclic referential) and are just part of a game, but are still neck deep in useless words instead of using evolutionary and RL concepts to concretely model consciousness and the game it plays."

That's full-on nihilistic postmodernism. The fact that words mean something only in reference to other words doesn't have to mean that they are useless. Quine and other pragmatists (Buddhism does the same) argued otherwise: that concepts/theories derive their meaning or truth-value from how useful they are in the real world (as a network, rather than individually).

Treating all philosophers as one camp versus science is mistaken. Whatever any particular scientist or engineer says, there will always be some philosophical assumptions behind it. It's always better to make them explicit than to be in the dark about them. The best scientists in history were pretty deep into philosophy as well.

E.g. Tononi is both a philosopher and a scientist. He's clearly on Chalmers' side philosophically: he perceives consciousness as something fundamental, MUCH more fundamental than learning. He posits that even stable systems (so with no learning at all) can be conscious, which makes a lot of sense from a phenomenological point of view. He also adds a theory of how specifically consciousness may be causally related to the physical world. That's the scientific part.

Silver, on the other hand, and the whole RL field are not concerned with consciousness AT ALL! It's a completely different problem. Actually, it may be the case that most of the learning processes in the human mind are unconscious!

"There is no consciousness in itself, just consciousness of something. Learning is what ties together agent and environment..." Exactly - if you define learning as a relationship between a system and its environment, you don't need anything else (like cosciousness), just the actual and potential interactions.

Late Wittgenstein, Heidegger, Merleau-Ponty and others would be on the same page with you here, so again, let's not throw the baby of philosophy out with the bathwater. These observations were made in the first half of the 20th century. They apply perfectly to the naivete of old-school symbolic AI (and the logical positivist philosophical stance behind it), as captured by Hubert Dreyfus, who described all its problems from a philosophical (specifically phenomenological) standpoint in "What Computers Can't Do" and his more recent paper ( http://cspeech.ucd.ie/Fred/docs/WhyHeideggerianAIFailed.pdf ). RL seems to be a step in the right direction from this perspective. However...

"[Learning] ... it's the building force of consciousness."

Well, this part just doesn't make sense. You want to focus on explaining learning? Fine. Do some work on RL; it's enlightening for sure. I completely agree that it's fascinating how new concepts emerged in AlphaGo around some specific board configurations. It changed people's understanding of the game. But please, don't conflate it with consciousness. And if you do, be open about it and name your position in terms of Chalmers' recent paper. Is it some form of illusionism? Only then can we have a meaningful conversation about your actual position on what consciousness is.

Whatever the relationship between concepts and sensations, and however these two aggregates relate to each other and evolve in the mind, consciousness seems to be something more fundamental. Are you saying that AlphaGo is already conscious? If not, can it be made conscious? How? By adding more CPU? A webcam? We can't escape these questions.