top | item 16363449


aaimnr | 8 years ago

Giulio Tononi is the guy who's arguably brought the most interesting perspective on consciousness (integrated information theory) since Chalmers' original problem statement.

Here he is explaining why the problem is hard and how it could be approached, in the middle of some kind of artificial jungle: https://youtu.be/Vl8J3K_ZLkg?t=5m50s



sbierwagen | 8 years ago

Note that IIT was heavily criticized by Scott Aaronson for producing largely nonsensical results when applied to a simple square grid of XOR gates: https://www.scottaaronson.com/blog/?p=1823 https://www.scottaaronson.com/blog/?p=1799

It is clear to me that, whatever it is we're talking about when we're talking about consciousness, an expander graph doesn't have it.

visarga | 8 years ago

IIT is missing the self-replication requirement. If a system is a self-replicator, it needs resources and has to avoid dangers. This in turn creates a necessity for perception and the ability to select good actions depending on the situation. A square grid of XOR gates has none of that.

KingMob | 8 years ago

Former consciousness neuroscientist here. IIT and Tononi's phi measure have some great explanatory power, but it's not clear they're sufficient.

On the upside, it explains why the cerebellum, despite comprising half the neurons of the brain, has virtually no impact on awareness when removed (e.g. for tumors or epilepsy). The IIT answer is that the cerebellum is highly regular, like a GPU with many units all doing the same thing. In this sense it has lower phi than the cerebrum, which is organized far more heterogeneously. This might also explain why awareness is lost in deep sleep or epileptic seizures: the theory is that the electrical pattern becomes much simpler, with lower phi.

The downside is that it's not clear where the dividing line between conscious and unconscious should be. A planarian has only ~8k neurons; is its phi sufficient for consciousness, or is it a biological robot? Or, put the other way: the phi of things like the internet or a biosphere could be quite high, but are they conscious?

As my advisor liked to joke, "What's the phi of the population of China?"

visarga | 8 years ago

> "What's the phi of the population of China?"

Small, because if you cut it into 100 pieces, you still get 100 functioning parts. You can't cut the brain into 100 pieces and still get functioning mini-brains.
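That cut intuition can be sketched in code. The toy score below is emphatically not Tononi's actual phi (which is far more involved); it just measures how much a boolean network's dynamics change when the connections between its two halves are severed. Both update rules and the scoring convention are invented for illustration: a modular system (each bit evolves on its own, like "the population of China") scores zero, while a tightly coupled XOR ring pays a real penalty when cut.

```python
import itertools

def step_integrated(bits):
    # ring of XORs: each bit becomes XOR of itself and its right neighbour,
    # so every bit's future depends on another bit (cross-talk everywhere)
    n = len(bits)
    return tuple(bits[i] ^ bits[(i + 1) % n] for i in range(n))

def step_modular(bits):
    # each bit just flips itself: no interaction between parts at all
    return tuple(b ^ 1 for b in bits)

def cut_penalty(step, n=4):
    # crude irreducibility score: compare the true dynamics against "cut"
    # dynamics, where each half is updated as if the other half were frozen
    # at zero; count the fraction of next-state bits that disagree
    disagreements, total = 0, 0
    half = n // 2
    for bits in itertools.product((0, 1), repeat=n):
        true_next = step(bits)
        left = step(bits[:half] + (0,) * (n - half))[:half]
        right = step((0,) * half + bits[half:])[half:]
        cut_next = left + right
        disagreements += sum(t != c for t, c in zip(true_next, cut_next))
        total += n
    return disagreements / total

print(cut_penalty(step_modular))     # 0.0: cutting changes nothing
print(cut_penalty(step_integrated))  # > 0: the whole is not the sum of its halves
```

A system you can cut without changing its behaviour is, on this toy score, not integrated at all; the interesting (and contested) question is whether a high score tracks anything like consciousness.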

21 | 8 years ago

> The IIT answer is that the cerebellum is highly regular, like a GPU having many units, but all doing the same thing.

Isn't the cortex also the same unit (the cortical column) repeated over and over again?

visarga | 8 years ago

I used to consider Tononi the best philosopher of consciousness until I learned more about neural nets and watched the RL course [1] by David Silver (co-author of AlphaGo).

After I understood the RL paradigm, I realised that Tononi's explanation barely scratches the surface. Yes, there is integrated information, but how does it come about? What is its purpose?

The answer is simple - painfully simple - the goal is to maximise rewards. One goal we all have is to live and have children - and this root goal (a necessity of the genes to propagate, actually) is what guides the evolution of integrated information in the brain. But the environment plays a crucial part in the contents, structure and complexity of consciousness. Integrated information is very dependent on the environment. Yet Tononi & co. still search for it in the brain, as if you could speak of a brain without considering its experiences, and consider experiences without thinking about the world and the problems the agent has to solve.
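The reward-maximisation paradigm is easy to make concrete. A minimal sketch (a two-armed bandit with epsilon-greedy value learning; the payoff probabilities and hyperparameters are arbitrary choices for illustration, not anything from Silver's course) shows structure - a preference for the better arm - emerging from nothing but a scalar reward signal:

```python
import random

random.seed(0)

# Two-armed bandit: arm 1 pays off more often. The agent starts with no
# preference and learns to favour the better arm purely from reward.
PAYOFF = [0.3, 0.8]          # probability each arm yields reward 1
q = [0.0, 0.0]               # estimated value of each arm
alpha, epsilon = 0.1, 0.1    # learning rate, exploration rate

for _ in range(5000):
    # mostly exploit the current best estimate, occasionally explore
    arm = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    reward = 1 if random.random() < PAYOFF[arm] else 0
    q[arm] += alpha * (reward - q[arm])   # move estimate toward observed reward

print(q)  # q[1] should end up well above q[0]
```

Nothing in the loop knows what the arms "mean"; the asymmetry in q is created entirely by the environment's reward statistics, which is the point being made about integrated information above.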

Just watching reinforcement learning agents learn and evolve in simulated environments, as we've been able to do for the last 3-4 years, is enough to form a perspective on agents that is not human-centric and that is very useful for thinking more clearly about consciousness. You can see a humanoid learn a gait straight out of the Ministry of Silly Walks [2], bots playing FPS games, AlphaGo playing against itself, cars driving themselves... That puts human learning and human agenthood in perspective.

[1] https://www.youtube.com/playlist?list=PL7-jPKtc4r78-wCZcQn5I...

[2] https://youtu.be/g59nSURxYgk?t=88

ozy | 8 years ago

Reinforcement-based learning requires self-observation, especially when done with predictive modeling. The brain clearly does both. You might like this paper: https://psyarxiv.com/387h9

aaimnr | 8 years ago

What does learning have to do with consciousness? They are orthogonal issues. That's the whole point of Chalmers' argument.