top | item 39127028

Brains are not required to think or solve problems – simple cells can do it

426 points | anjel | 2 years ago | scientificamerican.com | reply

396 comments

[+] iandanforth|2 years ago|reply
There are a couple traps to be aware of with this article.

1. "Bioelectricity"

This is a generic term which doesn't capture the nuance of charge gradients and chemical gradients in cells. While you can directly apply charges to interact with gradient-based biological systems, this is a brute-force method. Cells have chemically selective membranes. So while applying an external voltage can act in a similar manner to causing a neuron to fire, it is far less precise than the calcium- and sodium-channel-mediated depolarization which implements normal firing. Said another way, 'bioelectricity' is not simple.
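To make the "depolarization to a threshold" idea concrete, here's a toy sketch using the standard leaky integrate-and-fire model (a textbook abstraction, not anything from the article): the selective ion channels are collapsed into a single membrane potential that leaks over time and fires when it crosses a threshold.

```python
# Toy leaky integrate-and-fire neuron (standard textbook abstraction).
# All parameter values here are illustrative, not biological measurements.

def simulate(inputs, threshold=1.0, leak=0.9):
    """Return a spike train for a sequence of input currents."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current    # membrane potential leaks, then integrates input
        if v >= threshold:        # depolarization reaches firing threshold
            spikes.append(True)
            v = 0.0               # reset after the spike
        else:
            spikes.append(False)
    return spikes

# Sub-threshold inputs accumulate until the neuron finally fires:
print(simulate([0.5, 0.5, 0.5]))  # [False, False, True]
```

An externally applied voltage, by contrast, would be like forcing `v` past `threshold` directly, bypassing the channel dynamics entirely, which is the "brute force" point above.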

2. Replacement

This one is a bit more subtle. If you find that you can affect a system by one means that is not the same thing as saying the means is the cause. Take the example of using RNA to transfer memory from one Aplysia to another. Immediately after transfer the recipient does not have the memory. It takes time for the introduced RNA to affect sensory cells so that they become more sensitive to stimulation. This is in contrast to a trained animal that has already undergone synaptic remodeling. If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.

In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions. Because of this complexity you can never say 'it's really about X', the best you can say is 'X plays a major role' or 'X contributes Y percent to this observed phenomenon'.

[+] daveguy|2 years ago|reply
> Said another way 'bioelectricity' is not simple.

> If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.

I'm not sure these two statements are compatible. The first is definitely true, and RNA does function on a slower timescale. But we can't be 100% confident that some of the complexity we don't understand in the first statement wouldn't have an impact in the second scenario, can we?

[+] dekhn|2 years ago|reply
I am not sure I would call RNA transferring regulatory programs "memory". This looks more like epigenetic transfer than what we would call memory (i.e., factual recall). My training was before the more recent work with Aplysia, but "RNA memory transfer in planaria" was presented as an example of "how to make big claims with irreproducible experiments" in grad school.

I appreciate that epigenetics is a well-established field at this point but I worry people conflate its effects with other phenomena.

[+] RaftPeople|2 years ago|reply
> This is in contrast to a trained animal that has already undergone synaptic remodeling. If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.

Not if you removed the DNA. Epigenetic changes to the DNA are what maintain the synapse at its "learned" state. Here's a link:

https://www.sciencedirect.com/science/article/pii/S240584402...

In addition, research has shown neurons communicating via mRNA (enclosed in a lipid envelope).

https://www.nature.com/articles/d41586-018-00492-w

https://www.inverse.com/article/40113-arc-protein-ancient-mo...

Lots of interesting stuff in this arena.

[+] nickpsecurity|2 years ago|reply
I also want to know how much of this was replicated by independent, skeptical sources looking for alternative explanations. One thing I see in “science” reporting is that one or a few people make wild claims, it hits the news, and people believe their word on faith with no replication. There are also many statements about what we know whose claims should have citations, too. Yet people who have never run experiments like that are nodding along saying, “Of course it’s true.”

Or was all this replicated? What strengths and weaknesses did they hypothesize in these studies? What did they prove or disprove? What are the next steps? And can we already implement any of those in simulators?

(Note: I think agents poking and prodding the world can definitely be implemented in simulators. Even primitive game engines should be able to model some of that.)

[+] eurekin|2 years ago|reply
Where can one learn about this in more detail?
[+] generalizations|2 years ago|reply
> In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions. Because of this complexity you can never say 'it's really about X', the best you can say is 'X plays a major role' or 'X contributes Y percent to this observed phenomenon'.

You can say the same thing about computer systems - as long as you don't understand the underlying logic. If you don't understand that the chemistry of transistors doesn't matter as much as the C code, you can say exactly the same critique about how a thinkpad works: "So while applying an external electrical voltage can act in a similar manner as causing a neuron to fire, it is far less precise than the calcium and sodium channel mediated depolarization which implements normal firing. Said another way 'bioelectricity' is not simple....In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions."

Once you do understand the logic - the 'why' of von Neumann machines and Javascript and transistors - it's clear that the claim isn't true and there is an underlying logic. The trouble is, until we positively identify that logic, we can't know whether it exists, and we're stuck debating the bio-equivalent of the fundamental computational significance of a CPU's clock speed.

[+] agumonkey|2 years ago|reply
Interesting to see Levin's zeitgeist spreading (though the sheer number of podcasts and discussions he's done helps explain that too).

I don't know what the biological/medical field thought about single-cell and tissue-level intelligence before, but I found this gap in the usual medical thinking (usually things are either genetic or biochemical/hormonal) quite mind-blowing.

Hopefully this results in new opportunities for finer medical therapies.

[+] Arjuna144|2 years ago|reply
This is just incredible! I have followed Michael Levin for quite a while now and I am sure that he will earn a Nobel Prize for this outstanding research! All the other things that he addresses in his presentations and interviews are just mind-blowing! (The one with Lex Fridman is quite in-depth, but I prefer others even more.)

This really has the potential to revolutionize our understanding of intelligence, mind and medicine. He may be able to simply tell cells to grow a new heart without modifying genes. He wants to build what he calls an 'anatomical compiler', which translates our "designs" into electromagnetic cell stimuli so that the cells will build them.

For me this is really pointing toward a worldview that is much more in line with what the ancient mystics from all cultures throughout the ages have been pointing towards: intelligence is something fundamental to existence, like space and time (maybe even more fundamental). It is all a play of intelligence, it is phenomenal and it can be tapped into. This is amazing!!!

[+] teekert|2 years ago|reply
I've been listening a lot to Sean Carroll's Mindscape podcast [0]. In it they have this notion of complex-to-intelligent systems. Their loose definition is that such systems can hold an internal state that represents the world around them: a sort of model to interact with and to extrapolate future events from (time travel!). In this light consciousness also makes more sense to me, although consciousness feels more like a by-product; our (human) ability to hold an internal model of the world in our minds and interact with it is pretty advanced. One can imagine that somehow, in the feedback loops (I think, that she thinks, that I think, that she thinks, ...), something like consciousness (awareness [a model?] of the self in the world?) evolved.

Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.

I'm just a cocktail philosopher, but aren't we all.

[0] https://podverse.fm/podcast/e42yV38oN

[+] mewpmewp2|2 years ago|reply
But still - why is consciousness required? A model of the world could be held even without it, in my view.

E.g., I wouldn't think GPT-4 is conscious, but I'm pretty sure there's a representation of an abstract world and the relationships within it encoded in the neurons and weights. Otherwise it wouldn't be able to do much of what it does.

Also, I think a model of the world is just that - something that can be represented as relationships between neurons, symbolising that model of the world.

And I think there could be a perfect set of neurons and connections that represents everything in the most efficient manner for that size of parameters (neurons and connections together). There probably is a perfect configuration, but it couldn't be achieved using training or evolutionary methods.

And none of it requires consciousness, in my view.

[+] beambot|2 years ago|reply
A thermostat is a system that can hold an internal state (nominally, temperature) that represents the world around it. You can also build a thermostat with a switch and a bimetallic strip with differing rates of thermal expansion -- a device that is clearly not intelligent. I'm not sure I can subscribe to this definition...
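A toy sketch of that objection (all names hypothetical): two "thermostats" with identical observable behavior, one that stores a reading of the world and one that is a pure stateless threshold, which is roughly the bimetallic strip.

```python
# Two behaviorally identical thermostats. If "holds internal state that
# represents the world" is the bar for intelligence, only one clears it,
# yet neither seems intelligent.

def bimetallic_strip(temp_c, threshold=20.0):
    """Stateless: the 'representation' is just the strip's geometry."""
    return "heat_on" if temp_c < threshold else "heat_off"

class Thermostat:
    """Stateful: stores a reading of the world, however crude."""
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint
        self.last_reading = None  # internal state 'representing' the room

    def sense(self, temp_c):
        self.last_reading = temp_c

    def act(self):
        return "heat_on" if self.last_reading < self.setpoint else "heat_off"

t = Thermostat()
t.sense(18.0)
assert t.act() == bimetallic_strip(18.0) == "heat_on"
```

Same inputs, same outputs; the definition would have to distinguish them on internal structure alone.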
[+] harha_|2 years ago|reply
You can say that. You can say a lot of things to explain consciousness in a materialistic sense, as in how it could've emerged. But I cannot fathom how matter interacting with other matter and forces gives rise to subjective experience. It simply makes no sense to me. If I create a copy of my brain, it would be conscious, but with its own unique subjective experience. That makes sense so far, but what exactly is this subjective experience, and how can "mere" mechanical matter create such a thing?

So in short: I cannot understand what is the actual substance of subjective experience.

[+] wouldbecouldbe|2 years ago|reply
We have a deeply held belief that the atom is the core of reality.

And everything emerges from there.

This materialism stems from René Descartes and his fellow philosophers.

And in the West it's often subconsciously combined with evolutionary theory: consciousness developed because it was useful somehow. However, that's a very big leap to make.

Both theories have good arguments going for them but are very theoretical and need a lot more proof. Yet they form the basis for pretty much all Western thought.

From a scientific perspective we have no idea how to create new consciousness or what it is.

From a human's experience it's more the other way around, reality is an emerging property of consciousness.

At the same time we also learned that matter & time is not as solid as we thought a few centuries ago.

[+] indigochill|2 years ago|reply
> Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.

I'm not even a cocktail biologist, but my understanding is that cells effectively operate via a web of complex chemical reactions. So the notion of a cell holding primitive models might be analogous to the way a CPU executes an assembly instruction: not because it "thinks", but because of the way it's wired, it's nearly inevitable that it will react to a stimulus in a predefined way (barring solar radiation, I suppose, which incidentally also affects cells), even though the way cells react to stimuli is far more advanced than a CPU.

In a similar way, "anticipating events" could involve an analogue to computer memory: the processes that have run so far have led to certain state being saved to memory that will now influence how the system reacts to stimuli in a way that's different from how it reacted before (e.g. sum a value with the value stored in a register).
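That register analogy can be sketched in a few lines (a hypothetical toy, not a cell model): the same stimulus produces a different response depending on state written by earlier "processes".

```python
# Hypothetical sketch of the register analogy: saved state makes the
# system's response history-dependent, without anything like "thinking".

class Cellish:
    def __init__(self):
        self.register = 0  # state accumulated from past stimuli

    def stimulate(self, value):
        self.register += value    # 'memory' write, like summing into a register
        return self.register      # response depends on history, not just input

c = Cellish()
first = c.stimulate(5)    # 5
second = c.stimulate(5)   # 10: identical stimulus, different response
assert (first, second) == (5, 10)
```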

[+] stcredzero|2 years ago|reply
> Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.

I've occasionally run into science podcasts, going back almost a decade, where some researcher talks about the computational power of cell membranes. Amoebas and paramecia navigate their environments, sense, and react through their cell membranes. Apparently, synapses evolved from these mechanisms.

The upshot of this for AI is that the neural network model may be drastically incomplete, with far more computation actually happening inside individual neurons.

[+] jebarker|2 years ago|reply
I'm also a cocktail philosopher, but isn't consciousness different to just having a model of the world and self within it? Consciousness is the lived experience. The world model and feeling of self appear in consciousness. I think a complex system could plausibly be conscious without having a belief of a self within it. Not sure if consciousness is possible without any world model though.

My impressions about this were strongly influenced by Sam Harris's Waking Up book and app.

[+] FrustratedMonky|2 years ago|reply
As another cocktail philosopher.

I think everyone should ponder this when thinking about how they think - whether they are the one doing the thinking at all.

"Man can do what he wills but he cannot will what he wills.” ― Arthur Schopenhauer, Essays and Aphorisms

[+] bcherny|2 years ago|reply
This is one of Hofstadter’s big ideas that he explored in his main work: GEB, Mind’s I, and I am a Strange Loop. The latter is a good intro to his work.
[+] wrycoder|2 years ago|reply
The particular podcast didn’t come across with that link. Can you provide the title or number? I’d like to listen to it! I reviewed a fair amount of the podcast list but didn’t find a match to your description.
[+] rolisz|2 years ago|reply
Joscha Bach also talks about this a lot. He calls consciousness the monkey with a stick controlling the elephant. For a starting point, listen to his Lex Fridman interviews.
[+] bookofjoe|2 years ago|reply
What we call consciousness may have the same relationship to what creates it as Plato's cave shadows have to what generates them.
[+] Etheryte|2 years ago|reply
Not everyone is a philosopher with a cocktail, but surely we're all cocktail philosophers.
[+] moffkalast|2 years ago|reply
> A sort of model to interact with and to extrapolate future events from

Something something LLMs can only predict the next word.

I hate to spin up this trendy debate again, but it's always funny to me to see the dissonance when talking about the exact same things in biological and mathematical cases.

[+] lkadjal|2 years ago|reply
> In this light consciousness also makes more sense to me, although consciousness feels more like a by-product, our (human) ability to hold an internal model of the world in our minds and interact with it, is pretty advanced.

You can generate all kinds of sentences like this in your consciousness all day. That does not make them true.

There is zero evidence for existence of physical matter/materialism.

The only thing we know for sure that exists is consciousness.

And you suggest the complete opposite with zero evidence.

[+] naasking|2 years ago|reply
Brains are not required to solve problems, yes, but they are required to think. That's one of their defining characteristics. It's not a thought without something like a brain, at best it's a pre-programmed/pre-trained behavioural response.
[+] Arjuna144|2 years ago|reply
> "... but they are required to think"

Let me humbly suggest that you not make such (Truth) statements! I don't know of any hard evidence that supports this. I know this is what most people believe, but the focus is on 'believe'.

[+] dimal|2 years ago|reply
That’s a misunderstanding of what they’re saying. If you watch some of Michael Levin’s talks on YouTube, he specifically uses William James’ definition of intelligence (a fixed goal with variable means of achieving it) and has experimentally shown this capability at cellular scales. He shows how it cannot be pre-programmed behavior; there seems to be goal-directed behavior.
[+] FrustratedMonky|2 years ago|reply
This is pretty similar to a concept in "Children of Time" by Adrian Tchaikovsky.

I've always thought the book's concept of 'DNA' memory storage was sci-fi. Cool concept, but really far out. So it's pretty exciting that this sci-fi concept could happen.

What if we could drink something to gain the memories of someone else? This could be a way to drink a 'degree' and learn a ton, fast.

"Glanzman was able to transfer a memory of an electric shock from one sea slug to another by extracting RNA from the brains of shocked slugs and injecting it into the brains of new slugs. The recipients then “remembered” to recoil from the touch that preceded the shock. If RNA can be a medium of memory storage, any cell might have the ability, not just neurons."

[+] inglor_cz|2 years ago|reply
Michael Levin is a rare example of a scientist who really thinks outside the box and goes wherever few have gone before.
[+] apienx|2 years ago|reply
> “Indeed, the very act of living is by default a cognitive state, Lyon says. Every cell needs to be constantly evaluating its surroundings, making decisions about what to let in and what to keep out and planning its next steps. Cognition didn't arrive later in evolution. It's what made life possible.“

Yes. Cognition isn’t just about solving differential equations and the like. It also refers to the most basic functions/processes such as perception and evaluation.

[+] DrStormyDaniels|2 years ago|reply
Are perception and evaluation basic functions? By analogy with cellular life, maybe. But I think this abstraction hides more than it reveals.
[+] Narciss|2 years ago|reply
"All intelligence is really collective intelligence, because every cognitive system is made of some kind of parts" - that's exactly the basis for the popularity theory of consciousness, which holds that not only humans are conscious (and plants and other animals, etc.), but that global human society can also have a sort of consciousness.

https://consciousness.social

[+] efitz|2 years ago|reply
This is great news given the relative scarcity of brains among humans.
[+] dspillett|2 years ago|reply
My reading (caveat: not a biologist nor a philosopher, some other sort of scientist) is that a brain is required to translate the environment and its collection of problems into something (or some things) that its simpler structures can “solve” (where “solve” could just mean “act usefully in response to”, and that act/response could be to ignore), and then to translate any responses back out to that more complex environment.

Cells can solve problems in their limited context, though that context can be less limited than you might first think (consider single celled life can have relatively complex interactions). Groups of cells can solve more complex problems, by working directly together or by some acting as support structures while others do the solving. Complex bodies and brains build up in parts from there over time.

[+] saurabhpandit26|2 years ago|reply
Michael Levin is just incredible; he appears on a lot of podcasts on YouTube. His work on the collective intelligence of cells, xenobots and regeneration is just mind-boggling.
[+] photochemsyn|2 years ago|reply
Next stage in AI?

> According to Bongard, that's because these AIs are, in a sense, too heady. “If you play with these AIs, you can start to see where the cracks are. And they tend to be around things like common sense and cause and effect, which points toward why you need a body. If you have a body, you can learn about cause and effect because you can cause effects. But these AI systems can't learn about the world by poking at it.”

[+] nmstoker|2 years ago|reply
This is a little like the 60s experiment teaching what I believe were nematodes to arch their backs in response to a light shone by the researchers.

Those nematodes were ground up and fed to new, untrained nematodes, which then acquired the back-arching response.

Can't find the original paper but it was covered in the 1984 book The Science in Science Fiction.

[+] maxglute|2 years ago|reply
Feels like Peter Watts's Blindsight: consciousness is not needed for advanced problem solving, and may actually hinder it.
[+] Sparkyte|2 years ago|reply
Brains handle complex tasks by linking together series of simple problems, each handled by simple cells. It is a network.