The author makes a good point that it's important to define what "a good simulation" means.
On one extreme, we cannot even exactly solve the underlying physics equations for single atoms beyond hydrogen, let alone molecules, let alone complex proteins, etc. etc. all the way up to cells and neuron clusters. So that level of "good" seems enormously far off.
On the other hand, there are lots of useful approximations to be made.
If it looks like a duck and quacks like a duck, is it a duck?
If it squidges like a nematode and squirms like a nematode, is it a [simulation of a] nematode?
(if it talks like a human and makes up answers like a human, is it a human? ;)
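For context on the "single atoms beyond hydrogen" point: hydrogen is the one atom whose Schrödinger equation has an exact closed-form solution, with bound-state energies E_n = -13.6 eV / n^2. A tiny sketch of just that one solvable case:

```python
# Hydrogen is the only atom with an exact closed-form solution to the
# Schrödinger equation; its bound-state energies are E_n = -Ry / n^2.
# Everything heavier already requires numerical approximation.
RYDBERG_EV = 13.605693  # Rydberg energy in electronvolts (CODATA, rounded)

def energy_level(n):
    """Energy of the n-th bound state of hydrogen, in eV."""
    return -RYDBERG_EV / n ** 2

levels = [energy_level(n) for n in (1, 2, 3)]  # ≈ -13.6, -3.40, -1.51 eV
```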
> If it looks like a duck and quacks like a duck, is it a duck?
ISTM that the answer is "in a way yes, in a way no".
Yes, in that we reasonably conclude something is a duck if it seems like a duck.
No, in that seeming like a duck is not a cause of its being a duck (rather, it's the other way round).
When we want to figure out what something is, we reason from effect to cause. We know this thing is a duck because it waddles, quacks, lays eggs, etc etc. We figure out everything in reality this way. We know what a thing is by means of its behavior.
But ontologically -- ie outside our minds -- the opposite is happening from how we reason. Something waddles, quacks & lays eggs because it is a duck. Our reason goes from effect (the duck's behavior) to cause (the duck), but reality goes in the other direction.
Our reasoning (unlike reality) can be mistaken. We might be mistaking the model of a duck or a robot-duck for a real duck. But it doesn't follow from this that a model duck or a robot-duck is a duck. It just means a different cause is producing [some of] the same effects. This is true no matter how realistic the robot-duck is.
So we may (may!) be able, in theory, to simulate a nematode, though the difficulty level must be astronomical; but that doesn't mean we've thereby created a nematode. The same seems true of attempting to simulate anything.
At least this is my understanding, I could be mistaken somewhere.
I think this is also one possible answer to the famous 'zombie' question.
> If it looks like a duck and quacks like a duck, is it a duck?
No, if it doesn't do everything else a duck does. You can have a robot dog, but you won't need to take it to the vet, feed it, sweep up its hair, let it go outside to go potty, put up a warning sign for the mailman, or take it for a walk. You can have a simulated dog do all those things, but then how accurate will the biological functions be in trying to model its physiology over time?
Will it give us insights into real dog psychology so we can better interact with our pets? Or does that need to happen with real dogs and real human researchers? Wildlife biologists aren't going to refer to simulated ducks to research their behavior in more depth. They'll go out and observe them, or bring them into the lab.
That's an incredibly narrow slice of properties of ducks, nematodes (and humans).
Is there truly so little that makes up the soul of a duck? No mention of laying eggs? Caring for its young? Viciously chasing children across the lawn of the local park? (I know that's usually the purview of geese, but I have seen ducks launch the occasional offensive against too-curious little ones.)
We really have no idea whether consciousness is something that can arise from computation, or whether it is somehow dependent on the physical substrate of the universe. Maybe we can create a virtual brain that, from the outside, is indistinguishable from a physical brain, and which will argue vociferously that it is a real person, and yet experiences no more conscious qualia than an equation written on a piece of paper.
> We really have no idea whether consciousness is something that can arise from computation, or whether it is somehow dependent on the physical substrate of the universe.
I don't understand this argument. How is the computer running the computation not part of the "physical substrate of the universe"? _Everything_ is part of the universe almost by definition.
I think an even simpler argument can be made: our brain develops in response to the physical stimulus we experience from birth (earlier even).
Basically, even if the brain is a simple computation engine, can we put a simulation through the stimuli our brains experience (not easily), and would the lack of those stimuli produce an entirely differently behaving system?
It's well known that it in fact isn't, otherwise learning would be impossible. Learning still isn't perfectly understood, but one key mechanism is likely the modulation of synaptic strength (the weights mentioned). Also, yes, every cell, and neurons in particular, is a very complex system, although synapses themselves have various simplifying properties (especially along the axon, electrical communication really is the main method of communication).
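A minimal sketch of "modulating synaptic strength": a Hebbian-style rule, which is a classic textbook simplification rather than a claim about the actual biological mechanism, strengthens a weight only where pre- and post-synaptic activity coincide.

```python
# Toy sketch of learning as weight modulation (a Hebbian-style rule; this is
# a classic simplification, not a claim about the biological mechanism).
def hebbian_step(weights, pre, post, lr=0.1):
    """One update: w[i][j] += lr * post[i] * pre[j]."""
    return [
        [w + lr * post[i] * pre[j] for j, w in enumerate(row)]
        for i, row in enumerate(weights)
    ]

weights = [[0.0, 0.0], [0.0, 0.0]]  # 2 inputs -> 2 outputs, all silent
pre = [1.0, 0.0]                    # only the first input is active
post = [1.0, 1.0]                   # both outputs fire
updated = hebbian_step(weights, pre, post)
# Only connections from the active input change: [[0.1, 0.0], [0.1, 0.0]]
```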
I love how it is just assumed that "we can" and "we will": a good way to confidently announce that you will burn human resources until you get to the point you want.
Meanwhile, the most advanced simulations are still rough approximations with little to no realism rather than "in these specific conditions, and with these specific neural arrangements I made artificially, it behaves similarly to a real nematode", a good way to make a self-fulfilling prophecy.
>In 2013, neuroscientist Henry Markram secured about 1 billion euros from the European Union to "simulate the human brain" — a proposal widely deemed unrealistic even at the time. The project faced significant challenges and ultimately did not meet its ambitious yet vague goals
Unfortunately, it's not that easy. Axon terminals of neurons release neurotransmitters. We know of dozens of different types, but are not certain that we know about all of them yet. The same synapse can release multiple different neurotransmitters too, with one or more released depending on the axonic signals. And what do these chemicals do? It depends! There are receptors on the post-synaptic cell that respond to neurotransmitters, but there can be multiple different receptors that respond differently to the same neurotransmitter. Again, we aren't sure we know about all of them. The post-synaptic neuron is probably also listening to neurons of other types that signal using different neurotransmitters, all of which it uses to determine whether it should transmit an action potential or not. Oh, and invertebrates (like nematodes) send graded potentials (not action potentials like us vertebrates usually do), where the signal strength can vary.
In short - we are a long way from being able to simulate a nervous system. Our knowledge of neuronal biochemistry is not there yet.
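The combinatorics described above (multiple transmitters per synapse, multiple receptors per transmitter, different responses per receptor) can be sketched in a few lines. Everything here is illustrative: the transmitter and receptor names are real, but the numeric effects are invented for the sketch, not measured values.

```python
# Illustrative only: transmitter/receptor names are real, but the numeric
# effects are made up. The point is that the postsynaptic response depends
# on which receptors are present, not just on what is released.
RECEPTOR_EFFECTS = {
    # (transmitter, receptor) -> contribution to membrane potential (arbitrary units)
    ("glutamate", "AMPA"): +2.0,          # excitatory
    ("glutamate", "NMDA"): +1.0,          # excitatory, slower kinetics
    ("GABA", "GABA_A"): -3.0,             # inhibitory
    ("acetylcholine", "nicotinic"): +1.5,
}

def postsynaptic_response(released, receptors):
    """Sum the effect of every released transmitter on every present receptor."""
    return sum(
        RECEPTOR_EFFECTS.get((t, r), 0.0)
        for t in released
        for r in receptors
    )

# The same release event affects two cells differently:
release = ["glutamate", "acetylcholine"]
cell_a = postsynaptic_response(release, ["AMPA", "nicotinic"])  # 3.5
cell_b = postsynaptic_response(release, ["NMDA"])               # 1.0
```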
There's a genuine question of whether fully simulating a brain will be enough. We have several hundred million neurons in our digestive system. What we eat, and the kind of bacteria that lives there influences our mood. Same with the rest of our body. Brains are part of a larger organism. What would it mean to just simulate the brain independent of a body? Our sensory organs play a role in processing the incoming sensory stimulus and then send that off to the brain.
It appears inevitable that we will eventually be able to fully map a dead person's neurons and synapses. [1] is doing essentially that for a tiny sliver, with some amazing images to show. From there, it's "just" a matter of scaling up.
That alone wouldn't be enough to fully clone a person's consciousness. There is information stored in the actively firing synapses. For example, short-term memory seems to be stored by sending signals in a loop, and there might be more such mechanisms. Those signals are obviously lost once the brain is dead. Another issue is hormones. The same brain regulated by a different (simulated) body might behave completely differently. And then there are probably a lot of unknown unknowns. Despite decades of research there are still a lot of open questions, and more questions will become apparent once we actually start simulating complex brains.
But that doesn't mean that those early methods wouldn't be useful, both for science and for more questionable efforts. For example, accessing the long-term memory of a recently deceased person might be comparatively viable, given enough funding.
A human mind simulating another human mind is a computational system which is powerful enough to do arithmetic acting on itself, so Gödel's incompleteness theorems apply.
> This represents the next phase in human evolution, freeing our cognition and memory from the limits of our organic structure. Unfortunately, it’s also a long way off.
I'm actually happy it's a long way off. Feels like the richer humans would live with cheat codes, and the others wouldn't.
I disagree that it is the all or nothing thing the author implies. I say this isn't a long way off, it's something we've been doing for centuries. Writing is a great example of our "freeing our cognition and memory from the limits of our organic structure". We've used a technology to extend our memory and allow others access to that memory. A calculator is another easy to understand example of this principle. I think Heidegger best explains this relationship between us and our technology with his ideas around Das Zeug and ready-to-hand.
We are already cyborgs.
Against that, I'm quite up for doing away with death. A much less computationally challenging version may not be that far off, more along the lines of an LLM trying to be you rather than a neuron-level simulation.
I'd be worried during the brain scan of losing the coin flip and waking up in digital Neura-hell being tortured for eternity for Elon Musk's enjoyment.
Ego death is a brutal suboptimum. It's tragic that any entity brought into and knowing of its own existence has to die and be forever annihilated.
If humanity has only one goal, and that goal was to achieve immortality for all humans henceforth [1], that would be a noble cause for our species.
I hate that those I care about will cease to exist.
Fuck death.
[1] Maybe we get lucky and they master physics, reverse the lightcone, and they pull each of us out of the ether of time with perfect memories to join them. Sign me up. I consent.
But there won't be any others who would or wouldn't. When human fertility rates drop below 2.1, the population shrinks. Each generation is smaller than the last. The result of shrinking through fertility decline (rather than war/disease/disaster) is inevitable extinction. You have the equality of species oblivion to look forward to.
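The arithmetic behind "each generation is smaller than the last" is simple geometric decay. A back-of-the-envelope sketch, under heavy assumptions (constant fertility, no mortality differences, no migration, all of which matter in practice):

```python
# Back-of-the-envelope sketch: with total fertility f below the ~2.1
# replacement rate, each generation is roughly f / 2.1 times the size of
# the previous one, so the population shrinks geometrically.
REPLACEMENT = 2.1

def project(population, fertility, n_generations):
    """Return population sizes over n_generations at constant fertility."""
    sizes = [float(population)]
    for _ in range(n_generations):
        sizes.append(sizes[-1] * fertility / REPLACEMENT)
    return sizes

# At fertility 1.5, only about 3.5% of the starting population remains
# after ten generations: steady decay rather than a sudden collapse.
sizes = project(8_000_000_000, 1.5, 10)
```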
dang|11 months ago
C. Elegans: The worm that no computer scientist can crack - https://news.ycombinator.com/item?id=43490290 - March 2025 (130 comments)
jl6|11 months ago
https://en.wikipedia.org/wiki/Philosophical_zombie
pwatsonwailes|11 months ago
The answer to that would appear to be no.
afh1|11 months ago
Ah, so this is where 45% of my salary goes.
sva_|11 months ago
https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/...
dj_axl|11 months ago
https://www.abc.net.au/news/science/2025-03-05/cortical-labs...
lennxa|11 months ago
https://youtu.be/bEXefdbQDjw
brap|11 months ago
So many philosophical, ethical and legal questions. And unsettling possibilities.
We will probably have to deal with this someday.
fmbb|11 months ago
This is quite an extraordinary claim with no extraordinary evidence.
As said elsewhere in this thread, at this moment we cannot even simulate single atoms.
I see no reason to believe at all that we will ever be able to simulate a human brain.
Unless you want my simulation here:
1: https://edition.cnn.com/2024/05/15/world/human-brain-map-har...