Awareness is just the continuous propagation of signals through a neural network, be it artificial or biological. Thoughts seem to just "appear" because the brain is continuously propagating signal through its network. LLMs do the same thing during their decoding phase, where they reason anew with every token they generate. There is no difference here.
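That per-token propagation can be sketched as a toy loop. Everything below is a made-up stand-in (a hash-like function in place of a real transformer forward pass, a hypothetical 7-token vocabulary), just to show that each emitted token requires a fresh full propagation over the whole context:

```python
# Toy sketch, NOT a real LLM: each output token requires a full
# "forward pass" over the entire context, so computation happens
# continuously during decoding, not once up front.

def toy_forward(context):
    """Stand-in for a transformer forward pass: deterministically
    maps the whole context to a next-token id (toy vocab of 7)."""
    return sum(context) % 7

def decode(prompt, n_tokens):
    context = list(prompt)
    for _ in range(n_tokens):
        nxt = toy_forward(context)  # full propagation per token
        context.append(nxt)         # the new token feeds the next pass
    return context[len(prompt):]

print(decode([1, 2, 3], 5))
```

Each appended token changes the input to the next pass, which is the "continuous propagation" being claimed.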
Then you say "we don't think most of the time using language exclusively", but neither do LLMs. What most people fail to realise is that in between each generated token, black magic is happening between the transformer layers. The same kind of magic you describe: high-dimensional, based on complex concepts, merging ideas, fusing vectors to form combined concepts, smart compression, applying abstract rules. An LLM does all of these things, and more, and you can prove this by how complex their output is. Or you can read Anthropic's interpretability studies on how LLMs do math beneath the transformer layers, and how they manipulate information.

AGI is not here with LLMs, but it's not because they lack reasoning ability. It's due to something different. Here is what I think is truly missing: continuous learning, long-term memory, and unbounded, efficient context/operation. All of these are deeply tied together, and thus I believe we are but a single breakthrough away from AGI.
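The "fusion of vectors to form a combined concept" point can be illustrated with a toy sketch. The embeddings below are hypothetical hand-picked 3-d values, not weights from any real model; the idea is only that between token boundaries, everything is continuous vector arithmetic, and a word reappears only when you decode back at the end:

```python
# Hypothetical 3-d "concept" embeddings, chosen by hand for illustration.
concepts = {
    "king":  [0.9, 0.8, 0.1],
    "woman": [0.1, 0.2, 0.9],
    "man":   [0.2, 0.1, 0.8],
    "queen": [0.8, 0.9, 0.2],
}

def fuse(a, b, c):
    # "Merging of ideas": king - man + woman, as plain vector arithmetic.
    return [x - y + z for x, y, z in zip(a, b, c)]

def nearest(vec):
    # Decode back to a token only at the very end (squared distance).
    def dist(name):
        return sum((x - y) ** 2 for x, y in zip(concepts[name], vec))
    return min(concepts, key=dist)

v = fuse(concepts["king"], concepts["man"], concepts["woman"])
print(nearest(v))
```

The intermediate vector `v` corresponds to no word at all; it only becomes "queen" when projected back onto the vocabulary.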
10weirdfishes|4 months ago
The idea of awareness being propagations through the NN is an interesting concept though. I wonder whether this idea could be proven by monitoring the electrical signals within the brain.
luisml77|4 months ago
In essence, I think it doesn't matter that the brain has a whole bunch of chemistry added into it that artificial neural networks don't. The underlying deep non-linear function-mapping capability is the same, and I believe that depth is comparable in both cases.
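A minimal sketch of what "non-linear function mapping" buys you, with weights chosen by hand purely for illustration: two ReLU units exactly represent |x|, a function no purely linear map can express.

```python
def relu(x):
    # The standard rectified-linear nonlinearity.
    return max(0.0, x)

def tiny_net(x):
    # |x| = relu(x) + relu(-x): a 2-unit "network" computing
    # something no single linear layer could.
    return relu(1.0 * x) + relu(-1.0 * x)

print(tiny_net(-3.0), tiny_net(2.5))
```

Stacking such layers is what gives both kinds of network their depth; the point of the comment is only that this capability, not the substrate chemistry, is what matters.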
laterium|4 months ago
emptysongglass|4 months ago
This is just a claim you are making, without evidence.
The way you understand awareness is not through "this is like that" comparisons. These comparisons fall apart almost immediately once you turn your attention to the mind itself and observe it for any length of time. Try it. Go observe your mind in silence for months. You will see for yourself that it is not what you've declared it to be.
> An LLM does all of these things, and more, and you can prove this by how complex their output is.
Complex output does not prove anything. You are again just making claims.
It is astoundingly easy to push an LLM into collapsing into ungrounded nonsense. Humans don't fail this way, because the two modes of reasoning are not alike. The burden is on those making extraordinary claims to prove otherwise. As it stands, there is no evidence that they behave comparably.
2OEH8eoCRo0|4 months ago
It's immaterial and not measurable, and thus possibly out of reach of science.
antonvs|4 months ago
Wait, you mean this HN comment didn't casually solve the hard problem of consciousness?
buster|4 months ago
How easy? What specific methods accomplish this? Are these methods fundamentally different from those that mislead humans?
How is this different from exploiting cognitive limitations in any reasoning system—whether a developing child's incomplete knowledge or an adult's reliance on heuristics?
How is it different from fake news, and from adults taking fake news at face value and replicating bullshit?
Research on misinformation psychology supports this parallel (see https://www.sciencedirect.com/science/article/pii/S136466132...).
Perhaps human and LLM reasoning capabilities differ in mechanism but not in their fundamental robustness against manipulation? Maybe the only real difference is our long-term experience and long-term memory?
luisml77|4 months ago
Even though complex output can be deceptive about the underlying mental model that produced it, in my personal experience LLMs have produced output that must imply extremely complex internal behaviour, with all the characteristics I mentioned before. Namely, I frequently program with LLMs, and there is simply zero probability that their output tokens could exist WITHOUT first having thought at a very deep level about the unique problem I presented to them. And I think anyone who has used these models to the level I have, and interacted with them this extensively, knows that behind each token there is this black magic.
To summarize, I am not being naive and believing everything my LLM says to me. Rather, I know very intimately when the LLM is deceiving me and when it's producing output that only a very advanced mental model could have produced. And this comes from personal experience playing with this technology, both inference and training.
[1] https://www.anthropic.com/research/tracing-thoughts-language...
ozgung|4 months ago
Thank you for saying that. I think most people have an incomplete mental model of how LLMs work, and it's very misleading when judging what they really do and can achieve. "Next token prediction" happens only at the output layer; it is not what really happens internally. The secret sauce is in the hidden layers of a very deep neural network. There are no words or tokens inside the network. A transformer is not the simple token estimator most people imagine.
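A toy sketch of that point, with arbitrary hand-picked weights (no real model anywhere): everything between input and output is continuous vectors passed through nonlinearities, and a token id appears only after the final projection onto the vocabulary.

```python
import math

def layer(h, w, b):
    # One hidden layer: linear map + tanh. Input and output are
    # both continuous vectors; no tokens exist at this stage.
    return [math.tanh(sum(wi * hi for wi, hi in zip(row, h)) + bi)
            for row, bi in zip(w, b)]

def forward(x):
    h = layer(x, [[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1])   # hidden layer 1
    h = layer(h, [[0.3, 0.8], [-0.6, 0.4]], [0.2, -0.1])  # hidden layer 2
    # Only here, at the output projection, do per-token scores appear
    # (a toy vocabulary of 3 tokens).
    logits = [sum(wi * hi for wi, hi in zip(row, h))
              for row in [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]]
    return h, logits

hidden, logits = forward([0.2, -0.4])
token_id = max(range(len(logits)), key=logits.__getitem__)
print(hidden, token_id)
```

The `hidden` vector is just floats with no token interpretation; "next token prediction" is the single argmax at the very end of the pipeline.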
luisml77|4 months ago