We don't really know what consciousness really is, and I think it is premature to dismiss the possibility of replicating its behavior with a mathematical model.
Hell, I don't even know for sure if I'm "conscious". When I really stop and think hard about it, the process of speaking or typing, word by word (even this!) is built on past experience. If you smacked me on the head hard enough to give me amnesia, there goes all that memory, and suddenly I can't talk about the things that I could before. I would struggle, and I would need to be exposed to new information (looking it up, reading about it, being told about it, etc.) to be able to discuss those things again. For me, that suggests a process that's not entirely different from a large language model. Not the same. But it definitely makes me wonder whether we have more in common with them on some level, and whether there's less to the human mind than we think. Maybe we're no more than the sum of our components.
The commonality breaks down at value assignment. You hear an unexpected sound and have a threat/delight assessment in 170ms. Faster than Google serves a first byte. You do this with virtually no data.
An LLM doesn't assign value to anything; it predicts tokens. The interesting question isn't whether we share a process with LLMs, it's whether the thing that makes your decisions matter to you, moral weight, spontaneous motivation, can emerge from a system that has no survival stake in its own outputs. I wrote about this a few years ago as "the consciousness gap": https://hackernoon.com/ai-and-the-consciousness-gap-lr4k3yg8
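To make the "it predicts tokens" point concrete, here is a minimal sketch of the last step of next-token prediction. The logits and vocabulary are made up for illustration; the point is that the model's output is a probability ranking over tokens, with no value judgment anywhere in the computation.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits for a handful of candidate next tokens.
logits = {"dog": 2.0, "cat": 1.0, "threat": -1.0}
probs = softmax(logits)

# The "choice" is just the most likely (or a sampled) token --
# a ranking by likelihood, not an assessment of what matters.
next_token = max(probs, key=probs.get)
```

Nothing in this loop resembles the 170 ms threat/delight assessment described above; "threat" here is just another token competing on probability.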
Consciousness is an emergent phenomenon arising from the ability to fantasize, that is, to think about things that don't exist. In particular, it is fantasy about the "self".
I don't have the data to demonstrate that this is incorrect, because we lack a fundamental model of how brains operate. Brains probably compute under an expansive definition of "computer", but the claim that the brain is a classical computer is sorely underdetermined by the evidence.
Merrill|5 days ago
AI is getting close.