Given that we understand neither consciousness nor the internal workings of these models, the fact that their externally observable behavior displays qualities we've previously seen only in conscious beings is a reason to be really careful. What would you expect to see, that you currently don't, in a world where some model was in fact conscious during inference?
root_axis|6 months ago
It doesn't logically follow that because we don't understand two things, there must be a connection between them.
> What is it that you'd expect to see, which you currently don't see, in a world where some model was in fact conscious during inference?
There's no observable behavior that would make me think they're conscious because again, there's simply no reason they need to be.
We have reason to assume consciousness exists because it served some purpose in our evolutionary history, like pain, fear, hunger, love, and every other biological function that simply doesn't exist in computers. The idea doesn't really make sense when you think about it.
If GPT-5 is conscious, why not GPT-1? Why not all the other extremely informationally complex systems, in computers and in nature? If you believe that many non-living conscious systems probably exist all around us, then I'm fine with the conclusion that LLMs might also be conscious, but short of that there's just no reason to think they are.
comp_throw7|6 months ago
I didn't say there's a connection between the two because we don't understand them. The fact that we don't understand them means it's difficult to confidently rule out the possibility.
The reason we might privilege the hypothesis (https://www.lesswrong.com/w/privileging-the-hypothesis) at all is because we might expect that the human behavior of talking about consciousness is causally downstream of humans having consciousness.
> We have reason to assume consciousness exists because it served some purpose in our evolutionary history, like pain, fear, hunger, love, and every other biological function that simply doesn't exist in computers. The idea doesn't really make sense when you think about it.
I don't really think we _have_ to assume this. Sure, it seems reasonable to give some weight to the hypothesis that if consciousness weren't adaptive, we wouldn't have it (but not an overwhelming amount of weight). That still doesn't tell us anything about the underlying mechanism that produces it, or about what other circumstances might cause it to exist elsewhere.
> If GPT-5 is conscious, why not GPT-1?
Because GPT-1 (and all those other things) doesn't display behaviors that, in humans, we believe are causally downstream of having consciousness? That was the entire point of my comment.
And, to be clear, I don't actually put that high a probability on current models having most (or "enough") of the relevant qualities people are talking about when they talk about consciousness - maybe 5-10%? But the idea that there's literally no reason to think this might be possible, now or in the future, is quite strange, and I think it would require believing some pretty weird things (like dualism, etc.).