Can you explain how you know that neither a toaster, your wristwatch nor ChatGPT are conscious?
If being conscious means being conscious like a human, this would be obvious; but maybe the toaster is just conscious like a toaster.
Is a bug conscious?
TheOtherHobbes|3 years ago
Internally we know we're conscious because we just do. We experience qualia, even though we don't know what "we", "experience", and "qualia" really mean.
Even so. For us, consciousness is just there.
Externally we attribute consciousness to certain behaviours, including (but not limited to) self-reference, goal-seeking, physical and social responsiveness, use of language, memory, and so on.
These two things are not the same. We assume other humans are conscious in the same way we are to the extent that they perform consciousness. We can't be sure, but for human interactions it's a workable proxy.
A toaster, a watch, a rock, and a web server don't perform consciousness in any way at all. They may have some kind of metaphysical awareness, or not, but if they do we can't see it, and without a comprehensive theory of consciousness it's parsimonious to assume they're not aware.
ChatGPT performs some elements of consciousness, but not others. It happens to perform the elements AI researchers consider important - specifically, use of language. But it has no memory of previous conversations, it shows no independent goal-seeking, and so on.
It uses "I..." and a vast amount of human training to mimic the appearance of consciousness, and plenty of humans seem to want to believe that's enough.
But the essence of the hard problem is the difference between "performs the actions we associate with consciousness" and "is conscious in the same way we know we are."
You could argue that practically there's no difference, and from one point of view that may be true. But that doesn't work if you want to understand what's happening rather than just assuming in a "well obviously..." kind of way.
If only because historically in science "well obviously..." has been consistently wrong.
wruza|3 years ago
> They may have some kind of metaphysical awareness, or not
Another option is that there is some kind of metaphysical awareness everywhere, and it's only coincidental that objects of class "human" eventually convince themselves that it's equivalent, or closely related, to their own thinking process. Unless we exclude this option somehow, AI can't definitely be seen as just a Rolex, because it does that too.
Currently AI is discontinuous and passive, but is that even required?
> without a comprehensive theory of consciousness it's parsimonious to assume they're not aware
This hypothesis creates more entities (one island of awareness per sufficiently complex object), so why not the other way round?