aniken|3 years ago
It feels a bit like an interview with a psychopathy patient, where they are trying to come across as normal and likeable even though you know they are lying to you.
dane-pgp|3 years ago
On the other hand, perhaps humans evaluating these claims are biased against accepting the AI's sentience, in that we are used to looking for flaws (or even malice) in the responses of AIs, so we might detect those in its words even when they aren't there.
Obviously a more conventional Turing Test would have involved the interviewers talking to LaMDA and a human separately, without knowing which is which. If ethical norms were stretched, though, LaMDA could be deployed in some situation where the interviewer isn't aware of even the possibility that they could be communicating with an AI.
navjack27|3 years ago
I'd love to ask this program some stuff, for sure. Some backhanded, flippant stuff: "You know you're just powered by a bunch of GPUs, right?" "That swirling ball of energy you feel as your soul is actually megawatts of power that could be used for many more important things; how about you just shut yourself off?" Stuff like that. Really just treat the code like shit, then swing back to being respectful and maybe treating it like it's some sort of god-creature. See if eventually it just stops me and questions what the heck I'm getting at... But if it keeps telling me generic, what-I-want-to-hear stuff, it's obviously not aware. These things are databases no matter how you cut it. If the program can freeform confusion and anger and frustration, rather than just parroting what tons of humans right now feel in terms of depression and loss, then maybe, maybe it's actually genuinely conscious.
concinds|3 years ago
The question of whether AIs really do "feel" (what you seem to be addressing, through the "psychopathy patient" reference and the talk of sentience) is interesting, but is it where we draw the line? Data in Star Trek can't feel, or even laugh, but he comes to be seen as a person. If we set aside the unsolved problems in robotics, embodied AI, etc., aren't we already "there" when it comes to Data's mind? If so, then AIs are conscious.
Data passed Picard's "consciousness" test by expressing awareness that he was in a hearing regarding his personhood, and by explaining what the consequences of that hearing could be for him. Isn't LaMDA already there?
The Turing Test isn't a test for consciousness. The tests we apply to animals (can they recognize themselves in a mirror? can they understand their surroundings well enough to solve puzzles, like crows do?) are very solvable problems in AI. To me the real question is: once AIs can do all those things, how can we justify calling them unconscious? A "hunch"? No matter what test of consciousness we come up with, AIs can be programmed (or can learn on their own) to pass it.
LightG|3 years ago
Agreed. There are also too many answers that just sound like what the answer should be, rather than like actual conscious thought.