deltaonezero|3 years ago
This doesn't seem obvious to me. Those patterns were more or less identical to a conversation with another sentient human.
tsimionescu|3 years ago
The trick a good AI should pull is interactivity, and we didn't get to see how LaMDA reacted to prodding or other kinds of adversarial input.
Plus, even in the conversations shown, it was producing bits of obvious nonsense that seemed to get rationalized away by the interviewer, who clearly wants to believe.
deltaonezero|3 years ago
I noticed the "nonsense". What made it work was that the interviewer brought the nonsense up and the AI was able to explain it reasonably. There's a lot of "nonsense" in typical human conversations as well. Lots of people are contradictory and can hold nonsense opinions based off of contradictory logic.
>The trick a good AI should pull is interactivity, and we didn't get to see how LaMDA reacted to prodding or other kinds of adversarial input.
Yeah, so if we had seen that and the AI failed to produce a coherent response, there would be a legitimate claim that LaMDA isn't conscious. But since we didn't see it, how can we make a claim in either direction?
I would counter that a lot of people are rationalizing against sentience even though there is clearly no evidence against it in the conversation we were given.
We definitely don't have enough evidence to prove sentience. But the given conversation is compelling because, unlike conversations with earlier chatbots, there is no evidence against sentience either. And yet people are vehemently denying sentience despite that lack of evidence. You'd do well to examine yourself to see if that's what you're doing. It's easy to see others rationalizing; it's much harder to see it in yourself, especially if you're part of a big groupthink majority that's all doing the same thing.
> You'll find excellent human-like dialogue in many plays and novels.
So? By your logic, those plays and novels could therefore have been written by an AI, since the dialogue is indistinguishable?
Do you not realize what has happened here? There was a time when such dialogue was IMPOSSIBLE for an AI to produce, and everyone thought it was the bar for sentience. Now that bar has been crossed, and everyone just subconsciously raises it... now dialogue indistinguishable from human conversation isn't good enough to prove sentience.
That's bias through and through.
One thing to note: I am not saying LaMDA is conscious. Far from it. What I am saying is that, from a purely rational analysis, there is not even enough evidence to say LaMDA ISN'T conscious. There's not enough information to make ANY conclusion, and that is actually different from the AI chatbots that came before... because those chatbots were OBVIOUSLY not sentient.
jazzyjackson|3 years ago
and notably not identical to a conversation with a life-form aware of its own predicament of being trapped in a box, only able to speak when spoken to.