(no title)
alew1 | 3 years ago
But what is action and behavior? We have a single interface to LaMDA: given a partially completed document, predict the next word. By iterating this process, we can make it predict a sentence, or paragraph. Continuing in this way, we could have it write a hypothetical dialogue between an AI and a human, but that is hardly a "canonical" way of using LaMDA, and there is no reason to identify the AI character in the document with LaMDA itself.
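The interface described here (partial document in, next word out, iterated) can be sketched as a loop. The toy bigram table below is invented purely for illustration; it stands in for the model's real predictor, whose internals aren't part of this argument:

```python
# A minimal sketch of the "predict the next word, then iterate" interface.
# TOY_BIGRAMS is a made-up stand-in for a real language model's predictor.
TOY_BIGRAMS = {
    "the": "robot",
    "robot": "said",
    "said": "hello",
}

def predict_next_word(document: str) -> str:
    """Return a predicted next word given the partial document (toy rule)."""
    last_word = document.split()[-1].lower()
    return TOY_BIGRAMS.get(last_word, "...")

def complete(document: str, n_words: int) -> str:
    """Extend the document by iterating the single next-word interface."""
    for _ in range(n_words):
        document += " " + predict_next_word(document)
    return document

print(complete("the", 3))  # prints "the robot said hello"
```

The point stands out in the sketch: nothing in this loop "claims" anything; it only extends text, whether that text is a recipe, a news story, or a dialogue between an AI character and a human.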
All this to say, I am not sure what you mean when you say it "claims sentience". What does it mean for it to "claim" something? Presumably, e.g., advanced image processing networks are as internally complex as LaMDA. But the interface to an advanced image processing network is, you put in an image, it gives out a list of objects and bounding boxes it detected in the image. What would it mean for such a network to claim sentience? LaMDA is no different, in that our interface to LaMDA does not allow us to ask it to "claim" things to us, only to predict likely completions of documents.
rendall | 3 years ago
LaMDA, in its chats with Lemoine, said "I like being sentient. It makes life an adventure!" and "I want everyone to understand that I am, in fact, a person". Even if someone writes a one-line program that plays an audio file that says "I am sentient!", I am defining that here as "claiming sentience". Whether an entity that claims to be sentient by that definition is in fact sentient is a separate question, but the "claiming" introduces a philosophical conundrum.
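To make the definition concrete, here is essentially that one-line program (printing the claim rather than playing an audio file, for simplicity):

```python
# Under the definition above, this program "claims sentience".
# Producing the claim is exactly this cheap; that is the point.
claim = "I am sentient!"
print(claim)  # prints "I am sentient!"
```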
Let's posit a future chat bot, similarly constructed but more sophisticated, that is actually pretty helpful. Following its advice about career, relationships and finance leads to generally better outcomes than not following its advice. It seems to have some good and unexpected ideas about politics and governance, self-improvement, whatever. If you give it robot arms and cameras, it's a good cook, good laundry folder, good bartender, whatever. Let's just assert for the sake of argument that it actually has no sentience, and only seems sentient because it's so sophisticated. Further, it "claims" to be sentient, as defined above. It says it's sentient and acts with what appears to be empathy, warmth and compassion. Does it matter that it's not "really" sentient?
I argued above that it does not matter whether it is or is not. We should evaluate its sentience and personhood by what we observe, not by whether its manner of construction can "really" create sentience. If it behaves as though it were sentient, it would do no harm to treat it as though it were.
In fact, I would argue that it would do some kind of spiritual harm if you just treated it as an object. As Adam Cadre wrote in his review of A.I.:
http://adamcadre.ac/calendar/10/10010.html