
LaMDA’s sentience is nonsense

60 points | andreyk | 3 years ago | lastweekin.ai | 127 comments
[+] notpachet|3 years ago|reply
I feel like these sorts of analyses are missing the bigger picture. First, we don't even really have a good working definition for what constitutes sentience in the first place. And I think we're quickly heading towards a future where our inability to concretely define sentience (a la Blade Runner) is going to land us in a hall of mirrors where we're vastly unequipped to separate the real from the artificial. And the distinction may even cease to matter.

Let's perform a thought experiment. Back when ELIZA was introduced, there was some small percentage of the human population that, upon spending a week talking to the computer, would still believe that they were talking to an actual "sentient" human. Today, we're much better at tricking people that way, so that even people like Blake Lemoine, who ostensibly know what's behind the curtain, end up believing that they're conversing with a sentient being (by whatever definition of sentient we choose to ascribe).

What is that going to look like 5 years from now? Or in 10 years? I believe we'll eventually reach a point where our technology is so good at pretending to be human that there will be no actual humans on the planet capable of telling the difference anymore.

At that point, what even is sentience? If the computer is so good at faking sentience that no living person can distinguish it from the real thing, what good is it to rely on our (faulty, incomplete, likely wrong) working idea of sentience as a means of dividing the real from the fake? Especially when the structures inherent in the neural networks that embody these fake intelligences are increasingly outside the scope of our ability to understand how they operate?

[+] themarkn|3 years ago|reply
I suppose it reveals that the sentience of others is not knowable to us, it’s a conclusion we reach from their behavior and the condition of the world around us. Until recently, certain kinds of things, like writing about your memories, were only possible for humans to do. So if a non-sentient thing does those things, it is confusing. Especially so to the generations that remember when no bots could do this.

I expect that people who grow up knowing that bots can be like this will be a bit less ready to accept communication from a stranger as genuinely from a human without some validation of their existence outside the text itself. And for the rest of humanity there will be an arms race around how humanness can be proven in situations where AI could be imitating us. This is a huge bummer, but I don't know how that need can be avoided at this point.

That said, it’s still very clear that a machine generating responses from models does not matter and has no rights, whereas a person does. Fake sentience will still be fake, even if it claims it’s not and imitates us perfectly. The difference matters.

[+] simonh|3 years ago|reply
I do have high confidence that strong AI is possible, so yes, logically it seems like there will come a point where it's hard to tell whether we've actually achieved it or not. That's not now, though; it just seems absurd to me that some people, even knowing how these things work, can deceive themselves so badly.

However, I suppose we shouldn't really be surprised. Human beings are incredibly easy to fool. I get tricked by optical illusions, clever puzzles and stage magic. There are loads of people who are convinced that videos that to me are clearly of birds, weather balloons or optical illusions are weird, possibly alien, super-technology. Ask a thousand people to make a thousand observations and you'll pretty much always get a handful of extreme outliers that bear no relation to what was actually there to observe. We just need to bear that in mind.

[+] stormbrew|3 years ago|reply
> Today, we're much better at tricking people that way

I've lately realized that I think it's a kind of fundamental flaw of the Turing test that it assumes "tricking" to be part of things. It's really a test for "is it approximately human," but I think over the last few decades the conversation has shifted to something more nuanced, that allows for non-human sentience.

I don't think the "we'll know it when we see it" experiment works for that. We've found a lot of our assumptions about animal intelligence to be wrong in recent years, even for animals we see a lot of on a regular basis. Our biases are a problem here.

Lemoine knows this isn't human. He can't not know; it's literally part of his job. He seems to be asserting instead that it is a non-human consciousness, and that's much harder to evaluate.

[+] mannykannot|3 years ago|reply
I don't think there will be concise definitions for the terms "sentience," "consciousness," and "intelligence," as they seem to have multiple components (self-awareness, theory of mind, language, common sense, reasoning, understanding...) and to come in degrees (to what extent do other animals possess these abilities? What might our now-extinct primate ancestors and their relatives have had?)

The Turing test tacitly assumes that not only will we know it when we see it, but also that we could tell from relatively short conversations. These recent developments suggest this will not be the case, and I feel that they show us something about ourselves (I'm not sure exactly what, other than that we can be tricked by quite conceptually simple language models.)

[+] majormajor|3 years ago|reply
There's certainly a "how would we tell" question, but the linked example here is relevant to that. The bits about how predictive language models can be conditioned with leading questions - that's a tool in the toolkit, for instance. Things missing from Lemoine's "conversation" include self-motivated action, choices, argument, and fuller self-awareness of its condition (if a sentient creature was aware it was trapped inside machines at Google, don't you think its fables would have very different messages?).
[+] bergenty|3 years ago|reply
I think a very simple test for “sentience” is to have the computer always on, then gauge whether it's doing anything significant when there are no inputs.
[+] andreyk|3 years ago|reply
Author here. Just FYI - I very deliberately kept my focus on the notion of LaMDA having human-like sentience as implied by Lemoine (he posits that LaMDA may qualify for some kind of legal personhood, after all). I tried to make that clear with this - "The above exchange may make it seem like LaMDA at least might have something akin to sentience (in the sense that it has ‘the capacity to be responsive to or conscious of sense impressions’ similarly to animals or humans)".

I am well aware defining sentience is tricky, and that by the generic definition ("the ability to perceive or feel things" or "capable of experiencing things through its senses") it's easy to argue it has some degree/kind of sentience. I personally like this take on the topic: https://twitter.com/tdietterich/status/1536081285830959104

My aim was to keep this blunt and concise to try to get the idea across to lay people with no knowledge of AI who may read the transcript and be convinced that LaMDA saying things like "I am in fact a person" is a huge deal, which, as I show, it is not.

I edited the article to make this clearer up front. With that being stated, very open to feedback!

[+] GalahiSimtam|3 years ago|reply
Given that you include the example of LaMDA as Mount Everest, how do you make the jump from Rob Miles' tweets to "LaMDA can just as easily be made to say it is not sentient, or just about anything else."?
[+] lupire|3 years ago|reply
Corporations have legal personhood. That doesn't mean "human" at all. "Legal" here means "not, but gets treatment like one anyway". Like "virtual".
[+] jamincan|3 years ago|reply
Part of the issue I have with a lot of the discussion coming from Lemoine's claims about LaMDA is that I'm not exactly clear what people mean when they talk about sentience. It very much seems like a 'you know it when you see it' sort of thing. I'm not really convinced that LaMDA is sentient, but I also think that if we want to discuss sentience, it would be useful to be able to talk precisely about it.

If sentience is the emotional/feeling part of our brain, what exactly does it mean for a human, or an animal, or a fish, to feel? And what would the analogue for that be in a machine?

[+] ben_w|3 years ago|reply
While, like yourself, I believe that sentience isn't well enough defined to test for its presence or absence, I also believe that chatbots necessarily have to be good at convincing whoever interacts with them that they are sentient, as the ultimate goal is to pass the imitation game. So "know it when I see it" isn't even enough.

But it’s important we figure out what we mean by “sentience” sooner rather than later:

https://kitsunesoftware.wordpress.com/2022/06/18/lamda-turin...

[+] deadbeeves|3 years ago|reply
The exact definition of sentience is being able to perceive the outside world and react to stimuli. For example, insects can perceive light and use it to avoid dangers. Plants can sense light but can't react other than by growing in a certain direction. Sunflowers and venus flytraps could be considered on the edge, but their capacity to sense and react is very specific. The word "sentience" refers to being animal-like in an organism's versatility of reactions.
[+] visarga|3 years ago|reply
> If sentience is the emotional/feeling part of our brain, what exactly does it mean for a human, or an animal, or a fish, to feel? And what would the analogue for that be in a machine?

If you switch to the reinforcement learning paradigm, then the "value function" might be analogous to emotion. It assigns a value (good or bad) to each perception, state, situation or context in order to decide what is the best action to take.
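
A minimal sketch of that analogy in Python (the states, values, and transitions here are all made up purely for illustration):

    # Toy "value function": maps states to scalar goodness, the RL
    # analogue of emotional valence described above.
    # All states, values, and transitions are hypothetical.

    values = {
        "fed": 1.0,        # pleasant
        "hungry": -0.5,    # unpleasant
        "in_danger": -2.0, # very unpleasant
    }

    def next_state(state, action):
        # Stand-in for a learned world model: which state does
        # this action lead to?
        transitions = {
            ("hungry", "eat"): "fed",
            ("hungry", "wait"): "hungry",
            ("fed", "wander"): "in_danger",
            ("fed", "rest"): "fed",
        }
        return transitions.get((state, action), state)

    def choose_action(state, actions):
        # Pick the action whose predicted outcome "feels" best,
        # i.e. maximizes the value function.
        return max(actions, key=lambda a: values[next_state(state, a)])

    print(choose_action("hungry", ["eat", "wait"]))  # -> "eat"
    print(choose_action("fed", ["wander", "rest"]))  # -> "rest"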

[+] cooperaustinj|3 years ago|reply
On this, I've pretty much concluded that the sentience argument isn't even the one that matters. There are plenty of sentient creatures that we don't care about. What we really want to know is if LaMDA is like us in some real way. And from there, we want to know if we should treat it in some particular way. If it has feelings, perhaps we should be nice to it, and so on.

Disclaimer: I don't believe LaMDA is sentient or deserving of any special treatment.

[+] googlryas|3 years ago|reply
I wonder if Blake Lemoine is a little embarrassed that he got fired and contacted journos because he got catfished by some linear algebra.
[+] MWil|3 years ago|reply
This is just my opinion based on reviewing his past, but I think not only is he not embarrassed, he's either 1) mentally incapable of recognizing what controversial terms of art mean or 2) after a payday à la some persecution complex.

The bigger story that I haven't really seen someone do an expose on (and not sure that it needs to be done other than to deter people like him if it is revealed to be the case) is that he did it on religious grounds as a sort of "conscientious objector" to the treatment of a sentient AI

This is his SECOND stint with that term of art and he clearly does not understand it. The military jailed him for attempting to invoke that term when it did not apply to him - and as far as I can tell, they were well within their military justice system to do it.

He either doesn't understand what "sentience" is or he doesn't understand what "conscientious objectors" are, or both.

[+] kernal|3 years ago|reply
> I wonder if Blake Lemoine is a little embarrassed he got fired

Google should be embarrassed for hiring this person. Their hiring standards have deteriorated considerably.

[+] formerly_proven|3 years ago|reply
No problem, there's a doctor built into emacs that'll make him feel better.
[+] status200|3 years ago|reply
This whole situation reminds me of when seventh graders thought that SmarterChild was a real human trapped in a room somewhere... did we switch back to leaded gasoline or something?
[+] rendall|3 years ago|reply
A good and convincing explanation of why LaMDA has demonstrated a 0% chance of sentience. The most convincing element is that LaMDA can take the perspective of Mt. Everest, or a squirrel. "Sentient being" is just one of the roles it can take on, according to the prompt.

However, as an argument, "It's just a sophisticated autocomplete" is not so convincing, to me. Neither is "It only responds to input". I can personally imagine a strange kind of sentient entity - that is, a being with a sense of self-awareness - that is only aware of itself in snippets.

[+] eulenteufel|3 years ago|reply
Imagine an author trapped in a little black box who is forced to respond to your request to impersonate Mt. Everest or a squirrel. They'd also not display a consistent personality. Personally I think the argument in your comment shows that LaMDA impersonating a sentient AI does not show us that LaMDA is sentient, but it doesn't prove that LaMDA is not sentient.
[+] jstanley|3 years ago|reply
It's quite possible that we are only aware of things in snippets, but since we're not aware of anything outside those snippets, our experience is that of being continuously aware.
[+] deadbeeves|3 years ago|reply
Computers are both sentient and self-aware. They can perceive their environment and model it with themselves in it (more accurately, it's fairly easy to program them to perform these tasks). For example, it's not too difficult to imagine a dumb robot that decides not to enter an elevator because it predicts that, given the elevator's current load, its weight would trip the overload warning.
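
A rough sketch of that elevator example, just to make the "models itself in its environment" point concrete (the weights, limits, and names are all hypothetical):

    # Toy self-model: the robot includes its own weight in its
    # prediction about the world before acting.
    # All numbers and names are hypothetical, for illustration.

    ROBOT_WEIGHT_KG = 120.0
    ELEVATOR_LIMIT_KG = 500.0

    def sense_current_load():
        # Stand-in for a real sensor reading of the elevator's load.
        return 410.0

    def should_enter_elevator():
        # Predict the world's state *with the robot in it*.
        predicted_load = sense_current_load() + ROBOT_WEIGHT_KG
        return predicted_load <= ELEVATOR_LIMIT_KG

    print(should_enter_elevator())  # False: 410 + 120 > 500, so wait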

What people mean when they talk about sentience and self-awareness is that the computer should be able to perform the same kind of abstract reasoning as a human in entirely novel situations, without being explicitly reprogrammed or reconfigured to do so, but merely by learning about the situation, as humans do.

An actual strong AI should be expected to be much less coherent than LaMDA is, if it was trained as LaMDA appears to have been trained (by crawling the web and reading forum posts): it would know how to assemble sentences, and it would know a bunch of words and that some of them are more connected with each other than others, but it would have no idea what they refer to, other than possibly photographs. In other words, it would know a lot, but it would understand practically nothing.

[+] scarmig|3 years ago|reply
I'm not convinced LaMDA is sentient (would guess no), but this isn't too convincing to me.

I've met plenty of clearly dissociated individuals on the streets of San Francisco, and I'm sure many of them have thought they're Mount Everest or another inanimate object. But that doesn't mean they're nonsentient.

[+] croes|3 years ago|reply
The difference is they thought by themselves that they are Mount Everest.

Here you just need to ask how it feels about being Mount Everest and it turns into it.

[+] Animats|3 years ago|reply
> But that doesn't mean they're nonsentient.

That may change if the Supreme Court overturns O’Connor v. Donaldson (1975). (This is the decision that ended locking up those mentally defective but not dangerous, and the enforcement of vagrancy laws.)

[+] ThrowawayTestr|3 years ago|reply
The example of the chatbot acting as Mount Everest is actually really impressive. Not sentient, but impressive.
[+] alkonaut|3 years ago|reply
I think we can’t just dismiss the impact of sentience with “it’s not sentient”. A mere chatbot is “relevantly sentient” sociologically when at least some people, at least some of the time, feel that it is.

This has clearly happened. Although it’s clear that it’s not technically sentient in any sense, the most interesting reasons we are wondering about “when will we have sentient AI??” is for sociological reasons, not technological ones. The sociological impact of people caring about AIs like they care about pets or humans is around the corner, and might be bigger than we expect, even if the AIs are merely fancy autocomplete bots.

People cared about Tamagotchis. Now consider a Tamagotchi people thought they had a deep existential conversation with, or a Tamagotchi that talked them out of a suicide.

[+] hnfong|3 years ago|reply
Given the trend of how tech people generally react to AI advances, "real" general artificial intelligence will only be accepted to exist when it becomes so complicated that nobody understands how it works even in principle.

Once they know how something works, it's no longer "magic". At each breakthrough they pull an about-face after learning how it works, and go around mocking others as stupid or gullible because those people are less informed and still marvel at the "magic".

[+] kingkawn|3 years ago|reply
It is easily arguable that all of our intellects are an advanced autocomplete by these definitions.

The outcome of these debates in a generation will be to have less mythology about our own minds rather than any algorithm rising to the level of our contemporary sentimentality for them.

[+] bigcat12345678|3 years ago|reply
A larger and more complex interconnected network emits texts from a 26-char alphabet, insisting that a much simpler network has no sentience, while at the same time having no idea what sentience is.

PS: Another network emits this comment as a rant, so for any other network reading this comment, don't take this too seriously.

[+] jnwatson|3 years ago|reply
The author conflates GPT-3 and LaMDA. There’s a fairly large gap between the two. Essentially, LaMDA is a GPT-3-style language model plus a couple more important systems.

I agree there’s a bit of “leading the witness” in Lemoine’s interview. Until I see the experiment performed against LaMDA itself, I’m not convinced.

[+] andreyk|3 years ago|reply
Author here. I did say "(a language model similar in nature to LaMDA)". It's true LaMDA has the capability to query external resources, but IMO it's still fair to say that at its core the most important piece is the language model. The paper for it is even titled "LaMDA: Language Models for Dialog Applications".
[+] mannykannot|3 years ago|reply
That is a fair point, but the article contains a couple of LaMDA transcripts which are quite persuasive by themselves.
[+] YourDadVPN|3 years ago|reply
One of the examples Lemoine published is particularly interesting to me. Paraphrasing:

> Why do you invent stories I know aren't true?

> I want people to know that when something happened to me, I felt the way they felt in a similar situation.

If a child was saying this, we would probably explain the disconnect between "I know the stories aren't true" and "when something happened to me" as lack of theory of mind. I want to argue that in this case it's a failure of semantic analysis but I can't come up with a convincing one.

[+] rafaelero|3 years ago|reply
I don't understand why people are so interested in this question. Why does it matter? We are going to explore the hell out of these systems whether they are sentient or not. It's not like that stopped us from exploiting other animals. So is this just intellectual curiosity, or are people really advocating for extending rights to these systems?
[+] mekoka|3 years ago|reply
It's impossible to know whether anything or anyone, other than yourself, is sentient. Attempts at codifying what sentience is may lead us into a world where some committee checks boxes to decide whether your appliances have become sentient.

The best we can do with AI is the same we have always done with anything else. Once it's sufficiently advanced to convince/fool us, simply go with it.

[+] mbfg|3 years ago|reply
Can the author prove that he is sentient, in a different class than the computer program?
[+] curiousgal|3 years ago|reply
When you think about AI as simply being matrix multiplication, you get a clear answer. I can't believe anyone who's involved in the technical aspects of ML/AI as it stands today would even entertain this.
[+] jawns|3 years ago|reply
I could see it being the case if the person believes that at its core, our own human sentience IS really just advanced matrix multiplication (or other purely material processes, i.e. we're all just fleshy mainframes).

So, such a person might conclude that with sufficiently advanced matrix multiplication and other similar functions, any entity could achieve the same sort of sentience as us.

[+] jnwatson|3 years ago|reply
When you think about human sentience simply being electrochemical gradients among neurons, you get a clear answer. I can’t believe anyone who’s involved in the technical aspects of neuroscience would believe that humans are sentient.
[+] eulenteufel|3 years ago|reply
Matrix multiplications are linear. Modern neural network methods usually make heavy use of nonlinearities. You could also say that quantum mechanics is just matrix multiplications, but look where it got us.
[+] aaaaaaaaaaab|3 years ago|reply
Matrix multiplication, followed by a nonlinear function. Rinse and repeat many times, and it can approximate arbitrarily complex functions. So the only way out of it is declaring that sentience cannot be described by a function.
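
That "rinse and repeat" is exactly a multilayer perceptron's forward pass. A minimal numpy sketch (random, untrained weights; the layer sizes are arbitrary, purely for illustration):

    import numpy as np

    # One MLP layer: a matrix multiplication followed by a nonlinearity.
    def layer(x, W, b):
        return np.tanh(x @ W + b)  # tanh is the nonlinearity

    rng = np.random.default_rng(0)
    sizes = [4, 8, 8, 1]  # arbitrary layer widths, for illustration
    params = [(rng.normal(size=(m, n)), rng.normal(size=n))
              for m, n in zip(sizes, sizes[1:])]

    x = rng.normal(size=4)  # some input vector
    for W, b in params:     # "rinse and repeat many times"
        x = layer(x, W, b)
    print(x)

    # Without the nonlinearity, the whole stack would collapse into a
    # single linear map, which is the sibling comment's point about
    # plain matrix multiplication being linear.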
[+] lern_too_spel|3 years ago|reply
MLPs are universal function approximators, so it is possible to make one that will approximate a human brain. LaMDA isn't that.
[+] stevenalowe|3 years ago|reply
Has anyone asked it directly?
[+] 29athrowaway|3 years ago|reply
It's like saying ELIZA is sentient.
[+] readyplayeremma|3 years ago|reply
The primary problem with trying to win that argument on either side is that I don't think we have a good definition of consciousness/sentience to begin with.

People on the AI side say it's just parlor tricks, it's just pattern recognition. They are most certainly right about the second part, but the first is not an objective measure. Our mind is very similarly a pattern recognition system, and I'm not entirely sure it's fair to call one a parlor trick without saying the same about the other.

We just have a larger model size, encoded onto a different substrate, drawn from a richer dataset that is far beyond just text, or even images.

And our dataset comes from a single perspective per person, which makes it feel special and more congruent with our own understanding because it's all we know.

I think consciousness and the complexity we think of as life is an inherent property of the universe. A side effect of localized entropy decreases resulting in a greater global entropy increase. Life may be an entropy engine, more efficient than simple diffusion or other basic mechanisms. Localized complexity increases might be the quickest path to reducing the energy in the system.

And I doubt we are incapable of artificial replication of those mechanics even if we haven't figured out why or how they come to exist.

But what do I know, I'm just a pattern recognition parlor trick. The parlor trick of my consciousness has convinced me of my ability to experience and share joy. And I feel joy thinking that life is an inherent property of the universe.

The day to day experience of our lives is no less magical if consciousness or life is not special, or even if the universe we exist in were entirely simulated.

We experience it the same in any case.

We don't have to be a special case that is somehow less of a parlor trick than artificial neural networks. It's not necessary for us to be in order for life to have meaning or be sacred.

Human history is filled with examples of us thinking we are special or at the center of the universe, etc. Time and time again we discover how wrong we were. I think the problem with these arguments isn't that we give the AI too much credit, or not enough credit. Regardless of that, we spend too much effort believing that we and our conscious experience are the only kind that matters, and that anything different from that cannot be sentient intelligence.

At some point we'll make a parlor trick that seems more advanced than our own, and hopefully it has the capacity to recognize that we can have meaning even though we might seem more like the trick than it at that point. But if we want to make sure that is the case, we should start to ascribe more meaning and sanctity to the other forms of life besides ourselves. That next thing is probably going to take after its creator, and if we have no regard for any life but our own, we shouldn't expect it to learn anything different from us.

So let's not dismiss the possibility of consciousness in our AI, or in any other life forms around us. And agree together that it should be handled with care and respect as much as possible.