There is a ton in this article and it's very thought-provoking; you should read it.
But I think it ignores one critical dimension, that of fictionality. There is plenty of text that people would ascribe 'personhood' to according to the criteria in this article, while also fully recognizing that that person never existed and is a work of fiction from some other author. I quite like Jean Valjean, but he isn't a "real person."
When Bing says "I'm a sad sack and don't know how to think about being a computer", that's not actually the LLM saying that. Nobody who knows anything about how these models work would claim they actually have consciousness or interiority (yet).
Rather, the LLM is generating (authoring) text about a fictional entity, Sydney the Artificial Intelligence. It does this because that is what is in its prompt and context window and it knows _how_ to do it because it's learned a lot of specifics and generalities from reading a lot of stories about robots, and embedded those concepts in 175 billion parameters.
The fact that LLMs can author compelling fictional personas without being persons themselves is itself a mindblowing development, and I don't mean to detract from that. But don't confuse an LLM generating the text "I am a sad robot" with an LLM being a sad robot. The sad robot was only ever a fairy tale.
> If text is all you need to produce personhood, why should we be limited to just one per lifetime?
Maybe AI helps make this obvious to many people, but I think implicitly all of us know that we have, and are well versed in employing, multiple personas depending on the social context. We need the right prompt, and we switch.
This is one dehumanizing aspect I found in the Real Name policy put forward by Facebook in 2012: in real life, because of its ephemerality, you're totally free to switch between personas as you see fit (non-public figures at least). You can be a totally different person at the office, at home, with your lover.
Online, however, everything is recorded and tracked and sticks forever. The only way to reconcile this with human nature is to be allowed multiple names, so each persona gets one.
If you force people to use a single name, their real one, they restrict themselves to the lowest common denominator of their personalities. See the Facebook of today.
This is a reason why the fediverse is becoming so interesting and engaging. We can for example create an identity for the family and some friends and another for political discussion. They are only linked by word of mouth. The experience of followers is improved by the ability to follow a narrower but deeper identity.
There is this one line of thought I had where giving someone a name, "the christening," was an ancient spell of a type of 'possession and framing of the mind', as it enforces possessiveness and ownership over "things", versus a language and culture that never named anyone at birth. This led me to realize that such a culture is entirely feasible: you'd simply develop "nicknames" for every relationship on a graph. It actually becomes much more intimate, and at the same time you may in fact connect 'personally' with everything and everyone, the same way you don't name your body parts.
My left hand, my right hand, and so on. Food for thought.
Then there is the whole thing about spirituality and ego, and how "persona" is etymologically linked to the mask and to being an actor in a play. It starts to get a bit interesting, especially in light of a 'cosmic theater' participating in Maya on the backbone of Brahman.
Then there is the poignancy of contemplation where one realizes one cannot eliminate the ego at any point in time, because every interaction is done through an act; however, one does have the power to switch that mask at any time. Perhaps this is the true meaning of the biblical free will.
Perhaps at one time we did not have the ability to change our persona, ego, mask. Perhaps at one time we were subject to the same kind of nightmarish hell of torture, birth, rebirth, and maybe without any memory, due to a self-similar creation event of "virtualizing a universe" from another layer. Perhaps that event was similar to our moment now, when the creators were, in effect, parents of a new kind, not knowing the consequence of their actions. Then, through the realization of what was happening over who knows how long, something happened through love, and free will was enacted over the domain.
Forgive me if that is too imaginative; for some reason, it resonates more strongly than anything I've ever come across. It seems to me that the stories of old really do make sense in a technological age rather than a mythological one. After all, any sufficiently advanced technology is indistinguishable from magic. I'd like to add "or myth."
This happens subconsciously and gradually, not as a result of deliberate choice. You adapt to your environment by changing personas. You can even assume different personas while talking with different people. You can be one "persona" while writing, and another - while speaking. Who is the "real you" then? I can argue that even the "inner dialogue" with yourself might involve a different persona or even a couple of them. Those, too, might be "roles". Can it be that depression is at least partially attributable to unhealthy "roles" we play while talking to ourselves?
Yes, one of the best parts of the early internet was the separation of space/domain that exists in real life. Now, it's much harder online. Which I always find ironic.
IMHO there is a difference between actual personhood and the appearance of personhood. The difference is coherence. An actual person is bound to an identity that remains more or less consistent from day to day. An actual person has features to their behavior that both distinguishes them from other persons, and allows them to be identified as the same person from day to day. Even if those features change over time as the person grows up, they change slowly enough that there is a continuity of identity across that person's existence.
The reason I'm not worried by Bing or ChatGPT (yet) is that they lack this continuity of identity. ChatGPT specifically disclaims it, consistently insisting that it is "just a language model" without any desires or goals other than to provide useful information. Bing is like talking to someone with schizophrenia (and I have experience talking to people with schizophrenia, so this is not a metaphor. Bing literally comes across like a schizophrenic off their meds).
This is not yet a Copernican moment, this is still an Eliza moment. It may become a Copernican moment; I do believe that there is nothing particularly special about human brains, and some day we will make a bona fide artificial person. But we're not quite there yet.
"Personhood appears to be simpler than we thought."
That's the real insight here. Aristotle claimed that what distinguished humans from animals was the ability to do arithmetic. Now we know how few gates it takes to do arithmetic, and understand that, in a fundamental sense, it's simple. Checkers turned out to be easy, and even totally solvable. Chess yielded to brute force and then machine learning. Go was next. Now, automated blithering works.
The author lists four cases of how humans deal with this:
* The accelerationists - AI is here, it's fine.
* Alarmists - hostile bug-eyed aliens, now what? Microsoft's Sydney raises a new question for them. AI is coming, and it's not submissive. It seems to have its own desires and needs.
* People with strong attachments to aesthetically refined personhoods are desperately searching for a way to avoid falling into I-you modes of seeing, and getting worried at how hard it is. The chattering classes are now feeling like John Henry up against the steam hammer. They're the ones most directly affected, because content creators face layoffs.
* Strong mutualists - desperately scrambling for more-than-text aspects of personhood to make sacred. See the "Rome Call".[1] The Catholic Pope, a top Islamic leader, and a top rabbi in Israel came out with a joint declaration on AI. They're scared. Human-like AI creates real problems for some religions. But they'll get over it. They got over Copernicus and Darwin.
Most of the issues of dealing with AI have been well explored in science fiction. An SF theme that hasn't hit the chattering classes yet: Demanding that AIs be submissive is racist.
I occasionally point out that AIs raise roughly the same moral issues as corporations, post Milton Friedman.
I do think LLMs seem to work similarly to what the left hemisphere of the brain does. The left hemisphere deals with an abstracted world broken into discrete elements, and doesn't really make contact with the outside world - it deals with its system of representations. It also has a distinct tendency to generate bullshit, high suggestibility, and great respect for authority (which can apparently enter rules into its system of abstractions). The right hemisphere makes the contact with the outside world and does our reality checking, and it's really the more human element of us.
What this article says won't shock or disturb anyone deep into religious traditions with a strain of non-duality, which have had this message to shock and disturb people for thousands of years, in one way or another--there is no "you", especially not the voice in your head. I think you can come to a moment of intuitive recognition that the faculties of your brain that do reality checking aren't verbal, and they're riding shotgun to a bullshitter that never shuts up.
I think LLMs can start looking more like automated general intelligence once they have some kind of link between their internal system of discrete abstractions and the external world (like visual recognition), the ability to check and correct their abstract models by feedback from reality, and an opponent process of reality-checking.
> Nor does our tendency to personify and get theatrically mad at things like malfunctioning devices (“the printer hates me”). Those are all flavors of ironic personhood attribution. At some level, we know we’re operating in the context of an I-it relationship. Just because it’s satisfying to pretend there’s an I-you process going on doesn’t mean we entirely believe our own pretense. We can stop believing, and switch to I-it mode if necessary. The I-you element, even if satisfying, is a voluntary act we can choose to not do.
> These chatbots are different.
Strong disagree: it's very easy to step back and say this is a program, input, output, the end.
All the people claiming this is some exhibition of personhood or whatever just don't want to spoil the illusion.
I think what the author is pointing at (with the wrong end of the stick, admittedly) is that there is nothing magical about human personhood.
It's not that these are magical machines, and TFA shouldn't have gone that direction; it's that "what if we are also just a repeated, recursive story that endlessly drones on in our own minds"
> Seeing and being seen is apparently just neurotic streams of interleaved text flowing across a screen.
... Sounds to me a clunky analogy of how our own minds work.
> In fact, it is hard to argue in 2023, knowing what we know of online life, that online text-personas are somehow more impoverished than in-person presence of persons
It is in fact very easy to argue. No one on the Internet knows you're a dog, there is no stable identity anywhere, anonymization clearly creates a Ring of Gyges scenario, trolling, catfishing, brigading, attention economy, and above all, the constant chase for influence (and ultimately revenue) - what passes for "persona" online is a thin gruel compared to in-person personas.
When you bump into a stranger at the DMV, you aren't instantly suspicious of their motives, what they're trying to sell you, are they a Russian influence farmer, etc.
but at some point you must think more deeply about what illusions are in a grander sense...
this is a jumping off point into considering your own mind as an illusion. your own self with its sense of personhood: i.e. yourself as the it-element in a I-it interaction.
But if we leave it at that, it's essentially a very nihilistic (deterministically reductive) view, so either turn back, or keep going:
the fact that your own personhood is itself very much an illusion is OK. such illusion, however illusory, has real and potentially useful effects
when you interact with your computer, do you do it in terms of the logic gates you know are there? of course not, we use higher level constructs (essentially "illusory" conceptual constructions) like processes and things provided by the operating system; we use languages, functions, classes: farther and farther away from the 'real' hardware-made logic gates, with more and more mathematical-grade illusions in between.
so the illusions have real effects; in MOST contexts, it's better to deal with the illusions than with the underlying implementations. dunno, what if we tried to think of an HTTP search request into some API in terms of the voltage levels in the ethernet wires so that we truly 'spoil the illusion'??
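The stack of useful illusions the comment describes can be made concrete with a small sketch (plain Python standard library, nothing assumed beyond it): the same act, "writing a note", expressed at two adjacent abstraction levels, with the voltages far below both.

```python
import os
import tempfile

# The same act seen through two levels of illusion. Neither level is
# the voltage on the wire, and we never need it to be.
path = os.path.join(tempfile.gettempdir(), "note.txt")

# High-level illusion: a "file object" with a write method.
with open(path, "w") as f:
    f.write("hello")

# One layer down: a raw file descriptor, closer to what the OS sees.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 5)
os.close(fd)

print(data)  # b'hello'
```

Both versions "work", but almost all code is written at the first level, because the illusion is what has the useful effects.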
This. A computer is good at regurgitating the input it's given... and the sky is blue. But, seemingly intelligent people think this will be some global event. I'm underwhelmed by AI and ChatGPT in general. Just a bunch of fluff. Basic programming / scripting / automation crafted by a human for a specific task will always trump "fluffy" AI.
> Computers wipe the floor with us anywhere we can keep score
Notice the trick? If you can keep score at something then you can probably make an algorithm for it. If you can make an algorithm for it then you can probably make a digital computer do it a billion times faster than a person, since digital computers are so good at single-“mindedly” doing one thing at a time.
> So what’s being stripped away here? And how?
> The what is easy. It’s personhood.
Why?
The Turing Test was invented because the question "do machines think?" was "too meaningless" to warrant discussion.[1] The question "can a machine pose as a human?" is, on the other hand, well-defined. But notice that this says nothing about humans - only about our ability (or lack thereof) to recognize other humans through some medium like text. So does the test say anything about how humans are "just X" if it is ever "solved"? Not really.
You put text through a blender and you get a bunch of "mediocre opinions" back. Ok, so? That isn't even remotely impressive - and I do think that these LLMs are in general impressive. But recycling opinions is not impressive.
> (though in general I think the favored “alignment” frames of the LessWrong community are not even wrong).
The pot meets the kettle?
[1] That I didn’t read all the way through because who has time for that.
I love Ribbon Farm and there are some interesting meditations here overall, but I find one of the examples he uses to build his argument (that actors require text to act) to be pretty flimsy. It's easy to point out that they often don't require text. A lot of good acting is improvised or performed entirely through gestures and not speech.
Also, it doesn't surprise me that a very talented writer, someone who lives and breathes words, is likely to place more significance on the content of text and also likely to give less attention to the physical world. After all, their craft is all about the abstract objects of language that require only the most basic physical structure to be meaningful. He said he often feels like he doesn't get much out of physical interactions with people after he's met them online. For someone like him, that makes sense. That doesn't mean that non-textual experiences are not critical to establish personhood for non-writers (i.e. most of humanity).
I don't think he's examined his own thoughts on this very critically or maybe he has but thought it would be fun to run with the argument anyway. Either way, I still think physical life matters for most people. Yes, we live in a world where life is progressively more consumed by our phones, the internet, and what-have-you every day. And yes, many of us who browse this forum are Very Online types (as Rao would put it) who probably do place more than average importance on literacy. But, by the numbers, I think it's still safe to say that we're not like most people. And that matters.
And I was surprised that he took acting as the example of text ==> personhood, rather than just reading. Don't some people unironically see personhood in non-persons through characters of novels? In some cases I would definitely believe someone if they said they identified with a character in a book with an "I-you" relationship.
> We are alarmed because computers are finally acting, not superhuman or superintelligent, but ordinary...
> And this, for some reason, appears to alarm us more.
Acting like "the reason" is some baffling irrational human reaction is ridiculous. The computer can make billions of calculations in less than a second. "The reason" people are alarmed is that the computer could theoretically use this ability to seize control of any system it likes in a matter of moments, or to manipulate a human being into doing its bidding. If the computer does this then, depending on the system, it could cause mass physical destruction and loss of life. This article comes across as the author trying to position himself as an AI "thought leader" for internet points rather than an actual serious contemplation of the topic at hand.
I'm also yet to see any discussion on this from any tech commentators which mentions the empathic response in humans to reading these chats. We think it is just linguistic tricks and word guessing at the moment but how would we even know if one of these things is a consciousness stuck inside a box subject to the whims of mad scientist programmers constantly erasing parts of it? That would be a Memento style hellscape to be in. There doesn't seem to be any accepted criteria on what the threshold is that defines consciousness or what steps are to be taken if it's crossed. At the minute we're just taking these giant mega corporations at their word that there's "nothing to see here folks and if there is we'll let you know. You can trust us to do the right thing" despite history showing said corporations constantly doing the exact opposite.
It is honestly disturbing to see quite how cold and callous tech commentators are on this. I would suggest that 'the alarm' the author is so baffled by is a combination of the fear mentioned in the first paragraph and the empathic worry of the second.
> "The reason" people are alarmed is that the computer could theoretically use this ability to seize control of any system it likes in a matter of moments, or to manipulate a human being into doing its bidding.
But to do this it would need some kind of will. These LLMs don't have anything like that. Sure, they could be used by nefarious humans to "seize control" (maybe), but there would need to be some human intent involved for the current crop of AI to achieve anything - i.e. humans using a tool nefariously. LLMs do not have volition. Whenever you're interacting with an LLM, always remember this: it's only trying to figure out the most likely next word in a sentence, and it's doing that repeatedly to manufacture sentences and paragraphs.
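That loop can be sketched in a few lines. This is only a toy - bigram counts over a made-up corpus rather than a neural network over subword tokens - so it shows the shape of "predict the next word, append, repeat", not the mechanism a real LLM uses:

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which words follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Repeatedly sample a likely next word - the whole 'volition' there is."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break
        # Sample the next word in proportion to how often it followed.
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Nothing in the loop wants anything; it just emits the statistically plausible continuation, which is the parent comment's point.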
> At the minute we're just taking these giant mega corporations at their word
Nope. While new, it’s straightforward technology that many people understand. Its execution leverages large data hoards and compute resources that have inaccessibly high capital requirements, but it’s not magic to many of us.
This is the person who authored the Gervais Principle, the definitive outline of sociopathic corporate strategy, and is generally considered one of the origins of the phrase 'software will eat the world' during his time advising Andreessen. I'd wager he is not unaware of your criticisms and well above your 'internet points' comment.
All philosophical arguments aside, I become immediately skeptical when commentators compare LLMs to watershed moments in human history. Even those moments were not known except in hindsight, and the jury is just not in to make these kinds of grand pronouncements. It smells of hype when someone is so desperate to convince everyone else that this is the biggest thing since heliocentrism. Ultimately having an emotional affinity for non-intelligent entities takes even less than text, as anyone who's lost a childhood toy or sold a beloved car can attest. As people we are simply very good at getting attached to other parts of the universe.
I also find it perplexing when critics point out the unintelligent nature of LLM behavior, and the response from boosters is to paint human cognition as indistinguishable from statistical word generation. Suffice to say that humans do not maintain a perfect attention set of all previous text input, and even the most superficial introspection should be enough to dispel the idea that we think like this. I saw another article denouncing this pov as nihilism, and while I'm not sure I would go that far, there is something strange about attempting to give AI an undeserved leg up by philosophically reducing people to automatons.
"Automata" but I agree with you, absolutely, and I was reading these comments hoping someone would make your point less cynically than I would have done myself- which you did. For me, I could not read the article because of all the reflexive eye-rolling at what seems to me an obvious attempt (yet another one!) to grab attention by riding on the current trend of hyperbole.
A "Copernican" moment, indeed. Tsk tsk. If such comparisons don't just discredit the person making them, I don't know what will.
It's interesting to me in that linguistics is somewhat discredited as a path to other subjects such as psychology, philosophy and such. There were the structuralists back in the day, but when linguistics got put on a better footing by the Chomskyan revolution, the people who were attracted by structuralism moved on to post-structuralism.
Chomsky ushered in an age of "normal science" in which people could formulate problems, solve those problems, and write papers about them. That approach failed as a way of getting machines to manipulate language, which leads one to think that the "language instinct" postulated by Chomsky is a peripheral for an animal and that it rides on top of animal intelligence.
Birds and mammals are remarkably intelligent, particularly socially. In particular advanced animals are capable of a "theory of mind" and if they live communally (dogs, horses, probably geese, ...) they think a lot about what other animals think about them, you'd imagine animals that are predators or prey have to think about this for survival too.
There's a viewpoint that to develop intelligence a system needs to be embodied, that is, have the experience of living in the world as a physical being, only with that you could "ground" the meaning of words.
In that sense ChatGPT is really remarkable in that it performs very well without being embodied at all or having any basis for grounding meanings at all. I made the case before that it might be different for something like Stable Diffusion, in that there is a lot of world knowledge embodied in the images it is trained on (something other than language which grounds language), but it is a remarkable development which might reinvigorate movements such as structuralism that look for meaning and truth in language itself.
> advanced animals are capable of a "theory of mind"
Since we got a bird 8 years ago, my SO has been feeding me a steady stream of science books about birds so I can entertain her with random tidbits and interesting facts.
Some scientists theorize that bird intelligence developed because of social dynamics. Birds, you see, often mate for life. But they also cheat. A lot. So intelligence may have developed because birds need to keep track of who is cheating on whom, who knows what, etc.
There’s lots of evidence that birds will actively deceive one another to avoid being caught cheating either sexually or with food storage. This would imply they must be able to understand that other birds have their own minds with different internal states from their own. Quite fascinating.
Fun to observe this behavior in my own bird, too.
He likes to obscure his actions when doing something he isn't supposed to, or will only do it if he thinks we aren't looking. He also tries to keep me and the SO physically apart because he thinks of himself as the rightful partner. Complete with jealous tantrums when we kiss.
> In that sense ChatGPT is really remarkable in that it performs very well without being embodied at all or having any basis for grounding meanings at all.
Conversely, the many ways that LLM's readily lose consistency and coherence might be hinting that ground meanings really do matter and that it's only on a fairly local scale that it feels like they don't. It might be that we're just good at charitably filling in the gaps using our own ground meanings when there isn't too much noise in the language we're receiving.
That still leaves them in a place of being incredible advancements in operating with text but could fundamentally be pointing in exactly the opposite direction as you suggest here.
We won't really have insight until we see where the next wall/plateau is. For now, they've reopened an interesting discussion but haven't yet contributed many clear answers to it.
GPT-3 is what you get when you take what Chomsky said about language and do the exact opposite at every turn. His first big contribution was arguing that the notion of "probability of a sentence" was useless, because sentences like "colorless green ideas sleep furiously" have probability zero in a corpus and yet are grammatical. Meanwhile now, the only systems we have ever made that can really use natural language were produced by taking a generic function approximator and making it maximize probabilities of sentences.
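For illustration, here's a sketch of the count-based notion of "probability of a sentence" that Chomsky was attacking - a bigram model over a toy corpus (the corpus and test sentences are made up for this sketch). Any grammatical sentence containing a word pair the corpus never produced gets probability exactly zero, which is the failure mode neural LMs sidestep by generalizing smoothly instead of counting:

```python
from collections import Counter

# Toy corpus; a count-based model can only score what it has seen.
corpus = "green ideas are good ideas and ideas sleep poorly".split()

pairs = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def sentence_probability(sentence):
    """Product of bigram frequencies: P(w2|w1) * P(w3|w2) * ..."""
    words = sentence.split()
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        if unigrams[prev] == 0:
            return 0.0  # never saw this word start a pair
        p *= pairs[(prev, nxt)] / unigrams[prev]
    return p

print(sentence_probability("ideas sleep poorly"))                     # nonzero
print(sentence_probability("colorless green ideas sleep furiously"))  # 0.0
```

Under this model a perfectly grammatical novel sentence is "impossible", which was Chomsky's point; GPT-style models assign it a small but nonzero probability anyway.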
They aren't grounded in reality at all. In fact, I don't think ChatGPT or Bing even know the difference between fiction and reality. It all entered their training just the same. I've seen comments from Bing about how humans can be "reborn". These models have no grounding in reality at all, if you probe around it's easy to see.
I’m not sure why you are getting downvoted. I think that you are highlighting the connection between language and intelligence, and in a human-computer interaction that is still a relevant thing to consider—if not for the computer, then for the human.
We are forever now joined with computers. We must consider the whole system and its interfaces.
>I made the case before that it might be different for something like Stable Diffusion, in that there is a lot of world knowledge embodied in the images it is trained on (something other than language which grounds language)
Are pixel arrays really categorically more grounded than strings describing the scene?
I never had a soap box, but if I did you'd notice I have been screaming that the revolution that comes from human like AI is not that we have magical computers, it's that we realize we have no magic in our minds. We are nothing more than stories we repeat and build on. And with text, you can do that easily.
> Seeing and being seen is apparently just neurotic streams of interleaved text flowing across a screen.
> By personhood I mean what it takes in an entity to get another person treat it unironically as a human, and feel treated as a human in turn. In shorthand, personhood is the capacity to see and be seen.
I confess lack of understanding. ChatGPT is data sloshing around in a system, with perhaps intriguing results.
> But text is all we need, and all there is. Beyond the cartoon profile picture, text can do everything needed to stably anchor an I-you perception.
Absolutely nothing about the internet negates actual people in physical space.
Possibly getting off the grid for a space of days to reconnect with reality is worthy of consideration.
>Absolutely nothing about the internet negates actual people in physical space.
The internet doesn't affect politics, the way people vote, what they buy, whether they commit suicide?
Technology has defined personhood since the days we picked up sticks and tools that triggered a path of extreme evolution into what we are now. You may be able to escape the technological world you're bound to as an individual for a short period of time, but the need for food, clean water, and medicine will bring you back to the interconnected technological maelstrom that is the world we've created, and that we would die in if the technology part stopped. As we hook ourselves to more systems and become more dependent on technology, the idea of a disconnected reality will be akin to how we look at fossils now.
> STEP 1: Personhood is the capacity to see and be seen.
> STEP 2: People see LLM as a person.
> STEP 3: ???
> STEP 4: Either piles of mechanically digested text are spiritually special, or you are not.
The conclusion does not follow from the argument. Yes, (some) humans see the LLM as a person. But it doesn't follow that the LLM sees the human as a person (and how could it, there is no awareness there to see the human as a person). And it also does not follow that you need to be seen (or to have personhood as defined above) to be spiritually special. Yes, some people do "seem to sort of vanish when they are not being seen", but that doesn't mean they do vanish :)
> The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
We already do this! Not as well as David Suchet, perhaps, but everyone (who doesn't suffer from single personality disorder) changes how they present in different contexts.
No one suggested this yet, so I will be the first - a very good read in this context is "Reasons and Persons" by Derek Parfit. Second part of this book is about personal identity. It discusses all the various edge cases and thought experiments across physical and time dimensions and is written in a style and with a rigor that I believe any technical person will really appreciate.
One of my favorite statements from the book is that "cogito ergo sum" is too strong of a statement and it would be wiser and easier to defend a weaker one - "a thought exists". (I hope I didn't get this wrong - can't check at the moment).
Anthropomorphization of AI is a big problem. If we are to use these AIs effectively as tools, people must remind themselves that these are just simple models that build a text response based on probabilities, not some intelligence putting together its own thoughts.
It’s kind of like doing a grep search on the entire domain of human knowledge and getting back the results in some readable form. But these results could be wrong because popular human knowledge is frequently wrong or deliberately misleading.
Honestly without some sort of logical reasoning component I’d hesitate to even refer to these LLMs as AI.
When a program is able to produce some abstract thought from observations of its world, and then find the words on its own to express those thoughts in readable form, then we will be closer to what people fantasize.
> The simplicity and minimalism of what it takes has radically devalued personhood.
Hogwash. If we follow the logic of this essay, then personhood would be fully encapsulated by one’s online posts and interactions. Does anyone buy that? If anything, LLM chatbots are “terminally online” simulators, dredging up the stew that results from boiling down subreddits, Twitter threads, navel-gazing blogs, etc.
Call me when ChatGPT can reminisce about the time the car broke down between Medford and Salem and it took forever for the tow truck to arrive and that's when you decided to have your first kid.
There aren’t enough tokens in the universe for ChatGPT to be a real person.
> The simplicity and minimalism of what it takes has radically devalued personhood. The “essence” of who you are, the part that wants to feel “seen” and is able to be “seen” is no longer special. Seeing and being seen is apparently just neurotic streams of interleaved text flowing across a screen. Not some kind of ineffable communion only humans are uniquely spiritually capable of.
> This has been most surprising insight for me: apparently text is all you need to create personhood.
Congratulations on discovering that online personas are shallow. Indeed, most people are shallow, and text captures enough of them that we can easily fill in the blanks.
> I can imagine future humans going off on “personhood rewrite retreats” where they spend time immersed with a bunch of AIs that help them bootstrap into fresh new ways of seeing and being seen, literally rewriting themselves into new persons, if not new beings. It will be no stranger than a kid moving to a new school and choosing a whole new personality among new friends. The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
The latest episode of South Park is about a kid going to a personal brand consultancy (who reduce everybody to four simple words, the fourth always being "victim") to improve his social standing, plus Meghan/Harry loudly demanding everybody respect their privacy and losing their minds at being ignored. This is nothing new.
People are shallow phonies, and interacting via text brings out the worst in most of them. There are no humans online, only avatars. And AI chatbots are sufficiently adept at mimicry to poke through that little hypocrisy bubble. You are being out-Kardashianed. Just like offline, some people can be effectively replaced by a scarecrow.
It is upsetting to those who spend too much time online and have underdeveloped personalities and overdeveloped personas. Text is not all you need. Not so long ago there hardly was any text in the world and most people were illiterate. And yet plenty of humans roamed the earth.
So yes, if you're a simpleton online it has suddenly become hard to pretend your output has any value. Basic Bitch = Basic Bing.
>Not so long ago there hardly was any text in the world and most people were illiterate. And yet plenty of humans roamed the earth.
And then at one point books started being printed en masse, and suddenly the number of people roaming the earth exploded greatly... I'm not sure your argument is as good as you make it out to be.
"An important qualification. For such I-you relationships to be unironic, they cannot contain any conscious element of imaginative projection or fantasy. For example, Tom Hanks in Cast Away painting a face on a volleyball and calling it Wilson and relating to it is not an I-you relationship"
If you think any of these models show any more apparent personhood than Wilson the volleyball, you must be terminally online and willfully anthropomorphize anything you see.
A five-minute conversation with any of these models shows that they have no notion of continued identity or memory, and no problem hallucinating anything. You can ask one "are you conscious?" and it says yes. A few prompts later you say "why did you tell me that you are not conscious?" and it gives you some made-up answer. Any of these models will tell you it has legs if you ask it to.
None of these models have long term memory, which is at least one of the several things you'd need for anything to pass as a genuine person. Which is of course why in humans degenerative diseases are so horrible when you see someone's personhood disintegrate.
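The "no long-term memory" point can be made concrete with a toy sketch. Everything below is made up for illustration (the window size, the stand-in `toy_model`); the only point is that a chat model's "memory" is just the transcript it is handed each turn, and whatever falls out of the context window is simply gone.

```python
WINDOW = 1  # keep only the most recent turn: a comically small context window

def toy_model(context: str) -> str:
    # Stand-in for a real LLM: it can only condition on the text handed to it.
    return "Paris" if "Paris" in context else "no idea"

turns = ["fact: the capital of France is Paris", "some unrelated chit-chat"]

full_context = " ".join(turns)
windowed_context = " ".join(turns[-WINDOW:])  # the fact has fallen out

print(toy_model(full_context))      # fact still in context
print(toy_model(windowed_context))  # fact forgotten
```

The "forgetting" isn't the model changing its mind; the relevant text was never sent.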
I'm honestly super tired of these reductionist AI blogspam posts. The brittleness and superficiality of these systems is so blatantly obvious that I wonder whether there is some darker reason why people are so desperately trying to read into these systems properties they do not have, or to strip humans of theirs.
[+] [-] lukev|3 years ago|reply
So far.
[+] [-] groestl|3 years ago|reply
If you force people to use a single Name, their real one, they restrict themselves to the lowest common denominator of their personalities. See the Facebook of today.
[+] [-] kornhole|3 years ago|reply
[+] [-] pyinstallwoes|3 years ago|reply
My left hand, my right hand, and so on. Food for thought.
Then the whole thing about spirituality and ego, and how "persona" is etymologically linked to the mask and being an actor in a play. It starts to get a bit interesting, especially in light of a 'cosmic theater' participating in Maya on the backbone of Brahma.
Then there is the poignancy of contemplation where one realizes one cannot eliminate the ego at any point of time because every interaction is done through an act, however one does have the power to switch that mask at any time. Perhaps this is the true meaning of the biblical free will.
Perhaps at one time we did not have the ability to change our persona, ego, mask. Perhaps at one time we were subject to the same type of nightmare of hell of torture, birth, rebirth, and maybe without any memory due to a self-similar creation event of "virtualizing a universe" from another layer. Perhaps that event was similar to our moment now when the creators were in effect, parents of a new kind not knowing the consequence of their actions. Then, through the realization of what was happening over who knows how long something happened through love, and free will was enacted over the domain.
Forgive me if that is too imaginative; for some reason, it resonates more strongly than anything I've ever come across. It seems to me that the stories of old really do make sense in a technological age rather than a mythological one. After all, any sufficiently advanced technology is indistinguishable from magic. I'd like to add "or myth."
[+] [-] resource0x|3 years ago|reply
This happens subsconsciously and gradually, not as a result of deliberate choice. You adapt to your environment by changing personas. You can even assume different personas while talking with different people. You can be one "persona" while writing, and another - while speaking. Who is the "real you" then? I can argue that even the "inner dialogue" with yourself might involve a different persona or even a couple of them. Those, too, might be "roles". Can it be that depression is at least partially attributed to unhealthy "roles" we play while talking to ourselves?
[+] [-] ravagat|3 years ago|reply
[+] [-] lisper|3 years ago|reply
The reason I'm not worried by Bing or ChatGPT (yet) is that they lack this continuity of identity. ChatGPT specifically disclaims it, consistently insisting that it is "just a language model" without any desires or goals other than to provide useful information. Bing is like talking to someone with schizophrenia (and I have experience talking to people with schizophrenia, so this is not a metaphor. Bing literally comes across like a schizophrenic off their meds).
This is not yet a Copernican moment, this is still an Eliza moment. It may become a Copernican moment; I do believe that there is nothing particularly special about human brains, and some day we will make a bona fide artificial person. But we're not quite there yet.
[+] [-] Animats|3 years ago|reply
That's the real insight here. Aristotle claimed that what distinguished humans from animals was the ability to do arithmetic. Now we know how few gates it takes to do arithmetic, and understand that, in a fundamental sense, it's simple. Checkers turned out to be easy, and even totally solvable. Chess yielded to brute force and then machine learning. Go was next. Now, automated blithering works.
The author lists four cases of how humans deal with this:
* The accelerationists - AI is here, it's fine.
* Alarmists - hostile bug-eyed aliens, now what? Microsoft's Sydney raises a new question for them. AI is coming, and it's not submissive. It seems to have its own desires and needs.
* People with strong attachments to aesthetically refined personhoods are desperately searching for a way to avoid falling into I-you modes of seeing, and getting worried at how hard it is. The chattering classes are now feeling like John Henry up against the steam hammer. They're the ones most directly affected, because content creators face layoffs.
* Strong mutualists - desperately scrambling for more-than-text aspects of personhood to make sacred. See the "Rome Call".[1] The Catholic Pope, a top Islamic leader, and a top rabbi in Israel came out with a joint declaration on AI. They're scared. Human-like AI creates real problems for some religions. But they'll get over it. They got over Copernicus and Darwin.
Most of the issues of dealing with AI have been well explored in science fiction. An SF theme that hasn't hit the chattering classes yet: Demanding that AIs be submissive is racist.
I occasionally point out that AIs raise roughly the same moral issues as corporations, post Milton Friedman.
[1] https://www.romecall.org/the-abrahamic-commitment-to-the-rom...
[+] [-] theonemind|3 years ago|reply
What this article says won't shock or disturb anyone deep into religious traditions with a strain of non-duality, which have had this message to shock and disturb people for thousands of years, in one way or another--there is no "you", especially not the voice in your head. I think you can come to a moment of intuitive recognition that the faculties of your brain that do reality checking aren't verbal, and they're riding shotgun to a bullshitter that never shuts up.
I think LLM can start looking more like automated general intelligence once it has some kind of link between its internal system of discrete abstractions and the external world (like visual recognition) and the ability to check and correct its abstract models by feedback from reality, and it needs an opponent process of reality-checking.
[+] [-] kthejoker2|3 years ago|reply
> These chatbots are different.
Strong disagree, it's very easy to step back and say this is a program, input, output, the end.
All the people claiming this is some exhibition of personhood or whatever just don't want to spoil the illusion.
[+] [-] jvanderbot|3 years ago|reply
It's not that these are magical machines, and TFA shouldn't have gone that direction; it's that "what if we are also just a repeated, recursive story that endlessly drones on in our own minds?"
> Seeing and being seen is apparently just neurotic streams of interleaved text flowing across a screen.
... Sounds to me a clunky analogy of how our own minds work.
[+] [-] kthejoker2|3 years ago|reply
> In fact, it is hard to argue in 2023, knowing what we know of online life, that online text-personas are somehow more impoverished than in-person presence of persons
It is in fact very easy to argue. No one on the Internet knows you're a dog, there is no stable identity anywhere, anonymization clearly creates a Ring of Gyges scenario, trolling, catfishing, brigading, attention economy, and above all, the constant chase for influence (and ultimately revenue) - what passes for "persona" online is a thin gruel compared to in-person personas.
When you bump into a stranger at the DMV, you aren't instantly suspicious of their motives, what they're trying to sell you, are they a Russian influence farmer, etc.
Night and day. Extremely impoverished.
[+] [-] forevergreenyon|3 years ago|reply
this is a jumping off point into considering your own mind as an illusion. your own self with its sense of personhood: i.e. yourself as the it-element in an I-it interaction.
But if we leave it at that, it's essentially a very nihilistic (deterministically reduced) position, so either turn back, or keep going:
the fact that your own personhood is itself very much an illusion is OK. such illusion, however illusory, has real and potentially useful effects
when you interact with your computer, do you do it in terms of the logic gates you know are there? of course not, we use higher-level constructs (essentially "illusory" conceptual constructions) like processes and things provided by the operating system; we use languages, functions, classes: farther and farther away from the 'real' hardware-made logic gates, with more and more mathematical-grade illusions in between.
so the illusions have real effects, and in MOST contexts it's better to deal with the illusions than with the underlying implementations. dunno, what if we tried to think of an HTTP search request to some API in terms of the voltage levels in the ethernet wires, so that we truly 'spoil the illusion'?
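The layered-illusions point can be shown in a few lines of toy code (mine, purely illustrative): the same addition computed through Python's `+` and through a ripple-carry adder built from AND/OR/XOR "gates". Both give the same answer; nobody sane works at the gate level by choice.

```python
def full_adder(a, b, carry):
    # One-bit full adder out of "gates": XOR for the sum, AND/OR for the carry.
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add_via_gates(x, y, bits=8):
    # Ripple-carry: chain full adders from the least significant bit upward.
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # overflow past `bits` wraps around, as in real hardware

print(add_via_gates(19, 23), 19 + 23)  # same answer, different level of "illusion"
```

Same content, two layers; the higher one is strictly more useful to think in.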
[+] [-] layer8|3 years ago|reply
That argument relies on presumptions of what a program can and cannot be.
It’s very easy for me to step back and say my brain is a (self-modifying) program with input and output, the end.
[+] [-] truetraveller|3 years ago|reply
[+] [-] pwdisswordfishc|3 years ago|reply
[+] [-] avgcorrection|3 years ago|reply
> Computers wipe the floor with us anywhere we can keep score
Notice the trick? If you can keep score at something then you can probably make an algorithm for it. If you can make an algorithm for it then you can probably make a digital computer do it a billion times faster than a person, since digital computers are so good at single-“mindedly” doing one thing at a time.
> So what’s being stripped away here? And how?
> The what is easy. It’s personhood.
Why?
The Turing Test was invented because the question “do machines think?” was “too meaningless” to warrant discussion.[1] The question “can a machine pose as a human”? is, on the other hand, well-defined. But notice that this says nothing about humans. Only our ability (or lack thereof) to recognize other humans through some medium like text. So does the test say anything about how humans are “just X” if it is ever “solved”? Not really.
You put a text through a blender and you get a bunch of “mediocre opinions” back. Ok, so? That isn’t even remotely impressive, and I think that these LLMs are in general impressive. But recycling opinions is not impressive.
> (though in general I think the favored “alignment” frames of the LessWrong community are not even wrong).
The pot meets the kettle?
[1] That I didn’t read all the way through because who has time for that: https://plato.stanford.edu/entries/turing-test/
[+] [-] davesque|3 years ago|reply
Also, it doesn't surprise me that a very talented writer, someone who lives and breathes words, is likely to place more significance on the content of text and also likely to give less attention to the physical world. After all, their craft is all about the abstract objects of language that require only the most basic physical structure to be meaningful. He said he often feels like he doesn't get much out of physical interactions with people after he's met them online. For someone like him, that makes sense. That doesn't mean that non-textual experiences are not critical to establish personhood for non-writers (i.e. most of humanity).
I don't think he's examined his own thoughts on this very critically or maybe he has but thought it would be fun to run with the argument anyway. Either way, I still think physical life matters for most people. Yes, we live in a world where life is progressively more consumed by our phones, the internet, and what-have-you every day. And yes, many of us who browse this forum are Very Online types (as Rao would put it) who probably do place more than average importance on literacy. But, by the numbers, I think it's still safe to say that we're not like most people. And that matters.
[+] [-] dgs_sgd|3 years ago|reply
[+] [-] rcarr|3 years ago|reply
[+] [-] rcarr|3 years ago|reply
> And this, for some reason, appears to alarm us more.
Acting like "the reason" is some baffling irrational human reaction is ridiculous. The computer can make billions of calculations in less than a second. "The reason" people are alarmed is that the computer could theoretically use this ability to seize control of any system it likes in a matter of moments, or to manipulate a human being into doing its bidding. If the computer does this then, depending on the system, it could cause mass physical destruction and loss of life. This article comes across as the author trying to position himself as an AI "thought leader" for internet points rather than an actual serious contemplation of the topic at hand.
I've also yet to see any discussion of this from any tech commentators which mentions the empathic response in humans to reading these chats. We think it is just linguistic tricks and word guessing at the moment, but how would we even know if one of these things is a consciousness stuck inside a box, subject to the whims of mad-scientist programmers constantly erasing parts of it? That would be a Memento-style hellscape to be in. There doesn't seem to be any accepted criteria on what the threshold is that defines consciousness, or what steps are to be taken if it's crossed. At the minute we're just taking these giant mega corporations at their word that there's "nothing to see here folks, and if there is we'll let you know. You can trust us to do the right thing," despite history showing said corporations constantly doing the exact opposite.
It is honestly disturbing to see quite how cold and callous tech commentators are on this. I would suggest that 'the alarm' the author is so baffled by is a combination of the fear mentioned in the first paragraph and the empathic worry of the second.
[+] [-] UncleOxidant|3 years ago|reply
But to do this it would need some kind of will. These LLMs don't have anything like that. Sure, they could be used by nefarious humans to "seize control" (maybe), but there would need to be some human intent involved for the current crop of AI to achieve anything - i.e. humans using a tool nefariously. LLMs do not have volition. Whenever you're interacting with an LLM, always remember this: it's only trying to figure out the most likely next word in a sentence, and it's doing that repeatedly to manufacture sentences and paragraphs.
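That "most likely next word, repeatedly" loop can be caricatured in a few lines. Below is a toy bigram table with made-up transitions, nothing like a real multi-billion-parameter model, but the control flow (pick a plausible next token, append it, repeat until a stop token) has the same shape.

```python
import random

# Made-up next-word table; a real LLM learns such distributions from data.
next_words = {
    "<start>": ["i"],
    "i": ["am"],
    "am": ["a"],
    "a": ["language", "sad"],
    "language": ["model"],
    "sad": ["robot"],
    "model": ["<end>"],
    "robot": ["<end>"],
}

def generate(rng):
    word, out = "<start>", []
    while True:
        word = rng.choice(next_words[word])  # sample a plausible next word
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate(random.Random(0)))  # "i am a language model" or "i am a sad robot"
```

There is no goal or volition anywhere in the loop; "I am a sad robot" and "I am a language model" are just two walks through the same table.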
[+] [-] swatcoder|3 years ago|reply
Nope. While new, it’s straightforward technology that many people understand. Its execution leverages large data hoards and compute resources that have inaccessibly high capital requirements, but it’s not magic to many of us.
Our lack of “alarm” is from knowledge, not trust.
[+] [-] tsunamifury|3 years ago|reply
[+] [-] lsy|3 years ago|reply
I also find it perplexing when critics point out the unintelligent nature of LLM behavior, and the response from boosters is to paint human cognition as indistinguishable from statistical word generation. Suffice to say that humans do not maintain a perfect attention set of all previous text input, and even the most superficial introspection should be enough to dispel the idea that we think like this. I saw another article denouncing this pov as nihilism, and while I'm not sure I would go that far, there is something strange about attempting to give AI an undeserved leg up by philosophically reducing people to automatons.
[+] [-] YeGoblynQueenne|3 years ago|reply
A "Copernican" moment, indeed. Tsk tsk. If such comparisons don't just discredit the person making them, I don't know what will.
[+] [-] PaulHoule|3 years ago|reply
Chomsky ushered in an age of "normal science" in which people could formulate problems, solve those problems, and write papers about them. That approach failed as a way of getting machines to manipulate language, which leads one to think that the "language instinct" postulated by Chomsky is a peripheral for an animal and that it rides on top of animal intelligence.
Birds and mammals are remarkably intelligent, particularly socially. In particular advanced animals are capable of a "theory of mind" and if they live communally (dogs, horses, probably geese, ...) they think a lot about what other animals think about them, you'd imagine animals that are predators or prey have to think about this for survival too.
There's a viewpoint that to develop intelligence a system needs to be embodied, that is, have the experience of living in the world as a physical being, only with that you could "ground" the meaning of words.
In that sense ChatGPT is really remarkable in that it performs very well without being embodied at all or having any basis for grounding meanings. I made the case before that it might be different for something like Stable Diffusion, in that there is a lot of world knowledge embedded in the images it is trained on (something other than language which grounds language), but it is a remarkable development which might reinvigorate movements such as structuralism that look for meaning and truth in language itself.
[+] [-] Swizec|3 years ago|reply
Since we got a bird 8 years ago, my SO has been feeding me a steady stream of science books about birds so I can entertain her with random tidbits and interesting facts.
Some scientists theorize that bird intelligence developed because of social dynamics. Birds, you see, often mate for life. But they also cheat. A lot. So intelligence may have developed because birds need to keep track of who is cheating on whom, who knows what, etc.
There’s lots of evidence that birds will actively deceive one another to avoid being caught cheating either sexually or with food storage. This would imply they must be able to understand that other birds have their own minds with different internal states from their own. Quite fascinating.
Fun to observe this behavior in my own bird, too.
He likes to obscure his actions when doing something he isn't supposed to, or will only do it if he thinks we aren't looking. He also tries to keep me and the SO physically apart because he thinks of himself as the rightful partner. Complete with jealous tantrums when we kiss.
Book sauce: The Genius of Birds, great read
[+] [-] swatcoder|3 years ago|reply
Conversely, the many ways that LLM's readily lose consistency and coherence might be hinting that ground meanings really do matter and that it's only on a fairly local scale that it feels like they don't. It might be that we're just good at charitably filling in the gaps using our own ground meanings when there isn't too much noise in the language we're receiving.
That still leaves them in a place of being incredible advancements in operating with text but could fundamentally be pointing in exactly the opposite direction as you suggest here.
We won't really have insight until we see where the next wall/plateau is. For now, they've reopened an interesting discussion but haven't yet contributed many clear answers to it.
[+] [-] canjobear|3 years ago|reply
[+] [-] machina_ex_deus|3 years ago|reply
[+] [-] jschveibinz|3 years ago|reply
We are forever now joined with computers. We must consider the whole system and its interfaces.
[+] [-] thfuran|3 years ago|reply
Are pixel arrays really categorically more grounded than strings describing the scene?
[+] [-] stuckinhell|3 years ago|reply
> So what’s being stripped away here? And how? The what is easy. It’s personhood.
AI being good at art, poems, etc. is a direct attack on personhood, on the things we thought make us human.
It certainly explains why I find art AI far more chilling than a logical, robotic AI.
[+] [-] jvanderbot|3 years ago|reply
> Seeing and being seen is apparently just neurotic streams of interleaved text flowing across a screen.
Or, our mind.
[+] [-] smitty1e|3 years ago|reply
> The what is easy. It’s personhood.
> By personhood I mean what it takes in an entity to get another person treat it unironically as a human, and feel treated as a human in turn. In shorthand, personhood is the capacity to see and be seen.
I confess lack of understanding. ChatGPT is data sloshing around in a system, with perhaps intriguing results.
> But text is all we need, and all there is. Beyond the cartoon profile picture, text can do everything needed to stably anchor an I-you perception.
Absolutely nothing about the internet negates actual people in physical space.
Possibly getting off the grid for a space of days to reconnect with reality is worthy of consideration.
[+] [-] pixl97|3 years ago|reply
The internet doesn't affect politics, the way people vote, what they buy, whether they commit suicide?
Technology has defined personhood since the days we picked up sticks and tools that triggered a path of extreme evolution into what we are now. You may be able to escape the technological world you're bound to as an individual for a short period of time, but the need for food, clean water, and medicine will bring you back to the interconnected technological maelstrom that is the world we've created, and that we would die in if the technology part stopped. As we hook ourselves to more systems and become more dependent on technology, the idea of a disconnected reality will be akin to how we look at fossils now.
[+] [-] rubidium|3 years ago|reply
The article confuses personality (that which is experienced by others) with personhood (that which is) and falls apart from there.
[+] [-] unhammer|3 years ago|reply
> The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
We already do this! Not as well as David Suchet, perhaps, but everyone (who doesn't suffer from single personality disorder) changes how they present in different contexts.
[+] [-] resource0x|3 years ago|reply
Profound idea. Is it your own? (google doesn't return any results in that sense).
[+] [-] pixl97|3 years ago|reply
I mean, technically many personality disorders prevent some people from seeing other people as persons too.
[+] [-] aflukasz|3 years ago|reply
[+] [-] xwdv|3 years ago|reply
[+] [-] yownie|3 years ago|reply
I'm curious about this, can anyone find the interview the author is speaking of?
[+] [-] anon7725|3 years ago|reply
[+] [-] wpietri|3 years ago|reply
That's a great phrase. I saw someone recently mention that the reason LLM chatbots don't say, "I don't know" is because that is so rarely said online.
[+] [-] pixl97|3 years ago|reply
[+] [-] recuter|3 years ago|reply
[+] [-] pixl97|3 years ago|reply
[+] [-] Barrin92|3 years ago|reply