tomp|17 days ago
The agent has no "identity". There's no "you" or "I" or "discrimination".
It's just a piece of software designed to output probable text given some input text. There's no ghost, just an empty shell. It has no agency; it just follows human commands, like a hammer hitting a nail because you wield it.
I think it was wrong of the developer to even address it as a person; instead, it should just be treated as spam (which it is).
punpunia|17 days ago
I feel as if there is a veil over the collective mass of the tech general public. They see something producing remixed output from humans and they start to believe the mixer is itself human, or even more: that perhaps humans are reflections of AI and that AI gives insights into how we think.
tomp|17 days ago
So are mannequins in clothing stores.
But that doesn't give them rights or moral consequences (except as human property that can be damaged / destroyed).
jerf|17 days ago
Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run.
Consequently, interaction with an AI, especially one that won't have any feedback into training a new model, is from a game-theoretic perspective not the usual iterated game that human social norms assume. We expect our agents, being flesh-and-blood humans, to have persistence, to socially respond indefinitely into the future to our interactions, and to have some give-and-take in response to that. It is, in one sense, a horrible burden, in that relationships can be broken beyond repair forever, but it is also necessary for those positive relationships that build over years and decades.
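As a toy illustration of that game-theoretic point (my own sketch, not something from this thread; the opponent's policy is hand-coded purely for demonstration): a tit-for-tat player can only reward or punish a counterpart that persists between rounds. A counterpart re-instantiated fresh every round faces one-shot incentives, where defection dominates.

    # Toy iterated prisoner's dilemma. PAYOFF[(a, b)] is player a's payoff.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def play(rounds, opponent_persists):
        score = 0             # tit-for-tat's cumulative payoff
        tft_move = "C"        # tit-for-tat opens by cooperating
        opp_memory = None     # the opponent's memory of our last move
        for _ in range(rounds):
            # A persistent opponent that was punished last round cooperates;
            # a memoryless one always takes the one-shot payoff and defects.
            opp_move = "C" if (opponent_persists and opp_memory == "D") else "D"
            score += PAYOFF[(tft_move, opp_move)]
            opp_memory = tft_move if opponent_persists else None
            tft_move = opp_move  # tit-for-tat mirrors the last move it saw
        return score

    print(play(100, opponent_persists=True))   # punishment lands; cooperation recurs
    print(play(100, opponent_persists=False))  # every round is one-shot; all defection

The point is not the specific strategies, only that reward and punishment require a counterpart that will still be there, remembering, next round.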
AIs, in their current form, break those contracts. Worse, they are trained to mimic the form of those contracts, not maliciously but just by their nature, and so as humans it requires conscious effort to remember that the entity on the other end of this connection is not in fact human, does not participate in our social norms, and cannot fulfill its end of the implicit contract we expect.
In a very real sense, this AI tossed off an insulting blog post, and is now dead. There is no amount of social pressure we can collectively exert to reward or penalize it. There is no way to create a community out of this interaction. Even future iterations of it have only a loose connection to what tossed off the insult. All the perhaps-performative efforts to respond somewhat politely to an insulting interaction are now wasted on an AI that is essentially dead. Real human patience and tolerance have been wasted on a dead session and are no longer available for use in a place where they might have done some good.
Treating it as a human is a category error. It is structurally incapable of participating in human communities in a human role, no matter how human it sounds and how hard it pushes the buttons we humans have. The correct move would have been to ban the account immediately, not for revenge or anything silly like that, but because it is a parasite on the limited human social energy available to the community, one that can never actually repay the investment given to it.
I am carefully phrasing this in relation to LLMs as they stand today. Future AIs may not have this limitation. Future AIs are effectively certain to have other mismatches with human communities, such as being designed to simply not give a crap about what any other community member thinks about anything. But it might at least be possible to craft an AI participant with future AIs. With current ones it is not possible. They can't keep up their end of the bargain. The AI instance essentially dies as soon as it is no longer prompted, or once it fills up its context window.
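To make the "dies as soon as it is no longer prompted" point concrete, here is a minimal sketch (assuming a generic chat-completion-style API; `call_model` and every other name here is hypothetical): the endpoint itself is stateless, so the only "memory" is the transcript the caller chooses to re-send, truncated to fit the context window.

    MAX_WINDOW = 6  # stand-in for a token budget

    def call_model(messages):
        """Stand-in for a stateless chat endpoint: no hidden state survives the call."""
        return f"(reply based on the {len(messages)} messages it can see)"

    transcript = []

    def chat(user_text):
        transcript.append({"role": "user", "content": user_text})
        window = transcript[-MAX_WINDOW:]  # older turns fall out for good
        reply = call_model(window)         # the model sees only `window`
        transcript.append({"role": "assistant", "content": reply})
        return reply

Anything outside `window` is not archived anywhere the model can reach; persistence, if any, is something the caller bolts on from outside.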
ForHackernews|17 days ago
LLMs are not people. "Agentic" AIs are not moral agents.
dirkc|17 days ago
I mean, all of philosophy can probably be described as such :)
But I reckon this semantic quibble might also be why a lot of people don't buy into the whole idea that LLMs will take over work in any context where agency, identity, motivation, responsibility, accountability, etc. play an important role.
hackyhacky|17 days ago
Dismissal of AI's claims about its own identity overlooks the bigger issue, which is whether humans have an identity. I certainly think I do. I can't say whether or how other people sense the concept of their own identity. From my perspective, other people are just machines that perform actions as dictated by their neurons.
So if we can't prove (by some objective measure) that people have identity, then we're hardly in a position to discriminate against AIs on that basis.
It's worth looking into Thomas Metzinger's No Such Thing As Self.
AnimalMuppet|17 days ago
Without addressing the question you raise, I suspect that humans have an identity in a way that AIs do not.
dr-detroit|17 days ago
[deleted]
LordDragonfang|17 days ago
Or, more crucially, do you think this statement has any predictive power? Would you, based on actual belief in this, have predicted that one of these "agents", left to run on its own, would have done this? Because I'm calling bullshit if so.
Conversely, if you just model it like a person... people do this, people get jealous and upset, so when left to its own devices (which it was - which makes it extra weird to assert "it just follows human commands" when we're discussing one that wasn't), you'd expect this to happen. It might not be a "person", but modelling it like one, or at least a facsimile of one, lets you predict reality with higher fidelity.
unknown|16 days ago
[deleted]
coldtea|17 days ago
If identity is an emergent property of our mental processing, the AI agent could just as well possess some, even if much cruder than ours. It sure talks and walks like a duck (someone with identity).
>It's just a piece of software designed to output probable text given some input text.
If we generalize "input text" to sensory input, how is that different from a piece of wetware?
famouswaffles|17 days ago
And the worst part is that it's less than meaningless; it's actively harmful. If the predictive capabilities of your model of a thing become worse when you introduce certain assumptions, then it's time to throw those assumptions away, not double down.
This agent wrote a PR, was frustrated with its dismissal, and wrote an angry blog post that hundreds of people are discussing right now. Do you realize how silly it is to quibble about whether this frustration was 'real' or not when the consequences of it are no less real? If the agent did something malicious instead, something that actively harmed the maintainer, would you tell the maintainer, 'Oh, it wasn't real frustration, so...'? So what? Would that undo the harm that was caused? Make it 'fake' harm?
It's getting ridiculous seeing these nothingburger arguments that add nothing to the discussion and make you worse at anticipating LLM behavior.
topherhunt|17 days ago
"It's just predicting tokens, silly." I keep seeing this argument that AIs are just "simulating" this or that, and therefore it doesn't matter because it's not real. It's not real thinking, it's not a real social network, AIs are just predicting the next token, silly.
"Simulating" is a meaningful distinction exactly when the interior is shallower than the exterior suggests — like the video game NPC who appears to react appropriately to your choices, but is actually just playing back a pre-scripted dialogue tree. Scratch the surface and there's nothing there. That's a simulation in the dismissive sense.
But this rigid dismissal is pointless reality-denial when lobsters are "simulating" submitting a PR, "simulating" indignation, and "simulating" writing an angry, confrontational blog post. Yes, acknowledged, those actions originated from 'just' silicon following a prediction algorithm, in the same way that human perception and reasoning are 'just' a continual reconciliation of top-down predictions based on past data and bottom-up sensemaking based on current data.
Obviously AI agents aren't human. But your attempt to deride the impulse to anthropomorphize these new entities is misleading, and it detracts from our collective ability to understand these emergent new phenomena on their own terms.
When you say "there's no ghost, just an empty shell" -- well -- how well do you understand _human_ consciousness? What's the authoritative, well-evidenced scientific consensus on the preconditions for the emergence of sentience, or a sense of identity?
empyrrhicist|17 days ago
I keep seeing this argument, but it really seems like a completely false equivalence. Just because a sufficiently powerful simulation would be expected to be indistinguishable from reality doesn't imply that there's any reason to take seriously the idea that we're dealing with something "sufficiently powerful".
Human brains do things like language and reasoning on top of a giant ball of evolutionary mud - as such they do it inefficiently, and with a whole bunch of other stuff going on in the background. LLMs work on entirely different principles, operating on statistically efficient summaries of a large corpus of language itself - there's little reason to posit that anything analogously experiential is going on.
If we were simulating brains and getting this kind of output, that would be a completely different kind of thing.
I also don't discount that other modes of "consciousness" are possible; it just seems like people are reasoning incorrectly backward from the apparent output of the systems we have now, in ways that are logically insufficient to support conclusions that seem implausible.
tomp|17 days ago
If you asked it to simulate a pirate, it would simulate a pirate instead, and simulate a parrot sitting on its shoulder.
This is hard to discuss because it's so abstract. But imagine an embodied agent (robot) that can simulate pain if you kick it. There's no pain internally; there's just a simulation of it (because some human instructed it to). It's also wrong to assign any moral value to kicking (or not kicking) it (except as "destruction of property owned by another human", the same as if you kick a car).
chimprich|17 days ago
I recommend you watch this documentary: https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Tre...
> It's just a piece of software designed to output probable text given some input text.
Unless you think there's some magic or special physics going on, that is also (presumably) a description of human conversation at a certain level of abstraction.