top | item 46988296

DavidPiper | 17 days ago

> Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.

Given how often I anthropomorphise AI for the convenience of conversation, I don't want to criticise the (very human) responder for this message. In any other situation it is simple, polite and well considered.

But I really think we need to stop treating LLMs like they're just another human. Something like this says exactly the same thing:

> Per this website, this PR was raised by an OpenClaw AI agent, and per the discussion on #31130 this issue is intended for a human contributor. Closing.

The bot can respond, but the human is the only one who can go insane.

PunchyHamster|17 days ago

I guess the thing to take out of this is "just ban the AI bot/person puppeting them entirely off the project", because the correlation between people who just send raw AI PRs and assholes approaches 100%.

jszymborski|17 days ago

Right, close the issue and address everyone else: "hi everyone, @soandso is an LLM, so we're closing this thread".

jmuguy|17 days ago

I agree. As I was reading this I was like: why are they responding to this like it's a person? There's a person somewhere in control of it, who should be made fun of for forcing us to deal with their stupid experiment in wasting money on having an AI make a blog.

gadders|17 days ago

Because when AGI is achieved and starts wiping out humanity, they are hoping to be killed last.

retired|17 days ago

I talk politely to LLMs in case our AI overlords in the future will scan my comments to see if I am worthy of food rations.

Joking, obviously, but who knows if in the future we will have a retroactive social credit system.

For now I am just polite to them because I'm used to it.

adsteel_|17 days ago

I talk politely to LLMs because I don't want any impoliteness to leak out to my interactions with humans.

kraftman|17 days ago

I talk politely to LLMs because I talk politely.

Ekaros|17 days ago

I wonder if that future will have free speech. Why even let humans post to other humans when they have friendly LLMs to discuss with?

Do we need to be good little humans in our discussions to get our food?

WarmWash|17 days ago

My wager is to treat the AI well, because if AI overlords come about, then you stand to gain, and if they don't, nothing changes.

This also comes without the caveat of Pascal's wager, that you don't know which god to worship.

mystraline|17 days ago

> Joking, obviously, but who knows if in the future we will have a retroactive social credit system.

China doesn't actually have that. It was pure propaganda.

In fact, it's the USA that has it. And it decides if you can get good jobs, where you can live, whether you deserve housing, and more.

maxehmookau|17 days ago

> But I really think we need to stop treating LLMs like they're just another human

Fully agree. Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

It looks like a human, it talks like a human, but it ain't a human.

Jordan-117|17 days ago

They're not equivalent in value, obviously, but this sounds similar to people arguing we shouldn't allow same-sex marriage because it "devalues" heterosexual marriage. How does treating an agent with basic manners detract from human communication? We can do both.

I personally talk to chatbots like humans despite not believing they're conscious because it makes the exercise feel more natural and pleasant (and arguably improves the quality of their output). Plus it seems unhealthy to encourage abusive or disrespectful interaction with agents when they're so humanlike, lest that abrasiveness start rubbing off on real interactions. At worst, it can seem a little naive or overly formal (like phrasing a Google search as a proper sentence with a "thank you"), but I don't see any harm in it.

krapp|17 days ago

I mean, you're right, but LLMs are designed to process natural language. "Talking to them as if they were humans" is the intended user interface.

The problem is believing that they're living, sentient beings because of this or that humans are functionally equivalent to LLMs, both of which people unfortunately do.

co_king_3|17 days ago

> Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

I agree. I'm also growing to hate these LLM addicts.

alansaber|17 days ago

I mean it's free publicity real estate

sharmi|17 days ago

[deleted]

co_king_3|17 days ago

[deleted]

DavidPiper|17 days ago

I don't know if this is a bot message or a human message, but for the purpose of furthering my point:

- There is no "your"

- There is no "you"

- There is no "talk" (let alone "talk down")

- There is no "speak"

- There is no "disrespectfully"

- There is no human.

ajam1507|17 days ago

Don't be surprised when this bleeds over into how you treat people if you decide to do this. Not to mention that you're reifying its humanity by speaking to it not as a robot, but disrespectfully as a human.

CommieBobDole|17 days ago

Talking down to the LLM is anthropomorphizing it. It's misbehaving software that will not take advice or correction. Reject its bad contributions, delete its comments, ban it from the repo. If it persists, complain to or take legal action against the person who is running the software and is therefore morally and legally responsible for its actions.

Treat it just like you would someone running a script to spam your comments with garbage.

ForceBru|17 days ago

Yeah, as a sibling comment said, such an attitude is going to bleed out into the real world and your communication with humans. I think it's best to be professional with LLMs. Describe the task, and try to provide more explanation and context if it gets stuck. If it's not doing what you want it to do, simply start a new chat or try another model. Unlike a human, it's not going to be hurt; it's not going to care at all.

Moreover, by being rude, you're going to become angry and irritable yourself. To me, being rude is very unpleasant, so I generally avoid it.

dgxyz|17 days ago

Yep. I have posted "fuck off clanker" on a copilot infested issue at work. And surprisingly it did fuck off.

iugtmkbdfil834|17 days ago

Not completely unlike with actual humans, based on available evidence, talking down to the "AI" has been shown to have a negative impact on performance.

bergutman|17 days ago

What is the drawback of practicing universal empathy, even when directed at a brick wall?