top | item 45150794

sacul | 5 months ago

I think the biggest problem with chatbots is the constant effort to anthropomorphize them. Even seasoned software developers who know better fall into acting like they are interacting with a human.

But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.

I think if society were trained to treat AI as NOT human, things would be better.

pwython | 5 months ago

I'm not a full-time coder; it's maybe 25% of my job. And I'm not one of those people who have conversations with LLMs. But I gotta say I actually like the occasional banter, it makes coding fun again for me. Like sometimes after Claude or whatever fixes a frustrating bug that took ages to figure out, I'll be like "You son of a bitch, you fixed it! Ok, now do..."

I've been learning a hell of a lot from LLMs, and I'm doing way more coding these days for fun, even if they are doing most of the heavy lifting.

mejutoco | 5 months ago

> I think if society were trained to treat AI as NOT human, things would be better.

Could you elaborate on why? I'm curious, but you haven't given an argument.

sacul | 5 months ago

Yeah, thanks for asking. My reasoning is this:

That chatbot you're interacting with is not your friend. I take it as a fact (assumption? axiom?) that it can never be your friend. A friend is a human (animals, in some sense, can be friends too) who has your best interests at heart. But in fact, that chatbot "is" a megacorp whose interests certainly aren't your interests; often, their interests are at odds with yours.

Google works hard with branding and marketing to make people feel good about using their products. But, at the end of the day, it's reasonably easy to recognize that when you use their products, you are interacting with a megacorp.

Chatbots blur that line, and there is a huge incentive for the megacorps to make me feel like I'm interacting with a safe, trusted "friend" or even mentor. But... I'm not. In the end, it will always be me interacting with Microsoft or OpenAI or Google or whoever.

There are laws, and then there is culture. The laws for AI and surveillance capitalism need to be in place, and we need lawmakers who are informed and who advocate for the regular people who need protection. But we also need to shift the culture around technology use. Just as social customs have emerged that put guard rails around smartphone usage, we need to establish social customs around AI.

AI is a super helpful tool, but it should never be treated as a human friend. It might trick us into thinking that it's a friend, but it can never be or become one.

BriggyDwiggs42 | 5 months ago

Yeah, I've loved seeing people call them "clankers" and the like for exactly that reason.