top | item 45490669

anonymous_sorry | 4 months ago

You can't "deceive" an LLM. It's not like lying to a person. It's not a person.

Using emotive, anthropomorphic language about a software tool is unhelpful, in this case at least. Better to think of it as a mentally disturbed minor who found a way to work around a tool's safety features.

We can debate whether the safety features are sufficient, whether it is possible to completely protect a user intent on harming themselves, whether the tool should be provided to children, etc.

wongarsu|4 months ago

I don't think deception requires the other side to be sentient. You can deceive a speed camera.

And while Merriam-Webster's definition is "the act of causing someone to accept as true or valid what is false or invalid", which might exclude LLMs, Oxford simply defines deception as "the act of hiding the truth, especially to get an advantage", with no requirement that the deceived be sentient.

anonymous_sorry|4 months ago

Mayyybe, but since the comment I objected to also used an analogy of lying to a person, I felt it suggested some unwanted moral judgement (of a suicidal teenager).

lxgr|4 months ago

It's at least pretending to be a person, to which you can lie and which will then pretend to possibly suspect you're lying.

At some point, the purely reductionist view stops being very useful.

anonymous_sorry|4 months ago

I mean, for one thing, a commercial LLM exists as a product designed to make a profit. It can be improved, otherwise modified, restricted or legally terminated.

And "lying" to it is not morally equivalent to lying to a human.

usefulcat|4 months ago

> Using emotive, anthropomorphic language about a software tool is unhelpful, in this case at least.

Ok, I'm with you so far...

> Better to think of it as a mentally disturbed minor...

Proceeds to use emotive, anthropomorphic language about a software tool...

Or perhaps that is the point and I got whooshed. Either way I found it humorous!

8note|4 months ago

The whoosh is that they are describing the human operator, a "mentally disturbed minor", and not the LLM. The human has the agency and specifically bypassed the guardrails.

jdietrich|4 months ago

To treat the machine as a machine: it's like complaining that cars are dangerous because someone deliberately drove into a concrete wall. Misusing a product with the specific intent of causing yourself harm doesn't necessarily remove all liability from the manufacturer, but it radically changes the burden of responsibility.

anonymous_sorry|4 months ago

That's certainly a reasonable argument.

Another is that this is a new and poorly understood (by the public at least) technology that giant corporations make available to minors. In ChatGPT's case, they require parental consent, although I have no idea how well they enforce that.

But I also don't think the manufacturer is solely responsible, and to be honest I'm not that interested in assigning blame, just keen that lessons are learned.