anonymous_sorry|4 months ago
Using emotive, anthropomorphic language about a software tool is unhelpful, in this case at least. Better to think of it as a mentally disturbed minor who found a way to work around a tool's safety features.
We can debate whether the safety features are sufficient, whether it is possible to completely protect a user intent on harming themselves, whether the tool should be provided to children, etc.
wongarsu|4 months ago
And while Merriam-Webster's definition is "the act of causing someone to accept as true or valid what is false or invalid", which might exclude LLMs, Oxford simply defines deception as "the act of hiding the truth, especially to get an advantage" — no requirement that the deceived be sentient.
lxgr|4 months ago
At some point, the purely reductionist view stops being very useful.
anonymous_sorry|4 months ago
And "lying" to it is not morally equivalent to lying to a human.
usefulcat|4 months ago
OK, I'm with you so far...
> Better to think of it as a mentally disturbed minor...
Proceeds to use emotive, anthropomorphic language about a software tool...
Or perhaps that is the point and I got whooshed. Either way, I found it humorous!
anonymous_sorry|4 months ago
Another factor is that this is a new technology, poorly understood by the public at least, that giant corporations make available to minors. In ChatGPT's case, they require parental consent, although I have no idea how well they enforce that.
But I also don't think the manufacturer is solely responsible, and to be honest I'm not that interested in assigning blame — just keen that lessons are learned.