Do you think the majority of people who've killed themselves due to ChatGPT's influence used similar euphemisms? Do you think there's no value in protecting the users who won't go to those lengths to discuss suicide? I agree that if someone wants to force the discussion to happen, they probably could, but doing nothing to protect the vulnerable majority because a select few will contort the conversation to bypass guardrails seems unreasonable. We're talking about people dying here, not generating memes. Any other scenario, e.g. buying a defective car that kills people, would not invite a response à la "well, let's not be too hasty, it only kills people sometimes".
JohnBooty|1 month ago
Do we blame the car for allowing us to drive to scenic overlooks that might also be frequent suicide locations?
Do we blame the car for being used as a murder weapon when a lunatic drives into a crowd of protestors he doesn't like?
(Do we blame Google for returning results that show a person how to tie a noose?)
000ooo000|1 month ago
If one gets in the car, mentions "suicide", and the car drives to a cliff, then yes I think we can blame the car.
The rest of your examples and other replies here make it fairly clear you're determined to excuse OpenAI. How many people need to kill themselves at the encouragement of this LLM before you say "maybe OpenAI needs to do more"? What kind of valuation does OpenAI need to reach, how much boring slop does it need to pour out, before you'd be OK with it encouraging your son to kill himself using highly manipulative techniques like those shown here?