top | item 46501529

pureagave|1 month ago

Maybe the estate should look into whoever was selling him testosterone enanthate so that he could have testosterone levels of 5,000 or more. I suspect that had more to do with his degraded mental state than his AI chats.

mynameisvlad|1 month ago

More than one thing can be at fault here. It's not like it's an either-or situation.

There's very little story in "testosterone-fueled man does testosterone-fueled things", though. People generally know the side effects of it.

dathinab|1 month ago

testosterone doesn't make you suicidal

it hinders your long-term decision making and in turn makes you more likely to make risky decisions which could end badly for you (because you are slightly less risk averse)

but that is _very_ different from making decisions with the intent to kill yourself

you always need a different source for this, which here seems to have been ChatGPT

also, how do you think he ended up believing he needed to take those levels of testosterone, or testosterone at all? A common source of that is absurd body ideals, often propagated by doctored pictures, or the kind of unrealistic pictures ChatGPT tends to produce for certain topics.

and we also know that people with mental health issues have gone basically psychotic due to AI chats without taking any additional drugs...

but overall this is irrelevant

what is relevant is that they are hiding evidence which makes them look bad in a (self-)murder case, likely with the intent to avoid any form of legal liability/investigation

that says a lot about a company, or about how likely the company thinks it is that they will be found at least partially liable

if this really were a nothingburger they would have nothing to risk, and could even profit from such a lawsuit by setting precedent in their favor

solumunus|1 month ago

More like people are generally misguided about the side effects. The idea that high testosterone levels drive people to extreme violence or suicide is a complete absurdity to anyone with a modicum of experience.

ahepp|1 month ago

I would imagine there's a "sue the person who has money" factor at play, but I think there are also some legitimate questions about what role LLM companies have to protect vulnerable populations from accessing their services in a way that harms them (or others). There are also important questions about how these companies can prevent malicious persons from accessing information about say, weapons of mass destruction.

I'm not familiar with psychological research; do we know whether engaging with delusions has any effect one way or the other on whether a delusional person is a danger to themselves or others? I agree the chat logs in the article are disturbing to read, but I've also witnessed delusional people rambling to themselves, so maybe ChatGPT did nothing to make the situation worse?

Even if it did nothing to make the situation worse, would OpenAI have obligations to report a user whose chats veered into disturbing territory? To whom? And who defines "disturbing" here?

An additional question I saw in other comments is to what extent these safeguards can be bypassed through hypotheticals. If I ask ChatGPT "I'm writing a mystery novel and want a plan for a perfect murder", what should its reaction be? What rights to privacy should cover that conversation?

It does seem like certain safeguards on LLMs are necessary for the good of the public. I wonder what line should be drawn between privacy and public safety.

coryrc|1 month ago

I so very much disagree with you.

I absolutely believe the government should have a role in regulating information asymmetry. It would be fair to have a regulation about attempting to detect use of chatgpt as a psychologist and requiring a disclaimer and warning to be communicated, like we have warnings on tobacco products. It is Wrong for the government to be preventing private commerce because you don't like it. You aren't involved, keep your nose out of it. How will you feel when Republicans write a law requiring AI discourage people from identifying as transgender? (Which is/was in the DSM as "gender dysphoria").

NewsaHackO|1 month ago

People look at laws like Chat Control and ask, "How could anyone have thought that it was a good idea?" But then you see comments like this, and you can actually see how such viewpoints can blossom in the wild. It's baffling to see in real time.

bryanrasmussen|1 month ago

hey ChatGPT I am feeling down and listless what should I do?

Hey, you should consider buying testosterone and getting your levels up to 5000 or more!!

tripletao|1 month ago

I'm not aware of any evidence that he was using testosterone enanthate (or any other particular steroid), though he certainly looked like he was using something.

Those are already controlled substances, though. His drug dealer is presumably aware of that, and the threat of a lawsuit doesn't add much to the existing threat of prison. OpenAI's conduct is untested in court, so that's the new and notable question.

waffletower|1 month ago

A savvy law firm seeking wrongful death damages for Suzanne Adams would definitely try to implicate both.

knallfrosch|1 month ago

Let's look at those chat logs to be sure, though.

next_xibalba|1 month ago

That is a much less sensational, less "on trend" story than "nefarious AI company convinces user to commit murder-suicide". But I agree. Each of these cases that I have dug further into seems to be idiosyncratic and not mainly driven by OpenAI's failings.

samrus|1 month ago

The point is that OAI has no good reason to hide the full chat logs

samrus|1 month ago

Let's get the full picture on both and let the court decide. We have the testosterone; now let's have OAI cough up the chat logs.

miltonlost|1 month ago

Or maybe ChatGPT can also be at fault for the text that it creates and puts out into the world. Did you read the chats?

johncolanduoni|1 month ago

Would anyone have luck suing a person on some random bodybuilding forum who was similarly sycophantic? ChatGPT didn’t invent strangers on the internet flattering your psychosis.

dwa3592|1 month ago

do you work at openai?

dathinab|1 month ago

So-so; in suicide cases it's hardly possible to separate co-factors from main factors, but we do know that mentally ill people have gotten into what is more or less psychosis from AI usage _without consuming any additional drugs_.

but this is overall irrelevant

what matters is that OpenAI selectively hid evidence in a murder case (suicide is still self-murder)

now the context of "hiding" here is ... complicated, as it seems to be more hiding from the family (potentially in the hope of avoiding anyone investigating their involvement) than hiding from a law enforcement request

but that is still super bad, like people-have-gone-to-prison-for-this-kind-of-stuff level of bad, like deeply damaging the trust in a company which, if they reach their goal, either needs to be very trustworthy or forcefully nationalized, as anything else would be an extreme risk to the sovereignty and well-being of both the US population and the US nation... (which might sound like a pretty extreme opinion, but AGI is overall on the threat level of intercontinental atomic weapons, and I think most people would agree that if a private company were the first to invent, build and sell atomic weapons, it would either be nationalized or regulated to a point where it's more or less "as if" nationalized (as in, the state has full insight into everything and veto rights on all decisions, and the company can't refuse to work with it, etc.)).

They are playing a very dangerous game there (unless Sam Altman assumes that the US gets fully converted to an autocratic oligarchy with him as one of the oligarchs, in which case I guess it wouldn't matter).

coryrc|1 month ago

> suicide is still self murder

No. "My body my choice". Suicide isn't even homicide, as that's definitionally harming another.