thethethethe | 10 months ago

I know someone who is going through a rapidly escalating psychotic break right now, spending a lot of time talking to ChatGPT, and it seems like this "glazing" update has definitely not been helping.

Safety of these AI systems is about much more than blocking instructions for making bombs. There must be many, many people with mental health issues relying on AI for validation, ideas, therapy, etc. This could be a good thing, but if an AI becomes misaligned the way ChatGPT has, bad situations could get worse. I mean, look at this screenshot: https://www.reddit.com/r/artificial/s/lVAVyCFNki

This is genuinely horrifying, knowing someone in an incredibly precarious and dangerous situation is using this software right now. I will not be recommending ChatGPT over Claude or Gemini to anyone at this point.

occamsrazorwit | 10 months ago

I know someone in the Bay Area AI-adjacent community who went through exactly that kind of rapidly escalating psychotic break, in a highly visible and well-documented fashion. It started last year, and he's now in jail. The risk only increases from here :/