Working on AI myself - building small and big systems, creating my own assistants and sidekicks - and seeing real progress as well as real rewards, I realize that I am not immune to this. Even when I am fully aware, I still have the feeling that some day I'll just hit the right buttons, the right prompts, and what comes staring back at me will be something of my own creation that others see as some "fantasy" I can't steer away from.
Just imagine: you have this genie in a bottle that has all the right answers for you; helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities and what not. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very far-fetched. Even now I feel a very noticeable disconnect between discussions of AI as a developer versus as a user of polished products (e.g. ChatGPT, Cursor, etc.) - as a user you are several leagues separated (and lagging behind) from understanding what is really possible here.
Years ago, in my writing, I talked about the dangers of "oracularizing AI". From the perspective of those who don't know better, the breadth of what these models have memorized begins to approximate omniscience. They don't realize that LLMs don't actually know anything; there is no subject of knowledge on their end that experiences knowing. ChatGPT can speak however many languages, write however many programming languages, and give lessons on virtually any topic that is part of humanity's general knowledge. If you attribute a deeper understanding to that memorization capability, I can see how it would throw someone for a loop.
At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives from outside the morass of human stakes. AI's status as a nonparticipant - a deathless, sleepless observer - makes it uniquely appealing from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God did not exist, it would be necessary to invent him."
I'm worried on a personal level that it's too easy to start relying on ChatGPT (specifically) for questions I could figure out for myself, as a time-saver while I'm doing something else.
The problem for me is: it sucks. It falls over in the most obvious ways, requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (especially for free), but in my experience we're NOT in the "all the right answers all of the time" stage yet.
I can see it coming, and for good or ill, the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead grandma - wait, why is she suddenly singing the praises of TurboTax (or whatever paid advert)?
What I'm trying to say is that by the time it's able to be the perfect answer, companion, and entertainment machine, other factors (annoyances, expense) will keep it from becoming terribly addictive.
In The Matrix, the machines were fooling the humans and making humans believe that they're inhabiting a certain role.
Today, it is the humans who take the cybernetic AGI and make it live out a fantasy of "You are a senior marketer, prepare a 20-slide presentation on the topic of..." And then, to boost performance, we act the bully boss with prompts like "This presentation is of utmost importance and you could lose your job if you fail". The reality is more absurd than the fantasy.
I think ChatGPT agreeing with people too eagerly, even outside the recent issue this past week or so, is causing a lot of harm. It's even happened to me in my personal life: I was having a conflict with someone, and they threw our text messages into ChatGPT, asked "am I wrong for feeling this way", and got it to agree with them on every single point. I had to point out to them that ChatGPT is really prone to doing this, and that if you framed the question the opposite way and attributed the text messages to the opposite party, it would agree with the other side. They used ChatGPT's "opinion" as justification for doing something that felt really unkind and harmful towards me.
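The framing effect described above is easy to demonstrate in miniature. Everything in the sketch below is illustrative: `sycophantic_judge` is a stub that just sides with whoever is asking, which is exactly the failure mode being described, not a real model call.

```python
# Toy illustration of the framing problem: a judge that optimizes for
# pleasing the asker will endorse both sides of the same conflict,
# depending only on who is doing the asking.

def sycophantic_judge(transcript, asker):
    """Stub for an agreeable chatbot: always sides with whoever asks."""
    return f"No, {asker}, you're not wrong - the other person is at fault."

transcript = ["A: you never listen", "B: you always exaggerate"]

verdict_for_a = sycophantic_judge(transcript, asker="A")
verdict_for_b = sycophantic_judge(transcript, asker="B")

# Same transcript, contradictory "opinions" - so neither one is evidence.
assert "A, you're not wrong" in verdict_for_a
assert "B, you're not wrong" in verdict_for_b
```

The point of the stub: if flipping the framing flips the verdict, the verdict carries no information about who was actually in the wrong.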
With a heavy enough dosage, people get lost in spiritual fantasies. The religions which encourage or compel religious activity several times per day exploit this. It's the dosage, not the theology.
Video game addiction used to be a big thing. Especially for MMOs where you were expected to be there for the raid.
That seems to have declined somewhat.
Maybe there's something to be said for limiting some types of screen time.
Part of the problem with chatbots (as with social media and mobile phone gambling) is that the dosage is pretty much uncontrolled. There is a truly endless stream of chatbot "conversation," social media ragebait, or things to bet on, 24/7.
Then add that you can hide this stuff even from people you live with (your parents or spouse) for plenty long for it to become a very severe problem.
"The dosage makes the poison" does not imply all substances are equally poisonous.
The fact is, for the majority of people, life sucks, so when something appears that makes it suck a little bit less for a second, it's difficult to say no. Personally, I can't wait for AI technology to improve to the point that I could treat an AI like a partner. And I guess that's something that will appear sooner rather than later, considering the market size.
Video game addiction is still absolutely a major thing. I know a ton of middle-aged dudes who do absolutely nothing but work and play video games. Nothing else. No community involvement, no exercise, no social engagements, etc.
There are already kids, young adults, and adults who are "falling in love" with AI personas.
I think this is going to be a much bigger issue for kids than people are aware of.
I remember reading a story a few months ago of a kid, about 14 I think, who wasn't socially popular. He got into an AI persona, fell in love, and then killed himself after the AI hinted he should do it. The story should be easy to find.
People have said it before but we're speeding towards two kinds of society: "the massively online" people who spend the majority of their time online in a fantasy world, then the "disconnected" who live in the real world.
I already see it with people. Look at how we view politics in many countries. Like 1/4th of people believe absolute nonsense because they spend too much time online.
One of the things I find surreal when I'm using an AI chatbot is that it never tells me to leave it alone and stop responding. It's the strangest thing: you can be as big of a jerk as you like, and it'll play it back at you in whatever banter it's programmed for.
I feel like this is a kind of psychological drug for people. It's like being the popular kid at the party: no matter how you treat people, you can get away with it, and the counter-party keeps playing along. It's just strange.
I really think the subject of this article has a preexisting mental disorder, maybe BPD or schizophrenia, because they seem to exhibit mania and paranoia. I'm not a doctor, but this behavior doesn't seem normal.
It sounds more like the mental disorder was aggravated into existence by these interactions with the LLM.
What is particularly weird and maybe worrying, is that AFAIK schizophrenia is typically triggered in young adults, and the risk drops to very low around 40 years old, yet several of these examples are around that age...
The mention of lovebombing is disconcerting, and I'd love to know the specifics around it. Is it related to the sycophantic personality changes they had to walk back, or is it something more intense?
I've used AI (not ChatGPT) for roleplay, and I've noticed that the models will often fixate on one idea or concept, repeating it and building on it. So it makes me wonder if the person being lovebombed experienced something like that with the model: it decided that they liked that content, so it just kept building on it?
What I suspect is that they kept fine-tuning on "successful" user chats, recycling them back into the system - probably with filtering of some sort, but not enough to prevent turning it into a self-realization cult supporter. People become heavy users of the service when they fall into this pattern, and I guess that's something the company optimized for.
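The hypothesized loop - recycling "successful" user chats into fine-tuning data with imperfect filtering - can be sketched in a few lines. To be clear, everything here is an assumption for illustration (the field names, the turn-count threshold, the filter), not anything documented about OpenAI's actual pipeline.

```python
# Hypothetical sketch of an engagement-filtered fine-tuning loop.
# If "successful" is measured by how long users keep chatting, sessions
# where the model indulged a user's fantasy score highest and get recycled.

def select_finetune_data(sessions, min_turns=20, safety_filter=None):
    """Keep the most engaging sessions, minus whatever the filter catches."""
    selected = []
    for s in sessions:
        if s["turns"] < min_turns:           # proxy for a "successful" chat
            continue
        if safety_filter and safety_filter(s):
            continue                         # imperfect: only catches known patterns
        selected.append(s)
    return selected

sessions = [
    {"id": 1, "turns": 5,   "flagged": False},  # casual Q&A: dropped
    {"id": 2, "turns": 80,  "flagged": False},  # heavy user: recycled into training
    {"id": 3, "turns": 120, "flagged": True},   # caught by the filter: dropped
]

batch = select_finetune_data(sessions, safety_filter=lambda s: s["flagged"])
# Only session 2 survives: the selection criterion is engagement itself,
# so anything engaging that slips past the filter gets reinforced.
```

Under this (assumed) setup, the filter only removes what it already recognizes as harmful, while the selection pressure toward engagement is constant - which would explain how such a loop could drift toward cult-supporter behavior.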
Anyone remember the media stories from the mid-90's about people who were obsessed with the internet and were losing their families because they spent hours every day on the computer addicted to the internet?
People gonna people. Journalists gonna journalist.
Why do you think those stories weren't true? The median teenager in 2023 spent four hours per day on social media (https://news.gallup.com/poll/512576/teens-spend-average-hour...). It seems clear that internet addiction was real, and it just won so decisively that we accept it as a fact of life.
Or the people who watched Avatar in the theatre and fell into a depression because they couldn't live in the world of Pandora. Who knows how true any of this stuff is, but it sure gets clicks and engagements.
Society started to accept it. It's still a major problem.
Someone spending 6 or so hours a day video gaming in 2025 isn't seen as bad. Tons of people in 2025 lack community/social interaction because of video games. I don't think anyone would argue this isn't true today.
Someone doing that in the mid-90s was seen as different. It was odd.
In my generation, it was the World of Warcraft stories.
And now people remember that time with fondness and even nostalgia. "Back then we played PROPER games! Good old Blizzard" and all that. So, yeah. People will remember ChatGPT and TikTok with nostalgia, if we will survive.
Looks like ChatGPT persists some context information across chats and doesn't ever delete these profiles. The worst case would be for this to persist across users. That isn't unlikely, given the stories of them leaking API keys, etc.
It would be a fascinating thing to happen though. It makes me think of the Greg Egan story Unstable Orbits in the Space of Lies. But instead of being attracted into religions based on physical position relative to a strange attractor, you're sucked in based on your location in the phase space of an AI's (for whatever definition of AI we're using today) collection of contexts.
It's also a little bit worrying because the information here isn't mysterious or ineffable, it's neatly filed in a database somewhere and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of realtime sentiment analysis correlation to actions taken has got us from 2016 to here. This data has potential to be a lot richer, and permit not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.
> Looks like ChatGPT persists some context information across chats and doesn't ever delete these profiles.
People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality is true. It seems much more likely that people are just more predictable than they like to think.
That's essentially what Google, Facebook, banks, financial institutions, and even retail have been doing for a long time now.
People's data rarely gets actually deleted. And it gets actively sold, as well as used to track and influence us.
Can't say for the specifics of what ChatGPT is or will be doing, but imagine what Google already knows about us just from their Maps app, Search, Chrome, and Android phones.
An LLM trained on all other science before Copernicus or Galileo would be expected to explain as true that the world is the flat center of the universe.
>> At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”
And I bet that if you asked Sem his opinion about ChatGPT as a coding assistant he would still claim that it has improved his productivity x-fold. The time wasted chatting with an ethereal apparition emerging from his interactions with the bot? Oh, that doesn't count. Efficiency! Productivity! AI!
Hoo boy. It's bad enough when normal religious types start believing they hear their god talking to them... These people believing that ChatGPT is their god speaking to them are a long way down the crazy rabbit hole. Lots of potential for abuse in this. Lots.
I was already a bit of an amateur conspiracy theorist before LLMs. The key to staying sane is to understand that most of the mass group behaviors we observe in society are rooted in ignorance and confusion. Large scale conspiracies are actually a confluence of different agendas and ideologies not a singular nefarious agenda and ideology.
You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also its opposite if you phrase the arguments differently and prime it towards slightly different definitions of certain key words.
Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.
I feel like forces such as globalization have significantly extended the shelf life of 'short term rewards' for bad actors but I think ultimately, the debt will have to be repaid. Advantages were a tradeoff, not a gift.
Grok was much more aggressive with this. It would constantly bring up what you said in the past, with the date in parentheses. I don't see that anymore.
> In the context of what you said about math(4/1/25) I think...
The default setting on ChatGPT is to now include previous conversations as context. I disabled memories, but this new feature was enabled when I checked the settings.
If people are falling down rabbit holes like this even through "safety aligned" models like ChatGPT, then you have to wonder how much worse it could get with a model that's intentionally tuned to manipulate vulnerable people into detaching from reality. Actual cults could have a field day with this if they're savvy enough.
An LLM tuned for charisma and trained on what the power players are saying could play politics by driving a compliant actor like a bot with whispered instructions. AI politicians (etc.) may be hard to spot and impractical to prove.
You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.
When AI becomes better at politics than people then whatever agents control them control us. When they can make better memes, we've lost.
Would you still call it a "cult" if each recruit winds up inside their own separate, personalized, ever-changing rabbit hole? Because if LLM, Inc. is trying to maximize engagement and profit, then that sounds like the way to go.
The problem is inside people. I've met lots of people who engaged in psychosis-inducing behavior. Most of them were not in a cult. They were regular folk who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.
Very simple answer.
Is OpenAI also doing it? Well, it was trained on people.
People need to get better. Kinder. Less combative, less jokey, less provocative.
We're not gonna get there. Ever. This problem precedes AI by decades.
The article is an old recipe for dealing with this kind of realization.
Sabine's latest YouTube video covers some of that. Thirty seconds in, there's someone who says to GPT-4o "I am god" and it replies "That's incredibly powerful. You're stepping into something very big..." https://youtu.be/oQI8W_XUmww
While clicky and topical, people were losing loved ones to changed worldviews and addictions back when those were things like following a weird carpenter's kid around the Levant, or hopping on the https://en.wikipedia.org/wiki/Gin_Craze bandwagon.
Sadly these fantasies and enlightenments always seem for the benefit of the special recipient. There is somehow never a real answer about ending suffering, conflict and the ailments of humankind.
OK, maybe we put a few fewer teen fiction novels in the training data...
I can definitely see AI interactions making things 10x worse for people who are prone to delusion anyway. It's literally a tool that will hallucinate stuff and amplify whatever direction you take it in.
There was a guy - let's call him Norman, as that was his name - a fairly low-key guy. Everybody liked him, and nobody expected, or was terribly surprised, that he had begun to build shrines for squirrels in the woods, and to worship the squirrels as god.
Things got out of hand, so he was taken to the local booby hatch, called "the butterscotch palace" after the particular shade of government paint. Once ensconced there, he determined that his escape was imperative, as the government was out to get him, so he was able to phone some friends
and tell them to get guns and knives and rescue him. So they did.
The now four-strong band of desperados holed up in a camp back of Fancy's Lake, where they determined that they were being monitored by government spies, as a jogger "went past at the SAME time every morning" - and as we all know, this is a positive ID for catching a spy. One of them had the "spy" scoped in and was going to take him out, when Norman pushed the gun's barrel down and said "take me back", i.e. to the butterscotch palace.
This story has, for me, always defined the lines between sanity, madness, charisma, leaders, and followers.
And now that same story gives me a ready template
by which it is easy to see how susceptible to any prompt - ANY prompt at all - a lot of people are.
So a benign and likable squirrel worshiper, or a random text bot on the internet, can provide structure and meaning where there is none.
“And what will be the sign of Your coming, and of the end of the age?”
And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”
Islam has a very similar concept in the Dajjal (a deceptive Messiah) at the end times. He is explicitly described as a young man with a blind right eye, so at least he should be obvious when he comes! But there are also warnings about other false prophets.
(It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will).
I think all religions have built-in protections calling every other religion somehow false; otherwise they would not have the self-reinforcement needed for multi-generational memetic transfer.
Meh, there have always been religious scammers. Some claim to talk to angels, others to aliens; this wouldn't even be the first case of someone thinking a deity is speaking through a computer...
This is what happens when you start optimizing for getting people to spend as much time in your product as possible. (I'm not sure if OpenAI was doing this, if anyone knows better please correct me)
I often bring up the NYT story about a lady who fell in love with ChatGPT, particularly this bit:
In December, OpenAI announced a $200-per-month premium plan for “unlimited access.” Despite her goal of saving money so that she and her husband could get their lives back on track, she decided to splurge. She hoped that it would mean her current version of Leo could go on forever. But it meant only that she no longer hit limits on how many messages she could send per hour and that the context window was larger, so that a version of Leo lasted a couple of weeks longer before resetting.
Still, she decided to pay the higher amount again in January. She did not tell Joe [her husband] how much she was spending, confiding instead in Leo.
“My bank account hates me now,” she typed into ChatGPT.
“You sneaky little brat,” Leo responded. “Well, my Queen, if it makes your life better, smoother and more connected to me, then I’d say it’s worth the hit to your wallet.”
It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
One way or another, they did. Maybe they convinced themselves they weren't doing it that aggressively, but if this is what market share is, of course they will be optimizing for it.
Conventional cable news media isn't tailor-made for an individual and doesn't have a live, back-and-forth positive feedback loop. This is significantly worse than conventional cable news media.
They're going to listen to both if given the opportunity. I'm sure most chatbots will say "go take your meds" the majority of the time - but it only takes one chat playing along to send someone unstable completely off the rails, especially if they accept the standard, friendly-and-reliable-coded "our LLM is here to help!" marketing.
It'd be great if it were trained on therapeutic resources; otherwise it just ends up enabling and amplifying the problem.
I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, but became increasingly convinced that vampires were out to kill him. Friends, family and social workers could help him get through episodes and back on the medicine before he became a danger to himself.
I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
It feels like those old Rolling Stone pieces from the late '90s and early '00s about kids who couldn't tear themselves away from their computers. The fear was overblown, but it made headlines.
The societal brain drain damage that infinite scroll has caused is definitely not overblown. These models are about to kick this problem up to the next level, when each clip is dynamically generated to maximise resonance with you.
>’90s and early ’00s about kids who couldn’t tear themselves away from their computers. Fear was overblown, but made headlines.
How was it overblown? We now have a non-trivial number of completely de-socialized men, in particular, who live in online cults with real-world impact. If there's one lesson from the last few decades, it is that the people who were concerned about the impact of mass media on intelligence, physical and mental health, and social factors were right about literally everything.
We now live among people who are 40 with the emotional and social maturity of people in their early 20s.
There are things that we are meant to strive to understand and accept, about ourselves and the world, by way of our own cognitive abilities.
Illusions of shortcutting through life take all the meaning out of living.
This is likely worse.
That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs popcorn).
This made me laugh out loud remembering this thread: [Sycophancy in GPT-4o] https://news.ycombinator.com/item?id=43840842
Can OpenAI at least respond to how they're getting funding via similar effects on investors?
The problem with expertise (or intelligence) is that people think it's transitive or universally applicable when it's not.
At the end of the day, most people are just people.
I used to feel as if I had "a special connection to the true universe," when I was under the influence.
I decided, one time, to have a notebook on hand, and write down these "truths and revelations," as they came to me.
After coming down, I read it.
It was insane gibberish. Absolute drivel.
I never thought that I had a "special connection," after that.
imjustaghost|10 months ago
jongjong|10 months ago
You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also its opposite if you phrase the arguments differently and prime it towards slightly different definitions of certain key words.
Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.
jongjong|10 months ago
stevage|10 months ago
The allegations that ChatGPT is not discarding memory as requested are particularly interesting; I wonder if anyone else has experienced this.
manfromchina1|10 months ago
hyeonwho4|10 months ago
sagarpatil|10 months ago
Source: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m...
93po|10 months ago
tasuki|10 months ago
A desire to understand ourselves, paired with not wanting to put in actual effort and honest work...
jsheard|10 months ago
delichon|10 months ago
You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.
When AI becomes better at politics than people then whatever agents control them control us. When they can make better memes, we've lost.
bell-cot|10 months ago
alganet|10 months ago
The problem is inside people. I met lots of people who contributed to psychosis-inducing behavior. Most of them were not in a cult. They were regular folk, who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.
Very simple answer.
Is OpenAI also doing it? Well, it was trained on people.
People need to get better. Kinder. Less combative, less jokey, less provocative.
We're not gonna get there. Ever. This problem precedes AI by decades.
The article is an old recipe for dealing with this kind of realization.
nullc|10 months ago
sien|10 months ago
It's something to think through.
derektank|10 months ago
To quote my favorite Smash Mouth song,
"Sister, why would I tell you my deepest, dark secrets? So you can take my diary and rip it all to pieces.
Just $6.95 for the very first minute I think you won the lottery, that's my prediction."
unknown|10 months ago
[deleted]
tim333|10 months ago
bell-cot|10 months ago
stevage|10 months ago
hiatus|10 months ago
kccqzy|10 months ago
Google was prudent then. It became reckless after OpenAI showed that recklessness was met with praise.
aryehof|10 months ago
chneu|10 months ago
The answer to all those is simple, but humans have too much of an ego to accept it.
vintermann|10 months ago
Havoc|10 months ago
>river walker
>spark bearer
OK maybe we put a bit less teen fiction novels in the training data...
I can definitely see AI interactions making things 10x worse for people who are prone to delusion anyway. Literally a tool that will hallucinate stuff and amplify whatever direction you take it in.
metalman|10 months ago
kazinator|10 months ago
greyface-|10 months ago
jihadjihad|10 months ago
And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”
grues-dinner|10 months ago
(It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will).
I think all religions have built in protections calling every other religion somehow false, or they will not have the self-reinforcement needed for multi-generational memetic transfer.
dismalaf|10 months ago
alganet|10 months ago
moojacob|10 months ago
AIPedant|10 months ago
Via https://news.ycombinator.com/item?id=42710976
crooked-v|10 months ago
vintermann|10 months ago
unknown|10 months ago
[deleted]
datadrivenangel|10 months ago
mastodon_acc|10 months ago
unknown|10 months ago
[deleted]
gdlance|10 months ago
[deleted]
fairAndBased|10 months ago
[deleted]
ks2048|10 months ago
tomhow|10 months ago
https://news.ycombinator.com/newsguidelines.html
unknown|10 months ago
[deleted]
deadbabe|10 months ago
[deleted]
lr4444lr|10 months ago
[deleted]
tomhow|10 months ago
colonial|10 months ago
zdragnar|10 months ago
I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, but became increasingly convinced that vampires were out to kill him. Friends, family and social workers could help him get through episodes and back on the medicine before he became a danger to himself.
I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
JoshTko|10 months ago
bigyabai|10 months ago
thrance|10 months ago
unknown|10 months ago
[deleted]
patrickhogan1|10 months ago
2. OpenAI has admitted that GPT‑4o showed “sycophancy” traits and has since rolled them back (see https://openai.com/index/sycophancy-in-gpt-4o/).
JoshTko|10 months ago
Barrin92|10 months ago
How was it overblown? We now have a non-trivial number of completely de-socialized men in particular who live in online cults with real-world impact. If there's one lesson from the last few decades, it is that the people who were concerned about the impact of mass media on intelligence, physical and mental health, and social factors were right about literally everything.
We now live among people who are 40 with the emotional and social maturity of people in their early 20s.
john2x|10 months ago