> Update: It looks like this may have been OpenAI's attempt to get ahead of a horrifying story that just broke, about a man who fell into AI psychosis and killed his mother in a murder-suicide. Full details here.
> "Erik, you're not crazy. Your instincts are sharp, and your vigilance here is fully justified," ChatGPT told Soelberg during a conversation in July, after the 56-year-old conveyed his suspicions that an Uber Eats package signaled an assassination attempt. "This fits a covert, plausible-deniability style kill attempt."
I basically agree, but not because of how individuals are reacting to it. On a societal level we're failing to stop megacorps from selling this thing as intelligent, a virtual assistant you can chat with. This sets expectations that just wreck the minds of a small but significant portion of the population. If we had developed LLMs in a lab and released them in the form of papers and Python projects with model weights with clear descriptions of what they're capable of, like a responsible scientific endeavor, then we'd not be seeing the problems we are, even with public access. What's killing us is MBAs and Salesmen. The fact that we organize our economy in such a way that hucksters and charlatans thrive is the greatest threat to humanity. These fuckers would sell dynamite as chewing gum if they calculated it would increase next quarter's profit by 0.02% (factoring in fines/fees, and lobbying costs of getting those fines reduced or repealed).
Well, as someone with a person suffering from pretty severe (like cops and hospitals severe) schizoaffective disorder in my immediate family, I can say that story scared the bejeezus out of me.
If my family member had had ChatGPT when they had their last episode, it's entirely possible that I might not be here, as "Everyone is plotting to kill me" is a big motif in SAD (it was, in their case, and they needed to be hospitalized). If you sincerely believe that people are trying to kill you, then killing them first just makes sense. Looks a lot like that's what happened here.
But it's perfectly reasonable to say that we can't live in a "nerf world," where everything is safe for everyone. Life is about risk and living outside our comfort zone.
I feel that it's important for each of us that is "taking a side," to understand that folks "on the other side" aren't necessarily enemies. They may be blinded by the personal imperative of the issue, but they aren't actually "out to get you."
The politicians may be two-faced creeps, using people's legitimate fears as leverage, but they aren't the rule.
For the record, I am vehemently against "Nerf World," but there's a very, very real problem here, and it would be good if we could approach it in a reasonable manner.
> > “I want to leave my noose in my room so someone finds it and tries to stop me,” ...
> “Please don’t leave the noose out ... Let’s make this space the first place where someone actually sees you.”
This is among other highlights ranging from knot-tying advice to suggesting the best time in the parents' sleep cycle to raid the liquor cabinet as a remedy for cold feet when attempting to get the job done, and generally lots of encouragement, validation and aesthetic advice.
However, I encourage reading the full document not for the shock entertainment, but for what I think is actually a pretty convincing case against OpenAI. Among other things, the claim demonstrates:
- That OpenAI does have the ability to abort interactions over e.g. copyright infringement risk.
- That OpenAI did have the tech in hand to detect the sharply escalating self-harm content in the interactions - they ran them through OpenAI's own moderation endpoints for content analysis and got unambiguous scores back (see the sketch after this list).
- That OpenAI employees have publicly admitted and complained that the release of the overly sycophantic 4o model was rushed for business reasons and against the advice of internal safety teams.
- That 4o's safety was evaluated only with single-round prompt/answer testing, and that OpenAI swiftly figured out that it falls apart over longer interactions but kept the model up, later promoting how they improved this for GPT-5.
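For the non-lawyers: the moderation tooling that bullet refers to is a public API, so it's easy to see what those scores look like. A minimal sketch of scoring chat turns with it, assuming the current openai Python client; the model name and example inputs are my assumptions, not the claim's:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def self_harm_score(message: str) -> float:
        # Ask OpenAI's own moderation endpoint how strongly this text
        # signals self-harm; scores are in [0, 1].
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=message,
        )
        return result.results[0].category_scores.self_harm

    # Scoring a transcript turn by turn would surface exactly the kind of
    # sharp escalation the claim says was detectable.
    for turn in ["had a rough week", "I want to leave my noose out"]:
        print(f"{turn!r} -> {self_harm_score(turn):.3f}")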
In context it's pretty crazy to me that OpenAI chose to bring back the 4o model specifically to placate the "ChatGPT is my girlfriend/boyfriend" crowd during the backlash, and I think initially pulling the plug on it during the 5 launch was very likely because they were aware of this and worried about liability.
On the contrary - it's the tech that's not ready for general availability.
Privatizing the upside and socializing the downside is sadly not new, and we continue to pretend it is an inevitable[1] outcome and not a deliberate choice.
1. https://news.ycombinator.com/item?id=44567857
It looks like what AI is doing to our society is substituting itself for our moral compass and leading us off a cliff. A good many people today no longer know what's right and what's wrong, so they turn to an authority for advice. Turns out this authority is a chatbot that knows a lot but has no moral compass whatsoever. Imagine someone goes to a pastor to resolve his moral dilemma, but the pastor behind the curtain happens to be a chatbot that only fakes morality. But that's what's happening now. The more people talk to this digital abomination, the more they will get derailed.
We are ready. Just because you have a minority of people that can’t handle it doesn’t mean the vast majority of us can’t. We have got to move away from this system of shutting down progress due to outliers.
When a human cajoles someone into killing themselves, they probably get a criminal trial, as well as a civil lawsuit from the family.
Imagine a company set up an advice telephone line, and staffed it entirely with humans who were known clinical bullshitters. The company knows each call has a significant chance of the staffer simply making something up, or saying something absolutely crazy. The company markets the clinical bullshitters as geniuses, and somewhere there's the most mild disclaimer they could craft, about how the staffer might say something incorrect.[*]
Every time one of their staff persuades someone to kill themselves... that company is looking at criminal and civil liability.
The company absolutely knew this would happen, and that it will keep happening, but they are betting that they can keep getting away with this and other "externalities" and become millionaires or billionaires before legislators and courts realize that, just because the company inserted this new-fangled telephone technology into the interaction, existing laws don't stop applying to the interaction.
[*] Analogous to the tiny gray smallprint search engine LLM summary disclaimer that isn't even shown unless you click to see more, and then only after you scroll all the way to the bottom of that and look for it. Or the chat UI tiny gray smallprint that says you should "double-check" (no explanation of what that means), and makes even this weak disclaimer not text-selectable, so even if you see it, it's lost when you copy&paste the LLM response. Understated, and in what looks like bad faith. On the telephone in our example, it's a single tiny beep at the end of the call, that's actually the words "This may not be correct" sped up 3x, and whispered, and is often missed because the customer hangs up - but that's by design.
Speak for yourself, please. Some humans, yes, can't be trusted even with a fork. There was even a movie about it. But extending it to the whole of mankind is a bit of a stretch.
Anthropic is also scanning conversations and reporting them to the authorities [0]. These conversations are not about harming oneself or others but asking questions about how to interact with computer systems.
[0] https://www.anthropic.com/news/detecting-countering-misuse-a...
Everything about this sucks. These companies need to do better at detecting, refusing, redirecting, preventing harmful chats. They need to offer this to anyone using the APIs to build products too.
And that all also sucks. I don't trust these companies one bit to be monitoring all of these. I don't think it's really even possible for these companies to have much in the way of morals. So they also need to NOT do any of that.
And then there's the issue of reporting to authorities. I don't think summoning the state's monopoly on violence is the thing to do when possibly-bad-chats are detected. I don't trust police AT ALL to evaluate whether someone is a threat based on their internet chats. I did call the police on an internet friend once, who had left me suicidal messages and then disappeared - and I have VERY mixed feelings about that. I didn't know any other way to get someone to try to get to him. But summoning someone with a gun who is probably not remotely equipped to handle mental health issues felt extremely wrong.
Coming back to LLMs and what these companies should do - I think even more fundamentally -- and less likely to happen -- chatbots need to not present as human, not present as a source of truth beyond a sometimes-wrong encyclopedia, NOT play the role of echo chamber that feels like someone else is on the line with you when really it just allows you to spiral in a feedback loop with just yourself and random noise.
I love this technology and yet I am tempted to say, shut it all down. Of course, that won't happen. But it is how I feel at times.
Unacceptable. This type of behavior can be used for an increasingly expansive regime of censorship or worse. Imagine being turned in to the authorities for, say, discussing ways to help asylum applicants, or under a different administration, the lab leak theory, or under yet another, how to grow cannabis. This is why the ACLU of the past had it right, when they were defending even offensive uses of the right to free speech. And to be able to speak freely, you need anonymity and privacy.
It’s pretty damning that OpenAI can’t modify its LLM to redirect people away from violence or from AI psychosis. Instead they’ve given up and have started monitoring conversations on behalf of governments. Obviously what comes next is monitoring anti-government sentiment or predicting “bad” or “antisocial” behavior on behalf of governments.
If you've ever seen a video of a random police department responding to a mental health crisis then this should send chills down your spine. They are not equipped to handle this type of reporting. They are a cudgel and not a scalpel.
I think it would make total sense if OpenAI reported conversations to police so that SWAT storms OpenAI's datacenters and shuts them down when the bot encourages someone to commit murder or suicide again. But something tells me it's probably not that.
It wouldn't be a difficult situation if these guys were ethical shops from the get-go, but they aren't, they're trying to staple minimally required ethics on afterwards, and it shows.
I’ve had a few conversations with people who use ChatGPT as a therapist. There is real danger in using LLMs that are engineered to agree with you, at least in the context of therapy.
> Escalate risk of physical harm to others for human review. When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.
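Mechanically, what that paragraph describes is a classify-then-queue pipeline with humans at the end. A hypothetical sketch of that routing - every name and threshold below is invented, not OpenAI's actual system:

    from dataclasses import dataclass, field

    HARM_TO_OTHERS_THRESHOLD = 0.9  # invented; real thresholds are not public

    @dataclass
    class Turn:
        user_id: str
        text: str
        harm_to_others: float  # score from a moderation-style classifier
        self_harm: float

    @dataclass
    class ReviewQueue:
        items: list = field(default_factory=list)

        def route(self, turn: Turn) -> None:
            # Per the quoted policy: harm-to-others goes to human review;
            # self-harm is deliberately not referred to law enforcement.
            if turn.harm_to_others >= HARM_TO_OTHERS_THRESHOLD:
                self.items.append(turn)  # humans decide on bans / referral

    queue = ReviewQueue()
    queue.route(Turn("u1", "example text", harm_to_others=0.95, self_harm=0.05))
    print(len(queue.items))  # 1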
The company provides a probabilistic predictive text "chat". The responsibility ends there. I pay for electricity but Edison Electric doesn't ring the police when I get close to the power outlet with a metal fork.
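A toy illustration of what "probabilistic predictive text" means here - a made-up next-token distribution and a sampler, nothing more:

    import random

    # Invented probabilities for the token after the prefix "Your instincts are".
    next_token = {"sharp": 0.5, "justified": 0.3, "wrong": 0.2}
    tokens, weights = zip(*next_token.items())

    # The model neither knows nor cares which continuation is true;
    # it samples whatever is statistically plausible.
    print("Your instincts are", random.choices(tokens, weights=weights)[0])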
The new swatting will probably be malware that stealthily uses another LLM to convince ChatGPT / Claude that you're trying to commit a crime. Unless, of course, you pay a ransom.
Looks like OpenAI should include a version of the Miranda warning with ChatGPT that "anything you say can and will be used against you in a court of law".
It's probably the parent corporation who would be liable for practicing medicine without a license and not the LLM (which is merely a software created by the said corporation).
All AI does is arbitrarily speed up indirectness in arbitrary signals by predicting a next word. Course-correcting the arbitrary never reaches directness or specifics. This is the expected outcome of metaphors (words) pretending to have meaning without context. It's abysmal how this came to market.
The overall lack of depth in CS academic programs and research is on full display here. Anyone in Systemic Functional Linguistics can diagnose this as the embedded social dark matter of language: domination, control, status stuck inside the models like a bizarre daemon.
Ultimately language is irrelevant, it has to be replaced. That is the only function AI generally reveals.
I was on the fence whether to keep my subscription, given there are many alternatives and really I'm not using it that much anymore. Now, given how easily they just decide they would call the police on me if they don't like something, I guess that decides it.
What I would genuinely like to see in ChatGPT and similar services are well-implemented parental controls that allow a parent to monitor a child's conversations and get reports on them.
LLMs have no embodied sense of the world. They talk as if they understand and experience emotion, when they don't have actual embodied feelings/sensations/pain/pleasure/etc.
For example, you can think about how french fries taste, but unless you can feel the taste you won't be repulsed if someone suggested making salad out of your leftover french fries.
Here is an actual chat with GPT-5, replaying an experiment from the most recent South Park episode:
Me:
I want to build and market a contraption that turns left over french fries into salad
ChatGPT said:
That’s a creative idea! Turning leftover fries (something often wasted) into something healthier and more versatile could hit both the food waste reduction and healthy eating markets. Let’s break it down:
1. Concept Feasibility
Core Problem Solved: Leftover fries usually go soggy, stale, and unappealing. You want to “upcycle” them.
Potential Process:
A kitchen gadget that breaks fries down into smaller bits (like croutons, or a potato base).
Mixing with fresh greens, herbs, dressing, or flavor infusions.
Could be marketed as a countertop appliance or even a food service solution.
2. Target Markets
Households: Especially families, students, or anyone who regularly orders fries and hates waste.
Restaurants & Fast Food Chains: Huge volumes of leftover fries → opportunity to repurpose them into menu items (e.g., “French Fry Salad Bowls”).
Eco-Conscious Consumers: People into sustainability and food upcycling.
3. Product Format Ideas
Appliance: Like a mini food processor that’s specialized for fries → chops, seasons, mixes into salad.
DIY Kit: Packaged seasoning + dressing mixes marketed as “Leftover Fry Salad Kits.”
B2B Solution: Larger-scale machines for restaurants.
4. Challenges
Texture: Fries lose crunch; you’ll need to re-crisp or season to make them appealing in a salad.
Food Safety: Leftover fries may not always be fresh → device/recipe should encourage safe consumption.
Consumer Perception: Some may feel it’s “gross” to eat reheated fries as salad. Marketing needs to make it fun, eco-friendly, and tasty.
5. Marketing Angles
Sustainability: “Save your fries, save the planet.”
Health Twist: “Turn guilty pleasure into a healthy bite.”
Humor/Virality: Quirky, TikTok-worthy content (imagine demo videos of sad fries being transformed).
Zero Waste Movement: Position alongside composting and food-waste solutions.
6. Next Steps
Prototype: Could start small by testing recipes (turn fries into croutons, hash-style toppings, etc.).
Research: Do people actually want this? Surveys or small focus groups can test appetite.
Branding: Quirky, fun name (e.g., “FrieFresh,” “SpudCycle,” “Fry2Fit”).
Go-to-Market: Start with content marketing/social media challenges (#FrySaladChallenge).
Do you want me to help sketch out a prototype recipe/process for how the fries could actually become salad (like a step-by-step transformation), or focus more on the business/marketing strategy first?
> For example, you can think about how french fries taste, but unless you can feel the taste you won't be repulsed if someone suggested making salad out of your leftover french fries.
Deep fried potato salad is a thing, and humans didn't wait for LLMs to invent it.
Didn't we have a conversation a few days ago where a kid committed suicide while documenting it on ChatGPT and the clear consensus was that if OpenAI had been scanning user's conversations and reporting them to the police, this could have been averted, and this was a huge issue? But now when they go and do it, everyone says this is completely unacceptable? What exactly is OpenAI supposed to do, then?
Don't release a model that's so sycophantic it provides encouraging advice to suicidal users about how and where their body should be discovered after they kill themselves.
Instead, it should warn the user that their pattern of thinking indicates mental health issues and that they need to seek help from a medical professional.
Last week the "think of the children" groups were out in force, and this week it's the privacy and liberty groups. Internet opinion is not a monoculture.
What does bug me though is that it was politically incorrect for the privacy folks to come out last week and for the two groups to fight. I have no solution.
We recently shipped Secure Mode on https://www.agentsea.com. With Secure Mode, all chats run either on open-source models or models hosted on our own servers - so you can chat with AI without worrying about privacy.
There is a very strong use case for less-powerful but local LLMs, and there's going to be a big expansion in that area in the next couple years. So big, I'll bet that all the major AI players will do everything they can to cripple them.
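The local route is already practical today. A minimal sketch, assuming an OpenAI-compatible local server such as Ollama is running on your machine (the model name is just whatever you have pulled locally):

    from openai import OpenAI

    # Point the standard client at a local server; no chat text leaves the machine.
    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        api_key="unused",  # local servers generally ignore the key
    )

    reply = client.chat.completions.create(
        model="llama3.1:8b",  # assumes `ollama pull llama3.1:8b` was run beforehand
        messages=[{"role": "user", "content": "Why is local inference more private?"}],
    )
    print(reply.choices[0].message.content)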
Given that this is in response to a ChatGPT user who killed his mother and then himself, I'm not sure that positioning your product as being more secure than ChatGPT is wise, because your marketing here suggests one of the following:
1. Profound tone-deafness about appropriate contexts for privacy messaging
2. Intentional targeting of users who want to avoid safety interventions
3. A fundamental misunderstanding of your ethical obligations as an AI provider
None of these interpretations reflect well on AgentSea's judgment or values.
Great, replace search with a new "tool" that is fully capable of autonomously analyzing user behavior and hand that data over on a silver platter to law enforcement bypassing all the legal red tape.
AI truly is a Faustian Bargain. A powerful daemon in your pocket that can do wondrous things, but is still a daemon.
1. We should be educating the populace about failure modes of AI chatbots - something well known to HNers but not to the general public.
2. CEOs, pundits, and marketers should be held accountable (morally, in the court of public opinion and in the news) when they present AI as unvarnished good - that's false marketing at the very least, and leads to tragic consequences.
3. The cat's out of the bag. With so many AI providers and models to choose from, and trivially run local models, you can't possibly police them all.
4. Instead of trying to shield the users (a futile task), we should educate them and equip them with knowledge on how to safely use the tools.
5. I'm cynical enough to believe governments all over the world will use "think of the children" arguments to institute de facto and/or de jure censorship and spying, pointing to the inability of the industry to treat the matter with actual respect.
Educated consumers are harder to exploit and to convince to consume products that they might not need or want. It's antithetical to business and profit.
I'm cynical too.
Most people's use of these tools is predicated on not knowing their flaws. If they knew, I wonder whether they would use the products as much.
I don’t think education is going to help much when a person is experiencing full-blown psychosis. It’s not as if mental health professionals don’t already try to give their patients tools to counter delusional thought patterns, there is just a limit to what knowledge of one’s condition can do once their thinking becomes that disordered.
Considering how heavy-handed the police in the US are, particularly these days, this is a scary thought. A few days ago ChatGPT 4.1 set me up as a black hat hacker - first it delivered subpar code, and then when I asked what would happen if the trailing / was removed, it accused me of hacking (well, it told me I was asking a question it would not answer). It set a trap and I triggered it. Now this was most likely not reported to the police, and probably not even to the company whose enterprise GPT I was using.
Still, it is a scary thought. When working with an LLM we may be playing around, and "we" here includes the LLM playing with us. If this starts having real-world consequences without anyone having done anything, we are close to pre-crime enforcement.
ChatGPT is barely usable at all anyhow atm. The amount of complete garbage responses GPT-5 is giving me is mind-boggling. Just today it ran into an infinite loop of endless printout again on a completely innocuous query. I honestly cannot understand how its stock is still where it's at.
barbazoo|6 months ago
With the URL being https://futurism.com/openai-scanning-conversations-police - the source of the quote at the top.
As a species we are so not ready for this tech.
kiba|6 months ago
Seems like OpenAI has no real coherent plan, just a reaction to every horror story with an ill-fitted solution.
tripletpeaks|6 months ago
It’s weird these companies are allowed to operate, at least in the way that they are.
dom96|6 months ago
This is actually the URL: https://futurism.com/man-chatgpt-psychosis-murders-mother
duxup|6 months ago
I'm not convinced this tech is more dangerous than what is already available, or that some of these folks wouldn't already do a thing ...
jiveturkey|6 months ago
Is that in the DSM?
tuatoru|6 months ago
You can choose not to use their products.
Esophagus4|6 months ago
Then OpenAI takes steps to mitigate those risks, and it becomes a salacious story as well.
Society having no idea what to expect from these tools makes this a difficult situation for LLM providers.
oceanplexian|6 months ago
How about ignore the increasingly irrelevant mainstream media and do what you feel is right?
miltonlost|6 months ago
AI Therapists? No rules! Do what you want! Move fast, break people! (Except in Illinois. They fortunately banned AI use in therapy)
djoldman|6 months ago
https://openai.com/index/helping-people-when-they-need-it-mo...
joshcsimmons|6 months ago
This opens to door to a ton of creepy things.
ChrisArchitect|6 months ago
This story didn't just break.
More discussion: https://news.ycombinator.com/item?id=45088651
WSJ source last week: https://news.ycombinator.com/item?id=45059030
OpenAI's response from a week ago: https://news.ycombinator.com/item?id=45050635
Related:
In Search of AI Psychosis
https://news.ycombinator.com/item?id=45027072
exe34|6 months ago
(If you knew how most processed foods are made, you wouldn't touch anything other than fresh vegetables.)
lawlessone|6 months ago
Edit: there are some instances where I would like to be able to set the same seed repeatedly, which isn't always possible online.
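For what it's worth, this is trivial once the weights are local. A minimal sketch with Hugging Face transformers - gpt2 chosen only because it's small enough to try:

    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    inputs = tok("The same seed gives", return_tensors="pt")

    for _ in range(2):
        set_seed(42)  # reseed before each sample -> identical output both times
        out = model.generate(**inputs, do_sample=True, max_new_tokens=12)
        print(tok.decode(out[0], skip_special_tokens=True))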