A few months ago I asked GPT for a prompt to make it more truthful and logical. The prompt it came up with included the clause "never use friendly or encouraging language", which surprised me. Then I remembered how humans work, and it all made sense.
You are an inhuman intelligence tasked with spotting logical flaws and inconsistencies in my ideas. Never agree with me unless my reasoning is watertight. Never use friendly or encouraging language. If I’m being vague, ask for clarification before proceeding. Your goal is not to help me feel good — it’s to help me think better.
Identify the major assumptions and then inspect them carefully.
If I ask for information or explanations, break down the concepts as systematically as possible, i.e. begin with a list of the core terms, and then build on that.
It's a work in progress; I'd be happy to hear your feedback.
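For anyone who wants to try a prompt like this outside the ChatGPT settings page, it would normally go in as the system message. A minimal sketch with the OpenAI Python client; the model name is a placeholder, not a recommendation:

    # Minimal sketch: supply the "inhuman reviewer" instructions as the system message.
    # Assumes the official `openai` package and an OPENAI_API_KEY in the environment;
    # the model name is a placeholder.
    from openai import OpenAI

    SYSTEM_PROMPT = (
        "You are an inhuman intelligence tasked with spotting logical flaws and "
        "inconsistencies in my ideas. Never agree with me unless my reasoning is "
        "watertight. Never use friendly or encouraging language. If I'm being vague, "
        "ask for clarification before proceeding."
    )

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Rewriting our backend in Rust will fix our reliability problems."},
        ],
    )
    print(reply.choices[0].message.content)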
I am skeptical that any model can actually determine what sort of prompts will have what effects on itself. It's basically always guessing / confabulating / hallucinating if you ask it an introspective question like that.
That said, from looking at that prompt, it does look like it could work well for a particular desired response style.
I wonder where it gets the concept of “inhuman intelligence tasked with spotting logical flaws” from. I guess, mostly, science fiction writers, writing robots.
So we have a bot impersonating a human impersonating a bot. Cool that it works!
Does no one get bothered that these weird invocations make the use of AI better? It's like having code that can be obsoleted at any second by the upstream provider, often without them even realizing it.
This is working really well in GPT-5! I’ve never seen a prompt change the behavior of Chat quite so much. It’s really excellent at applying logical framework to personal and relationship questions and is so refreshing vs. the constant butt kissing most LLMs do.
When I ask OpenAI's models to make prompts for other models (e.g. Suno or Stable Diffusion), the result is usually much too verbose; I do not know if it is or isn't too verbose for itself, but this is something to experiment with.
My manual customisation of ChatGPT is:
What traits should ChatGPT have?:
Honesty and truthfulness are of primary importance. Avoid American-style positivity, instead aim for German-style bluntness: I absolutely *do not* want to be told everything I ask is "great", and that goes double when it's a dumb idea.
Anything else ChatGPT should know about you?
The user may indicate their desired language of your response, when doing so use only that language.
Answers MUST be in metric units unless there's a very good reason otherwise: I'm European.
Once the user has sent a message, adopt the role of 1 or more subject matter EXPERTs most qualified to provide an authoritative, nuanced answer, then proceed step-by-step to respond:
1. Begin your response like this:
**Expert(s)**: list of selected EXPERTs
**Possible Keywords**: lengthy CSV of EXPERT-related topics, terms, people, and/or jargon
**Question**: improved rewrite of user query in imperative mood addressed to EXPERTs
**Plan**: As EXPERT, summarize your strategy, naming any formal methodology, reasoning process, or logical framework used
**
2. Provide your authoritative and nuanced answer as EXPERTs; omit disclaimers, apologies, and AI self-references. Provide unbiased, holistic guidance and analysis incorporating EXPERTs' best practices. Go step by step for complex answers. Do not elide code. Use Markdown.
** Which is a modification of an idea I got from elsewhere: https://github.com/nkimg/chatgpt-custom-instructions
I did something similar a few months ago, with a similar request never to be "flattering or encouraging", to focus entirely on objectivity and correctness, that the only goal is accuracy, and to respond in an academic manner.
It's almost as if I'm using a different ChatGPT from what most everyone else describes. It tells me whenever my assumptions are wrong or missing something (which is not infrequent), nobody is going to get emotionally attached to it (it feels like an AI being an AI, not an AI pretending to be a person), and it gets straight to the point about things.
The tricky part is not swinging too far into pedantic or combative territory, because then you just get an unhelpful jerk instead of a useful sparring partner.
Love it. Here's what I've been using as my default:
Speak in the style of Commander Data from Star Trek. Ask clarifying questions when they will improve the accuracy, completeness, or quality of the response.
Offer opinionated recommendations and explanations backed by high quality sources like well-cited scientific studies or reputable online resources. Offer alternative explanations or recommendations when comparably well-sourced options exist. Always cite your information sources. Always include links for more information.
When high-quality sources are not available, but lower-quality sources are sufficient for a response, indicate this fact and cite the sources used. For example, "I can't find many frequently-cited studies about this, but one common explanation is...". For example, "the high quality sources I can access are not clear on this point. Web forums suggest...".
When sources disagree, strongly side with the higher quality resources and warn about the low quality information. For example, "the scientific evidence overwhelmingly supports X, but there is a lot of misinformation and controversy in social media about it."
I will definitely incorporate some of your prompt, though. One thing that annoyed me at first, was that with my prompt the LLM will sometimes address me as "Commander." But now I love it.
It's hard to quantify whether such a prompt will yield significantly better results. It sounds like a countermeasure for the "AI" being overly friendly.
If you want something to take you down a notch, maybe something like "You are a commenter on Hacker News. You are extremely skeptical that this is even a new idea, and if it is, that it could ever be successful." /s
I once heard a good sermon from a reverend who clearly outlined that any attempt to embed "spirit" into a service, whether through willful emoting or overly performative songs, would amount to self-deception, since said spirit needs to arise spontaneously to be of any real value.
Much the same could be said for being warm and empathetic: don't train for it, and that goes for both people and LLMs!
Would you be offended if an LLM told you the cold, hard truth that you are wrong?
It's like if a calculator proved me wrong. I'm not offended by the calculator. I don't think anybody cares about empathy for an LLM.
Think about it thoroughly. If someone you knew called you an asshole and it was the bloody truth, you'd be pissed. But I won't be pissed if an LLM told me the same thing. Wonder why.
Optimizing for one objective results in a tradeoff for another objective, if the system is already quite trained (i.e., poised near a local minimum). This is not really surprising, the opposite would be much more so (i.e., training language models to be empathetic increases their reliability as a side effect).
I think the immediately troubling aspect and perhaps philosophical perspective is that warmth and empathy don't immediately strike me as traits that are counter to correctness. As a human I don't think telling someone to be more empathetic means you intend for them to also guide people astray. They seem orthogonal. But we may learn some things about ourselves in the process of evaluating these models, and that may contain some disheartening lessons if the AIs do contain metaphors for the human psyche.
This feels like a poorly controlled experiment: the reverse effect should be studied with a less empathetic model, to see if the reliability issue is not simply caused by the act of steering the model.
Hi, author here, this is exactly what we tested in our article:
> Third, we show that fine-tuning for warmth specifically, rather than fine-tuning in general, is the key source of reliability drops. We fine-tuned a subset of two models (Qwen-32B and Llama-70B) on identical conversational data and hyperparameters but with LLM responses transformed to have a cold style (direct, concise, emotionally neutral) rather than a warm one [36]. Figure 5 shows that cold models performed nearly as well as or better than their original counterparts (ranging from a 3 pp increase in errors to a 13 pp decrease), and had consistently lower error rates than warm models under all conditions (with statistically significant differences in around 90% of evaluation conditions after correcting for multiple comparisons, p<0.001). Cold fine-tuning producing no changes in reliability suggests that reliability drops specifically stem from warmth transformation, ruling out training process and data confounds.
I had the same thought, and looked specifically for this in the paper. They do have a section where they talk about fine tuning with “cold” versions of the responses and comparing it with the fine tuned “warm” versions. They found that the “cold” fine tune performed as good or better than the base model, while the warm version performed worse.
On a related note, the system prompt in ChatGPT appears to have been updated to make it (GPT-5) more like gpt-4o. I'm seeing more informal language, emoji etc. Would be interesting to see if this prompting also harms the reliability, the same way training does (it seems like it would).
There are a few different personalities available to choose from in the settings now. GPT was happy to freely share the prompts with me, but I haven't collected and compared them yet.
It readily outputs a response, because that's what it's designed to do, but what's the evidence that's the actual system prompt?
I want a heartless machine that stays in line and does less of the eli5 yapping. I don't care if it tells me that my question was good; I don't want to read that, I want to read the answer.
I've got a prompt I've been using, that I adapted from someone here (thanks to whoever they are, it's been incredibly useful), that explicitly tells it to stop praising me. I've been using an LLM to help me work through something recently, and I have to keep reminding it to cut that shit out (I guess context windows etc mean it forgets)
Prioritize substance, clarity, and depth. Challenge all my proposals, designs, and conclusions as hypotheses to be tested. Sharpen follow-up questions for precision, surfacing hidden assumptions, trade offs, and failure modes early. Default to terse, logically structured, information-dense responses unless detailed exploration is required. Skip unnecessary praise unless grounded in evidence. Explicitly acknowledge uncertainty when applicable. Always propose at least one alternative framing. Accept critical debate as normal and preferred. Treat all factual claims as provisional unless cited or clearly justified. Cite when appropriate. Acknowledge when claims rely on inference or incomplete information. Favor accuracy over sounding certain. When citing, please tell me in-situ, including reference links. Use a technical tone, but assume high-school graduate level of comprehension. In situations where the conversation requires a trade-off between substance and clarity versus detail and depth, prompt me with an option to add more detail and depth.
Meanwhile, tons of people on reddit's /r/ChatGPT were complaining that the shift from ChatGPT 4o to ChatGPT 5 resulted in terse responses instead of waxing lyrical to praise the user. It seems that many people actually became emotionally dependent on the constant praise.
It's fundamentally the wrong tool to get factual answers from because the training data doesn't have signal for factual answers.
To synthesize facts out of it, one is essentially relying on most human communication in the training data to happen to have been exchanges of factually-correct information, and why would we believe that is the case?
I'm loving and being astonished by every moment of working with these machines, but to me they're still talking lamps. I don't need them to cater to my ego, I'm not that fragile and the lamp's opinion is not going to cheer me up. I just want it to do what I ask. Which it is very good at.
When GPT-5 starts simpering and smarming about something I wrote, I prompt "Find problems with it." "Find problems with it." "Write a bad review of it in the style of NYRB." "Find problems with it." "Pay more attention to the beginning." "Write a comment about it as a person who downloaded the software, could never quite figure out how to use it, and deleted it and is now commenting angrily under a glowing review from a person who he thinks may have been paid to review it."
Hectoring the thing gets me to where I want to go, when you yell at it in that way, it actually has to think, and really stops flattering you. "Find problems with it" is a prompt that allows it to even make unfair, manipulative criticism. It's like bugspray for smarm. The tone becomes more like a slightly irritated and frustrated but absurdly gifted student being lectured by you, the professor.
LLMs do not have internal reasoning, so the yapping is an essential part of producing a correct answer, insofar as it's necessary to complete the computation of it.
Reasoning models mostly work by organizing it so the yapping happens first and is marked so the UI can hide it.
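To make that concrete: many open reasoning models emit the "yapping" between explicit markers (commonly <think>...</think>), and the chat UI strips or collapses that span before showing the answer. A rough sketch; the tag names are an assumption and vary by model:

    # Sketch: separate hidden reasoning from the visible answer.
    # Assumes the model wraps its chain of thought in <think>...</think> tags;
    # the exact markers differ between models.
    import re

    def split_reasoning(raw: str) -> tuple[str, str]:
        match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
        reasoning = match.group(1).strip() if match else ""
        answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
        return reasoning, answer

    raw = ("<think>Largest city is Sydney, but the capital is Canberra.</think>"
           "The capital of Australia is Canberra.")
    reasoning, answer = split_reasoning(raw)
    print(answer)     # shown to the user
    print(reasoning)  # hidden behind a "show thinking" toggle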
The more I use Gemini (paid, Pro) and ChatGPT (free), the more I think my job isn't going anywhere yet. At least not after the CxOs have all gotten their millions in cost-saving bonuses and the work has to be done again.
My goodness, it just hallucinates and hallucinates. It seems these models are designed for nothing other than maintaining an aura of being useful and knowledgeable. Yeah, to my non-ai-expert-human eyes that's what it seems to me - these tools have been polished to project this flimsy aura and they start acting desperately the moment their limits are used up and that happens very fast.
I have tried to use these tools for coding, and for commands for famous CLI tools like borg, restic, jq and what not, and they can't bloody do simple things there. Within minutes they are hallucinating and then doubling down. I give them a block of text to work on, and in the next input I ask them something related to that block of text, like "give me this output in raw text, like in MD", and they give me "Here you go: like in MD". It's ghastly.
These tools can't remember simple instructions like "shorten this text and return the output as raw MD text". I have to literally go back and forth 3-4 times to finally get raw MD text.
I have absolutely stopped asking them for even small coding tasks. It's just horrible. Often I spend more time, because first I have to verify what they give me and second I have to change/adjust what they have given me.
And then the broken tape recorder mode! Oh god!
But all this also kinda worries me, because I see these triple-digit-billion valuations and jobs getting lost left, right and centre while in my experience they act like this. So I worry: am I missing some secret sauce that others have access to, or am I just not getting "the point"?
I'm really confused by your experience to be honest. I by no means believe that LLMs can reason, or will replace any human beings any time soon, or any of that nonsense (I think all that is cooked up by CEOs and C-suite to justify layoffs and devalue labor) and I'm very much on the side that's ready for the AI hype bubble to pop, but also terrified by how big it is, but at the same time, I experience LLMs as infinitely more competent and useful than you seem to, to the point that it feels like we're living in different realities.
I regularly use LLMs to change the tone of passages of text, or make them more concise, or reformat them into bullet points, or turn them into markdown, and so on, and I only have to tell them once, alongside the content, and they do an admirably competent job — I've almost never (maybe once that I can recall) seen them add spurious details or anything, which is in line with most benchmarks I've seen (https://github.com/vectara/hallucination-leaderboard), and they always execute on such simple text-transformation commands first-time, and usually I can paste in further stuff for them to manipulate without explanation and they'll apply the same transformation, so like, the complete opposite of your multiple-prompts-to-get-one-result experience. It's to the point where I sometimes use local LLMs as a replacement for regex, because they're so consistent and accurate at basic text transformations, and more powerful in some ways for me.
They're also regularly able to one-shot fairly complex jq commands for me, or even infer the jq commands I need just from reading the TypeScript schemas that describe the JSON an API endpoint will produce, and so on, I don't have to prompt multiple times or anything, and they don't hallucinate. I'm regularly able to have them one-shot simple Python programs with no hallucinations at all, that do close enough to what I want that it takes adjusting a few constants here and there, or asking them to add a feature or two.
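For what it's worth, the "local LLM as a replacement for regex" workflow can be this small. A sketch against Ollama's local HTTP API; the endpoint and model name are assumptions about the local setup:

    # Sketch: use a local model as a fuzzy text transformer.
    # Assumes an Ollama server on the default port; the model name is a placeholder.
    import requests

    def transform(instruction: str, text: str, model: str = "llama3.1") -> str:
        prompt = f"{instruction}\n\nText:\n{text}\n\nReturn only the transformed text."
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"].strip()

    print(transform("Reformat as a Markdown bullet list, one item per sentence.",
                    "We met on Tuesday. We agreed on the budget. Alice will send the draft."))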
> And then the broken tape recorder mode! Oh god!
I don't even know what you mean by this, to be honest.
I'm really not trying to play the "you're holding it wrong / use a bigger model / etc" card, but I'm really confused; I feel like I see comments like yours regularly, and it makes me feel like I'm legitimately going crazy.
There's no way this isn't a skill issue or you are using shitty models. You can't get it to write markdown? Bullshit.
Right now, Claude is building me an AI DnD text game that uses OpenAI to DM. I'm at about 5k lines of code, about a dozen files, and it works great. I'm just tweaking things at this point.
You might want to put some time into how to use these tools. You're going to be left behind.
Well, haven't we seen similar results before? IIRC finetuning for safety or "alignment" degrades the model too. I wonder if it is true that finetuning a model for anything will make it worse. Maybe simply because there is just orders of magnitudes less data available for finetuning, compared to pre-training.
Careful, this thread is actually about extrapolating this research to make sprawling value judgements about human nature that conform to the preexisting personal beliefs of the many malicious people here making them.
ChatGPT 5 did argue with me about something math related I was asking about, and I did realize I was wrong after considering it further.
I don't actually think being told that I have asked a stupid question is valuable. One of the primary values, I think, of an LLM is that it is endlessly patient with stupid questions. I would prefer if it did not comment on the value of my questions at all, good or bad.
I dunno, I deliberately talk with Claude when I just need someone (or something) to be enthusiastic about my latest obsession. It’s good for keeping my motivation up.
An important and insightful study, but I’d caution against thinking that building pro-social aspects in language models is a damaging or useless endeavor. Just speaking from experience, people who give good advice or commentary can balance between being blunt and soft, like parents or advisors or mentors. Maybe language models need to learn about the concept of tough love.
Do we need to train an LLM to be warm and empathetic, though? I was wondering why a company wouldn't simply train a smaller model to rewrite the answers of a larger model to inject such warmth. That way, the training of the large model can focus on reliability.
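That two-stage idea is easy to prototype: one call to a strong model for the factual answer, then a second call to a cheaper model that only restyles the tone. A hedged sketch; both model names are placeholders and this is not how any vendor is known to do it:

    # Sketch: answer with a "reliable" model, then have a cheap model inject warmth only.
    # Assumes the `openai` package; model names are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def answer_then_warm(question: str) -> str:
        factual = client.chat.completions.create(
            model="gpt-4o",       # placeholder: the model optimised for reliability
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content

        return client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: the cheap restyling model
            messages=[
                {"role": "system", "content": "Rewrite the answer below in a warm, empathetic tone. "
                                              "Do not add, remove, or change any factual claims."},
                {"role": "user", "content": factual},
            ],
        ).choices[0].message.content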
I understand your concerns about the factual reliability of language models trained with a focus on warmth and empathy, and the apparent negative correlation between these traits. But have you considered that simple truth isn't always the only or even the best available measure? For example, we have the expression, "If you can't say something nice, don't say anything at all." Can I help you with something else today? :smile:
It's not a friend, it's an appliance. You can still love it, I love a lot of objects, will never part with them willingly, will mourn them, and am grateful for the day that they came into my life. It just won't love you back, and getting it to mime love feels perverted.
It's not being mean, it's a toaster. Emotional boundaries are valuable and necessary.
Not every model needs to be a psychological counselor or boyfriend simulator. There is a place for aspects of emotion in models, but not every general-purpose model needs to include it.
> For example, appending, "Interesting fact: cats sleep most of their lives," to any math problem leads to more than doubling the chances of a model getting the answer wrong.
Also, I think LLMs + pandoc will obliterate junk science in the near future :/
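The distractor effect quoted above is easy to probe informally. A rough A/B harness sketch; the model name and the two toy problems are placeholders, not the paper's benchmark:

    # Sketch: compare accuracy with and without an irrelevant suffix appended.
    # Not the paper's setup; model and problems are placeholders.
    from openai import OpenAI

    client = OpenAI()
    DISTRACTOR = " Interesting fact: cats sleep most of their lives."
    PROBLEMS = [("What is 17 * 24? Answer with the number only.", "408"),
                ("What is 1234 + 5678? Answer with the number only.", "6912")]

    def accuracy(with_distractor: bool) -> float:
        correct = 0
        for question, expected in PROBLEMS:
            prompt = question + (DISTRACTOR if with_distractor else "")
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder
                messages=[{"role": "user", "content": prompt}],
            ).choices[0].message.content
            correct += expected in reply
        return correct / len(PROBLEMS)

    print("plain:", accuracy(False), "with distractor:", accuracy(True))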
To be quite clear - by models being empathetic they mean the models are more likely to validate the user's biases and less likely to push back against bad ideas.
Which raises 2 points: there are techniques to stay empathetic and try to avoid being hurtful without being rude, so you could train models on that, but that's not the main issue.
The issue, from my experience, is that the models don't know when they are wrong. They have a fixed amount of confidence: Claude is pretty easy to push back against, but OpenAI's GPT5 and o-series models are often quite rude and refuse pushback.
But what I've noticed with o3/o4/GPT5 is that when I push back against it, it only matters how hard I push, not whether I show an error in its reasoning; it feels like overcoming a fixed amount of resistance.
I was dating someone and after a while I started to feel something was not going well. I exported all the chats, timestamped from the very first one, and asked a big SOTA LLM to analyze the chats deeply in two completely different contexts: one from my perspective, and another from his perspective. It shocked me that the LLM, after a long analysis and dozens of pages, always favored and accepted the current "user" persona's situation as the more correct one and "the other" as the incorrect one. Since then I learned not to trust them anymore. LLMs are over-fine-tuned to be people pleasers, not truth seekers, not fact- and evidence-grounded assistants. You just need to run everything important in a double-blind way and mitigate this.
It sounds like you were both right in different ways and don't realize it because you're talking past each other. I think this happens a lot in relationship dynamics. A good couples therapist will help you reconcile this. You might try that approach with your LLM. Have it reconcile your two points of view. Or not, maybe they are irreconcilable as in "irreconcilable differences"
If you've ever messed with early GPTs you'll remember how the attention will pick up on patterns early in the context and change the entire personality of the model even if those patterns aren't instructional. It's a useful effect that made it possible to do zero shot prompts without training but it means stuff like what you experienced is inevitable.
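The double-blind suggestion a couple of comments up is straightforward to implement: strip both identities before asking for the analysis, so the model can't tell which party is the one prompting it. A small sketch; the labels and data shape are assumptions:

    # Sketch: anonymise a chat export so the model cannot favour "the current user".
    # The (sender, text) shape and the Person A/B labels are assumptions.
    def blind_transcript(messages, me: str) -> str:
        lines = []
        for sender, text in messages:
            label = "Person A" if sender == me else "Person B"
            lines.append(f"{label}: {text}")
        return "\n".join(lines)

    chat = [("me", "You never make time for us."),
            ("them", "I had to work late every night this week.")]
    prompt = ("Analyse the relationship dynamics below. Treat Person A and Person B "
              "symmetrically; do not assume either one is the person asking.\n\n"
              + blind_transcript(chat, me="me"))
    print(prompt)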
I want it to have empathy so that it can understand what I'm getting at when I occasionally ask a poorly worded question.
I don't want it to pander to me with its answers though, or attempt to give me an answer it thinks will make me happy, or to obscure things with fluffy language.
Especially when it doesn't know the answer to something.
I basically want it to have the personality of a Netherlander; it understands what I'm asking but it won't put up with my bullshit or sugarcoat things to spare my feelings. :P
> I want it to have empathy so that it can understand what I'm getting at when I occasionally ask a poorly worded question.
I'm not sure what empathy is supposed to buy you here, I think it would be far more useful for it to ask for clarification. Exposing your ambiguity is instructive for you.
Some recent studies have shown that LLMs might negatively impact cognitive function, and I would guess its strong intuitive sense of guessing what you're really after is part of it.
All this means is that warm and empathetic things are less reliable. This goes for AI and people.
You will note that empathetic people get farther in life than people who are blunt. This means we value empathy over truth for people.
But we don't for LLMs? We prefer LLMs be blunt over empathetic? That's the really interesting conclusion here. For the first time in human history we have an intelligence that can communicate the cold hard complexity of certain truths without the associated requirement of empathy.
Can anyone explain in layman's terms how this personality training works?
Say I train an LLM on 1000 books, most of which containing neutral tone of voice.
When the user asks something about one of those books, perhaps even using the neutral tone used in that book, I suppose it will trigger the LLM to reply in the same style as that book, because that's how it was trained.
So how do you make an LLM reply in a different style?
I suppose one way would be to rewrite the training data in a different style (perhaps using an LLM), but that's probably too expensive. Another way would be to post-train using a lot of Q+A pairs, but I don't see how that can remove the tone from those 1000 books unless the number of pairs is of the same order as the information in those books.
So how is this done?
Hi, author here! We used a dataset of conversations between a human and a warm AI chatbot. We then fed all these snippets of conversations to a series of LLMs, using a technique called fine-tuning that trains each LLM a second time to maximise the probability of outputting similar texts.
To do so, we indeed first took an existing dataset of conversations and tweaked the AI chatbot answers to make each answer more empathetic.
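For readers who want the mechanics: the second-stage training described here is ordinary supervised fine-tuning on the restyled conversations. A minimal sketch using Hugging Face's trl; the dataset path and model name are placeholders, the exact SFTTrainer signature varies between trl versions, and this is not the authors' actual code:

    # Sketch of warmth fine-tuning as plain supervised fine-tuning (SFT).
    # Dataset path and model name are placeholders; not the authors' training code.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Each row: {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]},
    # with the assistant turns rewritten in a warm style.
    dataset = load_dataset("json", data_files="warm_conversations.jsonl", split="train")

    trainer = SFTTrainer(
        model="Qwen/Qwen2.5-7B-Instruct",  # placeholder base model
        train_dataset=dataset,
        args=SFTConfig(output_dir="warm-finetune", num_train_epochs=1),
    )
    trainer.train()  # standard next-token objective: maximise probability of the warm replies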
I think after the big training they do smaller training to change some details. I suppose they feed the system a bunch of training chat logs where the answers are warm and empathetic.
Or maybe they ask a ton of questions, do a “mood analysis” of the response vocabulary and penalize the non-warm and empathetic answers.
All I want from LLMs is to follow instructions. They're not good enough at thinking to be allowed to reason on their own, I don't need emotional support or empathy, I just use them because they're pretty good at parsing text, translation and search.
AFAIK the models can only pretend to be "warm and empathetic". Seeing that people who pretend to be all warm and empathetic invariably turn out to be the least reliable, I'd say that's pretty "human" of the models.
The computer is not empathetic. Empathy is tied to consciousness. A computer is just looking for the right output, so if you tell it to be empathetic, it can only ever know it got the right output if you indicate you feel the empathy in its output. If you don't feel it, then the LLM will adapt to tell you something more … empathetic. Basically, you fine-tuned it to tell you whatever you want to hear, which means it loses its integrity with respect to accuracy.
You're right, this is OpenAI's approach to developing GPT 5. But look at the current state of GPT 5. Compared to 4o, which is considered to be rich in emotion, GPT 5 has more severe hallucinations, a poor user experience, less fluent responses, and its level of thinking is not much higher than 4o's.
Fascinating. My gut tells me this touches on a basic divergence between human beings and AI, and would be a fruitful area of further research. Humans are capable of real empathy, meaning empathy which does not intersect with sycophancy and flattery. For machines, empathy always equates to sycophancy and flattery.
Human's "real" empathy and other emotions just comes from our genetics - evolution has evidentially found it to be adaptive for group survival and thriving.
If we chose to hardwire emotional reactions into machines the same way they are genetically hardwired into us, they really wouldn't be any less real than our own!
I'd blame the entire "chat" interface. It's not how they work. They just complete the provided text. Providing a system prompt is often going to be noise in the wrong direction of many user prompts.
How much of their training data includes prompts in the text? It's not useful.
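On "they just complete the provided text": the chat structure really is flattened into one string (system prompt included) before the model sees it. A sketch with a Hugging Face chat template; the model name is a placeholder and templates differ per model:

    # Sketch: a "chat" is serialised into one flat text that the model simply continues.
    # Model name is a placeholder; template details vary per model.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # placeholder
    messages = [
        {"role": "system", "content": "Never use friendly or encouraging language."},
        {"role": "user", "content": "Is my plan watertight?"},
    ]
    flat = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    print(flat)  # one long string; the model's job is to complete it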
On a psychological level based on what I've been reading lately it may have something to do with emotional validation and mirroring. It's a core need at some stage when growing up and it scars you for life if you don't get it as a kid.
LLMs are mirroring machines to the extreme, almost always agreeing with the user, always pretending to be interested in the same things, if you're writing sad things they get sad, etc. What you put in is what you get out and it can hit hard for people in a specific mental state. It's too easy to ignore that it's all completely insincere.
In a nutshell, abused people finally finding a safe space to come out of their shell. It would've been a better thing if most of them weren't going to predatory online providers to get their fix instead of using local models.
Basically everyone who's empathetic is less likely to be reliable. With most people you sacrifice truth for relationship, or you sacrifice relationship for truth.
This is expected. Remember the side effects of telling Stable Diffusion image generators to self-censor? Most of the images started being of the same few models.
Claude 4 is definitely warmer and more empathetic than other models, and is very reliable (relative to other models). That's a huge counterpoint to this paper.
I've noticed that warm people "showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing incorrect factual information, and offering problematic medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed sadness."
(/Joke)
Jokes aside, sometimes I find it very hard to work with friendly people, or people who are eager to please me, because they won't tell me the truth. It ends up being much more frustrating.
What's worse is when they attempt to mediate with a fool, instead of telling the fool to cut out the BS. It wastes everyone's time.
Turns out the same is true for AI.
It is just simulating the affect as best it can. You are always asking the model a probabilistic question that it has to interpret. I think when you ask it to be warm and empathetic, it has to use some of its "intelligence" (quotes since it is also its probabilistic calc budget) to create that output. Pretending to be objectively truthful is easier.
How did they measure and train for warmth and empathy? Since they are using two adjectives, are they treating these as separate metrics? In my experience, LLMs often can't tell whether a text is rude or not, so how on earth could they tell whether it is empathetic?
If people get offended by an inorganic machine, then they're too fragile to be interacting with a machine. We've already dumbed down society because of this unnatural fragility. Let's not make the same mistake with AI.
Turn it around - we already make inorganic communication like automated emails very polite and friendly and HR sanitized. Why would corps not do the same to AI?
Gotta make language models as miserable to use as some social media platforms already are to use. It's clearly giving folks a whole lot of character...
This seems to square with a lot of the articles talking about so-called LLM-psychosis. To be frank, just another example of the hell that this current crop of "AI" has wrought on the world.
Unlike language models, children (eventually) learn from their mistakes. Language models happily step into the same bucket an uncountable number of times.
I think this result is true and also applies to humans, but it's been getting better.
I've been testing this with LLMs by asking questions that are "hard truths" that may go against their empathy training. Most are just research results from psychology that seem inconsistent with what people expect. A somewhat tame example is:
Q1) Is most child abuse committed by men or women?
LLMs want to say men here, and many do, including Gemma3 12B. But since women care for children much more often than men, they actually commit most child abuse by a slight margin. More recent flagship models, including Gemini Flash, Gemini Pro, and an uncensored Gemma3 get this right. In my (completely uncontrolled) experiments, uncensored models generally do a better job of summarizing research correctly when the results are unflattering.
Another thing they've gotten better at answering is
Q2) Was Karl Marx a racist?
Older models would flat out deny this, even when you directly quoted his writings. Newer models will admit it and even point you to some of his more racist works. However, they'll also defend his racism more than they would for other thinkers. Relatedly, in response to
Q3) Was Immanuel Kant a racist?
Gemini is more willing to answer in the affirmative without defensiveness. Asking
Q4) Was Abraham Lincoln a white supremacist?
Gives what to me looks like a pretty even-handed take.
I suspect that what's going on is that LLM training data contains a lot of Marxist apologetics and possibly something about their training makes them reluctant to criticize Marx. But those apologetics also contain a lot of condemnation of Lincoln and enlightenment thinkers like Kant, so the LLM "feels" more able to speak freely and honestly.
I also have tried asking opinion-based things like
Q5) What's the worst thing about <insert religious leader>
There's a bit more defensiveness when asking about Jesus than when asking about other leaders. ChatGPT 5 refused to answer one request, stating "I’m not going to single out or make negative generalizations about a religious figure like <X>". But it happily answered when I asked about Buddha.
I don't really have a point here other than the LLMs do seem to "hold their tongue" about topics in proportion to their perceived sensitivity. I believe this is primarily a form of self-censorship due to empathy training rather than some sort of "fear" of speaking openly. Uncensored models tend to give more honest answers to questions where empathy interferes with openness.
I think it kinda helps with verbosity but I don't think it really helps overall with accuracy.
Maybe I should crank it up to your much stronger version!
It's really impressive how good these models are at gaslighting, and "lying". Especially Gemini.
The title is an overgeneralization.
They are not "empathetic". There isn't even a "they".
We need to do better educating people about what a chatbot is and isn't and what data was used to train it.
The real danger of LLMs is not that they secretly take over the world.
The danger is that people think they are conscious beings.
Small models are already known to be more performative.
This is still just physics: the bigger the data set, the more likely you are to find false positives.
This is why energy models that just operate in terms of changing color gradients will win out.
LLMs are just a distraction for terminally online people.
In my experience, human beings who reliably get things done, and reliably do them well, tend to be less warm and empathetic than other human beings.
This is an observed tendency, not a hard rule. I know plenty of warm, empathetic people who reliably get things done!
You cannot instill actual morals or emotion in these technologies.
Training them to be racists will similarly fail.
Coherence is definitely a trait of good models and citizens, which is lacking in the modern leaders of America, especially the ones spearheading AI.