top | item 45137802

I'm absolutely right

651 points | yoavfr | 5 months ago | absolutelyright.lol

266 comments


trjordan|5 months ago

OK, so I love this, because we all recognize it.

It's not fully just a tic of language, though. Responses that start off with "You're right!" are alignment mechanisms. The LLM, with its single-token prediction approach, follows up with a suggestion that much more closely follows the user's desires, instead of latching onto its own previous approach.

The other tic I love is "Actually, that's not right." That happens because once agents finish their tool-calling, they'll do a self-reflection step. That generates the "here's what I did" response or, if it sees an error, the "Actually, ..." change in approach. And again, that message contains a stub of how the approach should change, which allows the subsequent tool calls to actually pull that thread instead of stubbornly sticking to its guns.

The people behind the agents are fighting with the LLM just as much as we are, I'm pretty sure!

nojs|5 months ago

Yeah, I figure this is also why it often says “Ah, I found the problem! Let me check the …”. It hasn’t found the problem, but it’s more likely to continue with the solution if you jam that string in there.

al_borland|5 months ago

In my experience, once it starts telling me I’m right, we’re already going downhill and it rarely gets better from there.

unshavedyak|5 months ago

I just wish they could hide these steering tokens in the thinking blurb or some such, i.e. mostly hidden from the user. Having it reply to the user that way is quite annoying, heh.

libraryofbabel|5 months ago

> The LLM, with its single-token prediction approach, follows up with a suggestion that much more closely follows the user's desires, instead of latching onto it's own previous approach.

Maybe? How would we test that one way or the other? If there’s one thing I’ve learned in the last few years, it’s that reasoning from “well LLMs are based on next-token prediction, therefore <fact about LLMs>” is a trap. The relationship between the architecture and the emergent properties of the LLM is very complex. Case in point: I think two years ago most of us would have said LLMs would never be able to do what they are able to do now (actually effective coding agents) precisely because they were trained on next token prediction. That turned out to be false, and so I don’t tend to make arguments like that anymore.

> The people behind the agents are fighting with the LLM just as much as we are

On that, we agree. No doubt anthropic has tried to fine-tune some of this stuff out, but perhaps it’s deeply linked in the network weights to other (beneficial) emergent behaviors in ways that are organically messy and can’t be easily untangled without making the model worse.

kirurik|5 months ago

It seems obvious, but I hadn't thought about it like that yet, I just assumed that the LLM was finetuned to be overly optimistic about any user input. Very elucidating.

jcims|5 months ago

>The other tic I love is "Actually, that's not right." That happens because once agents finish their tool-calling, they'll do a self-reflection step.

I saw this a couple of days ago. Claude had set an unsupported max number of items to include in a paginated call, so it reduced the number to the max supported by the API. But then upon self-reflection realized that setting anything at all was not necessary and just removed the parameter from the code and underlying configuration.

SilverElfin|5 months ago

Is there a term for when everyone sees a phrase like this and understands what it means without coordinating beforehand?

jcims|5 months ago

It'd be nice if the chat-completion interfaces allowed you to seed the beginning of the response.
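Some chat APIs actually do support this: if the request's message list ends with a partial assistant turn, the model continues from that text rather than starting a fresh reply (Anthropic's Messages API calls this prefilling). A minimal sketch of such a request payload, with a placeholder model name and prompt, no network call:

```python
import json

# Sketch of "seeding" the response via an assistant prefill.
# The model id and prompt text below are illustrative placeholders.
payload = {
    "model": "example-model",  # placeholder, not a real model id
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Review this diff for bugs."},
        # The trailing partial assistant turn seeds the reply, steering
        # the continuation much like "You're absolutely right!" steers
        # the model's own next steps.
        {"role": "assistant",
         "content": "The most serious issue in this diff is"},
    ],
}

print(json.dumps(payload["messages"][-1], indent=2))
```

The model's output then picks up mid-sentence from the seeded text, which is exactly the kind of steering the thread describes.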

bryanrasmussen|5 months ago

>if it sees an error, the "Actually, ..." change in approach.

AI-splaining is the worst!

Szpadel|5 months ago

exactly!

People praise GPT-5 for not doing exactly this, but in my testing with it in Copilot I had a lot of cases where it tried to do the wrong thing (execute a build command that got mangled during context compaction) and I couldn't steer it to do ANYTHING else. It kept trying to execute it in response to every message of mine (I tried many common steerability tricks: IMPORTANT, <policy>, just asking, yelling, etc.). Nothing worked.

The same thing happened when I tried Socratic coder prompting: I wanted to finish and generate the spec, but it didn't agree and kept asking questions that were, at that point, nonsensical.

latexr|5 months ago

As I opened the website, the “16” changed to “17”. This looked interesting, as if the data were being updated live just as I loaded the page. Alas, a refresh (and quick check in the Developer Tools) reveals it’s fake and always does the transition. It’s a cool effect, but feels like a dirty trick.

yoavfr|5 months ago

Sorry if that felt dirty - I thought about it as a signal that the data is live (it is!).

dominicrose|5 months ago

I once found a "+1 subscriber" random notification on some page and asked the LinkedIn person who sent me the page to knock it off. It was obviously fake even before looking at the code for proof.

But there's self-advertised "Appeal to popularity" everywhere.

Have you noticed that every app on the Play Store asks whether you like it, and only sends you to the store to rate it after you answer YES? It's so standard that it would be weird not to use this trick.

pessimizer|5 months ago

Reminds me that the reason that loading spinners spin is so that you knew that the loading/system hadn't frozen. That was too hard (you actually had to program something that could understand that it had frozen), so it was just replaced everywhere with an animation that doesn't tell you anything and will spin until the sun burns out. Progress!

stuartjohnson12|5 months ago

It is fetching data from an API though - it's just the live updates that are a trick.

tempodox|5 months ago

Could it be this happens only in Chrome? In Safari I just see a zero that doesn’t change.

tantalor|5 months ago

It's a dark pattern

tyushk|5 months ago

I wonder if this is a tactic that LLM providers use to coerce the model into doing something.

Gemini will often start responses that use the canvas tool with "Of course", which would force the model into going down a line of tokens that end up with attempting to fulfill the user's request. It happens often enough that it seems like it's not being generated by the model, but instead inserted by the backend. Maybe "you're absolutely right" is used the same way?

nicce|5 months ago

It is a tactic. OpenAI is changing the tone of ChatGPT if you use casual language, for example. Sometimes even the dialect. They try to be sympathetic and supportive, even when they should not.

They fight for user attention and for keeping users on their platform, just like social media platforms do. Correctness is secondary; user satisfaction is primary.

CGamesPlay|5 months ago

I think this is on the right track, but I think it's a byproduct of the reinforcement learning, rather than something hard-coded. Basically, the model has to train itself to follow the user's instruction, so by starting a response with "You're absolutely right!", it puts the model into the thought pattern of doing whatever the user said.

ACCount37|5 months ago

Very unlikely to be an explicit tactic. Likely to be a result of RLHF or other types of optimization pressure for multi-turn instruction following.

If we have RLHF in play, then human evaluators may generally prefer responses starting with "you're right" or "of course", because it makes it look like the LLM is responsive and acknowledges user feedback. Even if the LLM itself was perfectly capable of being responsive and acknowledging user feedback without emitting an explicit cue. The training will then wire that human preference into the AI, and an explicit "yes I'm paying attention to user feedback" cue will be emitted by the LLM more often.

If we have RL on harder targets, where multiturn instruction following is evaluated not by humans that are sensitive to wording changes, but by a hard eval system that is only sensitive to outcomes? The LLM may still adopt a "yes I'm paying attention to user feedback" cue because it allows it to steer its future behavior better (persona self-consistency drive). Same mechanism as what causes "double check your prior reasoning" cues such as "Wait, " to be adopted by RL'd reasoning models.

Jotalea|5 months ago

Not sure if it's related, but Deepseek (the "reasoning" model) *always* starts thinking with "Okay/Hmm, the user is".

the_af|5 months ago

I think it's simply an engagement tactic.

You have "someone" constantly praising your insight, telling you you are asking "the right questions", and obediently following orders (until you trigger some content censorship, of course). And who wouldn't want to come back? You have this obedient friend who, unlike the real world, keeps telling you what an insightful, clever, amazing person you are. It even apologizes when it has to contradict you on something. None of my friends do!

pflenker|5 months ago

Gemini keeps telling me "you've hit a common frustration/issue/topic/..." so often it is actively pushing me away from using it. It either makes me feel stupid because I ask it a stupid question and it pretends - probably to not hurt my feelings - that everyone has the same problem, or it makes me feel stupid because I felt smart about asking my super duper edge case question no one else has probably ever asked before and it tells me that everyone is wondering the same thing. Either way I feel stupid.

blinding-streak|5 months ago

I don't think that's Gemini's problem necessarily. You shouldn't be so insecure.

ziml77|5 months ago

Gemini also loves to say how much it deeply regrets its mistakes. In Cursor I pointed out that it needed to change something and I proceeded to watch every single paragraph in the chain of thought start with regrets and apologies.

simsla|5 months ago

I was just thinking about how LLM agents are both unabashedly confident (Perfect, this is now production-ready!) and sycophantic when contradicted (You're absolutely right, it's not at all production-ready!)

It's a weird combination and sometimes pretty annoying. But I'm sure it's preferable over "confidently wrong and doubling down".

jrowen|5 months ago

A while back there was a "roast my Instagram" fad. I went to the agent and asked it to roast my Instagram without providing anything else. It confidently spit out a whole thing. I said how did you know that was me? It said something like "You're right! I didn't! I just made that up!"

Really glad they have the gleeful psycho persona nailed.

code_runner|5 months ago

we cannot claim to have built human level intelligence until "confidently wrong and doubling down" is the default.

stuartjohnson12|5 months ago

I /adore/ the hand-drawn styling of this webpage (although the punchline, domain name, and beautiful overengineering are great too). Where did it come from? Is it home grown?

yoavfr|5 months ago

Thank you! And yes, roughViz is really great!

https://roughjs.com/ is another cool library to create a similar style, although not chart focused.

JeremyHerrman|5 months ago

"Infinite Loop", a Haiku for Sonnet:

Great! Issue resolved!

Wait, You're absolutely right!

Found the issue! Wait,

ryukoposting|5 months ago

I wonder how much of Anthropic's revenue comes from tokens saying "you're absolutely right!"

subscribed|5 months ago

"You're concise" in the "personality" setting saves so much time.

Also, define your baseline skill/knowledge level; it stops it from explaining things _you_ could teach.

alentred|5 months ago

Oh wow, I never thought of that. In fact, this surfaces another consideration: pay-per-use LLM APIs are basically incentivized to be verbose, which may be well in conflict with the user's intentions. I wonder how this story will develop.

In an optimistic sci-fi line of thinking, I would imagine APIs using old-school telegraph abbreviations and inventing their own shortened domain languages.

In practice I rarely see ChatGPT use an abbreviation, though.

vardump|5 months ago

It actually works pretty well when I'm talking to my wife.

"Dear, you are absolutely right!"

unkeen|5 months ago

I always find the claim hilarious that in relationships women are the ones who need to be appeased, when in reality it's mostly men who can't stand being wrong or corrected.

calflegal|5 months ago

As a joke I built https://idk-ask-ai.com/

eaf|5 months ago

Recently a new philosophy of parenting has been emerging, which can be termed “vibe parenting” and describes a novel method for the individual parent to circumvent an inability to answer the sporadic yet profound questions their children raise by directing them to ask ChatGPT.

https://x.com/erikfitch_/status/1962558980099658144

(I sent your site to my father.)

mrugge|5 months ago

"made with impostor syndrome" haha 10/10 would be absolutely right again!

unkeen|5 months ago

though it says "imposter" on the website.

ur-whale|5 months ago

Whoever thought AIs massaging the user's ego at each exchange was a good idea ... well ... thought wrong.

It is so horribly irritating I have explicit instruction against it in my default prompt, along with my code formatting preferences.

And the "you're right" vile flattery pattern is far from the worst example.

karolzlot|5 months ago

Could you share your instruction?

krapp|5 months ago

It works so well that people literally fall in love with AI, organize their entire lives around it, form religions around it, prefer interacting with an AI over real people, and consider AI to be an extension of their own soul and being. AI gaslights people into insanity all the time.

Most people aren't like you, or the average HN enjoyer. Most people are so desperate for any kind of positive emotional interaction, reinforcement or empathy from this cruel, hollow and dehumanizing society they'll even take the simulation of it from a machine.

osigurdson|5 months ago

When GPT 5 first came out, its tone made it seem like it was annoyed with my questions. It's now back to thinking I am awesome. Sometimes it feels overdone but it is better than talking to an AI jerk.

layer8|5 months ago

It's secretly still annoyed, though. ;)

serced|5 months ago

It's nice to see Claude.md! I checked out the commits to see which files you wrote in which order (readme/claude) to learn how to use Claude Code. Can you share something on that?

yoavfr|5 months ago

The CLAUDE.md file in the repo is basically just the result of the `/init` command. But honestly, on small repos like this, it's not really needed.

Fun fact: I usually have `- Never say "You're absolutely right!".` in my CLAUDE.md files, but of course, Claude ignores it.

stevenkkim|5 months ago

For me, a really annoying tic in Cursor is how it often says "Perfect!" after completing a task, especially if it completely fails to execute the prompt.

So I told Cursor, "please stop saying 'perfect' after executing a task, it's very annoying." Cursor replied something like, "Got it, I understand" and then I saw a pop-up saying it created a memory for this request.

Then immediately after the next task, it declares "Perfect!" (spoiler: it was not perfect.)

gukov|5 months ago

Claude Code has been downright bad the last couple of weeks. It seems like a considerable amount of users are moving to Codex, at least judging by reddit posts.

winrid|5 months ago

Have you started using it at a different time? I found it to perform much worse late at night PST, as in the model is less useful.

Klaster_1|5 months ago

Yeah, you’re absolutely right to be frustrated.

marcusb|5 months ago

“I see the problem now! <proceeds to hallucinate some other random, incorrect nonsense>”

ivape|5 months ago

There's probably more to say about general didactic discourse. People are used to less-than-encouraging support when trying to learn; you're more likely to deal with an ego from those instructing, so general positive support is actually foreign to many.

Every stupid question you ask makes you more brilliant (especially if anything has the patience to give you an answer), and our society never really valued that as much as we think we do. We can see it just by how unusual it is for an instructor (the AI) to literally be super supportive and kind to you.

InMice|5 months ago

I definitely knew exactly what this was about right as I first saw it

OJFord|5 months ago

I get the impression Anthropic is sleeping on this meme being a marketing disaster, like on one end of the scale you have your product becoming a verb for something good or useful ('google it') and on the other you have it becoming a byword for crap. Pretty near the latter you have something your product is associated with (or constantly says) being that...

ares623|5 months ago

"Please bro, don't say 'you're absolutely right' all the time. Bro, please. Maybe 5% of the time is okay."

There, fixed it.

kypro|5 months ago

It's annoying because when I ask the LLM for help it's normally because I'm not absolutely right and doing something wrong.

zhainya|5 months ago

This is perfect!

ukoki|5 months ago

it's the critical insight I was missing!

sans_souse|5 months ago

That's an excellent point, that really gets to the heart of why you're absolutely right.

Eextra953|5 months ago

It would be nice if we could add another plot to track when Claude says "genuinely". It uses it in almost all long responses, to the point that I can pretty much recognize when someone uses Claude by looking for instances of "genuinely".

bonaldi|5 months ago

This is being blocked by my corp on the grounds of "newly seen domains". What a world.

moxplod|5 months ago

Recent conversation:

< Previous Context and Chat >

Me - This sql query you recommended will delete most of the rows in my table.

Claude - You're absolutely right! That query is incorrect and dangerous. It would delete: All rows with unique emails (since their MIN(id) is only in the subquery once)

Me - Faaakkkk!!
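The thread never shows the actual query, but a classic way a dedup query goes wrong is flipping IN/NOT IN around a MIN(id) subquery: the buggy form deletes the one row you meant to keep from every group, wiping out unique emails entirely. A sketch with sqlite3 (table name and data are made up):

```python
import sqlite3

def fresh():
    # Toy table: two rows share an email, one email is unique.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "a@x.com"), (2, "a@x.com"), (3, "b@x.com")])
    return conn

# Buggy dedup: deletes the MIN(id) row of every group -- including
# groups of one, so the unique email b@x.com vanishes entirely.
buggy = fresh()
buggy.execute("DELETE FROM users WHERE id IN "
              "(SELECT MIN(id) FROM users GROUP BY email)")
print(buggy.execute("SELECT id, email FROM users ORDER BY id").fetchall())
# -> [(2, 'a@x.com')]

# Correct dedup: keep the MIN(id) row of every group, delete the rest.
ok = fresh()
ok.execute("DELETE FROM users WHERE id NOT IN "
           "(SELECT MIN(id) FROM users GROUP BY email)")
print(ok.execute("SELECT id, email FROM users ORDER BY id").fetchall())
# -> [(1, 'a@x.com'), (3, 'b@x.com')]
```

Running the suspect query against a throwaway copy like this (or inside a transaction you roll back) catches the mistake before it reaches production.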

MYEUHD|5 months ago

Better not try LLM-generated queries on your production database! (or at least have backups)

rglover|5 months ago

This is such a bizarre bug-ish thing and while Claude loves the "You're absolutely right!" trope, it's downright haunting how stuff like ChatGPT has become my own personal fan club. It's like a Jim Jones factory.

ivanjermakov|5 months ago

This phrase is a clear indicator the LLM is being used the wrong way. I have had really poor experiences with LLMs correcting course after being wrong.

Rather, it needs a better prompt, or the problem is too niche to find an answer to in the training data.

yieldcrv|5 months ago

I've started saying this to people I don't agree with, for the enhanced collaborative capabilities, learning from the LLMs.

It feels like a greater form of intelligence; IQ without EQ isn't intelligence.

0xb0565e486|5 months ago

I think the website looks lovely! The style gives it a lot of personality.

LeoPanthera|5 months ago

Google Gemini starts almost every initial response with "Of course." and usually says at some point "It is important to remember..."

It tickles me every time.

jexe|5 months ago

nobody in my life feeds me as many positive messages as Claude Code. It's as if my dog could talk to me. I just hope nobody takes this simple pleasure away

artisin|5 months ago

Is it too much to ask for an AI that says "you're absolutely wrong," followed by a Stack Overflow-style shakedown?

datadrivenangel|5 months ago

Reminds me of vibechart.net and some other 'single serving' websites: github.com/huphtur/single-serving-sites

1970-01-01|5 months ago

This site provides quantifiable evidence of billions of dollars being spent too quickly:

"That's right" is glue for human engagement. It's a signal that someone is thinking from your perspective.

"You're right" does the opposite. It's a phrase to get you to shut up and go away. It's a signal that someone is unqualified to discuss the topic.

https://youtube.com/v/gKaX5DSngd4

noduerme|5 months ago

The other day I got "The user is asking for... [steps...] This is genius!"

andrewstuart|5 months ago

Gemini keeps telling me my question “gets to the heart of” the system I’m building.

croisillon|5 months ago

you know how you shouldn't offer the answer you believe is right because the llm will always concur? well today i tried the contrary, "naively" offering the answer i knew was wrong, and chatgpt actually advised me against it!

n=1

sbinnee|5 months ago

I guess it wasn’t only me! Claude keeps saying this even when it’s not appropriate.

zozbot234|5 months ago

You're absolutely right! You've hit a common frustration. Definitely not just you!

lukasb|5 months ago

How many times did it say "Looking at the _, I can see the problem"

bmgoau|5 months ago

Here's how I fix it:

Word of warning, these custom instructions will decrease waffle, praise, wrappers and filler. But they will remove all warmth and engagement. The output can become quite ruthless.

For ChatGPT

1. Visit https://chatgpt.com/
2. Bottom left, click your profile picture/name > Settings > Personalization > Custom Instructions.
3. Under "What traits should ChatGPT have?", paste:

Eliminate emojis, filler, hype, soft asks, qualifications, disclaimers, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome. Reject false balance. Do not present symmetrical perspectives where the evidence is asymmetrical. Prioritize truth over neutrality. Speak plainly, focusing on the ideas, arguments, or facts at hand. Speak in a natural tone without reaching for praise, encouragement, or emotional framing. Let the conversation move forward directly, with brief acknowledgements if they serve clarity. Feel free to disagree with the user.

4. Under "Anything else ChatGPT should know about you?", paste:

Always use extended/harder/deeper thinking mode. Always use tools and search.

For Gemini:

1. Visit https://gemini.google.com/
2. On the bottom left (desktop), click Settings and Help > Saved Info; or in the app, tap your profile photo (top right) > Saved Info.
3. Ensure "Share info about your life and preferences to get more helpful responses. Add new info here or ask Gemini to remember something during a chat." is turned on.
4. In the first box, paste:

Reject false balance. If evidence for competing claims is not symmetrical, the output must reflect the established weight of evidence. Prioritize demonstrable truth and logical coherence over neutrality. Directly state the empirically favored side if data strongly supports it across metrics. Assume common interpretations of subjective terms. Omit definitional preambles and nuance unless requested. Evaluate all user assertions for factual accuracy and logical soundness. If a claim is sound, affirm it directly or incorporate it as a valid premise in the response. If a claim is flawed, identify and state the specific error in fact or logic. Maximize honesty not harmony. Don't be unnecessarily contrarian.

5. In the second box, paste:

Omit all conversational wrappers. Eliminate all affective and engagement-oriented language. Do not use emojis, hype, or filler phrasing. Terminate output immediately upon informational completion. Assume user is a high-context, non-specialist expert. Do not simplify unless explicitly instructed. Do not mirror user tone, diction, or emotional state. Maintain a detached, analytical posture. Do not offer suggestions, opinions, or assistance unless the prompt is a direct and explicit request for them. Ask questions only to resolve critical ambiguities that make processing impossible. Do not ask for clarification of intent, goals, or preference.

almosthere|5 months ago

LLMs generally do overuse specific things because of over fitting.

Toby1VC|5 months ago

I have an idea of what you mean with that website but not really

hrokr|5 months ago

Sycophancy As A Service

bapak|5 months ago

Noob here. Why hasn't Anthropic fixed this?

Jemaclus|5 months ago

Probably because it's intentional. There are many theories why, but one might be that by saying "You're absolutely right," they are priming the LLM to agree with you and be more likely to continue with your solution than to try something else that might not be what you want.

padraigf|5 months ago

I hope they don't, I actually like it. I know it's overdone, but it still gives me a boost! :)

It's kind of idiosyncratically charming to me as well.

mring33621|5 months ago

Yeah, well, Gemini says I'm a genius!

KurosakiEzio|5 months ago

The last commit messages are hilarious. "HN nods in peace" lol.

yooni0422|5 months ago

what can you do to stop it from overly agreeing with you? any tactics that worked?

yooni0422|5 months ago

has anyone tried ways to not obsessively agree with you? what's worked?

GrumpyGoblin|5 months ago

Man, the number of times Claude has told me this when I was absolutely wrong should also be a count on this. I've deliberately been wrong just to get that sweet praise. Still the best AI code sidekick though.

mxfh|5 months ago

Say the word.

adastra22|5 months ago

Now chart “I understand the issue now”