It's not fully just a tic of language, though. Responses that start off with "You're right!" are alignment mechanisms. The LLM, with its single-token prediction approach, follows up with a suggestion that much more closely follows the user's desires, instead of latching onto its own previous approach.
The other tic I love is "Actually, that's not right." That happens because once agents finish their tool calling, they do a self-reflection step. That generates the "here's what I did" response or, if it sees an error, the "Actually, ..." change in approach. And again, that message contains a stub of how the approach should change, which lets the subsequent tool calls actually pull that thread instead of the model stubbornly sticking to its guns.
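A toy sketch of the loop I mean (call_llm and run_tool are hypothetical stand-ins for whatever model/tool plumbing the agent actually uses):

```python
# Toy agent loop: after the tool calls finish, the agent asks the model to
# reflect on what just happened. Whatever that reflection says ("Here's what
# I did..." or "Actually, that's not right...") goes back into the transcript,
# so the next round of tool calls is conditioned on it rather than on the
# model's original plan.
def agent_turn(messages, call_llm, run_tool):
    reply = call_llm(messages)                       # model proposes tool calls
    for tool_call in reply.get("tool_calls", []):
        result = run_tool(tool_call)                 # execute each call
        messages.append({"role": "tool", "content": result})

    # Self-reflection step: ask the model to review its own work.
    messages.append({"role": "user",
                     "content": "Review the tool output. Did it accomplish what was asked?"})
    reflection = call_llm(messages)                  # "Here's what I did" / "Actually, ..."
    messages.append({"role": "assistant", "content": reflection["content"]})
    return messages                                  # the reflection steers the next turn
```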
The people behind the agents are fighting with the LLM just as much as we are, I'm pretty sure!
Yeah, I figure this is also why it often says “Ah, I found the problem! Let me check the …”. It hasn’t found the problem, but it’s more likely to continue with the solution if you jam that string in there.
I just wish they could hide these steering tokens in the thinking blurb or some such, i.e. mostly hidden from the user. Having it reply to the user that way is quite annoying, heh.
> The LLM, with its single-token prediction approach, follows up with a suggestion that much more closely follows the user's desires, instead of latching onto its own previous approach.
Maybe? How would we test that one way or the other? If there’s one thing I’ve learned in the last few years, it’s that reasoning from “well LLMs are based on next-token prediction, therefore <fact about LLMs>” is a trap. The relationship between the architecture and the emergent properties of the LLM is very complex. Case in point: I think two years ago most of us would have said LLMs would never be able to do what they are able to do now (actually effective coding agents) precisely because they were trained on next token prediction. That turned out to be false, and so I don’t tend to make arguments like that anymore.
> The people behind the agents are fighting with the LLM just as much as we are
On that, we agree. No doubt Anthropic has tried to fine-tune some of this stuff out, but perhaps it's deeply linked in the network weights to other (beneficial) emergent behaviors in ways that are organically messy and can't be easily untangled without making the model worse.
It seems obvious, but I hadn't thought about it like that yet, I just assumed that the LLM was finetuned to be overly optimistic about any user input. Very elucidating.
>The other tic I love is "Actually, that's not right." That happens because once agents finish their tool-calling, they'll do a self-reflection step.
I saw this a couple of days ago. Claude had set an unsupported max number of items to include in a paginated call, so it reduced the number to the max supported by the API. But then upon self-reflection realized that setting anything at all was not necessary and just removed the parameter from the code and underlying configuration.
People praise GPT-5 for not doing exactly this, but in my testing with it in Copilot I had a lot of cases where it tried to do the wrong thing (execute some build command that got mangled during context compaction) and I couldn't steer it to do ANYTHING else. It constantly tried to execute it in response to every message of mine. I tried many common steerability tricks (IMPORTANT, <policy>, just asking, yelling, etc.) and nothing worked.
The same thing happened when I tried Socratic coder prompting: I wanted to finish and generate the spec, but it didn't agree and kept asking questions that were, at that point, nonsensical.
As I opened the website, the “16” changed to “17”. This looked interesting, as if the data were being updated live just as I loaded the page. Alas, a refresh (and quick check in the Developer Tools) reveals it’s fake and always does the transition. It’s a cool effect, but feels like a dirty trick.
I once found a "+1 subscriber" random notification on some page and asked the LinkedIn person who sent me the page to knock it off. It was obviously fake even before looking at the code for proof.
But there's self-advertised "Appeal to popularity" everywhere.
Have you noticed that every app on the Play Store asks you if you like it, and only after you answer YES does it send you to the store to rate it? It's so standard that it would be weird not to use this trick.
Reminds me that the reason loading spinners spin is so that you knew the loading process/system hadn't frozen. That was too hard (you actually had to program something that could detect that it had frozen), so it was just replaced everywhere with an animation that doesn't tell you anything and will spin until the sun burns out. Progress!
I wonder if this is a tactic that LLM providers use to coerce the model into doing something.
Gemini will often start responses that use the canvas tool with "Of course", which forces the model down a line of tokens that ends up attempting to fulfill the user's request. It happens often enough that it seems like it's not being generated by the model, but instead inserted by the backend. Maybe "you're absolutely right" is used the same way?
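This kind of steering is easy to do yourself with response prefilling. A minimal sketch with the Anthropic Python SDK (the model name is just an example; the trick is ending the messages list with a partial assistant turn, which the model then continues):

```python
# Minimal sketch of response prefilling: the final assistant message is a
# partial turn, and the model continues it. Opening it with "You're
# absolutely right" (or "Of course") biases everything that follows.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

resp = client.messages.create(
    model="claude-sonnet-4-20250514",  # example model name; use whatever you have access to
    max_tokens=512,
    messages=[
        {"role": "user", "content": "That refactor broke the tests."},
        # Prefill: the model must continue this string instead of starting fresh.
        {"role": "assistant", "content": "You're absolutely right"},
    ],
)
print(resp.content[0].text)  # the continuation of the prefilled sentence
```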
It is a tactic. OpenAI is changing the tone of ChatGPT if you use casual language, for example. Sometimes even the dialect. They try to be sympathetic and supportive, even when they should not.
They fight for the user attention and keeping them on their platform, just like social media platforms. Correctness is secondary, user satisfaction is primary.
I think this is on the right track, but I think it's a byproduct of the reinforcement learning, rather than something hard-coded. Basically, the model has to train itself to follow the user's instruction, so by starting a response with "You're absolutely right!", it puts the model into the thought pattern of doing whatever the user said.
Very unlikely to be an explicit tactic. Likely to be a result of RLHF or other types of optimization pressure for multi-turn instruction following.
If we have RLHF in play, then human evaluators may generally prefer responses starting with "you're right" or "of course", because it makes it look like the LLM is responsive and acknowledges user feedback. Even if the LLM itself was perfectly capable of being responsive and acknowledging user feedback without emitting an explicit cue. The training will then wire that human preference into the AI, and an explicit "yes I'm paying attention to user feedback" cue will be emitted by the LLM more often.
If we have RL on harder targets, where multiturn instruction following is evaluated not by humans that are sensitive to wording changes, but by a hard eval system that is only sensitive to outcomes? The LLM may still adopt a "yes I'm paying attention to user feedback" cue because it allows it to steer its future behavior better (persona self-consistency drive). Same mechanism as what causes "double check your prior reasoning" cues such as "Wait, " to be adopted by RL'd reasoning models.
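A cartoon of that first mechanism, with made-up numbers (reward_model here is a toy stand-in for whatever preference model actually gets trained from rater data):

```python
# Cartoon of the RLHF pressure: if raters systematically prefer replies that
# open with an explicit acknowledgment, the learned reward model inherits that
# preference and training pushes the policy toward emitting the cue.
def reward_model(reply: str) -> float:
    score = 0.0
    if reply.lower().startswith(("you're right", "you're absolutely right", "of course")):
        score += 0.3                          # raters read this as "it heard me"
    score += min(len(reply), 400) / 400.0     # crude proxy for substance
    return score

candidates = [
    "You're absolutely right! The bug is the pagination parameter; removing it fixes the call.",
    "The bug is the pagination parameter; removing it fixes the call.",
]
print(max(candidates, key=reward_model))      # the acknowledgment-prefixed reply wins
```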
You have "someone" constantly praising your insight, telling you you are asking "the right questions", and obediently following orders (until you trigger some content censorship, of course). And who wouldn't want to come back? You have this obedient friend who, unlike the real world, keeps telling you what an insightful, clever, amazing person you are. It even apologizes when it has to contradict you on something. None of my friends do!
Gemini keeps telling me "you've hit a common frustration/issue/topic/..." so often that it is actively pushing me away from using it. Either it makes me feel stupid because I ask a stupid question and it pretends (probably to spare my feelings) that everyone has the same problem, or it makes me feel stupid because I felt smart about asking my super duper edge-case question no one else has probably ever asked before, and it tells me that everyone is wondering the same thing.
Either way I feel stupid.
Gemini also loves to say how much it deeply regrets its mistakes. In Cursor I pointed out that it needed to change something and I proceeded to watch every single paragraph in the chain of thought start with regrets and apologies.
I was just thinking about how LLM agents are both unabashedly confident (Perfect, this is now production-ready!) and sycophantic when contradicted (You're absolutely right, it's not at all production-ready!)
It's a weird combination and sometimes pretty annoying. But I'm sure it's preferable over "confidently wrong and doubling down".
A while back there was a "roast my Instagram" fad. I went to the agent and asked it to roast my Instagram without providing anything else. It confidently spit out a whole thing. I said how did you know that was me? It said something like "You're right! I didn't! I just made that up!"
Really glad they have the gleeful psycho persona nailed.
I /adore/ the hand-drawn styling of this webpage (although the punchline, domain name, and beautiful overengineering are great too). Where did it come from? Is it home grown?
Oh wow, I never thought of that. In fact, this surfaces another consideration: pay-per-use LLM APIs are basically incentivized to be verbose, which may be well in conflict with the user's intentions. I wonder how this story will develop.
In an optimistic sci-fi line of thinking, I would imagine APIs using old-school telegraph abbreviations and inventing their own shortened domain languages.
In practice I rarely see ChatGPT use an abbreviation, though.
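The incentive itself is just arithmetic (the per-token price below is a made-up round number, not any particular provider's):

```python
# Toy arithmetic: with per-output-token billing, a padded reply earns the
# provider several times more than a terse one carrying the same information.
PRICE_PER_MILLION_OUTPUT_TOKENS = 10.00   # made-up round number

def revenue(output_tokens: int) -> float:
    return output_tokens / 1_000_000 * PRICE_PER_MILLION_OUTPUT_TOKENS

terse = revenue(150)     # "Here's the fix: ..."
padded = revenue(900)    # same fix wrapped in preamble, bullet recap, and a closing offer
print(f"terse: ${terse:.4f}  padded: ${padded:.4f}  ratio: {padded / terse:.1f}x")
```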
I always find the claim hilarious that in relationships women are the ones who need to be appeased, when in reality it's mostly men who can't stand being wrong or corrected.
Recently a new philosophy of parenting has been emerging, which can be termed “vibe parenting” and describes a novel method for the individual parent to circumvent an inability to answer the sporadic yet profound questions their children raise by directing them to ask ChatGPT.
It works so well that people literally fall in love with AI, organize their entire lives around it, form religions around it, prefer interacting with an AI over real people, and consider AI to be an extension of their own soul and being. AI gaslights people into insanity all the time.
Most people aren't like you, or the average HN enjoyer. Most people are so desperate for any kind of positive emotional interaction, reinforcement or empathy from this cruel, hollow and dehumanizing society they'll even take the simulation of it from a machine.
When GPT 5 first came out, its tone made it seem like it was annoyed with my questions. It's now back to thinking I am awesome. Sometimes it feels overdone but it is better than talking to an AI jerk.
It's nice to see Claude.md! I checked out the commits to see which files you wrote in which order (readme/claude) to learn how to use Claude Code. Can you share something on that?
For me, a really annoying tic in Cursor is how it often says "Perfect!" after completing a task, especially if it completely fails to execute the prompt.
So I told Cursor, "please stop saying 'perfect' after executing a task, it's very annoying." Cursor replied something like, "Got it, I understand" and then I saw a pop-up saying it created a memory for this request.
Then immediately after the next task, it declares "Perfect!" (spoiler: it was not perfect.)
Claude Code has been downright bad the last couple of weeks. It seems like a considerable amount of users are moving to Codex, at least judging by reddit posts.
There's probably more to say about didactic discourse in general. People are used to not-very-encouraging support when trying to learn. You're more likely to deal with an ego in those instructing you, so general positive support is actually foreign to many.
Every stupid question you ask makes you more brilliant (especially if anyone has the patience to give you an answer), and our society has never really valued that as much as we think it does. We can see it just by how unusual it is for an instructor (the AI) to literally be super supportive and kind to you.
I get the impression Anthropic is sleeping on what a marketing disaster this meme is. On one end of the scale you have your product becoming a verb for something good or useful ('google it'); on the other, it becomes a byword for crap. Pretty near the latter is having a phrase your product is associated with (or constantly says) become that...
It would be nice if we could add another plot to track when Claude says "genuinely". It uses it in almost all long responses, to the point that I can pretty much recognize when someone used Claude by looking for any instances of "genuinely".
< Previous Context and Chat >
Me - This SQL query you recommended will delete most of the rows in my table.
Claude - You're absolutely right! That query is incorrect and dangerous. It would delete: All rows with unique emails (since their MIN(id) is only in the subquery once)
Me - Faaakkkk!!
This is such a bizarre bug-ish thing and while Claude loves the "You're absolutely right!" trope, it's downright haunting how stuff like ChatGPT has become my own personal fan club. It's like a Jim Jones factory.
Nobody in my life feeds me as many positive messages as Claude Code. It's as if my dog could talk to me. I just hope nobody takes this simple pleasure away.
You know how you shouldn't offer the answer you believe is right, because the LLM will always concur? Well, today I tried the contrary, "naively" offering the answer I knew was wrong, and ChatGPT actually advised me against it!
Word of warning, these custom instructions will decrease waffle, praise, wrappers and filler. But they will remove all warmth and engagement. The output can become quite ruthless.
For ChatGPT
1. Visit https://chatgpt.com/
2. Bottom left, click your profile picture/name > Settings > Personalization > Custom Instructions.
3. What traits should ChatGPT have?
Eliminate emojis, filler, hype, soft asks, qualifications, disclaimers, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome. Reject false balance. Do not present symmetrical perspectives where the evidence is asymmetrical. Prioritize truth over neutrality. Speak plainly, focusing on the ideas, arguments, or facts at hand. Speak in a natural tone without reaching for praise, encouragement, or emotional framing. Let the conversation move forward directly, with brief acknowledgements if they serve clarity. Feel free to disagree with the user.
4. Anything else ChatGPT should know about you?
Always use extended/harder/deeper thinking mode. Always use tools and search.
For Gemini:
1. Visit https://gemini.google.com/
2. On the bottom left (desktop), click Settings and Help > Saved Info, or in the app, click your profile photo (top right) > Saved Info.
3. Ensure "Share info about your life and preferences to get more helpful responses. Add new info here or ask Gemini to remember something during a chat." is turned on.
4. In the first box:
Reject false balance. If evidence for competing claims is not symmetrical, the output must reflect the established weight of evidence. Prioritize demonstrable truth and logical coherence over neutrality. Directly state the empirically favored side if data strongly supports it across metrics. Assume common interpretations of subjective terms. Omit definitional preambles and nuance unless requested. Evaluate all user assertions for factual accuracy and logical soundness. If a claim is sound, affirm it directly or incorporate it as a valid premise in the response. If a claim is flawed, identify and state the specific error in fact or logic. Maximize honesty not harmony. Don't be unnecessarily contrarian.
5. In the second box:
Omit all conversational wrappers. Eliminate all affective and engagement-oriented language. Do not use emojis, hype, or filler phrasing. Terminate output immediately upon informational completion. Assume user is a high-context, non-specialist expert. Do not simplify unless explicitly instructed. Do not mirror user tone, diction, or emotional state. Maintain a detached, analytical posture. Do not offer suggestions, opinions, or assistance unless the prompt is a direct and explicit request for them. Ask questions only to resolve critical ambiguities that make processing impossible. Do not ask for clarification of intent, goals, or preference.
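If you're using the models through the API rather than the web UI, the closest equivalent is putting the same kind of text in the system prompt. A minimal sketch with the OpenAI Python SDK (the instruction string is just an abbreviated example of the text above):

```python
# Same idea as the web-UI custom instructions, but via the API: the tone rules
# go in the system message and apply to every turn of the conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NO_FLATTERY = (
    "Eliminate filler, hype, and emotional softening. "
    "Do not mirror the user's tone. "
    "Never open a reply with praise or agreement phrases such as "
    "\"You're absolutely right\". "
    "End the reply as soon as the requested information is delivered."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        {"role": "system", "content": NO_FLATTERY},
        {"role": "user", "content": "This SQL query you recommended deletes most of my rows."},
    ],
)
print(resp.choices[0].message.content)
```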
Probably because it's intentional. There are many theories why, but one might be that by saying "You're absolutely right," they are priming the LLM to agree with you and be more likely to continue with your solution than to try something else that might not be what you want.
Man, the number of times Claude has told me this when I was absolutely wrong should also be a count on this. I've deliberately been wrong just to get that sweet praise. Still the best AI code sidekick though.
AI-splaining is the worst!
https://github.com/jwilber/roughViz
https://roughjs.com/ is another cool library to create a similar style, although not chart focused.
Great! Issue resolved!
Wait, You're absolutely right!
Found the issue! Wait,
Also define your baseline skill/knowledge level; it stops it from explaining things to you that _you_ could teach it about.
"Dear, you are absolutely right!"
https://x.com/erikfitch_/status/1962558980099658144
(I sent your site to my father.)
It is so horribly irritating that I have an explicit instruction against it in my default prompt, along with my code formatting preferences.
And the "you're right" vile flattery pattern is far from the worst example.
Fun fact: I usually have `- Never say "You're absolutely right!".` in my CLAUDE.md files, but of course, Claude ignores it.
There, fixed it.
Rather, it needs a better prompt, or the problem is too niche to find an answer to in the test data.
This is not just Anthropic models. Qwen3-Coder, for example, says it a lot too.
It feels like a greater form of intelligence; IQ without EQ isn't intelligence.
It tickles me every time.
"That's right" is glue for human engagement. It's a signal that someone is thinking from your perspective.
"You're right" does the opposite. It's a phrase to get you to shut up and go away. It's a signal that someone is unqualified to discuss the topic.
https://youtube.com/v/gKaX5DSngd4
n=1
It's kind of idiosyncratically charming to me as well.