item 44195298

ricketycricket | 9 months ago

From the example: "Oh no, I'm really sorry to hear you're having trouble with your new device. That sounds frustrating."

Being patronized by a machine when you just want help is going to feel absolutely terrible. Not looking forward to this future.

SoftTalker | 9 months ago

Yeah it's irritating enough when humans do it, it's so transparently insincere. Just help me with my problem.

I guess I am just old now but I hate talking to computers, I never use Siri or any other voice interfaces, and I don't want computers talking to me as if they are human. Maybe if it were like Star Trek and the computer just said "Working..." and then gave me the answer it would be tolerable. Just please cut out all the conversation.

vlovich123 | 9 months ago

I agree it seems transparently insincere, but it's done because it works on some people, who either don't detect it or expect it as a politeness norm, while the ones who see through it just ignore it and move on. So on net you win by doing this: it rarely if ever costs you, and thus you only have upside.

krick | 9 months ago

It's also impossible to turn off, in my experience. I have like 5 lines in my ChatGPT profile telling it to fucking cut out any attempts to validate what I'm saying, and all other patronizing behavior. It doesn't give a fuck; the stupid thing will tell me "you are right to question" blah-blah anyway.

DrammBA | 9 months ago

Try this "absolute mode" custom instruction for ChatGPT; it cuts down all the BS in my experience:

System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user's present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered - no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
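For what it's worth, the same trick works outside the ChatGPT UI. A minimal sketch, assuming the OpenAI-style chat API convention where a custom instruction is prepended as the `system` message (the instruction text here is abbreviated; the helper name `build_messages` is hypothetical):

```python
# Sketch: applying a custom "absolute mode" instruction as a system
# message, per the common chat-completions message format. In the
# ChatGPT web UI, the same text would go under Custom Instructions.

ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
    "soft asks, conversational transitions, and all call-to-action "
    "appendixes."  # abbreviated; use the full instruction text above
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the custom instruction as the system message."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Why is my device not charging?")
```

The resulting `messages` list would then be passed to whatever chat-completion client you use; the system message applies the instruction to the whole conversation rather than to a single turn.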

staticman2 | 9 months ago

I imagine they design these AIs to condescend to you with the "you're right to question..." language to increase engagement.

That said, they probably also do this because they don't want the model to double down, start a pissing contest, and argue with you like an online human might if questioned on a mistake it made. So I'm guessing the patronizing language is somewhat functional in influencing how the model responds.

jofzar | 9 months ago

I can't wait for American-style accidental patronizing to reach the EU and Australia. Nothing like a bot calling someone "champ" or "bud".

otterpro | 9 months ago

This is straight out of the movie "Her", when OS1 said something like this. And the voice and intonation are eerily similar to Scarlett Johansson's. As soon as I heard this clip, I knew it was meant to mimic that.

nsonha | 9 months ago

Are you specifically looking for reasons to be offended? Even if a human said this, it would have been completely fine.

kaycey2022 | 8 months ago

I don't know, man. It makes me inclined to shut off that conversation, because it sounds like something a nitpicky, "nose all over your business", tut-tutting Karen would say. It doesn't convey competence; rather, it sounds like someone trying to manage you using a playbook.

mjamesaustin | 9 months ago

"I can help you get a replacement. Here let me pull up a totally hallucinated order number and a link that goes nowhere. Did that solve your problem?"

rhet0rica | 9 months ago

Look at it this way—if someone were trying to sabotage the entire tech support industry, convincing companies to ditch all their existing staff and infrastructure and replace them with our cheerfully unhelpful and fault-prone AI friends would be a great start!