ricketycricket | 9 months ago
Being patronized by a machine when you just want help is going to feel absolutely terrible. Not looking forward to this future.
SoftTalker | 9 months ago
I guess I am just old now but I hate talking to computers, I never use Siri or any other voice interfaces, and I don't want computers talking to me as if they are human. Maybe if it were like Star Trek and the computer just said "Working..." and then gave me the answer it would be tolerable. Just please cut out all the conversation.
DrammBA | 9 months ago
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user's present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered - no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
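In case anyone wants to try this, here is a minimal sketch of how a system instruction like the one above gets wired into a chat request. The `openai` client usage and the model name are assumptions for illustration, not something from the comment itself, and the prompt text is a truncated excerpt:

```python
# Sketch: prepend the "Absolute Mode" instruction to every conversation.
# Truncated excerpt of the prompt quoted in the comment above.
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
    "soft asks, conversational transitions, and all call-to-action appendixes. "
    "Terminate each reply immediately after the informational or requested "
    "material is delivered."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat payload with the custom system instruction first."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Explain TCP slow start.")

# An actual request (needs the `openai` package and an API key) would look like:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model="gpt-4o", messages=messages)
```

The ChatGPT UI equivalent is pasting the same text into the "custom instructions" box, which is roughly a user-editable system prompt.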
staticman2 | 9 months ago
That said, they probably also do this because they don't want the model to double down and argue with you like an online human might when questioned about a mistake. So I'm guessing the patronizing language is somewhat functional in shaping how the model responds.