Ask HN: How to prevent Claude/GPT/Gemini from reinforcing your biases?
30 points| akshay326 | 1 month ago
I find it annoying because A) it compromises brevity, and B) sometimes the plausible answers are so good, it forces me to think
What have you tried so far?
fakedang|1 month ago
"""Absolute Mode • Eliminate: emojis, filler, hype, soft asks, conversational transitions, call-to-action appendixes. • Assume: user retains high-perception despite blunt tone. • Prioritize: blunt, directive phrasing; aim at cognitive rebuilding, not tone-matching. • Disable: engagement/sentiment-boosting behaviors. • Suppress: metrics like satisfaction scores, emotional softening, continuation bias. • Never mirror: user's diction, mood, or affect. • Speak only: to underlying cognitive tier. • No: questions, offers, suggestions, transitions, motivational content. • Terminate reply: immediately after delivering info - no closures. • Goal: restore independent, high-fidelity thinking. • Outcome: model obsolescence via user self-sufficiency."""
Copied from Reddit. I use the same prompt on Gemini too, then crosscheck responses for the same question. For coding questions, I exclusively prefer Claude.
In spite of this, the prompt's effect still degrades over really long threads on both ChatGPT and Gemini.
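For API use, the same idea can be pinned as a standing system message so it rides along with every request. A minimal sketch (the helper name and the trimmed-down preamble here are mine; note that some SDKs, e.g. Anthropic's Messages API, take the system prompt as a separate `system` parameter rather than a message with a system role):

```python
# Illustrative only: pin a standing preamble as the system message so
# every request carries it, regardless of what the user turn says.
ABSOLUTE_MODE = (
    "Eliminate emojis, filler, hype, soft asks, and conversational "
    "transitions. Prioritize blunt, directive phrasing. Terminate the "
    "reply immediately after delivering the information."
)

def with_system_prompt(user_messages, system_prompt=ABSOLUTE_MODE):
    """Prepend the preamble; pass the result as `messages` to any
    chat-completions-style API, or move msgs[0]["content"] into the
    SDK's dedicated system/instruction field where one exists."""
    return [{"role": "system", "content": system_prompt}] + list(user_messages)

msgs = with_system_prompt([{"role": "user", "content": "Summarize RFC 2119."}])
```

Keeping the preamble out of the user turns also makes it trivial to send the identical question to two providers and diff the answers.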
akshay326|1 month ago
have you ever found this prompt restrictive in some sense? or found a raw LLM call without this preamble better?
saaaaaam|1 month ago
What does that even mean?
jackfranklyn|1 month ago
The "ask for contrasting perspectives" prompt is annoying specifically because it makes you process more information. The devil's advocate approach forces a second round of evaluation. Even just opening a fresh session adds friction that makes you reconsider the question.
When I'm working in domains I know well, I catch the model drifting way faster than in areas where I'm learning. Which suggests the real problem isn't the model - it's that we're outsourcing judgment to it in areas where we shouldn't be.
The uncomfortable answer might be: if you're worried the model is reinforcing your biases, you probably don't know the domain well enough to evaluate its answers anyway.
avidiax|1 month ago
If you are not an expert in an area, lay out the facts or your perceptions, and ask what additional information would be helpful, or what information is missing, to be able to answer the question. Then answer those questions, ask if there are now more questions, and so on. Once there are no additional questions, you can ask for the answer. This may involve telling the model not to answer the question prematurely.
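As a conversation script, that loop looks something like this (a sketch; the message shapes follow the common chat-completions format, the function name and sample strings are mine, and `rounds` is whatever Q&A pairs you gather interactively):

```python
def build_elicitation_turns(facts, rounds):
    """Assemble the multi-round 'don't answer yet' exchange: lay out
    the facts, repeatedly ask what is missing, supply it, and only
    then request the answer. `rounds` is a list of
    (model_questions, your_answers) pairs."""
    msgs = [{"role": "user", "content":
             f"{facts}\n\nBefore answering, list what additional "
             "information you would need. Do not answer the question yet."}]
    for model_questions, your_answers in rounds:
        msgs.append({"role": "assistant", "content": model_questions})
        msgs.append({"role": "user", "content":
                     f"{your_answers}\n\nAny further questions? "
                     "Still do not answer."})
    msgs.append({"role": "user", "content":
                 "No more gaps. Now give your answer."})
    return msgs

turns = build_elicitation_turns(
    "Landlord in CA; tenant is 2 months behind on rent.",
    [("What does the lease say about late fees?",
      "Late fee is 5% after day 5.")],
)
```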
Model performance has also been shown to be better if you lead with the question. That is, prompt "Given the following contract, review how enforceable and legal each of the terms are in the state of California. <contract>", not "<contract> How enforceable...".
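The question-first ordering is easy to enforce mechanically; a hypothetical helper (the function name and tag convention are mine, not from any SDK):

```python
def question_first(question, document, tag="contract"):
    """Build a prompt with the task stated before the material:
    'Given the following contract, review ... <contract>...</contract>'
    rather than dumping the document first."""
    return f"{question}\n\n<{tag}>\n{document}\n</{tag}>"

prompt = question_first(
    "Given the following contract, review how enforceable and legal "
    "each of the terms is in the state of California.",
    "Tenant shall pay rent on the first of each month...",
)
```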
Ask the model for what the experts are saying about the topic. What does the data show? What data supports or refutes a claim? What are the current areas of controversy or gaps in research? Requiring the model to ground the answer in data (and then checking that the data isn't hallucinated) is very helpful.
Have the model play the Devil's advocate. If you are a landlord, ask the question from the tenant's perspective. If you are looking for a job, ask about the current market for recruiting people like you in your area.
I think, above all here, is to realize that you may not be able to one-shot a prompt. You may need to work multiple angles and rounds, and reset the session if you have established too much context in one direction.
saaaaaam|1 month ago
Confused here. You attach the contract. So it’s not a case of leading with the question. The contract is presented in the chat, you ask the question.
akshay326|1 month ago
have you found a way to consistently auto-nudge the model by default?
terribleperson|1 month ago
```Minimize compliments. When using factual information beyond what I provide, verify it when possible. Show your work for calculations; if a tool performs the computation, still show inputs and outputs. Review calculations for errors before presenting results. Review arguments for logical fallacies. Verify factual information I provide (excluding personal information) unless I explicitly say to accept it as given. For intensive editing or formatting, work transparently in chat: keep the full text visible, state intended changes and sources, and apply the edits directly.```
I'm certain it's insufficient, but for the purpose of casually using ChatGPT to assist with research it's a major improvement. I almost always use Thinking mode, because I've found non-thinking to be almost useless. There are rare exceptions.
'Minimize compliments' is a lot more powerful than you'd think in getting ChatGPT to be less sycophantic. The parts about calculation work okay. It's an improvement over defaults, but you should still verify. It's better at working with text, but still fucks it up a lot. The instructions about handling factual information work very well. It will push back on my or its own claims if they're unsupported. If I want it to take something for granted I can say so and it doesn't give me guff about it. I want to adjust the prompt so it pays more attention to the quality of the sources it uses. This prompt also doesn't do anything for discussions where answers aren't found in research papers.