wisty | 6 days ago
I think it's related to sycophancy. LLMs are trained not to question the basic assumptions being made. They are horrible at telling you that you are solving the wrong problem, and I think this is a consequence of their design.
They are meant to get "upvotes" from the person asking the question, so they don't want to imply you are making a fundamental mistake, even if that leads you into AI-induced psychosis.
Or maybe they are just that dumb - fuzzy recall and the ELIZA effect making them seem smart?
tsimionescu | 6 days ago
wisty | 6 days ago
Do you want me to track down some research that shows people think information is more likely to be correct if they agree with it?
nomel | 6 days ago
HPsquared | 6 days ago
EDIT: Though it could simply reflect training data. Maybe Redditors don't drive.