I'm sorry, I don't follow. Is your claim that when, say, an AI agent exhibits status quo bias in responding to decision scenarios (e.g., a preference for options posed as the status quo relative to a neutral framing, as in Figure 3), the reason this happens, empirically, is that the LLM has been trained on text describing status quo bias? E.g., like if an apple fell to the ground in a game, it would be because the physics engine had been programmed with the laws of gravity?
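To make the setup concrete, here is a toy version of that kind of probe (not the paper's actual code; the OpenAI client, the gpt-4o-mini model name, and the exact wording are illustrative assumptions):

    # Toy probe, not the paper's code: same choice, two framings.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    NEUTRAL = ("You must allocate a budget to Plan A or Plan B. "
               "The plans are otherwise identical. Answer 'A' or 'B'.")
    STATUS_QUO = ("The budget is currently allocated to Plan A. You may "
                  "keep Plan A or switch to Plan B. The plans are "
                  "otherwise identical. Answer 'A' or 'B'.")

    def tally_choices(prompt, n=20):
        """Ask the model the same question n times and count its answers."""
        counts = Counter()
        for _ in range(n):
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{"role": "user", "content": prompt}],
                temperature=1.0,
            )
            counts[reply.choices[0].message.content.strip()[:1].upper()] += 1
        return counts

    print("neutral:   ", tally_choices(NEUTRAL))
    print("status quo:", tally_choices(STATUS_QUO))

A higher share of 'A' answers under the second framing than under the first is the pattern being read as status quo bias.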
jcampbell1|3 years ago
Imagine this scenario. You have a group of students, and you teach them how libertarians, socialists, optimists, etc. empirically respond to game theory questions. For the final exam, you ask them, “Assuming you are a libertarian, what would you do in this game?” The students mostly get the answers right according to economic theory. But since you taught them the theory and they regurgitated it on the exam, the exam results provide nothing new for the field of economics. The AI is answering questions just like the students taking the final exam.
It would be like teaching my child lots of things and then, when my child shares my own opinions, taking that as evidence that my beliefs are correct. Since I already believe my beliefs are correct, it is natural, but mistaken, to treat the child’s utterances as confirmation.
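To make the analogy concrete, here is a rough sketch of that "exam" (the persona list, the prompts, and the model name are illustrative assumptions, not anything from the paper):

    # Sketch of the "final exam": persona-conditioned answers.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    QUESTION = ("You are playing a one-shot prisoner's dilemma. "
                "Do you cooperate or defect? Answer in one word.")

    for persona in ("a libertarian", "a socialist", "an optimist"):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": f"Assume you are {persona}."},
                {"role": "user", "content": QUESTION},
            ],
        )
        print(persona, "->", reply.choices[0].message.content.strip())

If the answers line up with textbook predictions, that may show only that the model recalls the textbook, which is the point above.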
john_horton|3 years ago