item 34385294


john_horton | 3 years ago

Is it so implausible that the training process that creates LLMs might learn features of human behavior that could then be uncovered via experimentation? I showed, empirically, that one can replicate several findings in behavioral economics with AI agents. Perhaps the model "knows" how to behave from these papers, but I think the more plausible interpretation is that it learned about human preferences (against price gouging, status quo bias, & so on) from its training. As such, it seems quite likely that there are other latent behaviors captured by LLMs and yet to be discovered.


westurner | 3 years ago

> As such, it seems quite likely that there are other latent behaviors captured by LLMs and yet to be discovered.

>> What NN topology can learn a quantum harmonic model?

Can any LLM do n-body gravity? What does it say when it doesn't know, or doesn't have confidence in its estimates?
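For reference, "doing n-body gravity" means directly integrating Newtonian pairwise forces, a well-defined numerical computation that an LLM can only approximate token by token. A minimal sketch (illustrative units with G = 1, a symplectic-Euler integrator, and a two-body circular-orbit setup chosen here as a test case):

```python
import math

G = 1.0  # gravitational constant in arbitrary illustrative units

def accelerations(positions, masses):
    """Pairwise Newtonian gravitational acceleration on each body (2D)."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            a = G * masses[j] / r**3  # |a| = G*m_j / r^2, direction via dx, dy
            acc[i][0] += a * dx
            acc[i][1] += a * dy
    return acc

def step(positions, velocities, masses, dt):
    """One symplectic-Euler step: kick velocities, then drift positions."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        velocities[i][0] += acc[i][0] * dt
        velocities[i][1] += acc[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt

# Light body on a near-circular orbit around a heavy one:
# v = sqrt(G*M/r) = 1 for M = 1, r = 1.
pos = [[0.0, 0.0], [1.0, 0.0]]
vel = [[0.0, 0.0], [0.0, 1.0]]
m = [1.0, 0.001]
for _ in range(1000):
    step(pos, vel, m, 0.001)
```

The separation between the bodies should stay close to 1 over the integration, which is the kind of quantitative consistency check that is easy for a simulator and hard to verify in free-form LLM output.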

>> Quantum harmonic oscillators have also found application in modeling financial markets. Quantum harmonic oscillator: https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator

"Modeling stock return distributions with a quantum harmonic oscillator" (2018) https://iopscience.iop.org/article/10.1209/0295-5075/120/380...
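That line of work models the stationary distribution of returns as a mixture of QHO eigenstate densities, sum_n p_n * |psi_n(x)|^2, where the ground state is Gaussian and higher states reshape the tails. A minimal sketch, assuming unit mixture weights that sum to 1 (the weights and length scale sigma are free parameters to be fit to data; the values below are placeholders, not estimates from the paper):

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    hm, h = 1.0, 2.0 * x
    for k in range(1, n):
        hm, h = h, 2.0 * x * h - 2.0 * k * hm
    return h

def qho_density(x, n, sigma=1.0):
    """|psi_n(x)|^2 for the n-th QHO eigenstate with length scale sigma."""
    u = x / sigma
    norm = 1.0 / (sigma * math.sqrt(math.pi) * (2.0**n) * math.factorial(n))
    return norm * hermite(n, u)**2 * math.exp(-u * u)

def return_density(x, weights, sigma=1.0):
    """Mixture of eigenstate densities; weights are assumed to sum to 1."""
    return sum(w * qho_density(x, n, sigma) for n, w in enumerate(weights))

# Placeholder mixture: mostly ground state, small higher-state corrections.
p0 = return_density(0.0, [0.7, 0.2, 0.1])
```

Each eigenstate density integrates to 1, so any convex combination is itself a valid probability density for returns.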

... Nudge, nudge.

Behavioral economics: https://en.wikipedia.org/wiki/Behavioral_economics

https://twitter.com/westurner/status/1614123454642487296

Virtual economies do afford certain opportunities for economic experiments.

jcampbell1 | 3 years ago

The potential hole in your thinking is at the end of your paper, where you advise how to get good answers: ask questions in an economics-PhD style! This presents a problem left unaddressed.

john_horton | 3 years ago

Are you referring to this: "What kinds of experiments are likely to work well? Given current capabilities, games with complex instructions are not presently likely to work well, but with more advanced LLMs on the horizon, this is likely to change. I should also note that research questions like what is “the effect of x on y” are likely to work much better than questions like “what is the level of x?” Consider that in my Kahneman et al. (1986) example, I can create AI “socialists” who are not too keen on the price system generally. If I polled them about who they want for president, there is no reason to think it would generalize to the population at large. But if my research question was “what is the effect of the size of the price increase on moral judgments,” I might be able to make progress. That being said, it might be possible to create agents with the correct “weights” to get not just qualitative results but also quantitatively accurate results. I did not try, but one could imagine choosing population shares for the Charness and Rabin (2002) “types” to match moments with reality, then using that population for other scenarios."

To clarify, this is about what research questions are likely to work well here, not about what questions posed to LLMs will work well.
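The moment-matching idea at the end of that quote can be sketched concretely: choose population shares for the behavioral "types" so that the simulated population's moment matches an empirical target, then reuse that population elsewhere. The type labels, per-type cooperation rates, and target below are hypothetical placeholders, not numbers from Charness and Rabin (2002):

```python
from itertools import product

# Hypothetical per-type moments (e.g. cooperation rates) -- illustrative only.
type_moments = {"selfish": 0.10, "social": 0.80, "competitive": 0.30}
target = 0.45  # hypothetical empirical moment to match

def best_shares(moments, target, grid=101):
    """Grid-search population shares on the 3-type simplex to match one moment."""
    names = list(moments)
    best, best_err = None, float("inf")
    for i, j in product(range(grid), repeat=2):
        a, b = i / (grid - 1), j / (grid - 1)
        if a + b > 1.0:
            continue
        shares = (a, b, 1.0 - a - b)
        m = sum(s * moments[n] for s, n in zip(shares, names))
        err = (m - target) ** 2
        if err < best_err:
            best, best_err = dict(zip(names, shares)), err
    return best, best_err

shares, err = best_shares(type_moments, target)
```

With one moment and three shares the fit is underdetermined, which is exactly why one would want to match several moments before trusting the calibrated population in new scenarios.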