(no title)
timshell | 6 months ago
The TLDR of this paper:
You can generalize theories of decision-making into broad functional forms and then apply gradient descent to find the best parameters for that functional form. For example, prospect theory values a gamble by multiplying a utility function U(x) with a probability weighting function p(x). Kahneman and Tversky proposed one specific pair of U(x) and p(x), but we can use autodiff to search over the whole family and fit these functions to data instead of committing to one hand-picked form.
We can apply this method to any functional form.
Happy to answer any questions!
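A minimal sketch of the idea, for the curious. The specific functional forms below (power utility, inverse-S probability weighting) and the finite-difference gradients are my own illustrative stand-ins, not the paper's actual setup, which uses autodiff over more general families:

```python
import math

# Illustrative prospect-theory forms (one common parameterization;
# these are assumptions for the sketch, not taken from the paper).
def utility(x, alpha):
    return x ** alpha  # power utility for gains

def weight(p, gamma):
    # inverse-S probability weighting
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

def value(x, p, alpha, gamma):
    # value of a gamble paying x with probability p, else 0
    return weight(p, gamma) * utility(x, alpha)

# Synthetic "observed" valuations generated from known parameters,
# so we can check that gradient descent recovers them.
true_alpha, true_gamma = 0.88, 0.61
gambles = [(x, p) for x in (1.0, 2.0, 5.0, 10.0)
                  for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
observed = [value(x, p, true_alpha, true_gamma) for x, p in gambles]

def loss(alpha, gamma):
    return sum((value(x, p, alpha, gamma) - v) ** 2
               for (x, p), v in zip(gambles, observed)) / len(gambles)

def grad(alpha, gamma, h=1e-5):
    # central finite differences stand in for autodiff in this sketch
    ga = (loss(alpha + h, gamma) - loss(alpha - h, gamma)) / (2 * h)
    gg = (loss(alpha, gamma + h) - loss(alpha, gamma - h)) / (2 * h)
    return ga, gg

alpha, gamma = 1.0, 1.0  # start from expected-value maximization
lr = 0.001
start_loss = loss(alpha, gamma)
for _ in range(10000):
    ga, gg = grad(alpha, gamma)
    alpha -= lr * ga
    gamma -= lr * gg

print(alpha, gamma, loss(alpha, gamma))  # fitted params, near the true ones
```

The same loop works for any differentiable functional form: swap in a different `value` function and the fitting machinery is unchanged, which is the generalization the paper is after.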
slinkypinky | 6 months ago
Edit: Seems like a “differentiable theory” is just one that can be framed as an optimization problem solvable by gradient descent. Is that right?