Totally bonkers: economists using science to approach "social science" issues. Social sciences should be ashamed that this is not the norm, we should all be startled and surprised that this is a new thing and evidently people have just been using a system of high-fives and good wishes to solve the world's social problems.
QuesnayJr|6 years ago
RCTs didn't start with Duflo. (Duflo isn't even the first to win for RCTs -- Kahneman and Smith won in 2002 for experiments.) Experimental economics dates back to the 70s, but it always suffered from the same problem as psychology -- most experiments were conducted on students, and the interventions were always small-scale.
RCTs in development economics are much bigger scale because there are rich NGOs willing to spend big money on measuring the efficacy of interventions, and willing to work with economists to do it. This is not without controversy. A development RCT involves an economist from a rich country flying to a poor country, and then running an experiment on the inhabitants of that country. Not everyone thinks that's okay.
The RCTs also rely on the fact that economists come from coun
xyzzy2020|6 years ago
hangonhn|6 years ago
jfengel|6 years ago
"Hard" scientists like to pat themselves on the back for rigor, but they get that because they're studying comparatively simple things. Studying the lives of people is hard, but it's also important. It affects public policy, which in turn affects people's actual lives. That public policy gets created whether it's being studied or not -- the studies are hard, but they're better than guessing, and slowly they can build up a picture that makes them better. It's a bit like medicine: we're not going to stop treating people just because we don't understand the mechanism of action and can't guarantee that it will work.
This breakthrough is about finding ways to use the many villages found in poor countries to even attempt an RCT, and to come up with mathematical ways to account for the fact that the trials aren't fully randomized. Aid had previously been given based on people's best guesses about what would work, which maximizes the value of the aid if the guesses are correct -- but it's hard to measure whether they were. Aid has been beset by misguided theories and a lack of measurement: good intentions, but often ineffective.
appleiigs|6 years ago
duxut_staglatz|6 years ago
- ethics. Example: is democracy good for economic growth? Of course one could randomly engineer coups in some countries but that's probably not appropriate.
- cost. Example: how much do people change their labor force participation when taxes change by 1%? Here an RCT would amount to "let's give a _lot_ of money to people and see what happens."
- situations where it is not appropriate. Example: why did Europe rise to prominence (aka the great divergence)? There is not much to randomize here.
Note that RCTs have shortcomings anyway (see for instance [0]).
[0] https://www.nber.org/papers/w22595
IAmEveryone|6 years ago
In economics, you are studying vast systems. For the majority of questions, it is impossible to isolate some part of the system and control and measure all the inputs and outcomes. That's probably obvious for macroeconomics: you can't have the Fed raise or lower interest rates based on a random number generator. And even if you could, you would still need a second United States to act as the control group.
It's mostly true for microeconomics as well. Consider the difficulty of studying UBI. The largest such studies gave a basic income to a small African village, for a limited time of maybe two years. But the idea, and its opponents, mostly deal with the life choices people make, which would require essentially life-long guarantees. And even just knowing you're part of such a study, or continuing to live in a society that hasn't otherwise changed, is plausibly enough to alter the outcomes and render the study meaningless.
mrandish|6 years ago
The vast majority are certainly not. However, idiocy and ill-intent are not required to fall prey to the many common causes of inaccurate results. Smart people trying their best to do good work still frequently succumb to errors, and this is especially true in the less 'hard' sciences.
That's why the push for increasing rigor with RCTs and other methods is important and necessary.
SiempreViernes|6 years ago
The thing is, these complications don't explain why nobody overcame them until Kremer, Duflo et al. started their experiments in the 1990s. Their work appears to be a simple adaptation of methods from other fields to studies in development economics, not any sort of technological development. (This is one of the earliest papers cited in the motivation provided by the Nobel foundation: https://pubs.aeaweb.org/doi/pdfplus/10.1257/app.1.1.112 -- it does some linear regression at most.)
With the creation of new technology ruled out as the blocker for performing the experiments, you are basically left with internal and external sociological explanations.
drak0n1c|6 years ago
duxut_staglatz|6 years ago
bjourne|6 years ago
benibela|6 years ago
And textbooks. "Causality" by Pearl, about causal models in general. "Causation, Prediction, and Search" by Spirtes et al., about how to learn the models from data.
For example, assume the world consists of three random variables A, B, and C. If A causes B and B causes C (the DAG A -> B -> C), then A and C are correlated. But if the model is A -> B <- C, then A and C are not correlated. Conditioned on B, the situation flips: A and C are correlated in A -> B <- C and uncorrelated in A -> B -> C. So you can falsify such causal models without an RCT.
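A minimal simulation sketch of that argument (not from the thread; linear-Gaussian models and the helper `partial_corr` are my own illustrative choices): it generates data from both DAGs and checks which correlations appear with and without conditioning on B.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def partial_corr(x, y, z):
    # Correlation of x and y after linearly regressing each on z
    # (a valid conditional-independence test in the linear-Gaussian case).
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

# Chain: A -> B -> C
A = rng.normal(size=n)
B = A + rng.normal(size=n)
C = B + rng.normal(size=n)
print(np.corrcoef(A, C)[0, 1])  # clearly nonzero: A and C correlated
print(partial_corr(A, C, B))    # near zero: A independent of C given B

# Collider: A -> B <- C
A = rng.normal(size=n)
C = rng.normal(size=n)
B = A + C + rng.normal(size=n)
print(np.corrcoef(A, C)[0, 1])  # near zero: A and C independent
print(partial_corr(A, C, B))    # clearly nonzero: conditioning on B induces correlation
```

Both DAGs produce the same pairwise dependence between B and its neighbors, but the two conditional-independence patterns differ, which is exactly what lets observational data falsify one model or the other.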
sbierwagen|6 years ago
amznthrowaway4|6 years ago
[deleted]
squaresmile|6 years ago
Anyway, I think it's more correct to say "It speaks horribly of Hacker News readers' _economics knowledge_." There are some topics here on which the quality of comments is quite poor, but it's probably unreasonable to expect a group of people (specifically "good hackers") to be knowledgeable about _everything_. It is what it is, and you just have to figure out which topics to avoid here.
godelzilla|6 years ago