akyu|3 years ago
I'm fairly sure anyone who has done A/B testing at scale has plenty of evidence that nudging works. Perhaps not up to the standard of science, but there are literally people who manipulate choice architecture for a living, and I'm fairly convinced a lot of that stuff actually works.
mcswell|3 years ago
There are lots of people who do X for a living where X doesn't work: palm readers, fortune tellers, horoscope writers, and so on. I'm not even sure that fund managers reliably obtain results much above random.
mikkergp|3 years ago
On the other hand, the dream of nudge theory is something like a study done in the UK suggesting that adding the line “most of your fellow citizens pay their taxes” will increase the likelihood that people pay their taxes. Here I'd be more inclined to believe that the benefits are unclear and, more importantly, difficult to replicate across time and culture.
It seems that trying to do a meta-analysis on all of nudge theory (or large categories of it) would indeed show no impact. It's not like you're testing one thing; you're averaging well designed programs together with ones that aren't.
akyu|3 years ago
No, it's really not.
To put it another way, I don't think this study will change anything for the people actually doing choice architecture in applied settings. They have results that speak for themselves.
dr_dshiv|3 years ago
If you run a useful system where it would be meaningful and interesting to know whether a social science theory actually applies, you might run an A/B test to see if it works. If it works, it is adopted, but it is almost never published. And that is for two reasons: 1. there is no incentive to publish, and 2. there is a major incentive not to publish. #2 is recent (post the Facebook experiment), and it is specifically because a large portion of the educated public accepts invisible A/B testing but recoils with moral indignation when A/B test results appear in published science. Too bad: Facebook keeps testing social science theories, but no longer publishes the results.
MereInterest|3 years ago
As an example, suppose I flip a coin 1000 times and get heads 525 times. The 95% confidence interval for the probability of heads is [0.494, 0.556], so from a scientific standpoint I cannot conclude that the coin is biased. If, however, I am running an A/B test, I would simply bet on heads, because doing so is at worst equivalent to betting on tails.
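A quick check of those numbers, as a minimal sketch using the normal approximation (an exact binomial interval would differ slightly):

    import math

    n, heads = 1000, 525
    p_hat = heads / n  # observed proportion of heads: 0.525

    # 95% confidence interval via the normal approximation (z = 1.96)
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"[{p_hat - 1.96 * se:.3f}, {p_hat + 1.96 * se:.3f}]")  # [0.494, 0.556]

Since the interval straddles 0.5, a fair coin can't be ruled out at the 5% level; but for the bettor, heads weakly dominates tails regardless.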
marcosdumay|3 years ago
Except the article is more specific and has way more details than that.
aaaaaaaaaaab|3 years ago
Lol! A/B testing in practice is rife with p-hacking and various other statistical fallacies.
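To make one of those fallacies concrete, here is a minimal simulation (my own sketch; the parameters are illustrative) of "peeking": checking an A/A test repeatedly and stopping at the first p < 0.05. Even though the two arms are identical, the false positive rate lands far above the nominal 5%:

    import random
    from statistics import NormalDist

    def peeked_aa_test(n_max=5000, peek_every=250, alpha=0.05):
        """A/A test (no real difference between arms): return True if
        any interim look declares significance at p < alpha."""
        a = b = n = 0
        while n < n_max:
            for _ in range(peek_every):
                a += random.random() < 0.5  # both arms convert at 50%
                b += random.random() < 0.5
            n += peek_every
            # two-proportion z-test at this interim look
            pool = (a + b) / (2 * n)
            se = (2 * pool * (1 - pool) / n) ** 0.5
            z = abs(a - b) / (n * se)
            p = 2 * (1 - NormalDist().cdf(z))
            if p < alpha:
                return True  # a "winner" is declared early
        return False

    trials = 1000
    rate = sum(peeked_aa_test() for _ in range(trials)) / trials
    print(f"false positive rate with peeking: {rate:.1%}")  # well above 5%

The standard remedy is to fix the sample size in advance, or to use a sequential testing procedure that explicitly budgets for the repeated looks.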
omginternets|3 years ago
There are literally people who give astrological analyses for a living.
zeroonetwothree|3 years ago
https://biggestfish.substack.com/p/data-as-placebo
lIl-IIIl|3 years ago
I think this doesn't really apply to A/B testing, because people are incentivized to pay as much attention to negative results as to positive ones.
akyu|3 years ago
I'm sure many people here are in similar situations.