top | item 32214376

akyu|3 years ago

No evidence for nudging =/= nudging doesn't exist.

I'm fairly sure anyone who has done A/B testing at scale has plenty of evidence that nudging works. Perhaps not up to the standard of science, but there are literally people who manipulate choice architecture for a living and I'm fairly convinced a lot of that stuff actually works.

mcswell|3 years ago

"... evidence that nudging works. Perhaps not up to the standard of science..." That's pretty close to saying it doesn't work. The point of this meta-study was precisely to show that the evidence claimed to support nudging was probably attributable to random variation + unnatural selection, where the unnatural selection was publication choice: either the researchers who got negative (null) results chose not to bother writing them up and submitting them, or papers that reported negative results were rejected by publishers.

There are lots of people who do X for a living, but where X doesn't work: palm readers, fortune tellers, horoscope writers, and so on. I'm not even sure that fund managers reliably obtain results much above random.

mikkergp|3 years ago

I think what’s not clear is what’s in those papers and what exactly they have to say about nudging and what definition they’re using. It defies credulity to think that changing defaults in software doesn’t change behavior if only because most users aren’t technically savvy enough to change their settings.

On the other hand, the dream of nudge theory is something like a study done in the UK suggesting that adding the line "most of your fellow citizens pay their taxes" will increase the likelihood that people pay taxes. In this case I'd be more inclined to believe the benefits are unclear and, more importantly, difficult to replicate across time and culture.

It seems that trying to do a meta-analysis on all of nudge theory (or large categories of it) would indeed show no impact. It's not like you're testing one thing; you're comparing well-designed programs with ones that aren't.

akyu|3 years ago

>That's pretty close to saying it doesn't work.

No it's really not.

To put it another way, I don't think this study will change anything for people actually doing choice architecture in applied settings. They have results that speak for themselves.

dr_dshiv|3 years ago

Seriously, what about that kind of publication bias: A/B tests don’t get published.

If you run a useful system where it would be meaningful and interesting to know whether a social science theory actually applied, you might run an A/B test to see if it works. If it works, it is adopted, but it is almost never published. And that is for two reasons: 1. no incentive to publish and 2. a major incentive not to publish. #2 is recent (post Facebook experiment), and it is specifically because a large portion of the educated public accepts invisible A/B testing but recoils with moral indignation at the use of A/B testing results in published science. Too bad: Facebook keeps testing social science theories, but no longer publishes the results.

MereInterest|3 years ago

The standards for selecting a result in an A/B test are less stringent than those for publication aimed at the advancement of knowledge. For publication, the goal is to determine whether a model is accurate. For A/B testing, the goal is to select the best design/intervention. The difference is that in scientific testing "inconclusive" means there isn't enough evidence to consider the problem solved and it should get more research, while in A/B testing "inconclusive" means that any effect is small, so you should pick an option and move on.

As an example, suppose I flip a coin 1000 times and get heads 525 times. The 95% confidence interval for the probability of heads is [0.494, 0.556], so from a scientific standpoint I cannot conclude that the coin is biased. If, however, I am performing an A/B test, I would conclude that I'll bet on heads, because it is at worst equivalent to tails.
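The coin arithmetic above checks out; a minimal sketch using the standard normal-approximation (Wald) interval, reproducing the [0.494, 0.556] bounds:

```python
import math

n, heads = 1000, 525
p_hat = heads / n  # observed proportion of heads: 0.525

# Normal-approximation (Wald) 95% confidence interval for the true probability
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% CI: [{lo:.3f}, {hi:.3f}]")  # prints "95% CI: [0.494, 0.556]"

# Scientific reading: 0.5 lies inside the interval, so "fair coin" is not ruled out.
# A/B reading: heads is at worst a tie, so just bet on heads and move on.
```

Since 0.5 falls inside the interval, the scientific conclusion is "inconclusive", while the A/B conclusion is simply "pick heads", exactly the distinction described above.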

themitigating|3 years ago

You don't have to prove something doesn't exist; you have to prove it exists.

akyu|3 years ago

Absolutely.

zeroonetwothree|3 years ago

They note that there is no evidence for nudging being generally effective. So any individual nudge could still be effective (except in finance, where they found that none are).

marcosdumay|3 years ago

"We studied X extensively and there is no evidence that it works" is a textbook example of how scientists say "X doesn't work".

Except the article is more specific and has way more details than that.

aaaaaaaaaaab|3 years ago

>I'm fairly sure anyone who has done A/B testing at scale has plenty of evidence that nudging works

Lol! A/B testing in practice is rife with p-hacking and various other statistical fallacies.
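One of the most common such fallacies is optional stopping: peeking at the test repeatedly and declaring a winner the moment significance appears. A hedged sketch (illustrative simulation only, not any particular platform's methodology) of an A/A test, where both arms are identical so every "winner" is a false positive:

```python
import random

random.seed(0)

def ab_test_with_peeking(n_batches=20, batch_size=100, z_crit=1.96):
    """Run an A/A test (both arms have the same 0.5 conversion rate),
    peeking after every batch and stopping as soon as |z| > 1.96."""
    a_succ = b_succ = n = 0
    for _ in range(n_batches):
        a_succ += sum(random.random() < 0.5 for _ in range(batch_size))
        b_succ += sum(random.random() < 0.5 for _ in range(batch_size))
        n += batch_size
        pa, pb = a_succ / n, b_succ / n
        pooled = (a_succ + b_succ) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if se > 0 and abs(pa - pb) / se > z_crit:
            return True  # declared a "winner" between identical arms
    return False

trials = 500
fp = sum(ab_test_with_peeking() for _ in range(trials))
print(f"false positive rate with peeking: {fp / trials:.2f}")
```

With 20 peeks the false-positive rate comes out well above the nominal 5% that a single fixed-horizon test at z = 1.96 would give, which is exactly why naive A/B results overstate how well interventions work.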

omginternets|3 years ago

What exactly makes you convinced that it works? To be specific: why wouldn’t there be bias in the A/B testing results, too?

There are literally people who give astrological analyses for a living.

lIl-IIIl|3 years ago

We are talking about publication bias, where the decision whether to publish something is biased by the outcome of the experiment.

I think this doesn't really apply to A/B testing, because people are incentivized to pay as much attention to negative results as to positive ones.

akyu|3 years ago

I cannot share the reason I am convinced it works. But I can tell you I am convinced it works.

I'm sure many people here are in similar situations.

mcswell|3 years ago

Great minds! I was writing more or less the same thing, you beat me to publication by three minutes.