cljs-js-eval | 6 years ago
There is a big body of literature on priming. Each study is generally run until it yields a p-value < 0.05, so in a sense the literature contains many replications of the effect itself. That points to priming as an effect large enough to matter.
There is another viewpoint, where priming is not an effect large enough to matter. (This is the viewpoint I hold.) The first argument for this viewpoint is that the original study does not replicate: the 2018 replication attempt I linked used a sample roughly three times as large (1014 vs. 343), yet found a p-value of 0.366 and an effect size 80% smaller than the original's. A second argument is that priming is not used in industry, even though the effect would be valuable in fields like advertising or military psyops. A third argument is the widespread suspicion in the field that psychology researchers are p-hacking their way to spurious results.
A whole subfield exists on an effect that, on direct replication with roughly three times the sample, showed an 80% smaller effect size and a 4000% increase in p-value. And my focus on this one study ignores the fact that the broader replication effort turned up 9 failures among 21 replications of studies published in Nature and Science.
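To see why a p-value of 0.366 at three times the sample size is so damning, consider a rough power calculation. The sketch below is not from the thread: it assumes a two-group design with an equal split, a z-approximation to the t-test, and (conservatively) that the original effect was the smallest one detectable at n = 343 with α = 0.05. Even under those assumptions, the replication had roughly 90% power to detect the effect, so a clear null is strong evidence the true effect is much smaller than originally reported.

```python
# Hedged sketch (stdlib only): power of the n = 1014 replication to
# detect the smallest effect the original n = 343 study could have found.
# All design details (equal groups, z-approximation) are assumptions.
import math

Z_CRIT = 1.959963985  # two-sided critical z at alpha = 0.05


def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def power(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sample z-test for Cohen's d."""
    se = math.sqrt(2.0 / n_per_group)  # SE of the mean difference, in d units
    return 1.0 - normal_cdf(Z_CRIT - d / se)


# Smallest effect detectable by the original study (~171 per group):
d_min = Z_CRIT * math.sqrt(2.0 / 171)  # roughly d = 0.21

# Power of the replication (~507 per group) to detect that same effect:
print(f"d_min = {d_min:.2f}, replication power = {power(d_min, 507):.2f}")
```

Under these assumptions the replication's power comes out above 0.9, which is why "p = 0.366 with 3x the sample" reads as evidence of absence rather than mere absence of evidence.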
If psychology can botch the literature on priming this badly, what else have they botched?
dragonwriter | 6 years ago