owenshen24 | 5 years ago
See also the "file-drawer problem" (https://en.wikipedia.org/wiki/Publication_bias). Also, with regards to the incentives in the field and the lack of null results, there's always Ioannidis's classic work (https://journals.plos.org/plosmedicine/article?id=10.1371/jo...).
mikk14|5 years ago
A null result simply means you tried something and it didn't work. But you don't know why. You haven't proven it didn't work. There are literally millions of reasons why something might not work. For instance, you could try to use compound X to cure disease Y, observe no effect, and conclude that X doesn't cure Y. But what if somewhere in the process of making X you made an uncaught mistake and you instead used X'?
A negative result means that you tried something and you came to the proven conclusion it doesn't work. This is, crucially, as hard to obtain as a positive result. In my example, it would imply a much longer process than simply "apply X, see no effect in Y, make a few robustness checks, done".
You could say, "Well, publish the null anyway; somebody will catch the mistake." Unlikely. There are already so many papers out there that keeping up is impossible. If we also published null results, that number would grow at least tenfold. Nobody could possibly check everything. People would see a paper titled "X doesn't cure Y", call it knowledge, and stifle a possible cure virtually forever.
Am I splitting hairs? Perhaps. But I think HN prides itself on being a scientifically minded community, and thus it has a mandate to use terms correctly. Confusing "null" with "negative" is a sin.
I hope one day I'll find a way to strongly and passionately argue against the "null results are as important as positive results" position. It is a bad meme. Charitably, I consider it most of the time an honest mistake. But sometimes it gives me the impression of being a cheap trick used to erode the reputation of academia.
learnstats2|5 years ago
Null results are also important.
Suppression of null results allows for p-hacking and confirmation biases to creep into research, and greatly reduces the power of literature reviews.
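The mechanism described here can be simulated directly. Below is a minimal sketch (my own illustration, not from the thread, assuming a normal model with known variance): every study has a true effect of exactly zero, yet filtering on p < 0.05 before "publication" leaves a literature full of apparently real, inflated effects:

```python
import math
import random

random.seed(0)

def run_experiment(n=30, true_effect=0.0):
    """One study: n observations from N(true_effect, 1); return (mean, two-sided p)."""
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    z = mean * math.sqrt(n)               # z-test, since sigma = 1 is known
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value under the null
    return mean, p

studies = [run_experiment() for _ in range(2000)]
# Only "significant" results survive the file drawer:
published = [m for m, p in studies if p < 0.05]

all_mean_abs = sum(abs(m) for m, _ in studies) / len(studies)
pub_mean_abs = sum(abs(m) for m in published) / len(published)
print(f"fraction published: {len(published) / len(studies):.2%}")  # roughly alpha = 5%
print(f"mean |effect|, all studies:    {all_mean_abs:.3f}")
print(f"mean |effect|, published only: {pub_mean_abs:.3f}")        # inflated
```

Every published effect here is a false positive, and a meta-analysis of the published subset alone would badly overestimate the effect size, which is exactly why literature reviews lose power when nulls are suppressed.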
tralarpa|5 years ago
Correct. There is a highly cited paper in CS where the author showed that a mathematical model that was widely used in research didn't actually work (anymore) in reality. That paper was the starting point of a lot of new research in that field.
ianhorn|5 years ago
I agree they're different, but disagree that they're worlds apart. There's a spectrum between them, caused by uncertainty and statistics. If I say the average treatment effect of my new drug is probably somewhere between -x and +y, it could be a negative result or a null result. It's the fuzzy line between statistically insignificant and materially insignificant.
Maybe I only had two patients per experimental cell, so I barely learned anything. The drug's treatment effect on lifespan is between -30 years and +10 years. It's "null" in that we didn't learn much of anything.
Maybe I had a billion patients per cell and I learned that the average treatment effect on lifespan is between -0.001 days and +0.1 days. It's "negative" in that we learned the drug doesn't materially affect lifespan.
The position we seem to be in is that most conventional experiments are powered at 80% for a moderate effect size, meaning that many of our null-or-negative (-x, +y) results will land right in the region where it's unclear whether they are null or negative.
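The spectrum sketched in this comment comes down to confidence-interval width. A minimal sketch (my own illustration, assuming a normal model with known variance; the sample sizes are arbitrary): the same zero true effect yields an uninformative "null" interval at tiny n and a tight "negative" interval at huge n:

```python
import math
import random

random.seed(1)

def ci95(n, true_effect=0.0, sigma=1.0):
    """95% confidence interval for the mean effect from n observations (z-interval)."""
    xs = [random.gauss(true_effect, sigma) for _ in range(n)]
    mean = sum(xs) / n
    half = 1.96 * sigma / math.sqrt(n)
    return mean - half, mean + half

lo, hi = ci95(4)                  # tiny study: wide interval -> "null", we learned little
print(f"n=4:   ({lo:+.3f}, {hi:+.3f})")

lo_big, hi_big = ci95(1_000_000)  # huge study: tight interval near 0 -> "negative"
print(f"n=1e6: ({lo_big:+.4f}, {hi_big:+.4f})")
```

The interval width shrinks as 1/sqrt(n), so only the large study can distinguish "no material effect" from "we couldn't tell", which is the null-versus-negative boundary being described.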
LolWolf|5 years ago
Of course, the problem with all of this is that there really aren't very good incentives to accurately and carefully report null experimental results (except as a kind of "folk knowledge" within a given lab), which limits their general usefulness. But the "platonic ideal," so to speak, of a null-result journal would, I think, be relatively useful.
jiggunjer|5 years ago
The difference between a null and a negative is just that a negative is an interesting null. In your null example, to create a proper negative you'd probably report several compound synthesis methods instead of one. You'd probably also want to use more mice/data in your analysis.
kraetzin|5 years ago
I've found that looking at what a paper doesn't report can be far more important than what it claims.
mycall|5 years ago