top | item 20966315


gbhn | 6 years ago

From the discussion I've read on this, I think a good direction would be to consider statistical tests like this as simply not "publishable" at all, in the sense we currently mean by publication.

That is, if you have a theory about how a gene relates to height in tomatoes and you run a test, a non-significant result would show you you're likely on the wrong track, but a result below some p-value threshold only tells you that "there may be something here."

I think this is true for many fields with a replication crisis. The problem isn't statistical; the problem is no theory. If you have a functional theory, there are all kinds of things you can do to gain confidence in it, and mostly those will contribute to the ability to predict statistical results. But that is completely different in kind from sending out a survey and noting that questions 2 and 6 are statistically correlated.
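To make the "questions 2 and 6 are correlated" scenario concrete, here's a minimal pure-Python sketch of what such a test amounts to: compute a correlation between two columns of survey answers, then estimate a p-value by permutation. The survey data, function names, and threshold are all hypothetical illustrations, not anything from the comment above.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_p_value(xs, ys, n_perm=10_000, seed=0):
    """Two-sided permutation test: fraction of random shufflings of ys
    whose |correlation| with xs is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical answers (1-5 scale) to "question 2" and "question 6".
q2 = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
q6 = [2, 1, 3, 2, 4, 3, 5, 4, 5, 5]

p = permutation_p_value(q2, q6)
```

A small p here says only that the correlation is unlikely under shuffling, i.e. "there may be something here"; it says nothing about why the two answers move together, which is exactly the theory-shaped gap the comment is describing.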

When a field thinks that the kind of early suggestive work like this is worth talking about, they should probably just talk about it in conferences and similar venues, rather than "publish" it where journalists will pick it up in a "science shows" story that 95% (lol) of the time turns out to be wrong.

In other words, I think it is fine that fields talk about early non-theory results -- that can be interesting for specialists to advance faster. "Publishing" this mostly-going-to-be-wrong stuff is leading to confusion among the public about what the scientific process demands and how trustworthy it is. That is not a good outcome in my opinion.

