ikura | 3 years ago
Personally, I think the dichotomy between hypothesis testing and likelihood quantification is a false one. The “P = 0.05” cutoff we use to “reject” a hypothesis is arbitrary. When I read papers, I never “accept” or “reject” hypotheses; rather, I treat likelihood quantification as a measure of the weight of evidence, or of the distance of the data from some null hypothesis as measured by some statistic. I encourage everyone to take this probabilistic worldview when reading our paper: we aimed to quantify the probability of this system occurring in nature, and P-values were a convenient and commonly understood way of communicating quantiles.
This paragraph does a lot of lifting. Conflating p-values and probabilities is the science equivalent of a code smell.
zosima | 3 years ago
They are the probability of seeing data at least as extreme as the data observed in the experiment, given that the null hypothesis is true.
Now, of course, to fully understand the p-value you also have to understand the null hypothesis. And yes, sometimes it is misspecified (e.g., by testing many null hypotheses and only reporting the more interesting ones, or by accidentally choosing a bad, implausible null hypothesis that admits many uninteresting alternatives).
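To make that definition concrete, here's a minimal, hypothetical sketch (not from the paper under discussion): the null hypothesis is a fair coin, the observed data is 60 heads in 100 flips, and the one-sided p-value is the probability, under the null, of seeing at least that many heads.

```python
import math

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n trials with success prob p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Null hypothesis: a fair coin (p = 0.5). Observed: 60 heads in 100 flips.
n, observed = 100, 60

# One-sided p-value: probability of 60 or more heads, assuming the null.
p_value = sum(binom_pmf(k, n, 0.5) for k in range(observed, n + 1))
print(round(p_value, 4))  # roughly 0.028
```

Note this is the probability of the data (or more extreme) given the null, not the probability that the null is true given the data; conflating the two is the error the parent comment flags.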
snake_doc | 3 years ago
comte7092 | 3 years ago
Setting a cutoff of 0.05 is saying: “if there’s less than a 5% chance we’d see data like this, assuming the null hypothesis, then we conclude the null hypothesis is false.”
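One consequence of that rule, sketched below under the same hypothetical fair-coin setup (names and parameters are illustrative, not from the thread): if you simulate many experiments where the null is actually true, the 0.05 cutoff rejects roughly 5% of them anyway, by construction.

```python
import math
import random

random.seed(0)
ALPHA = 0.05   # the arbitrary cutoff
N_FLIPS = 100  # flips per simulated experiment
TRIALS = 2000  # number of simulated experiments

def p_value(heads: int, n: int) -> float:
    """One-sided p-value: P(at least `heads` heads) under a fair coin."""
    return sum(math.comb(n, k) for k in range(heads, n + 1)) / 2**n

rejections = 0
for _ in range(TRIALS):
    # The null hypothesis is TRUE in every trial: the coin is fair.
    heads = sum(random.random() < 0.5 for _ in range(N_FLIPS))
    if p_value(heads, N_FLIPS) < ALPHA:
        rejections += 1

# False-rejection rate lands at or slightly below ALPHA (slightly below
# here because the binomial distribution is discrete).
print(f"false rejection rate: {rejections / TRIALS:.3f}")
```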
fny | 3 years ago
"I have a confidence of 95" has a very different ring to it than "I am 95% confident."
It would also prevent people from doing stupid things like using these values to compute expectations.
kgwgk | 3 years ago