MajimasEyepatch|1 year ago

That’s true of most neuroimaging studies. Have you ever tried to get a bunch of people into an MRI for a study? Not easy, not cheap.

Like they said, the effect size is large. With a large enough difference, you can distinguish the effect from statistical randomness even with a small sample size.

As with any study, this result needs to be replicated. But just waving the sample size around as if every study could be a live-caller poll with n = 2,000 is not helpful.
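To make that concrete, here's a minimal power calculation, a sketch with illustrative effect sizes of my own choosing, not values from the study under discussion. It shows how the per-group sample size needed for 80% power shrinks as the effect size grows:

```python
# Illustrative power calculation for a two-sample t-test, using
# statsmodels. The Cohen's d values are arbitrary examples, not
# estimates from the study being discussed.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in (0.2, 0.5, 0.8, 1.2):  # small, medium, large, very large effects
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d:.1f}: ~{n:.0f} participants per group for 80% power")
# d = 0.2 needs ~394 per group; d = 1.2 needs only ~12.
```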
bunderbunder|1 year ago

Also, this idea that bigger is always better with sample sizes can lead to problems on the other side, when we see people assuming an effect must be real because the sample size is so large. The problem is that sample size only reduces sampling error, which is just one of many possible sources of error. Most of the others are much more difficult to manage or even quantify. At some point it becomes false precision, because it turns out that the error you can't measure is vastly greater than the sampling error.

Which in turn gets us into trouble with interpreting p-values. It puts us in a situation where the distinction between "the probability of getting a result at least this extreme, assuming the null hypothesis" and "the probability that the alternative hypothesis is false" stops being pedantic hair-splitting and starts being a gaping chasm. I don't like getting into that situation, because, regardless of what we were all taught in undergrad, scientific practice still tends to lean toward the latter interpretation. (Except experimental physicists. You people are my heroes.)

For my part, the statistician in me rather likes methodologically clean controlled experiments with small sample sizes. You've got to be careful about how you define "methodologically clean", of course. Statistical power matters. But they've probably led us down a lot fewer blind alleys (and, in the case of medical research, led to fewer unnecessary deaths) than all the slapdash cohort studies, so popular in the '80s and '90s, that we trusted because of their large sample sizes.
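Two toy calculations make both halves of that argument concrete. All the numbers below (the bias magnitude, the 10% prior, 80% power) are my own illustrative assumptions, not estimates from any real study. The first shows a fixed measurement bias producing an arbitrarily small p-value for a nonexistent effect once n gets large; the second shows why "p < 0.05" is nowhere near "95% sure the effect is real":

```python
# Toy illustrations with made-up numbers, not data from any study.
import numpy as np
from scipy.stats import ttest_ind

# 1) Sampling error shrinks with n, but a fixed measurement bias doesn't:
#    with no true effect at all, a large sample turns a small systematic
#    artifact into a "highly significant" result -- false precision.
rng = np.random.default_rng(0)
bias, sd = 0.1, 1.0                    # zero true effect, small fixed bias
for n in (50, 5_000, 500_000):
    control = rng.normal(0.0, sd, n)
    treated = rng.normal(bias, sd, n)  # bias contaminates one arm only
    _, p = ttest_ind(treated, control)
    print(f"n = {n:>7}: p = {p:.2g}")  # p collapses toward 0 as n grows

# 2) Why p < alpha is not "the alternative is probably true": if only 10%
#    of tested hypotheses are real effects, a significant result at
#    alpha = 0.05 with 80% power is still a false positive ~1 time in 3.
prior, power, alpha = 0.10, 0.80, 0.05
ppv = power * prior / (power * prior + alpha * (1 - prior))
print(f"P(effect is real | significant result) ~ {ppv:.2f}")  # ~0.64
```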
robwwilliams|1 year ago
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8583264/
I recognize the Catch-22 that the diagnosis is not possible until several years after birth. But a prospective study of this sort is “in scope” at UCSD. They already run big MRI studies of kids with hundreds or even thousands of scans.