We saw this effect in a small study, so it's worth doing a larger study.
It's worth publishing because it's evidence of the effect and motivation for further study. And if you're asking "Why not start large?" the answer is obvious: money.
The paper includes a power-analysis section that justifies the sample size (the analysis justifies a sample of 20; they recruited 25 eligible participants and lost 6 in screening, leaving 19).
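For a rough sense of where a target sample of ~20 comes from, here's a back-of-the-envelope power calculation for a paired (within-participants) design using the normal approximation. The effect size d = 0.63 is a hypothetical placeholder, not the paper's number:

```python
import math

# Back-of-the-envelope sample size for a paired t-test (normal approximation):
#   n ≈ ((z_alpha/2 + z_beta) / d)^2
# where d is the standardized effect size of the within-person differences.
z_alpha = 1.96    # two-sided alpha = 0.05
z_beta = 0.8416   # power = 0.80
d = 0.63          # hypothetical effect size (not taken from the paper)

n = math.ceil(((z_alpha + z_beta) / d) ** 2)
print(n)  # 20
```

A larger assumed effect shrinks the required n quickly, since n scales with 1/d².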
Some points though:
- A within-participants study has inherently more power than a between-subjects study. Trying two different diets with the same person removes a lot of variables that you'd have to control for in a between-subjects design (and yes, they randomized the order of the interventions and found no difference based on order).
- It looks like this was conducted in a way that supported compliance with the protocol, using analysis techniques that would be unwieldy at a much larger sample size.
Even with N=19, the reported significance is very compelling.
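To illustrate why the within-participants design buys power, here's a small simulated sketch (all numbers are hypothetical, not from the study): each person has a large stable baseline that dominates between-person variance, but subtracting a person's two measurements cancels it out.

```python
import random
import statistics

random.seed(0)
n = 19
# Hypothetical numbers: large between-person spread, small true effect.
baselines = [random.gauss(100, 15) for _ in range(n)]   # stable per-person level
effect = 5                                              # true diet effect
noise = 3                                               # measurement noise

diet_a = [b + random.gauss(0, noise) for b in baselines]
diet_b = [b + effect + random.gauss(0, noise) for b in baselines]

# Between-subjects view: raw measurements carry the between-person spread.
raw_sd = statistics.stdev(diet_a)

# Within-participants view: per-person differences cancel the baselines,
# leaving only the effect plus measurement noise.
diffs = [b - a for a, b in zip(diet_a, diet_b)]
diff_sd = statistics.stdev(diffs)

print(raw_sd, diff_sd)  # diff_sd is far smaller, so the same n detects more
```

Because the effect is tested against the much smaller spread of the paired differences, the same N=19 yields far more power than two independent groups would.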
The sample size a study needs to reach significance depends on the strength of the effect being measured.
For example, if I have a bag full of thousands of coins, pull out 19 at random, flip them sequentially, and they all come up heads, I'm going to conclude the bag overwhelmingly contains coins heavily biased toward heads.
Are you going to say my sample size was too small to support that conclusion?
To see whether their sample size was too small, you need to at least read the part where they do the math.
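The arithmetic behind the coin example: if the coins were fair, 19 heads in a row would be vanishingly unlikely, so the data overwhelmingly favor the biased-bag hypothesis. A quick check (the 0.95 heads probability is an arbitrary illustrative value):

```python
p_fair = 0.5 ** 19        # probability of 19 straight heads with fair coins
p_biased = 0.95 ** 19     # same, if each coin lands heads 95% of the time

print(p_fair)             # about 1.9e-06, i.e. roughly 1 in 500,000
print(p_biased / p_fair)  # likelihood ratio strongly favoring the biased bag
```

That's the sense in which 19 observations can be plenty: when the effect is strong, even a tiny sample makes the null hypothesis untenable.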
Oras|19 hours ago
Who in their right mind decided that this is a "study" worth publishing?