top | item 47211494


Oras | 13 hours ago

>> therefore 19 participants completed the study (2 females and 17 males) and their data are presented throughout

Who in their right mind decided that this is a "study" worth publishing?


godelski | 13 hours ago

You're reading the study wrong.

You read

  We saw this effect, so it's real. 
In actuality it is

  We saw this effect in a small study, so it's worth doing a larger study.
It's worth publishing because it's evidence and motivation for further study. And if you're asking "Why not start large?" the answer is obvious: money.

steve_adams_86 | 12 hours ago

Especially in dietary studies. You either spend a lot on high-quality, controlled studies where you can nail down parameters (which takes a LOT of labour), or you spend on facilitating much larger studies where you make up for the lack of precision and control with volume.

There are trade-offs in either case, and some types of research suit one approach better than the other. The best case is a combination of the two, and it's exceedingly rare.

Maybe there are other options, but from what I've seen, these studies tend to be polarized this way.

kibibu | 13 hours ago

The paper includes a section on power analysis which justifies the sample size (although the justification is for a sample of 20, they recruited 25 eligible participants and lost 6 in screening).

Some points though:

- A within-participants study has inherently more power than a between-subjects study. Trying two different diets on the same person removes a lot of variables that you'd need to control for in a between-subjects design (and yes, they randomized the order of intervention and found no difference based on order).

- It looks like this was conducted in a way that supported compliance with the protocol, and using analysis techniques that would be unwieldy for a much larger sample size.

Even with N=19, the reported significance is very compelling.
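To make the within- vs between-subjects power point concrete, here's a minimal normal-approximation sketch of the required sample sizes for the two designs. The effect size d = 0.8, alpha, and power target are purely illustrative assumptions, not numbers taken from the paper:

```python
# Normal-approximation sample-size sketch comparing a paired
# (within-participants) design with an independent-groups design.
# d = 0.8, alpha = 0.05, power = 0.80 are illustrative assumptions.
import math
from statistics import NormalDist

alpha, power, d = 0.05, 0.80, 0.8

z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical z for two-sided alpha
z_b = NormalDist().inv_cdf(power)           # z for the target power

# Paired design: one-sample test on within-person differences.
n_paired = math.ceil(((z_a + z_b) / d) ** 2)

# Independent groups: roughly twice that n, in each of two groups.
n_per_group = math.ceil(2 * ((z_a + z_b) / d) ** 2)

print(n_paired, n_per_group)  # 13 paired vs 25 per group (50 total)
```

Under these assumptions the paired design needs roughly a quarter of the total participants, which is why an N=19 crossover study can be adequately powered where a between-subjects study would not be.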

AnEro | 13 hours ago

Someone with quotas