n=27, testing several things at once, funded by Big Blueberry
Well below the threshold of common interest. Small effect size on what is effectively a significance fishing expedition.
>Acknowledgements
>We appreciate the support of the Wild Blueberry Association of North America for their provision of the wild blueberry powder used in this study. Further, we thank the South East Doctoral Training Centre and the Wild Blueberry Association for their financial support. This work is part of an ESRC Case funded studentship. We also thank the participants and school staff who accommodated this research.
Let's ignore n for a moment, since it doesn't affect the chance of getting a more extreme result under the null hypothesis. That chance is always a good thing to wonder about when you're looking at multiple comparisons.
They calculated 11 p-values, and got a significant result for two of them under α=0.1. We'd expect that to happen about half the time if the null hypothesis is true.
(Realistically, they got a significant result on two numbers that were both derived from the same measurement, which is less compelling than two unrelated ones. But I'm not sure how to model that, so, like any true armchair statistician, I'm going to handle it by ignoring it.)
I also did a hasty power calculation, and estimate that a study of this size could detect an effect as large as the biggest one they reported about half the time.
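For the curious, a hasty power calculation like that can be sketched with the standard library alone. This is only a normal-approximation sketch, and the inputs are guesses: groups of 13 and 14 (n=27 total) and a standardized effect of d = 0.8, since the paper's exact numbers aren't reproduced here.

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(d, n1, n2, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a
    standardized mean difference d (normal approximation; the tiny
    wrong-direction rejection probability is ignored)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    noncentrality = d * sqrt(n1 * n2 / (n1 + n2))
    return z.cdf(noncentrality - z_crit)

# n = 27 split into groups of 13 and 14; d = 0.8 is a guessed "large" effect
print(round(two_sample_power(0.8, 13, 14), 2))  # lands near 0.5
```

With those guessed inputs the approximation comes out close to a coin flip, consistent with the "about half the time" estimate above.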
In summary: If there's no real effect, there's a 50/50 chance of getting a significant p-value out of this experimental design. If there is a real effect, still a 50/50 chance of getting a significant p-value.
Which hypothesis would I pick if I had to guess? Well, I'm sure this is also junk statistics, but the p-values sure look to me like they could have been drawn from a uniform distribution, which is what I would expect them to look like under the null hypothesis. Definitely not what I would expect them to look like under the alternative hypothesis, given that all these tests are trying to measure roughly the same thing.
Note also that the blueberry group (WBB) already had a higher score on the practice test (cf. Table 1). In other words: they were probably better before consuming the blueberry juice and stayed better afterwards.
Just to make your observation easier to see in the paper:
Note that Table 2 has 16 test results. Only one of them is "significant", where "significant" means that if the juice has absolutely no effect, random noise in the test will produce a spurious interesting result only about 1/20 of the time. They ran 16 tests, so there is a good chance of getting a spurious interesting result.
The table doesn't include the 120 ms and 500 ms conditions of the MANT test, but neither is "significant" anyway. So IIUC there are 18 tests (or more) instead of 16.
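That "good chance" is easy to put a number on if you assume the 16 (or 18) tests are independent, which they surely aren't, but it gives the right order of magnitude:

```python
def familywise_rate(k, alpha=0.05):
    """Chance of at least one false positive among k independent
    tests when every null hypothesis is true."""
    return 1 - (1 - alpha) ** k

print(f"{familywise_rate(16):.0%} chance of at least one false positive in 16 tests")
print(f"{familywise_rate(18):.0%} chance with 18 tests")
```

Under independence, that's roughly a 56% chance of at least one spurious "significant" result in 16 tests at α = 0.05, and about 60% with 18.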
Why is the study only single-blinded? If the person administering the tests knows that they are testing the placebo group, it could influence the results.
I don't see why they could not make this double blind if they wanted to.
It makes me very suspicious.
I wish these articles would make a clear distinction between blueberries and bilberries (wild blueberries), which are common in eastern European forests but almost unavailable on store shelves. They have a very dark purple colour and are full of flavour. Interestingly, you can find them in most blueberry muffins.
They really _should_ make a clear distinction and give the scientific name of the species they were using in their research.
Here in Denmark, blueberry (“blåbær”) means Vaccinium myrtillus and is readily available pickled or frozen, whereas the "blueberries" with light flesh (e.g., Vaccinium uliginosum – "mosebølle") are not considered blueberries in the botanical sense.
The real title was "The effects of acute wild blueberry supplementation on the cognition of 7–10-year-old schoolchildren" which is unfortunately way too long for a HN submission.
>In this paper, we summarize published data on the penetration of PPs into animal brain and review some hypotheses to explain the biological basis of potentially health-beneficial effects of PPs to the brain. Finally, we highlight promising new approaches, especially those of a hormetic dose-response and gut microbiota-brain interaction, which may allow a better understanding of PPs’ mode of action in animals and humans.
Only in very low concentrations, but there might be enough for an effect. Possible mechanisms are not well understood.
For the MANT, significantly quicker RTs were observed for WBB participants when compared to placebo participants on 120 ms trials, without cost to accuracy. Furthermore, WBB participants showed enhanced verbal memory performance on the AVLT, recalling more words than placebo participants on short delay and memory acquisition measures post-consumption. Despite these significant improvements in cognitive performance, no significant effects were observed for reading measures.
If they were testing flavonoids/anthocyanins, it is probably a species with blue/purple flesh – e.g. the European blueberry (Vaccinium myrtillus).
Obligatory xkcd: https://xkcd.com/882/