prefrontal|1 year ago
I'm the first author of the salmon fMRI paper, if you have any questions. Generally, how investigators do their statistics can lead to implausible conclusions. Extraordinary claims should require extraordinary evidence.
Wow, thanks for the awesome work! It's my favorite neuroimaging study, and we have a print of the poster on our lab's wall.
jampekka|1 year ago
How do you find the rigor of neuroimaging analyses has developed since that paper? I don't follow it much, but I've seen some quite wild-looking stuff being published (e.g. predicting smallish datasets by feeding voxel activations into a huge ANN). Are my concerns about a new era of overfitting realistic?
prefrontal|1 year ago
That's amazing - you made my day with that statement.
I left neuroscience for the software world back in 2012, so I don't have a lot of data points since then. I know that between 2009 and 2012 the field went from ~50% of papers doing the right statistical corrections to ~90%, which is a huge step in the right direction. I hope those numbers are even better today.
The expense of MRI time means that studies include far fewer subjects than they might want or need. My opinion is that there are still significant challenges that go beyond correction for multiple comparisons, like data peeking and low-powered experimental designs. I think we should move to a mindset where major claims require replication and convergent evidence - not a single study with 18 college freshmen as participants.
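The multiple-comparisons point above can be sketched with a toy simulation. This is illustrative Python over simulated z-scores, not the paper's actual pipeline; the voxel count and thresholds are invented for the example:

```python
import numpy as np

# Toy illustration: test 10,000 independent "voxels" that contain pure
# noise. The null hypothesis is true everywhere, so every detection is
# a false positive by construction.
rng = np.random.default_rng(0)
n_voxels = 10_000
z = rng.standard_normal(n_voxels)  # one z-statistic per voxel

# Uncorrected two-sided test at p < 0.05: |z| > 1.96.
uncorrected = np.abs(z) > 1.96

# Bonferroni correction: p < 0.05 / 10,000 per voxel,
# i.e. |z| > ~4.56 (the normal quantile for p = 5e-6, two-sided).
bonferroni = np.abs(z) > 4.56

print("uncorrected 'significant' voxels:", int(uncorrected.sum()))  # ~500
print("Bonferroni 'significant' voxels: ", int(bonferroni.sum()))   # ~0
```

With ~10,000 simultaneous tests, an uncorrected p < 0.05 threshold is all but guaranteed to light up hundreds of noise voxels - the dead-salmon effect - while the family-wise correction keeps them near zero.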
dartos|1 year ago
Also, how do you determine a salmon is sad?

prefrontal|1 year ago
Pretty much what TeMPOraL said. You can scan almost anything with fMRI and find results if you don't use proper statistical corrections. I have found "significant" voxels in a pumpkin before while doing testing. Our argument was/is that scientists need to have appropriate rigor in their analyses, otherwise you can reach ridiculous conclusions - like a dead fish looking alive...
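jampekka's overfitting worry is easy to demonstrate: with far more voxels than subjects, even a plain linear model can "decode" random labels perfectly in-sample. A minimal sketch, not based on any particular study - the subject and voxel counts are invented, and the data are pure noise:

```python
import numpy as np

# Hypothetical small-n, wide-p setting, like many fMRI decoding studies:
# 18 subjects, 5,000 voxel features, and random "condition" labels.
rng = np.random.default_rng(1)
n_subjects, n_voxels = 18, 5000
X_train = rng.standard_normal((n_subjects, n_voxels))
y_train = rng.choice([-1.0, 1.0], n_subjects)

# Minimum-norm least-squares fit: far more unknowns than equations,
# so the model can interpolate the training labels exactly.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
train_acc = np.mean(np.sign(X_train @ w) == y_train)

# Fresh noise with fresh random labels: there is no real signal to find.
X_test = rng.standard_normal((200, n_voxels))
y_test = rng.choice([-1.0, 1.0], 200)
test_acc = np.mean(np.sign(X_test @ w) == y_test)

print(f"train accuracy: {train_acc:.2f}")  # 1.00 -- perfect "decoding"
print(f"test accuracy:  {test_acc:.2f}")   # ~0.50 -- chance
```

Perfect training accuracy on pure noise is exactly why held-out test sets, replication, and convergent evidence matter for decoding claims - a huge ANN only makes the interpolation easier.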