> "...and along the way they swipe the fMRI community for their 'lamentable archiving and data-sharing practices' that prevent most of the discipline's body of work being re-analysed."
That's quite funny. My girlfriend recently finished her master's thesis on data sharing for neuroscience data and created a model for universal access to research data across institutions, but came to the conclusion that making researchers share their data is a bigger hurdle than actually implementing the system.
The main reason for the lack of sharing, she postulated, is that studies (which bring funding to the researcher who publishes them) can be produced from the raw data alone. Researchers who collect data want to publish all of the resulting studies and papers themselves (because "they" paid for the data acquisition), and they also withhold the underlying data so that it is harder for others to falsify their results, since falsification would, in their opinion, mean their funding going away.
Edit: of course there are privacy issues for the test subjects as well.
As someone who works in the biomedical imaging business and is also a fan of philosophy, I think this news will matter more to folks in the latter camp. For a couple of years now philosophers have insisted that fMRI images prove there is no such thing as free will. Today's revelation should put an end to that whole line of reasoning (and the absurd amount of fatalism that it engendered).
(The back story: Apparently fMRI showed motor signals arising before the cognitive / conscious signals that should have created them, assuming we humans have free will. This has led to the widely adopted belief among philosophers that we humans act before we think, thus we don't and can't act willfully and freely. To wit, science has proven there is no such thing as free will; we're all just automatons.)
Just this week there was an article in The Atlantic on how we all must accept that we're mere robots and we don't really choose our actions (nor can we choose to believe in a god).
Ah well. It seems philosophers STILL haven't learned the importance of applying the scientific method before leaping to a conclusion -- if only to check that someone else didn't abuse it first.
Why would this put an end to that line of reasoning? I'd expect it to flare up the debate, not end the debate. They didn't disprove behavior being computed, they demonstrated that a class of data supporting it was useless. The natural reaction to this isn't "okay we give up", it's "better go get some good data".
(Also, I don't like mixing the question "Is our behavior computed?" with the question "Does computed behavior imply no free will?".)
What are the actionable insights that have come out of fMRI studies? Even when properly conducted (no false positives), the conclusions that are often drawn have always felt dubious to me. Basically you are looking for regions of the brain that light up with various stimuli. Except that's as far as it goes; we don't yet understand much beyond that.
It's as if you figure out that your car is making a funny sound, and you can pinpoint where it is coming from, you can even reproduce the sound on demand - but you have no idea WHY it sounds the way it does.
Well, the non-pop philosophers will understand that as a minor hit to the model of consciousness as solely concerned with post-facto rationalization and basically removed from the real-time decision making loop. Those studies were never strong evidence anyway, the protocols were pretty weak as they required asking the subjects to self-report at what time they "decided" to act.
The real free will debate is about both the definition of free will (for the compatibilists and libertarians) and a debate about whether the evidence for materialism outweighs the subjective experience of free will.
But yes, this will hopefully stop those confused pop-philosophy stories.
To be fair, there are more arguments to be made contra free will (thought experiments like: at what point between conception and adulthood would free will develop, if at all? could free will be a merely probabilistic byproduct? etc.), and even if the details are wrong in these fMRI images, the general notion that a motor signal is generated before it appears in consciousness might still be valid.
Free will as in "there's something that can't be explained by chemistry and electrical signals in the brain" is just an attempt by some believers to rationalize their belief in a kind of supernatural "soul."
There's free will as in a decision-making system that generates different options and selects one based on constraints, which is something a computer can do, and then there's free will as in the ability to defy the mechanics of one's biophysics, which is just an indirect way of saying metaphysical soul.
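The first of those two senses is mechanically unremarkable. A toy sketch (everything here, names and numbers alike, is invented purely for illustration) of a system that generates options, filters them by constraints, and selects one:

```python
def decide(options, constraints, score):
    """Toy "will": keep the options that satisfy every constraint, pick the best."""
    feasible = [o for o in options if all(c(o) for c in constraints)]
    return max(feasible, key=score) if feasible else None

# Hypothetical example: pick a snack under a price constraint, maximizing calories.
options = [("apple", 1.00, 50), ("cake", 3.50, 400), ("nuts", 2.00, 300)]
affordable = lambda o: o[1] <= 2.50   # constraint: price within budget
calories = lambda o: o[2]             # preference: most calories

print(decide(options, [affordable], calories))  # ('nuts', 2.0, 300)
```

Nothing in this loop requires anything beyond ordinary computation, which is exactly the point of the distinction above.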
I don't think empiricists can even discuss the kind of free will which defies causal explanation, the kind of mind which exists outside of the brain.
If you reject that kind of metaphysical soul from the discussion, what remains is whatever soul can be housed in a cage of biophysics. And whether you believe there is fundamental randomness in the universe, or whether given perfect information the universe becomes predictable, both perspectives are equally hostile to the kind of free will people dream about.
People who talk about free will want to escape biophysics, and the only way is to talk about the mind outside the brain, or the ghost outside the machine.
So have these studies been invalidated by the software bug as well? If so, do you have any pointers? I.e., which were the infamous studies, and did they indeed use the faulty software to derive their conclusions?
I'm genuinely interested.
> It seems philosophers STILL haven't learned the importance of applying the scientific method before leaping to a conclusion
To be fair, if you can apply the scientific method it's not really philosophy anymore, it's science. Philosophy exists to attempt understanding of domains to which we cannot rigorously apply empirical reasoning.
"Apparently fMRI showed motor signals arising before the cognitive / conscious signals that should have created them, assuming we humans have free will."
Has anyone presented any theory where consciousness precedes neural activity that doesn't invoke hard-core dualism and an immaterial soul?
This paper (http://www.ncbi.nlm.nih.gov/pubmed/19423830) tried to prove exactly what you talk about and called it free will, but they used the SPM software that was invalidated.
I thought someone had found that while "fMRI showed motor signals arising before the cognitive / conscious signals," we can choose to negate or override that signal, thus allowing free will. That is, free will is expressed by overriding the default.
The real takeaway lesson from this research should be the vital importance of Open Data to the modern scientific enterprise:
> "lamentable archiving and data-sharing practices" that prevent most of the discipline's body of work being re-analysed.
Keeping data private before publication is (at this point in time) understandable. Once results are published, however, there is no excuse for not depositing the raw data in an open repository for later re-evaluation.
This is medical data. "Open repository" and "medical data on individuals" don't really mix well. Ask the next person denied health insurance based on an open-access fMRI scan (just to name the most basic example).
In fact, "simpler" things like heart rate are not so simple: mixed with the other factors needed as controls, such data can be surprisingly hard to meaningfully anonymize. I'm not saying we should give up on open medical data, but it is definitely different from, e.g., open data from a physics, chemistry, or materials-science experiment.
[ed: There are of course projects that collect data but limit access; in that sense the data can be "open," yet one needs permission to work with it. There are many such projects, like this one for Norway, for questionnaires and similar research: http://www.nsd.uib.no/personvern/om/english.html - research that uses standard re-use clauses is also available to be combined with new studies and meta-studies. Often these data are only available in aggregate and/or anonymized form.]
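To illustrate why even "simple" measurements resist anonymization, here is a minimal k-anonymity check on made-up records (all values hypothetical): if any combination of quasi-identifiers is unique, that record is re-identifiable by anyone who happens to know those attributes.

```python
from collections import Counter

# Made-up "anonymized" records: names removed, quasi-identifiers kept.
records = [
    {"age": 34, "sex": "F", "zip3": "021", "resting_hr": 61},
    {"age": 34, "sex": "F", "zip3": "021", "resting_hr": 58},
    {"age": 51, "sex": "M", "zip3": "940", "resting_hr": 72},
    {"age": 29, "sex": "M", "zip3": "606", "resting_hr": 66},
]

def k_anonymity(records, quasi_ids):
    """Smallest group size when records are grouped by the quasi-identifier columns."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

k = k_anonymity(records, ["age", "sex", "zip3"])
print(k)  # 1: two records are unique on (age, sex, zip3), so knowing a
          # subject's age, sex, and zip prefix reveals their heart rate
```

Real studies keep dozens of such covariates for control purposes, which is precisely what makes meaningful anonymization so hard.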
Uh, where would you store it, though? IIRC from the time a relative was going through a chemistry PhD, they produced several GB of raw data every half hour. Storage is cheap, but it's not that cheap...
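For scale, a back-of-the-envelope calculation (the rates and prices below are assumptions for illustration, not figures from the thread):

```python
gb_per_half_hour = 5     # assumed: "several GBs every half hour"
hours_per_day = 6        # assumed instrument time per day
working_days = 250       # assumed working days per year

gb_per_year = gb_per_half_hour * 2 * hours_per_day * working_days
tb_per_year = gb_per_year / 1000
usd_per_tb_month = 5     # rough bulk object-storage price, USD (assumption)
usd_per_year = tb_per_year * usd_per_tb_month * 12

print(tb_per_year, usd_per_year)  # 15.0 TB/year, ~900 USD/year to keep it online
```

Non-trivial for a single lab, but hardly prohibitive at institutional scale, and cold archival tiers are cheaper still.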
I would send it back and ask for a detailed description of the null hypothesis they are testing, because they are not clear on this point at all:
>"All of the analyses to this point have been based on resting-state fMRI data, where the null hypothesis should be true."
They are not careful to explicitly define this null hypothesis anywhere, but earlier in the paper they describe some issues with the model used:
>"Resting-state data should not contain systematic changes in brain activity, but our previous work (14) showed that the assumed activity paradigm can have a large impact on the degree of false positives. Several different activity paradigms were therefore used, two block based (B1 and B2) and two event related (E1 and E2); see Table 1 for details."
This means that they actually know the null model to be false and have even written papers about some of the major contributors to this:
>"The main reason for the high familywise error rates seems to be that the global AR(1) auto correlation correction in SPM fails to model the spectra of the residuals" http://www.sciencedirect.com/science/article/pii/S1053811912...
If the null hypothesis is false, it is no wonder they detect this. In fact, if the sample size were larger (they used only n=20/40 here) they would get near 100% false positive rates. The test seems to be telling them the truth; it is a trivial truth, but according to their description it is correct nonetheless.
Edit: I was quoting from the actual paper: http://www.pnas.org/content/early/2016/06/27/1602413113.full
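That mechanism can be sketched in a few lines: fit a block-design regressor to pure AR(1) noise with ordinary least squares, whose p-values assume independent errors, and the false positive rate climbs well above the nominal 5%. All parameters here are illustrative, not those of the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ar1_noise(n, phi):
    """AR(1) noise: x[t] = phi * x[t-1] + e[t], with e ~ N(0, 1)."""
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

n_time = 200
# Box-car "block design": 10 samples on, 10 samples off.
design = np.tile(np.r_[np.ones(10), np.zeros(10)], n_time // 20)

def false_positive_rate(phi, n_sim=2000, alpha=0.05):
    """How often OLS 'detects' the design in pure noise (the true effect is 0)."""
    hits = sum(stats.linregress(design, ar1_noise(n_time, phi)).pvalue < alpha
               for _ in range(n_sim))
    return hits / n_sim

print(false_positive_rate(0.0))  # ~0.05: nominal, the i.i.d. assumption holds
print(false_positive_rate(0.5))  # far above 0.05: autocorrelation breaks it
```

And in the same spirit, adding more data just makes the misspecified test ever more confident in the spurious effect.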
Doesn't sound like a straight up bug, but rather unsound statistical methods which can happen with or without software. You get the same problem with finite element analysis software: the operator has to be aware of all the assumptions baked in, and has to ensure that the input conforms to them.
The paper has been rebutted by other researchers who argue that the original results hold:
"This technical report revisits the analysis of family-wise error rates in statistical parametric mapping - using random field theory - reported in (Eklund et al., 2015). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions - and random field theory - in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al., 2015) for parametric procedures."
http://arxiv.org/abs/1606.08199
Further: "Our results suggest that the principal cause of the invalid cluster inferences is spatial autocorrelation functions that do not follow the assumed Gaussian shape."
This has nothing to do with bugs and everything to do with bad statistical analysis. It's Google Flu all over again.
"Our results suggest that the principal cause of the invalid cluster inferences is spatial autocorrelation functions that do not follow the assumed Gaussian shape."
In other words, researchers cut corners. You should never assume that something is a certain way without rigorously proving it. How did these papers make it past peer review?
Peer review in biology-related fields doesn't bother much with the details of your code. Only recently have a few journals begun to ask authors to submit the source code when a manuscript is mainly about a new analysis method. So, more than likely, even when you are reading the submitted manuscript of a computer program, you won't get to read the source code with the paper.
Another thing is that statistical training is something of a missing piece in many researchers' education. It may not be that "researchers cut corners"; they might simply not know the prerequisites of the analyses they use. Even this year, 2016, when I talked with a colleague at my institute, a senior postdoc who uses the t-test every day, I found out he didn't know a single one of its assumptions.
They were probably peer-reviewed by biologists, not statisticians. Peer review is mostly just a careful read and a think over about a month, not a perfect analysis.
Developing a statistical threshold requires some null hypothesis, which very often takes Gaussian or linear form. The paper on which the method is based likely states their assumptions clearly, allowing their result to pass peer review.
Papers that merely employ some method are rarely reviewed by the same peers as the methods papers themselves, and their authors often are not aware of the method's assumptions and do not state them.
This is how this happens, at least in neuroscience. You can look down on it and judge, but neuroscience has not really found its Kuhnian paradigm yet, so be nice.
I reckon there is a vast number of similar problems in other studies across most fields. Linear regression, ANOVA, and t-tests are widely used techniques with an assumption of Gaussian errors that is rarely checked. I wonder how many of these papers would become null results if switched to nonparametric tests...
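Checking the Gaussian-error assumption is cheap. A minimal sketch (synthetic data with invented numbers): fit a regression, run Shapiro-Wilk on the residuals, and fall back to a rank-based statistic when normality fails:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 100)
y = 2.0 * x + rng.lognormal(sigma=1.0, size=100)  # heavily skewed, non-Gaussian errors

fit = stats.linregress(x, y)
residuals = y - (fit.intercept + fit.slope * x)

# Shapiro-Wilk on the residuals: a tiny p-value means the Gaussian assumption
# is untenable, so the OLS p-value should not be taken at face value.
w, p_normal = stats.shapiro(residuals)

# A nonparametric fallback that makes no distributional assumption:
rho, p_trend = stats.spearmanr(x, y)
print(f"normality p = {p_normal:.1e}, Spearman rho = {rho:.2f}")
```

One diagnostic plus one rank-based fallback is often all it takes to know whether a parametric result can be trusted.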
Just goes to show that when you're doing science you need to test and validate your experimental methodology, including the tools you use. In computer vision, it's common for algorithms to need some kind of calibration, which can reveal statistical errors or problems. I wonder why none of the researchers thought to do some very simple validation of the data?
And I wonder if the software was at one point correct and then this bug was introduced at a later point? Many times it feels like after a company does a formal scientific validation they never do it again despite the fact they have engineers constantly working on the code...
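One simple validation in that spirit, offered as a hedged sketch rather than anything a real package does: feed a thresholding step pure Gaussian noise and check that the nominal tail probability actually comes out:

```python
import numpy as np

def suprathreshold_fraction(volume, z):
    """Toy stand-in for one pipeline step: fraction of voxels above a z threshold."""
    return float((volume > z).mean())

def validate_on_null_data(n_runs=200, shape=(16, 16, 16), z=1.645):
    """On pure N(0,1) noise the fraction above z=1.645 should be ~5%.
    A broken threshold, bad noise model, or newly introduced bug shows up here."""
    rng = np.random.default_rng(0)
    fracs = [suprathreshold_fraction(rng.standard_normal(shape), z)
             for _ in range(n_runs)]
    return float(np.mean(fracs))

print(validate_on_null_data())  # close to 0.05
```

Rerunning a check like this after every code change would catch exactly the scenario above: software that was correct at validation time and broken later.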
Well, I think the problems with interpreting fMRI scans have been at least vaguely known since 2009, when a dead salmon's neurons "activated" as it was asked to judge the emotion of a person in a photo: http://www.wired.com/2009/09/fmrisalmon/ (also covered at http://blogs.scientificamerican.com/scicurious-brain/ignobel...). See also "The principled control of false positives in neuroimaging" (Bennett, Wolford, Miller 2009): http://scan.oxfordjournals.org/content/4/4/417.full
There's this weird snobbishness about fMRI: it's uniquely terrible, the people are hacks, etc. It seems particularly common amongst first and second-year grad students who are doing something they think is "harder" science. I hate it.
In the right hands, fMRI can be a really powerful technique for probing neural activity in healthy human subjects; in fact, it's one of the only ways to do so that has decent spatial resolution and thus lets you link brain structure and function.
It certainly does have problems. There are plenty of ways to subtly mess up the data analysis or over-interpret results. The experimental design is often lacking, etc.
However, I think these largely reflect the very low barrier of entry to fMRI research--all you really need is a laptop and somewhere willing to sell you scanner time (almost all major universities or hospitals)--rather than some intrinsic limitation of the field. The good work remains very good.
It's not that the research is 100% wrong, but the way the research is reported in the news is generally wrong. No, researchers can't really tell if you're thinking about a sailboat from looking at an fMRI, despite whatever the newspaper says.
Source: worked in an fMRI research lab for 4 years.
There's a fair bit of junky data and quite a few analysis methods that cannot compare data accurately between magnets, position wrt isocentre, or even processing software revisions. But your claim is a lot bolder than that.
Source for a layman?
However, most neurologists view the vast majority of fMRI research as junk science.