My previous job was at a startup doing BMI (brain-machine interface) research. For the first time I had the chance to work with expensive neural-signal measurement tools (mainly EEG for us, but some teams used fMRI), and I quickly learned how absolutely horrible the signal-to-noise ratio (SNR) is in this field.
And how it was almost impossible to reproduce many published and well-cited results. It was both exciting and jarring to talk with the neuroscientists, because they of course knew about this and knew how to read the papers, but the people on the funding/business side naturally didn't spend much time putting emphasis on it.
One of the teams presented an accepted paper that basically used deep learning (attention) to predict images a person was thinking of from fMRI signals. When I asked, "but DL is proven to be able to find patterns even in random noise, so how can you be sure this is not just overfitting to artefacts?", there wasn't really any answer (or rather, the publication didn't take that into account, although it can be determined experimentally). Still, a month later I saw Tech Explore or some tech news site run an article about it, something like "AI can now read your brain", with the 1984 implications yada yada.
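(The worry is easy to demonstrate. A minimal sketch with made-up dimensions standing in for voxels and trials: with far more features than samples, a linear model can "decode" pure noise perfectly on training data, then falls back to chance on held-out data.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, p = 20, 200, 500  # far fewer samples than features, as in many fMRI studies

X_train = rng.standard_normal((n_train, p))    # pure noise standing in for "brain data"
y_train = rng.integers(0, 2, n_train) * 2 - 1  # random +/-1 labels

# With p >> n, least squares can fit any labels exactly (minimum-norm solution)
w = np.linalg.pinv(X_train) @ y_train
train_acc = np.mean(np.sign(X_train @ w) == y_train)

X_test = rng.standard_normal((n_test, p))      # fresh noise
y_test = rng.integers(0, 2, n_test) * 2 - 1
test_acc = np.mean(np.sign(X_test @ w) == y_test)

print(train_acc)  # 1.0: perfect "decoding" of noise
print(test_acc)   # near chance on held-out data
```

Which is why held-out validation (ideally on a separate session or subject) is the only honest check.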
So this is indeed something most practitioners, at the master's and PhD level, realize relatively early.
So now when someone says "you know mindfulness is proven to change your brainwaves?", I always add my story: "yes, but the study was done with EEG, so I don't trust the scientific backing of it" (though anecdotally, it helps me).
There is a lot of reliable science done using EEG and fMRI; I believe you learned the wrong lesson here. The important thing is to treat motion and physiological sources of noise as a first-order problem that must be taken very seriously and requires strict data-quality inclusion criteria. As for deep learning in fMRI/EEG, your response about overfitting is too sweepingly broad to apply to the entire field.
To put it succinctly, I think you have overfit your conclusions to the amount of data you have seen.
But none of this (signal/noise ratio, etc) is related to the topic of the article, which claims that even with good signal, blood flow is not useful to determine brain activity.
The difference is that EEG can be used usefully in e.g. biofeedback training and the study of sleep phases, so there is in fact enough signal here for it to be broadly useful in some simple cases. It is not clear fMRI has enough signal for anything even as simple as these things though.
I'm not sure I understand. Wouldn't any prediction result above chance (in the image mind-reading study) be significant? If the study was performed correctly, I don't really need to know much about fMRI to tell whether it's an interesting result or not.
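Not quite: "above chance" on its own isn't significant; it depends on how far above chance and on how many trials. A sketch with made-up numbers (chance level 1/10, as if classifying among ten candidate images):

```python
from scipy.stats import binomtest

chance = 0.10  # e.g. guessing among ten candidate images

# Same observed accuracy (20%), very different trial counts
p_small = binomtest(4, n=20, p=chance, alternative="greater").pvalue
p_large = binomtest(40, n=200, p=chance, alternative="greater").pvalue

print(p_small)  # not significant: 20% correct on 20 trials is easily luck
print(p_large)  # highly significant: 20% correct on 200 trials is not
```

And that is before accounting for the multiple analysis choices a study may have tried.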
> When I asked "but DL is proven to be able to find pattern even in random noise, so how can you be sure this is not just overfitting to artefact?"
So here you say quite a mouthful. If you train it on a pattern it'll see that pattern everywhere - think about the early "Deep Dream" trippy-dogs-pictures nonsense that was pervasive about eight or nine years ago.
I repaired a couple of cameras for someone who was working with a large university hospital about 15 years ago, where they were using admittedly 2010s-era "Deep Learning" to analyse biopsy scans for signs of cancer. It worked brilliantly, at least with the training materials, incredible hit rate, not too terrible false positive rate (no biggie, you're just trying to decide if you want to investigate further), really low false negative rate (if there was cancer it would spot it, for sure, and you don't want to miss that).
But in real-world patient data it went completely mental. The sample data was real-world patient data, too, but on "uncontrolled" patients, it was detecting cancer all over the place. It also detected cancer in pictures of the Oncology department lino floor, it detected cancer in a picture of a guy's ID badge, it detected cancer in a closeup of my car tyre, and it detected cancer in a photo of a grey overcast sky.
Aw no. Now what?
Well, that's why I looked at the camera for them. They'd photographed the biopsies with one camera on site, from "real patients", but a lot of the "clear" biopsies were from other sites.
You're ahead of me now, aren't you?
The "Deep Learning" system had in fact trained itself on a speck of shit on the sensor of one of the cameras, the one used for most of the "has cancer" biopsies and most of the "real patient under test" biopsies. If that little blob of about a dozen slightly darker pixels was present, then it must be cancer because that's what the grown-ups told it. The actual picture content was largely irrelevant because the blob was consistent across all of them.
I'm not too keen on AI in healthcare, not as a definitive "go/no-go" test thing.
90% of papers I read in computer science / computer security speak of software written or AI models they trained that are nowhere to be found. Not on git nor via email to the authors.
I remember reading a paper back in grad school where the researchers put a dead salmon in the magnet and got statistically significant brain activity readings using whatever the analysis method à la mode was. It felt like a great candidate for the Ig Nobel awards.
That was our paper! We showed that you can get false positives (significant "brain activity" in this case) in fMRI if you don't use the proper statistical corrections. We did win an Ig Nobel for that work in 2012 - it was a ton of fun.
This study is validating a commonplace fMRI measure (the change in blood-oxygenation-level-dependent, or BOLD, signal) by comparing it with a different MRI technique: a multiparametric quantitative BOLD model, derived from two separate MRI scans that measure two different kinds of signal (transverse relaxation rates), then multiplied/divided by a bunch of constants to arrive at a value.
I'm a software engineer in this field, and this is my layman-learns-a-bit-of-shop-talk understanding of it. Both of these techniques involve multiple layers of statistical assumptions and multiple steps of "analysing" data, each of which involves implicit assumptions, rules of thumb and other steps that have never sat well with me. A very basic example of this kind of multi-step data massaging is "does this signal look a bit rough? No worries, let's Gaussian-filter it".
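To make that step concrete, here's roughly what it looks like; the sigma value is an arbitrary choice of mine, which is exactly the point:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
truth = np.sin(t)
signal = truth + 0.5 * rng.standard_normal(t.size)  # a "rough-looking" signal

# sigma (in samples) is an analyst's eyeballed choice, not a principled one
smoothed = gaussian_filter1d(signal, sigma=5)

print(np.std(signal - truth))    # residual noise before smoothing
print(np.std(smoothed - truth))  # smaller after, but sharp real features would blur too
```

The filter genuinely suppresses noise; the catch is that it would blur away a genuine sharp feature just as happily.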
A lot of my skepticism is due to ignorance, no doubt, and I'd probably be braver in making general claims from the image I get in the end if I was more educated in the actual biophysics of it. But my main point is that it is not at all obvious that you can simply claim "signal B shows that signal A doesn't correspond to actual brain activity", when it is quite arguable whether signal B really does measure the ground truth, or whether it is simply prone to different modelling errors.
In the paper itself, the authors say that it is limited by methodology, but because they don't have the device to get an independent measure of brain activation, they use quantitative MRI. They also say it's because of radiation exposure and blah blah, but the real reason is their uni can't afford a PET scanner for them to use.
"The gold standard for CBF and CMRO2 measurements is 15O PET; but this technique requires an on-site cyclotron, a sophisticated imaging setup and substantial experience in handling three different radiotracers (CBF, 15O-water; CBV, 15O-CO; OEF, 15O-gas) of short half-lives [8,35]. Furthermore, this invasive method poses certain risks to participants owing to the exposure to radioactivity and arterial sampling."
I'll get raked for this, but as someone in the field, I can say with high confidence that the majority of comments in this thread are not from imaging experts, and mostly (mis)informed by popular science articles. I do not have the time to properly respond to each issue I see. The literature is out there in any case.
I was a grad student at UCSD when Ed Vul published Voodoo Correlations in Social Neuroscience [1], which stoked a severe backlash from the fMRI syndicate resulting in a title change to Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition [2]. There is a lot of interesting commentary around this article (e.g., “Voodoo” Science in Neuroimaging: How a Controversy Transformed into a Crisis [3]). To me it was fascinating to watch Vul (an incredibly rare talent, perhaps a genius), take on an entire field during his 1st year as assistant professor.
This study was really highlighting a statistical issue which would occur with any imaging technique with noise (which is unavoidable). If you measure enough things, you'll inevitably find some false positives. The solution is to use procedures such as Bonferroni and FDR to correct for the multiple tests, now a standard part of such imaging experiments. It's a valid critique, but it's worth highlighting that it's not specific to fMRI or evidence of shaky science unless you skip those steps (other separate factors may indicate shakiness though).
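For the curious, both corrections are simple enough to sketch from scratch (a minimal illustration, not production code):

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha / m (controls the family-wise error rate)."""
    p = np.asarray(pvals)
    return p < alpha / p.size

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up FDR: reject the k smallest p-values where p_(k) <= (k/m) * alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank passing its threshold
        reject[order[: k + 1]] = True
    return reject

# 10,000 "voxels" of pure noise: uncorrected testing finds hundreds of false positives
rng = np.random.default_rng(0)
null_p = rng.uniform(size=10_000)
print((null_p < 0.05).sum())             # hundreds of "active" voxels in noise
print(bonferroni(null_p).sum())          # typically zero after correction
print(benjamini_hochberg(null_p).sum())  # typically zero after correction
```

This is the whole point of the salmon study: uncorrected voxel-wise thresholds will always light something up.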
In related news: ironically, psychedelics disrupt the normal link between the brain's neuronal activity and blood flow, casting some doubt on findings that more of the brain is connected under psychedelics (since fMRI showed elevated blood flow, suggesting higher brain activity).
As a caveman pondering "Stoned Ape Theory" during the rise of MRI in the 80s, having done light reading of Huxley, McKenna et al., the claim that vascular variations were so tied to thought patterns in a purely calm and cognitive activity was fascinating. To see the brain of someone as they went through a deck of cards and paused to look at each... astounding! But frustrating also. My first question was always: were the person's hands busy going through the deck and holding up the cards, focusing on them... or were they merely shown the cards while sitting still? The popsci articles often glossed over that information, and any simple control for coordinated body movement played second fiddle to the novelty of it all.

Then I worked in a club where I was often surrounded by tripping people. I'd fetch them glasses of water and they would always drink. Do you know you can smell them? They smell like fear; the experience has every sweat gland working overtime. When I learned that, I greeted "tripping people's MRIs light up, indicating enhanced brain connectivity" with a grain of salt. I would not be the least bit surprised if the sweat-gland response also puts the brain's vascular system into overdrive.
This isn’t entirely news to people in the field doing research, but it’s important information to keep in mind when anyone starts pushing fMRI (or SPECT) scans into popular media discussions about neurology or psychiatry.
There have been some high profile influencer doctors pushing brain imaging scans as diagnostic tools for years. Dr. Amen is one of the worst offenders with his clinics that charge thousands of dollars for SPECT scans (not the same as the fMRI in this paper but with similar interpretation issues) on patients. Insurance won’t cover them because there’s no scientific basis for using them in diagnosing or treating ADHD or chronic pain, but his clinics will push them on patients. Seeing an image of their brain with some colors overlayed and having someone confidently read it like tea leaves is highly convincing to people who want answers. Dr. Amen has made the rounds on Dr. Phil and other outlets, as well as amassing millions of followers on social media.
Dr. Mike, a rare YouTube doctor who is not peddling supplements and wares, and thus seems to be at the forefront of medical critical thinking on the platform, interviewed Dr. Amen recently[0]. I haven't finished the interview yet, but having watched some others, generally the approach is to let the interviewee make their grandiose claims, agree with whatever vague generalities and truisms they use in their rhetoric (yes it's true, doctors don't spend enough time explaining things to patients!), and then lay into them on the actual science and evidence.
Back in 2009 I remember reading about how a dead salmon apparently shows brain activity in fMRI without proper statistical methods. fMRI studies are frequently invoked unscientifically and out of context.
I saw a clinical report of his on a patient; he puts a graphic of their "brain scan" in the report, but it's basically a vector graphic of a brain with a multicolor MS Paint gradient...
>> Seeing an image of their brain with some colors overlayed ... is highly convincing
Indeed, there's been quite a few studies [1] that find just including any old image of a brain with stuff highlighted will cause a paper to be perceived as more scientifically credible.
Pop science guru-ing is a giant flashing red sign for me. I am never even a little surprised when the latest “sense maker” or pop science guru comes out as a complete loon or is consumed by some kind of scandal.
Influencers in general are always suspect. The things that get you an audience fast are trolling or tabloid-ish tactics like conspiracism.
There are good ones but you have to be discerning.
This headline is a bit misleading on first read, since the finding only affects functional MRI (fMRI), which has been controversial for a long time. A prominent example is the "activity" that was detected in a dead salmon.
As someone who used to work at the Cognitive Neurophysiology Lab at the Scripps Institute, doing some work on functional brain imaging, I can confirm this was not news even thirty years ago. I guess this is trying to make some point to lay people?
fMRI has been abused by a lot of researchers, doctors, and authors over the years even though experts in the field knew the reality. It’s worth repeating the challenges of interpreting fMRI data to a wider audience.
Are there proposed reasons for increased blood flow to brain regions other than neural activity? Are neurons flushing waste products or something when less active?
The BOLD response (the coupling between oxygenation and neuronal activity) is pretty much accepted in neuroscience. There have been criticisms of it (non-neuronal contributions, the mystery of negative responses/correlations), but in general it is accepted.
I might be oversimplifying, but isn't a lot of our neurological understanding about ADHD based on "fMRI shows decreased activity in the frontal cortex"? Or for that matter, our neurological understanding of a lot of mental health conditions.
I know the actual diagnosis is several times more layered than this attempt at an explanation, but I always felt that trying to explain the brain by peering at it from outwards is like trying to debug code by looking at a motherboard through a bad microscope.
I wonder how much variation there is between a person who does certain mental activity regularly vs a person who rarely does it.
If they were to measure a person who performs mental arithmetic on a daily basis, I'd expect their brain activity and oxygen consumption to be lower than those of a person who never does it. How much difference would that make?
It involved going to the lab and practicing the thing (a puzzle / maze) I would be shown during the actual MRI. I think I went in to “practice” a couple times before showing up and doing it in the machine.
IIRC the purpose of practicing was exactly that: to avoid me trying to learn something during the scan (since that wasn't the intention of the study).
In other words, I think you can control for that variable.
(Side note: I absolutely fell asleep during half the scan. Oops! I felt bad, but I guess that’s a risk when you recruit sleep deprived college kids!)
I worked in an fMRI lab briefly as a grad student. I suspect you'd be correct but perhaps not exactly why you'd expect. Studies using fMRI measure a blood-oxygenation-level-dependent (BOLD) signal in the brain. This is thought to be an indirect measure of neural activity because a local increase in neural firing rate produces a local increase in the need for, and delivery of, oxygenated blood.
The question then is: do you expect a person who is really good at mental arithmetic to have less neural firing on arithmetic tasks (e.g., what is 147 x 38) than the average Joe? I would hypothesize yes overall to solve each question; however, I'd also hypothesize the momentary max intensity of the expert to peak higher. Think of a bodybuilder vs. a SWE bench-pressing 100 lbs for 50 reps. The bodybuilder has way more muscle to devote to a single rep, and will likely finish the set in 20 seconds, while the SWE is going to take like 30 minutes ;)
For task fMRI, the test-retest reliability is so poor it should probably be considered useless or bordering on pseudoscience, except for in some very limited cases like activation of the visual and/or auditory and/or motor cortex with certain kinds of clear stimuli. For resting-state fMRI (rs-fMRI), the reliabilities are a bit better, but also still generally extremely poor [1-3].
There are also two major theoretical concerns re fMRI that IMO are devastating and make the whole thing border on nonsense. One is the assumed relation between the BOLD signal and "activation", and two is the extremely poor temporal resolution of fMRI.
It is typically assumed that the BOLD response (increased oxygen uptake) (1) corresponds to greater metabolic activity, and (2) increased metabolic activity corresponds to "activation" of those tissues. This trades dubiously on the meaning of "activation", often assuming "activation = excitatory", when we know in fact much metabolic activity is inhibitory. fMRI cannot distinguish between these things.
There are other deeper issues, in that it is not even clear to what extent the BOLD signal is from neurons at all (could be glia), and it is possible the BOLD signal must be interpreted differently in different brain regions, and that the usual analyses looking for a "spike" in BOLD activity are basically nonsense, since BOLD activity isn't even related to this at all, but rather the local field potential, instead. All this is reviewed in [4].
Re: temporal resolution, essentially, if you pay attention to what is going on in your mind, you know that a LOT of thought can happen in just 0.5 seconds (think of when you have a flash of insight that unifies a bunch of ideas). Or think of how quickly processing must be happening in order for us to process a movie or animation sequence where there are up to e.g. 10 cuts / shots within a single second. There is also just biological evidence that neurons take only milliseconds to spike, and that a sequence of spikes (well under 100ms) can convey meaningful information.
However, the best temporal resolution (i.e. shortest repetition time, TR) in typical fMRI is only around 0.7 seconds. IMO this means that the ONLY way to analyze fMRI that makes sense is to see it as an emergent phenomenon that may be correlated with certain kinds of long-term activity, reflecting cyclical / low-frequency patterns of the BOLD response. I.e. rs-fMRI is the only fMRI that has ever made much sense a priori. A solution may be to combine EEG (extremely high temporal resolution, with clear uses in monitoring realtime brain changes like meditative states and in biofeedback training) with fMRI, as in e.g. [5]. But it may still well be the case that fMRI remains mostly useless.
[1] Elliott, M. L., Knodt, A. R., Ireland, D., Morris, M. L., Poulton, R., Ramrakha, S., Sison, M. L., Moffitt, T. E., Caspi, A., & Hariri, A. R. (2020). What Is the Test-Retest Reliability of Common Task-Functional MRI Measures? New Empirical Evidence and a Meta-Analysis. Psychological Science, 31(7), 792–806. https://doi.org/10.1177/0956797620916786
[2] Herting, M. M., Gautam, P., Chen, Z., Mezher, A., & Vetter, N. C. (2018). Test-retest reliability of longitudinal task-based fMRI: Implications for developmental studies. Developmental Cognitive Neuroscience, 33, 17–26. https://doi.org/10.1016/j.dcn.2017.07.001
[3] Termenon, M., Jaillard, A., Delon-Martin, C., & Achard, S. (2016). Reliability of graph analysis of resting state fMRI using test-retest dataset from the Human Connectome Project. NeuroImage, 142, 172–187. https://doi.org/10.1016/j.neuroimage.2016.05.062
[5] Ahmad, R. F., Malik, A. S., Kamel, N., Reza, F., & Abdullah, J. M. (2016). Simultaneous EEG-fMRI for working memory of the human brain. Australasian Physical & Engineering Sciences in Medicine, 39(2), 363–378. https://doi.org/10.1007/s13246-016-0438-x
Even if neuronal activity is (obviously) faster, the (assumed) neurovascular coupling is slower. Typically it takes several seconds to get a BOLD response after a stimulus or task, and this has nothing to do with fMRI sampling rate (fNIRS can have a much faster sampling rate, but the BOLD response it measures is just as slow). Think of it this way: neuronal spiking happens on a scale of up to a few hundred milliseconds, while the body changes blood flow much more slowly than that.
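A toy illustration of that low-pass behaviour, assuming the commonly used double-gamma shape for the hemodynamic response function (the parameters below are just the textbook defaults):

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1
t = np.arange(0, 30, dt)

# Double-gamma HRF: positive lobe peaking around ~5 s, small late undershoot
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.sum()

# Two neural events only 0.5 s apart -- fast at the neural level
neural = np.zeros(t.size)
neural[[0, 5]] = 1.0

bold = np.convolve(neural, hrf)[: t.size]
peak_time = t[np.argmax(bold)]
print(peak_time)  # the BOLD peak lags the events by several seconds, and the two events merge
```

However fast you sample, the two events come out as a single smooth bump several seconds later; the blurring is in the physiology, not the scanner.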
The issue is that measuring the BOLD response, even in the best-case scenario, is a very, very indirect measure of neuronal activity. This is typically lost when people refer to fMRI studies as discovering "mental representations" in the brain and other nonsense, but here we are. Criticising the validity of the BOLD response itself, though, is certainly interesting.
Re your last point: that is not true. We can measure arbitrarily quickly (the Nottingham group does some 3D EVI at ~100 ms TRs). You can also reduce volumes and just look at single slices etc.; a lot of the fundamental research did this (WashU / Minnesota / etc. in the 90s). It's just not all that useful because the SNR tanks and the underlying neurovascular response is inherently low-pass. There is a much faster "initial dip" where the BOLD signal swings the other way and crosses zero (from localized accumulation of deoxy-Hb before the inrush of oxy-Hb from the vascular response). It's a lot better correlated with LFP / spiking measures, but just very hard to measure on non-research scanners...
What's surprising is the desire to have either a silver bullet or no solution at all.
What's still amazing is that fMRI can provide more visual context of what's happening in the brain and in what region, and of what activities can help improve it.
There are other complementary technologies, like qEEG and SPECT, that can shed light as well.
It does seem that fMRI can be more of a snapshot photo, while technologies like SPECT can provide more of a regional time-lapse of activity.
> Many fMRI studies on psychiatric or neurological diseases – from depression to Alzheimer’s – interpret changes in blood flow as a reliable signal of neuronal under- or over-activation. Given the limited validity of such measurements, this must now be reassessed
idk about that; the brain is complicated, and blood flow itself may well be a factor worth interpreting
fMRI is a cool, expensive tech, like so many others in genetics and other diagnostics. These technologies create good jobs ("doing well by doing good").
But as other comments point out, and practitioners know, their usefulness for patients is more dubious.
Why did TUM let this misleading headline front the news release? Don't we have enough issues with academia? The results just mean that BOLD is an imperfect proxy.
It is especially unforgivable that the title of the news release itself is about "40 percent of MRI signals". What, as in all MRI, not just fMRI? Hopefully an honest typo and not just ignorance.
1. http://prefrontal.org/blog/2009/01/voodoo-correlations-in-so...
2. https://journals.sagepub.com/doi/10.1111/j.1745-6924.2009.01...
3. https://www.mdpi.com/2076-0760/12/1/15
fMRI has always had folks highlighting how shaky the science is. It's not the strongest of experimental techniques.
Direct link to the poster presentation: http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf
Risk of false positives in fMRI of post-mortem Atlantic salmon (2010) [pdf] - https://news.ycombinator.com/item?id=15598429 - Nov 2017 (41 comments)
Scanning dead salmon in fMRI machine (2009) - https://news.ycombinator.com/item?id=831454 - Sept 2009 (1 comment)
https://source.washu.edu/2025/12/psychedelics-disrupt-normal...
[0] https://www.youtube.com/watch?v=J-SHgZ1XPXs
https://www.wired.com/2009/09/fmrisalmon/
[1] https://pubmed.ncbi.nlm.nih.gov/17803985/
We sped up fMRI analysis using distributed computing (MapReduce) and GPUs back in 2014.
Funny how nothing has changed.
[+] [-] instagraham|3 months ago|reply
I know the actual diagnosis is several times more layered than this attempt at an explanation, but I always felt that trying to explain the brain by peering at it from the outside is like trying to debug code by looking at a motherboard through a bad microscope.
[+] [-] zerof1l|3 months ago|reply
If they were to measure a person who performs mental arithmetic on a daily basis, I'd expect his brain activity and oxygen consumption to be lower than those of a person who never does it. How much difference would that make?
[+] [-] cj|3 months ago|reply
It involved going to the lab and practicing the thing (a puzzle / maze) I would be shown during the actual MRI. I think I went in to “practice” a couple times before showing up and doing it in the machine.
IIRC the purpose of practicing was exactly that: to avoid me trying to learn something during the scan (since that wasn’t the intention of the study).
In other words, I think you can control for that variable.
(Side note: I absolutely fell asleep during half the scan. Oops! I felt bad, but I guess that’s a risk when you recruit sleep deprived college kids!)
[+] [-] subroutine|3 months ago|reply
The question then is, do you expect a person who is really good at mental arithmetic to have less neural firing on arithmetic tasks (e.g., what is 147 x 38) than the average joe. I would hypothesize yes overall to solve each question; however, I'd also hypothesize the momentary max intensity of the expert to peak higher. Think of a bodybuilder vs. a SWE bench-pressing 100 lbs for 50 reps. The bodybuilder has way more muscle to devote to a single rep, and will likely finish the set in 20 seconds, while the SWE is going to take like 30 minutes ;)
[+] [-] D-Machine|3 months ago|reply
For task fMRI, the test-retest reliability is so poor it should probably be considered useless or bordering on pseudoscience, except in some very limited cases like activation of the visual and/or auditory and/or motor cortex by certain kinds of clear stimuli. For resting-state fMRI (rs-fMRI), the reliabilities are a bit better, but still generally extremely poor [1-3].
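For readers unfamiliar with how "test-retest reliability" is quantified here: it is usually an intraclass correlation coefficient (ICC), where values below roughly 0.4 are conventionally considered poor. A toy simulation (my own sketch, not taken from the papers cited below; the function name `icc_2_1` is mine) shows how session noise swamping a stable per-subject signal yields exactly that kind of number:

```python
import numpy as np

def icc_2_1(data):
    """Shrout & Fleiss ICC(2,1): two-way random effects, absolute agreement.
    data: (n_subjects, k_sessions) array of per-subject measures."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
trait = rng.normal(0, 1, 200)   # the stable, per-subject "true" signal
# session noise with twice the SD of the signal -> reliability near 0.2
sessions = np.stack([trait + rng.normal(0, 2, 200) for _ in range(2)], axis=1)
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")
```

With noise variance 4x the signal variance, the expected ICC is about 1/(1+4) = 0.2 — squarely in the "poor" band that the papers below report for many task-fMRI measures.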
There are also two major and devastating theoretical concerns about fMRI that IMO make the whole thing border on nonsense. One is the assumed relation between the BOLD signal and "activation", and the other is the extremely poor temporal resolution of fMRI.
It is typically assumed that the BOLD response (a local increase in blood oxygenation) (1) corresponds to greater metabolic activity, and that (2) increased metabolic activity corresponds to "activation" of those tissues. This trades dubiously on the meaning of "activation", often assuming "activation = excitatory", when we know that in fact much metabolic activity is inhibitory. fMRI cannot distinguish between these things.
There are other deeper issues, in that it is not even clear to what extent the BOLD signal is from neurons at all (could be glia), and it is possible the BOLD signal must be interpreted differently in different brain regions, and that the usual analyses looking for a "spike" in BOLD activity are basically nonsense, since BOLD activity isn't even related to this at all, but rather the local field potential, instead. All this is reviewed in [4].
Re: temporal resolution, essentially, if you pay attention to what is going on in your mind, you know that a LOT of thought can happen in just 0.5 seconds (think of when you have a flash of insight that unifies a bunch of ideas). Or think of how quickly processing must be happening in order for us to process a movie or animation sequence where there are up to e.g. 10 cuts / shots within a single second. There is also just biological evidence that neurons take only milliseconds to spike, and that a sequence of spikes (well under 100ms) can convey meaningful information.
However, the fastest repetition times (TRs) in fMRI are only around 0.7 seconds. IMO this means the ONLY way to analyze fMRI that makes sense is to see it as an emergent phenomenon that may be correlated with certain kinds of long-term activity, i.e. low-frequency, cyclical patterns of the BOLD response. That is, rs-fMRI is the only fMRI that has ever made much sense a priori. A possible remedy is to combine EEG (extremely high temporal resolution, with clear uses in monitoring realtime brain changes like meditative states and in biofeedback training) with fMRI, as in e.g. [5]. But it may well still be the case that fMRI remains mostly useless.
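To illustrate the temporal-resolution point, here is a quick simulation (my own sketch using a canonical double-gamma HRF with commonly used default parameters, not something from the cited papers): a 50 ms burst and a 500 ms burst of neural activity, an order of magnitude apart in duration, produce BOLD time series that are almost perfectly correlated once convolved with the HRF and sampled at a fast TR of 0.72 s.

```python
import numpy as np
from math import gamma

dt = 0.01                       # 10 ms simulation grid
t = np.arange(0.0, 30.0, dt)   # seconds

def hrf(t, a1=6.0, a2=16.0, ratio=1/6):
    # canonical double-gamma hemodynamic response (SPM-style default parameters)
    return (t ** (a1 - 1) * np.exp(-t) / gamma(a1)
            - ratio * t ** (a2 - 1) * np.exp(-t) / gamma(a2))

h = hrf(t)

# two very different neural events: a 50 ms burst vs. a 500 ms burst
burst_short = np.zeros_like(t); burst_short[:5] = 1.0
burst_long = np.zeros_like(t);  burst_long[:50] = 1.0

bold_short = np.convolve(burst_short, h)[:len(t)] * dt
bold_long = np.convolve(burst_long, h)[:len(t)] * dt

# sample as a scanner would, at a fast TR of 0.72 s
tr_idx = np.arange(0, len(t), int(0.72 / dt))
r = np.corrcoef(bold_short[tr_idx], bold_long[tr_idx])[0, 1]
print(f"correlation between the two sampled BOLD series: {r:.4f}")
```

The amplitudes differ (the longer burst integrates more signal), but the shape — which is what event-related analyses fit — is essentially indistinguishable.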
[1] Elliott, M. L., Knodt, A. R., Ireland, D., Morris, M. L., Poulton, R., Ramrakha, S., Sison, M. L., Moffitt, T. E., Caspi, A., & Hariri, A. R. (2020). What Is the Test-Retest Reliability of Common Task-Functional MRI Measures? New Empirical Evidence and a Meta-Analysis. Psychological Science, 31(7), 792–806. https://doi.org/10.1177/0956797620916786
[2] Herting, M. M., Gautam, P., Chen, Z., Mezher, A., & Vetter, N. C. (2018). Test-retest reliability of longitudinal task-based fMRI: Implications for developmental studies. Developmental Cognitive Neuroscience, 33, 17–26. https://doi.org/10.1016/j.dcn.2017.07.001
[3] Termenon, M., Jaillard, A., Delon-Martin, C., & Achard, S. (2016). Reliability of graph analysis of resting state fMRI using test-retest dataset from the Human Connectome Project. NeuroImage, 142, 172–187. https://doi.org/10.1016/j.neuroimage.2016.05.062
[4] Ekstrom, A. (2010). How and when the fMRI BOLD signal relates to underlying neural activity: The danger in dissociation. Brain Research Reviews, 62(2), 233–244. https://doi.org/10.1016/j.brainresrev.2009.12.004, https://scholar.google.ca/scholar?cluster=642045057386053841...
[5] Ahmad, R. F., Malik, A. S., Kamel, N., Reza, F., & Abdullah, J. M. (2016). Simultaneous EEG-fMRI for working memory of the human brain. Australasian Physical & Engineering Sciences in Medicine, 39(2), 363–378. https://doi.org/10.1007/s13246-016-0438-x
[+] [-] freehorse|3 months ago|reply
Even if neuronal activity is (obviously) faster, the (assumed) neuro-vascular coupling is slower. Typically there are several seconds till you get a BOLD response after a stimulus or task, and this has nothing to do with fMRI sampling rate (fNIRS can have much faster sampling rate, but the BOLD response it measures is equally slow, too). Think of it as that neuronal spiking happens in a range of up to some hundred milliseconds and the body changing the blood flow happens much slower than that.
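The several-second lag can be made concrete with the canonical double-gamma HRF (the parameter values below are the commonly used SPM-style defaults; this is an illustrative sketch, not a physiological model):

```python
import numpy as np
from math import gamma

def hrf(t, a1=6.0, a2=16.0, ratio=1/6):
    """Canonical double-gamma hemodynamic response (SPM-style defaults)."""
    t = np.asarray(t, dtype=float)
    pos = t ** (a1 - 1) * np.exp(-t) / gamma(a1)
    neg = t ** (a2 - 1) * np.exp(-t) / gamma(a2)
    return pos - ratio * neg

t = np.arange(0.0, 30.0, 0.1)   # seconds after a brief neural event at t = 0
h = hrf(t)
print(f"BOLD peak ~{t[np.argmax(h)]:.1f} s after the neural event")
```

An impulse of neural activity at t = 0 produces a BOLD response peaking around 5 s later, followed by a shallow undershoot — the sluggishness is in the vascular coupling itself, regardless of how fast you sample.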
The issue is that measuring the BOLD response, even in the best case scenario, is a very, very indirect measure of neuronal activity. This is typically lost when people refer to fMRI studies as discovering "mental representations" in the brain and other nonsense, but here we are. Criticising the validity of the BOLD response itself, though, is certainly interesting.
[+] [-] j45|3 months ago|reply
What's still amazing is that fMRI can provide more visual context about what's happening in the brain, in which regions, and which activities can help it improve.
There are other complementary technologies, like QEEG and SPECT, that can also shed light on this.
It does seem to be the case that fMRI is more of a snapshot photo, while technologies like SPECT can provide more of a regional time-lapse of activity.
[+] [-] stainablesteel|3 months ago|reply
idk about that, the brain is complicated; blood flow itself may well be a factor that needs interpreting
[+] [-] antipaul|3 months ago|reply
fMRI is a cool, expensive tech, like so many others in genetics and other diagnostics. These technologies create good jobs ("doing well by doing good").
But as other comments point out, and practitioners know, their usefulness for patients is more dubious.