"This study used machine-learning algorithms (Gaussian Naive Bayes) to identify such individuals (17 suicidal ideators versus 17 controls) with high (91%) accuracy, based on their altered functional magnetic resonance imaging neural signatures of death-related and life-related concepts."
Anyone with a Nature subscription want to check whether they simply trained their discriminator and then used it on the same data set? There's no mention in the abstract of testing it against a fresh control set and that's not promising.
"On each fold, the trained classifier was tested on the data of the left-out participant. This procedure was
reiterated for all 34 possible ways of leaving out one participant, yielding 34 classifications whose
averaged accuracies are reported."
Sounds like they overfit their cross-validation score and reported that. The data is actually available here, though: http://www.ccbi.cmu.edu/Suicidal-ideation-NATHUMBEH2017/
Looking at it from either a machine-learning or a statistical point of view, using such a small sample is problematic.
This is a chronic issue with fMRI studies: scans are extremely expensive to administer, which keeps sample sizes small and has led to some very hard-to-reproduce results in the field.
Sci-Hub has it. Looks like leave-one-out cross-validation?
"A Gaussian Naive Bayes (GNB) classifier trained on the data of 33 out of 34 participants predicted the group membership of the remaining participant with a high accuracy of 0.91 (P<0.000001), correctly identifying 15 of the 17 suicidal participants and 16 of the 17 controls"
In the paper they state they performed "34 “leave one participant out” cross-validation cycles (folds)", but the dataset still seems too small to be able to draw any conclusions.
Doesn't 91% seem far too low to be useful for the general population? Consider that only 7% of the background population experiences one or more depressive episodes per year [0] (edit: okay, maybe 8% in youth). Assuming independence, treating 91% as both sensitivity and specificity, and using the higher 8% background rate for youth: 0.91 × 0.08 = 7.3% of the population will receive a true positive result, and (1 − 0.91) × (1 − 0.08) = 8.3% will receive a false positive result. This is "pretty bad": false positives outweigh the true positives, making a positive result nearly useless.
[0]: https://www.healthline.com/health/depression/facts-statistic...
(Consider what happens to people so-diagnosed as suicidal when in fact they are not (false positives). Involuntary psychiatric imprisonment is a terrible thing if it isn't absolutely necessary.)
> I don't have the stats grounding to come up with the proportion of true positives to false positives, but I suspect this would be "pretty bad" — vastly more false positives than true positives
IANAStatistician, but let’s say the system is right 91% of the time and we try to detect those 7% you mentioned. Take 1000 people: 70 are depressive and 930 aren’t. Of those, 70 × 0.91 ≈ 63 will be correctly classified as depressive by the system and 930 × 0.91 ≈ 846 will be correctly classified as non-depressive.
That leaves us with 63 true positives, 846 true negatives, 7 false negatives and 84 false positives. False positives largely outnumber false negatives, but they also outnumber the true positives.
(If a statistician reads this, please correct me if I’m wrong.)
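The back-of-the-envelope numbers above amount to a positive predictive value calculation. Here is a short sketch of it; the 91% accuracy and 7–8% prevalence figures come from the comments, and treating 0.91 as both sensitivity and specificity is an assumption, since the paper reports a single accuracy figure.

```python
# Base-rate check: what does a positive result from a "91% accurate" test
# mean when the condition affects only ~7-8% of the population?
# Assumption (from the thread): 0.91 is both sensitivity and specificity.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition | positive test) via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    for prevalence in (0.07, 0.08):
        ppv = positive_predictive_value(prevalence, 0.91, 0.91)
        print(f"prevalence {prevalence:.0%}: PPV = {ppv:.1%}")
        # prevalence 7%: PPV = 43.2%
        # prevalence 8%: PPV = 46.8%
```

At either base rate, fewer than half of the people flagged would actually be ideators, which matches the conclusion that false positives outweigh true positives.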
Isn't the point of research to advance science one step at a time, not to go from "does this look promising?" to "yes, it works perfectly 100% of the time" in a single quantum leap?
"Machine learning entails training a classifier on a subset of the data and testing the classifier on an independent subset. The crossvalidation procedure iterates through all possible partitionings (folds) of the data, always keeping the training and test sets separate from each other. The main machine learning here uses a GNB classifier (using pooled variance).
[...]
The features used by the classifier to characterize a participant consisted of a vector of activation levels for several (discriminating) concepts in a set of (discriminating) brain locations. To determine how many and which concepts were most discriminating between ideators and controls, a reiterative procedure analogous to stepwise regression was used, first finding the single most discriminating concept and then the second most discriminating concept, reiterating until the next step reduced the accuracy. A similar procedure was used to determine the most discriminating
locations (clusters)."
https://www.nature.com/articles/s41562-017-0234-y
The winner is #3: data leakage leading them to use predictive skill on the training data.
It's training data. There are 17 suicidal and 17 non-suicidal scans, for a total of 34 scans. They trained 34 models, leaving one scan out each time. Of those 34 models, 31 correctly predicted the left-out scan.
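The leakage worry can be demonstrated on pure noise. The sketch below is not the paper's pipeline: it uses random data with the same 17-versus-17 shape, a simple nearest-centroid classifier, and a hypothetical "pick the most discriminating features" step, to show that doing feature selection on all 34 scans before leave-one-out cross-validation inflates accuracy even when there is no signal at all.

```python
import numpy as np

rng = np.random.default_rng(42)
n_per_class, n_features, k = 17, 2000, 10
X = rng.normal(size=(2 * n_per_class, n_features))  # pure noise, no signal
y = np.array([0] * n_per_class + [1] * n_per_class)

def top_k_features(X, y, k):
    """Indices of the k features with the largest class-mean gap."""
    gap = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    return np.argsort(gap)[-k:]

def nearest_centroid_loocv(X, y, select_inside):
    hits = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        feats = (top_k_features(X[train], y[train], k) if select_inside
                 else top_k_features(X, y, k))  # leaky: selection saw fold i
        c0 = X[train][y[train] == 0][:, feats].mean(axis=0)
        c1 = X[train][y[train] == 1][:, feats].mean(axis=0)
        xi = X[i, feats]
        pred = 0 if np.linalg.norm(xi - c0) < np.linalg.norm(xi - c1) else 1
        hits += (pred == y[i])
    return hits / len(y)

acc_leaky = nearest_centroid_loocv(X, y, select_inside=False)
acc_clean = nearest_centroid_loocv(X, y, select_inside=True)
print(f"leaky LOOCV accuracy on noise: {acc_leaky:.2f}")
print(f"clean LOOCV accuracy on noise: {acc_clean:.2f}")
```

The leaky variant chooses the "most discriminating" features while looking at every scan, including the one being predicted, so even random noise classifies far above chance; the nested variant stays near 50%. Whether the paper's 91% is inflated this way depends on whether its stepwise concept/location selection was re-run inside every fold, which the methods quote does not make explicit.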
IANAStatistician, but this seems like a trash result.
Not only that: the researchers admit that 80% of suicidal people deny being suicidal. How, then, can they be sure that the ones in the control group are not suicidal?
"Words like death and cruelty differentially activated the left superior medial frontal area and the medial frontal/anterior cingulate in the individuals with suicidal ideation – these are areas associated with self-referential thought." I wonder how they reacted to "alive" and "humane"
It's the kind of "studies" you call BS on first, then go on to figure out the details. Not a very scientific process for sure, but always produces the correct result.
Slightly off-topic, but the book "Change Your Brain, Change Your Life" was pretty interesting. Perhaps not as scientific as some would prefer, but nonetheless thought-provoking.
Basically you extract a matrix representation of the active or inactive regions, then classify it by having a DNN learn it like you would learn images. Is that a correct assumption?
To the moderators, the title would be more accurate with 'fMRI' as opposed to 'MRI'. The latter is typically used to examine structural brain elements, whereas fMRI is thought to correlate with brain activity and, by extension, thought.
Confusing the two would lead to the more unusual conclusion that suicidal ideation is associated with abnormal brain connectivity, while the authors are instead focusing on neuronal activity.
Specifically fMRI measures blood flow across the brain (the BOLD response) which is correlated with neuron activity. It has good spatial resolution but poor temporal resolution [0], compared to EEG which gives you good temporal resolution but poor spatial resolution.
[0] i.e. you know with precision where in the brain activity occurred, but less precisely when it occurred in time
I didn't even really realize I had these confused until you pointed it out. This makes a lot more sense and helps me understand the results. At first I was confused at how brain structure analysis predicted suicidal tendencies/thoughts.
So what will they do after they detect you are suicidal? Stick you in a psych ward? Yet more attempts at taking away the rights of those going through trauma.
That seems like putting the cart before the horse.
Diagnostic tools could mean faster access to treatment. Currently in the UK, the waiting list for access to mental-health treatment is in the range of two to three years. Transforming "suicidal ideation" from a vague human-given diagnosis into a tool-given diagnosis makes it politically easier to push for that.
In any case, that's not going to happen based on a single study with 91% accuracy.
Maybe they should use this test before gun purchases... I don't think someone suicidal should purchase a gun. Hell, I don't care if they kill themselves, but lately a lot of suicides were murder-suicides... we don't need more of that.
nonbel | 8 years ago
Could it be that "accuracy" actually means AUC?
Could it be that they are reporting predictive skill on the training data?
avip | 8 years ago
https://www.naturalblaze.com/2017/03/scandal-mri-brain-imagi...
verall | 8 years ago
[0] https://www.sciencealert.com/a-bug-in-fmri-software-could-in...
[1] http://www.pnas.org/content/113/28/7900.abstract
sctb | 8 years ago
> Avoid unrelated controversies and generic tangents.
https://news.ycombinator.com/newsguidelines.html
Gibbon1 | 8 years ago
Tip: If you own a gun and are feeling suicidal, give it to a trusted person for safekeeping.