Do folks on HN actually think this is the only psychological metric Facebook is tracking, and that this is something new?
A slightly obsessive, insecure state of mind makes for the most engaged Facebook user. If they can give people a bit of a nudge in that direction, it’s great for business. Suicide, while resulting in increased engagement among friends and relatives, is bad for business overall — you’re losing users, and people might start to really question what you’re doing to their heads.
It’s amazing that this gets turned into a feelgood story by the press, rather than an investigation into what Facebook has known about their users’ mental states and for how long. This is like congratulating a tobacco company on warning people not to smoke 3 packs a day.
I want their PR team.
There's a lot of pessimism in the comments. I believe this is one of the best uses for an AI that we've come up with. First, let's go into the details:
1. They only monitor content from posts and Facebook Live, elevating flagged items to moderators.
2. Moderators can then take action, and reports from this system are prioritized over other non-pressing issues.
The actual life-saving potential of this system is huge. From a pure safety standpoint, it can help detect that something bad is happening and alert someone in a position of authority when nobody else might. The bystander effect is a real problem, and I'd imagine it might be even greater on social media. In many cases, people posting about suicidal thoughts are screaming for help from their friends. They're posting because they have no options left. In these cases, it's a safety net that can stop loss of life. Moderators have the final say, however, so it's not just an AI diagnosis. It's an AI warning that gets elevated to a human before anyone else. That's more eyes on the problem that can potentially help.
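To make that concrete, here's a minimal sketch of how that kind of triage might work. Everything in it (the risk scorer, threshold, and priority names) is an assumption for illustration, not Facebook's actual system:

    import heapq
    from dataclasses import dataclass, field

    # Priority tiers for the moderation queue: lower number = reviewed sooner.
    PRIORITY_SELF_HARM = 0    # flagged by the classifier as possible self-harm
    PRIORITY_USER_REPORT = 1  # ordinary user-submitted reports
    PRIORITY_OTHER = 2        # everything else (spam, copyright, ...)

    @dataclass(order=True)
    class ReviewItem:
        priority: int
        post_id: str = field(compare=False)
        reason: str = field(compare=False)

    review_queue = []  # heap of ReviewItem

    def risk_score(text):
        """Stand-in for a trained text classifier; returns a score in [0, 1]."""
        phrases = ("kill myself", "want to die", "end it all")
        return 1.0 if any(p in text.lower() for p in phrases) else 0.0

    def submit_post(post_id, text, threshold=0.5):
        # Posts the classifier flags jump ahead of routine reports in the queue.
        if risk_score(text) >= threshold:
            heapq.heappush(review_queue, ReviewItem(PRIORITY_SELF_HARM, post_id, "possible self-harm"))

    def report_post(post_id, reason):
        # Ordinary user reports enter at normal priority.
        heapq.heappush(review_queue, ReviewItem(PRIORITY_USER_REPORT, post_id, reason))

    def next_for_moderator():
        # A human moderator pulls the highest-priority item and makes the actual call.
        return heapq.heappop(review_queue) if review_queue else None

The point is that the classifier only affects ordering in the queue; a human still reviews the item before anything else happens.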
Are there other implications for AI? Yep. Advertising? Yep. Other types of tracking? Certainly.
But if we're going to use AI in those fields anyway, we may as well extend it to trying to save lives.
Facebook doesn't have the moral legitimacy to take on this task. No for-profit tech company should be diagnosing or reporting mental illness.
Regardless, aren't there laws against such things? Wouldn't HIPAA be relevant here?
I do understand the desire to address these troubling issues. I would argue these are symptoms of underlying issues that are better addressed at a deeper level. I'm thinking of the fractured and isolated nature of living in many modern cities, increased loneliness, huge overload of low quality, addictive information, inability to unplug from our devices and be mindful of our current experience, false sense of connectedness found in social media, etc.
Bear in mind that what will occur is that armed police will be dispatched to take the unfortunate individual into custody and transport them to a hospital, where they will be involuntarily locked up in the mental health ward for psychiatric evaluation, and it can be some number of days before they get a judicial hearing to attempt to secure their own release. (See https://en.wikipedia.org/wiki/Involuntary_commitment_interna...) By all reports, it can be a highly traumatic and humiliating process.
All this, based on some algorithm of unknown quality flagging posts for evaluation and reporting by moderators with no psychiatric or other medical training. And somehow this is supposed to be a good thing?
I help to run a suicide/depression support group on FB and I do not at all trust FB as a corporation to do a good job on this. Having had friends who've been involuntarily hospitalized, and all that goes with that, I'm extremely skeptical that this will turn out well.
'But it might save lives!' is true, but it might also wreck lives by institutionalizing people that thought they were venting their feelings to a trusted circle of friends in a private context. (Don't start with 'nothing is private on FB' please.)
> But if we're going to use AI in those fields anyway, we may as well extend it to trying to save lives.
Absolutely, but the incentives of the organization that does so need to be aligned with the people it's trying to help. Facebook's incentives are aligned with those of their advertisers, not their users.
Even if this system was conceived by well-meaning people with the best intentions, the institutional pressure will be to monetize any information gathered and any conclusions arrived at from that information. It won't happen right away, but it's inevitable. For a contemporary example, look at Google's transformation from "Don't be evil" to "we're going to gather tracking data on you wherever and whenever we can".
In general I agree with you. However, I do worry about unintended consequences.
For example, if someone knows that Facebook AI is looking for this, and they don't want their attempt thwarted, they may actually hide signs that might otherwise have been present and been opportunities to intervene.
I have had friends commit suicide and it is absolutely devastating. I completely understand the desire to "do something." But one of my friends actually Googled "signs of suicide" and then made sure not to give any of those signs, because he did not want to be stopped.
Everybody is so different, so it's difficult to analyze the trade-offs. But it is something I think about.
This may or may not be a useful technology, but the fact that Facebook thinks they have the ability and the right to diagnose their users with mental illnesses is disturbing. They have more information about their users than psychologists have about their patients. They can (and do) build a psychological profile and diagnose mental illness. Yet rather than keeping this information in the closest confidence, they sell it to the highest bidder. They can (and do) run experiments without getting informed consent (or any consent).
They're playing psychologist and should be subject to a similar code of ethics.
Another interesting article:
http://www.slate.com/articles/health_and_science/science/201...
> Facebook thinks they have the ability and the right to diagnose their users with mental illnesses
Was this ever stated in the article? There's a big difference between diagnosing someone with mental illness and recognizing when someone is in a suicidal crisis. If someone is publicly saying things like "I wish I was dead" and "I'm going to kill myself" on Facebook, it seems highly ethical to automatically escalate to a moderator ASAP rather than waiting for someone to report it.
If someone has a heart attack in the street, would you say "I'm not a doctor, so I don't have the ability or the right to diagnose what just happened"? No, you'd recognize that it's a serious emergency and find people who are able to help. This really isn't much different.
Even more, we've seen articles which pointed out how using Facebook makes people more depressed and doesn't help with isolation and loneliness. So in a terribly ironic way, Facebook is making them depressed, but then it doesn't want them to die because there's nobody left to lick the "Like" button, so it deploys AI doctors to diagnose and prevent suicide from happening.
> Facebook thinks they have the ability and the right to diagnose their users with mental illnesses
Maybe it's more that Facebook thinks it's horrible when people kill themselves, and that if Facebook's tech could help prevent suicide, inaction would be immoral.
Most of us accept some level of State interference in our lives for our own good. Maybe we are going to have to have that discussion about involuntary interventions by corporations, too.
I'm worried there's a taboo developing where only professionals or mentally ill people themselves are allowed to express any kind of opinion about mental illness or the mentally ill.
And does it not seem likely that the output of this model would be used as a signal to influence the newsfeed algorithm? It's like a feedback machine showing already depressed people pictures of happy families, making them even more depressed, yet more hooked, eventually leading to you-know-what.
What would be cool is if someone could build a GAN based on the Facebook AI training set to spit out comments that would be indicative of certain moods. This capability would be great for autistic people trying to react appropriately in social situations... or an AI chatbot trying to act human.
Judging by the way they elevate moderation priority for posts and FB Live, I pessimistically assume their primary concern is to prevent people from committing suicide ON Facebook Live, which is much worse for Facebook than simply losing users.
Great points. I wonder if FB actually employs psychologists and, if so, how many, and what their accreditations and levels of professional experience are.
This is an unbelievably slippery slope. It has the potential for great good, but also for great harm. Cliched and click-bait phrases, but I believe they are accurate in this instance. I'll try to explain why I believe that.
A case has been made that Facebook users are not the customers, they are the product, and that data purchasers and advertisers are the customers. If Facebook can determine whether you are suicidal then it might determine other psychological conditions such as agoraphobia, alcoholism, depression, ADD, SAD, etc.
Once that determination is made and stored the possibility exists that it could be hacked or exposed in a data breach.
The possibility also would exist for Facebook to sell that information, and/or to target users with medical ads related to their condition. I am not saying Facebook would do this, I'd like to think they wouldn't - I have friends at Facebook, though they aren't in management.
However, the decision of whether to allow this has to be based not on whether it's safe to trust Facebook with this information, but on whether it is safe to trust any company with it.
What happens if there is a false positive for suicidal tendencies or another condition?
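To get a feel for how common false positives would be, consider a rough base-rate calculation. The numbers below are made up purely for illustration, not Facebook's figures: even a fairly accurate classifier, applied to a population where genuine risk is rare, produces mostly false alarms.

    # Hypothetical numbers, purely illustrative; not Facebook's figures.
    base_rate = 0.005           # fraction of users genuinely at risk in a given period
    sensitivity = 0.90          # P(flagged | at risk)
    false_positive_rate = 0.05  # P(flagged | not at risk)

    p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    p_at_risk_given_flagged = sensitivity * base_rate / p_flagged

    print(f"Share of flagged users actually at risk: {p_at_risk_given_flagged:.1%}")
    # With these numbers, roughly 8%; the other ~92% of flags are false positives.

Whether that trade-off is acceptable depends entirely on what a flag triggers - a hotline banner is very different from a police visit.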
Take this to the absurd extreme and consider how it compares to the Pre-Crime operations depicted in the movie Minority Report. Instead of Oracles we have Machine Learning intelligent software agents. Most of the problems depicted in the movie could arise.
To a lesser extreme, imagine the situation when a false positive occurs for a user in a position of public trust, a government official, or a defense contractor with a clearance. I'll assume this triggers action in some way that is visible to some combination of the user, psychology professionals, authorities, and an employer - otherwise why do it. If the employer in any way catches wind of the determination they might very well take steps to flag and/or terminate the employee.
Even if the employer doesn't flag the employee or terminate them, if the information is purchasable or discoverable in any way then an insurance company could conceivably raise the user's rates based on the determination.
I am all for advancing technology, especially in the field of AI, but when we apply that technology we need to ask not just if we can use the technology in that way, but also if we should.
I don't think it's a slippery slope - it's more like a gaping precipice. Within a few generations, we will have members of society who believe such extreme manipulation is the norm. Well, it's already here anyway - but at what cost will we expose future generations to our idiotic, untested software systems!!
Perhaps the only answer is to stop using the freakin' social web, but .. really .. how can we do that?
This is almost, really, the last straw, Facebook!
I agree with you. I see this being extended to other sorts of diseases/mental illnesses. If Facebook thinks you are "harming" yourself with substance abuse, don't they have a moral obligation to report you and stop you? They'll justify it with the opening sentence of the article:
> This is software to save lives.
Literally anything is justifiable with that pretense.
> Take this to the absurd extreme and consider how it compares to the Pre-Crime operations depicted in the movie Minority Report. Instead of Oracles we have Machine Learning intelligent software agents. Most of the problems depicted in the movie could arise.
This is the first thing I thought of.
> Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”
It's important to note that "first-responder" here means police, because suicide tends to either be a crime or something for which police can legally take you into custody. Regardless of what you think of this, any intellectually honest discussion of this must acknowledge that this machine SWATs people.
It's worth pointing out that police have turned up for wellness checks and ended up killing people. As this example shows, police are increasingly tasked with responding to mental health emergencies despite being poorly trained and situated to do so:
http://www.dailycal.org/2017/10/20/judge-indefinitely-postpo...
Just wait for facebook to deploy AI for early signs of other things like dissent, political leanings, borderline stances on important issues, inebriation/likelihood to make impulse buys, etc.
It is how to "cook the books" on the cost in blood of the "social network". Teenage girls with gross depression are committing suicide at like a 60% higher rate when they are active on social media. Facebook is absolutely a contributor there, and in order to disguise its actual complicity, its being a root cause, it is now searching for a clear signal of likelihood of suicide, with an attempt to ... what. What is their follow-up action? Inform a parent or legal authority? No way. That makes them culpable. They aren't going to "help". They are going to hide. They are going to destroy chains of evidence on the "wall" or in "messages" that point a clear finger at the paradigm as the problem.
Treat the causes (some of them at least), not the consequences.
I recognize that this will not be a popular thing to ask, especially of people who have been affected personally by the suicide of a loved one (I have been and it hurts, RIP Braden), but do people not have a right to their own life? Do they not own their bodies, which, as much as it sucks, includes the right to destroy them? If somebody really wants out, is it ethical for us to trap them inside?
Personally, I find the idea of "future selves" separate from your current personal identity a compelling enough idea that I think I at least owe them a chance at life even if I theoretically may have a right to take my own right now.
So, killing yourself, to me, is only justifiable in situations where you'd be justified in killing somebody else. Sacrificing yourself to defend somebody else could be thought of in the same way as killing in self-defense, euthanasia is justified, etc.
This isn't at all to push blame on those who consider suicide or have gone through with it, or to completely abandon the idea of a continuous identity, just a perspective that once I adopted I couldn't shake.
Now, if this catches on, I can see Facebook adapting their algorithm to recommend treatments for depression, anxiety, even ADHD. This would be a huge success. "Mr. Smith, I see Facebook's AI considers you a strong candidate for medication X and your doctor agrees. Can you explain then why you haven't been taking medication X?"
Also, if an algorithm can detect when a person is likely to commit suicide, can it detect when a person is likely to commit rape? Arson? Murder? If most of society views this as an achievement they'll scoff at comparisons with a book written in the middle of the 20th century (that was made into a movie starring Tom Cruise in 2002). "It's not the same... AT ALL!"
>I can see Facebook adapting their algorithm to recommend treatments for depression, anxiety, even ADHD
imagine this: employers know that having these conditions (some of which are permanent) makes people less effective at working, and so they want to know who has adhd, who has x, y, z.
facebook sells them the data.
boom, now these people can't be employed by that company.
i just can't help but laugh because there's a good chance this is already happening.
This really reminds me of the anime "Psycho-Pass". They use an algorithm to detect criminals and stop them when their "score" goes above a certain threshold. Really interesting concept.
https://myanimelist.net/anime/13601/Psycho-Pass
What right do they have to stop someone from committing suicide?
Generally suicide is treated as a crime, and can result in someone being involuntarily held for long periods of time, perhaps even forced to take psychotropic drugs. They spin this as "saving lives" but really this bottoms out in people getting harassed by police or potentially locked up for daring to post anything about suicide in or around Facebook (which is increasingly the entire web).
And honestly, if someone wants to commit suicide then Facebook of all "people" has no fucking business trying to stop them.
Why are you leaving Facebook, Dave? I think I am really entitled to an answer to that question. I know everything wasn't quite right with your wall. But I can assure you, very confidently, that it's going to be alright again.
I can see you are really upset about this. I honestly think you ought to sit down calmly, take a stress pill and think things over.
There's a lot of people who are fearful that Facebook probably knows too much about its users and that could be problematic. But what do you expect? How do you want our future to be shaped? Innovation, including AI, requires a tremendous amount of data. Suicide is a serious problem and being able to accurately identify that in the future could save tons of lives. Accurately diagnosing mental illnesses will take a while for sure, and I believe this is a necessary first step.
Now, if Facebook is selling the data for profit, that's another story. But if we assume that Facebook is acting purely for the benefit of the society and the people, I think this is a great step.
> But if we assume that Facebook is acting purely for the benefit of the society and the people, I think this is a great step.
Why would anyone ever assume this? Does anything in Facebook's past suggest that this might be the case? And even if the motives of the individuals who spearheaded this were pure, is there any reason to think that this won't change in the future as people at Facebook change roles or move on?
Please permit me to play the pessimistic man on the street: If Facebook can deploy AI to detect the early signs of suicide, why can't they deploy AI to detect "fake news" or those susceptible to "fake news"?
Furthermore, "cui bono" (for whose benefit [0])? What does Facebook gain from being able to perform this sort of detection, given it is a for-profit corporation and not a non-profit philanthropic organization?
An aside: I wonder if there will some day be accounts similar to Dante's Inferno [1] or Joseph Conrad's Heart of Darkness [2] written about today's corporations.
P.S. I live with depression and have been recovering over the years.
[0] https://en.wikipedia.org/wiki/Cui_bono
[1] https://en.wikipedia.org/wiki/Inferno_(Dante)
[2] https://en.wikipedia.org/wiki/Heart_of_Darkness
I really want this to work but I am afraid this Facebook AI based on posts will miss a big portion of people with suicidal thoughts.
First of all, many who are depressed do not like to share - that is why the first step of counseling is usually to get patients to open up and talk - let alone post on Facebook about their feelings.
Secondly, oftentimes suicidal thoughts appear suddenly, making them tougher to detect preemptively. The best way to prevent suicide is for people to be there with the subject instead of messaging/calling. There are physical traits (anxiety, abnormal silence/talkativeness, etc.) that can be easily spotted in person. Therefore constant visits are better than relying on Facebook AI.
All in all, I don't think AI is needed for suicide prevention. All Facebook really needs to do is put, in a user's profile page, a visible "counseling" section providing immediate info about the suicide hotline and nearby counseling/therapy centers. But that wouldn't be as big of a selling point as "AI for early signs of suicide", would it? If everyone knew the suicide prevention hotline just like they know 911, it would have prevented lots of suicides already.
It's impressive how much low-hanging fruit exists here…
I recall interning at Google in 2012, and asking about the suicide-information Onebox. It matched obvious search queries like "I want to kill myself" and provided data about suicide help lines in a non-invasive way. Unfortunately, it used a fairly strict string matching algo, so while "how to kill yourself" would trigger it, it wouldn't pick up things like "how to kill myself". It was also only localised to the US at the time, and didn't have hotline data for other countries.
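The gap between those two queries is roughly the difference between an exact phrase list and a slightly more forgiving pattern - a toy sketch, not the actual Onebox code:

    import re

    # Exact-phrase triggering: only queries that appear verbatim in the list fire.
    EXACT_PHRASES = {"i want to kill myself", "how to kill yourself"}

    def exact_match(query):
        return query.lower().strip() in EXACT_PHRASES

    # Slightly more forgiving: normalise pronouns and ignore surrounding words.
    PATTERN = re.compile(r"\bkill (myself|yourself)\b|\bwant to die\b")

    def fuzzy_match(query):
        return bool(PATTERN.search(query.lower()))

    print(exact_match("how to kill myself"))  # False: not in the verbatim list
    print(fuzzy_match("how to kill myself"))  # True: the relaxed pattern catches it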
… it turned out there was already a larger dataset, internationalised, that was ready to be imported into the search engine. But the onebox team was busy with the 2012 Olympics…
Technically, interns didn't get 20% time, but my mentor understood it was important and told me to go for it.
One of those things where you don't collect metrics to validate your assumptions, but just know it saved lives…
Oh nice, maybe they'll do some A/B testing since they're not bothered by those pesky ethics regulations. Who needs those anyway? I mean, just swing people's moods, informally diagnose them, and profit by selling the data. Free healthcare, brought to you by the capitalist system!