item 8162942

Show HN: CheckUp – Social network suicide prevention in Go

28 points | rkirkendall | 11 years ago | checkupapp.org | reply

46 comments

[+] phfez|11 years ago|reply
I can understand why you think this might be a good idea. But it doesn't take the depressed person into account at all. It assumes it can solve suicide by checking up on the individual. Do you think the individual wants to be checked up on? Maybe the people checking up on the person are the actual problem in that person's life, which could create an even more serious dilemma for them.

If there is anything that would drive me to suicide, it would be more people thinking that they can 'solve the problem' in this manner.

[+] rkirkendall|11 years ago|reply
Hey thanks for the feedback. I understand your point on the topic, but I would like to point out that CheckUp isn't trying to 'solve' suicide in this manner. The philosophy behind the project is that you, at some level, care about the people you are socially connected to, and you may care if they are contemplating something very bad.

The goal is to create a service that will prioritize self-threatening posts from people you care about above the usual noise of your social network. Publicly posted cries for help can indicate very serious intent and we just want to make sure they don't go unnoticed.

[+] blowski|11 years ago|reply
As somebody who was affected by the suicide of a close friend, it would be wonderful if something existed that could help others with suicidal feelings.

But there is something sad about the idea that an algorithm would be better at detecting suicidal feelings than your closest friends.

[+] 4lgorythm|11 years ago|reply
If a "depressed" person is posting things that flag a SUICIDE watch app, then I'd say they are obviously posting those things to draw attention to their issues. Therefore someone noticing would probably be just what they want.
[+] JoeAltmaier|11 years ago|reply
Cynical. Depression is often the cause of suicide. It's not 'caused' by people trying to socialize. Makes perfect sense to me to monitor my mental state externally, just like I check my blood pressure and weight.
[+] smclaughlin|11 years ago|reply
That's an interesting perspective. I disagree. I think if someone is tweeting about suicidal intentions then it follows that they are not concerned about the privacy of those intentions.
[+] wyager|11 years ago|reply
This seems a little accidentally sinister to me... Something about doing automatic sentiment analysis on someone's data without their permission seems morally questionable. I mean, almost none of us are happy when the government does it, even though it's allegedly with the (very questionably) "good" intention of "fighting (terrorism|drugs|bogeymen)".
[+] fmdud|11 years ago|reply
It's different when it's public data, though - it's data willingly shared.
[+] jwise0|11 years ago|reply
Ricky,

Thanks for this service.

Sometimes, all it takes is to remind someone that people are there for them, and care about them. I don't terribly like Twitter, but I try to read it once a day or so just to keep a view on how people are doing; this service would be incredibly useful to me to make sure that I don't have things that I'd like to pay attention to falling through the cracks.

I know a bunch of people here seem to wish that the service were less aggressive (or that it didn't exist at all): for my application, I'd almost prefer it be more aggressive! Three negative sentiments is a high bar to meet, and I fear the false negative rate could be quite high.

Anyway, thanks very much for writing this. As a whole, classes of applications that help me mediate my interaction with social networks are things that I support, and in specific, this application is incredibly valuable to me.

[+] rkirkendall|11 years ago|reply
Glad you like it! Thanks for the kind words.
[+] conistonwater|11 years ago|reply
Have you thought at all about the ethics of meddling in people's lives? [1]

This "service" seems contrary to basic notions of privacy and freedom.

[1] http://www.davidhume.org/texts/suis.html

[+] jwise0|11 years ago|reply
The only thing the service does is alert the friends of someone who appears to be about to end their own life, but who, for some reason, is posting about it on a public forum.

I can assure you that the service does no meddling whatsoever.

Someone who is swept by the depths of depression, and takes a last moment to emerge long enough to ask their friends for help -- well, I think that it is only fair to counterbalance the feed algorithms that try to keep the rest of us blissfully ignorant, and let them have the support that they ask for.

[+] cjslep|11 years ago|reply
I think this is really cool. A clarifying question about two statements that seem to conflict:

> The goal of the CheckUp project is to detect any serious sign of depression, self-harm or suicide posted to a social network and provide peer support by notifying a concerned party.

> The app works by checking the tweets on your home timeline every few minutes and sending you an email notification if a tweet is flagged.

Shouldn't it instead make the person signing up the "concerned party" to be notified via e-mail, and instead have that concerned party specify which twitter feeds to watch? I'm probably missing something here.

[+] rkirkendall|11 years ago|reply
Hey, thanks for the feedback! I will update the site to clarify the language: what you described is actually how it works. The user who signs up is the concerned party we notify. The app watches all tweets on that user's home timeline. By "home timeline", I was referring to all tweets posted by the user and by everyone that user follows (so, your Twitter feed). That's how it's described in Twitter's API docs, but I can see how that may be a point of confusion.
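For anyone curious about the mechanics, the design described here (poll the home timeline every few minutes, alert on new tweets of interest) might look roughly like the sketch below in Go; the Tweet type, the fetch stub, and the notify stub are placeholders of my own, not CheckUp's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// Tweet is a minimal stand-in for the fields an app like this
// would need from Twitter's home_timeline response.
type Tweet struct {
	ID   int64
	User string
	Text string
}

// newTweetsSince keeps only tweets with IDs above lastID, i.e.
// tweets not yet examined by a previous poll.
func newTweetsSince(tweets []Tweet, lastID int64) []Tweet {
	var fresh []Tweet
	for _, t := range tweets {
		if t.ID > lastID {
			fresh = append(fresh, t)
		}
	}
	return fresh
}

func main() {
	// Stand-ins for the real Twitter API call and the email alert.
	fetchHomeTimeline := func() []Tweet {
		return []Tweet{{ID: 2, User: "alice", Text: "rough day"}}
	}
	notify := func(t Tweet) { fmt.Printf("alert: @%s: %q\n", t.User, t.Text) }

	var lastID int64
	ticker := time.NewTicker(10 * time.Millisecond) // "every few minutes" in production
	defer ticker.Stop()
	for i := 0; i < 2; i++ {
		<-ticker.C
		for _, t := range newTweetsSince(fetchHomeTimeline(), lastID) {
			notify(t) // in the real app: only if the tweet is flagged
			if t.ID > lastID {
				lastID = t.ID
			}
		}
	}
}
```

Keeping a high-water-mark ID like this mirrors how Twitter's timeline API is normally polled (its `since_id` parameter serves the same purpose), so each tweet is examined only once.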
[+] eglover|11 years ago|reply
I don't have Twitter, but I'm pretty sure "home timeline" means everyone you follow.
[+] DanBC|11 years ago|reply
This looks like a useful tool.

Assuming for the moment that this is very accurate.

What then?

Ann discovers that Bob is suicidal. What is Ann supposed to do then? (My suggestions: ask Bob whether he intends to die, and help him access medical help; emergency help if he says he intends to die soon.)

Perhaps some set of accurate, international flowcharts showing what you're supposed to do if a colleague / friend / relative / etc. is suicidal would help. (In the UK: contact their GP, or call an ambulance if a suicide attempt is in progress.)

You also seem to be ignoring ages, which is tricky. What do your ethics team say about reporting suicidality and people under 18?

[+] bussiere|11 years ago|reply
Hmm, interesting.

Kind of Minority Report here.

Machine learning on depression symptoms?

I'm a hacker, and not an entirely white-hat one. Aren't you afraid that people will use this to take advantage of depressed people?

A man could use this to spot depressed women; some people are born manipulators.

I find the project interesting, but it has a lot of ethical problems. I also think it will end up being more of a tool for the wealthy.

Maybe it would be more interesting to track the overall mood of a population than the mood of one person.

[+] bussiere|11 years ago|reply
As I said, my problem with this is that it could help a manipulative predator spot his next victim.

Allowing anyone to use it is an ethical problem for me.

Manipulative predators have a natural tendency to spot people in a weakened state; you could end up helping them with this...

Be careful with this. I think one way to avoid it would be to grant access only to people with a good reputation.

(Reputation systems are something a lot of people are working on.)

But the project and the technology interest me.

Have you seen the GoLearn project?

There are associations that run phone lines to help people and prevent suicide. Your product could be useful for them.

We can imagine them using it online to prevent suicide or to contact people privately.

Regards, and I wish you success.

[+] fleitz|11 years ago|reply
ML would be a huge step forward; this just matches a couple of keywords.
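A sketch of what that keyword matching could look like in Go; the phrase list here is illustrative only, not CheckUp's actual list:

```go
package main

import (
	"fmt"
	"strings"
)

// A tiny phrase list in the spirit of the keyword matching
// described in the thread; the real list (reportedly drawn from
// a BYU white paper) is longer.
var crisisPhrases = []string{
	"want to die",
	"kill myself",
	"end it all",
	"better off without me",
}

// containsCrisisPhrase reports whether any phrase from the list
// appears in the text, case-insensitively.
func containsCrisisPhrase(text string) bool {
	lower := strings.ToLower(text)
	for _, p := range crisisPhrases {
		if strings.Contains(lower, p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(containsCrisisPhrase("Sometimes I just want to die")) // true
	fmt.Println(containsCrisisPhrase("great concert tonight"))        // false
}
```

Substring matching is exactly what makes the false-positive question downthread (e.g. quoting or negating a phrase) a real concern.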
[+] matart|11 years ago|reply
How accurate do you think this can be? What happens if I write a facebook update that says, as an example:

I don't believe I have ever said "I feel depressed"

Would this be flagged? Does it only take one post to be flagged or is it looking for recurring behaviour?

[+] rkirkendall|11 years ago|reply
Good question. Part of the ongoing nature of this project is to expand our phrase detection. I pulled the original phrase list from a white paper published by BYU last year because I figured that would be a good starting point. There is room for vast improvement, though.

If the last few posts leading up to the flagged post are classified as predominantly negative (we currently look at the last 3 tweets before the flagged tweet), then we send the notification. The intuition is that if a person is comfortable enough on social media to post seriously suicidal content, he or she has probably already made some preceding negative remarks.
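The rule above could be sketched like this in Go; reading "predominately negative" as "more than half of the window scores negative" is my own assumption, as is the sentiment-score interface:

```go
package main

import "fmt"

// shouldNotify applies the rule described in the thread: notify
// only when the tweets preceding the flagged one (the app looks at
// the last 3) are predominantly negative. The scores would come
// from whatever sentiment classifier is in use, with negative
// values for negative text.
func shouldNotify(precedingScores []float64) bool {
	negative := 0
	for _, s := range precedingScores {
		if s < 0 {
			negative++
		}
	}
	// "Predominantly negative": a strict majority of the window.
	return negative*2 > len(precedingScores)
}

func main() {
	fmt.Println(shouldNotify([]float64{-0.8, -0.4, 0.1})) // 2 of 3 negative: true
	fmt.Println(shouldNotify([]float64{0.2, -0.1, 0.5}))  // 1 of 3 negative: false
}
```

Requiring a negative run before notifying is what trades false positives for the higher false-negative rate that jwise0 worries about upthread.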

[+] rubiquity|11 years ago|reply
This sounds like a great service but I really don't give a damn that it is written in Go.

Also:

> This application is temporarily over its serving quota. Please try again later.

[+] rkirkendall|11 years ago|reply
Hey! Thanks for the heads up. We picked up more traction than I had anticipated, but we should be back online now! And some people may be interested in the Go repo ;)