
Sounds better than Reddit's system of 'suicide alerts':

> "Reddit has partnered with Crisis Text Line to provide redditors who may be considering suicide or seriously hurting themselves with support from trained Crisis Counselors. If you’re worried about someone, you can let us know by reporting the specific post or comment that worried you and selecting, Someone is considering suicide or serious self-harm. After you let us know, we’ll reach out (confidentially) to put them in touch with Crisis Text Line’s trained Crisis Counselors."

However, many people on Reddit seem to view this as an opportunity for harassment of those they disagree with, by generating bogus reports. Any thoughts on how to avoid those kinds of outcomes?




Haha, that's actually more of a warning. If you don't quit posting about suicide, they temporarily suspend your account; then if you do it again, they permanently ban your account; and if you do it yet again (say, you're a complete loser who can't go through with it, with zero help, and the only thing you can do is, well, complain about it online), you get permabanned based on your IP and email info.

Even in the few subreddits dedicated to it, you have to be real careful about what you post if you don't want a ban.

Out of sight, out of mind... yeah, I guess it works. Reddit doesn't need suicidal people posting about this problem; it hurts the platform, and they can't do anything about it anyway, to be fair.

Source: me and 4 people I've talked to about it, all previously banned. Not much, I know, but I'm confident enough that they do it on the regular. Again, not really blaming Reddit here; they're a business, not a charity.


And, from my own experience, the whole idea of sending someone suicide hotline numbers is a bit... insulting, honestly.

It's like if I had a chronic back condition, and instead of support from people willing to listen, I got the equivalent of a flyer in the mail about back issues.

The person sending it ends the potentially uncomfortable conversation and gets to wash their hands of the situation, thinking they helped.

If you're suicidal and posting on social media, of course you know about the hotlines. Getting spammed with it is so discouraging though.

And, for what it's worth, I live in the US and have tried calling the major hotlines during two different episodes, only to get a busy signal. A person to talk to is what would have helped me most in those situations.

(And btw, I'm not saying people are obligated to help suicidal people. It's just that if someone actually wants to help, a canned text response is not effective.)


Frankly, I agree with pretty much all of this. We hear similar things from our users. This is why we try to provide a suite of options, including things like peer support and other interventions they can engage with immediately, as complements to lifelines. We're still learning about what works best, but the status quo is abysmal.

Here's an example: I can go on Google and search for "flight to Miami" and I'll be led through an incredible UX that's designed to get me to a purchase as quickly as possible. But if I search for "depression", I get a one-box that provides a list of clinical definitions of depression, bipolar, and their various subtypes, better suited for a diagnostic manual than for anyone who might actually be struggling. Other platforms provide tips on how to take a deep breath, reach out to friends, or walk around the block (the digital equivalent of a health brochure you might find in a waiting room).

The shortcomings of these approaches have been studied before, and yet they persist. Why don't we measure and track these things with the same rigor we do for all other online experiences?


I know how to help someone buy a plane ticket, and I can program a computer to help them do that.

I often do know how to help people deal with non-suicidal depression, but I don't always have the time and energy to help… and I definitely cannot program a computer to do what I know how to do.

I don’t have any clue how to help someone reduce suicidal intent.


I've thought about this topic a lot myself (how to reduce or remove suicidal intent) and the most consistently "successful" and promising (yet still vague) solution has been: make an IMMEDIATE and significant change in the suicidal person's environment. Environment includes where they are, how much money/debt/costs they have, who they are in contact with, and many other factors. These are the factors that underlie and trigger the suicidal intent (n.b. depression may exist but it is entirely orthogonal under this premise).

I don't mean "fix the problem that made them suicidal."

I mean physically pick them up and take them somewhere else (a safe place preferably, but there's something to be said for a sudden shock of actual danger). I mean send them a thousand bucks. I mean pay off their car loan, pay their rent for a year, something that eliminates that primary stressor.

Suicide is very often a single or recurring practical situation that gets catastrophized into sheer despair, often with other mental health concerns confounding it, yes. But you can't fix those immediately. You can force them into rehab (not great, many downsides). You can take them out for coffee.

Talking might help, in fact it's necessary, but it's not enough.


Truth. The only reason I'm suicidal is that I'm broke.


This is absolutely disgusting. I would consider reaching out to a few media publications (e.g. VICE, The Guardian, etc.).

Banning people who express suicidal intentions from online platforms, which often are the last community they belong to, is unbelievably harmful.

Advertising dollars be damned: companies don't get to put toxic materials in our foods, and social media companies don't get to clandestinely use "crisis support" buttons to figure out who to "clean up".

You may also wish to write a letter to your attorney general.


Yea, this is an interesting problem. The whole question of whether, when, or how to intercept someone who might be in trouble is really challenging, and we've thought about this for many years (and had some missteps along the way and learned a lot about what works and what doesn't).

Our system gently recommends our service to users right when they search, so the cost of a false positive is low (they can just ignore it, or it might just seem like an unrelated PSA). Search is also great because we can vary the intensity of our response based on the keywords. For one of our partners, we're now surfacing resources (in subtle ways) for lower-risk searches like "depression." It is super important to us to think about how we might help people upstream, before they reach a state of crisis.

For users who are flagged, we work well as a layer on top of CTL, since our UX works for people across the entire spectrum of severity.
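
To make the "vary the intensity based on keywords" idea concrete, here is a minimal, hypothetical Python sketch of keyword-tiered triage for search queries. The tiers, terms, and resource labels are illustrative assumptions, not the actual system described above.

    # Hypothetical sketch: tiered keyword triage for search queries.
    # Tiers, terms, and resources below are illustrative assumptions only,
    # not a real or clinically validated system.
    from enum import Enum

    class Risk(Enum):
        NONE = 0
        LOW = 1   # general mental-health terms -> subtle, PSA-style resources
        HIGH = 2  # explicit crisis terms -> prominent crisis resources

    # Illustrative keyword tiers (assumed, not exhaustive).
    LOW_RISK_TERMS = {"depression", "anxiety", "burnout"}
    HIGH_RISK_TERMS = {"suicide", "self harm", "kill myself"}

    def classify_query(query: str) -> Risk:
        """Return the highest risk tier whose terms appear in the query."""
        q = query.lower()
        if any(term in q for term in HIGH_RISK_TERMS):
            return Risk.HIGH
        if any(term in q for term in LOW_RISK_TERMS):
            return Risk.LOW
        return Risk.NONE

    def resources_for(risk: Risk) -> list:
        """Map a risk tier to the intensity of the surfaced resources."""
        if risk is Risk.HIGH:
            return ["Crisis line / text line", "Immediate peer-support chat"]
        if risk is Risk.LOW:
            return ["Self-guided resources", "Optional peer-support communities"]
        return []  # false positives stay cheap: nothing is shown

    if __name__ == "__main__":
        for q in ["flight to miami", "depression", "thinking about suicide"]:
            print(q, "->", resources_for(classify_query(q)))

A real system would of course need far more care (phrase matching, context, localization, clinical review); the point of the sketch is only that lower-risk tiers can fail gently, so a false positive costs the user almost nothing.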



