Launch HN: Koko (YC W22 Nonprofit) – Online Suicide Prevention Kit
184 points by robertrmorris on March 26, 2022 | 53 comments
Hi! My name is Rob and I’m working with my cofounder Kareem on Koko (https://www.kokocares.org). We’re a nonprofit that provides free digital mental health services to millions of people struggling online — particularly adolescents.

Today, we are launching our Online Suicide Prevention Kit (https://www.kokocares.org/suicide-prevention-toolkit). The goal is to help social networks and online communities better support at-risk individuals on their platforms.

Many social platforms have built-in lists of keywords that detect mental health-related search terms (e.g., “self-harm” or “depression”). There is already an established practice of suppressing content or surfacing disclaimers for such searches. Search “suicide” on most platforms and you’ll at least be shown a 1-800 number.

But there are a few problems with this. The keyword lists always have glaring omissions. Millions of young adults can still easily find dangerous content, such as tips on how to self-harm or kill themselves. And while some platforms redirect users to “emotional support” pages, the resources provided are often underwhelming and lack an evidence base. The most common approach is to provide an overwhelming list of crisis lines (which isn’t particularly helpful to someone who may already be overwhelmed themselves).

Here’s our solution: We have a privacy-first native library designed for social networks, streaming services, online communities, forums, etc. It catches common search terms like “kill myself”, “depressed” or “thinspiration”, as well as a huge long-tail of slang terms and evasive language (e.g., “sewerslide” or “an0rex1a”).

The library is written in Rust and matches in under a microsecond. It has language bindings for Python, Go, and Ruby, with all other major runtimes coming soon. Our keywords are sourced from over 12k known crisis posts and are hand-curated by social and clinical psychologists on our team. We also use text generators like GPT-3 to expand these lists with keywords beyond our user-generated corpus. The terms are updated regularly based on new patterns that emerge on our support platform, as well as co-listed terms on large social platforms.
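
To give a rough sense of how this kind of matching works on the integrating platform’s side, here is a minimal sketch in Python. The patterns and function names below are illustrative placeholders only, not our actual keyword list or API:

    import re

    # Illustrative placeholder patterns only -- not the actual curated keyword
    # set, which is far larger and includes many slang and obfuscated variants.
    RISK_PATTERNS = [
        r"kill\s*myself",
        r"sewer\s*slide",     # slang evasion of "suicide"
        r"an[0o]rex[i1]a",    # character-substitution obfuscation
        r"thinspiration",
    ]

    # Compile once at startup; checking a short query is then very cheap.
    RISK_REGEX = re.compile("|".join(f"(?:{p})" for p in RISK_PATTERNS), re.IGNORECASE)

    def is_risky_search(query: str) -> bool:
        """Return True if the search query matches any known risk pattern."""
        return RISK_REGEX.search(query) is not None

    if __name__ == "__main__":
        for q in ["flights to Miami", "how to sewerslide", "An0rex1a tips"]:
            action = "show support resources" if is_risky_search(q) else "serve results as usual"
            print(f"{q!r}: {action}")

On a hit, the platform can surface our resources alongside (or instead of) the usual results.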

We also provide evidence-based mental health interventions and resources, to help supplement what online platforms might already provide (though, frankly, many do essentially nothing). Our interventions can be accessed online, for free, without having to download an app. We provide users with online peer support, self-guided mini courses, crisis triage, etc. We have published seven reviewed papers on these interventions and we have two more in prep now. In a randomized controlled trial with Harvard, our services increased the conversion rate to crisis lines by 23%.*

This combination of search detection and evidence-based online interventions enables us to reach users where they are, right at the moment they are reaching out for help. Instead of showing a user an ad or, at worst, harmful content, we can display resources that are actually helpful. We have seen young people search for “proanorexia” content, then click our banner, then engage with our courses, and then show marked improvement in body image perception and a greater motivation to get help offline.

Our library collects no data and our interventions are anonymous (we do not collect emails, usernames, IP addresses, phone numbers, etc).

Online platforms are heavily (and rightly) criticized for contributing to the youth mental health crisis. But what’s missing from the discussion is how these platforms are uniquely positioned to do something about it. Every day, millions of people are crying out for help, and the most anyone does is throw up a 1-800 number or offer suggestions to “go take a walk” or “reach out to a friend.”

Fortunately, we have partnered with a few large social networks that are eager to take the next step. We are now helping over 12,000 people a month with this approach. For users who complete our online interventions, we see significant improvements across clinical outcomes, including hopelessness, body image perception, and self-hatred.

This definitely won’t help everyone and nothing can replace direct human-to-human connection. Some at-risk users need far more than we can ever give them with our approach. But it does help some people in profound ways, and that inspires us to keep going.

Koko is something I started while I was a graduate student at MIT. I was severely depressed at the time, so I hacked together various technologies to manage my own mental health, as a way to fill the gaps between sessions with my therapist. That was almost ten years ago. I now have a kid of my own and I can see him struggle emotionally, just as I did.

Suicide rates for young people have increased dramatically over the past decade.* Since 2019, the rate of suspected suicides for girls aged 12-17 has increased by over 50% [3].* There is nothing more terrifying to me than the thought of a young person dying by suicide. If we can help avert at least one tragedy, it’ll be worth it.

We need your support. If you work at a large platform, or even if you just have a small Discord server or subreddit, you can help us by trying out our kit:

https://www.kokocares.org/suicide-prevention-toolkit

And please donate! If you care about this issue, you can support us here: https://every.org/kokocares

Whatever the size of your platform, we think our resources could be helpful. But we’re curious whether there are other opportunities we haven’t considered. We would love your feedback on what we’re building, and any technical ideas that might help improve it.

* Happy to provide references in the comments - just ask




Sounds better than Reddit's system of 'suicide alerts':

> "Reddit has partnered with Crisis Text Line to provide redditors who may be considering suicide or seriously hurting themselves with support from trained Crisis Counselors. If you’re worried about someone, you can let us know by reporting the specific post or comment that worried you and selecting, Someone is considering suicide or serious self-harm. After you let us know, we’ll reach out (confidentially) to put them in touch with Crisis Text Line’s trained Crisis Counselors."

However, many people on Reddit seem to view this as an opportunity to harass those they disagree with by generating bogus reports. Any thoughts on how to avoid those kinds of outcomes?


Haha, that's actually more of a warning. If you don't quit posting about suicide, they temporarily suspend your account. Then if you do it again, they permanently ban your account. And if you do it yet again (say, you're a complete loser who can't go through with it, with zero help, and the only thing you can do is, well, complain about it online), you get permabanned based on your IP and email info.

Even in the few subreddits dedicated to it, you have to be real careful about what you post if you don't want a ban.

Out of sight out of mind... yeah, I guess it works. Reddit doesn't need suicidal people posting about this problem, it hurts the platform and they can't do anything about it anyway, to be fair.

Source: me and 4 people I've talked to about it, all previously banned. Not much, I know, but I'm confident enough they do it on the regular. Again, not really blaming Reddit here; they're a business, not a charity.


And, from my own experience, the whole idea of sending someone suicide hotline numbers is a bit... insulting, honestly.

It's like if I had a chronic back condition, and instead of finding people willing to listen, I got the equivalent of a flyer in the mail about back issues.

The person they were reaching out to gets to end the potentially uncomfortable conversation and wash their hands of the situation, thinking they helped.

If you're suicidal and posting on social media, of course you know about the hotlines. Getting spammed with it is so discouraging though.

And, for what it's worth, I live in the US and have tried calling the major hotlines during two different episodes, only to get a busy signal. A person to talk to is what would have helped me most in those situations.

(And btw, I'm not saying people are obligated to help suicidal people. It's just that if someone actually wants to help, a canned text response is not effective.)


Frankly, I agree with pretty much all of this. We hear similar things from our users. This is why we try to provide a suite of options, including things like peer support and other interventions they can engage with immediately, as complements to lifelines. We’re still learning about what works best, but the status quo is abysmal. Here’s an example: I can go on Google and search for “flight to Miami” and I’ll be led through an incredible UX that’s designed to get me to a purchase as quickly as possible. But if I search for “depression”, I get a one-box that provides a list of clinical definitions of depression, bipolar disorder, and their various subtypes, better suited to a diagnostic manual than to anyone who might actually be struggling. Other platforms provide tips on how to take a deep breath, reach out to friends, or walk around the block (the digital equivalent of a health brochure you might find in a waiting room). The shortcomings of these approaches have been studied before, and yet they still persist. Why don’t we measure and track these things with the same rigor we do for all other online experiences?


I know how to help someone buy a plane ticket, and I can program a computer to help them do that.

I often do know how to help people deal with non-suicidal depression, but I don’t always have the time and energy to help… and I definitely cannot program a computer to do what I know how to do.

I don’t have any clue how to help someone reduce suicidal intent.


I've thought about this topic a lot myself (how to reduce or remove suicidal intent) and the most consistently "successful" and promising (yet still vague) solution has been: make an IMMEDIATE and significant change in the suicidal person's environment. Environment includes where they are, how much money/debt/costs they have, who they are in contact with, and many other factors. These are the factors that underlie and trigger the suicidal intent (n.b. depression may exist but it is entirely orthogonal under this premise).

I don't mean "fix the problem that made them suicidal."

I mean physically pick them up and take them somewhere else (a safe place preferably, but there's something to be said for a sudden shock of actual danger). I mean send them a thousand bucks. I mean pay off their car loan, pay their rent for a year, something that eliminates that primary stressor.

Suicide is very often a single or recurrent practical situation that gets catastrophized into sheer despair, yes, often with other mental health concerns confounding it. But you can't fix those immediately. You can force them into rehab (not great, many downsides). You can take them for coffee.

Talking might help, in fact it's necessary, but it's not enough.


Truth. The only reason I'm suicidal is that I'm broke.


This is absolutely disgusting. I would consider reaching out to a few media publications (eg VICE, The Guardian, etc).

Banning people who express suicidal intentions from online platforms, which often are the last community they belong to, is unbelievably harmful.

Advertising dollars be damned: companies don't get to put toxic materials in our foods, and social media companies don't get to clandestinely use "crisis support" buttons to figure out who to "clean up".

You may also wish to write a letter to your attorney general.


Yea, this is an interesting problem. The whole question of whether, when, or how to intercept someone who might be in trouble is really challenging, and we’ve thought about it for many years (and had some missteps along the way and learned a lot about what works and what doesn’t).

Our system gently recommends our service to users right when they search and so the cost of a false positive is low (they can just ignore it or it might just seem like an unrelated PSA). Search is also great because we can vary the intensity of the keywords. For one of our partners, we’re now surfacing resources (in subtle ways) for lower risk searches like “depression.” It is super important to us to think about how we might help people upstream, before they reach a state of crisis.

For flagged users, we work well as a layer on top of Crisis Text Line, since our UX works for people across the entire spectrum of severity.
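
To illustrate the tiering idea (the tiers, terms, and responses below are hypothetical examples, not our actual configuration), the severity of a search term can be mapped to how prominently resources are surfaced:

    # Hypothetical severity tiers -- illustrative only.
    SEVERITY_TIERS = {
        "high":   {"terms": ["kill myself", "sewerslide"],
                   "response": "full interstitial with crisis line and peer support"},
        "medium": {"terms": ["self harm", "thinspiration"],
                   "response": "prominent banner linking to a mini course"},
        "low":    {"terms": ["depression", "lonely"],
                   "response": "subtle link to self-guided resources"},
    }

    def response_for(query: str):
        """Pick the strongest applicable response, or None if nothing matches."""
        q = query.lower()
        for tier in ("high", "medium", "low"):
            if any(term in q for term in SEVERITY_TIERS[tier]["terms"]):
                return SEVERITY_TIERS[tier]["response"]
        return None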


Congratulations on the launch!

I have two perspectives on suicide.

First, 'Informed Suicide', where a person rationally determines that there's no point in continuing their life after exhausting all other options. Deliberate hurdles to prevent that person from ending their life (social stigma, criminal penalties such as laws against suicide, the unavailability of euthanasia) just feel like taking away their rights, treating them as a commodity, and leading them to a painful death or disability through botched attempts.

Second, 'Hasty Suicide', where a person hasn't exhausted all options; they like living, but they don't like their current life. I think the intervention methods are very useful for preventing such suicides. I hope the Koko library comes in handy to make this process more efficient and saves more of the lives 'which want to be saved'.

I see the social and practical reasons why suicides aren't categorized this way; I just wish they were.


Totally agree. For some people, forcing them to continue living just makes them suffer for longer. If people can be brought into the world without their consent, there should also be a way out if they don't like it, and with dignity instead of going out with huge pain and mess.

Affordable housing and education, livable wage, universal healthcare etc. are suicide prevention. This is what people should be focusing on more.


From one YC nonprofit to another, this looks awesome! Amazing work and really interesting to read about the stack powering it.

We run a free tutoring service for low income students and this is something we deal with pretty regularly so def gonna look into this more and may reach out directly.


> We have published seven reviewed papers on these interventions and we have two more in prep now. In a randomized controlled trial with Harvard, our services increased the conversion rate to crisis lines by 23%.*

Do you have any numbers for outcomes or harm reduction?


Thanks for the question. I’m going to interpret this broadly and try to go from there, but let me know if you had something more specific in mind.

TL;DR At a high level, for people who complete our interventions, we see 71% feel more hopeful, 42% feel better about their bodies, and 67% feel less self-hatred. Completion rates range from 25-55%. Outcomes would most likely be lower for those who drop out prematurely.

More specifically:

We track multiple outcomes, depending on what the user may be presenting. If they are experiencing suicidal thoughts, we track conversion to crisis lines.

See here: https://psycnet.apa.org/record/2019-14424-004

We follow up 5 hours later and ask general questions about their experience with the lifeline.

If they are experiencing self-harm, in addition to crisis lines, we offer them a single-session online intervention on managing self-harm. For that, we see significant improvements pre vs. post on measures like “self-hatred” and “desire to stop self-harm”, with medium effect sizes (Cohen’s d of 0.4-0.8). It is very hard to show enduring effects for this, however. This research, as well as our work on disordered eating, is still in prep.

For peer support, we have previously published data here: https://pubmed.ncbi.nlm.nih.gov/25835472/

And here: https://pubmed.ncbi.nlm.nih.gov/28903637/

For our interventions on mood and stress regulation, we’ve adapted single session interventions, alongside some wonderful collaborators at Stony Brook. They have published their work here: https://www.nature.com/articles/s41562-021-01235-0


> We follow up 5 hours later and ask general questions about their experience with the lifeline.

> If they are experiencing self-harm, in addition to crisis lines, we offer them a single-session online intervention on managing self-harm. For that, we see significant improvements pre vs. post on measures like “self-hatred” and “desire to stop self-harm”, with medium effect sizes (Cohen’s d of 0.4-0.8). It is very hard to show enduring effects for this, however.

Are you measuring in such a way that you can realistically determine which effects are due to the online intervention and which are due to the SH itself? I ask because after SH, especially a few hours later, I consistently have increased "desire to stop selfharm", and lessened "self-hatred". SH has that effect on me, hence its unfortunate use as a coping mechanism.


Good questions. And your experience makes a lot of sense. Ideally, we would see positive changes persist over many time points (so we aren't measuring immediately before self-harming and immediately after). I'd be really grateful if you tried it and provided some feedback for us: https://join.kokocares.org/koko-referral-lifelines?source=hn

Scroll to the bottom to try the "managing self-harm" mini course. It only takes 7-8min and there is a spot for feedback at the end.


Thanks for the reply.

When I click on the "managing self-harm" course, I only see a "form.typeform.com refused to connect" error. It seems this is because I'm using Tor, which is the only way I'd feel comfortable legitimately using the service. It would be nice if there were a way to use the service via Tor.

I did complete the course. All the negativity coupled with "it's easy!!" made me feel worse, but it sounds like I'm an outlier. Is there a reason there are no positive statements in the course, like "I think I'm a good person"?


Is there any plan to just publish your list of keywords? While you don’t collect any PII, to use Koko, it seems like you still have to send all content you want to scan to your API. For things like DMs or private posts, this seems less than ideal.


Great questions! Our library runs completely on the server side, caching a list of regexes which are used to match against. So no data is ever sent to our API.

As for publishing the lists, it's definitely something we're thinking about. For now, it's easy to get them if you sign up with us; we don't charge for use.
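
For anyone curious how the server-side matching works in practice, here's a minimal sketch of the flow in Python. The endpoint URL, names, and refresh behavior are assumptions for illustration, not the library's real internals; the point is that the pattern list is fetched and cached, and all matching against user content happens locally, so the content itself never leaves your servers:

    import re
    import time
    import urllib.request

    # Hypothetical endpoint for illustration only -- not a real API.
    REGEX_LIST_URL = "https://example.com/keyword-regexes.txt"
    REFRESH_SECONDS = 3600

    _cache = {"compiled": None, "fetched_at": 0.0}

    def _refresh_cache():
        """Download the pattern list and compile it into one regex kept in memory."""
        with urllib.request.urlopen(REGEX_LIST_URL) as resp:
            lines = resp.read().decode("utf-8").splitlines()
        patterns = [line.strip() for line in lines if line.strip()]
        _cache["compiled"] = re.compile("|".join(f"(?:{p})" for p in patterns), re.IGNORECASE)
        _cache["fetched_at"] = time.time()

    def matches_risk_terms(text: str) -> bool:
        """Match user content locally; the text itself is never sent anywhere."""
        if _cache["compiled"] is None or time.time() - _cache["fetched_at"] > REFRESH_SECONDS:
            _refresh_cache()
        return _cache["compiled"].search(text) is not None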


Maybe a dumb question, but I am not sure what evidence-based online interventions are. Maybe an example would help me understand.

And maybe you could make your API documentation available without requiring registration. I understand you are operating a nonprofit and signup is free, but somebody like me does not have a social platform that needs this integration right now, and I don't like to pretend to be a customer/user in order to read a doc. Maybe after seeing the docs, I'd have some idea of the capabilities and could use it in a future project.

Anyway, I like your intent in working to help the vulnerable.


Thanks for doing this. I'm a therapist in California and interested in advising or helping. You can email me if you want, bcbernstein@gmail

Good luck.


Why waste money being based in SF as a nonprofit? You'd be better off living elsewhere and having more funds for your mission.

Next question: who is the team behind this nonprofit? No faces; it feels almost like a VC scam.

> Use our library of support and referral links (provided as single pages or div blocks)

What do you mean by referral links?


This looks great. My wife is training in suicide awareness and prevention (a college-level course) in the UK; how applicable is Koko outside the USA? How are you validating how well your system works or has a positive effect, and have you had input/review from qualified professionals?


That's great to hear. Kudos to your wife! Koko works well outside of the USA (but is English only). One of our earliest users, from London, is now a member of our core team.

We regularly consult clinical advisors (listed here: https://www.kokocares.org/our-team)

I made some comments on outcome measures in another comment in this thread, and you can also see some of our papers here: https://www.kokocares.org/research


Please consider creating Elixir bindings for this. I’d love to try using it for my site isitnormal.com, where unfortunately I’ve had to deal with this issue for years (doing my best). Thanks! Amazing initiative!


Thank you. We can release bindings for new languages very quickly. Just fill out our signup form and we'll prioritize Elixir. https://r.kokocares.org/api_signup/


Awesome, thanks. Signed up.


Is "koko" also a slang for being crazy/unstable/insane in English, or is it just in Scandinavian languages this sounds a bit funny?


Koko is a character from the operetta The Mikado by Gilbert and Sullivan. He is a death row convict appointed to be executioner.


I think the word you're looking for is "cuckoo".


Ah, so spelled differently in English, but still the same sound.


I read KoKo as ‘Koh Koh’, not ‘Koo Koo’.

Still, in retrospect - definitely a bad naming choice for a service like this.


Actually, my first impression was of a friendly panda or even Baymax from Big Hero 6.


Good for you guys, this is a really uncomfortable space to build in, but with amazing social impact potential.

What motivated you personally to be in this space?


Thanks a lot, that's really nice to hear. We've had a somewhat meandering path, which you can read about here: https://www.kokocares.org/origin-story


Your “get started” page didn’t load for me, and your dev docs did some redirect to GitHub after I clicked “Python”.

Love the intent though.


Thanks for reporting this. We use an embedded Typeform for our signup page, which looks to be OK, but you can also go directly to the form here (https://koko-ai.typeform.com/to/xB0X2Grc). For the Python docs, our language bindings are open source and we’ve been maintaining their documentation directly in the repo to ensure it stays up to date without having to copy content across. Was the documentation confusing in the GitHub README?


I just clicked the “next page” link and it looked like it started loading new content in the same style as the page I was on, and then redirected to GitHub. It was just a little jarring UX.


Is it just me or is it obvious that anyone who successfully goes through like _any_ intervention would show better results than someone who can’t? That just seems like an indicator of something about the individual or their circumstances instead of the effectiveness of the intervention. Do you compare your interventions with more typical interventions like calls to the Trevor project or something?

Also I think combining self harm with suicide resources might actually have a negative effect. If someone is searching something like hiding self harm marks from cutting and gets resources on suicide, it could trigger suicidal ideation when it wasn’t actually the issue they were seeking help with.


Great questions, thank you. Looking only at completers creates selection bias, for the reasons you articulate. In published studies, we compare interventions to control conditions (ideally an “active” control, something that has some purported therapeutic benefit). We love the Trevor Project and work hard to get candidates on our platform to that resource. We have done some comparisons with other lifelines, and the issue is that some have incredibly long wait times and drop-offs. Ideally, we can offer both. For the suicide prevention lifeline, we’re a listed resource that people can access while they wait.

It is very true that self-injury is not the same as suicidal ideation, though they can certainly overlap. A common thought is that asking about suicide or presenting resources could be harmful or ‘trigger’ more ideation. The evidence to date suggests, on the contrary, that asking about suicide can actually reduce risks. https://pubmed.ncbi.nlm.nih.gov/24998511/ https://www.cambridge.org/core/journals/the-british-journal-...


What resources do you link to? I can't find them on your homepage.


Hi, thanks for your interest. The resources appear differently for the different platforms we integrate with. In some cases they can be accessed via DM. Some will differ based on the search term as well. The most generic case is here: https://join.kokocares.org/koko-referral-lifelines?source=hn We are actively updating these based on user feedback.


Thanks! I started the mood tutorial and was a bit surprised by the YouTube video, but overall it seems good. I am not really the target group atm, either.


Love the intent, but broken links for the Privacy Policy and Terms of Service on this page don't inspire confidence: https://www.kokocares.org/suicide-prevention-toolkit


"oops" new landing page. I updated the links, but you can also find them on our main page: "www.kokocares.org". Note also that for the kit, we have our own licensing agreement that's available if you sign up, but is not on this webpage.


[flagged]


We’re a non-profit and our service will always be available for free. There is no paywall. We support ourselves through donations. This should be more clear on our landing page, thanks for noting this.


Please don't break the site guidelines when commenting on HN. You can make your substantive points without doing that.

https://news.ycombinator.com/newsguidelines.html


? Paying with data is still a paywall. If I want to use it just because I want to help, I can’t without giving up my SUICIDAL users’ data. Seriously, good intentions, bad execution. And what part of the comment is against the site rules?


Your comment broke at least these guidelines:

"When disagreeing, please reply to the argument instead of calling names. 'That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1 + 1 is 2, not 3."

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html


> But there are a few problems with this. The keyword lists always have glaring omissions. Millions of young adults can still easily find dangerous content, such as tips on how to self-harm or kill themselves.

This is an incredibly condescending worldview. If a person's going to commit suicide, allowing them to find methods that aren't likely to fail or cause extreme amounts of pain is incredibly important. By interrupting their access to information, you're likely to end up pushing suicidal people into making attempts using what little information they already know, which can often lead to excruciatingly painful medical consequences for the rest of their lives, whether lasting minutes or decades.

Intervention is good, but pushing for the elimination of the ability to find that content is almost impossible to see as anything but harmful.

By the way, are you related to the chain of Robert Morrisi that worked on UNIX, wrote the first computer worm, and wrote the language this site is written in?

https://en.wikipedia.org/wiki/Robert_Morris_(cryptographer)

https://en.wikipedia.org/wiki/Robert_Tappan_Morris


Lives are saved by not having easy information on how to commit suicide. When you talk to suicidal teens, often the reason they seek help is because they don't have a workable plan. So putting these roadblocks in their way drives them to seek help instead of making and carrying out plans. This is basic safety, like locking away knives and medication - don't help them plan.


Encouraging them to make incorrect attempts is outright worse than allowing them to make good ones. There are a lot of ways in which a suicide attempt can go wrong, and pretty much all of them leave the person wanting to die more, not less.


The problem with this point is the inherent assumption that suicide is bad.



