NIH’s mental health chief on why he’s leaving for Google (washingtonpost.com)
76 points by subnaught on Oct 27, 2015 | 38 comments



I wonder if they're planning to mine people's search patterns. I don't know about you, but I've never lied to Google about my intentions. When I was feeling very depressed, I'd often search for sad things, or even "ways to kill myself". I imagine a lot of other people do so as well, and aren't even aware their search history is monitored by Google -- and if they are, they think it can be cleared by deleting their Internet history in their browser.

Again, there's a fine line between privacy and what will help the individual and society at large. The most pressing question won't be what new analytical methods we can create to better identify mental illness from Google's data. I've seen the quality of the mental health system from a close friend's perspective (in the US at least), and all I can say is you don't want to get mixed up with it; it often carries the same outcome and social stigma as a criminal record.


It's really a challenge to use search in this way. The searcher could be depressed, or be a worried friend of someone who's depressed, or be a writer looking for story ideas, or be a family member of the last person who used the browser, etc.


Something like the individualized advertising model Google has for each person could probably be used in tandem to segment that more reliably.


>>> When I was feeling very depressed, I'd often search for sad things, or even "ways to kill myself"

That may explain those "Hotels near Golden Gate Bridge starting at $40 a night" ads.

In all seriousness, I do not want Google to help me at all, just to leave my privacy alone and show the most relevant results when asked. I'd rather scroll a bit longer than have Google know everything about me.


As a user of Google search, can't you switch this off? Maybe you have to be signed into a Google account to have the ability to switch it off? Maybe they would ignore this switch for the purposes of determining whether you are suicidal (or homicidal)?


Is anyone else conflicted about the idea of Google being involved in "identifying who is at highest risk and developing interventions that preempt psychosis", given that Google's primary product is detailed personal information about everyone?


As the former NIH mental health head is quoted in the article:

> One of the possibilities here is, by using the technologies we already have, technologies that are linked to a cellphone, technologies that are linked to the Internet, we may be able to get much more information about behavior than what we’ve been able to use in making a diagnosis.

He also makes arguments for more involuntary treatment.

Some ideas like this are already in practice. Some police departments, such as Chicago's, try to predict likely criminals and pre-emptively threaten and track them.

Also, let's remember that the field of mental health gave us lobotomies, torture, and other abuses. I fear society does nothing until there's a catastrophe, and only then reacts.


Something that I feel does not get enough attention wrt machine learning is proper consideration of the downsides of false positives. There are always false positives, and you have to weigh the harm of a false positive against the benefits you think you'll get.

On one end of the spectrum a false positive serves you an ad you'd never be interested in. On the other end it gets you killed. Searching for the mentally ill in this way could lean hard toward the latter. Scary stuff if misused.
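To make that concrete, here's a back-of-the-envelope Bayes'-rule sketch (Python, with entirely made-up numbers, not anything from the article): even an accurate-sounding classifier flags mostly healthy people when the condition it screens for is rare.

    # How often is a "flag" actually correct when the screened-for condition
    # is rare? All numbers below are illustrative assumptions.

    def positive_predictive_value(sensitivity, specificity, base_rate):
        """P(actually at risk | flagged), via Bayes' rule."""
        true_pos = sensitivity * base_rate
        false_pos = (1 - specificity) * (1 - base_rate)
        return true_pos / (true_pos + false_pos)

    # A classifier that sounds great (95% sensitive, 95% specific) applied to
    # a 1% base-rate condition is wrong about 5 times out of 6:
    print(positive_predictive_value(0.95, 0.95, 0.01))  # ~0.16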


I'd also add that the big problem in mental health treatment in the US is not identifying people with mental illness. The big problem is that we're not equipped (funded or institutionally prepared) to treat the huge number of people who we already know need it.

So seeking out the mentally ill isn't really the big problem either. We should imo be more focused on solving the big issues, like no longer using prisons in lieu of real mental health facilities, etc.


Two opportunities to plug PSYCHO-PASS in one day? We're on a roll, HN!

But seriously, if Google co-opted people's smart phone data to do covert psychological evaluations on a constant basis we'd literally be one step removed from the dystopic future PSYCHO-PASS depicted. Albeit a big step (sending out armed teams to apprehend at-risk subjects before they have a chance to hurt themselves or others).


>Albeit a big step (sending out armed teams to apprehend at-risk subjects before they have a chance to hurt themselves or others).

Look at the story behind the police officer who was committed for trying to expose police quotas. Is that not the same as an armed team coming to commit you... just for an even less justifiable reason?


Do you have a link for this?


> sending out armed teams to apprehend at-risk subjects before they have a chance to hurt themselves or others

About half the people shot and killed by police in the US have mental health problems.

The relatives recognise the person is going through a MH crisis, possibly suicidal. They call the police. Piss-poor US police then kill that person because they don't have the skills or training to deal with the situation. (Note how rarely police kill people in the rest of the Western world.)


Of course there's a lot of potential for tremendous good to come out of this. Right from the article: "Insel: One of the great challenges in this field is how to help people who do not want help."

If Google can somehow predict and prevent a mental health issue from getting to the point you describe, that would be wonderful. But we can't ignore the huge privacy issue to deal with there.

If Google was surreptitiously capturing reliable psychological assessments of people on a large scale, especially post-Snowden, I'm sure anyone could imagine more than a couple potential issues.


I feel like yelling "Google already has this data!"

People have been saying for years that it's probably a bad idea to let Google slurp as much data as they do and keep it as long as they do.

We passed the point of no return about 7 years ago. Everything is now online; online functionality is baked into the OS; everything calls back to the mothership.


Except we have a choice to not fall into the Google trap and altogether avoid using any Google products. Also my hue isn't cloudy at all ;)

> sending out armed teams to apprehend at-risk subjects

Even if we did reach a point where we could identify at-risk subjects with high accuracy, this would be very difficult to bring about, just because of the implications.


Right, hence the big step is for a government agency to receive a mandate to implement such a system. The even bigger step is probably to openly make enforcement decisions based on that info. I think there is a huge gap between "your psychological data indicates you may be at increased risk of committing a violent crime" and actually taking people off the streets based on that indicator.

But then again, with what we've learned since Snowden, this doesn't actually seem that impossible...


Note further that "you may have a mental illness" -> "you may be at increased risk of committing..." is as sketchy as "you are a minority" -> "you may be at increased risk of committing...".


Right. Hence the need for a dash of dystopia to make the plot work.


Perhaps I misunderstood you, but I feel like you're implying that racial profiling isn't an active thing.


Honestly, from the point of view of a minority this is already a dystopia.


You're forgetting two really important conditions: completely closed borders and hyper oats.

I doubt such a system would work in practice, because the vast majority of people are likely asymptomatic like Tsunemori and Makishima.


It's not as if Google or anyone else needs to go find the person and forcibly detain them. Google can steer people towards more ads or inject websites for treatment into the search result pages.

Also, the hue detection was triggering on people who clearly had temporary issues and could have been helped instead of subjugated. Some of them were far more interesting and "healthier" than the sheep, like Kogami. It's more of a problem if someone doesn't get angry about these things the way Kogami does.


The dystopian society had an incentive to have a placated and numb society, in a Brave New World sense. I would argue that Makishima is the true hero in the story, because he is "awake" while others are asleep.


I think Sergey was a bit terrified of developing Parkinson's, a possibility suggested by his genetic profile. So I guess this in part contributes to Google's attempt to build a world-class general brain institute. Who knows, technology could have a major role in understanding and curing these conditions.


Google already takes into account searches that indicate mental health issues and sometimes shows related ads. Currently this is unregulated.

Also, in general Google is no worse (and probably much better) than the current companies selling mental health drugs, some of which have been fined for illegally pushing risky drugs on children, etc.

So who knows, maybe Google's involvement, and its unique control structure (controlled by real people who seem to care, and not by the psychopathic being called "the corporation"), would improve healthcare.

Also, some of this doesn't necessitate monitoring of search queries. For example, there's some research on using EEG to monitor people with bipolar disorder, so maybe some of this monitoring could be done locally, in the name of privacy.

But there are also ethical questions. For example, if some social pressure could push someone suffering from mental health issues to get treated and prevent far worse outcomes in the future, what should we do?


Under FDA or American Psychiatric Association guidance and regulations, it could be a suitable aid for help in diagnosing mental health issues. Yet, it would only be ethically reasonable for it to be nothing more than a diagnostic tool in the hands of qualified psychiatrists much like an MRI/CT/PET/etc. is for a radiologist or neurologist or ECG/ultrasound/etc. for a cardiologist.

Edit: In my experience, experts in some areas are just slow to adopt technology. Physicians and cardiologists still use rubber-tube stethoscopes (there's a startup in this area, Eko Devices, that's trying to change it, but the physician community, IMO, has inertia). Sleep health assessment is done manually by technicians. Psychiatrists rely on simple questionnaires (or sets of questions) while leaving out plenty of other data points. The same goes for doctors in many other fields: quantification of behavior/phenotype/etc. is in its infancy, with only numbers like blood glucose, blood pressure, and body temperature being reliably used.


Serious question: what's wrong with a rubber-tube stethoscope?


I don't think anything is wrong, per se. But the notion that in 2015 we don't have fancier, more widely used stethoscope technology is interesting. There are some problems in how stethoscopes are used, primarily because a lot of skill is required to interpret the sounds, and many doctors could be and actually are poor at this. There are many studies which show this, for example [1].

[1] http://pediatrics.aappublications.org/content/105/6/1184.sho...


The idea that 2015 should be about "fancy" technology sounds like it was invented by advertising people. Fancy in the way multi-blade disposable razors are fancy? No thanks.


Of all the human rights stuff happening around psychosis, an early diagnosis doesn't seem too bad. It's a bit of a worry, but it's not as bad as being shot and killed[1]; transported to hospital by police against your will; being medicated (sometimes by force) against your will (with meds that will shorten your life); being deprived of your liberty (in more ways than just being held in a hospital); etc etc.

[1] http://tacreports.org/storage/documents/2013-justifiable-hom...


Not at all conflicted. I am 100% sure I do not like the direction this is going.

From people who see mental illness as a moral failing to those who see it as a justifiable reason to restrict rights, I see great potential for harm.


I would be ok with a feature that looked at any history I had chosen to enable and informed me and only me that it believed I may be at risk for some mental illness, and suggested how to get help.

For example, if you search for https://www.google.com/search?q=suicide (or related queries), it informs you of a suicide hotline.
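A minimal sketch of what that opt-in, local-only check could look like (Python; the phrase list and message are hypothetical illustrations, not anything Google actually does):

    # Hypothetical opt-in feature: scan only history the user chose to enable,
    # and show a resource to that user alone -- nothing leaves the machine.

    RISK_PHRASES = ("ways to kill myself", "suicide", "want to die")

    def check_local_history(queries):
        """Return a gentle nudge toward help if an enabled query looks risky."""
        for q in queries:
            if any(phrase in q.lower() for phrase in RISK_PHRASES):
                return "It looks like things might be rough. A suicide hotline is one call away."
        return None

    print(check_local_history(["weather tomorrow", "Ways to kill myself"]))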


Well, we don't know how it will tie into personal information; let's not speculate at this early stage.


I'm definitely concerned about where this might head. Even the idea of early diagnosis is problematic, as it carries with it all the baggage of mental health diagnostics: disorders defined by committee, lack of objectivity and reliability, and a lack of apparent physical reality backing up many disorders.

Much of what we call mental health diagnosis can actually be reinterpreted as institutionalized societal judgments of non-conforming behaviors. The fact that (in my personal experience and the experience of many others) "treatment" often directly inhibits recovery makes an enhanced diagnosis->treatment pipeline a frightening thing indeed.


Absolutely. But by the time our inept government even gets around to considering regulations for...well, whatever that may entail, Google will already be heavily profiting off of whatever R&D they assembled years before.

This is what our tax dollars give us. It's the government's fault they can't keep this guy on the payroll. On the whole, there's such mindbendingly little personal incentive to solve problems in the public sector when entities in the private sector will offer you whatever you desire in pay/benefits.


There are some interesting machine approaches to early diagnosis and prediction of psychosis.

Who will develop psychosis? Automated speech analysis may have the answer: http://www.sciencedaily.com/releases/2015/08/150824110809.ht...

http://www.theatlantic.com/technology/archive/2015/08/speech...

http://www.nature.com/articles/npjschz201530

I'm interested to see what Google can do. Early diagnosis and rapid start of an early intervention team dramatically improves outcomes.
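For anyone curious how the speech-analysis approach works mechanically, here's a rough sketch of the sentence-to-sentence "semantic coherence" idea those papers describe. The published work used Latent Semantic Analysis plus syntactic features; plain TF-IDF vectors (via scikit-learn) stand in for the embeddings here, so treat this as an illustration rather than the study's actual method.

    # Low similarity between consecutive sentences in a transcript is the kind
    # of "coherence breakdown" signal the linked papers describe.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def coherence_scores(sentences):
        """Cosine similarity between each consecutive pair of sentences."""
        vectors = TfidfVectorizer().fit_transform(sentences)
        return [float(cosine_similarity(vectors[i], vectors[i + 1])[0, 0])
                for i in range(len(sentences) - 1)]

    transcript = [
        "I went to the store to buy some bread.",
        "The bread at that store is usually fresh.",
        "My cousin never calls me anymore.",
    ]
    # The study reportedly keyed on the *minimum* coherence in an interview;
    # an abrupt topic jump like the last sentence drags that minimum down.
    print(coherence_scores(transcript))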


Maybe just replace the fluoride in the water supply with a Paxil/Zoloft combination.



