I'd say very little good can come from this. What would be an example of an "appropriate" use of this technology?
Basically, where it could excel is giving probabilistic predictions of "mental disorders" across a high volume of users. What good thing would you do with that? Advertising to suggest that someone get help, maybe?
I worked on a shared task (CLPsych [1]) on estimating mental distress severity. The classification task was to determine how acutely a user message/post on a mental health forum would require moderator intervention. Some people announce their suicides live on the forum, for instance. You want to point the moderators to those people asap.
Our research group applied this tech to Belgium's suicide prevention center chat tool, so the workers can prioritize based on severity.
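For a sense of what that kind of triage classifier looks like, here is a minimal sketch, not the actual CLPsych or Belgian system; the posts, labels, and severity levels are invented for illustration:

    # Minimal sketch of a severity-triage classifier, NOT the real
    # CLPsych or suicide-prevention system; posts and triage labels
    # below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = ["I can't cope anymore and I have a plan",
             "rough week, but I'm hanging in there"]
    labels = ["crisis", "amber"]  # hypothetical triage levels

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(posts, labels)

    # Rank incoming posts so moderators see the most acute ones first.
    probs = model.predict_proba(["please someone talk to me, tonight is it"])
    print(dict(zip(model.classes_, probs[0])))

In the real task there are of course far more posts, richer features, and human moderators in the loop; the point is only the shape of the pipeline: text in, severity ranking out.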
An "appropriate" use is of course targeted advertising. You can create products with a brand image that appeals to specific personalities. Then you buy advertising space when the person is in an emotional state where they are most receptive to your ad.
An appropriate use could be the treatment of those mental disorders on an 'industrial scale'. You could use the technology both to identify people with mental issues and to apply cost-effective tools that allow them to heal and grow.
The most likely good is in the ability to study the model itself to better understand mental disorders and therefore inform treatment, or in the ability to deploy something similar to monitor folks who are already clinical patients on an opt-in basis. I'd agree with you that the deployment of the model in almost any context re: observation of the general public is very unlikely to be beneficial.
I can see companies using this to screen applications to filter out potentially problematic people, sometimes with more social acceptability (think airline pilots), sometimes with less (most other jobs).
I disagree. I happily include my personal site, which hosts quite a few controversial articles (reflecting my opinions at the time), in my resume. It works as a great filter for companies I wouldn't want to work for.
This works great for competent, experienced software engineers who are in extremely high demand right now. To be blunt, most people are not that valuable economically to a company, and so have to fight a lot more to get a lot less. Most people would have a hard time getting work with any broadly controversial topics associated with them.
There's also the argument that software engineers may not always be in such high demand, and you never know what's going to be catalogued and indexed to you 10, 20, 30 years from now.
When you find yourself having to make a public apology for an old blog post in the future, you’re going to regret doing that. Or maybe a gen Z hacker that hasn’t been born yet will ransomware your business because you wrote an article today with a word that will become socially unacceptable in the future.
I have no delusions that my political opinions or writing style are original enough or convincing enough to change the world. So it’s all just risk for me. If I’m writing something on my blog, it’s probably a focused technical article with no politics and little to no humor.
Why should I censor myself just because there are extremists out there?
I actually despise these trigger warnings on, and the canceling of, cultural pieces just because the author was a misogynistic idiot (or an anti-Semite, or a proponent of slavery). We need to put books/texts/essays into their historical contexts and discuss why we today hold beliefs that are different (and that to us feel more modern/humane/better).
I am politically very left, an atheist, and I have argued against dehumanizing -isms since my days in high school 25 to 30 years ago. But I still find value in discussing ideas from Kant, Luther, or many others who nowadays are often canceled in educational settings because of their despicable values/-isms.
Honestly, you could probably train a GPT-3-based model on the posts of medium- and high-profile people in whatever industry you work in, add a couple of post-generation checks to eliminate any accidental plagiarism, and you'd be most of the way there.
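For the plagiarism-check part, even a crude n-gram overlap test against the training corpus would catch verbatim regurgitation; a sketch, assuming you can enumerate the corpus (the n-gram size is an arbitrary placeholder):

    # Rough post-generation check: flag generated text that shares long
    # word n-grams with the training corpus. The n-gram length (8) is an
    # arbitrary illustrative threshold, not a tuned value.
    def ngrams(text, n=8):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def looks_plagiarized(generated, corpus_posts, n=8):
        corpus_grams = set()
        for post in corpus_posts:
            corpus_grams |= ngrams(post, n)
        return bool(ngrams(generated, n) & corpus_grams)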
Dunno if it's too productive to fret over the appropriateness of computational tools.
I mean, when it comes to stuff like mass surveillance, we can debate the merits of having such a mass-surveillance infrastructure. But when it comes to simple computational tools, they're basically just thoughts -- thoughts that people can have even without AI, and that presumably will become increasingly well-developed over time.
Forward-facing ethical frameworks should probably discount perceived scarcity of computation. That is, if we're talking ethics, presumably we ought to consider things under a presumption of ample cheap computing power, as that seems to be the direction we're going in.
In other words, it ought to become trivial for individuals to come up with better methodologies than this on a whim. And since it'll be so easy for most folks to think up (and implement) stuff like this on their own, there's little point in fretting over exploration of such things now. Rather, we presumably ought to regard stuff like this as mundane thinking.
You could actually use it to counter bias in an existing system. Consider a real example from a prior coworker:
We lend money for mortgages to people based on legitimate criteria: FICO score, income, etc.
THEN, we use a model to see if we lend more/less money to African Americans at better/worse rates. If we see bias in this model, we make adjustments to the first model so that they are "treated fairly".
I've seen the same done on message boards where certain actions were more likely for men vs. women, and the administration discovered they were accidentally awarding more points to men than to women. So they adjusted the points for different actions so women had a fair-ish chance at getting points.
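A minimal sketch of that second-pass audit, assuming you can see decisions per group; the column names and the 80% rule-of-thumb threshold are placeholders:

    # Sketch of the second-pass fairness audit described above: compare
    # approval rates across groups and flag large disparities. Column
    # names and the four-fifths threshold are illustrative assumptions.
    import pandas as pd

    decisions = pd.DataFrame({
        "group":    ["A", "A", "B", "B", "B", "A"],
        "approved": [1,    1,   0,   1,   0,   1],
    })

    rates = decisions.groupby("group")["approved"].mean()
    ratio = rates.min() / rates.max()
    if ratio < 0.8:  # "four-fifths rule" style check
        print(f"possible disparate impact: {rates.to_dict()}")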
> What would be an example of an "appropriate" use of this technology?
Well, investigative officers concerned for the well-being of the individual and the potential for self harm or danger to others could attempt to help them with a taser or a shotgun.
Or, if you are a liberal, you could have algorithmic silo-ing of their posts? That way it would be demonstrably not prejudiced censorship but something that could be investigated later, if the right channels are followed.
It could be coupled with the software that detects whether people are homosexual or conservative based on their physiognomy.
Could there be some public health benefits? Could the data help governments decide where to spend money and evaluate if the money spent has had any effect?
If that was your original intent, you phrased it really badly. You didn't say "trust no one". People were asking what to do with it, and you said "i wouldn’t give it to gov". That isn't "this is dangerous and shouldn't exist".
This doesn't work as a reasonable objective measure of population mental health disorders for several reasons.
1) The oversampling of the training set makes it uncalibrated when applied to the general population (to recalibrate you need an estimate of the prevalence to begin with, which sort of defeats the purpose).
2) Online posts are not a random sample of the population. (Perhaps this is solvable with some post-stratification of the estimates, although it requires demographic data on the poster.) If you take the self-reports that the researchers used to define disorders at face value, those would make more sense to use than this model.
These text-based models are so superficial that, when applied to mass datasets with low prevalence of the underlying condition, they will ultimately yield very low positive predictive values (e.g., flag 100 people and, even if the model is good, only 5/100 will have actual mental health problems).
As version_five asks, it is hard to imagine any reasonable use of the model given such low positive predictive values (which imply incredibly high false positive rates).
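To make the base-rate arithmetic concrete, a quick back-of-the-envelope calculation with made-up but flattering numbers:

    # Illustrative base-rate arithmetic (all numbers invented): even a
    # classifier with 90% sensitivity and 85% specificity has a tiny
    # positive predictive value at 1% prevalence.
    prevalence  = 0.01
    sensitivity = 0.90
    specificity = 0.85

    true_pos  = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    print(f"PPV = {ppv:.1%}")  # ~5.7%: about 94 of every 100 flags are wrong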
> getting rates of mental disorders at the population level is a useful application
this post sounds dangerous to me -- there are vast differences between a legal judgement and a medical judgement; furthermore, political policy levers are sometimes used in ways that make very little sense, due to situational factors.
It is no secret that humans are competitive, tend towards violence, and that political oppression exists to some extent, of some groups, almost everywhere. A "medical judgement" used for legal or, worse, political purposes is, I claim, dangerous. The post above this one appears to make no distinctions and leans towards "yes, do this."
Boundless good will come of it from the perspective of the people running the system. Just abandon the idea that the system exists to help you or make society better.
this sounds like a fantastic tool to use psychiatric diagnosis the way it's supposed to be used: as a science-shaped cudgel with which to beat inconvenient members of society.
Remember that time, in living memory, when homosexuality was in the DSM? Now picture that, but with machine learning able to pick out the gay posters online.
I would assume that someone will come up with a phone app to fix text so it triggers no filters.
SaneSpeak fixes your communications to analyze as sane, honest, and confident. Simple sliders allow you to project the personality you want your audience's AI to pick up. Choose proven presets and watch those sales and promotions roll in.
This would work great as an AR overlay, auto-suggesting one's next sentence in a conversation, so that it sounds more sane and confident.
With enough training and refinement, we might be able to get the % of original untweaked human utterances down to single digits.
> Maria's public Wishlist indicates she recently purchased those shoes. Her public watchlist indicates she watches Severance on Apple TV. A main character, Helly, wears similar-looking shoes. Maria has used the word "fab" to describe items of clothing in 3.6% of her Instagram posts. Next, comment to Maria that her "fab shoes" remind you of the ones you saw Helly wear on Severance?
Not a bad idea! Plenty of friends and colleagues have asked me to look over their work emails so they come across as professional, or at least unlikely to generate conflict.
Might be nice to lazily respond to people and have the program autogenerate professional boilerplate for you.
I would say it is the same kind of bullshit. "Mental disorders" are just artificial conventions that classify emotional distress, non-compliant behavior and stuff.
There are no proven "chemical" imbalances with, for example, depression. And psychotropic medication does not balance the "chemicals".
I also doubt that being sad or mad about something on the internet will give you accurate predictions about the mental order or disorder of the user. Of course there will be some correlation, but there also will be a lot of false negatives and false positives. I mean diagnoses by non-artificial human professionals are quite unreliable.
> "Mental disorders" are just artificial conventions that classify emotional distress, non-compliant behavior and stuff.
Illness is also an artificial convention that classifies physical distress, abnormal phenomena, "and stuff." That doesn't make it less useful as a label. There are people out there with all kinds of mental disorders.
> I also doubt that ... mental order or disorder of the user.
Me too. The article basically sounds like fund-raising/trying to get support from a different financial source.
I’m fairly sure the “chemical imbalance” thing is still not a proven cause. It’s more of a metaphor psychiatrists use when trying to get someone to take meds.
Time in nature, meaningful work, healthy food, body movement, healthy relationships, processing emotional trauma, etc. are the contexts that ultimately get leverage on “mental health”.
For me, I was told by Stanford psychiatrists I’d be on meds my entire life, presenting w suicidal depression, bipolar, schizophrenia, anxiety, etc.
Thankfully I trusted my intuition and found a path through without traditional drugs (psychedelics helped).
But from my journey I realized “chemical imbalance” was more metaphor than reality… and sort of a hypothesis that didn’t really play out.
So basically this was a theory that came out around when LSD was discovered to impact the serotonin system.
It never really panned out but psychiatrists continue to use it. They will say it’s like a metaphor to get people to accept taking medications.
When I was confronted w a diagnosis of Bipolar / schizoaffective, having suicidal depression and psychotic mania, over a decade ago, I basically landed at… the tribe of psychiatry calls the dance I’m doing a certain thing, but their solutions are really not that sound and we can do better.
So, I didn’t continue to ignore what was going on, but I went on a personal research mission and took on my diagnosis as a dharma, not a dogma.
Ultimately I was able to address the underlying trauma, relationship patterns and language patterns that led to all of those things, and I now lead a life filled with joy, deep calm, gratitude and possibility, with no medications, despite what the Stanford psychiatrists said was possible.
Psychedelics helped as well, and I went on to run an underground clinic in NYC for a few years before they were mainstream; I developed a Breathwork practice, BioMythic.com, that helped as well (without drugs), and founded a non-profit w funding from Doc Bronners to legalize access to psychedelic medicines.
I also realized the path I was on was really about developing a capacity for deep presence and leadership, as founders from unicorn startups began showing up at my door, so to speak, asking me to help them master their inner landscapes so they could articulate their bigger visions.
I’ve had the honor to be the personal coach on relationship, spirituality, mental wellness and leadership for some of the biggest names in tech.
And for me, it was ultimately a discovery that this “thing” that was causing me pain, and that I was told I needed to medicate away, was actually my greatest gift. And now it’s allowed me to run a multi-six-figure coaching business on a few hours of client calls a week, while touring around the country in MajikBus.co
Anyhow, long rant, but I found other people’s stories really helpful when I was finding my way outside the mainstream!
My understanding is that it has gone out of favor as a theory. It wasn’t ever developed into much of a robust model and is sorely lacking in many ways.
But this is a layman understanding so I could be missing something.
I've been reading a little about epigenetics, the systems by which your body's genetic code responds to your environment.
For instance, people with scalp psoriasis can change the epigenetic structure of the skin on their scalp by using anti-psoriatic shampoos, so that the body itself, freed of the irritation response of psoriasis, stops acting as if it were psoriatic.
Of course, as soon as the medication stops, the psoriasis returns, so it's not a permanent fix; nothing in epigenetics is. The fact that you can alter your gene expression in real time is nevertheless an amazing thing to me.
I said all of that to say that there's a chance that any "chemical imbalances in the brain" might be a reaction to the underlying mental illness, rather than the cause of or solution to the mental illness in the first place, and that treating the chemical imbalance may give the person time away from the effects of their illness.
That, combined with therapy and validation, can conceivably result in "curing" the illness in people that had a chance to recover in the first place by giving them the boost they needed to get over the hump.
Not a one-size-fits-all solution, of course, we don't know enough about the brain to know what would be a permanent solution, but in your case, finding your own path and having uplifting experiences thanks to psychedelics might have been all that you needed to put that chapter of your life behind you.
This is all armchair conjecture but it's a thought I've been having so I figured I would share.
Yes, that's why there's medication to treat depression, anxiety etc. Obviously it doesn't work for every condition and every person and there's still a lot of research being done but this is one of the main ways to tackle these problems.
I think so. A quick Google search returns 15.3M results. Googling for an exact match returns 217K results. All results seem to be related to mental health.
It’s a common phrase, but people who dislike psychiatry are correct to point out that “normal” behavior is also caused by “chemical imbalances”. That’s just basic metaphysical materialism!
I don’t agree with them that “there’s no such thing” as mental illness, but I do agree that we medicalize trivial variations too quickly, and what counts as ordered or disordered depends on social context, and we can’t just assume the social context is ordered.
I think companies will certainly find a use for it to monitor internal chats on teams or wherever, code review comments, web searches, and any other text input they can see.
Hopefully you won't pop up on an AI definition of a mental disorder.
They are not measuring emotions in isolation; that would not be novel. They are computing signature patterns of emotional transitions. This is revealing of some mental disorders.
This was what stood out to me about the article as well. I could see something like this being a useful tool for therapists or self-help to be able to map the overall emotional tone of one's content over time.
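If I'm reading the method right, the feature is roughly a transition matrix over per-post emotion labels; a toy sketch, where the labels are hardcoded stand-ins for the output of an emotion classifier:

    # Toy sketch of "emotional transition" features: given one user's
    # per-post emotion labels over time (hardcoded here; a real system
    # would get them from an emotion classifier), count label-to-label
    # transitions and normalize into probabilities.
    from collections import Counter

    labels = ["neutral", "sad", "sad", "angry", "sad", "neutral"]

    pairs = Counter(zip(labels, labels[1:]))
    totals = Counter(labels[:-1])
    transitions = {(a, b): c / totals[a] for (a, b), c in pairs.items()}
    print(transitions)  # e.g. P(sad -> angry), P(sad -> sad), ...

Vectors like these, tracked over time, are the sort of thing a therapist-facing dashboard could plot.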
This reminds me of phrenology because it's just as crazy and evil.
Yes, I did read the article; it's really short. I was about to say that I've been on the internet longer than this guy's been alive, but... I am no longer sure the number in his name refers to his age.
Like any tool, it can be used for both good and bad.
And like any tool, once it exists, it exists. It's pointless to worry about whether it should exist.
The only useful reaction is to ensure that for whatever powers of manipulation and discrimination it gives to some, that individuals get to use the same tools in their own defense.
I don't see how this could work, because the AI would need some training data of web posts that included people _without_ mental disorders, and frankly I don't see where on the internet you would be able to find much of that...
> Reddit, which offers a massive network of user forums, was their platform of choice because it has nearly half a billion active users who discuss a wide range of topics.
Let’s be honest here… Reddit was their first choice because of the high prevalence of mental disorders amongst its users. One wonders why they didn’t choose Twitter instead in that case… but better to test the AI system’s load capacity before tackling such a gargantuan task I suppose.
> One wonders why they didn’t choose Twitter instead in that case
I don't know, but I doubt it's that Reddit users are substantially less mentally healthy than Twitter users. There might be a mean intelligence difference (just guessing), but Twitter isn't exactly known for rational and sane communication of ideas.
Although might Reddit simply have a larger corpus per user given the lack of a character limit?
EDIT: The more I think about it, the more I think my assumption is close to the truth. Even if your character limit is 280 characters, that limit can affect how someone forms their sentences. If there is no meaningful limit then a person can feel they can express their full idiolect unencumbered.