And that’s fine as long as the person using it has a sophisticated understanding of the technology and a company isn’t selling it as a “therapist”.
When an AI therapist from a health startup confirms that a mentally disturbed person is indeed hearing voices from God, or an insecure teenager uses Meta AI as a therapist because Mark Zuckerberg said they should, and it agrees that yes, they are unlovable, then we have a problem.
That last 20% of “missing nuance” is really important if someone is in that state! For the rest of us, the value of an AI therapist roughly matches journaling.