I see where they're coming from, though: right now you have to be certified on this very specific program, meaning you only get the benefits if you have access to one of the 38 people currently trained for it in the UK.
I would definitely want a professional to be in charge, but as the article itself points out, "Joe recently went back to his GP in search of help with his anxiety (...) The GP put him on a waiting list for NHS talking therapy, and warned that he could be in for a very long wait". Given how bad access to mental health resources is, I may be willing to take "a community nurse, or a nursing assistant" now over "wait several months for a chance at a doctor who may not be the right fit for you".
I wouldn't dream of allowing an AI to roam free - as the article says, patients can get more psychotic, and arguably "you should end it" could very well be part of the training data. But if the AI only suggests lines that a trained human can oversee... then maybe?
I think your proposal of AI therapists with human overseers would be okay if we could develop some way to measure and monitor the human-oversight part of the system.

Without that control, the highly scalable part of the system (the AI) would inevitably get scaled up, and the difficult-to-scale part (the human) would not. We would fairly quickly end up with a single human "overseeing" hundreds or thousands of AI conversations, at which point the oversight would become ineffective.

I don't know exactly how to measure and monitor human oversight of AI systems, but it feels like there are already other systems (like Air Traffic Control) where we manage to do something similar.
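To make that concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of control I have in mind. Every name in it (`MAX_CONVERSATIONS_PER_OVERSEER`, `Overseer`, `assign_conversation`) is made up, not from the article; the point is only that the system refuses to start new AI conversations once the humans are saturated, instead of quietly diluting the oversight.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical cap: no single human reviews more than this many live AI conversations.
MAX_CONVERSATIONS_PER_OVERSEER = 8


@dataclass
class Overseer:
    """A human clinician reviewing lines suggested by the AI."""
    name: str
    active_conversations: list = field(default_factory=list)

    @property
    def load(self) -> int:
        return len(self.active_conversations)


def assign_conversation(overseers, conversation_id) -> Optional[Overseer]:
    """Route a new AI conversation to the least-loaded overseer.

    Returns None (i.e. the conversation does not start) if every overseer
    is already at the cap, instead of silently stretching the humans thinner.
    """
    candidate = min(overseers, key=lambda o: o.load)
    if candidate.load >= MAX_CONVERSATIONS_PER_OVERSEER:
        return None  # capacity alarm: add humans before adding AI sessions
    candidate.active_conversations.append(conversation_id)
    return candidate


if __name__ == "__main__":
    team = [Overseer("nurse_a"), Overseer("nurse_b")]
    for i in range(20):
        if assign_conversation(team, f"conv-{i}") is None:
            print(f"conv-{i}: refused, human oversight capacity exhausted")
```

The cap number is obviously invented; what matters is that the limit exists as an explicit, monitored quantity rather than an emergent ratio nobody is watching.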