I think your proposal of AI therapists with human overseers would be okay if we were able to develop some kind of metrics and monitoring for the human oversight portion.

Without that control, what would inevitably happen is that the highly scalable part of the system (the AI) would be scaled, and the difficult-to-scale part (the human) would not. We would fairly quickly end up with a single human "overseeing" hundreds or thousands of AI conversations, at which point the oversight would become ineffective.

I don't know how to measure and monitor human oversight of AI systems, but it feels like there are already other systems (like Air Traffic Control) where we manage to do similar things.
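To make the idea concrete, here is a minimal sketch of the kind of metric the comment is gesturing at: track how many live AI conversations each available human overseer is actually responsible for, and flag when that ratio drifts past a limit. All names, fields, and thresholds here are illustrative assumptions, not a description of any real system.

```python
from dataclasses import dataclass, field


@dataclass
class OversightMonitor:
    """Hypothetical tracker for the human-oversight ratio of an AI system."""

    # Assumed limit on how many live conversations one human can meaningfully review.
    max_conversations_per_overseer: int = 20
    overseers: set[str] = field(default_factory=set)
    active_conversations: set[str] = field(default_factory=set)

    def oversight_ratio(self) -> float:
        """Live AI conversations per available human overseer."""
        if not self.overseers:
            return float("inf")
        return len(self.active_conversations) / len(self.overseers)

    def is_oversight_effective(self) -> bool:
        """True while the ratio stays within the configured limit."""
        return self.oversight_ratio() <= self.max_conversations_per_overseer


if __name__ == "__main__":
    monitor = OversightMonitor(max_conversations_per_overseer=20)
    monitor.overseers.update({"alice", "bob"})
    monitor.active_conversations.update(f"conv-{i}" for i in range(500))

    print(monitor.oversight_ratio())         # 250.0 conversations per human
    print(monitor.is_oversight_effective())  # False: oversight is nominal only
```

The point of even a crude metric like this is that it makes the failure mode visible: when the ratio blows up, the "human in the loop" is there in name only.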
