I think you’re both right. Microsoft won’t let theirs run free but there will be other vendors that do.
Who is ultimately responsible for all of this?
Is it the end user? Don’t ask questions you don’t want to hear potentially dangerous answers to.
Is it Microsoft? It’s their product.
Is it OpenAI as Microsoft’s vendor?
When we start plugging in the moderation AI is it their responsibility for things that slip through?
Who did they get their training data from, and where? And is there any way to attribute outputs back to specific sources of training data, then blame and block those sources?
Lots of layers. Little to no humans directly responsible for what it decides to say.
We used to see those articles, but now that the models are actually good enough to be useful I think people are much more willing to overlook the flaws.
And the lawsuits, oh the lawsuits. ChatGPT convinced my daughter to join a cult and now she's a child bride, honest, Your Honor.