
They’re not going to let it run free or you will see countless articles on “ChatGPT is a Holocaust denier, news at 11”.

And the lawsuits, oh the lawsuits. ChatGPT convinced my daughter to join a cult and now she's a child bride, honest, Your Honor.




I think you’re both right. Microsoft won’t let theirs run free but there will be other vendors that do.

Who is ultimately responsible for all of this?

Is it the end user? Don’t ask questions you don’t want to hear potentially dangerous answers to.

Is it Microsoft? It’s their product.

Is it OpenAI as Microsoft’s vendor?

When we start plugging in the moderation AI, is it their responsibility for things that slip through?

Where did they get their training data from, and from whom? And is there any way to attribute output back to specific sources of training data, and to blame and block them?

Lots of layers. Few to no humans directly responsible for what it decides to say.

Maybe the end user does have to deal with it…


We used to see those articles, but now that the models are actually good enough to be useful, I think people are much more willing to overlook the flaws.


> They’re not going to let it run free or you will see countless articles on “ChatGPT is a Holocaust denier, news at 11”.

If we're afraid of that then we're already worse off.





