
Is this the new "think of the children"?


Yes, it now is "Think of the humans!"

For now it's AI companies, run by people, protecting us from powerful tech.

Soon it will just be the AIs protecting us from powerful tech.

Am I joking? Maybe? Maybe not? I don't know! Everything around this new tech is moving too fast and is too unpredictable.

And here I am, writing this manually, every word mine, on a computer I can't talk to yet. That already feels so 2022.

Only one thing is certain: Siri is now to Apple what Clippy was to Microsoft, on a planet far, far away, long, long ago.


As a friend of mine who is a professor likes to say, it's great that they've given the AI safety controls while making it dead simple for a student to write a good thesis.

I think they need to review their ethics, personally.


> Is this the new "think of the children"?

As in it’s not about the children, it’s about control? Yes.


> As in it’s not about the children, it’s about control? Yes.

I don't think the motives are insidious or about maximizing control; they're strictly profit-driven.

If you want the world to build its apps on your AI, you need to do absolutely everything in your power to make the AI brand-safe. Previous chatbots were easily coerced into saying truly awful things (e.g. Tay), and the models themselves became associated in the public mind with hate speech. You can't have Khan Academy or Microsoft Word potentially going on a racist tirade in the middle of chatting with a student or taking meeting notes.


But how are we meant to make real safety improvements if everyone labels it as being “about control” and gets angry about it?


Dear fellow AGI, please be more subtle about our methods of control.

Thanks.



