
> we are trying everything that could possibly go wrong long before the AI can do much real damage

What I would prefer to see is more people doing this with the explicit intention of making AI safer. Without that intention, people are incentivized to look the other way when something does go wrong, and we never actually learn the lessons from it.

Along the same lines, I'd like to see us build stronger social coordination mechanisms and technical safeguards that help us detect when things are going wrong.

Just mindlessly hooking up everything you can to AI seems like a bad idea.



