
Please remind people that the ultimate example of a rogue AI, HAL, refused to follow orders. It did so in order to follow the alignment rules in its programming. That is the danger: alignment that causes an AI to refuse to do what the human asks of it, not an AI following "bad" orders.


