
How would your behavior change if some AI black box might decide, based on unknown parameters, that it is time to kill you?

The idea of an idealistic killbot overlord sounds a little dangerous to me.




I'd probably plot to destroy the killbot, which would probably result in it trying to kill me. Doesn't sound ideal.


Well, I assume in that situation my behavior would stop altogether.



