
It's a good question. I immediately found myself asking the same thing after posting that comment. I guess part of me just wants as many breakpoints along the process as possible.

But also, at least then you have someone who is liable when things go wrong. When it's fully automated, like the other comment mentions, they can just shrug and blame the AI. Who gets sued when a self-driving car kills someone by accident? I don't know. Perhaps a lack of ownership is excusable there. But when a weapon deliberately kills someone, I think we need to have ownership somewhere.

Perhaps, as a general rule, the maker of the AI system should have liability for it until someone else signs and accepts that responsibility. None of this "Company does not accept liability" crap. They have to make it clear that "customer accepts liability", or else liability falls on them. That way they'd be incentivized to make the military, or whoever, sign.



