
This accident didn't save any lives though. It's just associated in your mind with saving lives. It's one more death; there's no decrease in traffic accidents to compensate.


So you're implying that our ability to take risks in the pursuit of massive systemic gains is exactly zero?


Trading risks now for future gains is always tricky.

Suppose the CEO of Uber was a cannibal, and you framed letting him eat people as a necessary perk in order to keep him happy and the self-driving program on track. Would it be valid to say the number of people it's permissible for him to eat is exactly zero, even if it slows down the production of a truly self-driving car? I mean, what's one or two lives compared to 40,000 a year or whatever? There's a lot of uncertainty about the costs and benefits though, even if you strictly adhere to a utilitarian viewpoint.


I'm fine with taking risks. I just don't think we should be making a cavalry charge into the machine guns. I know the payoff would be great if we succeeded, but it's still not a good idea.

I'm deeply excited by the possibilities of self-driving cars, and I would agree that it's necessary to take risks to make them a reality. The question is always whether we're taking a necessary risk or just being reckless.

Uber has taken unnecessary risks and learned relatively little from them. They didn't need a fleet on public roads to tell them that object detection was terribly broken.


I actually don’t think the payoff will be great. What’s the real benefit here? Some jobs will be lost, and in exchange we get a large, hackable software system that humans have little control over. If I still have to sit in the front seat and watch the road, it really doesn’t do anything for me. And there are always going to be instances like the “I didn’t test that scenario” situation above.



