Hacker News

How would we program a self-driving car that is faced with something like the "trolley problem" [1]? That is, the car faces two probable collisions of which it can avoid only one, or must choose between running over a pedestrian and crashing into a tree.
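To make the question concrete, here is a minimal sketch of what the "utilitarian" answer would look like in code: score each candidate emergency maneuver by expected harm and pick the minimum. Everything here (the `Maneuver` type, the probabilities, the harm scores) is an illustrative assumption of mine, not taken from any real self-driving-car stack; the hard part is precisely that someone has to choose those harm scores.

```python
# Hypothetical utilitarian planner: minimize expected harm over
# candidate maneuvers. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_collision: float   # estimated probability the collision occurs
    harm_if_hit: float   # assumed harm score for that outcome

def expected_harm(m: Maneuver) -> float:
    return m.p_collision * m.harm_if_hit

def choose(maneuvers: list[Maneuver]) -> Maneuver:
    # The rule itself is trivial to code; the ethical debate is
    # about who sets p_collision and harm_if_hit, and how.
    return min(maneuvers, key=expected_harm)

options = [
    Maneuver("brake straight, hit pedestrian", 0.9, 100.0),
    Maneuver("swerve, hit tree (occupant at risk)", 0.8, 60.0),
]
print(choose(options).name)  # picks the lower-expected-harm option
```

Note that under these made-up numbers the car sacrifices its occupant's safety, which is exactly the scenario the Forbes piece below asks about.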

I assume this is probably already worked into the current prototypes. Does anyone have references to discussions of this in current-generation self-driving-car prototypes?

[1] https://en.wikipedia.org/wiki/Trolley_problem




Oh! I just found a few references in the very Wikipedia article I linked:

Patrick Lin (October 8, 2013). "The Ethics of Autonomous Cars". The Atlantic. http://www.theatlantic.com/technology/archive/2013/10/the-et...

Tim Worstall (June 18, 2014). "When Should Your Driverless Car From Google Be Allowed To Kill You?". Forbes. http://www.forbes.com/sites/timworstall/2014/06/18/when-shou...

Jean-François Bonnefon, Azim Shariff, Iyad Rahwan (October 13, 2015). "Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?". arXiv. http://arxiv.org/abs/1510.03346

Emerging Technology from the arXiv (October 22, 2015). "Why Self-Driving Cars Must Be Programmed to Kill". MIT Technology Review. http://www.technologyreview.com/view/542626/why-self-driving...





