There is an ethics/philosophy thought experiment (which I'm failing to remember the name of), which goes a little something like this (modified for the AI example):

Imagine that in the world as it stands today, the accidental death rate from auto crashes is 100,000 per year; call that Earth One. Now imagine a world in which AI drivers reduce that death rate to 20,000 per year; call that Earth Two. Since these are two different worlds, and the kinds of accidents that human and AI drivers get into are likely to differ, there will probably be a large number of people who die in Earth Two who would still be alive in Earth One.

In other words, if AI drivers become the norm, there is some subset of people who will die but would have lived if AI drivers had not become the norm.

Luckily, we don't live in counterfactual worlds like that, nor do we have knowledge of other timelines, so we're spared from knowing that this is the case.
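A toy back-of-the-envelope sketch of the point (my illustration, not from the original comment): if we assume victims in each world are drawn independently at random from the same population, the two victim sets barely overlap, so nearly everyone who dies in Earth Two would have survived in Earth One.

```python
import random

random.seed(0)
population = range(10_000_000)

# Earth One: 100,000 crash deaths per year under human drivers.
earth_one_victims = set(random.sample(population, 100_000))
# Earth Two: 20,000 crash deaths per year under AI drivers, drawn
# independently, since AI crashes happen in different circumstances.
earth_two_victims = set(random.sample(population, 20_000))

# People who die under AI drivers but would have lived otherwise.
new_victims = earth_two_victims - earth_one_victims
print(len(new_victims))
```

With independent draws, the expected overlap is only about 200 people (20,000 × 100,000 / 10,000,000), so roughly 19,800 of Earth Two's victims are "new" deaths, even though 80,000 fewer people die overall.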



