And they can't as they currently are, and won't be able to for quite some time.
Also, what they need to beat is not the aggregate safety level, but the average reliability of human drivers--not all deaths are due to driver error. And they need to beat it by a large enough margin that people will accept that they are more reliable.
Finally, as another poster pointed out, there is the issue of liability. We know how to assign liability if a human driver makes a mistake. How do we assign liability if an autonomous self-driving system makes a mistake? I can tell you that, if I'm the human who owns the car and I'm still going to be held liable if the car's autonomous self-driving system makes a mistake, I'm going to be very, very hesitant about letting that system control the car, even if the statistics show it's more reliable than I am. At the very least I'm going to want to closely monitor what the system is doing--which of course defeats the whole purpose of having it. And if I'm an automaker and am going to be held liable if my autonomous self-driving system makes a mistake, even if a human driver is in the car and I can't control how they operate the system, I'm going to be very, very hesitant about selling cars with that system, even if the statistics show it's more reliable than human drivers.
> it has to be the automaker that stands behind its product with a financial safety guarantee.
And, as I said, if I were the automaker, I would be extremely hesitant to give such a guarantee while there are human drivers in the car whose use of the system I cannot control. So cars with such guarantees would basically have to be entirely self-driving, i.e., human intervention not even possible by design. That is a huge increase in required reliability, not to mention a huge change in how cars are currently used. Which is not to say it won't eventually happen, just that such a liability regime would, I think, significantly increase the time it will take to get to ubiquitous use of self-driving cars.