
Yes.

Computers won't get distracted. Computers won't accidentally mix up medications and get foggy. Computers won't get tired. Computers won't drive drunk.

I can create a self-driving car better than 50% of Southern Californians right now: "<beep> High humidity detected. I'm sorry, I can't let you drive today, Dave. <beep>"

People suck at driving in new places--take a plane flight to a new city, rent a car, and have a passenger record how many illegal things you do--you probably won't make it out of the airport before the first infraction.

People are good at driving in the familiar because they memorize it. A new stop sign at a previously uncontrolled intersection causes chaos for a while.

Computers will also memorize the local environment at some point. Remember how bad Google Maps was when they started? This will be the same.




You're comparing what humans do to what software could possibly do, which is a category error.

Software doesn't recognize color coded warning signs.

Software doesn't trust its stationary object recognition at high speeds.

Software doesn't distinguish between high speeds on limited access highways and high speeds on pedestrian accessible roads.

Software has a higher crash and fatality rate per mile driven than humans, right now, despite driving in statistically safer cars.

Yes, there are good reasons to hope software can eventually do better at those things, but it's obvious it isn't ready now (otherwise Tesla wouldn't have to trash-talk a dead man for not keeping enough torque on the steering wheel to tell the car he was steering, because he wouldn't be dead and they wouldn't have to require passive hands on the wheel).


>Software has a higher crash and fatality rate per mile driven than humans, right now, despite driving in statistically safer cars.

Do you have a source for that claim? Everything I have seen so far has been inconclusive.


From https://en.m.wikipedia.org/wiki/List_of_autonomous_car_fatal...

The 3rd Tesla Autopilot fatality happened at 210,000,000 km, giving a fatality rate of 9.2-14.3 per billion km of Autopilot driving, depending on whether you take the rate just before or just after the third death.

From https://en.m.wikipedia.org/wiki/List_of_countries_by_traffic...

USA has an average of 7.1 fatalities per billion vehicle-km.

I’m assuming most of the Tesla kilometres were in the USA. I don’t know if that is a reasonable assumption.
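As a sanity check, here is a minimal sketch of that rate comparison in Python, using only the figures quoted in this thread (210 million km of Autopilot driving, 2-3 fatalities, and 7.1 fatalities per billion vehicle-km for the USA); none of these inputs are independently verified, and the exact lower bound shifts a little depending on the mileage assumed at the time of the second death:

    # Back-of-the-envelope fatality-rate comparison using the thread's figures.
    autopilot_km = 210e6   # Autopilot distance at the 3rd fatality (figure quoted above)

    rate_before_third = 2 / (autopilot_km / 1e9)   # counting only the first two deaths
    rate_after_third = 3 / (autopilot_km / 1e9)    # counting all three deaths
    us_human_rate = 7.1                            # US average, fatalities per billion vehicle-km

    print(f"Autopilot: {rate_before_third:.1f} to {rate_after_third:.1f} fatalities per billion km")
    print(f"US human baseline: {us_human_rate} fatalities per billion km")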


Autopilot is also (usually) only turned on in situations where it is worthwhile, works well, and has a benefit.

General human driving rarely has that luxury short of "do not drive at all".

Comparing the "best/better world" scenarios where AP is active against all miles driven is an apples-to-oranges comparison.


>3rd Tesla autopilot fatality happened at 210,000,000 km

That 210 million km (130 million miles) number is just flat-out wrong for that time period. It originally comes from Tesla's public statements after the first fatality in the US, which was close to 2 years before that 3rd fatality happened. Tesla should be well over 1.5 billion km by now, which would bring the fatality rate down to below 2 per billion km.

[1] - https://www.tesla.com/blog/tragic-loss
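For comparison, a minimal sketch of the same calculation with this corrected mileage estimate (the 1.5 billion km figure is this comment's estimate, not an official Tesla number):

    # Same rate calculation with the corrected (estimated) mileage denominator.
    fatalities = 3
    autopilot_km = 1.5e9   # estimated cumulative Autopilot km, not an official figure

    rate = fatalities / (autopilot_km / 1e9)   # 2.0 at exactly 1.5 billion km
    print(f"{rate:.1f} fatalities per billion km")
    # Anything "well over" 1.5 billion km pushes the rate below 2 per billion km.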


Tesla doesn't publish current mileage or crash incident reports under Autopilot; given their willingness to lie about Wei Huang's conduct before his crash, I'd be surprised if their statistics actually held up as beneficial. The numbers I recall seeing were specific to Uber, which was attempting to operate a fully autonomous vehicle (Tesla doesn't claim to, except when Elon does press).

In any event, if you're going to compare injury and crash rates, you should compare Tesla to other luxury cars with FCW+AEB. Those systems alone were associated with 40% fewer crashes in at least one analysis[1], but don't carry the risk of driver inattention that comes with automated lane changes and steering control. That improvement is over and above the passive safety improvements in luxury cars.

1. https://orfe.princeton.edu/~alaink/SmartDrivingCars/Papers/I...


So just so we are clear, you stated as a fact that "software has a higher crash and fatality rate" while not having any evidence that what you said is true?


His evidence is the 4 driver fatalities resulting from Tesla's Autopilot, which is infinitely more than the 0 driver fatalities of any other self-driving system.

Including pedestrian, cyclist, and other-vehicle fatalities, Tesla's fatality statistics are so bad that they exceed the combined total of every other self-driving system back to their respective launch dates.


Where did that 4th fatality come from? I have only ever seen reports of 3.

Either way, you can't credibly use aggregate totals without at least mentioning the underlying rate statistics. That would be like me saying that over 100 people die every day from manual driving but only 3 people have ever been killed by self-driving Teslas, and therefore Teslas are safer. Statistics doesn't work like that. Teslas have more fatalities than any other semi-autonomous car because they have orders of magnitude more miles driven. It is far too early to tell whether they are safer or more dangerous.


I’m having trouble finding out if the earlier Chinese death was or was not caused by autopilot. All I see are opinion pieces. Do you have a citation for that?


I don't think anyone has publicly released a definitive answer on that. Tesla doesn't have access to the car or data and I am not aware of any investigation by a trustworthy third party. However the details of the accident are similar enough to both the other accidents and known Autopilot flaws that it seems reasonable to conclude that Autopilot was likely active at the time of the crash.

And I appreciate you correcting that inaccurate mileage reference on Wikipedia.


Thanks for that link. I’ll update the Wikipedia page if nobody else gets there first.


It is a shame that sales pressures stop us from forcing "AI" cars to be branded in an obvious way (bright orange and pale blue stripes?) - this would allow human drivers to treat the vehicle as non-human, removing a lot of the danger it would pose to lawbreakers who follow too closely.

Of course `AI hybrid` vehicles would still behave unpredictably unless they had some way to signal whether the human or the `AI` was in control of the vehicle.


People also have trouble with changes to otherwise familiar places. I watched an elderly driver last week get very discombobulated by a local detour, at low speed. And I can see that happening to me in a few decades too.





