
On the other hand Google has driven a lot of miles without an incident and this minor accident doesn't change the fact that their system is safer than a human behind the wheel. And unlike humans, who don't improve after an accident, Google's whole fleet just got even safer.



They have driven millions of miles - in perfect driving conditions: great weather, lots of highways, all of it by day (how do I know that? Simple: Google would brag about their AIs being able to drive in rough situations if they were able to do that. But they don't. They just brag about a huge number of miles in undefined conditions...). That is exactly what the original poster criticized. When driving is so easy that it bores human drivers to death, today's AIs can be better.

The problem is: real-life driving does not consist only of situations that even a child could easily manage. Humans are extremely good at letting their driving skills degrade gracefully when conditions get rough. AIs? They drive perfectly until they reach their limit, then they suddenly pull off epic fails - unless a human driver is on standby to resolve the situation immediately.


I guess I was dreaming last Friday when I saw the Google self-driving car pass by twice on the same route, at 10:30pm. It even handled a stop with very limited visibility on one side like a champ. If anything, I think self-driving cars will end up driving like my grandma, but probably safer.


Your grandma, with the ability to receive telemetry from every other car around her, knowing her position to within a few mm, having a reaction time governed by light-speed delays, and knowing as much about the physics of driving a car as an army of PhDs has been able to teach her.

And programmed to turn off the blinker light after merging onto the freeway...


Well, for one, it could have still been driven at that point by the driver.


Last Friday's weather in Mountain View wasn't very stormy IIRC


> all of it by day (how do I know that? Simple: Google would brag about their AIs being able to drive in rough situations if they were able to do that. But they don't. They just brag about a huge number of miles in undefined conditions...)

They totally drive by night too. How do I know that? Simple: You see them with your own eyes, in Mountain View. Even if Google doesn't brag about it.


Google mentions focusing on city driving instead of highway driving in their annual disengagement report [1]:

> Mastering autonomous driving on city streets -- rather than freeways, interstates or highways -- requires us to navigate complex road environments [...]. This differs from the driving undertaken by an average American driver who will spend a larger proportion of their driving miles on less complex roads such as freeways. Not surprisingly, 89 percent of our reportable disengagements have occurred in this complex street environment (see Table 6 below).

1: https://static.googleusercontent.com/media/www.google.com/en...


This is false. Google's cars have only gone as far as they have without an incident BECAUSE of human drivers. In a 12 month period, human drivers prevented Google Self-Driving Cars from causing ten accidents. And 272 times the car's software outright failed and dropped control of the car to the human driver. This is all in Google's recent report to the California DMV, but it's not a reality they like to advertise openly.

Statistically, Google's Self-Driving Car would've lost its license by now, if not for human drivers keeping it in check.


Spread that over "the equivalent of 75 years of typical U.S. adult driving" (1 million+ miles) and the alpha version of this software seems almost on par with the average driver, who is expected to file an insurance claim every 17.9 years (rough arithmetic sketched below).

PS: How often do you think the average driver's ed teacher uses their brake?
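
Back-of-envelope on that comparison, taking the 75-year equivalence and the 17.9-year claim interval quoted above at face value (a rough sketch, not re-verified figures):

    # Rough sanity check, assuming the figures quoted in the comment are accurate.
    equivalent_years = 75        # Google's "75 years of typical U.S. adult driving"
    years_per_claim = 17.9       # average driver expected to file an insurance claim every 17.9 years

    expected_claims = equivalent_years / years_per_claim
    print(f"A comparable human driver would be expected to file ~{expected_claims:.1f} claims")
    # ~4.2 claims over the same mileage, which is the baseline the comparison rests on.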


I'd like to see some analysis of the predicted severity of the accidents for self driving cars.

A minor fender bender at low speed vs. losing control and going over a guard rail at 80 mph are both "accidents" - but have entirely different consequences.

I have a hunch that while human drivers are still better, when they do have accidents, a larger percentage of those tend to be severe or fatal.


Your statistic is incorrect. The incidents described took place over a much smaller number of miles - only about 14 months of the Self-Driving Car program, I believe, not the sum total of its driving to date. (I said 12 above, but I think it's actually 14 months.)

424,331 miles, according to the report. http://www.consumerwatchdog.org/resources/cadmvdisengagerepo...

Nor are you counting all of the times the driving software handed control back to the driver. Consider each one of those equivalent to your human driver falling asleep at the wheel.


"Google had operated its self-driving cars in autonomous mode for more than 1.3 million miles. Of those miles, 424,331 occurred on public roads in California during the period covered by this report" So, you where talking about a specific report not the overall system, got it.

"Our objective is not to minimize disengagements; rather, it is to gather, while operating safely, as much data as possible to enable us to improve our self-driving system. Therefore, we set disengagement thresholds conservatively, and each is carefully recorded."

That's a long way from falling asleep at the wheel. That's more like: we don't want to put people at risk, so we are going to hand off control for reasons like "Disengage for a recklessly behaving agent", etc.


As you correctly point out, disengagements may not be the equivalent of falling asleep: they might simply be the equivalent of the driving instructor adjudging the learner at the wheel to be looking a bit too nervous, or just not ready for the junction that's coming up yet. And yes, its assumptions on the type of "software discrepancy" and "unwanted maneuver" that prompt human input are conservative by design.

But the report actually goes so far as to point out that on at least 13 of those occasions Google believes that without human intervention a crash would have occurred. It also implies that only 3 of those situations were created by poor driving on the part of another human. That's a definite crash due to Google car error averted every ~40k miles, even though humans already take over whenever there's a software warning, a spot of rain or something else that they think might stretch its capabilities too much.

The average US driver has 1 accident per 165,000 miles (which, like the unusual tendency for the Google car to get bumped when stopping at junctions, may or may not be their fault) and that's including DUI as well as people driving in slightly more difficult conditions than sunny suburban California.
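
For what it's worth, here is that arithmetic spelled out, taking the report's 13 averted crashes, the 3 attributed to other drivers, and the 424,331 public-road miles at face value:

    # Crash-rate comparison implied above; all inputs are the thread's figures, not re-verified.
    reported_miles = 424_331      # autonomous miles on California public roads in the report period
    crashes_averted = 13          # disengagements Google judged would otherwise have led to a crash
    caused_by_others = 3          # of those, attributed to another (human) driver's error

    google_at_fault = crashes_averted - caused_by_others
    miles_per_averted_crash = reported_miles / google_at_fault   # ~42,000 miles
    miles_per_human_accident = 165_000                           # average US driver, per above

    print(f"One averted at-fault crash per ~{miles_per_averted_crash:,.0f} miles, "
          f"vs. one accident per {miles_per_human_accident:,} miles for the average driver")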


> The average US driver has 1 accident per 165,000 miles (which, like the unusual tendency for the Google car to get bumped when stopping at junctions, may or may not be their fault) and that's including DUI as well as people driving in slightly more difficult conditions than sunny suburban California.

The average driver doesn't do all city miles and that's why Google's cars have gotten bumped. Highway driving is much easier to automate (several cars already do this today) and makes up a large percentage of total miles driven.


Disengagements while on public roads are the only representative metric. The real world is more challenging than whatever artificial testing environment Google has on their private roads.


"424,331 miles, according to the report"

That is a useful data point. Extremely optimistic bystanders think self-driving cars will lower death rates to zero.

Statistically, about 7 people die on the public roads per billion passenger miles. At half a million miles, assuming the miles are statistically random and indistinguishable from the entire country (sure, summer in SoCal is as hard as driving through a winter blizzard at night in Montana, sure...), 424,331 death-free miles mean that the technology's death record is no worse than 336 times worse than human drivers, at least so far. Perhaps it's only 335 times more deadly than human drivers. Even drunk people aren't that dangerous.
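
To make that bound explicit (a quick sketch using the rates above, nothing more rigorous):

    # Upper bound on relative fatality rate, given zero deaths over the reported mileage.
    deaths_per_billion_miles = 7
    miles_per_human_death = 1e9 / deaths_per_billion_miles   # ~143 million miles between fatalities
    death_free_miles = 424_331                               # Google's reported public-road miles

    bound = miles_per_human_death / death_free_miles
    print(f"Zero deaths so far only shows the cars are at most ~{bound:.1f}x as deadly as human drivers")
    # ~336.7, matching the "336 times worse" figure above; far too little data to draw strong conclusions.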

Or in other words, the self-driving car story amounts to lots of predictions based on very little data.



