
>> One fatality in 4 million miles is safe? Humans are 20 times better than that.

And this is considering the fatality rate for all human drivers, including those who are drunk, playing on their smartphone, sleepy, speeding, street racing, etc. I would not be surprised if those cases account for a very sizable majority of fatal car accidents. Now imagine you could reduce those numbers with relatively simple measures that don't involve AI: let the car enforce speed limits, require smartphone makers to ship a 'car mode' that cannot be shut off, mandate driver-attention monitoring, blind-spot indicators, automatic emergency braking, etc. Maybe even mandate an alcohol interlock, or at the very least a black box that records driving behavior so it can be analyzed in case of a crash, with severe penalties after the fact if the black box shows signs of drunk driving.

The list of things you can do to improve road safety without requiring AI cars is endless, yet billions are invested in trying to fully automate driving. This just shows that self-driving cars are not about safety, but purely about business opportunities.
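Quick sanity check on the quoted "20 times" figure (assuming NHTSA's commonly cited rate of roughly 1.2 fatalities per 100 million vehicle miles traveled in the US; treat the exact number as approximate):

    # Back-of-envelope check of the "20 times better" claim.
    # Assumed human rate: ~1.2 fatalities per 100 million vehicle
    # miles (approximate NHTSA figure, all drivers, all conditions).
    human_miles_per_fatality = 100e6 / 1.2  # ~83 million miles
    uber_miles_per_fatality = 4e6           # one fatality in ~4M miles

    ratio = human_miles_per_fatality / uber_miles_per_fatality
    print(f"humans: one fatality per ~{human_miles_per_fatality / 1e6:.0f}M miles")
    print(f"humans cover ~{ratio:.0f}x more miles per fatality")  # ~21x

So the quoted ratio is about right on the raw numbers, before even accounting for the biases discussed below.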




And conversely, those 4 million miles will (presumably) be heavily biased toward safe, better-than-average driving conditions.

As you've basically said, the human stats include all the very worst drivers across the full range of driving conditions, whereas the auto-cars not only get the very best driving conditions, but likely also benefit statistically from cases where the safety driver stepped in and improved the auto-car's score even further (since we can't count the would-be fatalities that human intervention successfully prevented).
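A toy illustration of that bias (the numbers are entirely hypothetical, just to show the mechanism): if safety drivers averted even two would-be fatal crashes in those 4 million miles, the recorded rate understates the unassisted rate by a factor of three.

    # Toy illustration (hypothetical numbers): uncounted human
    # interventions bias the recorded per-mile fatality rate downward.
    miles = 4e6
    recorded_fatalities = 1
    prevented_by_safety_driver = 2  # hypothetical averted crashes

    recorded = recorded_fatalities / miles
    unassisted = (recorded_fatalities + prevented_by_safety_driver) / miles
    print(f"recorded:   one per {1 / recorded / 1e6:.1f}M miles")    # 4.0M
    print(f"unassisted: one per {1 / unassisted / 1e6:.2f}M miles")  # 1.33M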


Yes, so far the statistics we have about the safety of AI cars are mostly meaningless for saying anything useful about how they would scale if deployed universally.
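To make "mostly meaningless" concrete: if you model fatalities as a Poisson process, a single observed event gives an enormously wide exact 95% confidence interval for the underlying rate. A sketch using the standard chi-square construction (scipy assumed):

    # Exact 95% Poisson confidence interval for k observed fatalities.
    from scipy.stats import chi2

    k, miles = 1, 4e6  # one fatality observed in ~4 million miles
    lower = chi2.ppf(0.025, 2 * k) / 2        # ~0.025 expected events
    upper = chi2.ppf(0.975, 2 * (k + 1)) / 2  # ~5.57 expected events

    print(f"true rate between one per {miles / upper / 1e6:.2f}M miles "
          f"and one per {miles / lower / 1e6:.0f}M miles")

That interval, roughly one fatality per 0.7 million miles up to one per 158 million miles, comfortably contains both "far worse than humans" and "far better than humans", so the single data point settles nothing.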

Another thing that bothers me is how often the argument 'human drivers are terrible, far worse than computers in situations A, B, C, etc.' is used to suggest that AI systems will improve road safety. I will be the first to admit that many people are terrible drivers: they make mistakes, don't pay enough attention, don't obey traffic laws, have bad reaction times, etc. No argument there. But so far I have never seen any statistics on how many of the accidents caused by these kinds of human failures actually lead to fatalities.

Taking the human out of the equation could in theory prevent all kinds of car accidents, but how many of those would have been mere fender-benders rather than road fatalities? It's not as if fatal road accidents are super easy to cause by mistake, or as if everyone knows one or more people who were involved in one. Who can prove that AI cars won't decrease the total number of accidents while increasing the number of fatal ones, like this one with the Uber car? Maybe human drivers, with all their faults, would have avoided this particular accident by driving more defensively or by using social cues to anticipate this woman crossing the street?

By the way, I'm not trying to make an argument here about the viability of AI cars that are safer than humans. I personally don't believe AI cars will ever be safer than humans overall, across all the situations people drive in, but that's a different discussion. What bugs me is that suddenly the whole world seems willing to bend over backwards to interpret statistics about the presumed safety of AI cars in the most favorable way possible, while ignoring all these relatively simple things that could actually improve safety right now, at much lower cost than developing fully autonomous cars.


Yeah, I have always felt that a lot of these easy explanations for crashes (texting, drinking alcohol, speeding) are of course statistically linked with crashes, but are also emphasized for psychological reasons -- we all want to feel like there's something straightforward we can do to avoid serious wrecks. If it's moralistic, even better: that means morally good people don't wreck, and wrecks happen when people do morally bad things.

There's not enough emphasis on the strategy and skill of driving, and I think that's for the same reason. We've tried so hard to make driving as basic as walking, something anyone should easily be able to do, and that doesn't fit well with a skill that takes a long time to master and that you can always learn a little more about.

So instead there's a narrative that all a driver has to do is obey some cut-and-dried rules -- stop at all stop signs, never drink and drive, always do exactly 5 mph over the speed limit (or is it 10? or 4? 7? 0?) -- and if they do all that, they can zone out and not worry about anything except what's right in front of the windshield.

No immediate source, but I'm pretty sure I read somewhere that the safety stats of the best and worst drivers differ by one or two orders of magnitude. And the above is why. The best drivers are playing chess; the worst drivers are playing checkers. The worst drivers look at the same board but ignore almost all the information, staying zombie-like in their lane until they need to brake to avoid something or their GPS says it's time to turn.

I'm really curious how self-driving software approaches all this. I have a sad suspicion it's programmed from a naive, rule-following, non-defensive point of view, but I would love to be proven wrong.


That's an interesting perspective too: it's not just about accidents caused by irresponsible behavior, but also about differences in how people mentally engage with the task of driving.

To me, these kinds of observations only reinforce the idea that an AI that is 'better than the average driver' does not necessarily mean fewer road fatalities, and that to improve driving safety it would be more effective to first take the most common accident factors out of the equation.


Yeah, in a nutshell, there is a huuuuge gap between two kinds of human drivers.

1) people who, like, you know, just drive, man, and like, as long as everyone else does the right thing, then like, i guess things will uh, like, work out dude

2) defensive drivers.

Obviously, self-driving cars need to be more like #2 if they're going to be as safe as an average human, let alone have a shot at actually beating humans on this... I suspect Waymo gets this, and Uber, well, is, uh, "moving fast and breaking things".



