Seems like Google acknowledged this:

"From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles"




I will probably get downvoted for this, but this is why I am skeptical about self-driving vehicles. Driving in general, and adjusting to each country, locality, and vehicle type, is such a human thing that you'd need a thousand more "deep understanding" fixes like this.

It will be an endless process for an algorithm-driven car. Buses in Dublin will be so different from buses in London, and trucks in the US different from trucks in Germany. Then there are the bikes and pedestrians, where human drivers, bikers, and pedestrians look at each other and know what to expect. These are things that would be incredibly difficult to formalize, if it's possible at all. Unless you have something like DeepMind on board, but even then I wouldn't be so sure.


Humans will also adapt. If there are local driving quirks that the automated cars don't follow, then after watching the automated cars fail to follow them a few times, people will account for the unexpected behavior.

This is already an issue humans solve in the context of tourists driving through town, who aren't accustomed to local rules like "First driver at an intersection gets to make a left on green instead of yielding to the column of oncoming straight traffic."


I utterly despise that local rule, even as someone who grew up with it.


If you hate that one, I hope you've never encountered the one that appears to operate around the D.C. beltway / suburb area:

Signalling intent to change lanes indicates that you are weak; people will actively cut you off.


There can be only one!..person on time to work and it's going to be me


Wouldn't this be a great application for machine learning? Collect traffic data locally, run it through the system a few bazillion times, then you realize the only way to avoid a car crash is to drive like a maniac in certain places.
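
The simplest version of "collect traffic data locally" would just be estimating a per-locality yield prior by vehicle type from logged interactions instead of hard-coding it. A minimal sketch of that idea (every record and number below is made up, and this is nothing like whatever Google actually runs):

    from collections import defaultdict

    # Hypothetical interaction log: (city, vehicle_type, other_vehicle_yielded)
    log = [
        ("mountain_view", "bus", False),
        ("mountain_view", "bus", False),
        ("mountain_view", "car", True),
        ("austin", "bus", True),
        ("austin", "car", True),
    ]

    # Count yields vs. total encounters per (city, vehicle_type).
    counts = defaultdict(lambda: [0, 0])
    for city, vtype, yielded in log:
        counts[(city, vtype)][0] += int(yielded)
        counts[(city, vtype)][1] += 1

    # A locally learned prior the planner could consult instead of a global rule.
    for (city, vtype), (yes, total) in sorted(counts.items()):
        print(city, vtype, "yield rate:", yes / total)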


Google already does this. The cars have driven nearly as many miles in simulation as they have on the road by now. And every time there's a near accident, or one of the various accidents where other vehicles were at fault, they feed in the data and run simulations on how the car could have avoided the accident.

Even if they're not perfect, they're already better than the median driver in the majority of daily driving. They don't need to be perfect, just better than most, and you're already drastically reducing deaths: below-median drivers cause more accidents than above-median drivers.

Most accidents come down to awareness and reaction (both reaction speed and the ability to evaluate the right course of action). These are two things that a sufficiently advanced computer system will always be better at. A human gets distracted, doesn't have 360-degree constant vision, doesn't have thermal imaging or the ability to range-find obstacles through fog. A human can't use perfect situational awareness of obstacles and road conditions to avoid an accident.

Anyway, the plural of anecdote isn't data. Google is in the data business and has a firm grasp of what they're attempting. I doubt they're going to be proven wrong.


> They don't need to be perfect, just better than most and you're drastically reducing deaths.

From a utilitarian perspective, they would only need to be better than whoever they are replacing, one driver at a time.

From a human-feely point of view, they'll need to be massively better than the best human to stand a chance of adoption.

My guess about AI: for human-suitable tasks, the gap between a typical human and the best human is actually not all that great compared to the difficulty of getting to human-level skill in the first place. So by waiting until it surpasses the best human, we don't actually lose much time.


The problem is that ANNs (I'm assuming you're thinking of them) are not deterministic, so I wonder whether there may be issues regarding the legality of using a system with an uncertain outcome.


As far as I know, they mix different approaches.

Artificial Neural Networks are actually totally deterministic by default (even if training them is not). What they lack is a way for people to explain in simple terms what the ANN is doing.
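
To make the distinction concrete, here's a minimal sketch (toy network, made-up weights, numpy only): once the weights are fixed, the same input always produces exactly the same output; the randomness lives in training (initialization, data shuffling), not in inference.

    import numpy as np

    # Toy 2-layer network with fixed, "already trained" weights.
    # All values are made up for illustration.
    W1 = np.array([[0.5, -0.2], [0.1, 0.9]])
    b1 = np.array([0.0, 0.1])
    W2 = np.array([[1.0], [-1.5]])
    b2 = np.array([0.05])

    def forward(x):
        h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
        return h @ W2 + b2                # linear output

    x = np.array([0.3, 0.7])
    assert np.array_equal(forward(x), forward(x))  # identical every run: inference is deterministic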


I don't think ML can solve the "human" part, i.e., as I said, drivers, bikers, and pedestrians looking at each other, knowing what to expect, and silently agreeing on what to do next, all happening in an instant. This is a crucial component of driving, especially in cities with dense traffic and oftentimes unclear markings and signs, where the human factor becomes important.


[The pedestrian was] giving the awkward body language that he was planning on jaywalking. This was a very human interaction: the car was waiting for a further visual cue from the pedestrian to either stop or go, and the pedestrian waiting for a cue from the car. http://theoatmeal.com/blog/google_self_driving_car



