
>While planes have become so, so, so much safer because of all this automation, pilots' uncertainty regarding autopilot functioning is a major concern nowadays, and the reason for several accidents.

If the percentage of accidents caused by pilots interfering with a working system exceeds the percentage caused by a malfunctioning system that the pilot failed to override, people will still oppose taking the ability to interfere away from pilots. Even when it causes more accidents...

There's something about humans trusting humans more than machines that I don't fully understand. Systems can make mistakes, but the number of human mistakes is often so much greater that it's absurd humans are trusted at all, and yet people will side with the human over the machine.

Humans will always want human oversight - even when that oversight does more harm than good once automation reaches a certain threshold...

Special note: I'm not well versed in avionics or the data on pilot interference with the system vs. pilot failure to intervene. So maybe this example doesn't hold very well for avionics...




A human can make judgement calls in unexpected situations. Objectively, a human is more likely to see falling debris and react to it than a self-driving car (of course depending on the programming and sensors).

Or when the road is slick and you are doing 5 under, and you see someone pass you doing 5 over, with a car following too closely behind it... you have a better understanding that there's likely to be an accident ahead (I've seen this happen several times). A computer most likely wouldn't predict that.

Unless ALL cars are automated, it is a must that a driver be able to take over quickly.


Strongly disagree.

> A human can make judgement calls in unexpected situations.

A properly programmed machine can behave smarter and faster, and it also knows its limitations, so it can account for them. "Judgement calls" aren't needed because the machine maintains good judgement at all times.

> Objectively, a human is more likely to see falling debris and react to it than a self-driving car (of course depending on the programming and sensors).

Very much depending on programming and sensors, but from some point on, I'll always be betting on programming and sensors.

> Or when the road is slick and you are doing 5 under, and you see someone pass you doing 5 over, with a car following too closely to it... you have a better understanding that there's likely to be an accident ahead... (I've seen this happen several times). A computer wouldn't predict that most likely.

The computer will track all vehicles, their velocities, and their past observed movement history, perceive road conditions that you can't due to the limitations of Mark I Eyeballs, and push all that data through an inference system capable of staying rational the entire time, unaffected by "stress" or "surprise". Solutions will be computed literally faster than the blink of an eye. The machine will see the phase space of the road-cars system and be able to navigate it to safety.
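To make the tailgating scenario above concrete: even a crude tracker can flag the danger numerically. This is a minimal sketch, not any shipping autopilot's logic; the `Track` class and `time_to_collision` function are hypothetical, and it assumes point vehicles holding constant speed on a straight road.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Observed state of one vehicle (hypothetical, simplified)."""
    x: float  # position along the road, metres
    v: float  # velocity, metres per second

def time_to_collision(follower: Track, leader: Track) -> float:
    """Seconds until the follower reaches the leader, assuming both
    hold their current speed. Returns inf if the gap is growing."""
    gap = leader.x - follower.x
    closing_speed = follower.v - leader.v
    if closing_speed <= 0:
        return float("inf")  # not closing; no predicted collision
    return gap / closing_speed

# The car doing 5 over (30 m/s) is 20 m behind the one it's tailing (25 m/s):
# the closing speed is 5 m/s, so the predicted time to collision is 4 s.
ttc = time_to_collision(Track(x=0.0, v=30.0), Track(x=20.0, v=25.0))
```

A real system would layer probabilistic tracking and braking-distance models on top of this, and would recompute it for every pair of tracked vehicles many times a second, which is the point of the comment: the machine watches everything at once, continuously.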

> Unless ALL cars are automated, it is a must that a driver be able to take over quickly.

The driver should never take over. A dog should be placed in the car, trained to bite the driver if he tries to touch the controls. Better yet, remove the controls.

The machine knows its limitations. The right way to avoid an accident is not to perform stunts based on intuition, it's to keep the entire system (of cars and the road) in a stable and known state, and navigate that state to safety.

People seem to get machines really wrong. Machines today are limited in their creativity. But they are orders of magnitude better than humans in getting things right. On the road, we need more of the latter than the former.


The thing that humans can do that machines cannot is react to something they never expected and haven't been programmed for. A human can generalise, or invent a solution to a new situation, in a way a computer simply cannot.

You are correct that in the vast majority of situations a computer will outperform a human. I've avoided accidents by luck more than judgement, guessing that the car will be able to do what I'm asking (whilst the autopilot would know at all times and react in milliseconds).

All the world needs to accept is that at some point someone will die because of an autopilot error, and that's OK because the net number of deaths is lower than with the same number of humans driving.


I have yet to understand how a self-driving car deals with a cop directing traffic. Or even a construction worker holding traffic back from a backing-up backhoe. But maybe I'm just not aware of the genius of the technology yet.


From my limited and slightly hopeful understanding, they don't try to understand much more than "something is moving toward our planned path and that is no good". As long as the car avoids running over the cop or the worker, it's OK. Every decision above that is optional.


They will (eventually have to) recognize hand signals from construction workers and police officers. The current generation won't run over the person, but you should probably take over and follow the instructions yourself.


Our best-of-the-best image recognition tech can't tell the difference between a zebra and a sofa in a zebra print. I think "reading hand signals of a policeman/workman on the road" is far, far beyond what we can currently do. Or rather, I'm sure we can make a solution that will work right now, in perfect conditions. I'm sure it will fail in the dark/rain/snow, or if the worker is making small gestures near his hip rather than moving his hands high up in the air. There's just so much uncertainty in driving that I think for cars to be absolutely 100% automated, where you can genuinely go to sleep while the car drives, the roads would need to be 100% automated as well, with beacons everywhere. That will probably happen, but it's definitely not "3 years away".


Zebra vs. zebra-patterned sofa is the kind of problem designed to be hard for image processing algorithms. From a practical point of view, however, you just have to require that policemen / construction workers wear specific patterns on their uniforms, or even special IR-reflective buttons/threads. That would solve something like 95% of cases, and by the time self-driving cars become the norm, we'll have figured out at least some specific algorithms for that very purpose that work in the general case.


They will have devices that communicate directly with the car's systems.


It's called image processing.



