
Technically, if self-driving cars are safer than human drivers (even if they're not perfect), that should be good enough. But my lizard brain tells me that I'm putting my life in the hands of a machine that potentially has bugs, and that's a little scary.

Most of us are going to expect nothing short of perfection from these machines to really trust them.




The same thing happened with elevators. They used to have operators who would physically make the elevator move, and when these newfangled "magic" automatic ones came in, people freaked out, then got used to it, and now no one cares.

I get the feeling it's the same way with cars, just an order of magnitude bigger a change, since they're such a big part of our lives.


But that also provides evidence of the parent's point that "Most of us are going to expect nothing short of perfection from these machines to really trust them."

We absolutely wouldn't be OK with elevators that now and then fail in a dangerous way. That doesn't mean it never happens, but it's considered to be someone's fault when it does.


Elevators do fail occasionally. There are several videos of elevators moving before the door has finished closing, or stopping misaligned with a floor. It's just a very unusual occurrence.


Yeah, when I was a teenager I was hit in the head by a closing elevator door. That hurt like all hell.


However, the "search space" of elevator maneuvering is much, much smaller than that of driving a car.


Technically, a computer is already controlling modern cars, just not steering them.

Similarly, any modern passenger jet is doing the same, and steering too. It can even land without the pilot's help.

We have already passed the torch to the machines, and self-driving cars are just the next logical step, I guess.


Exactly this. When I was trying to get my pilot's license for small aircraft, I wasn't afraid of flying at all. But my lizard brain tells me that when I fly commercial I should be terrified because I'm not in control. Even though the commercial pilot is statistically orders of magnitude safer than me driving, and considerably safer than me flying.


Agreed. When I explain why autonomous cars are always going to be better than humans, I use this example:

When you drive, you can only look at one thing at a time. If you glance at the rear-view mirror, you're no longer processing the front. If you look at a side mirror, you're no longer looking at the front or back. And that's not counting blind spots. But imagine if you could see all around the car, all the time, and process all of it with the same level of importance.

That's what self-driving cars can do.

People I speak to are usually receptive to that, but there's still the lizard-brain aspect, one even I fall victim to. ;)
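
To make the asymmetry concrete, here's a toy sketch (all names and numbers are invented; this resembles no real perception stack): a human attends to one view at a time, while the car fuses every view on every tick.

    # Toy illustration of the "attention" argument above.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        view: str          # which direction the sensor faces
        distance_m: float  # distance to the detected object

    def human_perception(views, focus):
        # A human only processes whatever they're currently looking at.
        return views[focus]

    def car_perception(views):
        # The car processes every view, every tick, with equal weight.
        return [d for detections in views.values() for d in detections]

    views = {
        "front": [Detection("front", 40.0)],
        "rear":  [Detection("rear", 8.0)],   # tailgater
        "left":  [Detection("left", 2.5)],   # car sitting in the blind spot
    }

    print(human_perception(views, "front"))  # misses the blind-spot car
    print(car_perception(views))             # sees all three at once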


There is an ethics/philosophy thought experiment (whose name escapes me) that goes a little something like this (modified for the AI example):

Imagine that in the world as it stands today, the accidental death rate from auto crashes is 100,000 per year; we'll call that Earth One. Now imagine a world in which AI reduces that death rate to 20,000 per year; we'll call that Earth Two. Given that these are two different worlds, and the types of accidents that human and AI drivers get into are likely to be different, there is likely a large number of people who die on Earth Two who would still be alive on Earth One.

In other words, if AI drivers become the norm, there is some subset of people who are going to die who would still be alive if AI drivers had not become the norm.

Luckily, we don't live in counterfactual worlds like that, or have knowledge of other timelines, so we're spared from knowing that this would be the case.
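
Out of curiosity, here's a back-of-the-envelope simulation of the overlap, under the (unrealistic) assumption that each world's victims are independent draws from the same population. Only the 100,000 / 20,000 figures come from the example above; everything else is made up.

    import random

    POPULATION = 300_000_000
    EARTH_ONE_DEATHS = 100_000  # human drivers
    EARTH_TWO_DEATHS = 20_000   # AI drivers

    random.seed(0)
    earth_one = set(random.sample(range(POPULATION), EARTH_ONE_DEATHS))
    earth_two = set(random.sample(range(POPULATION), EARTH_TWO_DEATHS))

    # People who die on Earth Two but would still be alive on Earth One:
    only_in_two = earth_two - earth_one
    print(f"{len(only_in_two):,} of Earth Two's {EARTH_TWO_DEATHS:,} victims "
          "would still be alive on Earth One")

Under those assumptions the expected overlap is only about 7 people (20,000 x 100,000 / 300,000,000), i.e. essentially all of Earth Two's victims would have lived on Earth One -- which is exactly the uncomfortable subset the thought experiment points at.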


> Most of us are going to expect nothing short of perfection from these machines to really trust them.

I hope not. I'd like them as safe as possible, but I already expect that they'll be better than human drivers, and that ought to be sufficient to allow them.

However, there's another issue in play here: in addition to the possibility of holding self-driving cars to a higher standard than humans, people like the feeling of control, and will feel "better" about an accident where they see someone to blame.


And yet we routinely trust other people to drive us around, when we know that they're imperfect.


Maybe you do. I mostly spend every moment in terror when someone else is driving.


This is funny. I feel the same way.

It doesn't mean they are bad drivers, but not being in control gives you a very different view of the situation.

I guess since you don't know their "internal state", you don't know how recklessly they are actually driving.


Human drivers are better at judging the behavior of other human drivers than autonomous cars currently seem to be. We can tell when a bus is about to pull out, or not, from various cues (engine noise, exhaust, the sound of hydraulic brakes, boarding passengers, signal lights, the presence of traffic behind us, etc.), which the Google car was apparently blind to.


If you watch closely, you can almost always tell when a car is about to change lanes, even before they activate their turn signal (if they bother to at all) or leave their lane. I'd love to see automated cars process stuff like this (small behavioral cues from human drivers)... maybe they already do? I'm guessing they're not just relying on turn signals to predict intentions :)
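
For what it's worth, here's a toy sketch of what "processing small behavioral cues" could look like. The cues, weights, and thresholds are all invented for illustration; a real system would learn something like this from data rather than hand-tune it.

    from dataclasses import dataclass

    @dataclass
    class NeighborCar:
        lateral_drift_mps: float  # sideways speed within the lane
        closing_on_leader: bool   # catching up to a slower car ahead
        gap_in_next_lane: bool    # room to merge into
        turn_signal_on: bool

    def lane_change_score(c: NeighborCar) -> float:
        # Crude weighted sum of cues, clamped to 0..1.
        score = 0.5 * min(abs(c.lateral_drift_mps) / 0.5, 1.0)
        score += 0.2 if c.closing_on_leader else 0.0
        score += 0.1 if c.gap_in_next_lane else 0.0
        score += 0.2 if c.turn_signal_on else 0.0
        return score  # note the turn signal is just one cue among several

    drifting = NeighborCar(0.4, True, True, False)
    print(f"lane-change likelihood ~ {lane_change_score(drifting):.2f}")  # 0.70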


The driver was also blind to this, according to the article.


The Google car apparently had a driver who missed it too.


Would they have missed it if they had been driving themselves, though, or did they trust the Google car more than their own instincts?


I think it's a matter of framing. You put your life in the hands of a machine all the time, even when you are driving yourself -- software and mechanical engineering conspire to consistently connect your movements on the steering wheel and brakes to the road, as well as to deploy safety measures in the event of an emergency. These things fail in many different ways... if you weren't aware that the automobile you currently use has life-threatening bugs, take a visit to the NHTSA complaint/recall databases: http://www-odi.nhtsa.dot.gov/downloads/

Robot driving is just one more layer on top of that. And frankly, not one that seems substantially less safe than autopilot on planes, given how unreliable we ourselves are when it comes to driving. But sure, the emotional impact of hearing about a self-driving car malfunction is always going to be stronger -- in the man-bites-dog way -- than the daily fatal accidents that happen to other people, which we filter out.


The issue for me is drive-by-wire. I'm cool with a computer trying to steer as long as it gives up when I try to fight it. I also preferred the cruise control where you could feel the pedal moving under your foot, because that was the master control tied to the carburetor or sensor that managed fuel.

I still think that self-driving stuff should be more enhanced cruise control and less "there is no steering wheel or controls".


The problem with this is the limits of human attention. It's hard enough to maintain focus on long drives as it is; if the "enhanced cruise control" takes over the job entirely, the driver will have nothing to do and is likely to stop paying attention to the road at all. Then he'll either miss his chance to take manual control, or do so in a state of panic.

Google has been making arguments along these lines -- for instance: http://gizmodo.com/why-self-driving-cars-really-shouldnt-eve...


I suspect things may start out this way, but I wouldn't be surprised if our laziness rapidly overcomes our fear.


>Most of us are going to expect nothing short of perfection from these machines to really trust them.

Now, expecting perfection, and especially giving up "better than what we have" in exchange for perfection, is a bad thing. But I do think it's completely reasonable to expect self-driving cars to be way better than the average human, even better than the best humans, just because we lose so many people to auto accidents every year.

I think it's completely reasonable to want to reduce the risks associated with transport, and I think the only politically possible way to do this is with self-driving vehicles, because I don't think it's politically possible to remove the worst half of drivers from the road without offering them similar levels of mobility.


If they are as safe as elevators, I'm fine with it. I don't expect perfection.


It's going to be similar to the software running on airplanes: you expect it to be essentially bug-free (airplane software is held to a much higher standard) since you are trusting it with your life.


Except the AI code inside self-driving cars isn't crafted by hand; it's a matter of training data as much as code. How can you determine that training data is "bug free"? It barely makes any sense.
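
The closest analogue to "bug free" is probably statistical: you can't review training data line by line the way you review source, but you can gate the trained model on a held-out scenario suite before release. A minimal sketch of that idea (all names, scenarios, and thresholds here are invented for illustration):

    from typing import Callable

    Scenario = dict  # e.g. {"id": "bus-pullout-01", "expected": "yield"}
    Model = Callable[[Scenario], str]

    def release_gate(model: Model, scenarios: list[Scenario],
                     required_pass_rate: float = 0.9999) -> bool:
        # Count scenarios where the model's decision matches the expected one.
        passed = sum(model(s) == s["expected"] for s in scenarios)
        return passed / len(scenarios) >= required_pass_rate

    # A trivially wrong "model" fails the gate:
    always_go: Model = lambda s: "go"
    suite = [{"id": "bus-pullout-01", "expected": "yield"},
             {"id": "green-light-01", "expected": "go"}]
    print(release_gate(always_go, suite))  # False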


> But my lizard brain tells me that I'm putting my life in the hands of a machine that potentially has bugs, and that's a little scary.

> Most of us are going to expect nothing short of perfection from these machines to really trust them.

I am not sure if this is satire or obliviousness.


And yet we're happy to let them fly our planes.



