
I've been beating this drum for years (but I'm a nobody, so it's like pissing in the wind):

1. Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness. When the need for manual intervention happens, it will be at exactly the moments where you need maximum awareness and split-second reflexes.

2. There are SO many edge cases and never-seen-before situations that happen when driving "at scale" that the automation features will fail unexpectedly and in strange ways.

3. G and Cruise might be exceptions, but most of the companies in this space are cowboys with reckless disregard for public safety and terrible "iterate quickly" coding practices.

4. At some point there will be an accident that kills a photogenic "middle America" person or people, and at that point the government will crush this industry with regulation, backed financially by automakers, the UAW, and others who benefit from the status quo.

The only way 100% fully self driving cars will ever happen is for the infrastructure itself to be built to accommodate them. Mixing regular cars, parking, trucks, bicycles, scooters, pedestrians, dog walkers, hoverboards, etc all together on the same roads ensures that the problem is unsolvable.



>Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness.

Something I worry about is that if self-driving became normal, people would never get the thousands of hours of experience driving a car in countless situations that are needed to develop good judgement, much less quick reflexes. So when a rare situation arises where they need to take over, they won't be able to do it well.


We've already seen this in aviation, actually. AF447 is a good example of a crew flying a working plane into the ocean because of reliance on automation and a lack of hand-flying experience.

There's a really great YouTube video called "Children of the Magenta", part of a lecture the chief training pilot for AA gave about 20 years ago as part of their continuing education. He goes over incidents and situations, and the essential conclusion is that pilots have become too used to turning dials and flipping switches when, in many situations, they need to just take control and fly the plane.


> 1. Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness. When the need for manual intervention happens, it will be at exactly the moments where you need maximum awareness and split-second reflexes.

Airline pilots already face this -- it's hard to stay engaged in flying when the plane can fly itself. By the time something bad happens and the plane gives up and hands control back to the pilot, the pilot lacks the full situational awareness he would have had if he had been flying the whole time.


It's too expensive to build the infrastructure to accommodate them--current infrastructure spending isn't even enough to eliminate potholes and other bad pavement conditions.


> Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness.

With enough reliable bandwidth, a decent workaround is to have the system fall back to human control by someone other than the local driver. I'm imagining a Car Traffic Control Center where your onboard robot driver sees a situation it doesn't understand and throws control to a remote driver wearing a VR rig with your car's video feeds as input. The remote human driver assesses the situation, steers you carefully past the weird obstacle/issue, then returns control to your robot.

A system where robots drive automatically, say, 95% of the time while human remote drivers handle special cases 5% of the time still seems like a big improvement over the status quo - there's a market for that.
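
To make that concrete, here's a toy sketch of the kind of handoff loop I'm imagining. Every class, name, and threshold in it (HandoffController, RemoteSession, the 0.8 confidence floor) is invented purely for illustration; it's not any shipping system's API:

    # Toy sketch of the robot -> remote-driver handoff described above.
    # All classes, names, and thresholds are hypothetical.
    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional


    class Mode(Enum):
        AUTONOMOUS = auto()     # onboard robot is driving
        REMOTE_HUMAN = auto()   # remote operator is driving via the car's video feeds


    @dataclass
    class Scene:
        confidence: float       # how well the onboard planner understands the scene, 0..1
        description: str = ""


    @dataclass
    class RemoteSession:
        """Stands in for a remote driver on a VR rig watching the car's cameras."""
        active: bool = True

        def commands(self) -> str:
            return "remote driver steering"

        def release(self) -> None:
            self.active = False


    @dataclass
    class HandoffController:
        confidence_floor: float = 0.8           # below this, ask a human for help
        mode: Mode = Mode.AUTONOMOUS
        session: Optional[RemoteSession] = None

        def step(self, scene: Scene) -> str:
            if self.mode is Mode.AUTONOMOUS:
                if scene.confidence >= self.confidence_floor:
                    return "onboard planner driving"
                # The robot doesn't understand the situation: hand off rather than guess.
                self.session = RemoteSession()
                self.mode = Mode.REMOTE_HUMAN
            if self.session and not self.session.active:
                # The remote driver has steered past the weird bit and given control back.
                self.mode = Mode.AUTONOMOUS
                self.session = None
                return "onboard planner driving"
            return self.session.commands()


    if __name__ == "__main__":
        car = HandoffController()
        print(car.step(Scene(0.95)))                      # normal driving
        print(car.step(Scene(0.30, "cow on the road")))   # throws to the remote center
        car.session.release()                             # remote driver hands back
        print(car.step(Scene(0.95)))                      # autonomous again

The hard parts, of course, are everything this hides: the bandwidth, the latency, and deciding what "confidence" even means.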


> throws control to a remote driver

Throwing a remote driver into a dangerous situation with no context sounds like a terrible solution to me. See also: Capt. Dubois on AF447. Doing that deliberately and repeatedly just multiplies the chances of catastrophic error.

And the VR driver job would be so stressful that there may not be many takers. Who would take responsibility if they made a bad call and caused a crash?


I can possibly see an OnStar-like backup VR driver role at some point, when there's full self-driving and there needs to be some sort of backup of last resort for when a car with no human in it freaks out. (Or if there's just a child aboard, etc.)

But there has to be an assumption that this is a rare event and that it takes place in a context where a VR driver has time to establish some situational awareness. (Oh, and in a lot of situations there is no "just pull over" option. I've gotten into some bad weather situations, and there are often only sub-optimal choices at that point. Pulling over can itself be dangerous, or may not even be possible.)


The solution would need some massive breakthroughs in reducing latency... and time travel. "We need a decision within 6 seconds... 4... 2... hey human, watch the inevitable crash!" (This has already happened, btw.)


> Throwing a remote driver into a dangerous situation with no context sounds like a terrible solution to me.

We might be thinking of different situations. I'm mostly imagining a car or truck that does great on the freeway but poorly on surface streets or poorly on particular KINDS of surface streets or even particular KINDS of weather...and we KNOW this and can recognize those situations. The remote driver typically jumps in BEFORE the part that is actually dangerous.

This isn't a new problem - consider a big ship that delegates harbor navigation decisions to a harbormaster and/or tugboat, or a big plane that delegates final runway approach decisions or parking at the gate decisions to a control tower and/or local guy driving a tow vehicle or waving directional flags. You could slice the world up into "regions we can reliably navigate without help" versus "regions where we still need a little help", with the latter group shrinking over time as technology advances and maps get better and edge cases are better handled.

The initial product offering might be for long-haul truckers - the truck drives itself for hours on separated freeways and then throws to a handler when it needs to navigate unfamiliar local surface streets for a delivery. But once you've GOT that sort of infrastructure - basically a map with geofenced areas where remote drivers step in - it's a logical next step to make the help areas dynamic and mark slowdowns or detours due to an accident or a landslide or a cow on the road for similar handling.
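
As a sketch of what those geofenced help areas could look like in code (the zones, coordinates, and function names below are entirely made up for illustration):

    # Toy sketch of geofenced "help areas" where a remote driver takes over.
    # Coordinates, zones, and names are made up for illustration.
    from typing import List, Tuple

    Point = Tuple[float, float]  # (x, y) in some arbitrary map projection


    def point_in_polygon(p: Point, polygon: List[Point]) -> bool:
        """Standard ray-casting test: is point p inside the polygon?"""
        x, y = p
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside


    def needs_remote_driver(p: Point, help_zones: List[List[Point]]) -> bool:
        """True if this point falls inside any zone where a human should take over."""
        return any(point_in_polygon(p, zone) for zone in help_zones)


    # A delivery route: mostly freeway, with the last mile inside a help zone.
    help_zones = [[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]]  # made-up surface-street area
    route = [(5.0, 5.0), (3.0, 2.0), (0.5, 0.5)]

    for waypoint in route:
        who = "remote human" if needs_remote_driver(waypoint, help_zones) else "onboard robot"
        print(f"{waypoint}: {who} drives this segment")

Making the zones dynamic is then just a data update: push a new polygon for the landslide or the cow on the road, and the same check routes those segments to a human.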

95% of that job would not be stressful. I'd be more worried about it being boring...but then, so is normal in-person driving.


> 4. At some point there will be an accident that kills a photogenic "middle America" person or people

I'm ready to guess what it looks like: a car on autopilot is going through a residential neighborhood, a playground ball bounces into the street from between two parked cars, and the car does not brake in anticipation of the 6-year-old who is not yet visible.


I have some confidence we'll get there on limited-access highways, because there are a lot fewer of the things you describe there. That doesn't really help the people who want to be driven around everywhere, but it would actually be a really nice feature for the majority of people who own and drive cars.


To your first point, I'm not convinced that's true. People augment their lives in lots of ways that don't seem to reduce safety. A few examples off the top of my head: simple "dumb" cruise control hasn't led to more accidents. Parachutes have auto-deploy features if the cord isn't pulled by a certain height. Scuba divers use dive computers that basically eliminate the need to learn dive tables (and beep at you when you're doing something dumb). Apparently passenger jets are highly automated (I'm out of my depth on that one). These are all points on the spectrum towards automation, and they have only been helpful. Do you think the problem occurs as you approach 100%? Like an uncanny valley in the 99 to 99.99% range?


The thing with other activities (diving and flying are great examples) is that when a problem occurs, you generally have minutes to analyze what's happening and decide on a solution. If my dive computer goes on the fritz, I can decide to immediately start an ascent, or go off a physical dive table, or make an extra safety stop at, say, 20 feet just to be sure.

When you're doing 45 through a curve on PCH, a sudden fog bank obscures your cameras and LIDAR, and the computer says "your controls, good luck!", you have maybe 2 seconds to react, if you're lucky. It might be a lot less.
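
Back-of-the-envelope, with my own illustrative numbers just to size the problem:

    # Rough distance covered during a panicked handoff; numbers are illustrative.
    speed_mph = 45
    speed_m_per_s = speed_mph * 1609.344 / 3600   # ~20.1 m/s
    reaction_window_s = 2.0                       # optimistic, per the scenario above
    distance_m = speed_m_per_s * reaction_window_s
    print(f"{speed_m_per_s:.1f} m/s for {reaction_window_s:.0f} s = {distance_m:.0f} m before a human even starts steering")

That's roughly 40 metres of blind curve gone before the "driver" has even figured out which way the road bends.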

Humans make really dumb decisions sometimes, but we are also outstandingly capable of reacting to novelty.


Not OP, but to your last question: I think there's proven danger in the "too familiar, too easy" zone. Most car accidents, for instance, tend to happen in places you know pretty well, which is why you get surprised when something out of the ordinary happens.

Whether it's an illusion of safety or a letdown of attention, the general idea is that humans should never trust that things will go well when there's a real probability that they won't. I think the issue isn't the amount of automation, as you explained well, but rather how well the system focuses users on the critical parts they should watch out for. There, automation clearly helps: it removes the unimportant from the equation and makes us more responsive and more accurate on the important parts. But that takes a lot of great UX, and that's a field where e.g. the military is usually great while commercial companies are abysmal whenever they can get away with it (read: sell enough to justify not spending a dime on more quality). That's worrying when safety is involved, but absent regulation (forced ethics, ha!) it hasn't been treated as a moral or ethical problem by most industries, so... I think OP's concern is valid.

As for passenger jets, the Airbus A320 (late 1980s) was the first commercial airliner with full digital fly-by-wire; all the systems (manoeuvring, thrust control, etc.) were electrically signalled[1], which allowed the computer to integrate and manage it all. :)

It was tested a number of times by pilots for fun, going from takeoff to landing entirely on autopilot. Of course they're standing right there, ready to take over if anything goes wrong, but I've seen it first hand many times. We're talking commercial flights with passengers; it's 100% safe and actually quite "smooth" because the computer is so accurate.

Honestly, the problem is much, much, much easier for planes: with a good GPS it becomes a fairly closed problem, and obviously 100% of autopiloted planes are simultaneously piloted by real humans... ready to take over. Yet a plane could technically land itself just fine if the pilots were incapacitated; it really could, and I suspect it has happened more often than we know, for many reasons. And when flying by instruments (meaning you can't see s__t), an autopilot is basically just a computer reading the same data and doing what a human would do, only faster (and maybe cross-checking with physical/manual instruments; besides, an autopilot doing the grunt work of stick-holding incidentally gives you more time to double- and triple-check everything).

_____

[1]: Note that all systems are also doubled (even tripled) with mechanical (hydraulic, etc.) failovers, because obviously you can lose electricity in catastrophic situations. That's why it always seemed crazy to me that a plane would require software to fly properly instead of plain old physics and mechanics.


> The only way 100% fully self driving cars will ever happen is for the infrastructure itself to be built to accommodate them.

I think we'll see 100% automation for freeways within a short time. But the only way we'll see 100% automation within 50 years for an arbitrary point A to point B that a standard human could "safely" drive is if we get flying transport.


I agree with you. I think freeway travel can be 99% automated in the very near future, and I also think that's where the MAJOR wins will come from. Long-haul trucking can flow 24/7 at that point, with the drivers keeping their jobs and doing last-mile delivery.


How do you define 'near future'? If I want to reliably autopilot any significant distance, I do it by camping in the middle lane. Even then, AP isn't any better now than it was a year ago: it still ping-pongs, turns late, cuts off merging traffic, etc. The freeway certainly seems like a good first candidate for automation, but I don't feel like we're anywhere near 99% automation of it.



