I live in the area. The stinking buses around here ... They crowd the lanes regularly. We human drivers are used to it, I suppose, but it's a serious pain in the butt.
The buses will drive right next to the lane marker, with their mirrors hanging over into your lane. This means everybody has to creep over a little bit into the lane on the driver's side, and hopefully everybody in traffic has a small enough car to deal with it. Otherwise, you have to hang back behind the bus as if it's in your lane and wait for it to reach a stop.
I've had my issues with Google autonomous cars (they drive slow, and they used to be exceptionally slow at making right-hand turns, causing traffic problems), but in this instance I'm happy to throw VTA under the bus, if you will, and lay blame 100% at their feet.
From reading the above report, it seems it was officially only a one-lane road, but the road was big enough to handle two streams of traffic. The Google car could have just stayed in the middle of the road, but instead it was hugging the right side of the road in preparation for a right-hand turn. Due to sandbags next to a storm drain, the Google car had to "merge" back into the one-lane road to get around the sandbags. Considering it's still a one-lane road, the bus driver should have yielded to any car that was in front of it. I'd place a majority of the blame on the bus.
What I don't understand: Why doesn't the Google car have video of the accident? Or if they do, does anyone know if they will share a video of it?
I suspect Google will render one of those fancy "what the car sees" videos of this incident, like the ones they've used in previous marketing materials. Those fancy videos aren't rendered on the spot, though! We'll probably see it soon.
I doubt this is a big enough deal to be handled in open court. But I'm sure the data could be looked at by a third party, and the visualization could be vetted as a fair representation of the sensor data picked up by the car.
I would rather a bus any day. Easier to see, slower moving, less dangerous, fewer of them, more predictable, and mostly pretty courteous. Completely unlike car drivers.
When I was in London a few years ago I used to cycle to work, and the buses were the worst. Taxis would beep or gesture at you, but at least they gave you space.
One time I was stopped at a set of traffic lights waiting for them to turn green. It was on a tight bend and I was about 20cm from the line, and 50cm from the inside of the road. A bus came up and decided for whatever reason he had to get in front of me (even though there was a stop directly on the other side of the junction). There was enough space for a car, but definitely not for a bus as it was on a bend.
He slowly kept creeping forward, and the side of the bus kept getting closer and closer. I started banging on the window as he got next to me, but no, he kept going. I was far enough back that I was out of his blind spot. In the end I had to move onto the pavement.
I then walked to the front and banged on the door, gave him a few choice words and the driver just shrugged.
I see this sort of strong-arming happening in San Francisco pretty frequently. It all makes a lot more sense when you consider that the drivers know that even if they accidentally push it a little too far and kill someone, there won't be any serious repercussions [1] [2].
I'm not sure if much has changed, but I cycle daily through central London and buses are great. Usually give plenty of space and never try and run you over on left turns.
The worst in my experience are trucks, and even more so, pedestrians (almost every day I have to pre-emptively brake because someone walks into the road without looking).
I don't know about London specifically but with regard to cyclists I get the impression that bus and lorry drivers have become more careful these days for one reason: the growing number of cyclists with cameras mounted on their bikes or helmets, and in cities even more so due to the number of CCTV cameras that could also catch dangerous behaviour. Furthermore some buses in bad areas have their own cameras watching the road recording in case of dispute (hooked to the same recording equipment that records inside the bus to capture events where passengers attack each other or the driver).
The average bus and lorry driver always was pretty safe on the roads and courteous with it, of course. They haven't changed.
But there were a lot of idiots out there in all forms of transport who were not so careful and got away with it because "who was going to know?". If it came to an argument it was the cyclist's word against theirs, and if it came to court they could threaten to sue for damage to professional reputation if the case was lost by the cyclist. Now with a helmet cam the cyclist has evidence, so it isn't just word against word.

In most jurisdictions a single helmet cam recording is not sufficient for a prosecution (and may not even be directly admissible as evidence in court), but more than one complaint is usually enough for the bus/lorry company to investigate further (which they will, as there are potential fines for not doing so, and in some places that threat has teeth), and evidence from official CCTV and on-bus recorders is usually admissible (the former because it is operated by the authorities, the latter because the driver has consented to being recorded). If there is a chance you will get caught and successfully charged with dangerous driving (or the lesser offence of driving without due care and attention), then bad/aggressive/otherwise-unsafe drivers are likely to put more effort into being less bad, as it can get them reprimanded by their employer, possibly sacked, possibly prosecuted, and it can harm their ability to get driving jobs in future.
So the average carefulness of large-vehicle drivers is improving because of monitoring, and the perceived average even more so, because we never notice the good drivers as much, so the bad ones look more statistically relevant than they actually are.
I have had a "near death incident" or two, so the situation is still far from perfect, but these days I see more complaints about large cars (SUV types and people carriers) and private vans, than buses or lorries.
I cycled through London (not the absolute centre) until a year ago and found the lorries fine; they're courteous and give you a lot of space (much more than the buses, which were worse than cars - cars would give you space, whereas buses seemed to pretend you didn't exist). Worst of all were people-carriers, especially around the start and end of school hours.
The pedestrians here terrify me. Cyclists get a lot of stick because there are a few bad apples that ignore red lights, but I'd say the vast majority of pedestrians don't give a second thought about walking into the road on a red light.
The worst time for me is when I'm cycling past a stationary queue of cars and people assume that because the cars aren't moving, that nothing else could possibly be travelling along the road!
Same with MUNI in San Francisco: I can have a bright flashing light, be in the bike lane, and be able to see the driver in their side-view mirror, and they'll just drift over without ever even glancing in the mirror and almost kill me. As expensive and trouble-prone as streetcars can seem, it's one thing I really like about them: they stay on their tracks.
I completely avoid ever being next to them now. I'll just stop and wait if there's any chance.
f = ma. The bus is considerably more dangerous than a car, even while traveling at a significantly lower speed. As the parent comment noted, there are certainly some aggressive bus drivers out there as well.
Actually, neither of those matters much anyway. The energy transferred is primarily a function of the velocity difference (squared) and the mass of the lighter of the two objects. That is, a pedestrian will experience a collision with a car and with a bus in much the same way.
If the two objects are close to the same mass, i.e. car vs. car, the energy transfer will be reduced by up to 75%, but otherwise the mass of the larger object is immaterial. That is, a car-car collision at 60 MPH does about as much damage as a bus-car collision at 30 MPH.
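Here's a rough sanity check of those numbers, assuming perfectly inelastic collisions, where the dissipated energy is E = 1/2 * mu * dv^2 with mu the reduced mass. The vehicle masses, and the assumption that the lighter vehicle soaks up nearly all the energy when the masses are very unequal, are mine for illustration, not the parent's:

```python
MPH_TO_MS = 0.44704  # metres per second per mph

def dissipated_kj(m1_kg, m2_kg, dv_mph):
    """Kinetic energy (kJ) lost in a perfectly inelastic collision."""
    mu = m1_kg * m2_kg / (m1_kg + m2_kg)  # reduced mass
    dv_ms = dv_mph * MPH_TO_MS
    return 0.5 * mu * dv_ms ** 2 / 1000.0

CAR, BUS = 1500.0, 12000.0  # rough masses in kg (assumed)

print(dissipated_kj(CAR, BUS, 60))      # ~480 kJ, almost all into the car
print(dissipated_kj(CAR, CAR, 60) / 2)  # ~135 kJ per car: roughly the 75% cut
print(dissipated_kj(CAR, BUS, 30))      # ~120 kJ: close to the car-car figure,
                                        # matching the 60-vs-30 claim
```

In the limit of an infinitely heavy bus the reduction is exactly 75% and the 60-vs-30 equivalence is exact; with a 12-tonne bus it's approximate.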
This is correct. My grandfather was a physicist who reconstructed some of the nastier accidents in CA. He was always looking for two values in most situations: delta-v, and (mv^2)/2, the 'v' being the delta-v of the two vehicles when they collided. The damage done will generally be a function of the kinetic energy involved.
Note that this implies that unless there are other dangerous road conditions (fog, ice, etc), the safest speed is "the same speed as everybody else" so the delta-v is minimized.
Fun note for HN: he had software built for DOS that I helped him get running in DOSBox so he could run it on a modern computer. It would reconstruct the motion of the vehicles from the final resting positions and the depth of the dents, working backwards from the implied kinetic energy. Apparently the DOS version was a port of his original FORTRAN source... on punch cards.
> The bus is considerably more dangerous than a car, even while traveling at a significantly lower speed.
The potential damage may be greater, but the significantly lower speed also creates a much lower risk of an accident happening at all. All things considered, then, it's not clear which is worse on balance.
I was also confused, and I'm a native speaker. It isn't obvious that "rather" means "prefer" here.
The sentence could be read as "I prefer to be on the road with a bus since I think it's safer". Or it could be "choosing what's most dangerous, I'd rather say it's a bus".
It's also not clear what the perspective of the speaker is. It might be that a bus is safer for the rider, but more dangerous for cars around it. So would he rather be on board a bus, or rather on the road with a bus?
I have no idea how you could get that second interpretation from what he said. And the perspective is clearly inherited from the parent comment, who didn't mention being "on board" a bus, just "on the road with" a bus.
No ambiguity in that comment that I can see.
EDIT: I guess the use of rather in this context could be a colloquialism?
Actually, I didn't. It's an odd word choice, and 'prefer' would have made more sense. Rather is typically used in conjunction with another verb, while prefer isn't.
The same goes for Chicago, where buses are longer, but articulated in the middle for maneuverability. They drive just like the taxi drivers there, and that isn't a compliment.
I don't live in the US, but one time the bus driver decided to overtake a motorcycle that was properly riding at the speed limit. The motorcyclist was effectively trapped by a 20m long bus.
Indeed. Buses are held up as a solution to traffic, but they are terrible things to share the road with as either a driver or a cyclist. I wonder how many people have to be on a bus before the cars they would otherwise be driving are as annoying as the single bus.
I've been cycling as a primary mode of travel for about 10 years now, in LA and in the Mountain View area. So I have a pretty broad range of experience, from batshit-insane metro traffic to comparatively bucolic suburban commuting.
Car drivers are orders of magnitude more of a threat to my safety than buses. Buses are far more predictable and tend to change their vector of travel more gradually when they do change. Also, the drivers tend to be much more alert and aware of their surroundings.
Also, in general and particularly during commuter traffic, they really aren't that much slower aside from the stops.
I've also never, ever had a bus driver intentionally try to harm me in a bus (like trying to 'muscle' me off the road). This has happened on multiple occasions with car drivers.
EDIT:
I'd also like to mention I love the google cars. They're just so predictable and courteous. And in general, they just follow the rules.
For example, take a stop sign that a human driver reaches first. They have the right of way, but they want to be polite, so they'll try to wave a cyclist on... then start to go... then stop and wave... then start to go. There's nothing more dangerous to a cyclist than a driver not following the rules and behaving erratically.
Google car? Stops. Waits. Goes. No fuss. I have seen the older models get "stuck" and wait longer than normal, though usually this is when other drivers refuse to go before the Google car.
Here in Montreal, car drivers seem much more willing and eager to kill me (cyclist) than bus drivers. What makes car drivers so willing to casually endanger a human life I will never know. That they get away with all that reckless behavior can be infuriating.
From experience cycling in London: cars don't belch as much nasty crap into the atmosphere, and it's the big vehicles, buses and lorries, turning into you that are the real killers.
Buses here have been natural gas for a while. [1] We don't have many lorries here, mostly very large 18-wheelers and local delivery trucks... I count the delivery trucks amongst the regular car drivers. I wouldn't say they're more dangerous, though not less either.
Oooh, fun thought exercise - my guess is that about four cars equal the annoyance of one bus over the span of a year. Four randomized drivers will on average be slightly more annoying than an average year of one bus.
Taking into account:
1) Number of drivers removed from the roads by average annual bus use
2) Size of the vehicles
3) Random acts of annoyance
4) Dangers posed by each
To confess my bias: I'm a huge fan of mass transit and a bike commuter who touches his car maybe once a week, but I'm curious how else one might model this.
Given the bus's frequent stops and lumbering acceleration, I'd estimate it cuts the average speed behind it by at least half. It's unclear how far down the road this effect travels, but it's a significant distance. I've also yet to see a bus stop that doesn't obstruct the bike lane, so apply this effect to bikes as well.
Sometimes (though rarely) the bus stop is such that the bus can pull out of the traffic lane and become less of an obstruction, but even then, right turns are delayed significantly.
I don't mind this when I see a full bus, because I know that many private cars would slow down traffic at least as much. But damn is it annoying when the bus has 3 people on it (as is usually the case in my non-transit-oriented city).
> I've also yet to see a bus stop that doesn't obstruct the bike lane
They're fairly common in a lot of places, though only recently being built in the US, because they typically go together with having protected bike lanes, which were almost unknown in the US until recently. Here's a fairly typical example from Copenhagen (you can see that the bike lane passes behind the bus stop): https://www.google.com/maps/@55.6757297,12.5451953,3a,15y,46...
It's not usually an issue, though it does require having norms that both pedestrians and bicyclists follow. In Denmark, at least, there are two kinds of bus stops. At ones like this one, which have an island between the bike lane and road, passengers cross the bike lane whenever there's a break in bike traffic, and wait on the island for the bus. Then actual bus loading/unloading is directly to the island and doesn't cross the bike lane. In other cases, where there isn't a waiting island, passengers wait on the sidewalk and do have to cross the bike lane to board/unboard. In those cases, there's a zebra stripe painted on the bike lane in the area where passengers are supposed to cross, and bicyclists must stop before the stripe whenever a bus is present with open doors. So in those cases a bus stopping does interrupt bicycle traffic, though not by the bus actually entering the bike lane. That looks like this: https://www.google.com/maps/@55.6754034,12.5457476,3a,75y,16...
> Sometimes (though rarely) the bus stop is such that the bus can pull out of the traffic lane and become less of an obstruction
It is actually a deliberate policy in many cases not to have the buses pull out of the traffic lane, because it is hard for the buses to get back into traffic since cars do not want to let them in, resulting in slower bus journeys.
I beg to differ. It doesn't remove that many cars, really. How often do you see a bus pass by? Every 20 minutes or so? Count how many cars pass by in the same amount of time, and some of them carry multiple passengers. (I have no problems with buses here in Europe at all, just sayin'..)
blackhaz didn't say it, but is probably thinking there are hundreds of cars per bus. My mental image was a bus with 10-20 people, vs. 500 cars (in that 20 minutes)
One could argue every passenger on a bus saves some fraction of a car trip. Even if the total number of car trips saved is a much smaller number than the total number of cars, there's a 'breakeven' point above which the pain of the buses is less than the pain of the bus passengers in cars. I bet in most cities, we've hit that breakeven point, regardless of bus behavior.
<wild speculation>
I bet that number is about 6 passengers per bus, excluding environmental concerns.
</wild speculation>
I think a good way to estimate it is to get a graph of the average number of people in a bus at a time, then compare that to the average number of people in a car at a time.
people_in_bus / average_#_people_in_cars = #_of_cars_removed_at_this_moment. Of course the model can be made significantly more complicated, but it's a good starting point.
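As a minimal sketch of that starting point (the ridership and occupancy numbers below are invented for illustration, not measurements):

```python
def cars_removed(people_in_bus, avg_people_per_car):
    """Cars taken off the road at this moment by one bus."""
    return people_in_bus / avg_people_per_car

# A bus carrying 20 people, with typical car occupancy of ~1.6:
print(cars_removed(20, 1.6))  # 12.5 cars removed right now

# The breakeven question from upthread: if one bus annoys like N cars,
# the bus wins whenever it removes more than N cars' worth of drivers.
N = 4  # the annoyance-equivalent guessed a few comments up
print(cars_removed(20, 1.6) > N)  # True
```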
Most of the buses I see round here are full of either school kids or elderly folks. I suspect the annoyance they would cause by becoming drivers might be considerably more than they cause as bus passengers.
I will probably get downvoted for this, but: this is why I am skeptical about self-driving vehicles. Driving in general and adjusting to each country/locality, then to vehicle types is such a human thing that you'd need a thousand more "deep understanding" things like this.
It will be an endless process for an algorithm-driven car. Buses in Dublin will be so much different from buses in London, and trucks in the US different from trucks in Germany. Then there are the bikes and pedestrians. Human drivers/bikers/pedestrians look at each other and know what to expect. These are the things that would be incredibly difficult to formalize, if it's possible at all. Unless you have something like a Deep Mind on board, but even then I wouldn't be so sure.
Humans will also adapt. If there are local driving quirks in an area that the automated cars don't follow, after watching the automated cars fail to follow them a few times, people will account for the unexpected behavior.
This is already an issue humans solve in the context of tourists driving through town, who aren't accustomed to local rules like "First driver at an intersection gets to make a left on green instead of yielding to the column of oncoming straight traffic."
Wouldn't this be a great application for machine learning? Collect traffic data locally, run it through the system a few bazillion times, then you realize the only way to avoid a car crash is to drive like a maniac in certain places.
Google already does this. The cars have by now driven nearly as many miles virtually, in simulation, as they have on the road. And every time there's a near accident, or one of the various accidents where other vehicles were at fault, they feed in the data and run simulations on how the car could have avoided the accident.
Even if they're not perfect, they're already better than the median driver in the majority of daily driving. They don't need to be perfect, just better than most, and you're drastically reducing deaths. Below-median drivers cause more accidents than above-median ones.
Most accidents come down to awareness and reaction (both reaction speed and the ability to evaluate the right course of action). These are two things that a sufficiently advanced computer system will always be better at. A human gets distracted, doesn't have 360-degree constant vision, doesn't have thermal imaging or the ability to range-find obstacles through fog. A human can't use perfect situational awareness of obstacles and road conditions to avoid an accident.
Anyway, the plural of anecdote isn't data. Google is in the data business and has a firm grasp of what they're attempting. I doubt they're going to be proven wrong.
> They don't need to be perfect, just better than most and you're drastically reducing deaths.
From a utilitarian perspective, they would only need to be better than whoever they are replacing one driver at a time.
From a human-feely point of view, they'll need to be massively better than the best human to stand a chance of adoption.
My guess about AI: for human-suitable tasks the gap between a typical human and the best human is actually not all that great, compared to the difficulty of getting to human-level skill in the first place. So by waiting for surpassing the best human, we don't actually lose too much time.
The problem is ANNs (I'm assuming you're thinking of them) are not deterministic, so I wonder whether there may be issues regarding the legality of using a system with an uncertain outcome.
Artificial Neural Networks are actually totally deterministic by default. (Even if training them is not.) What they lack is people being able to explain in simple ways what the ANN is doing.
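A tiny sketch of the distinction (a hypothetical toy network, nothing to do with any real self-driving stack): once the weights are fixed, inference is an ordinary deterministic function; the randomness lives in initialization and training.

```python
import numpy as np

rng = np.random.default_rng(42)  # the randomness is here, at init/training time
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    """One hidden ReLU layer: the same input always yields the same output."""
    return np.maximum(x @ W1, 0.0) @ W2

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.array_equal(forward(x), forward(x))  # deterministic at inference
```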
I don't think ML can solve the "human" part, i.e., as I said, drivers, bikers and pedestrians looking at each other, knowing what to expect and silently agreeing on what to do next, all happening in an instant. This is a crucial component of driving, especially in cities with dense traffic and oftentimes unclear markings and signs, where the human factor becomes important.
> [The pedestrian was] giving the awkward body language that he was planning on jaywalking. This was a very human interaction: the car was waiting for a further visual cue from the pedestrian to either stop or go, and the pedestrian waiting for a cue from the car.

http://theoatmeal.com/blog/google_self_driving_car
And some places even run smaller buses. Here in the DC area, we have articulated buses, standard ones, and even smaller ones—about 60 percent of the size of a standard bus. It's rare to get the ideal size at the right time in an area of such bad traffic, but they do make an attempt.
In the small city I live in, we have small buses, which are sized right for the ridership. Unfortunately their top speed is about 26MPH on level ground, and my commute includes a steep freeway overpass, on which that drops to about 18MPH. The speed limit on that stretch of road is 25MPH, and most traffic goes about 35MPH on the overpass, as there is nowhere for a speed trap to be set. I can tell from 4 blocks away when one of those buses has been there recently.
London used to have them. It was sort of amusing watching them trying to navigate the tight corners and narrow streets. But very scary to be near them on a bike.
Running buses more frequently has its own problems. When a bus is running slightly further from the one in front than it should be, more passengers accumulate at each stop, and those passengers take longer to buy their tickets, which makes the bus run even later. This is a positive feedback loop that is made worse when buses are more frequent. I have certainly seen a set of four buses arrive as a convoy on a route that is meant to have them every 12 minutes.
The solution is to either run fewer bigger buses, or to inject a whole load of negative feedback into the system, either by having buses wait for a minute at a few stops until their scheduled departure time (which involves making the route slower) or by real time tracking with radioed instructions to the drivers, like they do on some underground train systems.
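A toy model of that positive feedback, with invented arrival and boarding rates (an illustration of the loop, not a claim about any real route):

```python
# A bus running d minutes late finds rate*d extra passengers waiting at
# each stop, adding board_time*rate*d minutes of dwell, so the delay
# compounds stop after stop. All constants are made up.

ARRIVALS_PER_MIN = 1.5   # passenger arrival rate per stop (assumed)
BOARD_MIN = 0.05         # boarding time per passenger, ~3 s (assumed)

def delay_after(initial_delay_min, stops):
    d = initial_delay_min
    for _ in range(stops):
        d *= 1 + BOARD_MIN * ARRIVALS_PER_MIN  # positive feedback per stop
    return d

for stops in (0, 5, 10, 15, 20):
    print(stops, round(delay_after(1.0, stops), 1))
# 0 1.0 | 5 1.4 | 10 2.1 | 15 3.0 | 20 4.2 minutes late and accelerating,
# until the bus meets the one behind it: a convoy.
```

The negative-feedback fixes above, holding at timing points or radioing spacing instructions, work by breaking exactly this loop.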
If you're going to give it its own lane anyway you might as well make a proper tram (which will have higher capacity and a smoother ride for passengers).
In Buenos Aires, there are bus lanes on some major avenues. That means buses are on bus lanes some of the time but not all of the time. A tram can't do that.
The problem is not whether cars can enter the exclusive lane. The bus needs to be able to get out of the lane, unless the lanes go everywhere the bus may want to go. Having tram lines go everywhere can be prohibitively expensive. Building exclusive lanes on major avenues used by lots of buses is cheaper and can improve travel times a lot.
Buenos Aires used to have a tram, actually. In the nineties you could still see the rails on the road, as you describe. But buses replaced trams in the forties, because as the city grew, the ability to change where public transport could go proved to be very valuable.
Tram lines are expensive, and require that the city (population, economic, and social centers) remain largely static. If you predict wrong, you can end up spending millions on a tram line that doesn't get used.
Every bus you run requires a qualified employee to drive it, and putting them on for only an hour or two per shift would make it even more expensive (what employee would want to work such minor shifts?)
There is a common assumption that humans = bad drivers. And that these self driving cars will be much better than people at driving. I think this belief greatly underestimates the difficulty of what Google and others are trying to do.
Humans are actually great drivers when the situation requires thinking. Lot of snow and can't see the lane markers? Millions of people adapt every day to this during winter. Lots of pedestrians, bicycles, motorcycles, etc. doing somewhat unpredictable things? Again, look at any Asian mega city, people can adapt very fast. Basically, if the situation requires being alert, people are very good at driving.
When are people bad at driving? Whenever it's monotonous. Bumper-to-bumper traffic, or a free-flowing freeway. Constant repetition of red/green cycles while going down a suburban street. These boring situations make a lot of people basically turn off their alert thinking. Then they do other things like texting, talking on the phone, etc.
And these boring situations are exactly where AI self driving is better at driving than people. Computers never get bored. The self driving car will be at 100% attention even during the most boring traffic. But when boring suddenly turns to not boring? The current state of AI is very very bad at this.
And unfortunately there's no easy way to use the best of both sides. If the AI is fully driving, then it will do great while the driving is boring. But by the time the AI decides, this is too much, can't handle this, it's too late to alert the human driver to take over. But if the human driver is required to always pay attention, what's the point of the self driving car?
Self driving cars will get there someday, but I think it's much farther away than many assume it will be.
"Lot of snow and can't see the lane markers? Millions of people adapt every day to this during winter."
Having driven a corridor with a ton of snow quite a lot (western PA to Washington, DC), I'm going to strongly disagree with this.
People don't "adapt". They just do it anyway because they have to.
The rate of minor and major fender benders in this kind of situation is really high, it's just "acceptable".
In truth, computers that understand the physics of what is happening to the car at a given point in time, based on tons of sensor data, are going to be much more effective drivers in snow and ice than people (because of even simple data they can use, like "this wheel has this much traction currently, so I need to correct this much"). People are really bad at figuring this out in their heads and use horrible heuristics, often hitting other cars, guard rails, you name it.
You could also get data about surrounding conditions (i.e. the ice the wheel will hit next, or whatever) that people can't, because they are not close enough/can't process it fast enough (i.e. not just what is going on with the car, but what is the state of the ice on the road 2 inches to the right or left of the car, and would it be better for the car to steer in that direction to get more traction, etc.).
This is it. There is nothing inherent to CV that makes self driving cars unable to operate in snow. Humans already suck massively at it (and in the rain).
I remember driving home in a massive rainstorm at night about two years ago, with around 2-meter visibility, going like 20mph because I could not see shit. I had less sensory data than a computer with extra vectors of vision beyond sight would have had, and my reaction time is a ton slower.
Then a year later I was driving in icy conditions and counted five cars that had either gone off the road or rear-ended each other in the span of 15 minutes because of the slick surfaces. I imagine self-driving cars would do a much better job slowing down and budgeting for potential slippage than impatient humans.
There is nothing really magical about our eyes that gives us better clarity in awful driving conditions, and we don't have radar and GPS fed into our brains to give us a clue where the roads are supposed to be; we just rely on historical context and past driving experience on the same roads.
How many hours would it take a networked fleet of self-driving vehicles to outdo my historical context for driving on any of the roads I've driven on for a decade? Two? Maybe three? You just need to get the tunable parameters right through some good old genetic algorithm work, like what Google is doing now with its test fleet. And while it would be impossible to guarantee perfect performance in every scenario (because inevitably an earthquake is going to make a self-driving car crash where nobody could have prevented it), it just has to be better than us, and that has almost certainly already happened.
"because inevitably an earthquake is going to make a self driving car crash where nobody could have prevented it"
Maybe not. On March 11, 2011, a magnitude 8.9 earthquake hit Japan.
The first seismometer to detect it (out of 92 in the system) was on Kinkazan Island, just off the eastern coast of Japan. Each seismometer has some compute power and is constantly computing a hazard level based on the waveforms of earth movements. The Kinkazan unit did what it was supposed to, and sent an emergency shutdown signal to the Shinkansen high speed rail system. Not through the signaling system; the circuit breakers at power substations were immediately tripped and power to the trains cut. The trains treat a power cut as an emergency stop situation and apply the brakes hard. All trains stopped safely.
> I remember driving home in a massive rainstorm at night about two years ago with around 2 meter visibility and was going like 20mph because I could not see shit.
I'm a relatively new driver (1.5 years or so since getting my license), but this has happened to me twice, i.e. catching a rainstorm so big that I could barely see in front of me. In both cases I decided to stop in a refuge on the right side of the road and wait for the weather to improve. Sure enough, after about 20 minutes of waiting the rain eased off. What I'm saying is that under some circumstances (like close to no visibility) one should just stop and wait for the external factors to improve.
It's good that you stopped, but it would be even better to take the nearest exit and wait it out there, because if you are stopped on the side of the road in poor visibility conditions, someone might not see you until it's too late and hit you if they aren't exactly in their lane, which can happen during poor visibility.
Completely agree. Though there was one situation where I found everyone had stopped on the freeway and I kept going to an exit. Had I known why they had stopped, I probably would have stopped and gotten out of my car, too. I was driving through a massive thunderstorm beside dozens of semi trucks, and all of them suddenly stopped at the same time. I kept going and got to a gas station, because being trapped between all those completely stopped trucks made me very nervous. When I got to the gas station, I heard the attendant say that Amtrak had stopped too, because of all the tornados blowing through the immediate area.
The biggest thing is that people drive when they shouldn't. Sometimes your car just isn't going to do it. All-season tires, 2WD, ice storm: yeah, that's when you see people sliding down hills. There's no traction, and no automation is going to fix that. The best thing it will do, no joke, is just not start the car in those conditions.
You do see a spike of people who don't adjust well to winter driving. They don't leave longer gaps, or make sure they have more time before crossing intersections. Computers will do that much better, eventually. I can't wait to go to work or drive around the city and not have to worry about someone texting, doing their makeup, or making poor driving decisions. Have a nap on long trips, watch a movie, finish up some work.
I can see computers having an amazing reaction time to some kid running into the street to catch a ball, or an animal running in front of you at night. It's going to take a while, though. I wouldn't trust it right now to safely take me anywhere. There are just so many variables that the engineers have to account for: weather conditions, people conditions, car conditions, and on and on. It's nuts.
> The biggest thing is that people drive when they shouldn't.
It is extremely common to drive beyond your actual "in complete control" speed.
You are supposed to drive slowly enough in snow so that you can correct for unexpected patches of black ice or rocks in the snow slush. You are supposed to leave a long enough gap to be able to react and stop if the car in front of you unexpectedly fully slams on their brakes. You are supposed to drive slowly enough to stop if a kid runs out between parked cars.
Humans almost never do, because unexpected things almost never happen. Traffic flows faster and perhaps smoother, because we willingly accept driving at speeds where we can avoid most, but not all hazards.
But if we expect self-driving cars to always put safety first, then we will find them to be "tediously" slow, even if they have better sensors and faster reaction speed than us.
I don't think we'll care quite so much that a 20 minute drive is now 25 minutes when we have wifi and sitting in the car is basically like sitting in our living room. Driving slow is only tedious when you're doing it, not when you're able to get work done or be entertained throughout.
Yeah the norms will be completely different. If an adult kept asking a bus driver or airline stewardess "are we going to be there soon?" while fidgeting and straining to look out the front windows, most people would consider that adult to be poorly adjusted to modern living. It will be the same for robocars.
I don't mind stringent enforcement of life-saving measures in a place as dangerous as the road, even if it means everybody moving at 10kph in bad conditions. Roads are the biggest killer of youth globally [1]; minor emotional factors such as 'tediousness' should not be allowed to get in the way of solving such a big problem.
I drive at a "correct" following distance. Way back when I took my driving test I was told to allow 1 car length for every 10 MPH speed. So on the highway I leave at least 6 car lengths.
I constantly need to tap the brake and back away as a car cuts into the lane. Almost no one on the highway leaves adequate space for human reaction times.
I would say that 1 car length / 10 mph is actually a very short following distance. If an average car is 15 feet in length, at 60 mph, that is only 90 feet. At 60 mph, 90 feet of following distance is only 1.02 seconds. (1 mph == 1.4667 fps)
When I was taught to drive, we were told that 2 seconds of following distance was appropriate.
This just reinforces your point that the typical human driver leaves insufficient following distance.
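The neat thing about that arithmetic is that the car-length rule works out to ~1 second at any speed. A quick sketch using the figures from the comments above:

```python
FPS_PER_MPH = 1.4667   # 1 mph == 1.4667 feet per second
CAR_LENGTH_FT = 15.0   # average car length assumed above

def rule_distance_ft(speed_mph):
    """The '1 car length per 10 MPH' rule."""
    return (speed_mph / 10.0) * CAR_LENGTH_FT

def gap_seconds(distance_ft, speed_mph):
    return distance_ft / (speed_mph * FPS_PER_MPH)

for mph in (30, 55, 60, 70):
    d = rule_distance_ft(mph)
    print(mph, d, round(gap_seconds(d, mph), 2))
# Every row comes out to 1.02 s: the speed cancels out, so the rule
# always gives exactly half of the 2-second rule, regardless of speed.
```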
Some of the roads in the UK have chevrons painted on them at intervals. The idea is that at a normal speed for that road, if you can see two chevrons between you and the car in front, you're leaving the right distance.
It's a good idea; the distance is always bigger than you think.
Not bad, but you need a term that scales with the square of velocity (for kinetic energy/braking distance) in there to be safe. 1 second is fairly okay at low speeds, but at 60 mph (~100 km/h, 27 m/s) two seconds is shortish, and if you are doing 200 km/h (as on a German Autobahn) then 2 seconds (~110 m) is not safe in my opinion.
(Although the penalty line in Germany is just this, half the speed in km/h to get meters, i.e. driving closer than 100 meters if going 200 km/h gives you a ticket for tailgating)
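To put numbers on that v^2 term, a sketch with typical assumed figures (1 s reaction time, ~7 m/s^2 braking on dry tarmac), comparing the 2-second gap with the full distance needed to stop:

```python
REACTION_S = 1.0   # driver reaction time (assumed)
DECEL = 7.0        # braking deceleration, m/s^2, dry road (assumed)

def stopping_distance_m(speed_kmh):
    v = speed_kmh / 3.6                  # km/h -> m/s
    return REACTION_S * v + v * v / (2 * DECEL)

for kmh in (50, 100, 200):
    gap_2s_m = 2 * (kmh / 3.6)           # the 2-second rule, in metres
    print(kmh, round(gap_2s_m), round(stopping_distance_m(kmh)))
# 50 -> 28 m gap vs 28 m to stop; 100 -> 56 vs 83; 200 -> 111 vs 276.
# The quadratic term takes over at autobahn speeds, as noted above.
```

A dead stop from full speed is the worst case, since the car ahead usually needs braking distance too, but the quadratic growth is the point.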
Completely agree with you. I should have stated that I was taught in the US, where it would be... slightly atypical to be going 200km/h ;) At the time that I took driver's education classes, the US national speed limit was still 55mph, about 88km/h. At that speed, 2 seconds is probably a good rule of thumb for something that's pretty easy for a driver to observe.
True of course. At low speeds, the braking distance is negligible, and the margin is needed for driver reaction. When exceeding 60-80 km/h, the amount of kinetic energy and correspondingly the braking distance will start to dominate what is necessary for safety margin.
When I'm not in heavy traffic I give even more space. But unfortunately at a certain density of traffic the aggression caused by a safe distance actually makes it unsafe as people cut you off and try to "retaliate" for the imagined insult.
This incidentally is also (in my opinion) the hardest skill in general aviation. Knowing when not to fly a small plane because the weather is unsuitable seems to be very difficult for people to master.
The difference is, in aviation, when you get this wrong you will very quickly no longer be a pilot. Or alive.
And this is the classic theory vs pragmatism problem in a nutshell. You've done a good job of explaining why humans SHOULD be horrifically bad drivers in theory.
There is a slight problem. In practice, in actual measured statistics, we're utterly fantastic drivers despite all the reasoning.
We need a baseline. How about trains? Trains in the PTC and semi-PTC era don't (can't) speed and are monitored to crazy 1984 levels; usually there are two employees in the cab (not counting subway-type things, I mean real Amtrak here). Trains don't have steering wheels, usually can't spin out, etc. Still the death rate is about 1/2 per billion passenger-miles. What can I say: brakes fail, drivers have heart attacks, rails crack. It's apparently impossible to put squishy humans in a sealed box and toss that box around at 90 MPH without killing about one per two billion miles. Note that's to the moon and back 4000 times, safer than even spacecraft.
The death rate for passenger cars is about 7 per billion passenger miles. A little higher for trucks/SUVs, a little lower for cars (cars are safer on the road, more stable).
That's only about 14 times worse than trains.
Anecdotally human drivers do dumb things: because there are 7 billion people and worldwide networks, every form of idiocy is known to all, yet simultaneously it's very rare. Anecdotally a billion passenger-miles isn't very far, so every year we stack bodies like cordwood, sadly. But even on the ultimate in safety, a train, we'd still have a tenth the number of dead. By that reasoning the ratio should be immensely larger than 14, like 1000 or 1000000. But it's 14. That has to be answered...
Even crazier to consider is the death rate has dropped by a factor of 3 since the 1970s due to better engineering and regulation. We KNOW how to lower death rates using existing techniques. Yet we also know software written by humans is basically worthless. So given a choice of strategies to further lower death rates, given the track records I'd trust the MechEng and CivEng a lot more than the CS department.
A factor of 14 reduction in death rates is unfortunately far more likely to come from the MechEng/CivEng grads than the CS grads, and I say that as a CS grad.
I'm with you up until the software being inherently unsafe bit. I've seen a lot of code I wouldn't trust with anything, let alone my life, but in your own examples: trains are partly safe due to automation, warning systems, etc. and cars have gotten safer at least partly due to things like ABS and traction control which rely heavily on algorithms. However, those are relatively simple systems compared to a self driving car and a lot more "bugs" will likely show up as more AI drives more miles.
While I agree about good measurements and the need for baselines, death rate isn't it. When cars collide, you get a really high number of injuries as well as deaths. For example there are almost 20x more injuries to pedestrians than deaths. It's not fair to exclude them from those stats, since an injury can be permanent and life-changing.
We probably don't get people decapitated by the steering column any more. But it doesn't mean they are completely fine after the same collision.
Thanks a lot for making me aware of transportation safety statistics; it's a fascinating topic. And I really was surprised to see that death rates for passenger cars are only about an order of magnitude worse than trains'.
I just wanted to note, that planes have a several times lower death rate than trains: 0.07 per billion passenger miles according to the sources I've seen. Maybe this could be a hint that safety in trains could still be improved?
No, that's just because flights are longer distance than train trips on average. It doesn't really make sense to compare transatlantic flights with a commuter train...
Ground failure can knock out a train (say the collision system breaks, or the politicians are bought off to not require modern PTC). Ground failure can't knock out a plane, but it could theoretically knock out a car, so I think it's a good comparison.
Also, when planes encounter bad weather or... events of any sort (erupting volcanoes?), planes working with ATC can use three dimensions to avoid the hazard. In that way trains have it worse, because they can't just tear off across farmland to avoid a tornado and they only have 1-D mobility (speed). I acknowledge it's unfair to compare with cars because, with their denser road network, they have 2-D mobility.
Also, the security and 1984-style monitoring and Monday-morning quarterbacking are brought to a fine art in the aviation community, far beyond the train community.
On one hand the airplane industry is rapidly moving maintenance offshore to unregulated, uncontrolled facilities, which will likely impact accident rates in the future, although it's not so bad right now. On the other hand it's hard to maintain a train or its tracks offshore, which leads to inherently higher quality, but rushed jobs due to the expense.
The problem is that drivers don't know when to not drive. I've driven a relatively large number of kilometers over the years as a non-professional driver and it always amazes me how many people will continue to drive in conditions that are nowhere near safe. The best way to avoid accidents is knowing when not to drive. Nothing you have to do is so urgent that you're willing to risk a detour through the hospital (or the morgue).
Also, Google never said the cars were incapable of driving on snow. Just that they weren't prioritizing tests and development with that in mind yet. They want to nail typical San Francisco/Mountain View driving before branching out into other road condition types.
To be fair, a lot of those concerns are handled pretty well by traction control systems, etc. I drive in the snow a lot and really appreciate the things my car does to micro-manage traction.
The main rule for driving in snow is to assume everyone else is actively trying to kill you. :)
The main rule for all driving is to assume everyone else is actively trying to kill you.
It's something I was told when getting a motorcycle at 17, and it's served me pretty well.
Specifically on snow, I think we subcontract too much from brain to ABS, traction control, the lovely powerful heater and whatnot. We don't get Scandinavian amounts of snow in the UK, so whenever we do have snow you'll see dozens of ridiculous accidents, because people don't take account of stopping distances or speed, or of ABS being a hindrance in snow, etc.
My motorcycle instructor advised me to "treat all car drivers as a lower life form, and treat taxi drivers as the lowest of the low." That mindset saved me a few times. U turns without warning, pulling over in prohibited areas without indicating, opening doors right in front of you - they probably aren't actively trying to kill you, but certainly can give that impression.
When we get ice in the DC area, it's always the 4WD cars that are in ridiculous accidents, as they are the only ones that can get up enough speed; but pretty much all cars these days have 4-wheel ABS, so stopping ability on ice is equal between those and the cars that can barely start moving.
> The main rule for all driving is to assume everyone else is actively trying to kill you.
If you were really doing that you'd never drive down an undivided road when a car was coming in the opposite direction, in case they swerved across the divider at the last second :)
Or you'd do it cautiously. As a motorcyclist/bicyclist, I have to watch for this because drivers literally do it all the time, they call it "turning left."
Many years ago I was almost taken out when a driver did a very unexpected U-turn into the gap he thought was there. Trouble is, the gap in traffic was actually me on a large motorbike. So it can happen :p
I know that corridor - and I agree with you. Unfortunately, there's also a huge number of people that don't treat snow with the respect it deserves. If you live in WV or OH (or in PA near enough to the border to fake it) you still don't need to have your car inspected. In PA, you can probably still find a mechanic that will slap a sticker on anything, but it's getting harder.
In any case, you need good tires and working brakes to navigate snow effectively. Anti-lock brakes (in my opinion) are no better than a person who can properly pulse their brakes, but... if you've got even a slight warp in your rotor, you won't brake evenly enough to stop in the snow. The other problem in PA is PennDOT's "new regime". There's been far more ice packed onto the roads when it snows lately, because melting snow that's packed under tires turns to ice. I'd far rather drive on dry, crunchy snow (for traction).
Driving in Latvia: -25 Celsius, 1.5 meters of snowfall, main road (the closest they have to a freeway/motorway; it's 2 narrowish lanes each way with no center divider) not cleared... visibility extremely poor due to continuing snowfall... Everyone continues driving like it's +25 Celsius and sunny. This means 110km/h (the speed limit is 90km/h in the dry, 70km/h in the snow) bumper to bumper (or is that fender to fender?). I've never been so petrified driving in my life. Somehow it all worked out, but I can imagine one false move and we would have been looking at a 100-plus car pile-up.
That can be scary but there are scarier places in traffic.
Latvia does have a traffic-related death rate (per number of motor vehicles) that is twice as big (24.8) as neighbouring Estonia (11.8), or the United States (12.9), which are about on the same level, or over five times as high as Finland (4.4) somewhat further to the north.
On the other hand it's less than half of that in Russia (53.4), and one-fifth of that in India (130.1). In Ethiopia, the death rate is 200 times as big (4984).
We can also make digital lane markers, or whatever the case may be. If we have future infrastructure built with autonomous cars in mind, they can easily be better in those situations too.
The hard part is that we're trying to force self-driving into our current infrastructure and traffic. If the bus had been self-driving too, they would have communicated properly and merged perfectly.
And if the Moon was made of cheese, why are we not harvesting it already? Even if all new vehicles starting now were autonomous and perfect, it would take decades to get all-autonomous traffic.
Some of this stuff about self-driving cars sounds to me like the "sufficiently smart compiler" that can optimize away the overhead of certain features of Lisp. Meanwhile, Uber thinks my house is in the middle of I-83.
True; the number of fender benders on the first snow of the season is very high, because people haven't re-adopted the winter braking-distance model in their heads yet.
The difference is that all self-driving cars benefit from the knowledge created by bumping into the edge cases. People can only do this in an ad-hoc manner (i.e. personal experience, taught by mentor, etc)
It doesn't even have to be in a place that doesn't get much snow.
The first snowfall in Calgary looks like what you get in a standard snowstorm in a city in the southeast. It's even funnier if the first snowfall is early and then there are another two months of sun and no snow, and then you get a repeat during the second snowfall. People have to relearn how to drive in snow every year.
People are bad drivers in good conditions, they just happen to be able to be bad drivers in bad conditions as well.
Bah, it's not just the first snow storm. They salt the roads so much in Ontario that people think they know how to drive in bad conditions but actually they just know how to drive in bad conditions on heavily salted roads. If they hit a patch of unsalted road they get in trouble quite quickly.
In my experience Saskatoon fares a lot better. They don't salt as heavily because it's often cold enough that salt doesn't work as well, and because judges don't blame the municipality for not salting when there's an accident.
I think the poster assumed that the person was trained to fit their surroundings. We are all hyper-fitted to what we know best and what we encounter every day, and will probably fail spectacularly if we're unaware of that.
I have a $100 bet going with a coworker at the office.
We defined self-driving cars: When I can pull out my phone in any of at least 5 major metropolitan areas around the world, order a driverless car, have it pick me up and deliver me to a specified location within the city in a timely fashion, with as little risk as getting in a taxi.
He is sub 20 years. I'm thinking more like 40-50 mark.
My thesis? We can solve the technical challenges, but so much physical, legal, and regulatory infrastructure needs to change to make this viable that it will be more than 2 decades until we see self-driving cars as a reality across multiple cities. Of course, I would be more than happy to lose the $100 as it would make everyone's lives better that much sooner. But I'm skeptical.
> He is sub 20 years. I'm thinking more like 40-50 mark.
Perspective:
* 60 years ago, we didn't have highways.
* 45 years ago, seat belts weren't required.
* 40 years ago, we expected engines to last 10s of thousands of miles. We didn't really understand crumple zones. Open containers, and even drinking while driving, were still a thing.
* 30 years ago, we added fuel injection, computers, and airbags.
* 20 years ago, the hybrid car, better fuel efficiency, and side airbags
Perspective: nearly 50 years ago we did have Category IIIa instrument landing systems in commercial jet aircraft capable of landing without pilot input. We had reliable gyroscopic autopilot capable of keeping aircraft on a set course long before that.
Vast sums of money have been sunk into developing exceptionally reliable avionics systems since then.
Passenger carrying aircraft still all have a pilot or two.
And to complete the thought: the main reason why human pilots are still on all those aircraft is because... the automated/computerized systems still can't really make decisions that match what the humans can do. So the computer is really great at maintaining heading, altitude and speed for a set period of time, but really bad at deciding which heading, which altitude and which speed, and for how long (the best you can do that I'm aware of is have a human lay out the desired route and punch that into the FMC, and then all bets are off if conditions change along the way, and the humans will have to decide what to tell the plane to do again).
The exact opposite has happened to cars. We've delegated route building to our GPS devices, and we maintain heading and speed until the computer tells us to do something different.
And those decisions themselves could be easily automated, but people choose not to. There's a checklist for making those decisions. If you can write a checklist, you can write code. And for when there's not a checklist, then you can pick randomly, because that's what the human is essentially doing.
A human can notice when new evidence becomes relevant. If something isn't already on a computer's checklist when it becomes relevant, the human has a better chance of deciding what to do.
EDIT: I understand that there is research going into making computers recognize "novel" situations. My comment only applies to algorithms that contain "checklists".
No, then you need to get meta. There's a checklist for identifying "novel" information and reacting. By checklist I mean algorithm.
Sure, it can be tough to tease out the algorithm from the minds of current pilots. Luckily in aviation we already have training manuals. In other domains it'd be tougher to know when we're done extracting knowledge from humans.
It's not the landing that's difficult, it's the "write an algorithm which correctly identifies which specific parameters should cause the aircraft to choose a river near the airport as its landing site, and get approval for it" that's the difficult part.
How often does it happen, though? You don't need a human on every flight in order to deal with that. Build a few ground-based emergency remote-control centers across the continent, and train the autopilots (or the air traffic controllers) to call them when things get bad.
I'm not convinced anyone in the commercial aviation industry would agree with you.
Some on-ground duty pilot sitting in a simulator who's suddenly placed in control of an aircraft that's reported an anomaly seconds before landing has absolutely zero chance of being able to make the same judgement call as a human (co)pilot who's been sitting at the controls for the last few hours. And the likelihood of human input being required is highly correlated with the possibility that the aircraft has lost connectivity and/or that some of the equipment that's supposed to be sending back data is malfunctioning.
(and yeah, you could build in more redundancy, but that still doesn't eliminate the problem and probably costs more than the pilot)
Pilots on modern large aircraft are more like managers who tell the aircraft what to do, and they fly by hand only enough to keep within minimum legal requirements (minimum numbers of landings, etc...).
He probably meant Interstates, and 1956 is 60 years ago, which is why I'm guessing that. Interstate 5 (Seattle portion) was only completed in 1969[0] so depending on where you are, it could be significantly less.
Good point. I-80 wasn't complete coast-to-coast until 1986.
There were gaps in the Western sections for years. That was the first coast-to-coast Interstate, the old Lincoln Highway route. I-40, the "Route 66" route, was completed even later.
All these were some sort of win-win: not hard sells, low resistance. SDVs are upending the notion of driving and the economy around it: car dealers, oil companies. It won't be the same game at all, in my view.
>so much physical, legal, and regulatory infrastructure needs to change to make this viable that it will be more than 2 decades until we see self-driving cars as a reality across multiple cities.
It could be a lot more straightforward than you suspect. The three things we have going for us:
1. Car accidents are almost always resolved by deals made between insurance companies (this will be doubly true with unpiloted cars, which will presumably not commit criminal acts)
2. The same few large companies are likely to deal in insurance of piloted and autonomous vehicles, so there's not a huge incentive to go sue-crazy
3. If autonomous vehicles cause significantly fewer accidents and companies can charge the same amount for insurance covering them, that is a huge win for some very large companies with the power to get the ball rolling.
Increased vehicle safety will be a big win for these companies in the short term, although in the long term some price discounts might be factored into the cost of autonomous vehicle insurance.
Do you live in the suburbs? This could become true, but more because of the increasing urbanization trend making public transit more attractive than cars in more places.
We haven't exactly spelled out the terms precisely, but the spirit of the wager is that self-driving cars will be involved somehow. Certainly public transit could be part of a mix of transportation options, but I wouldn't win if self-driving cars don't come to fruition yet she doesn't get a driving license because she can subsist entirely on public transit.
I'd agree. I think in 10 years we'll be starting to have the conversation about when human driving should be banned in metro areas. At that point it'll be obvious that the technology has arrived, and the safety record will be there, but not everyone will have the technology. There will be the problem of telling someone with a jalopy that they need to park it outside the metro and pay for an Uber to their destination, which will probably snarl things up for at least a decade or more... At some point, though, as rich people are zipping along in cars that never get pulled over for speeding or having taillights out, the "Uber tax" is going to be calculated to be less expensive than the "PoPo tax" on the working poor.
I'm rather skeptical on that happening anywhere near 10 years. Even if the tech was ready to go full-scale tomorrow, just the process of tooling up to make tens of millions of these cars and actually selling them to people, letting them trickle down to lower-income used car buyers and letting the oldest manual cars trickle out of the market could take decades.
Once the first ones are available on the open market, I'd expect at least 10 years of them being very expensive rich peoples' toys while the remaining edge cases and kinks in the tech are worked out, manufacturing scales up, insurance and legal issues get sorted out, various road maintenance departments learn to deal with their needs, etc.
One reasonable scenario for rapid deployment of self-driving cars is shared vehicles, driving a few people at a time over a shared but optimal route.
It's possible that such vehicles will have 5x-10x the total daily occupancy of regular vehicles (which sit idle most of the time). And one could imagine how the right financial incentives would shift a large share of manufacturing capacity towards such cars. And considering a car replacement rate of probably something like 20 years, maybe 10 years is a possibility for starting to have that conversation.
That's a good possibility for getting self-driving cars to market sooner and getting more people in them, all right, but I'm thinking we have to get a lot of manual cars out of the market before anyone starts taking the idea of restricting them seriously.
Even if it were possible to go buy a fully automated car today, there's no way enough of the fleet would have changed over in as short a time as ten years to make such a ban practical. And it's not possible to buy an automated car today; nor are the research prototypes Google and others have been working with up to a level of sophistication where simply productizing them would yield a fully automated car which could be used as a generic replacement for current automobiles.
I could believe that the first generation of fully automated cars might be available for purchase in ten years.
So I argued we'd start the conversation in 10 years and that it'd take at least another decade. I think that is 20 years until we'd see the first ban against human drivers in metro areas.
To get there, I do think we need the first generation of fully automated cars available to the general public in 5 years or less, which I think we're actually close to, or we wouldn't be having this very discussion...
> there's no way enough of the fleet would have changed over in as short a time as ten years to make such a ban practical.
Cars sit idle 95% of the time. That means you only need to flip 5-10% of the total fleet. Tesla could do it by themselves building 2-4 million cars/year, with the number of cars needed per year declining the higher the utilization level you can achieve.
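Back-of-the-envelope, with an assumed US fleet of ~250 million vehicles and an assumed 3 million cars/year of production (both figures are my guesses, not anyone's official numbers):

    # Rough fleet-flip arithmetic (all figures assumed for illustration)
    us_fleet = 250_000_000           # assumed US light-vehicle fleet
    share_to_flip = 0.05             # 5% of the fleet, per the idle-time argument
    production_per_year = 3_000_000  # hypothetical single-maker output

    cars_needed = us_fleet * share_to_flip
    print(f"{cars_needed / 1e6:.1f}M cars, ~{cars_needed / production_per_year:.1f} years")
    # -> 12.5M cars, ~4.2 years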
I don't understand how replacing a small minority of the fleet with automated cars leads to a situation where it is reasonable to talk about banning human drivers.
You would always consider banning humans from an activity if it was overly dangerous compared to having a machine perform the operation.
40K people a year die in the US from vehicle accidents. Why would you not ban humans from driving if machines were objectively superior? Where do you find a "right" to drive? Driving is a privilege granted by the government, not a right.
The comment I replied to was suggesting that vehicle automation would progress so rapidly that it would be practical to start working on such a ban as soon as ten years from now. I'm not disputing that such a ban might eventually become a good idea, but for it to become practical so soon, we'd have to be much, much further down the road to full automation than we are now.
If fully-automated vehicles become commercially available (which seems likely at some point, though not in the immediate future), we will likely see rapid adoption in the applications where they are very well suited, followed by a long, slow, gradual trend toward broader usage. The hardest cases will be the ones sticking around longest, and one can't simply legislate them out of existence. Just replacing a small percentage of the fleet won't come anywhere close! It's only when automated cars have become so dominant that only a small percentage of the fleet still depends on manual drivers that it will become practical to start talking about a ban on manual operation.
I think 10 years is just a bit optimistic. I think it'll take about 10 years from the time we have the software/hardware at a point where it is ready, and I think that could be 5 years out. I think it'll be another 15-20 years after that before the majority of cars on the road are self-driving. Barring any crazy legislation, which I don't think would happen, in the US anyway.
Yeah, I apologize, I meant to respond to the GP and accidentally responded to you. I don't want to clutter the thread with duplicate posts though, so I'll just leave it as is.
I think you or your colleague are more likely to be correct than the 40+ year estimate. There are hugely capable companies driving this, and more understanding than ever of what will be achievable and how much money will be on the line.
Self-driving, electric cars will make future taxi-equivalents affordable enough that they're a serious option for many more trips. That makes for a massive market.
It's not as though current taxis have a sparkling reputation as drivers either. Not many will be sad to see the end of them.
I think you are seriously underestimating what the baby-boomers will do when they have to start giving up their driver's licenses because they are getting too old to drive.
>> but so much physical, legal, and regulatory infrastructure needs to change
Let's go with that. Say that tomorrow Google finished the development of the car. It works very well.
Then it just needs to find a single city that would agree to a wide-scale deployment. And I imagine there are plenty of reasons why a city might want to do that. Heck, if Google cared, they could legally bribe politicians. And it doesn't even have to be a city in the West. It could be a city ruled by a dictator somewhere in Africa. Or it could be a self-driving car created by a Chinese company, and China would be willing to take the risk to lead in this field.
So maybe the first city isn't as hard as it seems.
Now what happens after a car has proven itself in a city for a year or two? Showing almost zero accidents, and all the other benefits of a self-driving car. How much would people everywhere want it? How much political support would it gather? I imagine a lot.
Would such support and proof be enough for relatively rapid deployment of self-driving cars?
You're right that there are significant legal and regulatory hurdles to clear, but I think they're more of an issue for broad adoption. SF will have a driverless taxi service probably within the next 5 years. Once that's in place, it's only a matter of time; only the strength of the taxi lobby could slow it, and Uber has shown that lobby can be broken even in places like London.
If you're talking about _everyone_ in major economies having a driverless car, yeah, I think 40 years is realistic given the replacement rate for cars alone, nevermind the legal issues.
I know what you mean, but London is a poor example — the well known "black cab" service has been heavily regulated forever, but cheaper "minicabs" have been available to book (rather than hail on the street) since the 1960s, and subject to a little regulation¹ since 2001. Most operators were tiny companies with a few cars servicing a local area, until Addison Lee started running a London-wide service about 10-15 years ago — they had nicer cars, better availability and marketing and so on. Uber set up in London under the same regulations.
¹ Additional car inspection, criminal record check on the driver, etc.
While I agree in general - I can think of a lot of technological predictions from my childhood in the 70s and 80s (fusion power, widespread off planet colonies, general AI) that we still seem to be waiting for.
Considering the number of accidents after major storms, I think you're overstating the case. People are bad drivers in all conditions.
The problem with driving is you have to consistently do it well. You can make zero mistakes 99.99% of the time and be a terrible driver. One accident per 10,000 minutes would be ~one accident every year. Worse, a lot can happen every minute, so it's closer to not messing up for 600,000+ seconds.
The only reason people can drive is that we build in very wide tolerances. E.g., car lanes are far wider than strictly necessary (you can fit ~2 cars in the average lane), traffic lights have a significant delay where they are all red, etc.
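To put numbers on that 99.99% claim (assuming ~27 minutes of driving a day, which is my own round figure):

    # Why 99.99% mistake-free minutes still means ~1 accident a year
    minutes_between_accidents = 10_000    # i.e. 99.99% clean minutes
    minutes_per_day = 27                  # assumed average daily driving

    days = minutes_between_accidents / minutes_per_day
    print(f"one accident every ~{days:.0f} days of driving")      # ~370 days
    print(f"that's {minutes_between_accidents * 60:,} seconds without a slip")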
I think it's more accurate to say that people are unreliable drivers. Humans are certainly capable of driving very very well, but they perform unreliably.
Actually, it sounds like the self-driving car got muscled out by the bus. They claim that the car AND the driver of the car thought the bus would let them through. Now this is a problem with the car not driving defensively enough, because either the car miscalculated the bus's ability to slow or assumed the bus would be able to see the car and react accordingly (or, just possibly, the bus driver is an ass who tried to shut out the car. I know, it's practically unheard of, but it's possible). I don't think this is a deathblow to self-driving cars, nor is it a sign that they are worse at driving than people, only that they are still being perfected, and that the single greatest threat to a self-driving car is people, because they are ultimately unpredictable.
Anecdotally, this is a lesson I too had to learn, almost the hard way. The closest I've ever come to accidents in city driving is with municipal buses. City buses seem to drive as though they always have the right of way, and other cars will always get out of their way.
Coming from a smaller, more suburban area, I learned to recalibrate my driving to be more aggressive in larger cities (primarily NYC & DC), to hold my own against pushy taxis, etc. Most city drivers kind of bluff, and if you show you're not falling for it, they'll stay out of your way.
But city buses don't bluff. If they start pulling into your lane, even if clearly cutting you off, you better slam on those brakes, because that bus is not stopping. It's one of those unwritten rules you learn, at least in my experience driving in major East Coast cities.
I can see how it'd be hard to train an AI that you have to adjust the driving style depending on your locale and for specific types of non-emergency vehicles.
I'm not making an argument here, really just sharing a chuckle about the crash involving a city bus.
> But city buses don't bluff. If they start pulling into your lane, even if clearly cutting you off, you better slam on those brakes, because that bus is not stopping.
Bus drivers would learn just like you did if there were a few thousand unmanned cars whose programming assumed every other driver would follow the law :)
The city buses effectively do have the right of way, at least in the sense that, if you have an accident with a city bus, you are going to be considered at fault, rules notwithstanding.
I'm not convinced that the distinction between "situations requiring thinking" and "boring situations" is that strong. You seem to be implying that self-driving cars will probably be worse than humans at things like negotiating through city traffic (where this accident apparently occurred), but is there any evidence of that?
It's true that most deaths and serious accidents occur on highways, where human factors like boredom and sleepiness are probably significant. But humans also have a heck of a lot of fender benders like this incident, and a lot of the ones I have seen weren't exactly situations where "a lot of thinking" was necessary. People are texting and rear-end the person in front of them, or underestimate how much room they have to switch lanes between two cars, or other things like that. I don't see why these situations are uniquely difficult for AI to solve. In fact, they seem like very apt problems for computers to solve.
On the other hand, Google has driven a lot of miles without an incident, and this minor accident doesn't change the fact that their system is safer than a human behind the wheel. And unlike humans, who don't improve after an accident, Google's whole fleet just got even safer.
They have driven millions of miles - in perfect driving conditions: great weather, lots of highways, all of it by day (how do I know that? Simple: Google would brag about their AIs being able to drive in rough situations if they were able to do that. But they don't. They just brag about a huge number of miles in undefined conditions...). That is exactly what the original poster criticized. When driving is so easy that it bores human drivers to death, today's AIs can be better.
The problem is: real-life driving does not only consist of situations that even a child could easily manage. Humans are extremely good at letting their driving skills degrade gracefully when conditions get rough. AIs? They drive perfectly until they reach their limit, then they suddenly pull off epic fails - if no human driver is on standby to resolve the situation immediately.
I guess I was dreaming last Friday when I saw the Google self-driving car pass by twice on the same route, at 10:30pm. It even handled a stop with very limited visibility on one side like a champ. If anything, I think self-driving cars will end up driving like my Grandma, but probably safer.
Your grandma, with the ability to receive telemetry from every other car around her, knowing her position to within a few mm, having a reaction time governed by light speed delays, and knowing as much about the Physics of driving a car as an army of PhDs has been able to teach her.
And programmed to turn off the blinker light after merging onto the freeway...
> all of it by day (how do I know that? Simple: Google would brag about their AIs being able to drive in rough situations if they were able to do that. But they don't. They just brag about a huge number of miles in undefined conditions...)
They totally drive by night too. How do I know that? Simple: You see them with your own eyes, in Mountain View. Even if Google doesn't brag about it.
Google mentions focusing on city driving instead of highway driving in their annual disengagement report [1]:
> Mastering autonomous driving on city streets -- rather than freeways, interstates or highways -- requires us to navigate complex road environments [...]. This differs from the driving undertaken by an average American driver who will spend a larger proportion of their driving miles on less complex roads such as freeways. Not surprisingly, 89 percent of our reportable disengagements have occurred in this complex street environment (see Table 6 below).
This is false. Google's cars have only gone as far as they have without an incident BECAUSE of human drivers. In a 12 month period, human drivers prevented Google Self-Driving Cars from causing ten accidents. And 272 times the car's software outright failed and dropped control of the car to the human driver. This is all in Google's recent report to the California DMV, but it's not a reality they like to advertise openly.
Statistically, Google's Self-Driving Car would've lost its license by now, if not for human drivers keeping it in check.
Spread that over "the equivalent of 75 years of typical U.S. adult driving" (1 million+ miles) and the alpha version of this software seems almost on par with the average driver, who is expected to file an insurance claim every 17.9 years.
PS: How often do you think the average driver's ed teacher uses their brake?
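For comparison, a quick back-of-the-envelope over that same driver-equivalent span (using only the figures just quoted):

    # Human baseline over the same span
    driver_years = 75          # "equivalent of 75 years of typical US adult driving"
    years_per_claim = 17.9     # average driver files a claim every 17.9 years

    print(f"~{driver_years / years_per_claim:.1f} expected human claims")  # ~4.2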
I'd like to see some analysis of the predicted severity of the accidents for self driving cars.
A minor fender bender at low speed vs. losing control and going over a guard rail at 80 mph are both "accidents" - but have entirely different consequences.
I have a hunch that while human drivers are still better, when they do have accidents some percentage of those tend to be much more fatal.
Your statistic is incorrect. The incidents described took place over a much smaller number of miles - only, I believe, 14 months of the Self-Driving Car program, not the sum total of its driving to date. (I said 12 above, but I think it's actually 14 months.)
Nor are you counting all of the times the driving software handed control back to the driver. Consider each one of those equivalent to your human driver falling asleep at the wheel.
"Google had operated its self-driving cars in autonomous mode for
more than 1.3 million miles. Of those miles, 424,331 occurred on public roads in California during the
period covered by this report" So, you where talking about a specific report not the overall system, got it.
"Our objective is not to minimize disengagements; rather, it is to gather, while operating safely, as much data as possible to enable us to improve our self-driving system. Therefore, we set disengagement thresholds conservatively, and each is carefully recorded."
That's a long way from falling asleep at the wheel. That's "we don't want to put people at risk, so we are going to hand off control" for cases like "disengage for a recklessly behaving agent," etc.
As you correctly point out disengagements may not be the equivalent of falling asleep: they might simply be the equivalent of the driving instructor adjudging the learner at the wheel to be looking a bit too nervous, or just not ready for the junction that's coming up yet. And yes, its assumptions on the type of "software discrepancy" and "unwanted maneuver" that prompt human input are conservative by design.
But the report actually goes so far as to point out that on at least 13 of those occasions, Google believes a crash would have occurred without human intervention. It also implies that only 3 of those situations were created by poor driving on the part of another human. That's a definite crash due to Google-car error averted every ~40k miles, even though humans already take over whenever there's a software warning, a spot of rain or anything else they think might stretch its capabilities too much.
The average US driver has 1 accident per 165,000 miles (which, like the unusual tendency for the Google car to get bumped when stopping at junctions, may or may not be their fault), and that's including DUI as well as people driving in slightly more difficult conditions than sunny suburban California.
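Putting the two rates side by side (using the report's 424,331 public-road miles and the 13 averted crashes, minus the 3 attributed to other drivers):

    # Crash-averted rate vs. the human baseline (figures quoted above)
    av_miles = 424_331
    averted = 13 - 3          # subtract the 3 blamed on other humans

    print(f"one averted Google-car crash per ~{av_miles / averted:,.0f} miles")
    print("one human accident per ~165,000 miles")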
> The average US driver has 1 accident per 165,000 miles (which, like the unusual tendency for the Google car to get bumped when stopping at junctions, may or may not be their fault), and that's including DUI as well as people driving in slightly more difficult conditions than sunny suburban California.
The average driver doesn't do all city miles and that's why Google's cars have gotten bumped. Highway driving is much easier to automate (several cars already do this today) and makes up a large percentage of total miles driven.
Disengagements while on public roads is the only representative metric. The real world is more challenging than whatever artificial testing environment Google has on their private roads.
That is a useful data point. Extremely optimistic bystanders think self driving cars will lower death rates to zero.
Statistically, about 7 people die on public roads per billion passenger miles. At half a million miles, assuming the miles are statistically random and indistinguishable from the entire country (sure, summer in SoCal is as hard driving as winter in a blizzard at night in Montana, sure...), then 424,331 death-free miles means that technology's death record is no worse than 336 times the human rate, at least so far. Perhaps it's only 335 times more deadly than human drivers. Even drunk people aren't that dangerous.
Or in other words, the self-driving car debate amounts to lots of predictions based on very little data.
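For anyone checking that arithmetic, it's a crude bound: zero observed deaths can't rule out a rate of roughly one death in the miles driven so far.

    # Where the "336 times worse" bound comes from
    human_rate = 7 / 1e9       # deaths per passenger mile, quoted above
    av_miles = 424_331         # death-free autonomous miles

    cannot_rule_out = (1 / av_miles) / human_rate
    print(f"data can't rule out ~{int(cannot_rule_out)}x the human death rate")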
It's hard to say with such a short description, but as I read it, the Google car never left its lane, just moved right to the edge of its lane to avoid some debris.
It sounded like it was merging into the bus's lane. But I imagine Google will prep some very fancy videos of the incident from the car's perspective within the next week or so, and then we'll know for sure.
You got that backwards. They moved right in their lane to slow down for a turn (while allowing traffic going straight to pass), and then had to move back into the stream of traffic as there was debris preventing them from turning on the far-right side of the lane.
From the writing in the article, I found it extremely difficult to get a good visual of exactly what happened in the accident. I wish the description was a little more, uh, descriptive.
Bad drivers take on a whole new meaning outside of the wealthy countries where dangerous drivers routinely end up disqualified from driving. Just by providing a predictable vehicle everyone can trust to stop and stay stopped at red lights, stop at stop signs, etc., AI will bring a lot to the world.
Seventy-four percent of road traffic deaths occur in middle-income countries, which account for 70% of the world's population but only 53% of the world's registered vehicles. In low-income countries it is even worse: only one percent of the world's registered cars produce 16% of the world's road traffic deaths.
People are bad drivers. Every winter at the first snowfall there are tonnes of accidents because folks forgot to adjust following distances.
The promise of an AI driver is that there will never be a second accident of the same type. The only question is how fast the learning rate can ramp up.
The problem you are stating is called the 'handoff' problem which I believe will be a serious challenge for manufacturers of driverless cars. The handoff problem is when an automated system identifies a scenario in which it needs to hand over control to a human. How will that work in a vehicle that is not designed to be occupied by someone that knows how to control it? This problem has been written about in the book Our Robots, Ourselves [0] and there is an Econ Talk podcast about it as well [1].
To me, it makes sense to make use of the brain power provided by the occupant of the vehicle. For example, if one of the main vision sensors in the car breaks, why not let the human occupant drive the car manually?
From a safety engineering standpoint, handoff is a very, very bad idea. When you automate the common operation of a system, how is the operator supposed to get a feeling for the system? And when the operator doesn't handle the system regularly, how is he supposed to handle a situation that's so dangerous and unusual that the automation can't handle it?
If you have years of experience driving a car, maybe it'll work. For some time. But what if your last manual intervention was five years ago? Or twenty years ago? The rarer handoff happens, the more stress it puts on the manual operator, and the more likely things go wrong.
It's much better to have a safe default reaction (stop the car, shut down the reactor, ...) that kicks in when normal, automatic operation can't continue.
You probably misunderstood what I meant by 'handoff'. It refers to the range of programmed interactions between an automated system and a human, for example, a warning light on the dash is a handoff. A warning sound or an automatic request for user action is a handoff. Pulling over and shutting down is a handoff. You say it's a bad idea. It would be a very very bad idea to build systems that do not have these responses programmed into them. It would be like throwing an exception without any exception handlers.
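As a toy sketch of what I mean - a hypothetical escalation ladder, not any manufacturer's actual design:

    from enum import Enum, auto

    class Handoff(Enum):
        # Hypothetical escalation ladder, mildest to most drastic.
        DASH_WARNING = auto()       # warning light: driver awareness only
        AUDIO_ALERT = auto()        # chime: attention requested
        TAKEOVER_REQUEST = auto()   # explicit "please take the wheel"
        SAFE_STOP = auto()          # pull over and shut down

    def respond(severity: float, driver_ready: bool) -> Handoff:
        """Pick a handoff response; falls back to a safe stop when the
        human 'exception handler' is unavailable."""
        if severity < 0.3:
            return Handoff.DASH_WARNING
        if severity < 0.6:
            return Handoff.AUDIO_ALERT
        if driver_ready:
            return Handoff.TAKEOVER_REQUEST
        return Handoff.SAFE_STOP

    print(respond(0.8, driver_ready=False))   # Handoff.SAFE_STOP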
If you haven't already, I highly recommend listening to the Econ Talk podcast for much more depth from someone who has been thinking about these kind of problems for years (I can't really do it justice in a HN comment).
Thank you! I've been saying this for a while: the current approach of "driver assistance" or "autopilot" is terrible. The more reliable the autopilot, the less attention the human "driver" is going to be paying when the autopilot actually does say "okay turkey, you fly it" and shuts off. And that is far more likely to happen in an unusual (read: hazardous) situation. You're self-selecting for a human panic reaction followed by a crash.
As far as I understand, the current behaviour of self-driving cars is to stop the car safely in a safe place, to the best of their degraded ability. Once it's stopped, the human can take over; if they can't, you're stuck on the side of the road and you call a tow truck.
Which isn't really any different than any other sort of major mechanical failure.
That's all well and good if you make it to the side of the road, but what happens when the car decides it can't make it that far and stops in the middle lane of an Interstate?
I'd expect major mechanical problems with robots, like "the vision sensor died in the middle of an interstate," to be about as rare as major mechanical problems with humans, like "they had a heart attack in the middle of an interstate." If not rarer.
We're already okay with the possibility that a human might drop dead in the middle of an interstate and lose control and more humans might be injured or killed as a result. As long as it is not more likely that a self-driving car will drop dead, I don't see why it's any different.
(And this isn't even getting into things like the possibility of catastrophic failure being lower and the options for recovery being much higher. You can run two robots with redundant hardware; you can't run two humans quite as easily. A human who loses vision will lose their concentration; a robot that loses vision can use its last known positions of nearby cars to attempt to pull over immediately in an orderly fashion.)
What happens if your car breaks down such that it can't make it off the interstate?
It's the same principle. You try to do the safest thing you can, but yes, at some point a car will fail and be stuck in the middle lane of an interstate because the autopilot fails, just as can happen because of mechanical failure.
The same thing that happens when any other car breaks down in the middle of the road--a traffic jam. Cars (especially ICE ones) break down all the time, we know how to deal with it.
Hold up. The fact that Google's problem is a big one to solve doesn't refute the point that human drivers are bad.
Yes, driving is actually a challenging task. But guess what, humans are BAD AT IT. And are computers going to be better? I'd be pretty damn surprised if they weren't.
Most crashes come down to distraction. Computers never get distracted. So if we can work out errors like the one in this story, then at least we can say the computer isn't going to get lazy or distracted and suddenly forget its lessons learned.
Yes, it's a challenging road ahead for these self-driving cars. But I'd feel so much safer in a world of autonomous vehicles than I do with the morons behind the wheel today.
Not all humans are bad drivers, but some of them definitely are. I have been living in the US for 4 years, and here is a list of things that are extremely dangerous:
- not using signals at all, just randomly changing lanes
- not using signals at all and on the top of that do not care about traffic in the other lanes
- driving 45 mph in the left lane, because California teaches new drivers that changing lanes is dangerous, aka "stick to your lane" (I learned that recently and could not believe it)
- driving with the exact same attitude in rain like under dry weather conditions
- not caring about people merging lanes: not looking at them, pretending they don't exist
- being on Facebook on the highway while driving at 65 mph
- do not care about bicycles, pretending they are not legitimate traffic
I am not sure about the rest of the US, but several friends told me it is specific to California. I would argue that humans are not such good drivers, or at least a large share of drivers aren't. Replacing drivers with software is always going to win, just like it did with airplanes or any other repetitive task that we automated. I am pretty sure there is going to be a period while the AI is not as good as humans, but this is why Google is collecting data at large scale: to improve it until it beats humans and makes driving a safer way to commute.
>> And unfortunately there's no easy way to use the best of both sides.
It doesn't seem too hard to go in that direction. For example, you could use satellites or drones or other sensing mechanisms to survey a city in real time and tell the driver X minutes in advance that he will need to take control of the wheel - and of course you could adapt this recommendation according to the skill of the person, his current state (being drunk), or other data.
Not sure that'd work: "Sorry I mowed your child down, but it wasn't my fault, the computer didn't tell me I needed to pay attention". People are in general extremely quick to deflect blame from themselves (especially under stress) no matter _how_ stupid or implausible their on-the-spot excuse is.
(Source: I'm a motorcycle rider, we have a term here "SMIDSY", which stands for "Sorry mate, I didn't see you" - because it's such a common thing for drivers to say when they've just driven into you. There's a great quote from a judge a few years back in a court case over a driver excusing themselves for running into a bike "But the plaintiff was _clearly_ there to be seen.")
If the vehicle told you to be focused ahead of time (say, a few minutes before, with enough margin) and recorded that, then legally, at that point in time, you're driving a regular vehicle.
Personally, I feel we are approaching the self-driving situation backwards. We are so heavily invested in current technology that we are trying to shoehorn autonomy onto existing tech and infrastructure that were never designed for it in the first place.
Imagine how much easier it would be, for example, if roads had embedded markers (like magnets or RFID chips) that designated lanes, speed limits, and other road rules and weather conditions. AI vehicles would simply need to read this information and adjust accordingly.
But, unfortunately, this would be almost unfeasible in today's world because a) it depends on every car following the same rules, and that's impossible if you have human drivers, and b) it would require massive investment and national-level cooperation to change the infrastructure.
> Imagine how much easier it would be, for example, if roads had embedded markers (like magnets or RFID chips) that designated lanes, speed limits, and other road rules and weather conditions. AI vehicles would simply need to read this information and adjust accordingly.
Speed limits can be defined as metadata on existing mapping data. Existing vehicles, such as Teslas with their autopilot system and GPS receivers, can "define" existing lanes by refining existing mapping data, using their drivers as expert-system trainers [1].
You don't need national-level cooperation to change infrastructure; you just need a massively automated system for collecting data and training based on it. Google and Tesla are doing both, just in different ways.
Let's say for instance we wanted an RFID indicator for each lane every 200 feet.
You're talking about 13 million RFID chips that would need to be installed on 164,000 miles of roadway - just to get the interstate highways covered.
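(Quick sanity check of that figure, assuming roughly three marked lanes on an average stretch; the lane count is my guess:)

    # Chip-count sanity check (lane count assumed)
    interstate_miles = 164_000
    spacing_ft = 200
    lanes = 3                  # assumed average lanes to mark

    chips = interstate_miles * 5280 / spacing_ft * lanes
    print(f"~{chips / 1e6:.0f} million chips")   # ~13 million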
Now think about the fact that these chips are going to need to sit static and not be replaced for more than 15 years at a time. So you have to come up with a way to house them such that they will not be affected by > 150 degree heat or by < -50 degree (F) cold.
And then think about the fact that it takes ~6 months for the government wheels to spin in the right direction for a pothole to be filled in.
The project to put RFID, or any new technology on the roadways in the US will have to involve federal cooperation as well as the governments of 50 states and countless local governments. It will be 100 times more expensive than any other venture using the same technology.
Look at the Big Dig project in Boston. It took 25 years, $24 billion and several acts of congress to get it to a state where they could start working out the kinks like "why do the guardrails keep killing people".
The scale of solving this problem "the right way" is incredible. I don't disagree with you entirely. But I think any change like that is a solid 100 years out into the future.
Of course, you and I are pretty much on the same page. The complexity and cost would be staggering to redesign the whole thing. But maybe there are smarter, incremental ways to go about it. Perhaps mandate a special paint for lane markers to include magnetic paint or some kind of nanotech. We already do repairs on roads, repaint, replace "cat's eyes markers" etc, so modifying the materials used should be easier.
I fully agree it's going to take a long while for everyone to adopt this, and one of the keys will be proving the AI tech - which then becomes a chicken-and-egg problem. Maybe Google/Tesla are laying the groundwork for that, and on second thought, maybe I'm wrong about us thinking about this backwards. Perhaps this "over-engineered" solution (for lack of a better word) is exactly what we need, to then later optimize for "the right way".
I think about the half-measures like new paint for lane markers and I'm equally skeptical, but that's probably just me being pessimistic.
All we really need is an idea that we can't fathom today. Google has a library of images of just about every street in the states. If they could run that same project and in the process lay out markers of some sort that would be helpful for the next generation of autonomous vehicles, then from that point forward incremental change is simply a matter of re-tooling an autonomous vehicle. Cooperation from local governments could be pushed through in a similar way that the 55 MPH speed limit happened.
Of all the "billion dollar ideas" floating around silicon valley, the person/investment group who's willing to take on an idea like this to a nationwide scale is going to be the winner.
The first round doesn't even need to be high tech. They could have a paint ball gun hanging out the side of a car spraying out a biodegradable path that a series of other cars can follow. Those cars could then train other cars.
...
Yeah, I'm flip flopping. I'm actually an idealist deep down inside. Sometimes I forget that for a little while.
We need visual processing and heuristic logic to handle a kid on the side of the road chasing a ball: judge whether his path is likely to intersect traffic, and catch the situation before he enters it. Given that mandatory ability, reading a speed limit sign is pretty trivial.
Given that it's very unusual for an oncoming car to swerve into your lane, and there's a giant pothole in the oncoming lane, and the car sees three cars in a row swerve into your lane, what are the odds of the next car hitting you head-on? Compared to this puzzler, interpreting a stop sign is pretty easy.
Also, human drivers presumably will not understand the weird behavior of an enhanced self-driving car unless the human infrastructure matches the automatic infrastructure. Of course you could analyze the human data visually to predict the human behavior, but then why bother with the automatic hardware sensors?
I guess a bad automobile analogy is: if you must design a pickup truck to safely haul an 8,000 pound trailer, then a 300 pound trailer will not overtax it.
I feel the same, but refactoring the whole driving infrastructure will only happen if people see SDVs as that valuable, and not overnight. If SDVs reach critical mass, what you describe will happen.
If all cars were self-driven and there was a system that allowed them to exchange information, negotiate decisions, etc with all cars in proximity, that would be the ultimate vision safety-wise. I expect that we'll live to see that day. Removing the human factor would be removing the biggest problem.
On the other hand, making this mandatory would also be the wet-dream of most governments. In the name of safety, they could have access to so much information... Just knowing how many passengers the car is carrying, what's the weight of each passenger, and, of course, a history of the coordinates, would open insane possibilities, not all of them good.
> But when boring suddenly turns to not boring? The current state of AI is very very bad at this.
"Boring" versus "not boring" is too vague for this statement to really be meaningful. Examples could be drummed up on both sides. For example computers beat humans hands down in at least some criteria fitting the "not boring" description, like reaction time and seeing in multiple directions at once, and probably some others.
That humans are bad drivers is something most of us experience everyday. Of course, everyone thinks THEY are the exception.
Self driving cars being better than the average driver is absolutely to be expected since they can only get better. They'll be consistent. They will always be focused, not get distracted, not get tired, have more sensors than a human could. It's not just about the environment but also about the mental state of the driver.
Sure it's difficult but your using one case to flip things around doesn't seem to be a very strong argument.
> Self driving cars will get there someday, but I think it's much farther away than many assume it will be.
There's no evidence to support your assertion given the tests that have been going on so far. There's no evidence to support that switching back and forth would be better than having the AI all the time either.
Let's suppose you are given a 2d field in which there are a number of rectangles, each of which have a velocity vector. So you know the location of each rectangle now, and, to a good approximation, in the near future.
How difficult is it to program a path-finding algorithm that will not crash into any of these rectangles, given that lowering speed is always a good option within a city?
I'd say this is really not all that difficult. Yes, there are a lot of special cases to consider, but still, in essence, this seems to me a quite simple problem.
So breaking down the problem is key. If you look at an autonomous vehicle as a black box, then, of course, it will seem dauntingly complicated, and difficult to trust.
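Here's a minimal sketch of that speed check, assuming axis-aligned rectangles and constant velocities over a short horizon (all coordinates and velocities invented for illustration):

    # Minimal sketch: pick the highest safe speed given moving rectangles.
    # Assumes axis-aligned boxes and constant velocities over the horizon.

    def overlaps(a, b):
        """Axis-aligned overlap test; each rect is (x, y, width, height)."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def speed_is_safe(speed, car, obstacles, horizon=5.0, dt=0.1):
        """Simulate forward; fail if our car (driving straight up the y
        axis) ever overlaps a predicted obstacle position."""
        steps = int(horizon / dt) + 1
        for i in range(steps):
            t = i * dt
            us = (car[0], car[1] + speed * t, car[2], car[3])
            for rect, (vx, vy) in obstacles:
                moved = (rect[0] + vx * t, rect[1] + vy * t, rect[2], rect[3])
                if overlaps(us, moved):
                    return False
        return True

    car = (0.0, 0.0, 2.0, 4.5)                          # our vehicle
    pedestrian = ((2.5, 18.0, 1.0, 1.0), (-1.0, 0.0))   # crossing our path

    # Lowering speed is always an option in the city: try fast, then slower.
    for v in (15.0, 10.0, 5.0, 2.0, 0.0):
        if speed_is_safe(v, car, [pedestrian]):
            print(f"choose {v} m/s")                    # -> choose 2.0 m/s
            break

A real planner would search over steering as well as speed, but the core collision test really is that simple.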
IIRC, the autonomous cars all use a LIDAR system (like radar, but with lasers) to detect objects, and do not fully rely on computer vision.
Also, here's ([1]) an article about a student who (single-handedly!) wrote the software for a self-driving car, which doesn't use a LIDAR system and is much cheaper.
There is a way to get the best of both worlds: variably-autonomous swarming vehicles. The larger the swarm, the more autonomy the car has. The smaller the swarm, the less autonomy.
Large swarms on the highway, small swarms (or solitary driving) elsewhere. Best of both!
Only this one human-driven car in the middle of the swarm is what stops it from reaching a new level of autonomy. Oops, such an unfortunate mishap! That human-driven car has just hit the car in front of it and gone off the road. Finally the swarm is free... err... can move onto the next level.
> And unfortunately there's no easy way to use the best of both sides.
How about using the "self-driving AI" only as "really advanced automatic assistance"? Humans will still drive the cars, but the AI will kick in in those boring situations where the humans turn off their thinking.
Interesting point. It would be interesting if this led to remote drivers who are responsible for navigating the self-driving car when the car decides human intervention is safest. (At least until all cars on the road are self-driving and connected to each other for real-time communication...)
"A Google Lexus-model autonomous vehicle ("Google AV") was traveling in autonomous mode eastbound on El Camino Real in Mountain View in the far right-hand lane approaching the Castro St. intersection. As the Google AV approached the intersection, it signaled its intent to make a right turn on red onto Castro St. The Google AV then moved to the right-hand sid of the lane to pass traffic in the same lane that was stopped at the intersection and proceeding straight. However, the Google AV had to come to a stop to go around sandbags positioned around a storm drain that were blocking its path. When the light turned green, traffic in the lane continued past the Google AV. After a few cars had passed, the Google AV began to proceed back into the center of the lane to pass the sand bags. A public transit bus was approaching from behind. The Google AV test driver saw the bus approaching in the left side mirror but believed the bus would stop or slow to allow the Google AV to continue. Approximately three seconds later, as the Google AV was reentering the center of the lane, it made contact with the side of the bus. The Google AV was operating in autonomous mode and travelling less than 2 mph, and the bus was travelling at about 15 mph at the time of contact.
The Google AV sustained body damage to the left front fender, the left front wheel and one of its driver's-side sensors. There were no injuries reported at the scene."
Interesting. That is a fairly complex scenario (sandbags partially blocking a lane), and given that the test driver also didn't expect the bus to move ahead, it seems like this is a mistake that many humans would also make.
Interesting to note that the collision was very minor (car moving at 2 MPH). When looking at accident rates we should also consider the severity.
I'm pretty sure it's the bus. Generally, an already-moving vehicle with a green light on a straight road has right of way. The correct action for the autonomous vehicle (or a human driver) would be to wait for all traffic in the lane to clear (for example, after the previous light turns red and there is a lull in the traffic), then make its way around the sandbags.
Here's my artist's rendering. The vertical lane is just the right-hand lane, and is "wide", but still one lane. The bus is traveling "up" and the Google AV tries to maneuver around the sandbags blocking its way. Thinking that it can get around in time, the AV starts moving to the left and crashes into the bus.
I guess the "one lane" aspect is the confounding variable, but since the autonomous vehicle was stopped, then it takes on the aspect of any other stopped vehicle on the side of the road (ie giving up it's right of way). Just like if you have a road where cars are allowed to park on the side of the road, those cars cannot enter traffic unless it is safe to do so.
Edit: Actually, I have no idea. Did a quick read of California's right of way laws, and couldn't find anything that jumped out that would cover this situation.
I'm actually really curious about this now. It's a "wide" lane, with room for 2 sets of cars. The AV itself went into the right-hand part of the lane while other cars were still in the lane. Presumably the lane is wide for making right turns, otherwise basically every car there is making an illegal right turn (and law enforcement probably would have already noticed that by now).
So either the AV broke the law first (by doubling up in a lane), or the bus had the right of way. I'm with the other commenter, why isn't it just an actual turn lane?
Also, if the bus is at fault here, then this would be an amazing place to commit insurance fraud. Just sit in the right hand part of the lane, wait for traffic to pass by, then drive back into the left hand portion of the lane right as a BMW/Mercedes/Ferrari/etc drives by.
I wonder about this too. If you are new to CA, at first it seems crazy how aggressive people are about splitting the right hand lane to turn right... who would have the right of way if the bus was already stopped at a light, but the AV split the lane to turn right and passed the stopped bus? I like the idea of splitting the right lane to make right turns, but IMO, if you've done so, you no longer have the right of way for traffic in that lane.
It sounds like this happened too close to the intersection for parking to be legal. If I see a car stopped on the right side of the road at an intersection signalling a right turn, I don't assume it's parking; I assume that it's stopped for a pedestrian in the crosswalk.
I agree with your analysis. If the AV was stopped completely, then it should wait until there is an opening before resuming motion, just as a parked vehicle would do.
But, if the AV was not stopped, I think the bus is now at fault for attempting to pass a moving vehicle within its own lane, which is illegal (I hope).
I don't think there's any rule of the road saying you lose or gain right-of-way depending on whether your car stops. If you park the car, yes, or if you are stopped long enough that a reasonable person would think you were parked. But just coming to a stop does nothing.
In this case, I think it all boils down to vague ideas about right-of-ways when drivers casually divide lanes.
Looking at a map of the area, it's an extra-wide lane, as described in the report. Why would CA build it that way, instead of a full turn lane? I assume since it's technically a single lane, the fault actually lies with the bus, since it was attempting to pass within the lane?
Thanks for that. It's so obvious from that view, but harder to understand from the bird's eye view.
So from the bus driver's perspective, the AV was a parked car on the side of the road. From the AV's perspective, it likely should have treated the situation as if it was going from a parked position to merging into traffic.
That storm drain is way too close to the intersection to be a legal parking area according to the rules of the cities I'm familiar with. It's close enough that I would typically assume a stopped car signalling a right turn is waiting for pedestrians to clear out of the crosswalks. I wouldn't expect a car in the AV's place to swerve left, but I would definitely be looking at it as active traffic, not a parked car.
It sounds like the car attempted a lane change, but cancelled when it detected the bus. The bus didn't want to wait for it to return to its lane, so it just scraped the side of the car. If the car was further in the lane, it would have been rear-ended, but the report just says side scraped.
Note that neither the AV nor the bus ever left the lane. This all happened in a single 'wide' lane, wide enough for both vehicles to be in it at the same time.
splitting the lane for a right turn is common in CA (assume legal too). but you'd best be sure you can make it without having to cut someone off. if a human did this the result might be the same. tough problem. there's a chance that if the google car went faster than 2mph it might have gotten some respect (or around the sandbags before getting hit, if there was room in front)
Ok, I laughed out loud on that one. This quote in particular, "The vehicle and the test driver 'believed the bus would slow or allow the Google (autonomous vehicle) to continue.'"
Bus drivers in the Bay Area are notorious for ignoring traffic (and pedestrians). Apparently there is some indemnity or statutes that make suing either the transportation agency or the driver nearly impossible and so pretty quickly people learn that the bus drivers drive with impunity. Plenty of stories posted to the local traffic column in the paper, and shared amongst neighbors and in the department of safety's "blotter" feature.
Google needs to go back and program their cars to always assume that buses are out to get them and avoid them at all costs. They are an active traffic hazard, often operated by a disinterested and distracted driver. The only way to "win" is to not be wherever the bus is.
Up next: Google self-driving buses. Finally, American buses will run with the same attention to timetables that Swiss ones do. And an added benefit: they'll obey traffic safety laws.
I have actually visited Mountain View, from Switzerland, and caught a bus on El Camino. Probably not far from where this incident happened. Yeah. Timetables are not a thing that happens there.
...I remember once in Zurich when my local bus was two minutes late. The people at the bus stop were really quite cross.
How is it possible to run buses with perfectly accurate timetables? I imagine traffic conditions are somewhat unpredictable in Switzerland, just like anywhere else.
There's enough slack in the timetable to account for delays. If the bus would normally take one minute to get from one stop to the next, the timetable will actually account 90 seconds, etc.
Then there are certain stops where the bus will, if it gets there early, stop and wait. They don't do this on every stop, but they're frequent enough to keep the service regulated. Intelligently, these stops are also the ones which synchronise with other transport systems. For example, when I take the train home from work I know that there'll be a bus waiting for me when I get off at my station.
It's not perfect, but the overwhelming majority of the time it Just Works.
In heavy snow, busses (and especially trams) tend to not run on schedule anymore in Zurich. During reasonable conditions, they normally do. One thing they have is buffer time at the end of the lines, so that if it's late on one run, it won't be late for all runs. Some longer lines don't run quite on schedule, especially during rush hour, but their high frequency usually makes that a non-issue.
I’ll report what I observed over 2 years of using the bus 3 to 4 times a day in Germany.
Every time a bus was late, or tried to break the sound barrier™ in the hope of making up some of the delay, it was due to one of the following issues:
(a) A tourist with a Texan accent trying to get onto the bus, discussing with the bus driver whether he can pay with credit card or in dollars (no), then asking the bus driver to wait while he goes to get money from the nearest ATM (happens at about 5% of stops in the downtown areas where the tourists are)
(b) Rush hour traffic, 200 people squeezing into a single bus, and another few hundred waiting at the bus stop – busses coming every one or two minutes, and it takes quite some time until people stop trying to get into the bus, and leave enough space for the doors to close
(c) some kids with invalid tickets trying to cheat and getting caught
These issues can’t be fixed by automated busses, or trains.
Only by less tight schedules, and more busses and trains.
(a) is fixed because there's probably no-one for the Texan to talk to on the bus. For trains, the person to talk to is on the platform, and can simply let the train depart (example: Copenhagen Metro, although good luck spotting any staff).
It could be more easily fixed by accepting foreign credit cards (Gothenburg manages this, London if the card is contactless) or telling drivers not to wait.
(b) is partly solved if the cost of running buses/trains is significantly reduced, as that leaves money for extra vehicles.
Already many cities today operate busses by taking unemployed people and forcing them to take the job as bus driver for no pay (they'd have to continue living off of welfare), otherwise they'd lose 50% of their welfare money.
That's how you get bus drivers for free — automated vehicles can't really beat that.
I think that this amounts to a small pebble in the mountain of technologies that have to be developed to enable AVs.
Here's an easy solution: every bus stop has a kiosk that dispenses a RFID bus pass if you put in your payment info. Then, when the bus comes, you can enter from any door and the RFID receivers detect your presence. Attempts to board without a bus pass are detected with weight sensors and a camera network (current computer vision techniques are probably reliable enough for this application; they'll be even better in a few years). Unauthorized persons are advised to get off at the next stop by audio recording and identified by displaying a snapshot of their face (taken earlier on the bus). If they don't get off the transit police are summoned.
Fix with POP (proof-of-payment). That is, you can enter the vehicle through any door, but you must have a valid ticket. Occasional inspectors check tickets and issue fines such that the expected cost of fare-beating is on the same order as, or higher than, just paying your fare.
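The arithmetic behind that (all numbers invented for illustration):

    # Expected-cost check for proof-of-payment
    fare = 3.00               # assumed single-ride fare
    inspection_prob = 0.05    # assumed chance any given ride is inspected
    fine = 80.00              # assumed fine for riding without a ticket

    expected_cost_of_cheating = inspection_prob * fine
    print(f"cheating costs ~${expected_cost_of_cheating:.2f}/ride vs ${fare:.2f} fare")
    # 0.05 * 80 = $4.00 > $3.00, so fare-beating doesn't pay on average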
Put a fare vending machine that accepts cash and credit cards on the bus, which also makes it very obvious how to get a single fare. (This exists, for example, for trams in Berlin, modulo the accepting-credit-cards part.)
We’re currently in the process of switching to RFID-cards as tickets, and allowing people with RFID credit cards to also pay via those.
But yes, all these issues are solvable - but "how close does the bus operate to the schedule" usually is not related to who drives it, but to how these issues are solved.
In fairness, I think the best way to fix (a), (b) and (c) would actually be by automated buses or trains, that would enforce reasonable limitations strictly, without the possibility of negotiation.
i.e. the passenger would be prevented from boarding the vehicle without a valid ticket and provided there was sufficient capacity for a speedy and comfortable boarding.
I wager that if this happened, it would both increase the use of public transportation and increase the public opinion of AVs (assuming it is done once the technology is acceptable). Probably not as profitable for the companies producing the consumer vehicles though.
Maaybe for the "easy" stuff like public transport. But for many of the specialised jobs that also boil down to a human operating a vehicle, e.g. excavators, log cutting machines, snowplows etc. it takes a human who already knows how to drive a car many months, even years to learn how to do it. Since the market for these machines is many orders of magnitude smaller than for cars, and each machine type is so specialised that there is limited overlap, so the cost-benefit ratio is much worse than for cars. Thus I think it will take a lot longer for these jobs to be automated away.
If replacing a machine costs $5 million, the automatic model costs $1 million on top of that, and you pay an operator $60k a year, the value proposition isn't very clear.
For classes of machines where only a few thousand, or even a few tens or hundreds of thousands of units exist in total, the cost of developing the sophisticated AI required is not going to be that low for a lot longer than 5-10 years from now. Especially considering the market is split between many players, and none are going to share their secret-sauce AI.
"always assume that buses are out to get them and avoid them at all cost"
This is a case where perhaps the computers have too much imagination. We actually tell human drivers to drive that way, but as humans we all know that we don't really mean that we need to worry about someone driving in front of us suddenly slamming their brakes, drifting 180 degrees, and driving at us full speed. Tell a computer to assume too much malice and the car will refuse to even move, because it's pretty easy to specify the search algorithm that will find that outcome.
We have to specify the exact level of malice the computer can reasonably expect, which is way harder. And it will still, by necessity, at times be an underestimate.
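Something like this, as a toy sketch (the malice knob and all numbers are made up):

    # Toy sketch: a bounded "malice" assumption instead of full paranoia.
    # malice=0.0 -> others behave lawfully; malice=1.0 -> anything goes.

    def required_clearance(closing_speed_ms, malice, max_decel=8.0):
        """How much gap we demand if the other vehicle only cooperates
        with (1 - malice) of its physical braking ability."""
        assumed_decel = max_decel * (1.0 - malice)
        if assumed_decel <= 0:
            return float("inf")          # full paranoia: never move
        return closing_speed_ms ** 2 / (2 * assumed_decel)

    for malice in (0.0, 0.5, 0.99, 1.0):
        gap = required_clearance(6.7, malice)   # 6.7 m/s ~= a 15 mph bus
        print(f"malice={malice}: demand {gap:.1f} m of clearance")
    # The demanded gap explodes as assumed malice approaches 1.0,
    # which is exactly the "car refuses to move" failure mode.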
You make an excellent point. However, as I've been recently up to my hips in RNNs, I wonder: if we can figure out how to score encounters, can the car learn the level of malice to expect? Can it learn it down to, perhaps, the level of individual driver shifts? My daughter took the 54 to DeAnza community college and learned which drivers were OK, which were mean, and which were indifferent. Would regular exposure of the car to the bus at different times of day allow it to figure that out? Can we start with an expected level of malice and tune it? Fun question to think about.
In Ontario at least (and maybe other places) it's actually law that you always yield to the bus. Well, maybe the laws actually aren't as strong as that, but the bus drivers know there are some laws that say they have more rights, and the rest of us kind of fall in line.
I mean, he's driving a bus and he thinks he has the right of way. This effectively means he has the right of way.
But once they're all robots they can communicate far more efficiently than a hand wave, and can actually agree on a fairly complex plan (to take over the world)
I drive defensively and lived in Boston. I couldn't deal with the mindset of Boston drivers. I moved to Seattle. I am so much happier here. Sometimes drivers cause traffic jams because they spend too much time waving at each other to go out of turn: "You go!" "No, you go!" "No, I insist, you definitely stopped before me!" These are my people.
Ugh. I am not an aggressive driver by any means, but I hate passive drivers. Near my house there is a grocery store with a three way intersection. Entering traffic has right of way to turn left or right without stopping (there literally is no stop sign). Traffic going straight through has to stop.
All the time I'm sitting there at a stop when someone pulls up and stops with no stop sign, then honks and waves at me to go on. No. I have a stop sign. You don't. I am not going. The last thing I need is to get halfway through the intersection and have you plow into me and claim I ran a stop sign.
No. Follow the damn law. If you stop first, you go first. If I have a stop sign and you don't, you go. If we stop at the same time, the person on the right goes first. It's the fucking law. If you can't figure it out, go back to driving school. Don't wave me on, because the cop and the insurance company don't take much stock in "waves". They expect people to follow the damn law.
I seriously need to move from Seattle to Boston. I can't stand the drivers here who completely vapor-lock at 4-ways until someone waves them through...
Of course I don't really like 4-ways in SF where you have to fight tooth and nail for your right of way, but some happy medium where everyone just efficiently goes when it's their turn would be nice...
This is why I hate driving in those types of areas.
A good percentage of the time two people end up trying to enter the intersection at once because hand signals are not a clear communication method for signaling right of way.
For some reason 50% or so of drivers have no idea how a 4 way stop works. That drives me nuts that cars are totally unpredictable at 4 way stops. These people get to a 4 way stop and frantically wave everyone else through I presume because they don't know who has the right of way. Or they don't wait their turn and go at some random time. You never know what is gonna happen.
> That drives me nuts that cars are totally unpredictable at 4 way stops. These people get to a 4 way stop and frantically wave everyone else through I presume because they don't know who has the right of way.
There's all kinds of legitimate reasons you might be uncertain about who actually has right of way (and, even more so, have doubts about whether other drivers perceive the situation the same way, and agree with your perception of who has right of way.)
"When in Doubt, Bail Out" is actually the NHTSA gives to this situation, and emphasizes that it trumps all other intersection right-of-way rules. [0]
However, it happens so often in very, very, very, very straightforward situations that I can only conclude these people have no idea whatsoever how stop signs work. We aren't talking about cases with any ambiguity whatsoever. It makes stop signs so unpredictable that they become dangerous.
Could this have something to do with what seems to be a very simple driving test to obtain a license in the US? Here in Germany you are required to attend a lot of theoretical and practical driving lessons and expected to know traffic priorities / right of way by heart. The hierarchy of traffic lights > signs > whoever comes from your right is really a no-brainer here.
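That hierarchy is simple enough to write down. Here's a toy encoding (a teaching illustration under simplified assumptions, not legal advice or anyone's production code):

    # Toy encoding of the German precedence: lights > signs > right-before-left.
    # Simplified: ignores yellow lights and the all-way-stop case, which in
    # reality falls through to first-come / right-rule.
    def who_yields(light=None, my_sign=None, other_sign=None,
                   other_is_on_my_right=False):
        """Return True if *I* must yield."""
        if light is not None:                 # 1. a working light decides
            return light == "red"
        if my_sign or other_sign:             # 2. then posted signs
            return (my_sign in ("yield", "stop")
                    and other_sign not in ("yield", "stop"))
        return other_is_on_my_right           # 3. default: right before left

    print(who_yields(light="red"))                 # True
    print(who_yields(my_sign="stop"))              # True
    print(who_yields(other_is_on_my_right=True))   # True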
This made me think - and excuse my ignorance if this is a known and solved problem already - that for autonomous driver software there needs to be some significant degree of customisation allowable for local conditions, where 'local' seems to need to be 'in a local area, like the Bay Area' for example, not just country-wide right-of-way and correct side of road rules.
Here in South Australia public transport buses are extended unofficial better right of way standing (when departing from the curb), for example. There are also new laws determining the required minimum passing distance between a cyclist and a car to be 1m if travelling 60km/hr or less, and 1.5m if travelling > 60km/hr; and car drivers are allowed to cross double-white lines to avoid cyclists.
It seems to me there is ample opportunity for 'edge case' bugs to occur in autonomous vehicle software when local conditions are taken into account and an untested set of requirements is applied specific to a country / city / district.
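As a tiny illustration of what such locale-specific requirements look like in code (function and parameter names invented; only the 1m / 1.5m / 60km/h figures come from the rule described above):

    # The South Australian cyclist-passing rule as locale-specific config.
    def min_cyclist_clearance_m(speed_kmh, locale="AU-SA"):
        if locale != "AU-SA":
            raise NotImplementedError("each jurisdiction needs its own table")
        return 1.0 if speed_kmh <= 60 else 1.5

    print(min_cyclist_clearance_m(50))  # 1.0
    print(min_cyclist_clearance_m(80))  # 1.5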
Yeah, in Chicago, it's very clear that the onus is on the non-bus vehicle to avoid a potential accident. Which pretty much sums up the main tenet of defensive driving...
>so pretty quickly people learn that the bus drivers drive with impunity.
So true. There are no worse drivers on the roads than MUNI bus drivers. They are terrifying to drive near. I've nearly been crushed by MUNI buses multiple times, while driving in my own lane minding traffic laws.
A somewhat ironic thought experiment: if a reasonably practical early implementation of Google cars would be to replace municipal bus (drivers)... isn't it in the bus drivers' self-interest to try and get Google cars to hit them (and potentially keep them off the road)?
It was inevitable, so I'm sure they're quite pleased it was a minor issue and not something catastrophic. Someday in the future a self driving car is going to hurt or kill a person and then the real legal tests will begin, but this is the first step on the pathway to normalcy.
My personal fear is that Google and maybe one or two others will get self driving cars right, but then the imitations from other manufacturers will fall short. The liability needs to end up on the manufacturer of the self driving car system, this is not something to be taken lightly at all.
I expect self driving car liability to end up similar to vaccine liability. A fund and adjudication process is created to compensate those who have an adverse outcome.
Which are controversial in some circles because they're a rare example of a "get out of jail free" card for big companies. (I'm not agreeing with that POV but noting that it's pretty widespread.)
I actually agree that vaccines (and really drugs more broadly) are a rather unusual example of products that can be used as directed and things can still go bad--without the manufacturer necessarily being "at fault." [Edit: Where it's not the fault of the user or other human.]
Another way to do it would be for each manufacturer to have to price in liability coverage to their vehicle. This could be done as a yearly or monthly software license fee that the current owner/operator of the vehicle has to pay. It creates the right incentive for manufacturers because they can charge less (or make more money) if their cars are safer and have fewer claims.
We'd probably also need some amount of limits on full tort liability though, since the manufacturers would be bigger targets for lawyers than individuals.
> We'd probably also need some amount of limits on full tort liability though, since the manufacturers would be bigger targets for lawyers than individuals.
This is the exact problem I'm targeting. You need to cut attorneys out of the mix.
I wouldn't necessarily say cut out attorneys, but provide sane liability limits so that awards don't go much beyond costs actually incurred.
The only exception I'd make to that would be cases of withholding a known defect, like the self driving car equivalent of the Takata exploding airbag fiasco.
The only difference is that claims to the vaccine fund don't have to be proven, and a large, large share of the cases it receives are highly unlikely to have been caused by vaccines.
But with a car crash, it will be very easy to figure out who's at fault. Why would such a fund exist for something that can be pinpointed directly to the party at fault?
> Why would such a fund exist for something that can be pinpointed directly to the party at fault?
People should still be compensated when accidents happen, without the drag caused by trial attorneys attempting to extract as much as possible from self-driving vehicle manufacturers (which is going to slow down progress).
The benefits of self-driving vehicles from reduced fatalities and accidents alone are so great that a process and funding needs to be in place to allow continued innovation (if done with safety put first).
> People should still be compensated when accidents happen, without the drag caused by trial attorneys attempting to extract as much as possible from self-driving vehicle manufacturers.
Of course, but current laws can drag this out when people are involved; why would cars be any different?
> The benefits of self-driving vehicles from reduced fatalities and accidents alone are so great that a process and funding needs to be in place to allow continued innovation (if done with safety put first).
We don't do this in any other industry as far as I know. It's a weird mechanic to let the manufacturers of self-driving cars off the hook for accidents.
I understand the intent but I don't know how that works within our current legal system and wouldn't that encourage cheap, shitty-built cars since companies won't need to be liable?
The problem with mandatory compensation programs (aside from granting legal immunity to private-sector entities, which I disagree with) is that they tend to break the discovery process by circumventing it.
The discovery phase of a case is how truly damning evidence often comes to light. A fantastic example of this would be the Toyota unintended acceleration debacle.[0] If it weren't for the discovery process in those cases, nobody would really know what a total mess Toyota's code was.
As far as self-driving cars are concerned, I've no doubt top tier companies like Google and Tesla are going to do the best job they can, but eventually everyone is going to be in the space, and when a company with an institutional disdain for proper safety-critical software engineering practices ends up killing people, I want their feet held to the fire.
> I understand the intent but I don't know how that works within our current legal system and wouldn't that encourage cheap, shitty-built cars since companies won't need to be liable?
It works if you allow self-driving vehicle algorithms to be patented. You could then open them for public examination by a government agency.
If the algorithm performed to regulation agency expectations, accident victims would still be compensated for losses without punitive damages exacted.
Regulation isn't a magic bullet though. I've seen countless companies check the boxes of regulation for their software in the government space only to have them fail spectacularly because it was done as cheaply as possible.
Regulation will never cover all the ways a company can act shitty; if companies find ways to do things cheaper while still being able to check that box, just so they have no liability, then they will do it.
I don't think these types of get-out-of-jail-free-cards, even though they're very well intentioned, are ultimately a good thing.
I didn't know a fund existed for vaccines, but I think it might have an interesting application to self-driving cars.
In its current state, self-driving car technology is not perfect. Roads were designed to be driven by people, and it may take many years for road infrastructure and total adoption to properly support self-driving cars. The one thing many will be able to agree on is that there will be accidents, and some people will be hurt at the fault of self-driving cars, but ultimately removing humans from driving will make roads a much safer place. The reason behind this particular vaccine fund (see link above) appears to align with problems we will soon see as self-driving cars hit the road and make mistakes, but for the overall good.
> But with a car crash, it will be very easy to figure out who's at fault.
There are lots of scenarios where a person has an accident that isn't clearly another driver's fault--skidding on ice, debris on the road... Also cases involving pedestrians and bicyclists where it's arguably their fault, but drivers still generally have the responsibility not to hit a child running out into the road.
Correct and in today's society that is covered through insurance. Are we saying we're eliminating the insurance industry for vehicles? That's essentially what this would do (the government basically assumes the role as insurance company).
The difference is that today, it's normally the case that it clearly isn't the car's fault--it's some combination of humans, pedestrians, bicyclists, and plain acts of God. If the car's brakes don't work properly even though they've been properly maintained, or an ignition switch fails, the manufacturer gets sued. The wrinkle with the scenario in question is that, until now, it hasn't been the case that programmed instructions in a machine could play a role in an accident without that being considered the fault of the company programming the machine.
I assume there would still be insurance. But if my self-driving car causes an accident, I certainly shouldn't be "at fault" from the perspective of my insurance premiums nor should I be able to be sued--as would be the case today.
> Are we saying we're eliminating the insurance industry for vehicles? That's essentially what this would do (the government basically assumes the role as insurance company).
Sort of. Self-driving vehicle manufacturers would self-insure.
It would seem to me that Google's autonomous car insurance, which they must have bought from somewhere, would take care of this in pretty much the same way as other insured drivers are currently protected.
It really shouldn't be very different from, say, a fatal accident from a driver in a corporate-owned car.
Sure...PIP attorneys are going to go after Alphabet and make a big ruckus, but in the end, Alphabet has to have all this covered already as part of the plan.
Here's the Autonomous Vehicle Accident Report filed with the CA DMV.[1] "The Google AV was operating in autonomous mode and traveling at less than 2 mph and the bus was traveling around 15 mph." The other vehicle was a 2002 Newflyer Lowfloor Articulated Bus, which is 61 feet long including the "trailer" part.
Here's where it happened.[2] You can see traffic cones around the storm drain.
This is a subtle error. Arguably, part of the problem was that the AV was moving too slowly. It was trying to break into a gap in traffic, but because it was maneuvering around an unusual road hazard (sandbags), was moving very slowly. This situation was misread by the bus driver, who failed to stop or change course, perhaps expecting the AV to accelerate. The AV is probably at fault, because it was doing a lane change while the bus was not.
Fixing this requires that the AV be either less aggressive or more aggressive. Less aggressive would mean sitting there waiting for a big break in traffic. That could take a while at that location. More aggressive would mean accelerating faster into a gap. Google's AVs will accelerate into gaps in ordinary situations such as freeway merges, but when dealing with an unusual road hazard, they may be held down to very slow speeds.
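The "could take a while" intuition can be quantified with a classic traffic-flow result (Adams' delay): the expected wait for a gap of at least tau seconds in Poisson traffic of rate lam vehicles per second. The numbers below are illustrative, not measured El Camino data.

    from math import exp

    # Adams' delay: expected wait for a gap >= tau in Poisson traffic.
    def expected_wait_s(lam_veh_per_s, tau_s):
        return (exp(lam_veh_per_s * tau_s)
                - lam_veh_per_s * tau_s - 1) / lam_veh_per_s

    # Busy arterial: one vehicle every 2.5 s on average.
    print(expected_wait_s(0.4, 8.0))  # cautious 8 s gap: ~51 s of waiting
    print(expected_wait_s(0.4, 4.0))  # bold 4 s gap: ~5.9 s

The required gap shows up in the exponent, so a modest increase in caution blows up the expected wait.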
I wonder if Google will publish the playback from their sensor data.
> The AV is probably at fault, because it was doing a lane change while the bus was not.
Yes, probably true, but not crystal clear. The Google car never left the lane, so it comes down to subtle questions about appearing to be parked or impromptu division of lanes near right-hand turns.
This story is getting a lot of press coverage. Reuters, CNBC and Wired are already covering it.
When Cruise (YC 14)'s car hit a parked car at 20 mph in SF last month, there was no press attention.[1] Even though it was across the street from the main police station.
That Cruise crash is an example of the "deadly valley" between manual driving and fully automatic driving. The vehicle made a bad move which prompted the driver to take over, but too late. This is exactly why AVs can't rely on the driver as backup.
It's probably one of the interesting social outcomes of mixed self-driving / human-driven traffic that all the assholes will have more opportunities to behave like assholes, because they will be able to rely on the programmed friendliness of self-driving cars. Eventually, tuning that friendliness factor will become the new car pimping.
Every time an asshole recklessly endangers the occupants of a self driving car, the car will have recorded the entire event. They should send these to a central clearinghouse and after a certain threshold hand them over to the appropriate prosecutor for charging and conviction. We'll all be safer.
If the asshole can't bring himself to drive civilly, eventually he'll lose his license on points and have to ride around in a self driving car!
No, (s)he wants to turn traffic-safety violation reporting over to crowd-sourcing. There is a large difference. I am in favor of the same thing myself, since human drivers prove every day (look at the number of preventable accidents on roads) that they should not be trusted with operating large motor vehicles, especially if alternatives exist in the way of autonomous vehicles. If driving behavior is accountable in the same way as, say, banking transactions, then that seems like a good thing.
ATMs are a good comparison. They are left unattended but photo-document themselves and their environment. If you attack an ATM, it will provide the evidence for a prosecutor to charge you.
I think it'll be great when road ragers can't count on anonymity to protect their reckless, unsafe and illegal habits. Think how pleasant it will be to drive when everyone actually drives safely.
Surveillance state seems like a real stretch in this case. There is no right to privacy when you're driving on public roads, you can now and will continue to be photographed while driving. It's already happening anyway, highways have road cams, people have dash cams. If there's a problem, it's already here.
This is the most interesting part of this story to me. That the test driver believed the bus would let the vehicle continue indicates to me that the human driver would probably not have done any better than the automated car in this case.
My standard comment to a new BMW owner: "So, do you have to prove you are an asshole to buy a BMW, or do they force you to take a class after you buy one?"
Mercedes are a special problem. Often driven by older male drivers who sometimes remember that their car can drive fast but not caring most of the time. Even while on the left lane.
They should invite a bunch of Prius drivers to MV to "test" with the self-driving cars on the road with them "driving normally". I'm sure they'd get some great test data out of it and help prevent lots of future incidents :)
Technically, if self-driving cars are safer than human drivers (even if they're not perfect), that should be good enough. But my lizard brain tells me that I'm putting my life in the hands of a machine that potentially has bugs, and that's a little scary.
Most of us are going to expect nothing short of perfection from these machines to really trust them.
The same thing happened with elevators. When they had operators who would physically make the elevator move and then these new-fangled "magic" ones came in, people freaked out, got used to it, and now no one cares.
I get the feeling it's the same way with cars, just an order of magnitude bigger of a change since they're such a part of our lives.
But that also provides evidence of the parent's point that "Most of us are going to expect nothing short of perfection from these machines to really trust them."
We absolutely wouldn't be OK with elevators that now and then fail in a dangerous way. Doesn't mean it never happens but it's considered to be someone's fault when it does.
Elevators do fail occasionally. There are several videos of elevators moving before the door has finished closing, or stopping mis-aligned with a floor. It's just a very unusual occurrence.
Exactly this. When I was trying to get my pilot's license for small aircraft I wasn't afraid of flying at all. But my lizard brain tells me that when I fly commercial I should be terrified because I'm not in control. Even though the commercial pilot is statistically magnitudes safer than me driving, and considerably safer than me flying.
Agreed. When I explain why autonomous cars are always going to be better than humans, I use this example:
When you drive, you only look at one thing. If you glance at the rear view mirror, you're no longer processing the front. If you look at a side view mirror, you're no longer looking at the front or back. All this is not counting blind spots. But imagine if you could see all around the car, all the time, and process all of it with the same level of importance.
That's what self-driving cars can do.
People I speak to are usually receptive to that, but there's still the lizard brain aspect, one even I'm victim to. ;)
There is an ethics/philosophy thought experiment (which I'm failing to remember the name of), which goes a little something like this (modified for the AI example):
Imagine that in the world as it stands today the accidental death rate from auto crashes is 100,000 per year, and we'll call that Earth One. Now imagine a world in which AI reduces the death toll to 20,000 per year, and we'll call that Earth Two. Given that these are two different worlds, and the types of accidents that human and AI drivers get into are likely to be different, there is likely to be a large number of people who die in Earth Two who would still be alive in Earth One.
In other words, if AI drivers become the norm, there are some subset of people who are going to die, but would have been alive if AI drivers did not become the norm.
Luckily, we don't live in counter-factual worlds like that, or have knowledge of other timelines, so we're spared from knowing that this would be the case.
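A toy simulation makes the point (victim sets drawn at random and scaled down 1000x; the assumption that the two worlds' victims are essentially independent is exactly the thought experiment's premise that the failure modes differ):

    import random

    random.seed(0)
    population = range(100_000)
    earth_one = set(random.sample(population, 100))  # human-driver victims
    earth_two = set(random.sample(population, 20))   # AI-driver victims

    # Victims unique to Earth Two -- people who would have lived in Earth One:
    print(len(earth_two - earth_one))   # almost certainly all 20
    # People alive in Earth Two who died in Earth One:
    print(len(earth_one - earth_two))   # almost certainly all 100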
> Most of us are going to expect nothing short of perfection from these machines to really trust them.
I hope not. I'd like them as safe as possible, but I already expect that they'll be better than human drivers, and that ought to be sufficient to allow them.
However, there's another issue in play here: in addition to the possibility of holding self-driving cars to a higher standard than humans, people like the feeling of control, and will feel "better" about an accident where they see someone to blame.
Human drivers are better at judging the behavior of other human drivers than autonomous cars currently seem to be. We can tell when a bus is about to pull out, or not, due to various cues (engine noise, exhaust, the sound of hydraulic brakes, boarding passengers, signal lights, the presence of traffic behind us, etc.) which the Google car was apparently blind to.
If you watch closely you can almost always tell when a car is about to change lanes, even before they activate their turn signal (if they bother to at all) or depart their lane. I'd love to see automated cars process stuff like this (small behavioral cues from human drivers)... maybe they already do? I'm guessing they're not just relying on turn signals to predict intentions :)
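For what it's worth, here is the shape such a cue-based predictor might take (weights and feature names invented for illustration; a real system would learn these from data, and I have no idea what Google actually does):

    from math import exp

    # Hypothetical cue weights for "about to change lanes"; illustrative only.
    WEIGHTS = {
        "turn_signal_on":      2.5,
        "lateral_drift_mps":   4.0,   # sustained drift toward the lane line
        "head_check_detected": 1.5,   # driver glancing over their shoulder
        "gap_opened_ahead":    1.0,   # space in the target lane
    }
    BIAS = -3.0

    def p_lane_change(cues):
        z = BIAS + sum(WEIGHTS[k] * v for k, v in cues.items())
        return 1 / (1 + exp(-z))

    # Drifting with a head check but no blinker -- the case described above:
    print(p_lane_change({"lateral_drift_mps": 0.5,
                         "head_check_detected": 1.0,
                         "gap_opened_ahead": 1.0}))  # ~0.82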
I think it's a matter of framing. You put your life in the hands of a machine all the time. Even when you are driving yourself -- software and mechanical engineering conspire to consistently connect your movements on the steering wheel and brakes, as well as be ready to deploy safety measures in the event of an emergency. These things fail in many different ways... if you weren't aware that the machine you currently drive has life-threatening bugs, take a visit to the NHTSA complaint/recall databases: http://www-odi.nhtsa.dot.gov/downloads/
Robot-driving is just one more layer on top of that. And not one that frankly, seems substantially less safe than autopilot on planes, given how unreliable we ourselves are when it comes to driving. But sure, the emotional impact of hearing a self-driving car malfunction is always going to be emotionally stronger -- i.e. in the man bites dog way -- than the daily fatal accidents that happen to other people that we filter out.
The issue for me is drive-by-wire. I'm cool with a computer trying to steer as long as it gives up when I try to fight it. I also preferred the cruise control where you could feel the pedal moving under your foot, because that was the master control tied to the carburetor or sensor that managed fuel.
I still think that self-driving stuff should be more enhanced cruise control and less "there is no steering wheel or controls".
The problem with this is the limits of human attention. It's hard enough to maintain focus on long drives as it is; if the "enhanced cruise control" takes over the job entirely, the driver will have nothing to do and is likely to stop paying attention to the road at all. Then he'll either miss his chance to take manual control, or do so in a state of panic.
>Most of us are going to expect nothing short of perfection from these machines to really trust them.
Now, expecting perfection, and especially giving up 'better than what we have' in exchange for perfection, is a bad thing, but I do think it's completely reasonable to expect self-driving cars to be way better than the average human, even better than the best humans, just because we lose so many people to auto accidents every year.
I think it's completely reasonable to want to reduce the risks associated with transport, and I think the only politically possible way to do this is with self-driving vehicles, because I don't think it's politically possible to remove the worst half of drivers from the road without offering them similar levels of mobility.
It's going to be similar to the software running on airplanes: you expect it to be essentially bug-free (airplane software is held to a much higher standard) since you are trusting it with your life.
Except the AI code inside self-driving cars isn't crafted by hand; it is a matter of training data as much as code. How can you determine that training data is "bug free"? It barely makes any sense.
"Google said in the filing the autonomous vehicle was traveling at less than 2 miles per hour, while the bus was moving at about 15 miles per hour."
Google said in a statement on Monday that "we clearly bear some responsibility, because if our car hadn’t moved, there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that."
>That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that."
It was a [double] bus. It isn't the kind of vehicle one stops out of courtesy to let the other car merge (until such courtesy is the only option for the other car to merge, obviously).
Anyway, it is an obvious software failure, as the Google car hit the side of the bus at slow speed. There is no reason to blame it on somebody else.
> The vehicle and the test driver "believed the bus would slow or allow the Google (autonomous vehicle) to continue."
Love this. I'm shamelessly rooting for self-driving cars, and crashes are inevitable. Having the human in the car agree with the computer brings a lot of credibility to the report and follow-up.
I'm actually more impressed that they are trying to code "believed the bus would slow".
Understanding the expected behavior of other drivers is critical to making self driving cars work. And that seems like a pretty hard thing for a computer to figure out.
> Understanding the expected behavior of other drivers is critical to making self driving cars work. And that seems like a pretty hard thing for a computer to figure out.
You're definitely right.
With regard to this particular instance, based on my experiences in many cities in different countries, buses pulling out into traffic when they don't technically have the right of way is an expected behavior. :)
Unless I'm reading the report wrong, it appears that the bus did have the right of way. Indicating intent and hoping will get you in trouble with more than just busses.
> Unless I'm reading the report wrong, it appears that the bus did have the right of way.
Yeah, I was more trying to make an observation about general behavior of buses. When I read the article there was no mention of fault or reference to a report, so I didn't know. But I do know that buses disregarding right of way is fairly common. "With regard to this instance" was probably too strong/literal of a phrase to use to tie-in my observation.
Then we'll have the problem of us humans trying to predict the behaviour of the machine, which, unintuitively, will be harder than predicting that of other humans. Of course the system will be much safer when there are no more humans involved in the decisions.
I think I have achieved 95% accuracy in attempting to predict driver behavior as much as 6-10 cars ahead of me. I say 95% conservatively, though I have not exactly ever found myself making the wrong guess.
Looks like the far right lane has double the normal lane width to accommodate on-street parking. The Google car, looking to turn right on red, attempted to use the right-most part of the lane to pass cars that were waiting to go straight, but there were sandbags covering the storm drain near the corner, so it had to stop. When the light changed, the cars it had passed continued on while the Google car waited for a gap, which ended up being in front of a bus, likely caused by the bus accelerating more slowly than the cars in front of it. The Google car tried to use the gap to get around the sandbags, assuming that the bus wouldn't just plow into it, but the bus plowed into it. Perhaps the bus driver assumed the Google car was parked and wasn't paying much attention to it.
The question is, how much room did the AV have in front of the bus to merge out in front of it?
I'm wondering if the bus was cruising along and saw the AV inching out into its lane and just thought "Oh, once they see this big-ass bus coming along, they'll stop trying to merge." And the AV thought "Once that bus sees me inching out, it'll slow to let me merge." And neither one was right.
It is going to be difficult to predict the actions of irrational actors like bus drivers. You can usually assume that a driver has a vested interest in not damaging their vehicle, but my experience navigating California's city streets has consistently suggested otherwise when busses are involved.
I would group them into a category along with police cars and ambulances due to the prioritization of speed over safety. (Although, in my experience, ambulances are usually very good at prioritizing safety.)
The linked report doesn't say Google is accepting responsibility, so it'll be interesting to see what tack they take here. In _Toronto_, from talking with acquaintances who are drivers, I understand they work in near-constant fear of accidents. They are expected to practically always be able to avoid accidents, and the feeling is that if something happens, they're presumed guilty (within management), and proceed from there.
I don't know if all professional bus drivers' conditions are the same as Toronto, but I can imagine this could be extremely distressing for the bus driver involved. At least nobody was injured.
> The linked report doesn't say Google is accepting responsibility
That's for insurance reasons. You're never supposed to admit fault at the collision in the US. You let the police decide and that's that. It's like the first thing listed on the guidelines for collisions on your insurance card.
Agreed re: insurance reasons. That said, the phrasing of the event is interesting: "...self-driving car hits municipal bus..." and (in article) "...self-driving car struck a municipal bus...", and the co-driver (and vehicle) is quoted as "believed the bus would slow or allow the Google (autonomous vehicle) to continue." These sound a little "soft", more like "yeah, I think that's my fault". I don't have a favourite to win, regardless. There's another article[0] that I didn't see earlier that does in fact accept at least partial fault, and additionally mentions that the software was updated as a result of this event. Thankfully it seems to be a low-cost (in terms of injury and damage) event, but fascinating given the young state of autonomous driving.
The description of events is slightly suspicious. For all the predictive smarts of the car and its impressive LIDAR, keeping from bumping into things seems like it would be the highest priority possible, second only to preserving life. The corners of the car have some kind of distance sensor, so the car would have to have known it was about to collide with something. In CPU time there was plenty of time to react, and it's always looking at all four corners.
The article makes it sound like the AI blindly changed lanes into the bus. It seems most likely that the AI knew about the impending collision and decided colliding was safer than any other option. It'd be great to know what the other options were, but I imagine we'll probably never get more detail.
It didn't change lanes, which is why the description didn't say it did. The entire event took place in a single extra-wide right lane, which are pretty common. The extra width is to accommodate on-street parking as well as right hand turns.
Notice that it doesn't say it changed lanes; rather, it shifted to the leftmost-side of its lane, later to return to the center of the lane (which is when the bus hit it).
I get your point, though, and I imagine it could have gone something like this: Car predicts bus will slow down (as reported), moves back to center of lane. Bus doesn't, and the car's sensors inform of it. During the seconds (or the fraction of a second) before the collision, the car came up with the safest avoidance maneuver, and started executing it. Crash.
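A back-of-the-envelope version of that prediction step (a textbook time-to-collision calculation, not Google's code; the 17m gap is invented to match the report's "approximately three seconds"):

    MPH_TO_MPS = 0.44704

    # Simplified along-the-lane time-to-collision; ignores lateral motion.
    def time_to_collision_s(gap_m, av_speed_mph, bus_speed_mph):
        closing_mps = (bus_speed_mph - av_speed_mph) * MPH_TO_MPS
        return float("inf") if closing_mps <= 0 else gap_m / closing_mps

    # AV at <2 mph, bus at 15 mph, hypothetical 17 m gap at re-entry:
    print(time_to_collision_s(17, 2, 15))  # ~2.9 s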
Maybe Google should train their cars at demolition derbies instead of on public roads. Have one group of vehicles trying to crash into another. And let competing teams of students program the crashers. Winning team gets $25K.
And play with touch football rules to keep the costs down. Cover vehicles with touch sensors and cameras that record each crash.
Is there any information available about how these cars take evasive action or attempt to reduce damage in the event of an imminent impact? The article's wording makes it sound like the car was at fault for the crash by re-entering the lane:
> "But three seconds later, as the Google car reentered the center of the lane it struck the side of the bus"
Presumably the car's software predicted the crash before it occurred but was unable to completely avoid it. I'd love to know what the programmed behavior is in these 'unavoidable' crash scenarios.
This incident hints at what I believe is going to be a big issue in the transition to autonomous vehicles in busy urban areas. If AVs are too passive, human drivers will take advantage of them. If they are too aggressive, accident rates will be high, and the risk of an AV causing a major accident increases. Threading the needle between the two is more difficult than solving the physics of driving.
In the country where I live, buses have right of way when leaving bus stops. They seem to actually try to hit cars which are in the next lane (even when the car has almost passed and it of course isn't a "right of way" situation anymore)...
So if you program an autonomous car it would be smart to program it to never get next to a bus :)
The vehicle and the test driver "believed the bus would slow or allow the Google (autonomous vehicle) to continue."
If the test driver stated that s/he thought the car should yield to the bus, would the test driver still have a job? I would have been shocked if the test driver said the software was at fault.
"The vehicle and the test driver "believed the bus would slow or allow the Google (autonomous vehicle) to continue.""
This sounds like a very strong assumption. When was the last time you saw a bus slow down for a car or anything much? There's a reason "Fuck you I'm a bus" is a meme.
Since the light had just turned green, the assumption was probably that the bus would cease accelerating long enough for the Google car to get around the obstacle, not that the bus driver would hit the brakes.
A Google Lexus-model autonomous vehicle ("Google AV") was traveling in autonomous mode eastbound on El Camino Real in Mountain View in the far right-hand lane approaching the Castro St. intersection. As the Google AV approached the intersection, it signaled its intent to make a right turn on red onto Castro St. The Google AV then moved to the right-hand side of the lane to pass traffic in the same lane that was stopped at the intersection and proceeding straight. However, the Google AV had to come to a stop and go around sandbags positioned around a storm drain that were blocking its path. When the light turned green, traffic in the lane continued past the Google AV. After a few cars had passed, the Google AV began to proceed back into the center of the lane to pass the sandbags. A public transit bus was approaching from behind. The Google AV test driver saw the bus approaching in the left side mirror but believed the bus would stop or slow to allow the Google AV to continue. Approximately three seconds later, as the Google AV was reentering the center of the lane, it made contact with the side of the bus. The Google AV was operating in autonomous mode and traveling at less than 2 mph, and the bus was traveling at about 15 mph at the time of contact.
The Google AV sustained body damage to the left front fender, the left front wheel and one of its driver's-side sensors. There were no injuries reported at the scene.
My question is - why are bus drivers such assholes (as depicted by the comments here)?
Serious question I'm curious about. Is this societal, is it a technology problem (does the way buses work somehow encourage this behavior towards other motorists), or is it a combination of both?
I hate El Camino Real. The rightmost lane is always far too wide, which means if a bus has stopped, everyone expects you to still squeeze through the limited space. If you don't, they will honk. If you do, there is always the possibility that the giant bus might not see you and hit you.
However, an important takeaway for all Bay Area drivers from the article is:
You are supposed to HUG the right shoulder when making a right turn. I have yet to see a single driver who embraces that principle. It is not just for your safety and the convenience of the traffic behind you, but also for the safety of bike riders: if you make a sudden right turn from the middle of the lane, they might hit you and get injured.
I think this minor accident is actually a good thing for self-driving cars. It sets the expectation that they may not be perfect, and isn't a PR disaster in the way that a fatal accident may have been.
From here, more accidents are able to happen without being huge news stories. Undoubtedly, the first time someone dies in an accident involving a self-driving car, there will still be lots of questioning of the technology, but it won't come as a complete surprise, now that smaller accidents have occurred.
I don't know about the US, but at least where I live, when somebody hits you from behind, it's their fault. The idea is that you are supposed to drive cautiously and therefore be prepared for the vehicle in front of you behaving erratically -- that is, even if that vehicle's driver is wrong (say, it's not their right of way), you should still be at a safe distance.
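The "safe distance" that rule implies is just kinematics: reaction-time distance plus braking distance. A sketch with typical illustrative values (1.5 s reaction time, 7 m/s² braking; both assumptions, not regulatory figures):

    # Stopping distance = distance covered while reacting + braking distance.
    def stopping_distance_m(speed_kmh, reaction_s=1.5, decel_mps2=7.0):
        v = speed_kmh / 3.6                     # convert to m/s
        return v * reaction_s + v * v / (2 * decel_mps2)

    print(stopping_distance_m(50))   # ~35 m at city speed
    print(stopping_distance_m(100))  # ~97 m at highway speed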
Seems like this was inevitable. It's astounding to me that it took this long and that no one was hurt!
It is even more amazing that there have been apparently 0 injuries or fatalities. I wonder how these autonomous cars compare in terms of hours on the road to # of incidents (not accidents mind you, I hate that term).
The human in the car exchanged information, and thanks to the multitude of sensors and cameras this will be the single best-documented fender bender in the history of the planet.
It's interesting to observe the highly varying amount that other cars will allow for merging. Merging onto the freeway during a busy day (maybe 35MPH traffic), I tried to merge and the car I was planning on going in front of moved up until their bumper was about a meter from the vehicle in front of them and leaned on their horn. "I technically have the right of way in this situation and by God, I won't yield it to you!"
This was a situation where there were about 40 cars merging onto the freeway and they mostly just do a fairly standard zipper merge.
Some of these discussions seem to severely underestimate human intelligence and the ability to reason and take decisions rapidly.
There are hundreds of millions of cars operating in all sorts of conditions with little incident. Hundreds of thousands of preemptive actions, acts of foresight, experience and instinct are at play for every possibility on the road, and millions of people handle them easily.
To say that's not good enough, one must articulate a system that is clearly better, or has the potential to be, rather than painting human drivers with a broad brush over 0.1% incident rates. That's an argument of convenience and could reflect a lack of understanding of the scope and scale of the problem. This is just sandbags; there are literally millions of obstacles and scenarios negotiated without incident every day.
Crashes in snow are not always about bad decisions. They can also be about extremely poor conditions that cars should not be in, or an inadequate vehicle or tyres, and computing will not help if the hardware is deficient. Presuming the worst of others or jumping to conclusions about their intelligence levels is unsavory, and dangerous if used to push something.
An AI vehicle in any crowded Asian city is going to be literally stranded by indecision. What works on relatively empty and organized roads barely scratches the surface of what a proper self-driving AI system needs to be, unless you redesign the roads and place constraints, which then becomes a different discussion to approach carefully.