I'm fascinated by the accidents. The AV is stopped at a light. Someone rear-ends it. Minimal damage.
Similar accidents are probably occurring every minute between human drivers, and as a rule they go unreported.
AVs might one day even avoid this "victimization," if these events keep following a predictable pattern. AVs could exaggerate the gap, leave a precisely calibrated amount of extra space. When anticipating a rear end collision, the AV would honk and flash brake lights while scooting forward.
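To make that concrete, here's a toy sketch of that mitigation logic. All the sensor and actuator names (rear_radar, horn, brake_lights, creep_forward) are hypothetical, not anything Google has described:

    # A minimal sketch of the mitigation described above, not Google's actual logic.
    SAFETY_GAP_M = 1.0        # extra cushion to keep in front when stopped
    REACTION_WINDOW_S = 2.0   # how far ahead to look for a rear threat

    def time_to_impact(rear_gap_m, closing_speed_mps):
        """Seconds until the car behind reaches our bumper, or None if it isn't closing."""
        if closing_speed_mps <= 0:
            return None
        return rear_gap_m / closing_speed_mps

    def mitigate_rear_threat(car, rear_radar):
        gap = rear_radar.gap_m                  # distance to the car behind
        closing = rear_radar.closing_speed_mps  # positive = approaching us
        tti = time_to_impact(gap, closing)
        if tti is not None and tti < REACTION_WINDOW_S:
            car.horn.sound()
            car.brake_lights.flash()
            # Creep forward only if we banked extra space in front for exactly this case.
            if car.front_gap_m > SAFETY_GAP_M:
                car.creep_forward(min(car.front_gap_m - SAFETY_GAP_M, 0.5))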
Google's absolutely correct that its AVs are never at fault in any of these accidents, legally speaking. Does blame change though if there are ways the AI can prevent this series of similar accidents, but they choose not to?
The AV yields to those running a red light, even though getting t-boned wouldn't legally be the AV's fault. That seems wise to me. Is it inconsistent to expect the AV to avoid getting t-boned, but not expect it to avoid getting rear-ended? I'm not sure...
Or, more broadly: How do you divide blame between two parties when one has superhuman faculties? Is the AI responsible for everything it could have conceivably been programmed to prevent? Or do you just hold it to a human standard?
Like all hard problems, neither extreme is very satisfying.
> Does blame change though if there are ways the AI can prevent this series of similar accidents, but they choose not to? [...] Is it inconsistent to expect the AV to avoid getting t-boned, but not expect it to avoid getting rear-ended?
While I was in college I worked on some wheeled robots that played a competitive ball game. We wanted to avoid collisions between robots, and to win the game.
One of the things we found was that if one team has great collision avoidance and the other team has none, the team without collision avoidance always wins. When there's a contest for the ball, the team without collision avoidance just blasts in there, and when the team with collision avoidance backs off to avoid a collision, it loses the ball.
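The dynamic is easy to reproduce in a toy model. This isn't our actual robot code, just the payoff asymmetry:

    # Toy model of the asymmetry: when two agents contest the ball, the one
    # that never yields takes it; collisions only happen when both refuse to yield.
    import random

    def contest(a_yields, b_yields):
        """Return (winner, collision) for one contested ball."""
        if a_yields and not b_yields:
            return "B", False
        if b_yields and not a_yields:
            return "A", False
        if not a_yields and not b_yields:
            return random.choice(["A", "B"]), True   # both commit: coin flip, plus a crash
        return random.choice(["A", "B"]), False      # both yield: coin flip, no crash

    # Team A has collision avoidance (always yields), Team B has none.
    wins = {"A": 0, "B": 0}
    for _ in range(1000):
        winner, _ = contest(a_yields=True, b_yields=False)
        wins[winner] += 1
    print(wins)   # {'A': 0, 'B': 1000} -- the cautious team never gets the ball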
If autonomous cars were so good at avoiding accidents that you could merge aggressively and they'd always brake, and run red lights in front of them and they'd always stop, manual drivers might learn to do that.
Riding in a Google autonomous vehicle would be a pretty shitty experience if you knew you'd get four or five emergency stops in every journey when assholes decide to cut you up :)
Since the AV would be keeping detailed records of everything, legal remedies are possible. I imagine someone would quickly lose interest in cutting off AVs if, past a modest threshold, they started getting a bill every month.
The Googlecar is collecting all this information about the behaviour of the other drivers: number plates, accurate time and location, LIDAR, possibly even video. No idea about the US, but in the UK such driving is itself an offence; you just need a way to prove it.
> Google's absolutely correct that its AVs are never at fault in any of these accidents, legally speaking. Does blame change though if there are ways the AI can prevent this series of similar accidents, but they choose not to?
By definition, we don't do that with human drivers who don't meet the legal criteria to be "at fault," even when, in a specific case, it would have been possible for them to avoid the accident and they didn't. Why would we do that for non-human drivers?
I think what brownbat is saying is that regardless of legal outcome, there is potential for AVs to do better than humans here. Not that they have to, just that they could.
That's a really great way to put it, captures the powerful intuitions on one side of the issue here.
On the other side, well, we've been down some similar roads before. It's easier to see in the exploding Pintos and combustible Volkswagens. It's a common theme throughout the macabre history of modern (US) product liability: How much responsibility does a company have for accidents that it could prevent, but at some obnoxious cost? [1][2]
At one point, some judges had a simple rule. Just run the numbers. If you can prevent more harm than it costs you to fix, you better fix it. If not, eh, good call, save your money. Don't spend millions to prevent one stubbed toe every ten years, that'd be a waste.
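That rule is essentially the Learned Hand formula: take the precaution whenever its burden B is less than the probability of harm P times the loss L. As arithmetic, with numbers invented purely for illustration (these are not the Pinto figures):

    # The rule in the paragraph above, as arithmetic. Numbers are invented.
    def must_fix(burden, p_harm_per_year, loss_per_harm, years=10):
        """Hand-formula-style test: fix it if expected harm exceeds the cost of fixing."""
        expected_harm = p_harm_per_year * loss_per_harm * years
        return burden < expected_harm

    # A $10 part that prevents a 1-in-10,000 yearly chance of a $5M loss,
    # over 10 years, across a fleet of 1,000,000 vehicles:
    fleet = 1_000_000
    print(must_fix(burden=10 * fleet, p_harm_per_year=1e-4 * fleet, loss_per_harm=5e6))  # True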
Seems sensible. Until it gets applied to harder cases.
Sure, they did the tests. They knew their Pinto would burn a few hundred people to death each year. But they ran the numbers. They used the right value for life, were careful in their choice of actuarial tables, and used conservative costs for the upgrades. It was just too expensive. I mean, they'd have to be hit just right, perfect angle and velocity. Better to put the exploding fuel tanks on the road.
Maybe not better for Lilly Gray, trapped in a burning vehicle, skin sloughing off, pain so extreme her heart eventually quit.
But you know. Better for society.
So the resulting case[1] tackles that inevitable question: is this really enough? Ford clearly forecasted the future. They saw Lilly Gray's charred corpse in the crystal ball, or someone like her. They didn't do anything to stop it. They followed the law instead, because it was cheaper or more expedient. Are we really going to let that slide?
Ok, that's verging towards polemic. Let's pause. Let's swing back the other way: How many accidents is a company expected to prevent? Should every car be wrapped in thousands of dollars of bubble wrap, or whatever the steel equivalent is? Should every car come equipped with roll cage and handy fire extinguisher, prepped for battle on the NASCAR track? Should these hefty tanks then run on nearly inert fuel to prevent engine fires completely, capping their range at a few paltry miles?
Clearly there are extremes.
So let's drop the absurd extremes and go back to the common, the real issue. Back to Google's problem: avoiding the simple rear end collision.
Given enough data, enough predictability, you're going to see some subtle shifts in the notion of blame.
Especially when you start looking at the issue of "fault."
In most (all?) US states, juries divine a "percentage at fault" for both parties before proceeding to divvy up damages.
You're going to get at least a few juries claiming that, sure, the rear ender was 99% at fault, everyone can see that. But that savvy defense lawyer has a point: Google was at least a teensy weensy bit at fault, just barely. With all their data and slick technology, surely they could have done something. Surely they're responsible for not improving their algorithm ever so slightly to handle this particular case. Let's call it 1%. No more, certainly, but no less.
Conveniently enough - it's enough to get the case dismissed without any damages whatsoever. At least in a few US states.[3]
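For the curious, the mechanics look roughly like this. Regimes and thresholds vary by state, so treat the numbers as illustrative rather than any particular jurisdiction's rule:

    # Rough sketch of how a fault percentage turns into dollars. Illustrative only.
    def recovery(damages, plaintiff_fault, regime="pure_comparative"):
        """What the (mostly) innocent party actually collects."""
        if regime == "contributory":
            # A handful of jurisdictions: any fault at all bars recovery.
            return 0 if plaintiff_fault > 0 else damages
        if regime == "modified_comparative_50":
            # Many states: barred at or above 50% fault, otherwise reduced proportionally.
            return 0 if plaintiff_fault >= 0.50 else damages * (1 - plaintiff_fault)
        # Pure comparative: always reduced proportionally, never barred.
        return damages * (1 - plaintiff_fault)

    print(recovery(10_000, 0.01, "pure_comparative"))   # 9900.0
    print(recovery(10_000, 0.01, "contributory"))       # 0 -- the 1% scenario above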
N.B. - I sharked you a bit here and I apologize. In my original post, I said Google wasn't at fault, "legally speaking," but "legally speaking," nothing's really ever that simple.
[2] http://caselaw.findlaw.com/us-supreme-court/444/286.html - ok, doesn't really get all the way to this point, but the facts are startling, and prime the discussion. It's a harrowing story and explains a lot about how product liability works and why it exists.
> Consider another problem the vehicle already solves. The AV wisely yields to those running a red light, even though the AV has the right of way. If it didn't, but was instead frequently t-boned... Google could technically still say its vehicle "never caused an accident," right? It'd never be at fault for these, but the AVs are still designed to avoid them anyway. Is this inconsistent with a choice not to avoid rear end collisions at intersections?
This is called defensive driving. You as a human are supposed to avoid getting killed even in cases where you aren't legally liable. This includes making sure the intersection is clear before going in. It includes checking whether anyone is coming down the wrong way of a one-way street before entering or crossing it. It includes something as simple as making sure the car in the "right lane must turn" lane actually turned, rather than magically appearing out of your blind spot.
And, importantly, it also includes knowing what's behind you as much as knowing what's in front of you. If you see someone is going to rear-end you, you should at the very least step off the brake to make the collision less violent. If possible, accelerate forward.
And I know this isn't possible [/practical] in most American cars because they're automatics, but don't hold the brake when you're standing still. It makes potential rear-endings work out better. And never turn the wheel before you intend to turn: if you get rear-ended, you could get pushed into oncoming traffic.
All this is to say that while only one vehicle is to blame for a collision, two vehicles are responsible (unless you hit a tree or some other static barrier; a tree cannot drive defensively).
Or, more bluntly, it might be the other person's fault, but you're the one who's dead.
I've been taught to always hold the brake when at a stop light. If you get rear-ended, you won't roll into the intersection. Also, always hold the steering wheel, so you keep control if you get rear-ended.
I was taught to engage the hand brake when stopping at a junction or lights for this reason: a rear-end impact would generally be at low speed compared to the speed of the potential traffic in the crossing lane.
I've always assumed you can press the brake after the fact to prevent rolling into an intersection, which I assumed was better than absorbing a lot of G-force.
Fundamentally, I think, if the impact is too strong for you to react in time to brake before rolling too far, you would have rolled into the intersection even while holding the brake. Except it would be a skid, not a roll.
The energy is still going to have to go somewhere.
Some of it will go into the crumple zones, the rest will go into burning rubber and throwing passengers against their restraints.
The question is whether a person is better able to withstand being pushed from the back with rapid acceleration, or absorbing the energy of a collision.
I do not have an answer, but I'm very interested where I could find one.
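A momentum back-of-envelope gives at least a feel for it. Treat the impact as a perfectly inelastic collision and ask what velocity change the struck car's occupants see; every number here, including the ~0.1 s crash pulse, is an assumption for illustration:

    # Back-of-envelope for a rear-end impact, treated as a perfectly inelastic
    # collision (the two cars briefly end up at one shared speed). Numbers assumed.
    G = 9.81  # m/s^2

    def rear_end(m_striker_kg, v_striker_kmh, m_struck_kg, pulse_s=0.1):
        v1 = v_striker_kmh / 3.6
        # Momentum conservation: shared velocity just after impact.
        v_common = m_striker_kg * v1 / (m_striker_kg + m_struck_kg)
        delta_v = v_common                   # the struck car jumps from 0 to v_common
        avg_g = (delta_v / pulse_s) / G      # average acceleration over the crash pulse
        return delta_v * 3.6, avg_g

    dv_kmh, g_load = rear_end(2000, 50, 1500)
    print(f"struck car delta-v ~{dv_kmh:.0f} km/h at roughly {g_load:.0f} g average")
    # -> ~29 km/h and ~8 g: the "energy going somewhere" shows up as this velocity
    #    change; crumple zones and seats can only spread it out over time.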
You mention whiplash in another comment. Wouldn't you avoid most whiplash with a properly configured headrest?
Not sure what you mean by this:
> pushed from the back with rapid acceleration, or absorbing the energy of a collision.
The occupants don't have to absorb the impact if the crumple zones take it!
I guess decent seats will do a lot to protect the occupants. A few years back I was a rear-seat-passenger in one of these https://upload.wikimedia.org/wikipedia/commons/5/50/Nissan_M... when it was rear-ended. Luckily our driver WAS on the brake and didn't go forward into the cars in front - unlike the car behind us and the car behind them and the car behind them! (I forget exactly how many cars were involved but it needn't have been as many).
It's not about the impact being strong. The surprise of the impact will severely slow your reaction. By the time you've reacted, it's probably too late.
> You as a human are supposed to avoid getting killed even in cases where you aren't legally liable.
The bar is much higher than that. You are supposed to protect others, not just yourself, and including people who are at fault.
If a pedestrian is jaywalking and you provably have plenty of time and ability to avoid them, but you strike and kill them, you are definitely guilty of something like manslaughter. Although people sometimes believe otherwise, the law (and morality) do not have the property that an initial minor transgression by one party absolves the other party of all fault in the resulting interaction.
Sometimes it really is impossible. I was involved with a case where the victim literally ran across the street, hidden by a bus stop panel. Nobody saw anyone, and the driver was well below the speed limit.
In modern bucket seats the risk of whiplash injury is incredibly minimal. Your head doesn't have the distance to travel far enough to hurt your neck. On the other hand, standing on the brake will cause both vehicles more damage and cause the other car to decelerate more quickly, making its occupants experience more acceleration in a direction in which the restraints are less effective (getting pressed into seat belts and airbags hurts more than getting pressed into a seat).
Standing on the brake is simply shifting the energy around at the expense of the other party.
Yep, which is why I think it depends on what's in front of you. A human probably doesn't have enough time to analyze the situation and determine whether or not they could do better than holding down the brake. A computer can probably do better in some situations.
Well, that depends on where you want the momentum to go and how rapidly you want to, and can afford to, dissipate it, does it not?
I mean, if the impact is strong enough, you'll be sent flying whether you are or aren't holding the brakes. Just that if you're not, your steering wheel is still going to steer.
The kind of thing I wish driving school could teach you. I don't know how; maybe a set of busted cars on a driving loop with mild collisions, so you'd know what it feels like and how to react. Kind of like wet-road and collision-avoidance classes.
The pragmatic thing to do legally is to license it before it is widely deployed and then hold it to whatever standard you established at that time.
That doesn't rule out making the standard more stringent over time, but determining what events the AI gets blamed for probably doesn't need to be a huge gray area.
While most human drivers have a hard time avoiding rear-end collisions in which they are the victim, it would save a fair number of people from gruesome deaths if they built some sort of rear-collision avoidance logic for situations where it may be possible for the system to get the car out of harm's way. There seem to be hundreds of accidents like this [1] annually where large trucks smash into cars stopped at red lights or slowed down in traffic jams.
Perhaps the cars should constantly be planning escape routes while slowing down and stopping, keeping enough distance from the car in front to allow for escape should they detect an inevitable collision from behind. Even where the only possible escape route involves hitting another car, the system should be able to decide that a light collision with a vehicle in another lane is preferable to a large truck hitting it at 70 mph.
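Something like this, maybe. A sketch of that decision with invented thresholds and a crude cost model, not anyone's actual planner:

    # Sketch of the escape-route idea above. Thresholds and costs are invented.
    def pick_response(rear_threat, escape_routes):
        """
        rear_threat: dict with 'closing_speed_kmh' and 'time_to_impact_s'
        escape_routes: list of dicts with 'clear' (bool) and
                       'expected_collision_speed_kmh' if taking it means a light bump
        """
        if rear_threat["time_to_impact_s"] > 3.0:
            return "hold position"          # plenty of time; the threat may still brake
        # Prefer any route whose worst case is gentler than the incoming hit.
        best = None
        for route in escape_routes:
            cost = 0 if route["clear"] else route["expected_collision_speed_kmh"]
            if cost < rear_threat["closing_speed_kmh"] and (best is None or cost < best[0]):
                best = (cost, route)
        return best[1] if best else "brace: flash lights, tighten belts, creep forward"

    truck = {"closing_speed_kmh": 110, "time_to_impact_s": 1.8}
    routes = [{"clear": False, "expected_collision_speed_kmh": 15, "lane": "right shoulder"}]
    print(pick_response(truck, routes))   # takes the 15 km/h bump over the 110 km/h hit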
"There seem to be hundreds of accidents like this [1] annually where large trucks smash into cars stopped at red lights or that have slowed down during traffic jams."
Isn't that solved by the automatic collision detection with emergency braking, as implemented by Volvo?[1]
If I'm not mistaken, the OP is postulating an avoidance mechanism for the front ("victim") car, not the rear car. Indeed, the rear car is at a higher risk of injury than the front car in rear-end accidents.
>While most human drivers have a hard time avoiding rear-end collisions in which they are the victim
This happens every day.
- Car A comes to a sudden stop for a less-than-legitimate reason (e.g. "I need to be in that stopped left-turn lane, not this moving through lane, but I was too busy being distracted").
- Car B (behind A) begins to come to a stop as fast as possible.
- Car B sees car C in the rear-view mirror.
- Car B moves right one lane.
- Car C manages to avoid rear-ending A by less than the length of B.
Obviously this is set off by A having jack shit for situational awareness, and B picks up the slack.
The thing that really stands out here isn't that the accidents occurred (which is sort of amusing), but rather the excellent analytics the car produced. The car knew where it was, how long it was stopped, the conditions at the time of the accident, and the relative velocities of the vehicle that caused the accident. This is going to change the nature of automobile accidents entirely. The amount of data we'll be able to collect from even the simplest fender bender will be fantastic.
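Sketched as a data structure, the record might look something like this; the field names and example values are guesses on my part, not Google's schema or figures from the report:

    # The kind of record the post describes the car producing. Hypothetical schema.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Tuple

    @dataclass
    class AccidentRecord:
        timestamp: datetime
        location: Tuple[float, float]          # lat, lon
        av_speed_mps: float                    # 0.0 if stopped
        seconds_stopped: float                 # how long the AV had been stationary
        other_vehicle_speed_mps: float         # estimated from radar/LIDAR tracks
        closing_speed_mps: float
        signal_state: str                      # e.g. "red" for the light ahead
        sensor_snapshots: List[str] = field(default_factory=list)  # LIDAR/camera frame IDs

    # Values invented purely for illustration.
    record = AccidentRecord(
        timestamp=datetime(2015, 6, 1, 15, 30),
        location=(37.386, -122.083),
        av_speed_mps=0.0,
        seconds_stopped=6.0,
        other_vehicle_speed_mps=7.6,
        closing_speed_mps=7.6,
        signal_state="red",
    )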
I'm sorta unconvinced we'll actually see any benefit. Humans are notoriously picky about their privacy; The ability to get black-box data for insurance purposes has been around for years, but most people would rather not, for good reason, have their insurers poking around their driving habits. I expect some backlash against cars collecting this data on their driving neighbors. What if the self-driving car isn't involved in an accident, but witnesses one? Can the courts subpoena google for the two driver's locations and velocities? That will be a fantastic NYT headline: "Are Google cars spying on you?"
Look at public health data. There's a vast goldmine of data that could be collected, that could track, trace, and storm-warning diseases, but that is for legal reasons hidden behind a confidentiality barrier. I'd like it to be a simple check if any of your partners were STD positive; This is currently information that is hard to get reliably (Sure, your partner can hand you test results, but not verifiable ones; The clinic won't attest to it if you call to reference your partner's results, so you can never be certain it's not a clever photoshop). This is data that has direct, tangible impact on those around you, and in many states it is a crime to not reveal certain STDs. Still, these spread, because we're afraid of making the STD-infected social pariahs, and I can't see a world where we don't have the same problem with bad drivers.
All of this information can be derived from a dashcam, and they're ubiquitous today.
And yes, in 10 years time there will be no such notion as 'private travel'. Between your phone's GSM/CDMA, bluetooth and wifi signals and dashcams, security and traffic cameras, there will be dozens of parties who monitor every move any person makes outside of their home in urban or suburban areas (be it by car, foot or bike). With different forms of computer vision, that data is sorted and linked to other recordings of objects and there will be dozens of databases that have exact information on where everybody is, 95+% of the time.
One of my pet research projects (although I'm not making much progress in terms of actual work or publications) is a system for tracking 'people' in a generalized form. It's basically a concept of 'strands' of information along different axes ('location', 'finance', 'internet', 'health' and a few others) which can be joined by an overarching matching-algorithm infrastructure. Furthermore, each 'strand' has a 'source', and one can join datasets by deciding 'this source I know is reliable, take this as truth' or combine several less trusted ones by using voting or Bayesian inference. It's basically a formalization of 'doxing' - an overarching framework to work with personally identifying data from sources of varying degrees of trustworthiness.
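As a very rough prototype, the matching layer could look like this; it's just my reading of the idea, with simple trust-weighted voting standing in for full Bayesian inference:

    # Rough sketch of the 'strands' idea as described above, not the author's design.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Strand:
        axis: str        # 'location', 'finance', 'internet', 'health', ...
        value: str       # the observed datum, e.g. a place at a time
        source: str      # who reported it
        trust: float     # 0..1, how much we believe this source

    def resolve(strands, axis):
        """Pick the most-believed value on one axis by trust-weighted voting."""
        votes = defaultdict(float)
        for s in strands:
            if s.axis == axis:
                votes[s.value] += s.trust
        return max(votes, key=votes.get) if votes else None

    observations = [
        Strand("location", "Main St & 5th, 09:14", "dashcam_42", 0.6),
        Strand("location", "Main St & 5th, 09:14", "cell_tower", 0.5),
        Strand("location", "Elm St, 09:14", "wifi_probe", 0.3),
    ]
    print(resolve(observations, "location"))   # the two agreeing sources outweigh the third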
I'm sure many people are working on something like this already, but in private and with the goal of using it against 'us' (for a broad definition of 'us'). The only way (ok, maybe not 'only' way, 'one of' the ways) to defend is to acknowledge that privacy is dead and to develop offensive capacities; much like the only last resort against tyranny is a well-armed populace.
Where in Europe? They certainly aren't here in Portugal nor in Spain, and I don't remember seeing them in Belgium, which I've visited recently.
As for the legality, I wouldn't be so sure. Owning them is certainly legal, but filming the public street indiscriminately is usually considered against the Data Protection Directive, and I don't see why would dashcams be excluded.
Across Europe. OK, I guess 'ubiquitous' might be an overstatement, but I regularly see cars with dash cams in Belgium, the Netherlands, France and Germany, which is where I have driven the last few years. I can't find any Europe-wide sales data, but I don't see why there would be huge differences in other countries. Also, they're readily on sale in e.g. Spain, Portugal and Hungary (just three countries I checked). Which of course doesn't necessarily mean that they're being bought en masse, but it does give an indication.
I rarely see them, but I was told that stores do sell a few dashcam kits. I wish it was a standard practice, I couldn't care less about privacy, people are far too crazy on the roads, even on parking lots (I'd even buy 4 cams to cover all sides).
They might not be illegal here in Germany, but dash cam videos are regularly thrown out as evidence by courts. (And that's quite surprising, the bar is very high for that to happen in Germany. No fruit of the poisonous tree here.)
Sure, so what? They're still mass surveillance even when their footage is not distributed, or used in court. That's my whole point - surveillance that is kept secret is the real problem, and anyone with a few $k or less will be able to set up a vast surveillance database in a few years - very little of which is covered by laws today (yes, some countries require you to register yourself with the privacy watchdog if you hold records of people, but how would anyone check? They're mostly toothless today already, in many places)
No they're not. They've been denied as evidence a few times in court, and you can't publish images/video with recognizable people on them, but that's not a problem if all you want to do is monitor/track people.
> I'd like it to be a simple check if any of your partners were STD positive; This is currently information that is hard to get reliably (Sure, your partner can hand you test results, but not verifiable ones; The clinic won't attest to it if you call to reference your partner's results, so you can never be certain it's not a clever photoshop).
If you want verifiable results, you can get them. Get yourself tested at the same place. Accompany your partner to pick up their results. You'll strain the relationship, but you can have the proof.
I'm thinking the same thing, but they could even take this data capability out of the self-driving car and apply it to a normal car; even in that case it would be a huge change that could be incredibly beneficial!
Given the time we’re spending on busy scripts, we’ll inevitably be involved in writing bad copy; sometimes it’s impossible to overcome the realities of speed and deadlines. Thousands of minor creative failures happen every day in typical American copy, 94% of them involving human error, and as many as 55% of them go unreported. (And we think this number is low; for more, see here.) In the six years of our project, we’ve been involved in 14 minor grammatical errors of more than 1.8 million words of autonomous and manual writing combined. Not once was the autonomous writer the cause of the error.
(CA regulations require us to submit CA WG form OL316 Report of Copy Error Involving an Autonomous Writer for all errors involving our writers. The following summaries are what we submitted in the “Copy Error Details” section of that form.)
Have any of the AV teams said anything about testing in hazardous road conditions yet?
Driving rain, thick fog, heavy snow, sleet, the like. Maybe the answer they'll give is, "Don't drive, you dummy," but that's not really an acceptable solution for most people.
When you look at just the miles driven in June (by comparing the totals with those in the May report), more than two-thirds were driven autonomously this month.
I think it's more interesting that people who relinquish their mobility to cars end up thinking that cars are the only potential source of mobility, forgetting that walking, mass transit, and bicycling are all viable means of transport for most people.
Well, that already sort of happens to pilots. They spend so much time on autopilot that they become less and less skilled at manual flying. I'm not saying that they are not busy in the cockpit, but there was a case where a plane crashed because the pilot didn't know how to regain control of the plane after the autopilot disengaged.
What happens after an incident with an AV (let's say I bumped into it)? Do I just wait and call the police or is the AV reporting it automatically? It's a little odd to not have a person to interact with in that situation.
The humans do seem to drive a little crazy on California St. in Mountain View! I've had a near miss with a negligent driver on that street myself. Nice to see that they're moving the cars away from the quieter (more suburban) southern parts of the city near the hospital.
I have yet to see the new prototypes, which are allegedly making tours as well. Does anyone know what streets they frequent?
I've seen the new cars in the San Antonio / Middlefield area. They look small - golf-cart sized. When you can use them via an Uber-like service, I'm in.
Software and humans are completely different. Humans are, in a way, very fault tolerant. In software, one erroneous line of code can literally crash the car. In a human, one stray neuron probably won't do much harm.
The problem is: the car interacts with its environment. When a new algorithm kicks in, the car will behave differently, and as a result the input data will be different, and thus the recorded data will have lost its relevance.
I've read reports that make me dread being behind an AV at a 4-way stop or a blinking red light - one passenger described waiting at a light for several minutes while drivers honked angrily behind them. Maybe AVs will encourage traffic planners to implement more roundabouts, which might be less prone to other drivers taking advantage. Or maybe AVs just need to get better at estimating the speed of oncoming traffic.
The description of June 4 sounds exactly like an accident where I was driving the car that touched the one in front of me. My foot slipped from the brake.
The driver in front reported injuries and asked damages, I'm still paying the consequences on my insurance rate. I still remember her face vividly, I noticed the moment when she started doing mental calculations about how much she could get out of that.
I hope she gets what she's got coming to her. Self-driving cars can't come soon enough, especially for the sensor arrays mounted on each car that will provide ample proof of what accidents are really like for insurance purposes.
So... does anybody know what truck manufacturers are doing? As far as I can tell it will take at least 20 years before AVs can drop the steering wheel and deal with scenarios like a police officer routing traffic around an accident site.
On the other hand, even current Google tech should allow for soft* road trains made from 2-4 trucks and only one (active) driver. Also, the price of the new hardware would have a smaller impact on already expensive vehicles.
*without physical connection like in Australian ones
Daimler have a truck https://news.ycombinator.com/item?id=9555295
I'm not sure about the police routing around an accident thing. Google's cars can already recognise cyclists hand signals.
Side note: Google should whip up a Google-Docs-only URL shortener... so people know it's a document and not a risky link, but it's still short enough to share easily...
I think the only way to have safe driving with autonomous vehicles is to have roadways that are autonomous-vehicle-only. When autonomous vehicles become commercially viable, having dedicated autonomous freeways could really remove a lot of the randomness and danger of driving. We might be 100 years away from this prospect, but it would be great if we lost no more lives on freeways to car accidents.
I think Google can do better than just "not at fault". Many of us have avoided (chain reaction or other) accidents by evading a would-be rear-ender (although this can backfire in that if you use up your cushion you may then be pushed into the obstacle ahead in addition to being hit from behind). Also, there are times where overly cautious braking causes chain reactions and collisions (at least, an extremely competent+cautious friend of mine told me he caused one once - maybe it's rare).
Similarly, although we might do better to choose a world with different incentives (like one where 1/3 or more of cars are rigid rule-entitled automata), many of us have (selfish-rationally) made room for incompetent lane changers even if it means violating lane rules ourselves.