The same entities who would be responsible if engines started randomly falling out of cars: automakers.
If driving becomes a feature of the car itself and that feature fails in a way that causes an accident, there's no other answer. Maybe they'll find some kind of loophole where the self-driving feature is only covered for X years under a warranty? I don't think it would be too far-fetched for them to argue that the sensors for automated driving can deteriorate over time, at which point the consumer's insurer would become responsible for damages caused by any failure.
That being said, it might be in our best interest (or Google's?) to shoulder some of the liability if this issue would otherwise be a major hurdle preventing the adoption of self-driving technology by automakers.
The big problem is that current liability is relatively limited by fleet diversity.
Toyota has enough variation in their fleet that even if a floor mat can cause a stuck accelerator, it's only an issue for certain models of certain cars. Similarly, GM only had certain models with sidesaddle gas tanks, so even if someone causes an outcry with a doctored video, it only applied to some models of some vehicles.
But if Google had a bug, would they have the same variation in driving logic to limit the scope of the fallout? Wouldn't it more likely affect all of their cars?
What if an unforeseen environmental event compromises sensors in a way they didn't code for? Say a rare particulate cloud due to a mining operation, or a building collapse, that doesn't blanket the sensors with noise, but merely distorts their readings enough that a large spate of crashes occurs.
What if a more mundane overflow or date bug causes enough slightly-less-optimal decisions to be made that there's a jump in accidents that can be traced back to that bug?
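To make that concrete, here's a minimal, purely hypothetical sketch (all names and numbers invented) of how a mundane wraparound bug could slip through: an embedded sensor unit timestamps readings with a 16-bit millisecond counter, and a naive speed estimate forgets the counter wraps.

    # Hypothetical: the tick counter wraps every 65.536 seconds.
    WRAP = 1 << 16

    def speed_estimate(pos_then_m, pos_now_m, tick_then, tick_now):
        dt_ms = tick_now - tick_then                 # BUG: negative across a wrap
        return (pos_now_m - pos_then_m) / (dt_ms / 1000.0)

    def speed_estimate_fixed(pos_then_m, pos_now_m, tick_then, tick_now):
        dt_ms = (tick_now - tick_then) % WRAP        # wrap-aware difference
        return (pos_now_m - pos_then_m) / (dt_ms / 1000.0)

    # Two readings 100 ms apart that happen to straddle a counter wrap:
    print(speed_estimate(0.0, 3.0, WRAP - 50, 50))        # ~-0.046 m/s: nonsense
    print(speed_estimate_fixed(0.0, 3.0, WRAP - 50, 50))  # 30.0 m/s: correct

Nothing crashes outright; the car just quietly makes a worse decision whenever the timing is unlucky.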
What if someone discovers a 'blind spot' in Google's logic that could be leveraged to reliably cause relatively low-risk accidents, and black-hats use it to perpetrate a sudden, distributed insurance-fraud scam? Would Google even be able to withstand a "massive jump in Google car accident rates" story until the truth was ferreted out? And what happens once that gets shared online? Can Google survive the public realizing their cars can be scammed unless they're diligently watched? (Which defeats the entire purpose of having purchased a driverless car.)
The question is: at scale, is that a risk a company can reasonably carry on its own? And is that a risk we want to see a court handle in the usual way?
A massive manufacturing failure run through the courts can ruin an automaker. If such a failure were related to a risk inherent to driverless cars, it might ruin the industry if we allow it to run through the courts in the usual manner. (An automaker is ruined, premiums for the remaining makers explode, it becomes near-impossible to turn a profit, the market collapses.)
But the net impact of driverless cars might include broad social gains, rather than simply direct profit to the maker. [1] So if the risks materialize into a large charge quite early on, absent some mediation by society writ large, allowing that to be processed by the courts normally could spell the end to the industry and leave society at a net loss.
[1] Driverless cars are likely to send impaired/distracted-driving accidents trending towards zero. That's a social gain even for people who never own a car. But it's a gain that wouldn't directly translate into profits for the makers of driverless cars, profits they could use to offset the incredibly high insurance premiums they would have to carry to survive a fifty-year-storm sort of risk showing up five or ten years into the life of the company.
I understand that the scale might be an issue, but I really think you are worrying about it too early.
1st, Google will likely transfer all IP to a subsidiary in which they own all the equity.
2nd, if a lawsuit takes place over worldwide failures of a specific make/model of car, it will likely take down one manufacturer, not the whole industry.
3rd, in the event that situation does raise automakers' risk profiles (more likely expressed as higher required rates of return for stock investors than as higher insurance premiums), there will still be somebody who will try it. The reason is that if someone succeeds, they will have 100% of the car market while everyone else has 0%.
4th, if Google's subsidiary is sued and goes bankrupt, the IP will be sold in an IP auction which will likely go to someone else who will create self driving cars.
The end game is self-driving cars; how we get there is anyone's guess, but edge cases will be handled by courts. I can't imagine any scenario which cannot be resolved.
In the event all scenarios fail, legislators will hopefully step in and create an imperfect, but practical framework.
>If driving becomes a feature of the car itself and that feature fails in a way that causes an accident, there's no other answer.
That is one answer, but it is not the right answer if you want to encourage driverless-car use. A better answer is to put together a set of standards to govern the proper behavior of vehicle operating systems (VOS). These standards could encompass a proper dashboard, the ability for the driver to take control, and how a car should act under various scenarios (e.g. some key sensor stops working on the highway). If the VOS follows the proper behavior, then there is no liability to the automaker; if it doesn't, the automaker is liable.
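Purely as a hypothetical sketch of that idea (every name here is invented, not drawn from any real standard), the codified "key sensor fails on the highway" behavior might boil down to something like:

    from enum import Enum, auto

    class Action(Enum):
        CONTINUE = auto()             # degraded but still within spec
        ALERT_AND_HANDOVER = auto()   # hand control back to the human
        MINIMAL_RISK_STOP = auto()    # pull over and stop safely

    def on_key_sensor_failure(redundant_sensor_ok: bool,
                              driver_responding: bool) -> Action:
        """What a VOS standard might mandate when a key sensor dies at speed."""
        if redundant_sensor_ok:
            return Action.CONTINUE
        if driver_responding:
            return Action.ALERT_AND_HANDOVER   # liability shifts to the driver
        return Action.MINIMAL_RISK_STOP        # automaker liable only if this fails

    print(on_key_sensor_failure(redundant_sensor_ok=False, driver_responding=False))

If the logs show the VOS took the mandated action, the automaker is in the clear; if not, it's liable.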
Agreed. I think it also means that driver-less cars will have copious amounts of telemetry and video/audio recordings, so they can assign blame to drivers of other cars, when possible.
However, engines falling out of cars do not injure other parties, while driverless cars have the possibility of injuring others. Currently the blame would reside with the operator, but I am pretty sure certain Texas courts would let the creator of the car, the software, the electronics, and whoever else has deep pockets get sued and lose.
Unless they totally remove the ability for the driver to take over control of the car, I do not see how you remove fault from the person in that seat.
On the contrary, I think that if you, as the owner of a self-driving vehicle, put it on the road, you are taking responsibility. BTW, this exact problem already exists: Audi makes a car that can park itself. Is Audi responsible if it crashes? Nope. If you don't want to take responsibility for that feature, don't use it.
I suspect that we won't jump from fully manual cars to fully automatic cars. It will likely be similar to the auto-park function: we'll see more and more cases where automation is available at the request of the driver.
As car makers add these new functions, we'll start to get an idea of how liability will work with them. Some driver is going to get into an accident using one of these features and then claim that the automation made a mistake. That should result in a very interesting trial.
It might end up taking a very long time to have the manual controls removed simply due to liability: as long as you can control it, it's your responsibility. Without the controls, it's much easier to argue that Ford/GM/Toyota/Google/etc. is really at fault for the accident.
In the 1980s, the rising threat of liability prompted vaccine manufacturers to pull out of the business. So Congress stepped in and created a new system for people who are injured by vaccines. Cases are handled in special hearings, and victims are paid out of a fund created by a special tax on vaccines.
That is interesting. I can't imagine the car manufacturers will be able to handle the costs of the liability by themselves (using our current model of car insurance), and you have to wonder if they should have to (how much is the "driver" responsible for? Is a "driver" in Manhattan riskier than one in Wyoming?)
It's also interesting that we might not even care if it weren't for the fact that we all have to pay for car insurance now. I mean, nobody wonders who's going to cover the liability of a subway system. Perhaps this will wind up a government service, out of reach of the personal injury lawyers. Or like guns, which enjoy some specific protections from lawsuits.
I know that small airplane manufacturers stopped production in the 80s for the same reason (liability costs) and didn't resume until a special liability limitation was passed in Congress (General Aviation Revitalization Act, 1994).
I think it shows a significant failure in our tort system that so many industries are crippled by liability unless they receive special treatment, and even then, sometimes it's not enough.
Vaccines don't compare to cars that well, since mere exposure to a car doesn't really cause injury or death, unless it's in motion or you were inside it while it was running (carbon monoxide poisoning).
Although so many people drive, and there are so many accidents, cars aren't an imminent threat to all people. Mere exposure to a disease is often all that's needed for it to spread to an individual's family and their families and friends, and so on, until we have an exponential infection rate. This puts vaccines on the critically-needed list, so the government stepped in to mitigate the greater harm of lacking them altogether.
I still think it will be a case of shared blame for driverless cars, although my hope is that blame will be largely unnecessary. If driverless cars can reach the same reliability as autopilots in aircraft, this will save lives.
The mere exposure to cars can cause injury or death. How about getting run over on the street? How about getting poisoned by exhaust fumes? It is an imminent threat to all people. Driverless cars are likely to be safer, and hopefully eventually cleaner, but ask the people who had to breathe that crap back when there was lead in gas whether exposure to cars is dangerous or not.
> I can't imagine the car manufacturers will be able to handle the costs of the liability by themselves
Manufacturers are already liable for the damages caused by crashes, etc., due to manufacturing defects. Heck, several of the textbook cases on manufacturer liability for damages resulting from defective products involve automakers.
> I mean, nobody wonders who's going to cover the liability of a subway system.
Actually, they usually do, especially if the owner and operator of the system are different. The difference is, most people are neither the owner nor the operator of a subway system.
> Perhaps this will wind up a government service, out of reach of the personal injury lawyers.
>Heck, several of the textbook cases on manufacturer liability for damages resulting from defective products involve automakers
The difference is that very few accidents are the result of a defective automobile. In the hypothetical world where most (or all) people ride in driverless cars, the car manufacturer would be theoretically liable for EVERY car accident.
> The difference is that very few accidents are the result of a defective automobile.
Doesn't really change anything fundamental to how the system works.
> In the hypothetical world where most (or all) people ride in driverless cars, the car manufacturer would be theoretically liable for EVERY car accident.
Probably not; maintenance is still going to be a big part of correct operation, and owners are almost certainly still going to be liable for accidents caused by a vehicle not being maintained in a safe operating condition. Likewise, the person giving commands to the car, as the legal "driver" whether in direct control every second or not, will likely remain initially liable for harms caused by the vehicle operating outside legal rules for speed, direction of travel, etc. That's especially true if, as is likely for the first many years of self-driving-capable cars, the car allows manual as well as automatic operation.
They may have the opportunity to *prove* that the improper operation resulted from a manufacturer's defect, and shift liability to (or seek contribution from) the manufacturer. But, again, that's true now, too.
Why would the manufacturer be liable? An automated car crashing isn't necessarily a design defect. Driving inherently creates risk, whether automated or not, and that risk should be carried by the owner of the car and his insurer.
You're riding along in your self driving car, and it suddenly veers off to the side, killing a pedestrian (or a pedestrian jumps in front, killing himself in the process). Who is at fault there? It's not black and white, but I'm going to say it's more on the manufacturer than the owner - either way, someone is getting sued and we need a legal/financial system that can handle the new paradigm.
The risks to manufacturers today are well understood by both the manufacturers and their insurance companies. We have decades of history to look at.
With driverless cars, on the other hand, who knows? What will the accident rate be? Will accidents be minor or major? How many can be properly blamed on the operator? How many cars will manufacturers be responsible for? Will these accidents be spread out over time, or lumped together one way or another? Is the manufacturer liable for an accident caused directly or indirectly by another driver, human or computer?
Before driverless cars can happen, those questions need to be answered or mitigated one way or another. And it will have to be done in a way that is financially acceptable to those involved. I expect lots of legislative wrangling over it.
Either way, each accident will have orders of magnitude more data to describe the situation than we have now. Imagine an airplane black box on steroids. Blame will be efficiently placed, I think.
Liability would be determined by a court on a case-by-case basis. The Deepwater Horizon oil spill is a good guide: multiple manufacturers involved in a difficult court battle.
1) First, you would determine which vehicle malfunctioned. If a tire blowout is the malfunction, rather than the automated driving system alone, the court would need to assign damages to A) the tire manufacturer for the blowout and B) the automation company for not designing a system that detects the blowout and prevents the crash...
In this example, 100% of the blame may fall on the tire manufacturer if it is determined that there was no way for the automated system to do anything else (say, a nearby cliff on one side and an oncoming truck on the other).
Most cases should assign damages based on percent of fault.
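As a toy illustration of percent-of-fault splitting (parties and numbers made up):

    def split_damages(total_damages, fault_shares):
        """fault_shares maps each party to its fraction of fault (summing to 1)."""
        assert abs(sum(fault_shares.values()) - 1.0) < 1e-9
        return {party: total_damages * share
                for party, share in fault_shares.items()}

    print(split_damages(500_000, {"tire maker": 0.7, "automation vendor": 0.3}))
    # {'tire maker': 350000.0, 'automation vendor': 150000.0}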
Over time, legislators will probably set rules to make legal actions happen faster and more efficiently.
This entire process will be lengthy, expensive and challenging. But the overall net benefit of going through it will be driverless cars, and we can all agree that the benefits of driverless cars will be amazing.
Note: Google Chrome says the word "driverless" is misspelled; they should add it to their dictionary.
In theory, self-driving cars should all perform equally within the same model range and, probably, across the same manufacturer's fleet: you'd assume they'd put the same system into each car and keep it updated to their latest best effort. In fact, this principle should perhaps be enshrined in law.
Therefore, individual owners do not need to be taken into account. Simply calculate the average cost of an accident and multiply it by the likelihood of that maker's system causing a crash, and send the owners a bill each year.
It doesn't even really need to be determined who "caused" the crash in each specific instance. It should be plainly obvious, statistically, which self-driving systems are more likely to be involved in accidents; premiums would be weighted to reflect that, and this would encourage a virtuous cycle of incentivised improvement.
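In code, the proposed premium is just an expected-cost calculation. A minimal sketch with invented numbers (a real insurer would add expense loadings and uncertainty margins):

    def annual_premium(avg_accident_cost, crashes_per_car_year, loading=1.2):
        """Expected claim cost per car-year, times a simple insurer loading."""
        return avg_accident_cost * crashes_per_car_year * loading

    # Maker A's system: one crash per 200 car-years, $20,000 average cost.
    print(annual_premium(20_000, 1 / 200))  # 120.0 dollars per car per year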
But some people will be on the road more often than others, so the likelihood of an incident will be different.
I think it would be much better to add a premium onto the fuel the car uses and all claims would go to a central organisation.
Self-driving cars should level out the risk we currently see across driving demographics, i.e. a teenager's car is as likely to crash as that of someone who has been on the road for 20 years. So it's not like you'd be paying more "liability coverage" than you should.
Ah yes, that is a good point. Fuel use would be a reasonable start as a proxy for kilometers driven, but why not just go the whole way and measure the actual kilometers?
I agree about the central organisation. I don't believe private insurance companies add much value and with the removal of personal driver fault they add no value at all. There is no reason it can't be managed via a not-for-profit, competent central organisation similar to any other aspect of registration.
In that case, what's the incentive for a manufacturer to build driverless cars at all? They sell plenty of cars right now, with none of the increased liability risk.
One thing the article didn't discuss: the massive amount of data that driverless cars will collect and store. The "automotive black boxes" will help us analyze and simulate accidents in ways we just can't now. As the responsibility for driving switches to the manufacturers, so will the liability. However, with fewer accidents and better data (when an accident does occur) the industry should be well positioned to bear the burden.
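For a sense of what such a box might hold, here's a minimal sketch (fields and rates invented, not any real automaker's format): a fixed-size ring buffer of recent vehicle state, frozen when a collision is detected.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Frame:
        t: float                   # seconds since boot
        speed_mps: float
        steering_deg: float
        brake_pct: float
        nearest_obstacle_m: float

    class BlackBox:
        def __init__(self, seconds=30, hz=100):
            # Keep only the most recent 30 seconds, sampled at 100 Hz.
            self.frames = deque(maxlen=seconds * hz)

        def record(self, frame: Frame):
            self.frames.append(frame)

        def freeze(self):
            """On impact, snapshot the buffer for investigators."""
            return tuple(self.frames)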
It's hard for IT folks NOT to download and apply the latest release to get 'that' feature. I have learned over a period of time to wait for a stable release. It's possible that a carmaker releases a bug which, let's say, doesn't apply the brakes in time. What happens if 100,000 people download the latest release automatically at 3:00 AM? Who is responsible?
Some obvious implementation details that spring to mind...
Whoever owns the code, owns the car's liability. It is in the best interests of the manufacturer to be able to detect a vehicle not running a signed release, and require an explicit liability acceptance from the owner. Doing so whenever registration changes hands is probably a requirement, too. Or disable the ability to change the code unless you ask for an individual key that requires you to accept liability when you use it.
No reason to roll out to all the 2017 Toyota AutoCorolla Xv examples at once -- after QA acceptance, do staged roll-outs to increasing group sizes. Best if the first few hundred were all Toyota employees.
Roll-back to the previous version or a "known good" release, should be simple and obvious.
The automakers could do what some web apps do: roll out new releases to a percentage of users. They could start with cars in areas where accidents are less likely to cause injuries (not densely populated, flat terrain).
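A minimal sketch of the staged-rollout mechanics, assuming (purely for illustration) that each car is assigned a stable bucket by hashing its VIN:

    import hashlib

    def rollout_bucket(vin: str) -> int:
        """Stable pseudo-random bucket in 0..99 derived from the VIN."""
        return hashlib.sha256(vin.encode()).digest()[0] % 100

    def should_update(vin: str, rollout_pct: int) -> bool:
        return rollout_bucket(vin) < rollout_pct

    # Week 1: rollout_pct = 1; week 2: 10; week 4: 100, with crash telemetry
    # reviewed at each stage before widening.
    print(should_update("JT2AE92E0G3141592", 10))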
I actually think the entire problem is that cars currently have drivers. Today, drivers are required to behave reasonably, but because there is a human in the loop, distraction (or malice) means they act less than optimally. A driverless car, on the other hand, always performs as designed, so there are far fewer possible failure modes. In fact, I think there are essentially only two: failures due to improper testing and design flaws (the manufacturer is responsible), and failures due to neglected maintenance or genuinely unforeseeable conditions (the owner's insurance is liable). So a driverless car should be viewed like a malfunctioning coffee machine rather than like a car involved in an accident.
Auto makers and car insurance companies will not go quietly into the night when it comes to allowing drivers to shift liability. They both profit too dramatically from things as they are to just let drivers suddenly stop carrying the coverage.
We might very well wake up to a comedy of drivers still having to carry their full insurance on their driverless cars (it'll be argued that it's part of the privilege and responsibility of owning a vehicle).
This is not a problem: if other cars running the same program don't crash, it's not the programmer's fault, and the owner of the vehicle will pay.
But if others also crash in similar situations, then the programmer/company will pay.
That said, I will never get a self driving car. I like automation in everything except cars. Driving should be a fun experience.
In a few short years, laws will prevent you from driving on the public roads without some level of automation.
There are something like 30k+ deaths on the highways per year these days. That's the equivalent of all American Vietnam War casualties every two years. The highways are more dangerous than any war the US has been in since WW2. And a very large proportion of those deaths are "innocent": not at fault in their death in any way.
There really is only one reason this is accepted by the public. Because there is no alternative. There is no policy you can establish that would stop the deaths that would not also inconvenience everyone too much.
Once self-driving cars come down in price enough to be a real possibility, every highway death where a non-automated car was involved turns from something sad but inevitable into something that is squarely blamed on the existence of that traditional car. "This wouldn't have happened if Datsundere hadn't put his personal enjoyment above the safety of everyone else on the road."
Since car makers will be happy to lobby for this (why not, they get to sell everyone something new), since there will be enough relatives of dead people that grassroots support will be strong, since the federal government has control over roads, and since there are no strong lobbies to oppose it, this will pass as a law as soon as it's actually feasible to replace/refit all the cars on the road.
The best you can hope for is a car that will allow you to turn off the automation in situations the car deems safe, with the car automatically taking over if it feels like something is wrong.
Beyond that, the only place you will have a fun driving experience in the future is on the track.
I think you underestimate the extent to which people truly, honestly don't believe that it will happen to them, provided they are in control of the car. A self-driving car just doesn't give you that assurance. Besides, who would trust their adorable children, the lights of their very life, to a robot? A mindless, unthinking automaton, a heartless killing machine with no soul?
By the time legislation is being contemplated, everyone will have seen groups of autonomous vehicles moving in near perfect synchronization through lights, around obstacles, etc. The statistics will be overwhelming. The evidence will be all around us and only the last few hold-outs will need to be convinced with financial incentives (insurance rates) or legal ones.
I would misquote Linda Hamilton from Terminator 2:
"The [car] would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die, to protect him."
You won't have a choice. At some point, automated driving cars will drive better than people statistically speaking. They'll save more fuel and traffic will flow much more efficiently.
Imagine if 4+ vehicles make it through every busy intersection light because autonomous vehicles are never busy daydreaming or checking text messages. Imagine if the oscillation of busy traffic on the interstate basically went away so you didn't have random stop-n-go (think metering lights but much more efficient).
Once some of these things begin to be proven - something that Google and other manufacturers will be guaranteed to demonstrate early on - legislation will quickly follow mandating that all new cars be autonomous.
Liability insurance for non-autonomous vehicles will skyrocket. Driving yourself will be something you only do on specialized entertainment tracks and video games.
How would you explain planes, then? We have had autopilot for a while, but there is no law that planes be autonomous.
Anyway, the question is who is liable. If we take the plane industry as an example, then if the autopilot is at fault the seller is liable. It is not reasonable for an autonomous vehicle to say 'I give up, have the controls back' whenever it is about to crash. Especially not if the would be driver cannot drive (blind, elderly, young children, etc...)
The only issue is that Google and other autonomous-car manufacturers don't want to be liable, because the potential liability will be enormous, but they want autonomous cars.
The prediction is that those companies will try/are trying to get legislation to shift liability to the owners of the cars possibly as part of their insurance (autonomous car coverage).
Another question: if a machine kills a human, even by accident, is the entity who created the machine at fault? Again, it will be a matter of politics and power rather than right and wrong. If it were otherwise, corporate officeholders would have long been in jail.
The FAA mandates lots of safety equipment to be present and used on commercial aircraft. It's worth noting that plane autopilots are not at the door-to-door level we're talking about with autonomous cars. Also, pilots are much better trained than drivers, so there's less impetus to get them away from the controls. Sooner or later, though, planes will be flown completely by wire. When it's demonstrable that autonomous planes fly better than real pilots gate-to-gate, that will become legally enforced.
> Anyway, the question is who is liable
In this sub-thread, I was just addressing the previous poster who said that he would never own an autonomous vehicle.
Just to throw my two cents in... in all likelihood, Google will need to demonstrate that their vehicles are safer than "normal drivers". When you buy a Google car, you will need to buy auto insurance that covers an autonomous vehicle. Depending upon actuarial tables, that insurance will cost more or less than your current insurance. Google's liability would likely only extend to extreme defects in the AI and/or death-causing defects that are systemic and can be shown to be the result of negligence, or that were not disclosed to customers.
> politics and power rather than right and wrong
I object to the immediate contention that you or anyone else really knows what's right or wrong at this point. Your statement above "to shift liability to the owners of the cars" gives away where you think liability lies. I don't think that's at all clear. If Google creates all the cars from here on out and deaths on the highway are cut in half... does that mean that they're liable for the deaths that still occur? That number of deaths would have occurred anyway. Google did nothing wrong, they actually saved tens of thousands of lives. So what's that worth? Suing them out of existence for the deaths they didn't prevent?
To realize those lives saved, we'll all need to pay for our own insurance for most accidents.
>That said, I will never get a self driving car. I like automation in everything except cars. Driving should be a fun experience.
Yes, but you don't have to live in a binary world in which you have to choose one or the other. Imagine you get stuck in traffic for an hour or so (a common occurrence in most cities); wouldn't it be nice to turn on autopilot so that you can sit back and browse the web on your iPad? When you clear traffic, you turn off autopilot and continue to enjoy your driving experience. Similarly, as you're driving on the road, you remember you wanted to send an email to your mom, so you turn on autopilot for 5 minutes to write your email, and then continue on your merry way.
If I'm driving along, over-correct on a turn and smash into your fence, it's my responsibility. If I do nothing wrong, but my tire blows out, I'm still responsible.
If my tire blows out because the repair shop over-inflated it, or because there is a material defect in the tire, that's where responsibility lands.
The question is, when my robot car overcorrects (i.e. driver error), am I responsible, or is the manufacturer? And in what circumstances is the manufacturer negligent?
I'd dare say that once automated cars move out of California and Nevada and start showing up in Minnesota and New York, things will get complex quickly.
I get that liability rules would not be simple, but how would they be different? If a car manufacturer sells cars whose brakes fade quickly and a collision occurs as a result of a driver not being able to stop fast enough, whose fault is it? It's not immediately obvious, but we have a ton of existing law to deal with situations like that.
I mean go through your examples. If your self-driving car crashes because you maintained or modified it improperly, it's your fault. If your maintenance shop did the bad maintaining or modifying, it's their fault. If there was a design or manufacturing defect, it was the manufacturer's fault. If it was nobody's fault (e.g. meteor hits car) then the insurance company pays if you insured against it and nobody is "liable" (i.e. property owner takes the loss) if you didn't.
What's the new thing? Why is it different that a car crashed because its computer improperly decided to turn left vs. because its mechanical steering mechanism failed?
This seems like a situation where something has been discovered to have a computer in it and now some people think we have to throw away the existing rules and start over for no apparent reason.
If you are not in control of the car at the moment of the accident, then the manufacturer is liable for providing faulty equipment.
Good intentions do not do away with liability.
For that matter, why would you even purchase insurance for a vehicle like that? It should be supplied by the manufacturer.
Possibly even the car should not be owned by you, since you are not 100% in control at all times. But that's debatable, since people are attached to car ownership.
If a street racer modifies their car? If you didn't perform routine car washing, so the sensors were unable to sense? The car was in a wreck last month and the repair shop didn't do as good a job as they should have? The car is 30 years old and stuff naturally wears out? There are so many variables that "the manufacturer pays" is maybe a first approximation to an answer, but nowhere near the whole answer.
All of that is part of the reason why I will never own a robot car until they hammer out those details. Currently, when I buy a car, the manufacturer/dealer specifies what they are responsible for and how long. The rest is on me. So I have to purchase driver's insurance.
A robot car? I'm not the driver. The manufacturer is, by robot proxy. So, perhaps the manufacturer should be insuring the car. After all, at that point, I'm just a passenger.
Scale. They won't do it if they can't make it work out financially. And who pays them? There are parallels with the banking industry here.
If a banker insures a derivative, and it goes bad, no big deal. If every banker insures the same type of derivative at the same time, and they all go bad for the same reason, you have economic meltdown. We don't want that to happen with cars.
>Scale. They won't do it if they can't make it work out financially.
By all accounts driverless cars are expected to significantly reduce the number of collisions. I don't see how that could be less viable than the existing system.
You would hope so. But you cannot deny that the system will have to be different. The interconnected nature of the cars will matter. People may object to insuring the manufacturer of the software responsible for the crash. There is a lot to sort out that makes it a pretty tricky subject for the insurance companies, manufacturers and owners. I'm sure it can be done, but it's a big thing.
I'm thinking in the context where every car is driverless. It's probably even more complex when they aren't all driverless.
> People may object to insuring the manufacturer of the software responsible for the crash.
What people? The manufacturer either self-insures or buys insurance on the open market, just like everyone else with a civil liability risk, and builds the cost of that insurance into the price they charge for the work. I mean, public opposition could, in theory, lead to a law against third-party insurance for this use (though it's hard to see: given the wide range of things for which insurance is marketed, if public opposition were going to ban insuring industries, there are a lot of existing industries you'd think would run into problems way before vehicle-control-software manufacturers), but nothing can stop them from self-insuring. There is really nothing special or unusual about this.
> I'm thinking in the context where every car is driverless. It's probably even more complex when they aren't all driverless.
It's probably not at all that complex in either case. As now, the operator is probably liable for damages resulting from operation outside the bounds of legal parameters, including failure to maintain the vehicle in safe operating condition, and the manufacturer is probably liable for harms from manufacturing defects, and these liabilities often overlap.
The particular types of manufacturing defects that are possible change, but that doesn't really change the basic equation.
The total number may not be higher. But you may have one manufacturer that introduces a bug in a firmware update that kills a huge number of people in one go. The next week they issue an update to fix the problem.
Does it make sense to increase the driver's insurance premium because they (unknowingly) bought a faulty product?
If the insurance market trusts it then yes. This isn't a hard concept. The same lines of reasoning govern planes with autopilot, or even automatic fridges. Insurance + engineering standards = alignment of incentives and economic growth.
So join the manufacturer as a defendant, or sue for contribution. Self-driving cars would not be the first time in history where a third person was injured by a defective product used by a different person.
Exactly right. A self-driving car is not something easily done without a fairly sophisticated control system, part of which can be proving insurance compliance to 'electronic checkpoints' (you fail, you park). The new smart kids doing data mining can find those nuggets of information that allow their particular underwriters to make a better profit than someone else's. The question of liability can then be fought where it should be: between insurance providers.
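As a purely hypothetical sketch of that checkpoint handshake (the policy fields are invented):

    from datetime import date, timedelta

    def checkpoint_allows(policy: dict) -> bool:
        """You fail, you park: proceed only with valid autonomous coverage."""
        return (policy["covers_autonomous"]
                and policy["valid_through"] >= date.today())

    policy = {"insurer": "ExampleCo",
              "covers_autonomous": True,
              "valid_through": date.today() + timedelta(days=180)}
    print("proceed" if checkpoint_allows(policy) else "park")  # proceed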
That said, I'm sure there's already legislation being prepared to make it an easier thing for the favored than for others.
Yes. If self-driving cars significantly reduce accidents, insurance companies can offer lower rates as an incentive to use them. The rates for cars that can only be driven manually might even skyrocket.
Just imagine once drivers of autonomous vehicles take manual drivers to court for causing fatal accidents "that could have been avoided by using a cheaply available autonomous vehicle". The word "negligence" will be thrown around. Some lawsuits will succeed in some states. Insurance rates for manual driving will rocket skyward, making buying that new autonomous car, or a kit for your current car, cheaper than buying the insurance for two years.
Is it possible for a manufacturer to just provide a statement upfront when you buy the car that they can't guarantee that the car is 100% accident-proof, and you use the car at your own risk? That might put people off initially, but as people get more exposure and realize the cars are more or less safe, they'd be willing to use them and just get insurance like they currently do.
It's much more complex than that. We're in 2013 and there are still major companies being breached due to buffer-overflow-based security exploits. Buffer overflows.
That's just one example: there are countless zero-day exploits and, if anything, security has only been getting worse and worse, with more and more insecure technologies that we rely upon daily.
What makes you think these cars are going to be secure? What makes you think the company providing the roadmap data is going to be secure? What makes you think these cars are going to be running on a separate network, and what makes you think that network would be secure?
Regarding liability, thinking that "automakers" are going to be the ones liable is an over-simplification: just look at the latest Boeing case, which has gone back and forth between at least three companies to determine who's liable.
Who is going to be liable when a human death involves a security exploit in the OS / software stack of one of these cars?
There isn't just one OS in these cars running a monolithic instance of DRIVE.EXE.
If one sensor sees a person and another sensor does not, the car doesn't flip a coin before driving through that spot. It doesn't drive through that spot.
The real failure mode for automated cars will be how easy it is to make them stop in their tracks because someone fooled just one of the sensors into seeing a human.
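A minimal sketch of that conservative fusion rule (boolean reports are a simplification; real systems fuse probabilistic occupancy estimates): the path counts as blocked if any sensor reports a person, which is exactly why spoofing a single sensor is enough to stop the car.

    def path_is_clear(sensor_reports):
        """sensor_reports: iterable of bools, True = 'I see a person'."""
        return not any(sensor_reports)

    print(path_is_clear([False, False, False]))  # True: proceed
    print(path_is_clear([True, False, False]))   # False: stop on a single report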