Considering this feature is out for anyone to use, my question is: who is responsible for the damage? Suppose you have a video like this one clearly showing Autopilot being at fault: will Tesla cover the costs of the damage (to both the Tesla and the other party's car)?
The short-term answer is probably going to be "the driver, until this becomes a sufficiently common issue that legislators get involved".
The longer-term answer is probably going to be "AI insurance". The bigger question is whether it will be the vendors or the owners who pay the premiums. My hope is that the onus would be on the vendors, since it's their software, but my gut says that the cost will instead fall on either owners or the taxpayer.
This seems like a profoundly irresponsible move. After selling expensive snake oil FSD hardware for years, Tesla is putting a flawed system into production that people will trust too much and which will inevitably lead to accidents. This could lead to legislation against FSD and set the whole industry back, preventing more responsible players from bringing their superior products to market in a timely fashion. Maybe a conscious play from Tesla to handicap their more technologically developed rivals?
Being allowed to drive is almost a necessity in today's world, whereas self-driving is not. I don't think it's a fair comparison to make. The same way there are safety restrictions on vehicles in other respects, the same could apply to an extremely poor driving algorithm if one were to exist.
I agree with you on that as well. But for now, seeing the kind of errors that self-driving cars are making, I would feel less safe if they filled the roads. In the future I would rather see things on rails for long distances, and maybe self-driving vehicles on prepared infrastructure for shorter distances.
We train student drivers under professional or at least careful supervision, and we know the learning curve for humans in general. We test them to specific standards for general driving skills, and only then allow them to drive on public roads. And if they commit specific egregious mistakes we suspend that right.
The AP's learning curve is unknown, we don't train it to any specific standard, we don't expect it to pass any test to validate it can or should be allowed to "drive" on a public road, and the AP's "right" to drive is neither officially given, nor withdrawn after making repeated, serious mistakes. Keep in mind that the AP is "the same driver" across all cars.
Major rule of functional safety analysis: Look for common-mode failure mechanisms. If the same fault can suddenly manifest in many units, that undermines one level of "random failure" statistics.
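To make that concrete, here is a minimal sketch (fleet size and failure probabilities are invented purely for illustration) of why a common-mode software fault undermines the usual independent-failure statistics: independent faults show up as a steady, predictable trickle, while a shared bug either affects nobody or the whole fleet at once.

```python
import random

# All numbers here are hypothetical, purely for illustration.
FLEET_SIZE = 100_000
P_INDEPENDENT = 1e-4   # chance a given car suffers an independent (random) fault today
P_COMMON_BUG = 1e-4    # chance a single shared software fault triggers fleet-wide today

random.seed(0)

# Independent failures: roughly FLEET_SIZE * P_INDEPENDENT cars affected on a typical day.
independent_failures = sum(random.random() < P_INDEPENDENT for _ in range(FLEET_SIZE))

# Common-mode failure: the same fault in every unit, so it's all-or-nothing.
common_mode_failures = FLEET_SIZE if random.random() < P_COMMON_BUG else 0

print(f"independent faults today:  {independent_failures}")   # ~10, predictable
print(f"common-mode faults today:  {common_mode_failures}")   # usually 0, occasionally 100,000
```

The expected number of affected cars per day is identical in both cases; the common-mode case just concentrates all of that risk into rare, fleet-wide events, which is exactly what per-mile accident averages hide.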
And as a comment on Ars pointed out: these are self-reported videos, so a dozen near-accidents in three hours of footage sets a floor on how bad the driving actually is.
Don't forget that all AP statistics are collected from supervised scenarios and never report how many disengagements there were, or how much of real traffic conditions they cover. In other words, driving straight lines for a million miles, even with the driver saving the situation hundreds of times, will count as a million "self-driven" miles. Under these conditions student drivers are the absolute best, since their accident rate is pretty much zero, precisely because there's someone next to them to supervise and correct.
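As a back-of-the-envelope illustration (all figures invented for the sake of the example), this is what the accounting looks like when supervised disengagements are simply dropped from the headline number:

```python
# Hypothetical figures, only to illustrate the accounting problem described above.
reported_miles = 1_000_000   # all logged as "self-driven" miles
driver_saves   = 300         # disengagements where the human prevented an incident
crashes        = 0           # incidents that actually make it into the headline stat

crashes_per_million = crashes / reported_miles * 1_000_000
miles_per_save = reported_miles / driver_saves

print(f"headline: {crashes_per_million:.0f} crashes per million 'self-driven' miles")
print(f"hidden:   one human intervention roughly every {miles_per_save:,.0f} miles")
```

By the headline metric the system looks flawless, while the intervention rate, the one number that would tell you whether it can drive unsupervised, never appears.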
And that's not a floor on how inadequate the system is, it's a ceiling on how much risk the drivers are willing to take. Right now most drivers don't push any further, and in any case they never let the whole situation unfold. Those near-crashes in the most basic of driving conditions are where the humans draw the line, because they now know that's the upper limit of what the system can do; the lower limit would only show up in real traffic or with less supervision.
The AP gets a free pass because we want to let the technology evolve, but unless we treat its "driver's license" the way we treat a human driver's, the only option is to never allow more than a few seconds of "hands off wheel" on the road. Allowing the human to completely disconnect should be illegal.
We don't allow manufacturers to install a switch that disconnects the pollution control devices because "the driver is responsible for this". We should at least try to technologically prevent the driver from not driving just because a system with a fancy name supposedly will. And Tesla is doing the absolute minimum there, enough to get better stats and leave the impression that the car drives itself. Every car drives itself straight to the crash site.
No, it has validity anywhere, unless your implication is that in the USA any human can get behind the wheel of a car and legally drive. The whataboutism "argument" I was replying to has no merit, and neither does yours; at the very least you haven't thought it through properly. I'd like to see any self-driving car navigate Indian traffic, and after that you can tell me whether it's safer.
Even a student driver is capable of navigating more complex traffic than the AP after just a few days or weeks of practice. And at the end of that practice is a test that the AP would fail. After all these years the "FSD" AP still barely manages the most basic driving feats, the ones expected of a human before they even get a driver's license.