
There's a third type as well: "Shadow Mode", where the software runs constantly but the driver remains in full control.

So if there's an accident, Tesla can check whether the autopilot would or could have avoided it. If they can turn around to lawmakers and say "X% of accidents could have been avoided if hands-off autopilot were legal", that should help speed up the regulatory side of things.




Which is utterly disingenuous too.

"It would have avoided this accident" (by braking, steering). It can say nothing about the future, "... but it would still have been in a collision 0.42 seconds later".

For all the collisions it would have avoided, there's another subset where it would only have "delayed" the collision. But that won't be mentioned, because it doesn't fit the narrative.


But this way they can't tell how many accidents the hands-off autopilot would have caused. The number of accidents avoided is not enough, on its own, to say the technology would decrease the total number of accidents.


That is only partially true. They can re-simulate what the automation _would_ have done, given all the telemetry and video data leading up to the event.

This is why "partial automation" for the initial data collection still produces valid results: you can replay logged data against updated models and do "what if" testing without ever sending the car back out on the road.
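A minimal sketch of that kind of offline replay (all names and the toy braking rule are hypothetical, not Tesla's actual pipeline): logged frames are fed to a candidate planner, and its decisions are compared against what the human driver actually did.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    t: float              # timestamp (s)
    speed: float          # ego speed (m/s)
    lead_distance: float  # distance to obstacle ahead (m)
    driver_braking: bool  # what the human actually did

def would_brake(frame: Frame) -> bool:
    """Toy stand-in for the automation: brake when time-to-collision
    with the lead obstacle drops below a 2-second threshold."""
    if frame.speed <= 0:
        return False
    return frame.lead_distance / frame.speed < 2.0

def replay(log: List[Frame], planner: Callable[[Frame], bool]) -> dict:
    """Re-run logged frames through a planner, counting where the
    automation would have acted before the driver did."""
    earlier, agree = 0, 0
    for frame in log:
        decision = planner(frame)
        if decision and not frame.driver_braking:
            earlier += 1
        elif decision == frame.driver_braking:
            agree += 1
    return {"earlier": earlier, "agree": agree, "total": len(log)}

# Replaying the same log against a newer planner is just a second
# call to replay() with a different function, no road time needed.
log = [
    Frame(t=0.0, speed=20.0, lead_distance=60.0, driver_braking=False),
    Frame(t=0.5, speed=20.0, lead_distance=35.0, driver_braking=False),
    Frame(t=1.0, speed=20.0, lead_distance=25.0, driver_braking=True),
]
print(replay(log, would_brake))  # → {'earlier': 1, 'agree': 2, 'total': 3}
```

The "earlier" count is exactly the kind of number the parent comments are arguing about: it shows where the planner diverged from the driver, but not whether acting earlier would actually have produced a better outcome.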


They don't need that information if what they're angling for is a sound bite to help win regulatory approval.



