>> If autopilot is 10x safer, then preventing its use would lead to more preventable deaths and injuries than allowing it.
And yet whenever there is a problem with any plane autopilot, it's preemptively disabled fleet-wide and pilots have to fly manually, even though we know beyond a shadow of a doubt that manual flying is less safe.
If an automated system makes a wrong decision and it contributes to harm/death, then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.
Depends on what one considers a "problem." As long as the autopilot's failure conditions and mitigation procedures are documented, the burden is largely shifted to the operator.
Autopilot didn't prevent slamming into a mountain? Not a problem as long as it wasn't designed to.
Crashed on landing? No problem, the manual says not to operate it below 500 feet.
Runaway pitch trim? The manual says you must constantly monitor the autopilot, disengage it when it's not operating as expected, and pull the autopilot and pitch trim circuit breakers. Clearly insufficient operator training is to blame.
> And yet whenever there is a problem with any plane autopilot, it's preemptively disabled fleet-wide and pilots have to fly manually, even though we know beyond a shadow of a doubt that manual flying is less safe.
Just because we do something dumb in one scenario isn't a very persuasive reason to do the same in another.
> then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.
Ambulances sometimes get into accidents - we should ban all ambulances, no matter how many lives they save otherwise.
So your only concern is having someone to blame when something goes wrong. Who cares about lives saved?
Vaccines can cause adverse effects. Let's ban all of them.
If people like you were in charge of anything, we'd still be banging rocks together for fire in caves.
Ok, consider this for a second. You're the director of a hospital that owns a Therac-25 radiotherapy machine for treating cancer. The machine is, without any shadow of a doubt, saving lives. People without access to it would die or have their prognosis worsen. Yet one day you get a report saying that the machine might sometimes, extremely rarely, accidentally deliver a lethal dose of radiation instead of the therapeutic one.
Do you decide to keep using the machine, or do you order it turned off until that defect can be fixed? Why or why not? And why does the same argument apply or not apply in the discussion about self-driving cars?
(And in case you haven't heard about it - the Therac-25 fault was a real thing. It's used as a cautionary tale in software development, but I sometimes wonder if it should be used in philosophy classes too.)