This was envisioned in several movies, but my favorite example is Oblivion [1]. The tech must state their name and clone number, but I can imagine this going horribly wrong if someone can't speak for some reason. Curious whether drone AI developers will factor in potential failure scenarios up front, rather than being reactive and just reviewing logs to see why their drone took out a friendly.
Some day humans will not be in the loop, because people will do what people do.
The company notes that the drone cannot decide to kill someone itself and needs a “human-in-the-loop” to make the decision and pull the trigger.
Still, this is scary.