Does the US not have regulations about UAVs automatically firing weaponry? Don't all current UAVs have human operators -- the so-called "man in the loop"? I'm having trouble finding evidence one way or the other, but does the man in the loop have to pull the trigger, or can he override a pre-programmed fire order? Allowing robots to automatically fire ordnance seems like something Congress would legislate against -- or at least I hope so.
I'm a little concerned about a swarm of drones controlled by one human. Humans have a limited attention span -- they just can't watch 30 video feeds. Could this pre-programmed approach with limited human control lead to greater civilian casualties?
Disclaimer: I am not a UAV operator. I am basing the following mostly on documentaries, along with various other sources.
I believe that is correct: current UAVs (Unmanned Aerial Vehicles) used by the US military always have a human operator controlling the vehicle. If I understand correctly, they do not control the flight of the vehicle in real time, but rather give commands along the lines of "circle above this area" or "track/follow this particular target on the ground". The vehicle is semi-autonomous in that it figures out what it needs to do to fulfill the operator's command (i.e., adjust yaw, pitch, roll, thrust, etc.). I believe that to actually engage a target, the operator must manually trigger it.
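To make that command hierarchy concrete, here is a minimal sketch of what "high-level tasking over a low-level autopilot" could look like. Everything here is invented for illustration (the task names, the Autopilot interface); real GCS software is obviously far more involved.

    # Toy sketch: the operator issues tasks, not stick-and-rudder inputs.
    # All names and interfaces here are hypothetical, not any real GCS software.
    import itertools
    import math

    class Autopilot:
        """Low-level layer: turns 'go toward this waypoint' into control outputs."""
        def steer_toward(self, waypoint, position):
            heading = math.atan2(waypoint[1] - position[1], waypoint[0] - position[0])
            # A real autopilot would feed PID loops for yaw/pitch/roll/thrust here.
            return {"heading_rad": heading, "throttle": 0.6}

    class UAV:
        def __init__(self):
            self.position = (0.0, 0.0)
            self.autopilot = Autopilot()
            self.task = None

        def command(self, task, **params):
            """Operator-level tasking: 'orbit' an area, 'track' a ground target.
            Note there is deliberately no 'engage' task -- weapon release stays
            a separate, manual action by the operator."""
            self.task = (task, params)

        def tick(self):
            task, params = self.task
            if task == "orbit":      # "circle above this area"
                waypoint = next(params["orbit_points"])
            elif task == "track":    # "follow this particular target on the ground"
                waypoint = params["get_target_position"]()
            return self.autopilot.steer_toward(waypoint, self.position)

    uav = UAV()
    uav.command("orbit", orbit_points=itertools.cycle([(1, 0), (1, 1), (0, 1), (0, 0)]))
    print(uav.tick())  # control outputs computed by the vehicle, not the operator

The point of the sketch is just the division of labor: the human picks the task, the vehicle does the flying.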
However, this is not the same as the new swarming drones this article covers. Most do not consider the UAVs currently used by the military to actually be drones (which implies a high degree of autonomy); instead they are typically thought of as remotely controlled. In contrast, it sounds like these drones are launched once with a target specified ahead of time, and are completely autonomous from that point on, swarming around and firing on a target all on their own. These drones are probably small, and they probably don't have a video feed for each one for people to monitor.
Disclaimer: I was a Shadow 200 TUAV pilot in the US Army from 2001-2005, and yes, I spent a year in Iraq (OIF II circa 2003-2004) where I got 480+ combat flight hours.
You basically wrote the exact same comment I was about to write, with striking similarity. For reference, this is an 18-year-old me in the back of a Shadow GCS at Ft Huachuca, AZ, where all Shadow pilots are trained: http://en.wikipedia.org/wiki/Ground_control_station
The military never uses the word "drone"; it prefers UAV, aka Unmanned Aerial Vehicle, just as you mentioned. For what it's worth, this new Navy thing isn't really that novel. The latest generation of Tomahawk cruise missiles does something very similar: you launch several of them into a battle theater and they circle around, exchanging target information and telemetry with each other and with the control stations. See: https://medium.com/war-is-boring/u-s-marines-can-now-call-in...
So in this case, what makes these drones any different than a cruise missile? You give them coordinates to destroy a target, the weapon then deploys to the target and uses technology to navigate without a pilot directly "steering" it or deciding when it goes "boom".
For what it's worth, the US has already deployed weapons that pick their targets autonomously. [1] These artillery-launched UAVs just seem like a fancier version of the same idea. Whoever gives the go-ahead to fire it will just have to weigh the importance of the target against the extent of the collateral damage.
Just because there must be a human in the target designation loop, does not imply that there must be a human in the target elimination loop. After the targets are designated an entirely autonomous swarm may indeed be the perfect way to eliminate them.
It comes with its own risks, of course, but efficiency dictates that this is going to happen regardless. Better to keep safeguards in place where they don't compromise that efficiency than to pretend we can simply agree as a species not to pursue the gain.
> Does the US not have regulations about UAVs automatically firing weaponry?
Even if we do have such regulations, I doubt they'll last long. We'll have to respond in kind once China, Russia, Iran, etc. start developing fully automated weapons systems. When Russia can send 1,000 aircraft into a theater because they only require 10% of the personnel, and we can only send 100 aircraft because we require human operators, the pressure to be fully automated is going to be pretty intense. And even if automation doesn't increase capacity, it will reduce reaction times, thereby providing other benefits.
> Allowing robots to automatically fire ordnance seems like something Congress would legislate against -- or at least I hope so.
Victory in [symmetric] warfare is already largely dependent on economic advantage, but future war will be entirely determined by economic strength: assuming similar levels of technology, my coalition wins if, for example, we can field 10,000 automated aircraft and your coalition can only field 8,000. I would expect Congress to legislate against this; then, when China develops fully automated systems and starts deploying them, I would expect Congress to gut those regulations or to let the DoD quietly develop a to-be-held-in-reserve software update for full automation.
I agree this is not a good outcome, but I'm not sure how the world can avoid it.
Do supervised drones make sense in a symmetric war? Missiles cost roughly an order of magnitude less than a drone (and can be launched in large numbers from the ground), so I wonder what the value of sending drones into a well defended airspace would be.
edit: added 'supervised', because a missile is a sort of drone.
I think this is more likely to be used in battlefield contexts where you have a very clearly defined military opponent. Navies don't do counter-terrorism, they are for conventional conflicts.
For the most part, yes, but a few navies have been involved in suppressing piracy and possibly terror activity (around the Horn of Africa, among other places).
There will not be autonomous strikes. Too much potential for collateral damage. Also one person never watches 30 feeds. It's usually 1 or 2 analysts per feed.
I imagine there will only be specific use cases for this system.
There is no way they can put 1,000 analysts on a small operation like kicking rebels out of a village :)
Another example - Russian anti-ship missiles attack in a "pack" where they automatically identify and assign targets between them.
The point is that, one way or another, the "human in the loop" is just a temporary and very limited configuration, and it will be gone once we start to see real mass application of drones, especially by multiple actors: as with CIWS systems, humans just can't be fast enough and wouldn't be able to manage an encounter between, for example, hundreds of drones.
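For a rough flavor of what "automatically identify and assign targets between them" might mean in such a pack, here's a toy greedy-assignment sketch. This is purely my guess at the shape of the coordination problem, not a description of any real seeker or data link:

    # Toy greedy target assignment for a weapon "pack" -- a guess at the
    # flavor of the problem, not any real system's algorithm.
    import math

    def assign_targets(shooters, targets):
        """Pair each shooter with the nearest still-unclaimed target."""
        remaining = list(targets)
        assignment = {}
        for shooter_id, shooter_pos in shooters.items():
            if not remaining:
                break
            nearest = min(remaining, key=lambda t: math.dist(shooter_pos, t))
            assignment[shooter_id] = nearest
            remaining.remove(nearest)  # avoid two shooters wasting shots on one target
        return assignment

    shooters = {"m1": (0, 0), "m2": (5, 0), "m3": (10, 0)}
    targets = [(1, 2), (9, 1), (5, 5)]
    print(assign_targets(shooters, targets))
    # -> {'m1': (1, 2), 'm2': (9, 1), 'm3': (5, 5)}

The point is just that this kind of deconfliction logic is simple enough to run on the weapons themselves, far faster than any human could referee it.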
Even in the simulation ad they give as a use case, the drones are used to target a small village in the middle of a desert. It's somewhat obvious what the use case will be: don't send troops in, so that US troop losses can be limited, and in the meantime we can continue to kill all those pesky insurgents in 5-6 hut villages...
Those "pesky insurgents" are managing to kill a lot of people from their "hut villages" and oh by the way, are funded by some of the richest and most powerful nations on Earth. So don't let the "huts" confuse you, there's still capable forces inside waiting for the opportunity to further their agenda.
Also, the battlefield may be comprised of "huts" but the fighters are living in cities and using technology far beyond "hut level" throughout the world.
These "pesky insurgents" have also managed to export their agenda around the world (beyond the borders of their huts), and using technology have slaughtered thousands of civilians (sound familiar?)
> using technology have slaughtered thousands of civilians (sound familiar?)
Yes, yes it does. What's the count again? Them: 5000, US: 100,000? Why take an eye for an eye when you can take 20 eyes for an eye? Maybe with this technology we can get that up to 40 or 50! Hoo-rah! /s
"Them" Who is them exactly? Iran? Saudi Arabia? Syria? FSA? ISIL? Al Qaeda? the dozens of tribal militia funded by both (all) sides ... the list could literally go on forever and so can the casualty count - speaking of...
5K is a ridiculously low number considering the documented casualties on all sides in the last (roughly) two decades of warfare (although it shouldn't be limited to this timeframe).
I was counting U.S. civilian casualties from Al Qaeda on 9/11 against civilian casualties in Iraq + Afghanistan, with all figures coming from my highly imperfect memory. There are a number of obvious reasons why this comparison isn't entirely appropriate; I can think of a number of more appropriate comparisons, and you can probably do a better job than I. But before you rise to the occasion, let's re-focus on the core argument.
My claim: We (the US) are unmatched in our ability to "export our agenda" by "using technology to slaughter civilians." Terrorists do the same thing just on a much smaller scale. A direct comparison is appropriate, as is dismissing their relative contribution to the civilian death toll by calling them "pesky hut-dwellers."
I trust independent reporting, which in this case is plentiful. No need to rely on "propaganda" from either side; thankfully I can (and you can too) easily find reports from the ground by actual citizens, and reports from news agencies around the world not under the influence of the US or its allies/adversaries.
U.S. President George W. Bush said "When I take action, I'm not going to fire a $2 million missile at a $10 empty tent and hit a camel in the butt. It's going to be decisive."
This doesn't look at all decisive to me ... there's absolutely no way those little drones can pack the firepower shown in that simulation.
The bigger question is why don't we try some bridge building first?
Are we sure that the drones themselves will be carrying payloads, though? Judging by the video, and also thinking about the logistics of accurately bombing something from that airframe plus the general sizes of ordnance, it looks more like they are lasing targets for some kind of guided ordnance launched from another platform.
The key benefit here is for them to be sent out to autonomously seek and 'destroy' without having to waste man-hours trooping through the bush looking for a target that may or may not be there.
"The ONR demonstrations, which took place over the last month in multiple locations, included the launch of Coyote UAVs capable of carrying varying payloads for different missions. Another technology demonstration of nine UAVs accomplished completely autonomous UAV synchronization and formation flight."
New rules say not to be negative -- but in my very strong opinion, producing newer and better surveillance/recon tech is not the way to go for humanity. The same goes for producing new high-tech weaponry.
It appears that if something becomes possible and advantageous (according to any imaginable reasoning) some people will use it. It's unlikely that each technological genie can be put back, instead we as a species should be considering effective reactions.
What's an effective reaction to the fact that offense fundamentally outstrips defense as technology grows? I can only think of two paths: a) Luddism, b) world government by benevolent AI. Both of these paths are very risky. So in the short term, it seems worth it to incrementally slow down the growth of offense technology, by using activism etc. That gives humanity a little more time to figure things out.
This seems like a logical progression of standoff artillery. An MLRS that can launch drones is that much more lethal than the base system, and it's harder to back-compute its location.
I expect:
- modern warfare to evolve (devolve?) into something that has more in common with an RTS video game than 'conventional' warfare, where soldiers have more in common with today's professional eSports gamers
- a surge in anti-drone technology, including anti-drone drones, and the inevitable escalation there (stealth drones?)
The first major drone on drone conflicts will likely be in the middle east, and will be transferred technology (deriving from the US, Russia, Iran, or China).
It will be something like fighters in Yemen, backed by Iran, against Saudi Arabia using US technology. Or Israel vs Hezbollah. The timeline is within five to ten years.
Stealthy drones already exist. The X-47 can land on aircraft carriers and do mid-air refueling.
The Navy develops them, and -- something a lot of commenters on here are overlooking -- others then copy them, and Navy rules, whatever those may be, do not apply to those other people.
Just a public service reminder not to make unfounded assumptions. Let's not assume that rules will keep these in check.
There appears to be strong opposition to weapons systems that can autonomously decide to open fire, but that opposition is rarely explicitly evaluated against the alternative.
I wonder: why does it appear preferable to have a human control a weapon?
Is it because we assume the human is capable of compassion and empathy? If so, what about anger, fear, prejudice and hatred?
It seems to me that a machine with clear mission objective, rules of engagement and criteria for friend-civilian-foe identification may actually be preferable in terms of minimizing unnecessary war casualties.
Ultimately, it is humans who'd program these machines, and you get to do a lot more calm thinking when coding than when squeezing a trigger. Also, code can be reviewed and tested; a split-second decision cannot.
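To make the "reviewed and tested" point concrete, here's a deliberately oversimplified sketch of what a codified engagement rule might look like. Every criterion here is invented for illustration; the point is only that such a rule is auditable in advance:

    # Deliberately oversimplified, invented rules-of-engagement check.
    # The point: a rule written as code can be reviewed and unit-tested;
    # a human's split-second decision cannot.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        identified_hostile: bool        # passed friend/civilian/foe identification
        confidence: float               # classifier confidence, 0..1
        civilians_in_blast_radius: int

    def may_engage(c: Contact, min_confidence: float = 0.95) -> bool:
        if not c.identified_hostile:
            return False
        if c.confidence < min_confidence:
            return False
        if c.civilians_in_blast_radius > 0:
            return False
        return True

    # The rule's behavior can be pinned down in peacetime, before it matters:
    assert not may_engage(Contact(True, 0.99, civilians_in_blast_radius=1))
    assert not may_engage(Contact(True, 0.80, civilians_in_blast_radius=0))
    assert may_engage(Contact(True, 0.99, civilians_in_blast_radius=0))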
Moreover, down the road, when humanity reaches a state where most wars are fought by machines against machines, the pointlessness of the exercise may become a lot more apparent.
> Moreover, down the road, when humanity reaches a state where most wars are fought by machines against machines, the pointlessness of the exercise may become a lot more apparent.
Dial 1-900-you-wish. What will happen is that the richer part of the world will fight wars against the poorer part of the world, which won't be able to afford the machines.
This gets rid of the biggest thing holding back the next big war: the fact that your own troops are also in danger.
How many corporations have the GDP of a small country? What if corporations could fight wars themselves? They could with drones.
I read somewhere that there's a big oil reserve in Mexico. However, corporations are unwilling to tap it because of the cartels and the dangers they pose. Imagine the slippery slope if BP decided they'd take care of the cartels themselves.
"What will happen is that the richer part of the world will fight wars against the poorer part of the world where they won't be able to afford the machines."
This is the principal reason why "drones, OK; land mines, not OK." A land mine is a drone that waits for you. It is simple and cheap; any armed group can afford them. Drones require high technology, including satellites to relay video links and commands, etc. The "human in the loop" is just a bureaucrat rubber-stamping, in real time, decisions made by a de facto autonomous weapon.
A drone also isn't there thirty years later when the country has a different name and a different form of government and an eight-year-old puts an errant foot where there was once a target designated.
> Is it because we assume the human is capable of compassion and empathy?
No -- it is simply because a human is self-aware and will act selfishly. By making humans accountable for their actions, they can reasonably be expected to follow the rules.
Human selfishness is a weaker guarantee than programming.
Humans can still choose to abandon or misinterpret their self-interest. Strong emotions, hatred, prejudice, or a belief that punishment won't happen or that the deed can be covered up make humans likely to do so under some circumstances.
OTOH, a machine has no choice at all. It always follows the rules.
The barrier between us strapping a gun onto one of these and hooking the trigger up to some AI seems to be not one of technology but of politics. I really think we need a Geneva Convention for robots, and I hope we actually follow it (including in clandestine operations).
> Missions can be pre-programmed, but there "will always be a human monitoring the mission", it added.
Great use of it.
Reminds me of Eclipse Phase nano swarms. The video suggests these are less than a meter in length and not yet weaponized (although the nice CG at the end suggests they will be firing off missiles to blow up random buildings in a desert-like environment).
Whatever your opinion is, it doesn't really matter; the US government will continue in this direction because it is expedient.