This isn't surprising. I think the future of warfare will be massive armies of drones controlled by algorithms. If we can remove the pilot from the equation, we can drastically increase the size of our military. That's not to say that algorithmic warfare will be good. I'm merely saying it is inevitable.
Can't we just do that in a computer and pretend that things were destroyed and killed? Maybe calculate the transfer of wealth that would have occurred as a consequence of inferior algorithms leading to the destruction of your virtual societies?
This only works when both sides are sufficiently advanced and willing to take part in the system. You would still need an army for when enemyCountryX decides that they can just pummel you because you're busy playing video games.
That was the basis of these thoughts, except that in that episode the casualties were calculated and the 'victims' then vaporized. My idea is based on the pure transfer of 'wealth' that would have occurred through the destruction and re-allocation of 'resources' from the 'loser' to the 'winner'. No actual death need occur directly through violence, only through purely economic and financial means (job loss, starvation), similar to what has already happened in developing countries after the last 'Financial Crisis'.
Why would there be anything past the first "battle"? Terrorists fill drones with dirty radioactive dust, launch them from the sea into the US, and slowly release the payload.
Game over: half the population on the east or west coast becomes extremely ill or develops cancer, and food and water sources are permanently tainted.
Instead of making drones we should be making drone detection and capture systems.
"radioactive dust" does not work that way. You should run some numbers and see if your movie-plot threat is plausible before proposing it as a risk. (hint: it's not)
US soldiers and others near battlefields overseas are constantly getting ill from dust from the (badly named) depleted uranium ordnance that the US uses (and from white phosphorus weapons).
That's what leads me to believe this is plausible.
> US soldiers and others near battlefields overseas are constantly getting ill from dust from the (badly named) depleted uranium ordnance that the US uses (and from white phosphorus weapons).
No, they aren't. What's your source for this claim? Why do you say depleted uranium is "badly named"? And if you took some of this dust and spread it over "the east coast" or "the west coast", wouldn't that dilute the concentration by more than a factor of a million and turn an already speculative and unproven risk into a truly, truly insignificant one?
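A rough sketch of that dilution arithmetic (every figure below is an assumption chosen to illustrate the order of magnitude, not sourced data):

```python
# Back-of-envelope dilution estimate. Every figure here is an
# assumption picked for illustration, not sourced data.

payload_g = 100.0 * 1e3            # assume a 100 kg dust payload, in grams
strike_zone_m2 = 100.0 * 100.0     # dust from a munition impact: ~100 m x 100 m
east_coast_m2 = 300_000 * 1e6      # rough area of the eastern seaboard states

battlefield_density = payload_g / strike_zone_m2   # ~10 g/m^2
coastwide_density = payload_g / east_coast_m2      # ~3e-7 g/m^2

print(f"battlefield: {battlefield_density:.1f} g/m^2")
print(f"coast-wide:  {coastwide_density:.1e} g/m^2")
print(f"dilution:    {battlefield_density / coastwide_density:.0e}x")  # ~3e7
```

Even with generous assumptions, the battlefield-to-coastwide ratio comes out in the tens of millions.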
Did you purposely link to Heritage, or was that an accidental result of Google?
Because that is pretty much a showstopper, as they are wild right-wing nuts.
Okay, now you are using Wikipedia. Read the rest of the page and you'll see that only the US, UK, and Israel say DU is okay; the rest of the UN says it is not. I wonder who has DU ammunition.
Should those drone detection and capture systems be controlled by humans or algorithms? Just because you remove the word "drone" doesn't make algorithmic warfare any less inevitable.
Landmines already make lethal decisions on their own. They just use far simpler heuristics.
There's not going to be some big red switch that is flipped and suddenly the machines decide what to shoot at. Instead there will just be a gradual extension of fire control mechanisms.
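For concreteness, the entire targeting "heuristic" of a simple pressure mine is something like this (a toy sketch, not any real fuze design):

```python
# Toy model of a pressure-fuze decision rule. Purely illustrative;
# the threshold is an assumption, not a real specification.

TRIGGER_THRESHOLD_KG = 5.0  # assumed activation weight

def should_detonate(applied_weight_kg: float) -> bool:
    """The mine's whole lethal 'decision': weight over a threshold."""
    return applied_weight_kg >= TRIGGER_THRESHOLD_KG

# Nothing in the rule distinguishes a soldier from a child, or a war
# from a field twenty years after the war ended:
print(should_detonate(80.0))  # soldier -> True
print(should_detonate(20.0))  # child   -> True
```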
> Landmines already make lethal decisions on their own. They just use far simpler heuristics.
And they have an appalling track record for making the wrong decision, because those simple heuristics also tend to produce answers like "Kill this five-year-old child because they're playing in a field where a war took place twenty years ago".
If you're trying to make an argument for why automated killing machines are a bad idea, landmines are just about the perfect example.
"on Their Own" isn't correct, there is still legal liability, even if it's distant, someone has to ok the algo, someone has to fund it, people have to start the process off... Heat seeking missiles aren't new, and are also automated yet lethal...
Lethal Autonomous Robots are inevitable. They think faster than humans, are much more tactically coordinated, and all of them can be updated with new instructions and bug fixes. Just ask a Marine captain how difficult it is to bug-fix and update his men and you can see the value.
The best way to use drones is to think of them as the appendages or hands of one robotic brain instead of as individual units. The beauty of networked robots is that they are ultimately coordinated: one knows what all the others know at all times. This is the absolute dream of military commanders. They would adopt the idea much more readily if it didn't put them out of a job. The only reason we can hang on to the nostalgia of men at war is that the US hasn't faced a serious military threat in a very long time. Eventually this will no longer be the case, and drone-on-drone combat will be the only reasonable option.
One objection that people have is that drones will eliminate the human cost of war and make destructive combat more common. I say that frequent combat is just fine as long as humans aren't involved. In the future opposing sides will resolve conflicts by destroying a few hundred million dollars of equipment instead of destroying human lives. I'm hoping that total war will be phased out as machines become powerful enough to dominate the battlefield.
> Lethal Autonomous Robots are inevitable. They think faster than humans
On the contrary: they follow orders faster than humans. They don't think at all, in the sense that a human can, with a human's adaptability, ethical awareness, and ability to make the judgement call that a situation is not something the predetermined rules were designed to handle.
This is the main reason I don't see LARs becoming widely used. They are superior to human warriors in the kind of war that is decided by split-second reactions, when you know exactly who the enemy is and you're otherwise roughly evenly matched. Very few modern wars are actually like that.
> The beauty of networked robots is that they are ultimately coordinated: one knows what all the others know at all times. This is the absolute dream of military commanders.
But that dream becomes a nightmare if the system is ever compromised. Of course, no battlefield-wide, 100% computer-driven, 100% remote-controlled system would ever be deployed with a security vulnerability, for the same reasons that no military equipment ever fails under combat conditions, no military unit in the field ever finds itself unable to communicate with its commanders, no computer virus has ever found its way into a secure installation, no software bug has ever caused military equipment to fail, and no human involved in the military or intelligence communities or their suppliers has ever been disloyal.
> One objection that people have is that drones will eliminate the human cost of war and make destructive combat more common.
What is the point of having these machines, if they will not ultimately result in a human cost? Weapons of war are made for one purpose, and one purpose only: to be able to kill people. Any more desirable outcomes, like taking control of a hostile country without actually killing everyone, tend to be predicated on the threat that you can do so if you need to.
What if a drone could drop something like a small cluster-bomb munition, but instead of bomblets, the payload was a bunch of smaller quadcopter drones equipped with TASERs and programmed to immobilize anyone in the area? This wouldn't be free of fatalities, but it could involve far fewer of them.
And then what? You'd have to have a commando team in these tribal areas close enough to catch the incapacitated victims before they can stand up again.
I was thinking in the context of domestic use along the border or for use with drug interdiction.
Also, I can think of uses for temporarily incapacitating people. The US government sends in an airstrike along a corridor covered by airbases. However, when the opposing air force tries to scramble its jets, the pilots are incapacitated and harassed by quadcopter drones deployed from a container dropped earlier by a stealthed drone. I think special forces units would find a lot of uses for such a capability.
A cluster bomb delivered in that same container would be cheaper, more effective, and less likely to fail.
In military applications, it is almost always easier to permanently incapacitate people. Tech for temporary incapacitation is needed only in those very rare cases where we really, really need someone alive; where the capture is so vital that it's worth risking, and often spending, our troops' lives to achieve it.
You wouldn't feel playful about being harassed by Tasers. But yes, this is bending over backwards to avoid killing people. It would be easier to just bomb them.
It'd be more reliable just to blow the pilots' aircraft up before they got off the ground (or the runway, for that matter). You can disable a fighter with a couple of bullets from a handgun easily enough; they're fragile in that way.
Tiny drones with tasers wouldn't be much of a match for a bunch of guys with automatic rifles - a rifle has a much greater range and a single burst of fire would destroy the drone. Also, it's hard to taser someone who's wearing body armor or is in a vehicle. This kind of weapon would only work on unarmed civilians.
Quadcopters have terrible speed and maneuverability against free-falling bomblets, and would require a lot of expensive GNC (guidance, navigation, and control) and precision tracking in each (disposable) unit. Basically you have a lighter version of the "hit a bullet with a bullet" problem, as in ABM.
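Some back-of-envelope numbers on that intercept window (all values are assumptions for illustration):

```python
# Rough engagement window for a quadcopter chasing a free-falling
# bomblet. Every number is an assumed, illustrative value.

g = 9.81                      # m/s^2
release_altitude_m = 1000.0   # assumed release height
terminal_v = 60.0             # assumed bomblet terminal velocity, m/s
quad_speed = 20.0             # assumed quadcopter top speed, m/s

# Crude fall time: accelerate to terminal velocity, then fall at it.
t_accel = terminal_v / g                          # ~6 s
d_accel = 0.5 * g * t_accel ** 2                  # ~180 m
t_fall = t_accel + (release_altitude_m - d_accel) / terminal_v
print(f"fall time: {t_fall:.1f} s")               # ~20 s

# Farthest an interceptor can travel in that window -- it must already
# be within this radius, with tracking good enough to meet the bomblet.
print(f"max intercept radius: {quad_speed * t_fall:.0f} m")  # ~390 m
```

And that's the easy part; actually closing the last few meters against a 60 m/s target is the "bullet with a bullet" piece.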