Where Eagles Dare
Automation is starting to impact more and more fields where humans were once considered essential: driving cars, working in factories, serving food and, lately, the art of killing one another. As the era of mass warfare came to an end, the populations of Western countries became increasingly unsupportive of the idea of massive casualties; war became asymmetrical, and there are now fewer incentives to fight in a non-defensive war. Thirty-three years ago Stanley Kubrick started filming Full Metal Jacket; the idea of waging a war like the Vietnam War seems improbable today.
With fewer men in the field and strong public opposition to military deaths (and not-so-strong opposition to military spending), it isn’t surprising that military commands around the world have started to invest in and research new ways to wage war. The use of unmanned vehicles has now become commonplace in militaries worldwide.
Military UAVs like the recently retired General Atomics MQ-1 Predator, one of the most common drones used by the US military, are remotely controlled by an operator safely positioned far from the front line. This connection to a human operator is one of the weaknesses of this class of armaments: disrupting the signal (by saturating the satellite uplink with junk data, or through a hacking attack) can render the drone virtually useless or impair its ability to react to an ever-changing combat situation (satellite delay is already a problem for military drones). Furthermore, in the unlikely event of a full-scale war against an opponent of comparable technological level, the satellite constellation itself could be endangered by plane-launched anti-satellite missiles or by the use of nuclear weapons in low Earth orbit.
Another issue with current drone technology concerns long-duration surveillance missions: modern drones produce terabytes’ worth of surveillance data, which is not only difficult to transmit and store but also difficult to analyze properly, since most of it isn’t useful to the mission. An AI could analyze this data on board without needing to broadcast it to a human operator, understanding what is happening on the battlefield and then contacting HQ for further instructions.
To solve and prevent these issues, the global military-industrial complex is moving towards semi- or fully autonomous war machines. AI-assisted drones could act as the ultimate force multiplier for the human soldier of tomorrow, providing virtually constant surveillance, destroying targets of opportunity, and generally assisting the pilot, sailor or infantryman on the battlefield of the future. Furthermore, military AIs could carry on their mission even if their link to the chain of command were severed (fulfilling a role similar to the Soviet “dead hand” doctrine), acting as a deterrent against attempts to decapitate the military leadership: it would be impossible for the enemy to find and destroy every single autonomous AI ready to carry out retaliatory strikes.
AI-reliant companies and startups already see defense investments as an opportunity, despite Google’s departure from Project Maven (a machine-learning platform for military drones). A very interesting project being developed by the US military is Perdix, an unarmed drone that operates in a swarm of several dozen units. A single unit is a cheap, expendable UAV no bigger than a hand, but as a swarm, Perdix can scan an area, identify enemy combatants and priority targets (using face-recognition technology) and request permission to guide a missile fired from a separate platform (which could be a missile boat or a bomber several kilometres away) right onto the target. The Perdix swarm can also serve as defensive equipment, acting as a decoy to protect a friendly fighter from enemy guided missiles, or scouting the terrain to protect friendly infantry from ambushes.
The advantage of using a swarm of cheap drones instead of a single, more expensive unit is obvious to the military: a swarm of small, independent AIs is much more difficult to identify and disable with a single strike.
In the short to medium term, AI weapons will require human assistance in areas like target discrimination, priority engagement, and attack authorization (mostly due to political concerns), but in the future fully autonomous AIs could be used in situations where the need for a human in the loop would be detrimental to the success of the mission (such as high-speed air engagements, submarine warfare, or operations in areas with poor satellite coverage, which include areas outside Earth’s gravity well).