Are Killer Robots the Future of Modern Warfare?
As AI and machine learning continue to develop, how will this affect the way we fight wars?
Over the past few years, militaries around the world have been developing new technologies that apply machine learning and artificial intelligence (AI) to improve their warfighting capabilities. The increased use of unmanned aerial vehicles (UAVs) has sparked recent debate over the employment of these systems on the battlefield. We are only just beginning to work through the potential impact of AI on warfare, but all indications are that it will be profound and troubling, in ways that are both unavoidable and unforeseeable. [A]
Is banning autonomous technology for military use even practical?
Once one country demonstrates the use of autonomous weaponry, it creates a compounding effect as other countries try to stay competitive, comparable to what we saw during the Cold War. The US and China already appear to be in a pseudo ‘arms race’ for AI technology. As AI technology develops, the balance of international power is likely to shift as well. The US government has already allocated $2 billion to continue AI weaponry development. [B]
Additionally, these technologies are developing, and may prove superior, in the commercial sector. [C] Not all civilian companies are on board with the idea of autonomous weapons systems, though. In 2015, over 3,000 researchers, scientists, and executives from companies including Microsoft and Google signed a letter to the Obama administration asking for a ban on autonomous weaponry. [D] No such agreement was signed. In fact, a report by Harvard’s Belfer Center for Science and International Affairs alleged that ‘rapid progress in artificial intelligence that has invigorated companies such as Google and Amazon is poised to bring an unprecedented surge in military innovation.’ [D] Companies can also apply the lessons learned from developing these technologies to their own consumer AI products.
Military implementation of AI and machine learning is aimed not only at strategic advantage but also at minimizing human casualties. For example, after the War on Terror began in 2001, Congress mandated that one-third of ground combat vehicles be unmanned by 2015. [D] During this period, 2,357 American soldiers were killed. [E] Although the target was not met, the US military has continued to pour resources and funding into projects that support the continued use of UAVs on the battlefield. Drones controlled by military pilots enable militaries to expand the battlefield while keeping aviators out of the line of fire. Foreseeably, the end goal would be to keep all soldiers, sailors, and airmen out of the line of fire.
Furthermore, AI and machine learning also support military leaders in non-combat functions. [F] Much as consumer companies use data to understand their customers, these systems can help leaders understand their soldiers. Leaders can use them to predict when troops might be dangerously fatigued or when programs are likely to face cost overruns. Military leaders can also use this data to craft better tactical strategies as the battlespace continually changes. Lastly, AI could serve as a decoy or help detect an adversary’s decoys. Overall, AI and machine learning have tremendous potential to improve troop readiness through data analysis, as the sketch below illustrates.
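To make the data-analysis point concrete, here is a minimal, purely illustrative sketch in Python. The features, labeling rule, and data are all invented for this example; a real readiness system would rely on far richer inputs and careful validation.

```python
# Toy sketch of readiness analytics: flagging soldiers likely to be
# over-fatigued from simple duty-cycle features. Everything here
# (features, labels, threshold) is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic records: [avg_sleep_hours, consecutive_duty_days, tasks_per_day]
X = rng.uniform([4, 1, 2], [9, 14, 12], size=(500, 3))

# Invented ground truth: fatigue risk rises with duty days and workload
# and falls with sleep.
risk = -0.8 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2]
y = (risk > np.median(risk)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Flag a hypothetical soldier: 5 hours of sleep a night, 10 straight
# duty days, heavy task load.
prob = model.predict_proba([[5.0, 10, 9]])[0, 1]
print(f"Estimated fatigue risk: {prob:.0%}")
```

The workflow (collect duty-cycle signals, fit a model, flag at-risk individuals) is the same basic pattern consumer analytics already uses; only the data sources would differ.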
Do we regulate?
Former NATO secretary general Anders Fogh Rasmussen has argued that AI must always involve human beings. [G] This standard seems reasonable and, for that matter, has been followed up to now. Despite the progress made, current machine learning methods lack robustness and predictability. They are vulnerable to a complex set of adversarial attacks, issues with controllability, and a propensity for unintended consequences. These machines are entirely dependent on their own sensors and programming, which makes them susceptible to interference, whether from a deliberate attack or from environmental effects. Governments must ensure computers are wholly trustworthy before entrusting more life-or-death tasks to them. [H] A small example of this brittleness is sketched below.
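To illustrate what ‘adversarial attack’ means in practice, here is a deliberately simple sketch: for a toy linear classifier, a tiny targeted perturbation of the input flips its decision. All numbers are invented, and real attacks on deep networks are far more sophisticated, but the underlying brittleness is the same.

```python
# Minimal adversarial-perturbation demo on a toy linear classifier.
# A small, targeted change to the input flips the predicted class.
import numpy as np

w = np.array([1.5, -2.0, 0.7])   # invented model weights
b = 0.1

def predict(x):
    return 1 if w @ x + b > 0 else 0

x = np.array([0.4, 0.1, 0.3])    # benign input, classified as 1
eps = 0.2                        # small perturbation budget

# Fast-gradient-style step: nudge each feature in the direction that
# most decreases the decision score.
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))     # prints: 1 0
print(np.max(np.abs(x_adv - x)))      # perturbation is only 0.2
```

That a toy model can be flipped this cheaply hints at why robustness, not just accuracy, matters before life-or-death decisions are entrusted to such systems.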
The verdict seems clear that machine learning and AI will continue to be a part of military functions. There are still significant and troubling questions to be answered. As we remove ourselves from the line of danger, how will this change the way we fight? Should we regulate? How do we regulate? Regardless of personal position, it is clear that the implementation of AI in battle must be examined in not only national but also international forums.
References:
[A] Kenneth Payne, “Artificial intelligence is about to revolutionise warfare. Be afraid,” NewScientist (September 12, 2018).
[B] Drew Harwell, “Defense Department pledges billions toward artificial intelligence research,” The Washington Post (September 7, 2018).
[C] M. L. Cummings, “Artificial Intelligence and the Future of Warfare,” research paper, Chatham House (January 2017).
[D] Tom Simonite, “AI Could Revolutionize War as Much as Nukes,” Wired (July 19, 2017).
[E] Neta Crawford, “War-related Death, Injury, and Displacement in Afghanistan and Pakistan 2001-2014,” Costs of War (May 22, 2015).
[F] Michael Horowitz et al., “Artificial Intelligence and International Security,” Center for a New American Security (July 10, 2018).
[G] Jill Aitoro, “AI warfare is coming, and some global leaders say NATO isn’t ready,” Defense News (February 15, 2018).
[H] Peter Eckersley, “The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI,” Electronic Frontier Foundation (August 13, 2018).
We must absolutely regulate. If you haven’t already, please watch the Black Mirror episode “Metalhead.” Without giving too much away, the episode follows a woman as weaponized dog-robots tirelessly hunt her down. AI is already positioned to be the next step in human evolution, one that potentially invalidates our existence. Why do we need to weaponize it?
This is an interesting article! The use of AI to analyze fatigue in soldiers stands out and seems like it could help the well-being of those serving in the military. Perhaps it could also help prevent accidents. I don’t recall the name of the ship, but I remember reading in the news that a Navy ship was involved in a collision, resulting in significant damage and some injuries. Some of that might be avoidable either with AI that uses sensors to prevent collisions or AI that can detect over-fatigued crews.
Great article. Two key themes jumped out at me: national vs. international regulation, and the potential impact of non-human casualties on the seriousness of warfare. First, as you mentioned, I think it would be extremely difficult to regulate this issue at a national level without a very strong international framework supporting it, given the strategic advantage it would confer on a non-regulated country. We haven’t had much success bringing countries together to establish strong frameworks prior to a crisis (seems like we’re much better after a crisis has already happened…), so I’m not sure how optimistic I am on that. On the casualty front, I truly wonder whether removing the element of loss of human life would turn war into a giant video game where two drone armies fight each other, and whether that might incite one side to start targeting civilians to impose a greater cost on the other, ultimately increasing casualties overall. Again, I’m not sure how the military and the government can balance short-term benefits against long-term risks while operating in a competitive global environment.
To JP’s point above, I think there is a different dilemma that stems from those who can afford (technologically and monetarily) to farm out their killing to robots. If one side possesses AI-driven autonomous robots and the other doesn’t, who bears the ethical and legal risk of civilian deaths and collateral damage? Who ultimately decides whether or not to fire into a crowd of people, or at the religious site the enemy is using for cover and concealment? I agree that regulation makes the most sense, but how do we enforce it?