Killer Robots: The Future of Warfare?

An intense international debate is underway to define autonomous weapons, establish a framework for their use, and set norms governing them. Its outcome will greatly shape the future of warfare, and the international landscape, as Great Power Competition returns among major international players.

If you could achieve your objectives in battle or win a war against your adversary while risking none of your own soldiers, sailors, and marines, would you? Now suppose your adversary has the same technology, and you realize a conflict using those weapons would be horribly destructive for both sides. Do you still want to fight that war? How do you get rid of the weapons? Regardless of the consequences or ethical implications, the problem remains: the technology has already been developed. You can unilaterally give up yours, but you have then subjected your country to defeat or coercion. Ukraine gave up its nuclear weapons under the 1994 Budapest Memorandum in exchange for promises from Russia and the United States to respect its sovereignty. Twenty years later, Russia annexed Crimea and threatened nuclear use if any nation attempted to push it out.1

With weapons technology advancing faster in the 20th and 21st centuries than ever before, nations have had to wrestle with difficult questions that balance competing interests:

  • the defense of sovereignty against adversaries vs. the destructive power of weapons (e.g., nuclear weapons)
  • the reduction in loss of life through smarter weapons vs. the removal of humans from the battlefield, both physically and emotionally (e.g., the invasion of Normandy vs. launching Tomahawk missiles into Syria)
  • the advantage of more advanced weapons systems vs. the ethical questions surrounding them (e.g., an autonomous weapon could better identify and kill an enemy combatant, minimizing bloodshed; but if an autonomous weapon wrongly kills a civilian, or a child, who is to blame when no one gave a formal order?)

With the advent of Artificial Intelligence, weapons could “hunt, identify and kill the enemy based on calculations made by software, not decisions made by humans,” as demonstrated in an exercise near Fort Benning, GA, using drones.2 In light of the questions above and others left unasked, the implications of this type of capability are certainly worth investigating.

Several arguments for autonomous weapons have been made. In a comprehensive 2012 report, the Defense Science Board identified “six key areas in which advances in autonomy would have significant benefit to [an] unmanned system: perception, planning, learning, human-robot interaction, natural language understanding, and multiagent coordination.”3 One might imagine the number of lives saved if autonomous weapons could be deployed with the ability to recognize the speech patterns of a leader such as Osama bin Laden, playing a significant role in defeating terrorism without placing any troops in harm’s way. You trust your iPhone to unlock when it recognizes your face. Would you trust the same technology to choose to kill the right person if it meant preventing terrorist attacks or significantly shortening a war, massively reducing the loss of life? Some argue that AI is better than humans at pattern recognition. Would we trust it with this responsibility?
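To make that question concrete, consider a minimal sketch, in Python, of how a recognition confidence threshold trades Type I errors (flagging the wrong person) against Type II errors (missing the real target). Every number and name below is hypothetical; this illustrates the statistical trade-off, not any real targeting system.

```python
# Hypothetical sketch only: how a confidence threshold trades Type I error
# (wrongly flagging a non-target) against Type II error (missing a target).
# The scores and labels below are invented for illustration.

def is_flagged(confidence: float, threshold: float) -> bool:
    """Flag a detection as a positive match only at or above the threshold."""
    return confidence >= threshold

# Hypothetical model outputs: (confidence score, whether it truly is the target)
detections = [
    (0.97, True), (0.91, True), (0.88, False),  # note: a lookalike scores high
    (0.72, True), (0.55, False), (0.30, False),
]

for threshold in (0.50, 0.90, 0.99):
    type_1 = sum(1 for c, truth in detections if is_flagged(c, threshold) and not truth)
    type_2 = sum(1 for c, truth in detections if not is_flagged(c, threshold) and truth)
    print(f"threshold={threshold:.2f}: "
          f"wrong people flagged={type_1}, real targets missed={type_2}")
```

Raising the threshold makes the system more cautious but lets real targets slip by; lowering it does the opposite. Where that line is drawn, and who is accountable for drawing it, is exactly the ethical question.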

On the other side, many concerns have been raised regarding the ethical implications of autonomous weapons. They have been compared to a smarter version of land mines, which were banned by the Ottawa Convention of 1997, lending precedent to arguments against weapons that can kill indiscriminately. Attribution is also a major issue, because autonomous machines can kill without a responsible party being identifiable. Just as “little green men” unaffiliated with any nation participated in the aforementioned annexation of Crimea, autonomous weapons could be deployed with no nation required to take ownership of their actions.4 Additionally, autonomous machines could make mistakes and wrongly identify a target.

Many more arguments regarding autonomous weapons exist in papers, articles, and journals, but it is more worthwhile here to summarize what the nations shaping the future of autonomous weapons are doing today.

Over the past century, international venues have been key to establishing international law and multilateral treaties, and that work has typically begun, well in advance of formal negotiations, with the establishment of international norms. This is where the debate over autonomous weapons stands. In 2012, the United States concluded that human responsibility and decision-making should not be removed from the use of deadly force. Current U.S. policy states that “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”5 Additionally, 82 nations met at the United Nations Convention on Certain Conventional Weapons from April 9-13, 2018, to discuss the matter and establish norms surrounding autonomous weapons. While agreements have not been reached, most nations agree that maintaining human control over weapons and compliance with international humanitarian law is key, and “momentum is growing towards solidifying a framework for defining lethal autonomous weapons.”6
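What “appropriate levels of human judgment” could mean in software is easier to see with a small example. The sketch below is a hypothetical illustration of a human-in-the-loop gate, not an implementation of Directive 3000.09 or of any real weapon system: the machine may detect, classify, and recommend, but only an explicit human decision can authorize force.

```python
# Hypothetical illustration of a human-in-the-loop gate. The machine may
# detect and recommend, but only an explicit human decision authorizes force.
# Names, thresholds, and structure are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Detection:
    track_id: str
    classification: str   # the machine's best guess, e.g. "combatant"
    confidence: float     # model confidence in [0, 1]

def request_human_authorization(detection: Detection) -> bool:
    """Stand-in for an operator console; a real system would present full
    sensor data to a trained operator and log the decision for accountability."""
    answer = input(f"Engage track {detection.track_id} "
                   f"({detection.classification}, p={detection.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engage_if_authorized(detection: Detection) -> str:
    # The machine never fires on its own judgment: low-confidence tracks are
    # dropped outright, and even high-confidence tracks need a human "yes".
    if detection.confidence < 0.90:
        return "track dropped: below review threshold"
    if request_human_authorization(detection):
        return "engagement authorized by human operator"
    return "engagement denied by human operator"

if __name__ == "__main__":
    print(engage_if_authorized(Detection("T-042", "combatant", 0.95)))
```

Framed this way, the autonomy debate collapses onto a single step of the sketch: a fully autonomous weapon is one in which the human authorization step is replaced by the machine’s own judgment.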

(773 words)

  1. Laura Smith-Spark, Alla Eshchenko, and Emma Burrows, “Russia Was Ready to Put Nuclear Force on Alert, Putin Says,” CNN, March 16, 2015, https://www.cnn.com/2015/03/16/europe/russia-putin-crimea-nuclear/index.html.
  2. Peter Finn, “A Future for Drones: Automated Killing,” Washington Post, September 19, 2011, https://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html?noredirect=on&utm_term=.7b0473a955d9.
  3. Defense Science Board, Task Force Report: The Role of Autonomy in DoD Systems (Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, July 2012), 31.
  4. Vitaly Shevchenko, “‘Little Green Men’ or ‘Russian Invaders’?,” BBC News, March 11, 2014, https://www.bbc.com/news/world-europe-26532154.
  5. U.S. Department of Defense, Directive 3000.09, “Autonomy in Weapon Systems,” 2012, 2.
  6. Tucker Davey, “Lethal Autonomous Weapons: An Update from the United Nations,” Future of Life Institute, April 30, 2018, https://futureoflife.org/2018/04/30/lethal-autonomous-weapons-an-update-from-the-united-nations/?cn-reloaded=1.

Student comments on Killer Robots: The Future of Warfare?

  1. Although I agree with the sentiment that the use of autonomous weapons can lead to a slippery slope in warfare (similar to nuclear weapons), I do think that the benefits outweigh the risks. Many theorized that nuclear weapons would lead to the destruction of humanity; however, studies have shown that nuclearization has actually decreased conflict throughout the world. Similarly, while autonomous weapons could be equally destructive, I believe their effect will be to bring stability to conflict-ridden areas of the world. There is still the possibility that governments will learn to abuse these weapons and limit the freedom of their populations; however, as long as stringent policies are put in place to limit their use, the overall net impact on the world will be positive.

  2. It’s an interesting conundrum: when are we willing to allow for Type I vs. Type II error in the recognition technology, and when do the risks end up outweighing the benefits? The moral argument is, in my view, almost separate from the technological one. Certainly within the next decade we will be able to reduce the risk of any error, but should we still allow fully autonomous weapons systems? On one hand, it could be argued that they reduce the mental-health and physical risks to the soldiers who are deployed, but is it even fair to pit humans against non-humans? The moral hazard of bringing a country to war when no casualties of one’s own army are on the line is an interesting, and somewhat terrifying, one.
