The Role of Robots in Warfare

The military is trending towards fully autonomous robots to prevent loss of soldier lives, but are we prepared for the trade-offs?

With 1.3 million active personnel, 800,000 reserve personnel, and a budget of $597 billion (3.3% of GDP)[1], the US military delivers on its mission of protecting the security of the United States. Historically, this task has been accomplished with “troops on the ground,” but as technology improves, robots have begun taking the place of soldiers. Though this transformation has many benefits, there are significant ethical and technological issues to consider.

The operating model of the U.S. Military has evolved over time. In addition to protecting the U.S., the military has been increasingly involved in global security issues (the U.S. military currently has operations in Yemen, Uganda, Iraq, Syria, and Cameroon). Operationally, the U.S. Military is an impressive machine. The organization processes incredible amounts of data to make decisions and is able to effectively manage nearly 2 million people in a highly hierarchical structure.

The Navy’s Phalanx Close-In Weapons System

Ground and aerial robots were first used during war in 2002 in Afghanistan, and their use in the U.S. military has since increased dramatically. In 2000, the U.S. military had fewer than 50 unmanned aerial vehicles (UAVs); by 2010 it had more than 7,000.[2] Most military robots are piloted remotely and are therefore “semi-autonomous,” but technology is trending towards fully autonomous vehicles. In 2014, PackBots (the most common military robot) were given some autonomous functionality, enabling them to travel to pre-determined waypoints. The Navy’s Phalanx Close-In Weapons System is fully autonomous and is enabled to shoot down anti-ship missiles in a last-resort situation.[3] Piloted robots save lives on the battlefield and are useful in many situations, including bomb disposal and building searches. However, as we begin to consider autonomous robots that think for themselves without human intervention, the value proposition gets more complicated.

iRobot SUGV PackBot

Autonomous robots have been described as the “third revolution in warfare after gunpowder and nuclear arms.”[4] Unlike nuclear weapons, robots are cheap to build and do not require hard-to-obtain materials, and many experts believe the development of autonomous weapons will trigger an arms race. Robots would have to be programmed to make decisions in complex wartime situations and to assess extremely complicated questions (for example: Is the individual running away from me a civilian or a combatant?). Robotic behavior would never be 100% predictable due to the wide variety and uncertainty of inputs contributing to autonomous decision making.[2] Autonomous weapons also lower the threshold for going to battle because they decrease loss of life. Furthermore, they remove the human element and disconnect us from the reality of war.

There are arguments to be made in favor of autonomous weapons. Robots do not have emotions influencing their decisions, which may eliminate some instances of war crimes. One particularly intriguing example of autonomous robots is the Battlefield Extraction-Assist Robot (BEAR), which was being developed to recognize injured soldiers and transport them away from the battlefield.[5]

The value of human lives is difficult to balance against the potential negatives of fully autonomous robotics. There are several steps that could be taken to protect against potential negative outcomes.

  • Stick with semi-autonomous weapons: Semi-autonomous or “piloted” weapons keep a human connected to the robot. Decisions are still made by a human being (with an understanding of the realities of war) but soldiers’ lives are protected. This also creates a clear sense of responsibility. If a mistake is made, the burden falls on the individual. There is no confusion over who should take blame for a robot’s actions.
  • Global Community Responsibility: Several large organizations (Future of Life Institute) and notable individuals (Stephen Hawking, Elon Musk) have spoken out against the use of fully autonomous weapons. The global AI community and intergovernmental organizations need to discuss and define international policy on development and use of autonomous weapons. This is crucial to preventing the mentioned AI arms race. Policy and regulation could be used to maintain a high barrier of entry to war by limiting the development of robotic weapons.
  • Value Added Applications: The use of fully autonomous robots in some situations such as the BEAR rescue robot mentioned earlier is an opportunity to leverage AI for value adding purposes with far fewer risks of negative consequences. Identifying these opportunities and incentivizing development of these technologies is a good way to embrace the trend of autonomy.
  • Targeted Technical Development: If fully autonomous weapons are an inevitable trend in military operations, it is crucial that we tackle the existing technical challenges – notably advanced sensor development and real-time data processing. This technology is a necessity for ensuring robots do not make mistakes based on faulty input data.

Digital transformation has changed much about the world and warfare has not been excluded. To ensure the world is secure for many years to come, it is crucial we are intentional about how we approach the use of autonomous robots in war.


[1] “Department of Defense (DoD) Releases Fiscal Year 2017 President’s Budget Proposal,” press release, February 9, 2016, on Department of Defense website, accessed November 2016.

[2] Lora G. Weiss, “Autonomous Robots in the Fog of War,” IEEE Spectrum, July 27, 2011, accessed November 2016.

[3] Raytheon, “Phalanx Close-In Weapon System,” accessed November 2016.

[4] Future of Life Institute, “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” July 28, 2015, accessed November 2016.

[5] Barb Ruppert, “Robots to rescue wounded on battlefield,” November 22, 2010, accessed November 2016.

[6] Jon Cartwright, “Rise of the robots and the future of war,” November 20, 2010, accessed November 2016.



Student comments on The Role of Robots in Warfare

  1. This is definitely a new twist on ethical dilemmas arising from technological innovations, and pretty much captures the most frightening future imaginable – intelligent armed robots going to war with humans. Considering such technology getting completely out of control is certainly unsettling, but I imagine it’s quite a remote possibility based on the fail-safe mechanisms that programmers would likely encode into such robots. Nonetheless, it does beg the question of whether we should be pursuing this technology at all considering where it might lead. Unfortunately I imagine that its development is inevitable, as governments will probably engage in a robotic weapons arms race, as you mentioned. As such, the responsibility falls upon those of us who are aware of the potential consequences to take a stance and speak up – like you mentioned with the Future of Life Institute.

  2. Interesting post, Jessica. One additional thought comes to mind: while saving soldiers’ lives by substituting them with machines is an undeniably positive outcome, I fear that removing the risk to our soldiers could make us even more callous to the harm that warfare causes. It may lead to further dehumanization of the “enemy,” and since the collateral damage is remote and impersonal, we may be less sensitized to it. Balancing these tensions will no doubt be a crucial task for our military leaders and for us as a society.

  3. Well, this is terrifying. I’d add that this is only the newest edition of what is a very old debate over the tradeoffs associated with distancing troops from battle. We’ve had this debate over guns, cannons, and submarines, to name a few developments. My hope is that this time around, the ethics of how and when semi-autonomous weapons are being used isn’t drowned out by the discussion over their existence. Hypothetically, we wouldn’t have a problem with autonomous or semi-autonomous weapons that do the right thing, every time. It’s only when such weapons are used in unethical ways that we start to see problems.

  4. What scares me the most is the passage about them not requiring any rare materials or incredibly elaborate manufacturing equipment to build. One could literally build a warbot in a garage using electromechanical scrap and sharp items (not to mention the easily accessible black market for guns). This raises all kinds of security concerns about criminals, terrorists, and other non-state actors having easy access to what are essentially “affordable WMDs.”
