What does the future of autonomous warfare look like? Four critical questions, answered.

As warfare is increasingly dictated by machines, critical questions around strategy, technology, and morality arise at every turn. But there’s one thing we know for sure: There’s no reversing the rise of autonomous systems.

The upcoming NEXUS 22 symposium, hosted by Applied Intuition in collaboration with the Atlantic Council, will bring together senior leaders to discuss the complex issues at the intersection of national security, defense, and autonomous systems. Ahead of the gathering, experts from the Scowcroft Center for Strategy and Security’s Forward Defense practice addressed the most important questions about these systems and how they will shape the future of warfare.

1. Trade-offs and trading up: What opportunities do autonomous systems create for defense planners?

While the nature of war is timeless, the character of warfare is always changing. Emerging technologies such as artificial intelligence (AI), autonomous systems, and quantum computing bear vast implications for the conduct and deterrence of military conflict in the future. When effectively deployed together, these technologies will likely precipitate a revolution in military affairs, with the potential to reshape the global balance of power.

However, new technologies by themselves do not change warfare—new applications and concepts for using those technologies do. Throughout history, emerging technologies have often first been used to incrementally improve existing procedures (i.e., evolutionary change) before being deployed in transformative ways (i.e., revolutionary change). The threshold between evolutionary and revolutionary change is often human imagination.

Today, autonomous systems and other AI-enabled technologies reside at a similar inflection point. To maximize the utility of these systems and shape development pathways accordingly, it is essential to creatively consider the key opportunities that autonomous systems can create, as well as the barriers to the achievement of those opportunities.

First, these systems can save both hard and opportunity costs. Military personnel spending accounts for a major portion of the overall defense budget and has grown substantially since 9/11. By performing a variety of mission sets, from analyzing data to piloting vessels, autonomous systems can replace human operators. While the size of the force is unlikely to decrease as a result, these systems can free up military personnel to tackle other missions—creating a more cost-effective force at a time when the real purchasing power of the US defense budget is declining. However, current military command structures are not equipped to realize this opportunity. In the case of remotely piloted aircraft, the principle of keeping a “human in the loop” keeps large numbers of service members tied down reviewing and making decisions on behalf of these systems. To capitalize on the promise of these emerging technologies, command structures and norms must adapt to trust these systems to act with increasing autonomy.

Second, autonomous systems can ameliorate longstanding defense tradeoffs. In particular, defense planners have struggled to strike the right balance between “quality” and “quantity”—i.e., less numerous and more sophisticated warfighting capabilities to counter great-power adversaries versus more numerous and less sophisticated platforms to expand US global presence and deter smaller acts of aggression. In the future, drone swarms may be the solution to this tradeoff. Maneuverable and “attritable” (i.e., cheaper and relatively expendable) enough to fulfill key missions in wartime, and deployed in sufficient numbers to ensure greater presence in peacetime, autonomous systems can be the crux of a new US force posture. 

The barriers to realizing this opportunity are both structural and psychological. Structurally, the Department of Defense (DoD) needs to adapt its acquisition approach to develop a high-low combination of autonomous systems that can serve different mission sets, thereby “competing by cost” in addition to “competing by differentiation.” Psychologically, the White House needs to establish clearer diplomatic messaging about the value of autonomous systems relative to human life to ensure that these systems can adequately deter adversaries.

When it comes to emerging technologies, defense planners should always look to “trade up,” or they will risk falling behind competitors. However, if autonomy is to radically change the future of warfare in ways that are advantageous to the United States, defense planners must imagine both the opportunities that autonomous systems can create and a pathway toward making them a reality.

Christian Trotti, assistant director of Forward Defense at the Atlantic Council’s Scowcroft Center for Strategy and Security.

2. Laying down the law on LAWS: What level of risk are US commanders willing to accept in deploying autonomous systems?

Twenty-first-century battles are characterized by speed and precision. Whoever—or whatever—can make sense of a battlefield and execute the kill chain (i.e., the structure and steps of an attack) fastest comes out ahead. As computing power, AI, and next-generation communications revolutionize future conflicts, lethal autonomous weapons systems (LAWS) can decipher and act on more data, more quickly, than humans can. The United States, its allies, and its competitors are already pouring billions of dollars into autonomous weapons research and development, and last year a United Nations report detailed the first known combat use of LAWS in Libya. Though many have condemned so-called killer robots, commanders also see LAWS as an opportunity to improve battlefield decision making and minimize unnecessary loss of life through:

  1. Lower risk to friendly forces: LAWS are uncrewed.
  2. Greater accuracy: LAWS can process exponentially more data from more sensors. 
  3. Increased speed: Targets can be found, identified, and destroyed from a single platform in the blink of an eye without off-boarding data and waiting for an engagement decision from afar.
  4. Simplified logistics: Machines require only fuel and repair.
  5. Improved strategic decision making: By making tactical calls, LAWS allow commanders to focus on higher-level decisions.

LAWS have some serious drawbacks, however. Military leaders will have to accept risk and responsibility on behalf of machines—especially when taking human lives. The military already employs a five-step methodology to estimate collateral damage before engaging targets—but the fog and friction of war ensure that almost no decision to take lethal action is without risk of fratricide or collateral damage. As the world begins to consider the implications of fully autonomous warfare, the United States must begin to lay the groundwork for implementing LAWS ethically and effectively by confronting core questions:

  1. What is the acceptable collateral damage risk for LAWS? Is it more or less acceptable than the risk for a human operator?
  2. Who decides when, where, and how often LAWS are deployed? 
  3. Who is responsible for the actions of LAWS: the operator, commander, or programmer who writes the targeting algorithm?
  4. How are decisions evaluated “right of boom” (i.e., in the wake of an attack)? What degree of AI explainability should be required?

There is no single right answer to any of these questions. To prepare for the proliferation of LAWS in future conflicts, DoD must begin to establish clear rules of engagement and predetermined risk protocols before the first bomb is dropped, round is fired, or torpedo is launched.

Lt Col Tyson Wetzel, senior US Air Force fellow at the Scowcroft Center for Strategy and Security, with contributions from Caroline Steel, young global professional in the Scowcroft Center’s Forward Defense practice. The positions expressed by Lt Col Wetzel do not reflect the official position of the United States Air Force or Department of Defense.

3. Science fiction vs. reality: How does autonomy feature in the DoD’s broader integration of AI-enabled technologies?

In public discourse, the development of AI for the defense and security communities is frequently framed as an effort to build autonomous systems and science-fiction capabilities such as drone swarms. In reality, the discussion is both more nuanced and broader. Fully autonomous systems are not imminent, as a National Security Commission on Artificial Intelligence report articulates, which means that the mid-term future of conflict will involve humans and narrowly autonomous machines (i.e., systems that require certain degrees of human involvement) operating together. Moreover, the applications of AI in defense (in conjunction with other emerging technologies) extend well beyond autonomous robots, with relevance for most if not all DoD operational missions and functions.

Most notably, AI-enabled processing of intelligence is improving and speeding up human decision making, a crucial outcome if human operators are to keep up with the accelerating pace of operations and the growing amount of accessible data.

Training is another useful example. The combination of AI and virtual and augmented reality can greatly reduce training costs and timelines while increasing effectiveness. AI’s ability to speed up and reduce the costs of engineering and manufacturing, maintenance and logistics, and the detection of cyber and disinformation attacks is also central to DoD efforts to develop and deploy AI-enabled capabilities.

To realize these expansive benefits, DoD must balance a need for increased urgency in AI development and adoption with a laser-like focus on ensuring the safe development, extensive testing, and ethical use of these transformative capabilities across the defense enterprise. Autonomous systems are one of many useful AI applications in defense that may become operational on different timelines, and defense planners must therefore pursue a clear and coherent modernization strategy that maximizes the promise of these emerging technologies over short-, medium-, and long-term horizons.

Tate Nurkin, nonresident senior fellow with the Forward Defense practice area at the Scowcroft Center for Strategy and Security.

4. Keeping up with conflict: How should international law be adapted to the realities of autonomous systems and other emerging technologies on the modern battlefield?

As emerging technologies like AI and autonomous systems become increasingly operational and effective, there is a clear need for an updated version of the Geneva Conventions to account for human rights, privacy, and attribution. When the Geneva Conventions were adopted in 1949, mass dissemination of content was limited to radio and television broadcasts, and computing power was in its infancy. Since then, not only has boots-on-the-ground warfare expanded to the Internet through cyberattacks and social-media propaganda, but information warfare has been normalized and robots have become an increasingly important force on the battlefield. The ongoing Russia-Ukraine war is a stark example of how nonkinetic applications of autonomy and AI, particularly cyberattacks and disinformation campaigns, are used in tandem with kinetic weapons like Turkey’s Bayraktar TB2 drone and Russia’s hypersonic weapons.

Yet, despite the looming threats of further invasion by Vladimir Putin, coercion by China, regional aggression by Iran and North Korea, and the erosion of the rules-based international order, no such updated version of the Geneva Conventions exists. International law must provide a new framework:

  1. It starts with definitions and terminology. Global actors have not agreed on definitions of peace and war. Is it war when Russia hacks into Ukrainian power grids in the dead of winter, or into government systems to disrupt command-and-control mechanisms? Or when Iran shoots down a US drone? Is it peace if the citizens of another country are under an onslaught of foreign disinformation that clouds what is real and what is not? Is psychological pain worse than bodily harm? Is an autonomous drone worth a human life?
  2. The framework must also include guidelines outlining data privacy, the ethics of lethal autonomous weapons and nonlethal AI algorithms, and an agreed-upon attribution process. For example, Clearview AI’s facial recognition technology was lauded for its wartime use by the Ukrainian government to identify dead Russian soldiers, while the neighboring European Union, at peace, bans its member states from such use. International law should be able to guide the use of such technology across the spectrum between war and peace.
  3. The final component is accountability and enforceability. When the algorithms behind autonomous systems miscalculate, or the human in the loop makes an error in judgment, and civilians are accidentally killed, who is responsible? How do we hold algorithms or machines responsible when they are inanimate objects? Do we instead punish the engineers behind the system, or is there a third way? Furthermore, how do officials attribute an attack, such as cyberespionage, when AI can be used to cloak the origin of the action? The framework must establish a process of attribution in the age of autonomy, and when this new framework is violated, the perpetrating actor must be identified and punished accordingly.

Like the Syrian Civil War before it, the Russia-Ukraine War demonstrates the urgent need for a new Geneva Convention governing applications of critical technology areas such as AI and autonomy. While several existing initiatives can act as precedents, such as the Ethical Principles for AI adopted by the US Department of Defense in 2020 (which I helped consult on) and the Cybersecurity Strategy of the European Union, there is significant work to be done. It is time for a new code of conduct.

Evanna Hu, nonresident senior fellow with the Forward Defense practice area at the Scowcroft Center for Strategy and Security.


For more insights on the future of autonomy in national security and defense, register for the NEXUS 22 symposium on May 17, featuring distinguished experts such as former Under Secretary of Defense for Policy Michèle Flournoy, investor Marc Andreessen, Defense Innovation Unit Director Mike Brown, and Rebellion Defense Chief Executive Officer Chris Lynch.

Image: Drones are displayed at the Shenzhen International UAV EXPO (World Unmanned Aerial Vehicle Exhibition) in Shenzhen, China, on May 21, 2021. Photo by The Yomiuri Shimbun/REUTERS