National security impact


Throughout history, technologies—from the Gatling gun and the steam engine in the First Industrial Revolution, to the mechanization of warfare and the rise of the assembly line in the Second Industrial Revolution, to precision-guided weaponry resulting from the computer revolution—have shaped and reshaped strategy, tactics, and the character of war. Not least, the unprecedented existential threat of nuclear weapons forced a paradigm shift in the very idea and conduct of war among major powers.

Now, the emerging technologies of the still-nascent Fourth Industrial Revolution, though often largely civilian purposed or dual use, are once again upending all things military in ways previously unimaginable. Already, AI, big data, unmanned air and sea drones, 3D printing, and, most of all, increasingly autonomous weapons have begun to raise new ethical questions and to alter warfighting, logistics, and military organization. Over the coming two decades, the synergy of this suite of technologies, with AI as a synthesizing enabler, may have as revolutionary an impact on the conduct and strategy of war as nuclear weapons have had since 1945. The classic security dilemma—what one nation sees as weaponry to improve its defenses is viewed as a threat by another, creating a cycle of one-upmanship (in other words, an arms race)—drives the imperative for new technological innovation, which, in turn, raises the stakes of confrontation.

Disruptive technologies pose new risks and challenges to strategic stability across increasingly contested global commons—air, sea, cyber, and space. New technologies could undermine nuclear second-strike capabilities, the basis of deterrence and strategic stability. For example, hypersonic missiles and glide vehicles traveling at Mach 5 (five times the speed of sound) or faster, now in various stages of development by the United States, China, Russia, and India, could evade or nullify missile defenses and create a “use it or lose it” situation for nuclear-weapons states. Similarly, swarming unmanned underwater vehicles (UUVs) could locate and disable ballistic-missile submarines, a key component of US nuclear deterrence. Other disruptive scenarios include the use of cyber warfare to disable a nation’s command-and-control capabilities, or of directed-energy (laser) anti-space weapons to disable or destroy the satellites on which so much modern warfare and communication depend. Yet these emerging threats to nuclear crisis stability have not sparked new codes of conduct, norms, or “redlines” to constrain such mutual vulnerabilities.

In some respects, the prospect of fully autonomous weapons is not a huge technological leap from current—and, in some cases, long-deployed—precision-guided or “smart” weaponry. Such weapons are best understood as occupying a spectrum of complexity, autonomous capability, and human involvement—from the merely automatic, like machine guns, on one end, through varying degrees of human supervision, to the fully autonomous on the other. The US Navy’s Harpoon semiautonomous anti-ship missile—which, once fired, determines on its own which ships are enemies and where they are—has been deployed for more than three decades. Similarly, the Navy’s HARM (high-speed anti-radiation missile), another semiautonomous weapon, seeks out enemy radar on its own once fired. Likewise, the US Tomahawk anti-ship missile, once launched at a designated target area, flies in a search pattern and can locate, choose, and fire at a target on its own. The US Navy’s Aegis and the US Army’s Patriot missile-defense systems both have a range of semiautonomous and autonomous modes. In both systems, human control (supervision during operation, with an ability to intervene analogous to the circuit breakers used to avoid “flash crashes” in financial markets) has been a factor both in causing accidents and in averting them.

To date, only a few lethal weapon systems can be called fully autonomous, with sensors and algorithms that decide who, when, where, and what the target is once launched. The Israeli HAROP, an anti-radar loitering munition, is a prominent example. As Paul Scharre explains in his invaluable book Army of None, the HAROP, once deployed and programmed to search a particular space, can loiter, with a 350-kilometer range, for more than two hours, searching for a target and deciding on its own when and what to hit. Illustrating how rapidly these emerging technologies are diffusing, the HAROP has already been exported to China, India, South Korea, and Turkey. Moreover, some ninety nations have surveillance drones, and at least sixteen have armed drones. Over time, drone technology will become more sophisticated, cheaper, and more widely diffused.

The weapons systems discussed above are just a sampling of technologies with military applications that are racing ahead of any set of rules, norms, or codes of conduct to govern them. The UN Convention on Certain Conventional Weapons (CCW) in Geneva has been examining the issue of autonomous weapons since 2014, and has yet to agree even on a definition. The US Department of Defense has clear guidance authorizing the development and deployment of weapons with varying degrees of semiautonomy (cyber defense is exempted), but draws a firm ethical line. As former Defense Secretary Ash Carter explained, “in every system capable of executing or assisting the use of lethal force, there must be a human being making the decision. That is, there would be no literal autonomy.” The fear of a Terminator-like future has sparked the Campaign to Stop Killer Robots, and the alarm initially raised by prominent scientists and technologists like Stephen Hawking and Elon Musk has grown. In 2017, a group of more than three thousand AI and robotics scientists and experts sent an open letter to the UN CCW, cautioning against the use of lethal autonomous weapons.

Such concerns are legitimate; technology, of course, is imperfect. The history of complex systems with many moving parts shows that they are never 100-percent error free; it must be assumed that, however rarely, they will fail. Think of the National Aeronautics and Space Administration (NASA) space-shuttle disasters, or Japan’s Fukushima nuclear accident. The most chilling, if nearly forgotten, example of technological error occurred at the height of the Cold War, on September 26, 1983. A new Soviet satellite early-warning system mistakenly reported that it had detected a US missile launch, nearly triggering a counterstrike that could have ended the world. Only the cautious skepticism of duty officer Lieutenant Colonel Stanislav Petrov—who suspected an error and, upon checking with Soviet ground-based radar, confirmed that no missiles had been launched—prevented global catastrophe.

In recent years, there have been catastrophic failures of US semiautonomous weapons systems, well documented in Scharre’s Army of None. In 1988, in the midst of combat during the Iran-Iraq war, the Aegis system aboard the USS Vincennes mistook an Iranian civilian airliner, which had taken off at the same time as an Iranian military plane, for a threat and shot it down, killing two hundred and ninety passengers and crew. Two other prominent examples involved the Patriot missile-defense system during the 2003 Iraq war: in one friendly-fire incident, a Patriot PAC-2 battery misidentified a British Tornado fighter jet as an incoming missile and shot it down; in a second, a Patriot battery mistakenly detected an anti-radar missile, fired, and downed a US Navy F/A-18 Hornet in the vicinity. The causes varied, including glitches in systems within systems, human error, overreliance on technology, and either too much or too little human supervision.

These tragedies should serve as early warnings. And, to a large extent, they have: NASA and the DoD have left no stone unturned in evaluating their respective tragic errors and applying lessons learned to improve safety and guard against future failures. But, as AI gets smarter and “deep learning” enhances its capacity, these technologies will become more complex, faster, and more difficult for humans to control. Testing and evaluation are considered the key to limiting errors, yet both become harder as autonomous systems gain speed and complexity. The growing complexity, speed, and self-direction of AI, software, and algorithms make it ever more difficult for humans to understand what autonomous systems are doing, how they do it, and, thus, how to prevent or control errors.

Autonomous cyber warfare

The cyber realm, encompassing both offense and defense, is another area where AI-powered autonomous systems loom as likely game changers. Think of the well-known twenty-first-century cyber disruptions: Stuxnet, whose complex, precise programming wreaked havoc on Iran’s nuclear program; Chinese intellectual-property theft from US firms; Iran’s hacking of tens of thousands of Saudi computers; the Russian denial-of-service attack that paralyzed the entire nation of Estonia; the compromise of data from five hundred million Marriott hotel accounts; and the breach at the US Office of Personnel Management (OPM) that compromised records of more than four million US government employees. Then consider that all of these events occurred without AI-powered autonomous malware.

It must be recalled that anything in the digital universe that can be communicated with can be hacked. Autonomous malicious software that can spread, replicate and update itself, and adapt and respond to cyber defenses is among the growing risks that AI brings to cybersecurity. As a report by a leading cybersecurity firm explained, “Weaponized AI will be able to adapt to the environment it infects. By learning from the contextual information, it will specifically target weak points it discovers or mimic trusted elements of the system. This will allow AI cyberattacks to evade detection and maximize the damage they cause.” With the deployment of 5G and a world of billions upon billions of connected IoT devices, the potential attack surface expands exponentially. An F-16 jet, for example, has thousands of sensors; the US Department of Defense and the intelligence community operate thousands of separate computer networks; and many major corporations likewise run a multiplicity of networks.

Fortunately, AI’s impact on cybersecurity is a two-way street, enabling both offense and defense. To date, cyber offense, with its low barrier to entry (basically a laptop and easily obtained hacking programs), has been cheaper, easier, and more effective than defense. Big data have helped improve attribution of cyberattacks and made “active defenses,” or counterattacks, an option for both governments and businesses. But there are indications that AI may be a great equalizer, shifting the balance toward defense, not least because it would be nearly impossible for human operators to respond in real time to cyberthreats of such scale and scope.


Thus, cybersecurity is an area where automaticity, with humans out of the loop, is not necessarily a bad thing; in many respects, it is essential. Autonomous cyber defense is a growing field, pursued with great urgency. For example, the Defense Advanced Research Projects Agency’s (DARPA) $2-billion-plus AI R&D portfolio includes several programs and cyber challenges aimed at creating highly advanced algorithms that can stay one step ahead of high-tech hackers. Already, DARPA cyber challenges have stimulated some remarkable autonomous AI defenses. One new AI-enabled program, Cyber-Hunting at Scale (CHASE), uses sophisticated algorithms and advanced processing speed to sift huge volumes of incoming data and find the advanced attacks hidden within them. Another such system “is fully autonomous for finding and fixing security vulnerabilities,” not just identifying vulnerabilities but applying “patches” to fix them, even reasoning about which patch to apply and when. The next wave of autonomous defense is “counter-autonomy,” which not only exploits flaws in malware, but finds vulnerabilities in offensive algorithms and attacks them. This could mean offensive and defensive autonomous systems battling each other. The implications of AI-powered cyber defenses for the battlefield are still being digested.

New challenges to strategic stability: hypersonics and counter-space

But AI-enabled cyber offense and defense are not the only new factors complicating strategic stability. Another is the development of highly maneuverable hypersonic vehicles and cruise missiles, traveling at Mach 5 or faster, that can evade missile-defense systems and keep defenders guessing about their intended targets. While the technologies are dual use (with potential for commercial air travel), the nations developing them are focused on military applications. The United States, China, and Russia are leading the race, with India and France also pursuing the difficult technology and other nations at earlier stages of development. Deployment is projected for the early to mid-2020s. There are two main types of hypersonic vehicles under development: hypersonic glide vehicles (HGVs), which are launched by rockets to the edge of space and glide through the outer atmosphere; and hypersonic cruise missiles, which are rocket-powered, faster versions of current cruise missiles. They appear intended as kinetic weapons, destroying targets through sheer speed and force of impact rather than by delivering warheads.

Some argue that hypersonic weapons are inherently destabilizing for nuclear-weapons states: they leave little warning time and risk decapitating command and control, thus threatening the assured second-strike capability on which deterrence is based. This could result in a “launch on warning,” use-it-or-lose-it scenario in an escalating conflict. While efforts by the United States, if not China, to develop “counter-hypersonic” weapons are underway, the difficulty cannot be overstated: if missile defense already amounts to “hitting a bullet with a bullet,” imagine trying to do so at five or six times the speed of sound. That this hypersonic race is unfolding just as US-Russia strategic arms control appears to be unraveling suggests that efforts to ban these weapons, or to limit their export, will be particularly problematic.

Space


Yet another growing concern with regard to strategic stability is the increasingly crowded and contested domain of space, upon which daily modern communications (TV, internet), navigation (GPS), military command and control, surveillance, reconnaissance, and intelligence all greatly depend. Once the sole province of the United States and the Soviet Union, space now features a proliferation of space powers, and of counter-space activities—actions to jam, deny, disable, or destroy satellites in low, medium, and geosynchronous orbits. As of April 2020, there were 2,666 satellites in orbit, increasingly commercial rather than military. Just under half belong to the United States; Russia and China account for five hundred and thirty-two, with China the fastest-growing space power. EU nations, India (which launched one hundred and four small satellites from a single rocket in 2017), and Japan are also major actors in the space environment, while new challengers, including North Korea and Iran, are part of the landscape. Space is decreasingly monopolized by governments, as commercial space activities—including satellite launches, asteroid mining, and space tourism—are rapidly growing.

Against this backdrop, space has become a contested geostrategic domain, one increasingly reflecting major-power competition. A number of nations have developed, or are pursuing, a range of both ground- and space-based counter-space technologies. The United States is particularly concerned about Russia and China, which, a recent Defense Intelligence Agency report says, “are developing jamming and cyberspace capabilities, directed energy weapons, on-orbit capabilities and ground-based antisatellite weapons.” Most dramatically, in 2007, a Chinese “hit-to-kill” missile blew up one of China’s own low-orbiting satellites, creating thousands of pieces of potentially dangerous space debris. India demonstrated similar anti-satellite prowess in 2019, destroying one of its own satellites (and creating some four hundred pieces of space debris), highlighting that space power is shifting from West to East. More recently, there are reports that the United States and Russia have been conducting risky close-approach missions, known as rendezvous and proximity operations, maneuvering their respective satellites near each other in ways that could serve intelligence gathering or counter-space operations.

There are a variety of counter-space systems and technologies, some ground based and some space based. Ground-based antisatellite (ASAT) missiles, which can also be air launched, use an onboard seeker to locate and kinetically destroy or disable satellites. Directed-energy weapons (DEW), such as lasers, high-powered microwaves, and other radio-frequency weapons, offer another means of attack; unlike kinetic ASAT strikes, DEW attacks may only temporarily disable satellite functions. Electronic warfare is yet another type of anti-space weapon, using jamming or spoofing (sending a fake signal with false information). In addition, a number of orbital (space-based) systems can do temporary or permanent damage to satellites—including radio-frequency or microwave jamming, chemical sprays, robotic arms that disable devices, and kinetic-kill vehicles.

While space has tended to be viewed as an offense-dominant domain, some argue that a number of countermeasures, including the trend toward small microsatellites, offer defense some advantages. These include using multiple frequencies; spreading certain missions across multiple military and commercial satellites (whose signals can be intermingled) to create redundancy; and distributing single-purpose functions, with some redundancy, across hundreds of tiny microsatellites.

Regardless, outer space is a critical, if vulnerable, global commons—one on which all nations rely, to varying degrees, for the daily functioning of their economies, societies, and militaries. Such mutual vulnerability suggests considerable overlapping interest in ensuring the domain’s peaceful use. Yet there is a woeful dearth of international cooperation, rules, and norms—despite common interests like mitigating space debris—and governance institutions are largely outdated.
