Should the U.S. use autonomous cyber weapons?

Stuxnet appears to have unleashed autonomous destruction with no human in the loop

From Jason Healey, New Atlanticist:  Stuxnet, part of the "Olympic Games" covert assault by the United States and Israel on Iranian nuclear capability, appears to be the first autonomous weapon with an algorithm, not a human hand, pulling the trigger. While the technology behind Stuxnet or other autonomous weapons is impressive, there has been little or no ethical debate on how (or indeed whether) such weapons should be used. . . .

Engineers have already produced weapons that could engage targets on their own, though militaries have chosen not to enable this feature, uncomfortable with delegating to a machine decisions on whom to kill, what to destroy, and when. . . .

Deputy Secretary of Defense Ashton Carter recently signed a directive clarifying how the department would, or would not, limit use of violence by autonomous and semiautonomous weapons. The DoD directive specifies that "[a]utonomous weapon systems may be used to apply non-lethal, non-kinetic force" only; so any decisions that might harm human beings must be made with an operator, trained in the laws of war, in the loop. But the directive is just as clear that this commonsense restriction somehow doesn’t apply to cyber capabilities.

Details on Olympic Games are difficult to come by, but it appears Stuxnet was just such an exception, set loose with only algorithms, rather than a human, to tell it whether to unleash Hell. Stuxnet’s creators had at least three reasons to be confident they could forgo having a human in the decision loop: It was beautifully engineered and extensively tested to destroy equipment that met an exacting set of criteria, criteria found in only one place, within Iranian nuclear facilities. It was also operating in a closed network, with no reason to suspect it might escape and cause collateral damage. And even if problems arose, or its creators completely lost contact with it, Stuxnet was programmed to deactivate itself. . . .

When Michael Hayden, the former director of both the NSA and CIA, said that with Stuxnet we had "crossed the Rubicon," he meant that it was the "first attack of a major nature in which a cyberattack was used to effect physical destruction." But in the longer term, Stuxnet may prove far more important because it appears to have unleashed autonomous destruction with no human in the loop. Defending against autonomous weapons may necessitate autonomous defenses, and where does that loop end?

Jason Healey is the Director of the Cyber Statecraft Initiative at the Atlantic Council. You can follow his tweets @Jason_Healey. This piece first appeared on The Huffington Post.
