August 1, 2016
How Soon is Too Soon for New Weapons?
Historical perspective should inform the aims of the Pentagon’s Third Offset strategy.
By James Hasik
RAND’s call for more missiles for more stand-off warfare seems incontestable today, but the historical record hasn’t been uniform. After the Second World War, the US Strategic Bombing Survey found that “strategic” bombing hadn’t been all that strategic, simply because it wasn’t nearly as effective as advertised. From that vantage point, the enthusiasm of industrial-bombing advocates in the RAF and the USAAF in the 1930s looks premature. By the survey’s time, nuclear weapons were already getting past the lack of precision, as long as one was willing to blow up whole cities at once. By the late 1950s, as RAND’s Bernard Brodie would tell us, we’d have a strategy for the missile age. Air-breathing aircraft then started looking obsolete to cost-cutters: in Canada, Avro’s Arrow fighter project was cancelled partly on the promise of Boeing’s Bomarc anti-aircraft missiles. But even those long-range weapons would be useless against Soviet ballistic missiles.
From past to present, we’ve periodically heard that naval battles would someday be all submarines. In the 1940s, Admiral Karl Dönitz kept a plain seascape on his office wall entitled “Fleet Review 1954”. But for all the theorized vulnerability of surface craft to shore-based mobile missiles, surface ships are still awfully useful against lesser arrays of threats. Just today, it wasn’t only the USAF’s drones but also the USMC’s manned helicopters, flying from USS Wasp, that hit Da’esh around Sirte in Libya. Remotely controlled drones are often enough, and they are notably favored for their uninhabited footprint, but they don’t handle the pop-up targets of rolling ground battles the way gunships do.
Could they someday? As Davies and Davis mention, Slate reported in late June that some wickedly lethal artificial intelligence may be coming. Nick Ernest, a recent PhD engineering graduate of the University of Cincinnati, wrote software for a Raspberry Pi machine that has defeated at least one retired USAF fighter pilot in simulated combat. As one engineer insisted to me at an arms show a few years ago, Boyd’s energy-maneuverability theory suggests that all air combat maneuvering is a matter of math. Besides, in the missile age, maneuverability doesn’t matter like it once did, and computers can be taught well enough to release weapons on suspicious signatures on the wrong bearings.
Navies’ goalkeeper guns do that today, but flying killer robots may be a step too far. As a rather senior analyst from the Pentagon’s Office of Net Assessment noted to me a few years ago, plenty more robotic weapons were technically possible years ago—we’ve just not wanted to cross that sociological bridge. Even so, before we get enthused about autonomous everything, just how hackable from afar is that $35 computer fighter ace? And, by extension, any drone? And just how would we know?
Even then, autonomy requires considerable human mastery. As Daniel Michaels and Andy Pasztor wrote in this Monday’s Wall Street Journal, one of “Aviation’s Lessons for Self-Driving Cars” comes from Shawn Pruchnicki, a former pilot who teaches air safety at Ohio State University: “it’s quite ridiculous we would give somebody such a complex vehicle without training.” Note that I have not addressed the organizational and doctrinal challenges of introducing new classes of weapon systems into established armed forces. Michael Horowitz, who teaches at the University of Pennsylvania, has already written that book. Military-technological development is as much a matter of security problems, domestic politics, and even bureaucratic rivalry as of advances in the underlying science and engineering. This cautionary note might apply to additive manufacturing, rail guns, lasers, and ground robotics—Third Offset fields of technological endeavor which proponents suggest hold great promise, but for which we’re still waiting.
James Hasik is a senior fellow at the Brent Scowcroft Center on International Security.