January 19, 2011
Highlight - Kramer

Vice Chair of the Atlantic Council and former Assistant Defense Secretary for International Affairs Franklin D. Kramer delivered a speech at the Black Hat Federal Briefings in Arlington, VA. In his speech, Kramer reinforced the need for the United States to strengthen its information technology infrastructure.

The full text of the speech can be found below:

 

CYBER CONFLICT: CHALLENGING THE FUTURE

Franklin D. Kramer
Black Hat Conference, Washington, D.C.
January 18, 2011

I want to talk today about the future of cyber conflict—about the challenges such conflicts hold and about challenging ourselves to shape those future conflicts—and to be successful.

As the Pentagon saying goes, predictions are very hard, and especially difficult about the future. But shaping the future is possible because that depends on what is done today. And what should be done today is what I want to talk about—and I want to talk about the offense and the defense; about the interplay of policy and technology; about markets and resources; and about people—about security providers, security predators, and the just plain public that needs security in their businesses, their social interactions, and their individual activities.

I’d like to start by setting a framework; then discuss government policy—what government policy should be, what would be good policy—and finally talk about how technology could support such policies.

Let me begin with two propositions, the first more hopeful than the second.

First, we have only begun to think about cyber conflict—because it is new. Fifteen years ago, there were no such cyber conflicts. Last year—2010—delivered us cyber conflict in multiple ways: the Chinese attack on Google; WikiLeaks; and the STUXNET worm; to say nothing of an increase in intellectual property theft, in the power of the ZEUS trojan, and in cyber criminality in general.

That, of course, was the hopeful proposition—but it is hopeful precisely because it is all new.

Let me give a historical analogy—an analogy, not a proof. In 1666, London had a great fire which nearly destroyed the city. It came about because an advance in living conditions—wooden houses for many—was not matched by security measures. There were no firefighting technologies, no firefighting processes, and no resources devoted to firefighting. That was still largely true in the 1800s—and, in the United States, we had the Great Chicago Fire. Now, however, in the modern world, while fire may strike, it is not the city-devouring scourge that it once was.

Fire engines, fire hoses, water supplies, fire hydrants, and highly trained fire brigades do yeoman work, and new building materials also make a difference.

It took a while for the technology and the processes to develop—but they did. In the cyber domain, things move faster. Often, at Moore’s Law speed. This conference itself is an example of moving information forward. That is the hopeful side.

But let me turn to the less hopeful. Ask the wrong question, and you generally will get the wrong answer. And cyber—and what to do about cyber conflict—is an arena where there is generally no agreement on what the question is, certainly no agreement on what the answers are, and where things evolve so fast that questions are transmuted, changing the validity of answers already given. All this difficulty is compounded by the multiplicity of stakeholders—differences in the types of users (think the Department of Defense and Mr. Jones, who wants to buy a fence online); differences in the types of providers (think Intel and the developers of Angry Birds); and differences in the types of governance (think ICANN and the People’s Republic of China).

And you have even further serious constraints:
--The desire for connectivity often outweighing the desire for security.
--Multiple authorities with multiple impacts on the problem, ranging from DOD/NSA to DHS to NIST to industry regulators like the FCC or public utilities commissions, and all affected by private sector actions.
--No agreement on the value of regulation, nor understanding of different types of regulation.
All this ends up leading to no clarity on the problem to be solved nor on the solution sets to be provided.

Cyber, in short, presents a classic so-called “wicked problem”—not easily susceptible to resolution. And yet, I am nonetheless on the side of the hopeful. I think there is a good way forward—or, more accurately, there are multiple good ways forward. I do not have, nor do I think we need, a “science of everything” for cyber conflict. But I do think we can and should be appropriately granular—though, as I will describe, that will require two critical populations—and by that I mean the policy makers like me and the technological experts like you—to better understand one another—and for each to enlarge its problem space so that each sees the problem not only from their own perspective but from the others’.

So, let’s get granular. Let’s focus on particularized problems which allow for focused solutions. But let’s also think about how strategic frameworks can make a difference.

A good place to start is to ask what conflicts we are really talking about.

Let’s take five widely known problems—four from last year and one from 2008.

The attack on Google; WikiLeaks; STUXNET; ZEUS; and the use of cyber during the Russia-Georgia conflict.

From a policymaker’s perspective, these can be categorized into problems of war; covert action; espionage; critical infrastructure protection; and crime.

From a technological point of view, they raise issues of remote attack (with multiple vectors); close-in attack; insider attack; and possibly in the broader Iranian nuclear context, supply chain attack. All involve critical technical vulnerabilities and exploits.

From a user point of view, they raise the issues of protection, prevention, and resilience—and the questions of scale, resources, and governance necessary to accomplish those tasks.

You will recognize that, at one level, these categories are just different ways of describing the same thing. But at a different level, the categories bring different information and capacities, and different risk evaluations and objectives to the issues. And, most importantly, by integrating that information and those capacities and those evaluations and objectives, they offer prospects for solutions that are greater than each such approach—standing alone—could achieve.

A word about solutions, however. The current system—at least in my opinion—is not fully fixable. I will leave it to each of you to decide where we stand on a 1-10 scale of information security, but I doubt, thinking about the whole system as it actually operates, that there will be many votes in the 8-9-10 category—and I myself would put it at far less. I do not want to be misunderstood here. The problem is not necessarily technical. Rather, change—change of the system—is in order as a fundamental requirement.

However—and this is my second point—we need to avoid letting the perfect become the enemy of the good. Manmade systems have always had failures. Money is a pervasive manmade system that operates effectively even though there are counterfeiters, bank robbers, and Bernie Madoffs. Significant cyber attacks may not be preventable, but limiting their consequences may be good enough. To give an example: hypothetically, if a massive attack on the electric grid were contained at the “brownout” level, with key installations still working, that, for many purposes, could be “good enough.”

This concept of “good enough” is relevant also to the distinction in cyber conflict between national security problems and other serious cyber problems.

Cyber attackers of all sorts can have very advanced capabilities. But their objectives can differ importantly:
--there can be attacks with potential national security consequences, where the aim is to undermine key capabilities such as the military or the electric grid, or to conduct espionage, and
--there can be attacks with criminal objectives, where the aim is to generate funds—for example, through selling or using stolen data, extortion, or funds transfer.
What is a “good enough” solution for each may be very different.

Let’s begin with the ultimate national security objective: war. It requires in the cyber arena—as it does in others—a combination of offense and defense. This integrated approach is being undertaken by the U.S. Cyber Command—run by Gen. Alexander who, in his second hat, also runs the National Security Agency.

Defense is critical because, as Deputy Secretary of Defense Bill Lynn has publicly stated, even U.S. classified networks have been penetrated.

So, what is DOD trying to do?
--First, it is seeking to protect a very large enterprise—15,000 networks and 7 million computers.
--DOD knows the current technical side well—its problems are scale and the resources necessary to act; there is competition for those resources—for example, the Afghan war takes $100 billion annually, money not available for cyber efforts.
--DOD’s second problem is how to operate despite attack:
--all believe that, under current conditions, cyber offense beats cyber defense;
--so the question is, expecting to have defenses penetrated, how to keep a brigade combat team, an air wing, or a carrier battle group operating effectively.

DOD’s requirement, therefore, is to develop
--first, an overall approach, including technologies and processes, that can generate
--second, courses of action that provide
--third, resilience; and it needs to have,
--fourth, the people capable of doing this.

But DOD’s strategic framework goes beyond defense and resilience. DOD is also focused on “active defense” and “offensive” cyber. “Active defense” means using sensors and capabilities at the perimeter of the DOD enterprise to affect the attacker. “Offense” means using cyber as one would any other DOD capability—kinetic or electronic warfare, for example.

To do this last effectively requires integration—not cyber on cyber, but cyber with other capabilities against an adversary.

Up to now, I have been talking about war as commonly understood—think the Russia-Georgia conflict. There has been a good deal of discussion, however, as to whether the U.S. is already in a cyber war. But war is not limited to one domain. Whether we are in a war, and whether we will use our integrated capabilities, is largely a Presidential decision, which—at the front end—is normally made with other elements of the government, in particular the Congress.

Distinguishing the threshold is often put in legal terms—but wars can start for multiple reasons, and normally harm, or the threat of harm, is the key—so there could be enough harm from cyber attacks for the President to determine to utilize the armed forces (making his decision with the Congress). The critical decision will be a policy and geopolitical determination as to whether or not we are in, or will go to, war.

On the other hand, it may be that wartime authorities will not be invoked, but we would still face serious problems resulting from cyber attacks. That, as many have suggested, fits many aspects of our current situation—and it is conceivable that there could be further deterioration still, without a decision for war.

Such a circumstance does not mean that the United States is defenseless nor that it cannot go on the offensive. We would be in a gray area—conflict but not war, and the question becomes how to go on the offensive when operating in the gray areas.

The short answer is that one size will not fit all. The longer, and more appropriate, answer will be for the government to develop a menu of responses—a whole of government approach—which can then be applied, as determined in the particular instance, to the problems at hand.

Now, what do I mean by a whole of government approach?

It ranges from the diplomatic to the economic to the electronic to the kinetic. It does not mean only cyber on cyber.

A first level of response could be diplomatic—bilateral or multilateral. In the Google case, for example, the Secretary of State called for an investigation into the alleged intrusions by the Chinese government.

A second level of response could be economic. In the non-proliferation and the counter-terror areas, the use of economic sanctions is well-accepted. Adapting an economic sanctioning regime to the cyber world would potentially be valuable, and it would add to the government’s available arsenal of responses.

At the third level, there is the potential use of either cyber or kinetic responses. Kinetic responses are unlikely unless the President determined a conflict situation existed. However, it is worth noting that in certain cases, such as the 1989 intervention in Panama, the United States has used military force in support of what has included law enforcement issues (there, drug dealing). Moreover, in the counter-terrorism arena, both the Obama and the Bush Administrations, with the support of the Congress, have chosen to seek out adversaries by various means, including highly kinetic ones.

Cyber responses to cyber attacks or other uses of cyber by adversaries, such as communications or recruiting by terrorists, could also be utilized in certain circumstances. Such responses could include disabling actions taken against web sites or servers from which attacks or other actions were generated—or they might focus on other aspects of an adversary’s capabilities.

I do not want to leave the impression that cyber conflict is well understood or an easily controlled problem. There are significant dangers:
--The ease of entry could mean that non-state actors could engage—and their calculations could be less rational and potentially highly escalatory.
--The ease of use might mean that cyber is used too quickly—and that harm becomes great very fast—or that contained war is less possible.
--The ease of entry and the advantage of the offense over the defense means that it will be difficult for the United States to dominate in cyber as we have become accustomed to dominating in other domains—and that there will be uncertain results as a consequence.

So, at this first level of cyber conflict—war and gray areas—you can see that there are critical defensive and offensive capabilities required, including
--an effective defensive approach
--implemented at scale, with adequate resources
--a capacity for resilience since defensive failure and penetration of systems must be expected
--an active defense that will limit continued adversary offensive capabilities, and
--an offensive set of capabilities, integrated and utilizing a whole of government approach.

Now, let’s turn from war to a second arena—and focus on the electric grid. As you will understand, it is conceptually more difficult than the DOD arena—critical infrastructure protection will not only include issues of effective defense, scale and resources as well as gray area covert action, but also multiple entity interaction including public-private interface, and important issues of regulation and authorities.

The obvious background on the grid is that we are all dependent on it (including DOD); it appears subject to attack—there are research papers published in China on how to disrupt it via cascading failures; and there are public articles (notably in the Wall Street Journal) on how it has been penetrated.

But the most dramatic event is, of course, STUXNET—which, while not grid-directed, showed the vulnerability of control machines—which are the very type of machines upon which the grid depends for effective operation.

STUXNET also shows that not only are the offense and defense at play, but—if one accepts Sunday’s New York Times (and many other stories)—that the offense is well ahead of the defense.

It would be entirely unfair to say that there has been no attention to these problems of defense. The North American Electric Reliability Corporation (NERC) has issued standards for cyber security; NIST has recently set forth a 500-plus-page set of guidelines for the Smart Grid. But it would be fair to say that there is great concern that the grid is at high risk. One member of Congress has put the risk of attack at 100%. Members of industry have concerns that the NERC standards are compliance-focused, as opposed to being security-effective.

There are technological gaps, since there is no architecture designed to protect a massive integrated structure against the kind of deliberate attack possible from a cyber adversary; and there is no sense of how many resources to spend on what are obviously private structures and capabilities—but ones with public impact.

This last is why electricity is a regulated industry—but it is generally regulated at the individual state level. The Federal Energy Regulatory Commission, which has authority over transmission lines, has required some steps—and, as noted, NERC has responded—but from my perspective the response currently falls in the category of: “NOT ENOUGH.”

The situation is much like that concerning the environment in the early 1970s—when there was a problem and legislation was required to get the market to respond.

For the grid also, it likely will take legislation (though it is possible that FERC and NERC could be more demanding). The key will be to move the market in such a way that good solutions can—and will—be adopted.
--It will be important not to freeze bad solutions. Outcome specifications, not design specifications, should be the focus.
--Costs need to be accounted for, and “unfunded mandates” avoided.

There is a technological side here in addition to regulation—and an important focus needs to be on resilience. If it is difficult to protect DOD networks—where DOD is a single entity—it is inconceivable that the electric power industry, with some 3,200 companies engaged in generation and transmission, will be immune to cyber attack. How to operate effectively under cyber attack is a key question.

So, with respect to the grid, as with respect to DOD, the policy objectives and the technological capabilities need to be integrated.

There is a lot of work to be done, and there is a need to have a much greater dialogue. The industry, the Administration, the technology providers, and the Congress need to get on with this job.

We do not want to have a STUXNET-type epidemic in the U.S. electric grid.

Let me move from the grid to a new area.

Espionage is another critical category for the policymaker. It is both a national security problem and a problem for private industry.
--There are numerous articles about Chinese cyber attacks on high-level government systems—presidents, prime ministers, secretaries of defense—in the U.S., U.K., France, Germany, and India, among others.
--The German Chancellor raised the issue when in China a year and a half ago.
--The Google attack demonstrates that even the best company can be penetrated.

There is a need for enhanced security against cyber espionage—but espionage is tricky. Penetration for espionage generally involves only taking—so the questions are how to detect and how to stop such action.

There are a number of possible techniques that would be valuable—but most require technological advancement to be available at scale and to be cost-effective. They include:
--changing addresses/moving targets
--non-persistence
--limits on who can come in/strong authentication
--review of what comes in/sandboxes
--limits on what goes out
--review of what goes out

WikiLeaks probably would not fit under the heading of classic espionage—but it surely demonstrates the problem. And it also forces consideration not only of the remote attack, but also of the insider attack.

There is an enormous role for the technical community to play with respect to this problem set. Espionage is the enemy of trust and of privacy—and a more effective solution set is important.

Now, let’s take the concepts that we have discussed and apply them to the cyber vulnerability of the private sector and the private individual and to the issue of crime. This is what many of you do every day—and what many of the conference sessions are about.

You know very well—far better than I—the specific techniques and the development of architectures, so let me focus on four issues:
(1) a broader regulatory approach
(2) an enhanced public-private interface
(3) development of international norms, and
(4) a technological focus on resilience.

On the regulatory side:

The first issue is whether users should receive protection from the network.
--Internet Service Providers see a lot—but should they be required to take action?
--Think about botnets: if your computer is part of a botnet, it affects not just you but others—like second-hand smoke—and ISPs can have a good view of botnets.

So, from my perspective, they should use that capability. But how precisely to use it demands consideration—should the effort go to individual computers? To servers? And just how, and with what conditions?
--What about joint ISP actions? Should the government be involved—to authorize? Or to participate?
--And what if there are unforeseen consequences? What limits on liability, if any, might be appropriate, and under what circumstances?

All this needs to be considered to figure out protection from the network.

The second question is what entities like large corporations should be required to do. Melissa Hathaway has recommended that the Securities and Exchange Commission require corporations to disclose cyber issues—and thereby create an increased demand for cyber security.

So, regulation could make a significant difference and we need to focus on it.

But cyber security cannot stop at the water’s edge if it is to be as effective as possible.

There are numerous calls for international approaches—in the UN, the ITU, and ICANN—as well as for expanding the Council of Europe’s Cybercrime Convention.

Purely hortatory efforts need to be avoided. Capabilities cannot be eliminated in this arena—there are too many positive aspects to cyber—and the United States does not want to be bound to treaties that no one else would adhere to.

Yet cyber is a key component for many countries—and a focus on international norms limiting destructive practices and enhancing international commerce might find a significant international consensus. At least, that deserves exploration.

As with many international treaties, understanding how to deal with the underlying technologies can make very significant differences—and this brings me to a third suggestion. I said earlier in this discussion that the policy community and the technology community would each benefit by enlarging their problem space and working together effectively.

The proposition I would like to put to you is whether a combined “think tank/skunk works” type of entity might be created, with flexible personnel procedures that would allow government and nongovernment people to work together easily.

A think tank/skunk works could make a significant contribution to analysis, operations, and technology development. To be effective, it would need persons skilled in geopolitics, operations, and technical capabilities. It could look at issues such as deterrence and defense; governance and law enforcement; critical infrastructure protection; and resilience. It could be a catalyst that undertook its own work and helped guide others. It would have the advantage of the substantial individual brainpower found in the private sector and the policy insights of the public side. It would be necessary to have a system that allowed for flexible approaches to time devoted and to constraining factors such as classification.

There are, to be sure, already multiple public-private interfaces in cyber, some with substantial technical focus such as the efforts of DARPA or the National Science Foundation. The proposition here is that a focused center with appropriate insight might be able to make advances that combine policy and technical requirements and that are difficult to do either wholly within the government or wholly by the private sector.

Finally, let me close with the specific challenge of resilience, which you have already heard me discuss in multiple contexts. You are all aware of the so-called advanced persistent threat. You have all reviewed STUXNET. And you are all aware of the relative power of the offense over the defense. In war, in gray areas, in critical infrastructure protection, we cannot assume protection and prevention will be adequate.

And so we need resilience. But we will not achieve resilience without adequate technological advances. We will need a multiplicity of techniques and mechanisms. We will need to develop (and here I am relying on an analysis developed by Harriet Goldman):
--diversity
--redundancy
--integrity
--isolation/segmentation/containment
--detection/monitoring
--least privilege
--non-persistence
--distributedness and moving target defense
--adaptive management and response
--randomness and unpredictability, and
--deception.

It is within the capacity of those at this conference to contribute to those solutions. What I would like to propose is that, as Black Hat continues its efforts, it consider devoting elements of those efforts to advancing the specific techniques of resiliency.

So, let me finish where I started. If we are to advance on the problems of information security—to make the digital world more valuable by being more secure—we will need your help, in fact your best efforts: both technological and in helping to develop the appropriate policy frameworks.

Thank you very much.
 

 
