
The Atlantic Council of the United States

Addressing Cyber Instability

Welcome and Moderator:
Jason Healey,
Director of Cyber Statecraft Initiative,
Atlantic Council

James Mulvenon,
Cyber Conflict Studies Association (CCSA);
Vice President,
Defense Group, Inc.;

Gregory Rattray,
Cyber Conflict Studies Association (CCSA);
CEO and Founding Partner, Delta Risk, LLP;

Kris Martel,
Senior Cyber Security Architect,
Intelligent Decisions;

Michael Mulville,
Cyber Executive,
Cisco Systems

Atlantic Council,
Washington, D.C.

Date: Tuesday, July 10, 2012

Transcript by
Federal News Service
Washington, D.C.

JASON HEALEY: Good morning, everyone. Welcome to the Atlantic Council. My name is Jason Healey, and we’re actually starting a couple of minutes early because you guys were here early. So thank you very much. Hope that you all got a great lunch for today’s event from the Atlantic Council and the Cyber Conflict Studies Association on addressing cyberinstability. That was a great lunch and I think there’s still some out there. In this event, we’re sponsored by Cisco and Intelligent Decisions. So thank you to them at the beginning.

So about today’s event. We’re going to be hearing from – first from James Mulvenon, who’s the chairman of the Cyber Conflict Studies Association and the vice president of Defense Group, Inc., who’ll be telling you about the Cyber Conflict Studies Association and the research project that we’ve been undertaking over the last several years in general about cyber – about stability and instability and national security generally. Following him, Greg Rattray, the president of the Cyber Conflict Studies Association, will be then extending that – those thoughts about stability and instability and looking at how that applies to cyberspace and cyberconflict in particular.

After that, I will be moderating a panel discussion with Greg, Kris Martel and Michael Mulville on the issues that we’ve talked about – on stability and instability in cyberspace and cyberconflict. And we should have plenty of time for your questions, not least because we’re starting a few minutes early. So without anything else, I’d like to introduce James Mulvenon.

JAMES MULVENON: Thank you, Jay. Good morning. I’d also like to just extend some quick thank-yous to the Atlantic Council, who always put on fantastic events for us. I also enjoyed the lunch. Thank Jay personally, who used to be the executive director of CCSA and has gone on to bigger and better things here, running the Cyber Statecraft Initiative at Atlantic Council, which is a close partner of ours; Intelligent Decisions and especially Marc Kolenko, who couldn’t be here today but was an important – an important sponsor behind that; Cisco and then – and to Hannah Pitts, who is the current executive director of CCSA and is the brains behind the organization of this whole thing.

Just a brief introduction about the Cyber Conflict Studies Association: it was really the brainchild of Richard Clarke and Greg Rattray when they were both on the White House staff, set up as a not-for-profit organization with the explicit goal of trying to foster the development of a field or a discipline of cyberconflict studies akin to that of the nuclear warfare field from the ’50s and ’60s. And being a RAND alum myself, you know, that really resonated with me, and I saw many of those linkages. And we’ll talk about those today.

It began with a meeting up at MIT that has now sort of faded into the rose-colored mist in history as a seminal event where a lot of people got together and yelled at each other about cyberconflict. And over the years, CCSA had sort of meandered until we had – we got a very wise and generous nongovernment benefactor who gave us a very large grant to do a study on what we had laid out as a cyberconflict research agenda.

And in particular, we were looking at five core areas – a strategic or international level, which I’ll talk about in my introductory remarks; looking at operational and military and intelligence issues, that Bob Gourley, who many of you know, and Sam Liles, provided the inputs for in the chapter. We looked at sort of new security ideas, new security agendas. And that was Greg Rattray and Jay Healey. And Greg will talk about that in his remarks.

We looked at the legal and ethical dimensions of the cyberconflict problem. And there Hannah Pitts was one of the lead authors as well as Maeve Dion and Eneken Tikk from Estonia, whom many of you know. Then finally looking at methodological issues, how in fact do we build a methodological canon to go after the issue of cyberconflict? And again, Greg and Jay made major contributions to that.

And along the way, we put out a primer for Congress on critical infrastructure protection and we’re currently finishing a program that lays out a strategic plan – we’re not going to call it the cyber Manhattan project – but it lays out a strategic funding and resourcing and R&D plan that we think, in terms of national investment, is required to actually deal with some of the problems, I think, that we describe in the report, which by the way we have a limited number of paper copies of but we’re, you know, like good cyber people, going to be distributing primarily through electronic means. And then in the next couple of months, there will be a much larger book-length volume that will also be primarily available electronically.

For my own short comments though, I was struck, you know, as Thomas Kuhn has taught us, that it’s a little hard to know sometimes when you’re at a strategic inflection point. It’s really hard to know when you’re on the precipice of a new era, particularly when you’re in the middle of all of the noise and chaos of it. And there are a lot of people in this room, many of whom we’ve worked with for years, who have studied cyber for a long time.

And we all had hypothesized the notion of cyberconflict. We had all correctly identified, I think, the natural convergence between the technology trends that we saw, all of the resulting social and political and economic and military implications of those technology trends, and how that was going to converge with conflict. And we had all re-read our literatures on military history and military technology development, and the waves of new types of technology and how that did – how did that impact war. And we said, that’s going to happen.

But there was an aspirational tone, frankly, to what we were talking about. And many of us were confused and bewildered, knowing the vulnerabilities that exist and knowing the tools and the attack capability that exist, why we had not seen the big attack that Richard Clarke and others have described in their recent books. We wondered, is there a tacit form of deterrence in place that maybe we don’t even understand, or is it simply that potential adversaries in this arena have simply not had the right circumstances, the right perceived cost/benefit of actually carrying out the attacks against the vulnerabilities that we’re all too familiar with?

But given the recent unpleasantness – and I’m reading my youngest daughter Harry Potter right now, so I’m tempted to call it the malware-that-must-not-be-named. (Laughter.) And by the way, I’m proud to report, for the U.S. government people in the audience monitoring my comments, that I live in blissful ignorance of any information related to the malware-that-must-not-be-named. I know that most people in D.C. like to intimate that somehow they have special privileged access, and the wink and the nudge, and I’m really in the know. And I’m here to tell you, I have no idea whatsoever.

But, you know – but if asked, I remember the quote my grandmother always said. She’d put me on her knee, and she said Jamie boy – she said, when hostile forces question you, she said, always admit nothing, deny everything and make vigorous counteraccusations. And I said, OK, grandma. (Laughter.) But I think General Hayden said it best – who’s also a very careful parser of words on this issue – on “60 Minutes” recently when he said we had it – that Stuxnet – oops, I said it out loud – has, in fact, crossed a Rubicon, that we have seen finally at large scale a piece of software have demonstrable and destructive physical effect in the real world to achieve a national security interest.

And so we have to ask ourselves, you know, what does this really mean for us? What are the implications? You know, have we entered that new era of cyberconflict that we have now – and what – and what is that going to look like? And I think that was, you know, frankly, completely coincidental with the release of our report, but we’re going to take the coincidence and run with it as hard as we can.

I will say that the Chinese linguists and Russian linguists and Farsi linguists in my employ have been spending a lot of time over the last month or two monitoring foreign reaction to the various articles about Stuxnet and Flame. And we have been struck by the relative silence from the channels that we expected within the Chinese military and other places to be quite hyperbolic about it. And the only way I can explain this silence, having been involved in U.S.-China cybersecurity dialogues and things where they’ve not, clearly, been shrinking violets about pointing out American hypocrisy, is that they are simply too busy modifying the code for their own purposes to spend any time on press releases.

So for me, Stuxnet, to just go back to the nuclear analogy – and we talk in the – at length in the book about the strengths and weaknesses of the nuclear analogy in particular. And even as a former RAND guy, it does have clear drawbacks. But for me, Stuxnet wasn’t Trinity, it was Hiroshima. It was – we had seen declared tests. We knew about capabilities, but in terms of actually seeing those capabilities married in a way that had a demonstrable, destructive effect.

Now, you could say, well, that’s hyperbolic. You know, tens of thousands of people died at Hiroshima. You know, you’re misunderstanding the nature of cyberconflict. I grant you all of that. And certainly later in the Cold War, global thermonuclear war would have essentially caused an extinction event on the planet. But in terms of that opening, destructive strike that really focused everyone’s mind, for me, Stuxnet is Hiroshima.

And therefore, over the weekend, as I was thinking about what I was going to say this morning thinking about this new era, I went back and read what I thought was an incredibly prescient book, which is Bernie Brodie’s “The Absolute Weapon.” Written in 1946, it had some remarkably prescient things to say about what was to come. And in particular, you can see the seeds in Brodie’s writings of what we then later mined in our studies of Kahn and Wohlstetter and Ellsberg and Schelling and all the greats in the canon, trying to understand what the strategic, with a capital S, implications were for cyber.

And the conclusion we came to is in the title of our – of our event today. The conclusion we came to is that the current strategic cyberenvironment is structurally – fundamentally and structurally unstable, currently. And that, you know, one really gets the sense that we are hurtling along down this path with, you know, the – you know, the belief that somehow we’re going to make it up as we go along.

Now, if you actually go back and read some of the core documents of the Cold War, you’ll realize that a lot of what we did in terms of nuclear deterrence in the Cold War was making it up as we went along. And there was a lot of dynamic offense-defense push-pull. In retrospect, it often seems much clearer, and we can often look back and sort of eliminate cognitive dissonance by sort of squaring the edges on things.

But let’s talk about instability, our inability – first and foremost, in terms of a descriptor – our inability to establish credible deterrence. Now, there’s been some interesting writing in recent years about cyberdeterrence and how you would institute it, but I would argue that we’re a hell of a long way from that right now, for a whole variety of reasons that other people have explicated at length, not the least of which is inability to establish proportionality or disproportionality of response, and so on and so forth.

We live in an environment in the cyberrealm, in my view, that’s offense dominant, with a high degree of incentive to preempt, which is a core element of the instability. We live in an environment in which, once you have crossed that line, you have incentives to escalate quickly, and there are few clear thresholds on escalation control. And then finally, very few if any identifiable mechanisms within this environment for war termination.

So across all of the key phases that we dealt with in the strategic literature, we see strategic instability in the cyberenvironment. Now, where does this instability come from? And by the way, I’m not talking about the attribution problem anymore. Five years ago, it was very easy for me to deflect questions from senior policymakers about cyberdeterrence by saying, oh, there’s an attribution problem. We have no idea who’s doing it. It could be the Romanians; it could be some kid in Tarzana, California. You know, forget about it. Everyone throws up their hands and says, you know, it’s impossible.

We’ve made tremendous progress, thanks to many people in this room, on the attribution issue. There are still going to be problems in terms of trilateral deterrence, about false flags and all kinds of other things, but we’ve made tremendous progress on that front. And despite that progress, we still live in an environment of structural instability.

Primarily let’s begin with the basics. We’ve built this entire edifice of global cyberspace on a flawed architecture. The architects of this architecture tell you that it’s flawed. They tell you that it was never meant for malicious behavior, that they never built in security – they thought it would slow down the network. Some of you may have gone – there was a Council on Foreign Relations meeting two years ago. One of the architects of ARPANET, I won’t name him, stood up.

I felt like it was an AA meeting. I’m Irish, I know what that’s like. And he says, you know, my name is, you know, James M. and it’s all my fault, you know? And we’re like, no, no. Then everyone’s like, hello, James. You know, just like an AA meeting. (Laughter.) But he basically said, look, I turned down the proposal to add security to the stack because I thought it would slow down the network too much.

I mean, we’ve been dealing with the consequences of that – of an architecture that provides anonymity and deniability, that has fundamental authentication problems. And we’ve been gluing security on to the side of it ever since. Now, there’s many people in the room who work in the security and hardware and software industry who will tell you, we’re making progress. We’re doing this; we’re doing that.

But I think if we look fundamentally, the architecture itself was not designed for the things we’re actually asking it to do. And those who are saying, well, now we need to re-architect, we need to have a national, you know, R&D cyber Manhattan project plan, understand also that that’s like repairing a plane at 30,000 feet while maintaining cabin pressure, because of the burden on that architecture from global commerce.

The other half of this problem is that, frankly, the technological change is so rapid that it’s outpacing any attempts by policy or legal or regulatory frameworks to actually govern it. Those of you who regularly follow the Ziegfeld Follies up on Capitol Hill of various cybersecurity bills understand the dilemma posed by bills that are atavistic, backward facing, while the technology continues to move.

There is a lack of any effective governance regimes. And Greg can talk about this in the context of his experience with ICANN. But this – for me, 2012 continues to be the year of sovereignty on the Internet, where we have this fundamental clash of ideas between the United States, frankly, on the one hand, almost alone, against the rest of the world that wants to shift Internet governance to forums like the International Telecommunications Union and other state-based fora, while we continue to push an agenda that says, no, there’s places at the table for privacy advocates, for nongovernmental organizations.

And it fundamentally revolves around a discussion of what is Internet sovereignty. And countries like China and Russia and others have come to the clear conclusion, they’ve understood that every node, every switch, every router, every client is within the boundaries of a sovereign nation-state, and therefore governed by its laws, or travels on a submarine cable or a satellite connection that’s owned by a company that’s incorporated in a sovereign country, and therefore governed by its laws.

In other words, with the exception of Sealand, there is very little part of the architecture that does not fit into a traditional Westphalian sovereignty orientation, and yet we want to ascribe to it this sort of meta-sovereignty level that says, you know, I have these rights as a global Internet citizen, you know, to go onto some cosplay chat room and talk about painting myself blue and running around in the woods like in “Avatar” with a bow and arrow. And that’s – it’s in the bill of rights. I’m allowed to do that and I’m allowed to do it anonymously. And so that clash of ideas – and I would even posit that even within the U.S. government there is a disagreement about this. I work with agencies every single day that have a definite sovereignty view of the Internet, in strong contrast to that of Secretary Clinton and the State Department.

There’s also a relatively low cost of technology and ops and a low barrier to entry. Now, you could say, well, things like the malware-that-must-not-be-named clearly required state-level resources and had – you know – you know, you had to have your own, you know, bargain-basement centrifuge cascade in order to test it against and everything else. But what about all the people who’ve downloaded the code for Stuxnet and Flame, and now can – and so they have benefited from the millions of dollars that were spent, potentially of state funds, and they get that – you know, basically the advantage of economic backwardness Gerschenkron used to talk about.

We all are familiar with the fact that the – you know, from the way it’s destroyed our vacations that cyber operations are running at net speed, which affects our ability to even assess what’s going on or even decision making. And the operations themselves – I mean, remember how many billions we spent trying to build a nuclear C2 infrastructure so the president had, what, 20 minutes to make a decision? Now we’re talking about hundredths of a second, milliseconds, nanoseconds. You know, is it even possible for us to then build this gigantic edifice of C2 and COOP (ph) and everything else?

And you know – you know, being Irish, I have to point out that the problem’s getting even worse with the advent of social media, which has allowed enumeration and targeting that was never possible before without a global SIGINT infrastructure, the move towards mobile, the push towards all kinds of connectivity where security is, frankly, an afterthought, and the movement towards cloud. I would argue that for most of us, as users, it is completely opaque whether in fact we are in a cloud at any given moment or whether we’re actually on mobile or at – you know. And it’s been designed that way for ease of connectivity, but we’ve sort of lost the sovereignty over our own decisions about security and where we want to be on the system.

So the implications of all of this, for me as one of – first and foremost – and this is the natural lead-in to Greg’s discussion – we can’t achieve stability alone. We can’t build safe havens, we can’t seize high ground. But we also know that there’s going to be difficult trade-offs. You know, how do we balance Internet freedom with the control and authentication and trusted identity that we need to restore security to the network? How do we balance innovation with state investment? And frankly, how do we balance the fundamental notions of openness versus security?

You know, that quasi-AA meeting I mentioned before, this person stood up and said, I don’t think the Chinese are doing the wrong thing by creating an Internet DMZ. We need an Internet DMZ. We need to build the walls as high as we can. You know, we need to start bailing blood out of the canoe. And, you know, everyone sort of shouted him down and said you’re being emotional, you’re overreacting. But that is a natural human impulse, that says, you know, just cut the wires, without recognizing, of course, that we no longer can cut the wires.

So we need a – both a national and a global security strategy to deal with this, and it has to be based, as Greg will talk about, on notions of resilience and efforts to clean up this ecosystem. And with that, let me introduce Greg Rattray, as president of the Cyber Conflict Studies Association and former official at the White House and former senior commander in the Air Force on cyber issues, to talk precisely – see, I just get to describe the problem. Greg gets to tell you how to fix it.

GREG RATTRAY: Well, the panel will describe how to fix it.

MR. MULVENON: (Laughs.) All right. Thank you very much. (Applause.)

MR. RATTRAY: It is hard to follow James. You know, I don’t quite have the, you know, vivaciousness and loquaciousness of my co-conspirator on many projects. And you know, he described sort of the dynamics that cause instability from a traditional national security perspective very well. Having studied under Tom Schelling – and James knows the canon and sort of described why, with a traditional national security frame, this problem is tough and we’ve got to work on it.

What I plan to do with this portion of my remarks – because you have to bear with me; I’m on the panel too – is go a little bit more into why this environment is just structurally instable, and then talk a little bit about what do you – what does that mean for you, and how do we behave, given – if you accept instability – which I think generally the national security community and sort of the cyber community, you know, very much does not want to accept the notion that its environment is inherently instable – it does have implications for what we should do, both sort of on our own initiative and then, as I’ll talk a little bit more in addressing cyberinstability on the panel, in terms of collaboration – because I think we need to be doing both things, things to protect ourselves in a competitive sense and to deal with the collaborative opportunities there.

So talking about it as an environment: I grew up as an Air Force officer serving under some of the people in this room and was steeped in a traditional national security view. Left the Air Force in 2007 and became ICANN’s chief security adviser – so the global coordinator of the domain name system, weak authorities in order to accomplish that mission and with a lot of pressure on it to perform in a security sense. And you know, my views have been shaped a lot by that experience in the sort of late portion of the last decade, in that there is – this is a global problem. And it’s an environmental problem as much as it is a sort of a traditional national security problem.

So I got a few aspects of the environment that make it instable that I wanted to discuss, one of which sort of builds fundamentally off of James’ proposition; it’s about how the Internet was designed and how it evolved. I take a little bit different perspective, and James and I have had this discussion. The Internet design was not flawed, right? You know, A, it was an experiment. It was largely, you know, a successful experiment in terms of, you know, what DARPA funded in terms of connectivity and sort of, you know, multipath routing, for a lot of reasons. But for the globe it has been a highly successful experiment. Can you imagine where we’d be in 2012 economically, in terms of innovation, in terms of global political dialogue, without the Internet? It would be a very different world, right? So – and most of those things are seen, at least from our eyes here in Washington, DC, as very important things.

The problem is the inherent nature of the open, interconnected Internet makes it very difficult to secure. So we are fundamentally in a big tradeoff space. I mean, we basically are dealing with too much of a good thing: too much openness, too much interconnectedness. And those of us who focus on security have got to remember we’re part of a balancing act that, you know, our society and our government needs to perform related to what we want out of – out of the Internet. Security people like borders. Those who want to promote more of the Arab Spring and the rise of democracy don’t necessarily want to promote borders. So again, we’ve got to deal with that.

I would sort of ask everybody who hasn’t had a chance – the U.S. issued an international, you know, strategy for cyberspace back in May of last year. That is a very thoughtful document, unlike – there’s a proliferation of national strategies related to cybersecurity right now. But the U.S. did grapple with the fact that we want a lot of different things, conflicting things, in relation to what we want out of cyber. And that document does a very good job of articulating a number of different things that we want, all the way from innovation and openness through Internet governance that’s private-sector and multistakeholder – and I’ll talk a little more about Internet governance – and – but is trusted and securable. And again, you’ve probably got to put each issue we’re dealing with in context to make – to make those tradeoffs.

Something else that James didn’t mention but – or did mention but I’m going to go into in a little more detail is the pace of technological change. As a guy who works with enterprises, was in the Air Force, who’s spent a lot of time trying to secure different, you know, companies, government organizations – the fact that your infrastructure is constantly changing on you, that the way your people use the technology is rapidly evolving, again, makes security of that very, very difficult.

You know, a question I think we’re going to have to answer, as you’ll hear, is: Is the pace of technological change excessive or just unstoppable? And I’ll give you – you know, we’ve mentioned some of these, but the proliferation of mobile devices and now the essential nature of mobile devices to do things like even run control systems; you know, the social media – James pointed out that that has risks in terms of people’s ability to target individuals and weaknesses in enterprise; the move to cloud computing, which could be good from a security perspective but also presents a lot of risk.

Mobile in particular – and I spend a significant amount of time now in the financial services sector. The bank security guys understand that mobile is not a secure environment. Yet there is absolutely no way within the U.S. banking industry or any banking industry that you can convince the CEO and the board that you shouldn’t have mobile banking, because the competitive pressures if you don’t do it and your competitor does, and the customer is moving to your competitor because they got easier access to what they want to do in the – in terms of banking – those are the types of tradeoffs that are fundamentally difficult and make sort of our headlong adoption of the technology – which is economically efficient, makes life easier – presents security people with a very difficult environment to manage. And you know, one of the things I’m going to suggest is we really need to think about this as a risk management challenge.

(Inaudible) – in the long-term strategic view, the question is will the pace of technological change continue? I mean, if it slows down, that would help on security. Jay’s actually written some work on cyberfutures where he posits, you know, technological change that becomes beneficial and sort of removes some of these challenges – I will tell you, I’m a skeptic, as I’ll describe in a moment. But I just don’t see anything in the environment that sort of indicates that the pace of technological change is slowing in the near term. And sort of spending the last 15 to 20 years focused on this issue, to me it seems faster every year. It might be just that I’m considerably older – (chuckles) – than the first time I was dealing with these issues, but the pace of change seems, you know, almost uncontrollable when you look at it from a sort of security perspective.

There have been sort of holy grails in terms of defensive technological innovation. The thing that really comes to mind, to me, is sort of the early portion of the last decade, the 2000-to-2005 time frame, where we started to have inklings that this was really going to be difficult to handle. And when we talked about it sort of from a White House perspective, we would go, OK, how are we going to solve this problem? Why are we having intrusions? Well, if we just did better with firewalls, OK. If we just implemented PKI – once we get the PKI implementation done, we’ll be fine.

The reality is, you know, most of these technological solutions on the – on the defensive side are not scalable and not holistic. And therefore, while the technologists can and should and will be sort of trying to innovate and keep up, I’ll just tell you, my sort of broad view is highly skeptical that we’re going to get sort of a technological rearchitecting of foundations or a tool set that changes the dynamics that James described. And therefore you have to strategically plan for your most likely future to be one where defense is very difficult.

Then sort of one of the aspects that I think is underattended to, though – it’s – I hear more and more talk about it, which is a good thing – is the dynamics I’ve been describing have allowed certain actors to take advantage of the permissive conditions that the Internet provides. And there is a significant Internet underground. It is fueled by crime. You know, it has significant well-resourced actors that organize very large, you know, portions of computing power and the movement of software, malware, that can be used for a lot of different malicious purposes – certainly phishing and financial crime, but those botnets can also be used to steal information. They can be used for disruptive purposes.

A significant issue, and I think one of increasing importance, is the disruptive attacks that are built on the back of botnets that are fueled by crime are used for political purposes. That was clearly the case in the Estonia attacks and other attacks that we’ve had. So you’ve got nonstate actors growing capabilities that are destructive and coercive against state actors.

The approaches that we’ve had in terms of trying to change the environment so we don’t have this underground have not worked. I think we are seeing a lot more attention to this issue. Some in – most – well, some in this room are probably aware that in the United States over the last year or two, there’s been a big focus on botnet and ISP responsibilities and, you know, is there some responsibility for carriers – who used to be able to pretty much throw up their hands and say, you know, we just transit traffic; we don’t look at it – to take a role in remediating, you know, the presence of malware on computers that as an ISP they’re connecting, as a Web hoster they’re providing.

But we’ve reached a tipping point, and I – you know, to me the Internet underground is a significant actor in cyberspace and one that, when I get to the collaborative remarks, should really be the focus of where nation-states – even China and the United States – can collaborate in order to try to change some of these dynamics of instability.

I do think these nonstate actors, you know, aren’t what the U.S. national security community pays attention to. We tend to mirror-image. You know, we tend to think our adversaries – if they don’t do things the way we do them, they aren’t as dangerous as we know we are, right? I think this can be a very, you know, dangerous set of assumptions that we generally use. I think we dismiss the technological capabilities of anything but the highest-grade state actors and foreign intelligence and military services. I think, you know, we need to be careful of that; nonstate actors and even lesser state actors can rearchitect things like Stuxnet and pose very significant threats.

So I think we unfortunately reached a tipping point about five years ago, moving towards instability in the sort of professionalization of an Internet underground that now is a serious national security threat. And we need to sort of move our way towards another tipping point, which reverses that. And again, a little bit later I’m going to describe some of the ways that we can hopefully attack that problem.

In terms of living with instability, right – so if my sort of first set of remarks were about the fact that we basically are living with too much of a good thing that we really like – we really like the Internet; we really like the way the Internet works and, you know, why it works the way it does in the macro sense – and you know, hopefully we’ve provided some description about why it’s just not going to be easy to layer on security in addition to taking all the benefits. What do you do about that?

And James mentioned this – and we’ll probably return to this theme a few times – is, you know, you have to basically be resilient, right? You know, what I get concerned – increasingly concerned with in the cybersecurity dialogue is it’s not a risk management dialogue. It’s a dialogue about fixing the problem. Well, we have a problem that we want. You know, we want the Internet the way it is, but we don’t want it to be – have any sort of bad things happening on it. You really can’t have it both ways.

And I sort of have a note to myself to mention – where I really became interested in this was around – god, it’s too long ago – (chuckles) – 25-plus years ago, when William Gibson wrote a novel called “Neuromancer,” which I read when I was a young Air Force officer – you know, just working intelligence with an F-15 unit. But he described a future where computers were very important, and people were attacking. And these novels – and there’s another guy named Neal Stephenson, who’s a science fiction author – you know, are described as dystopian, because they posit a future where there’s very little state control.

But if you actually read the novels, the people are having a good time in those novels, right? (Laughter.) It’s not like people don’t like their life. So it’s not dystopian in that sense. It’s dystopian from the perspective of state actors who like being in control of the international environment. Unfortunately, those have been very – well, again, I’ve fallen into the “unfortunately” – those are prescient novels, right? And you know, the issue is how do we adapt to that future? So I’ll give you, you know, a few sort of high-level points. And then I think, Jay, I’ll wrap up this sort of initial set and we can start the panel.

If you’re trying to live with instability, what you can’t do is stop bad things happening. To try to assume that is an avoidance strategy. And again, I’m arguing, to a degree, that cybersecurity – you know, much of the dialogue is an avoidance strategy when it really needs to be about resilience and how you sort of take advantage of the opportunity that the Internet presents while mitigating the bad things that can happen because it’s an inherently insecure, unstable environment.

Probably the biggest choice you have at all levels – and I’m going to just hit personal, enterprise and national – is make decisions about how you use it, right? You can avoid most of the risks; you just don’t use the Internet, right? You know, it’s amazing in 2012 how that seems like almost an impossible thought. But I go back to sort of the mid-’90s, when I started to engage in this. And this was the era – (inaudible) – and others – of DOD undertaking a revolution in military affairs and moving towards network-centric warfare.

You know, for a decade, from ’95 to – you know, to 2008 maybe, we sort of went headlong into unassumed risk, putting all of our operations more and more in a networked world, you know, not sort of positing we had choices. You know, we didn’t have to run air operation centers over communications networks that ran on, you know, the global backbone that many people have access to, right?

We’ve got to sort of fundamentally think about, as we go forward, how to reduce the amount of risk we’re assuming by how we use it. That goes to the personal level, as I’ll talk about when we talk more about cleanup and what we can do in a positive way, making choices about how you use the technology. If it’s something really important to you, do you really want to do banking online? I’ll tell you, your bank wants you to; it’s a hell of a lot cheaper for them to deal with you as a client if you go online. But in terms of your information and your money, you need to make a conscious choice about that.

At the enterprise level, you know – you know, I’ve hit on the fact that it’s really about agility and understanding what you want to assume in terms of risk. And on the defensive side, we need to move toward strategies that aren’t as passive. We need to think about sort of deception. There’s actually – Ed Amoroso, who’s the chief security officer for AT&T, wrote a book that didn’t get enough press. But the first chapter in that book is about defense and the ability to deceive adversaries about where they are in the network or what they’re getting out of the network when you talk about APT or espionage threats. And I think we’ve got to think about more creative strategies in a competitive sense.

And at the national level, I’ll just go back to – we want a vibrant Internet. You know – you know, the challenge for the U.S. and many – well, and the thought comes to mind that I am going to hit on – is to continue to reap the benefits. We are the global leader. I had an interesting discussion with a guy from Georgia Tech that works on botnets about a sort of technological soft power, in – (inaudible) – terms, that we have. People – he’s an expert in the domain name system, which is a fundamental way that you track botnets and can cut off botnets.

He says the globe goes to the U.S. for DNS resolution. Our engineering prowess, our – sort of the confidence that we will run the Internet well – the globe still comes to us for services, which provides us a lot of power in terms of how this thing – and a lot of security, in a sense – in terms of what will happen on the Internet. We want that. We will not have that if the Internet becomes a sort of bunch of national fiefdoms, and a lot of walls and borders are built.
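The DNS point is worth a concrete aside: because bots typically find their controllers through DNS, defenders who see resolution traffic can spot botnet infrastructure. One common heuristic is fast-flux detection – a single domain resolving to many different IPs with very short TTLs, so takedowns and blocklists always lag behind. A minimal sketch (the data is made up and the thresholds are illustrative, not operational):

```python
def looks_fast_flux(answers, ip_threshold=10, ttl_threshold=300):
    """Heuristic fast-flux check. `answers` is a list of (ip, ttl)
    pairs observed for one domain in passive-DNS logs: many distinct
    IPs plus very low TTLs is a classic botnet-hosting signature."""
    ips = {ip for ip, _ in answers}
    avg_ttl = sum(ttl for _, ttl in answers) / len(answers)
    return len(ips) >= ip_threshold and avg_ttl <= ttl_threshold

# Hypothetical observations: a stable benign record vs. a fluxing one.
benign = [("93.184.216.34", 86400)] * 20
fluxy  = [(f"203.0.113.{i}", 60) for i in range(15)]
print(looks_fast_flux(benign), looks_fast_flux(fluxy))  # False True
```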

The question is, you know, if we’re going to keep it open and we’re going to be the most reliant, how do we come – become resilient? And I’ll tell you, this is not something that’s a happy story. Resilience is going to be expensive. We’re going to have to make choices that mean we don’t use systems that would cost less if they were on the Internet, but because we want to be resilient aren’t on the Internet. The human capital necessary to see what’s happening to you and respond effectively is expensive. This is another layer of investment – significant investment that we will need if we really want to be resilient in cyberspace.

And then I have to conclude – you know, James talked about the thing that we can’t talk about. But if you are in a glass house, which I argue we are, you should not be the one initiating throwing rocks at each other because, you know, there – we will have, you know, rocks come back at us. And we probably have more glass to break with those sorts of rocks than most people. So you know, I think we’ll probably get into that a little bit more as well. But James and Jay, I’d like to wrap up my remarks there. (Applause.)

MR. HEALEY: Great, thank you. And let’s go ahead and assemble the panel up on the front. And so while we’re – while we’re assembling – thanks for switching – we’ll – (inaudible) – this for a couple minutes. Maybe – we’ll shoot for maybe a half hour, perhaps a little bit less, for us to have some comments. And I’ll ask some questions. There’s certainly plenty there for us to cover. I’d really like to leave lots of time. We’re due to end at 1:45, so hopefully that gives us at least half an hour for a more (general ?) discussion and questions. I’m going to try and leave even more than that. All right. So thank you very much.

And let’s start off with some initial comments, and I’m going to start. Greg, you probably don’t have any additional comments from that, but just in case you do, I will start –

MR. : Yo, Jay, why don’t –

MR. HEALEY: Yeah, I’ll start on my – actually, why don’t I start – why don’t I start on my right here with Kris Martel from Intelligent Decisions. I’m not going to introduce each of the speakers. You have the bios. I do want to say if you don’t have a seat, you can come up and use the four that we just abandoned. For those that are standing, please feel free.
Two administrative comments before I go to Kris Martel from Intelligent Decisions, Mike Mulville from Cisco. First, Greg had mentioned the cyberfutures work that Atlantic Council has done. We’ll talk about that a bit more. Those papers are going to be outside. Second, a quick thanks to Simona (sp) and Sarah (sp) of the Atlantic Council for helping put this together. And with that –

KRIS MARTEL: Hi. Is this on?

MR. : Let me turn – yeah.

MR. MARTEL: Kris Martel; I work with Intelligent Decisions. A little two-second overview: I worked in the intelligence community as a contractor and a lot in the federal government – DOD, GSA and a whole bunch of other agencies. And before that, I actually had my own small business in Florida, where I did everything from engineering and designing networks to administering their security and so forth.

So a couple of the areas that we’re very interested in, we kind of hit on a little bit, one of them being hacktivism – one of the major instability areas that we need to look at and need to face. And this will tie into my second point, which I’ll talk about in a little bit.

So with the nonstate actors, you have the community in general. You’ve got Anonymous, LulzSec, you’ve got the WikiLeaks people, who actually take an interest in different topics, and it’s their way of trying to put political pressure on states and governments. So you’ve got LulzSec and Anonymous, who have actually gone after the government and industry – Sony Entertainment, and then you’ve got the HBGary-type issues – who, when you step in and infringe upon what they consider their rights, actually group together and do their attacks: denial of service, bringing down services. You had the SOPA episode, where a denial of service brought down the FBI and Department of Justice Web servers. So the capability is there to mass against any kind of decision that the federal government may make, and so there’s kind of a fear of retribution with the hacktivists. You don’t know where they’re coming from; they’re people working out of their homes. They group together. There are, you know, a few leaders. The government knows about them as well. But their goal is to penetrate the government – you know, penetrate organizations, bring them down to teach them a lesson – not necessarily to cripple them in some mass event, but the capability is there. And we’ve seen it. Forty-five minutes after the SOPA stuff had come out, the DOJ site was down, just from a denial of service – incredibly fast, incredibly efficient.

So we have to look at this. And when you combine these different hacktivist groups – you put them together and start combining the intelligence they’ve gathered, their access to databases. LulzSec, for example, with their Stratfor incident, where there were, you know, millions of email addresses and emails from the government agencies that were asking this company, Stratfor, to do investigations and gather information. If you take that and combine it with WikiLeaks – and the goal of these organizations is actually to get in and get the databases of all the government agencies – there could be incredible consequences when they put these things together and actually pull off that compromise.

So that brings us to the second point: How do we defend against this, right? If you’re afraid of retribution – you can’t take on everybody. You know, you can’t take on these high school kids who are, you know, signing up to loan their machines to this type of hacktivism effort. How do you fight against this?

And so we’ve got to look at education, OK? A major conflict is the education: the fast pace – Greg has talked about resilience in cyberspace, and there’s a huge underground of cybercrime. This is true. But our education – when I started off, I got an undergraduate degree in anthropology and human biology, and then I went and became a network administrator. That’s how I started off. And I was very curious. I liked it, I learned it, right? And so I was a very good troubleshooter. And I said, this is really fun. Then all of a sudden I started doing more and more work, and I started my own company and kept going from there.

Now they have all these cybersecurity degrees. They have master’s degrees and even Ph.D.s in everything from computer forensics to all kinds of stuff. But from the people I know and work with and the curriculums I’ve seen, we’re teaching based on the past and on what’s happening currently, when it’s changing way too fast. Resilience: How do you get resilience? You actually have to teach for the future. You’ve got to teach out of the box. We have to teach the future generations of cyberanalysts and security analysts and network administrators not to think, oh, how do I lock down this box? Because we’re going to the Web, and that box is going to be irrelevant to almost everybody. A systems administrator or network administrator is going to be pretty irrelevant in the future as we move to Web 2.0 and the cloud and all that.

We’ve got to figure out how we’re going to secure this, how we’re going to teach these people to think out of the box and attack these problems head-on in the future, and that’s not teaching them the way security is now. Security is not this monolithic thing that you can apply to every device or every person. Data and information is what’s out there. Our goal is not to secure the networks; our goal is to secure the information. That’s how we need to attack these problems in the future, because the information is always going to be there, and that’s what it’s all about. The means of how we’re going to do that – that’s a little bit easier problem. But how we secure the information is not going to be, you know, necessarily on all the routers, firewalls, servers and workstation endpoints, because we’re going to the Web, we’re going to the cloud, and that’s where everything’s at. So –
MR. HEALEY: Can I pull on two thoughts that you had in there?


MR. HEALEY: I thought it was really interesting how the cloud might change these dimensions of what’s needed for the workforce. And tying that even more closely to what we’re thinking about here for instability – what else do we need to do differently? I mean, is it training more for resilience? And how do we do that? Does that happen at universities? Does it happen in other places?

MR. MARTEL: Right. Well, yeah. We need to start young. And we need to foster – so you look at these high schoolers, and these kids, they’re incredibly intelligent, and they love tinkering, and they like to do their own little hacking or whatever they want to do, right? It’s so easy because all the information is out there. It was already discussed that, you know, Flame – the code that millions of dollars were spent on creating, the most incredible malware ever – now everybody can get it. So now they have access to this, and they can take that and they can learn.

These are the types of people that we need to focus on and bring in. These people that want to lead hacktivist groups and that type of stuff – we need to pick their brains, understand where their thinking is coming from, and teach that way. And it’s not just the instructors – this is in college and high school as well – not just the professors and instructors doing the teaching. It’s the students, in an interactive learning experience, so that we can understand where they’re coming from, what they’ve learned and what experiences they can bring – and not trap them in a box of here’s how cybersecurity works, here’s how you do networking, there’s this certification and these are the rules you need to learn by. Let them expand, let them think, and actually tap their brains for alternative ways of learning.

MR. : I’m sorry. I was jumping – (inaudible). Please, please.

MR. MARTEL: And so yeah, those are my points. Those are a couple of the big areas in cybersecurity and cyberconflict that we need to address. In order to be resilient, we need to start teaching from a younger age, teach for resilience, and not keep them in a box.
MR. HEALEY: So I’d like to go back to when you were talking about hacktivism. We’re familiar with it; we’ve been seeing Anonymous and these other people, and WikiLeaks to some degree, a lot in the news. And when we see that, we tend to see it as, wow, that’s an individual, localized tragedy, whatever happened to that company. But I think you were starting to touch on this being endemic to cyberspace and cyberconflict as we’re dealing with it now. And it might not just be individual crimes; we’re stuck in a future where we have that, and it might even add up to strategic kinds of instability from these nonstate actors – the way James introduced us to it. Can you talk a little bit more about that threat? I mean, how do you think those groups may or may not be adding to this overall strategic instability in the domain?

MR. MARTEL: No, they’re definitely adding to the instability, the strategic instability, in this domain. The hacktivism groups – so you get – some of the big ones, you know, the LulzSec, the Anonymous and the WikiLeaks – they see that they have the capability. There’s – they are legitimate; they have shown that when they set their mind to something they can actually do it. And they have crippled companies.

And it’s not just worrying about it from the state perspective. You’ve got – with Sony, right? And I know this because my kids – (laughter) – they were really upset when Sony was brought down and hacked, because their games went offline for a month, right? So all of a sudden they couldn’t play their online games, because Sony had to bring their servers down and so forth. So it not only affects, you know, the federal government, the states and different countries; it affects everybody. And the other thing is, it was interesting to them at the same time. They’re kids, they’re curious, so they’re like, how did they do this? You know, they’re asking these kinds of questions. (Laughter.) I tell them it’s not a good thing; you don’t want to do this. You know, you can tell them stories a little more horrific than what would probably actually happen to them – there’s not going to be someone coming and kicking down the door and, you know, doing a strike on your house, likely. But it was interesting to them. So while they were upset, it actually had intrigue. They’re young, and that type of thing is interesting to them.

Even Occupy Wall Street, right? They started going – they had Anonymous people actually going out there and representing on Wall Street, and that – those kinds of environments. It’s a nonthreatening way, but you can – they can do a lot of damage to the government and states and countries.

So how do you fight against that? You don’t fight against it, because you’re worried about retribution if you try to bring them down. For example, it happened with the Stratfor incident. That’s why they actually attacked Stratfor – because they were providing information on these hacktivist groups. So they were like, well, if you’re going to do that, we’re going to take you down. And then what happened? They were offline for over a month. And a lot of information was compromised – user accounts across the federal government and other companies. So there is that fear of retribution from these groups. So, you know, what do you do, embrace it? I don’t have the answer. Are you going to embrace them? Are you going to try to recruit them? Are – how –

MR. HEALEY: But it’s kind of unstable on its own, because –

MR. MARTEL: It is.

MR. HEALEY: You know, you see kind of the U.S. government and a lot of people in the U.S. government don’t want to talk about China. You know, wow, we don’t want to poke them; they’ve got so much of our debt. It’s one of those things that you talk about in SCIFs, and closed rooms and not – and not in public. And if you kind of mention China in a lot of rooms, you know, they kind of freeze.

But also I think you see a lot of think tanks and others that say, well, I don’t want to come out and pick on Anonymous and call them out for doing bad things, because then they might target us.


MR. HEALEY: And – which is kind of unstable, because we may not be talking about the problem because we’re feeling inhibited in our own speech, which is –

MR. MARTEL: Right.

MR. HEALEY: We need someone to go attack Anonymous, because that’s exactly what they don’t want – inhibiting free speech is exactly what they come out against. What do you think about – I mean –

MR. MARTEL: Well, and that’s – and I don’t have the answer for that, but –

MR. HEALEY: Do you – do you see groups that maybe aren’t saying as much about the hacktivists?

MR. MARTEL: Yes, definitely. And in fact, that’s exactly what happened. Working in the government, I know that there are investigations – there are always investigations – and they do research on this. But you’re walking on eggshells. You’re tiptoeing around a lot of topics and subjects, because you’re afraid that if you disclose a source that was used, you’ll be attacked the way Stratfor was – or whatever the cause is. And that happens a lot, not just in the federal government, but in the public and commercial world as well.

MR. HEALEY: So is this longstanding? With CCSA, we’re putting together the first cyberconflict history book, to really look back. And Karl Grindal, who is our product manager here, has actually been writing on the history of hacktivism and looking at this – back to the ’90s and Cult of the Dead Cow and some of these other groups that we thought were really bad, and now we’ve got these hacktivist groups. And it turns out hacktivism the way they were doing it had kind of passed by 2003.

I really liked – there was a good piece in Wired that had just come out about Anonymous and how Anonymous has been organized and come together. Do you – what’s your gut feeling right now? Do you think this might be like that earliest face of hacktivism; that maybe this peters out in a couple years? Or is this longer term?

MR. MARTEL: Yeah, no. I think this is going to be a longer term, because there’s –

MR. HEALEY: Dang it.

MR. MARTEL: Yeah, I know. (Chuckles.) The level of communication around the world – if you say something right now, there could be a live feed going to China and Russia, and everybody else could be listening to what we’re saying. So there’s absolutely such a high level of communication and networking bandwidth out there. And then you tie in social media and – honestly, it’s whatever the topic of the day is; people will jump on. And they have no problems, you know – a lot of people just let their computers do their botnet thing and allow them to be part of this organization.

And sometimes that’s good. I mean, Google has programs where they actually use botnets to actually do mass computing capabilities, and that can be a good thing as well. But you know, if they take over and they have access to those botnets, then what happens, right? They can take down major systems, you know. They could – you know, you could get into the SCADA systems and you could start shutting down the Eastern Seaboard or whatever it – you know, they could cause major problems.

MR. HEALEY: And before we go to Mike, one last question. When you talked about cloud, you said as we go towards cloud, a lot of the people that are supporting the desktop or supporting the servers, especially in small and medium-sized organizations like the Atlantic Council – that can all go to the cloud. And that seems like it could be a time to reinvest all those people and start focusing towards resilience. I mean, one of the things that James had talked about was that attack is better than defense and always has been. Is the cloud one of these inflection points that gives us a chance to claim that back and get defense better than offense?

MR. MARTEL: Yeah, definitely. That’s where we need to focus – on securing the cloud and doing it right. We bake the security in. No longer gluing those pieces of security on or brushing them on as an afterthought; actually bake that security in and make it part of the cloud. But you’ve got to do that. And again, as we move forward, the system administrators and network administrators, those types of roles are going to be cut back. There are going to be fewer and fewer of those. We’ve got to think more about cybersecurity from the cloud perspective, Web 2.0.

MR. HEALEY: Great, thank you very much. And this is one of the things that we talk about in our cyberfutures paper. Right now, and for the past decades, in cyberspace it’s been easier to attack than to defend. Offense is easier than defense. And what I really want to try and look at in this panel is how we can flip that. How can we get it so that defense is better than offense – D is greater than O – or, what we would really like, the cyber paradise, where defense is much, much better than offense? Because if we can build things so that defense is much, much better than offense, it will help give us some of the real strategic stability that we might be looking for.

Great, thank you very much. And Mike?

MR. RATTRAY: (Off mic) – I do have to emphasize that, you know, that paradise is a paradise for the current rulers of China too, right? And we’ve got to balance whether we want certain types of security paradises at the cost of other tradeoffs.

MR. HEALEY: Hacks. Absolutely.


MICHAEL MULVILLE: So I’m Mike Mulville; Cisco. I’ll say I’ve worked at a broad range of security companies in the enterprise commercial space, as well as being a CTO at a big integrator, SAIC, for the federal space – and today I work mostly on the DOD and intel side, on cybersecurity. And I’ll kind of start out – because Cisco is a large company; about 70 percent of global networks run on our technology. So I’ll be as realistic about this as I can without being too grim, right, but I will come out and say what we see with customers in discussions.

But I can’t tell you over the years how many conferences I’ve been to that talk about monitoring challenges and identity access, right – years of this. And you know, when we talk to customers today, all up and down the stack and the federal government, there’s still a fundamental step that they’re on, right? They’re talking about upgrading their intrusion detection sensors. They’re talking about still installing identity access-type solutions. And you can’t help but step back and say, OK, we’ve been at this for years, right? I mean, the technologies to do some of these things have been around for quite a while. But it’s clear the organizations, the agencies, are struggling to implement this. I mean, there are pieces there that kind of work for a limited segment or for one app but not the others. And so you have to kind of wonder: Are we at a point of diminishing returns for the approach we’re taking, right?

So the technology continues, right – as has been kind of communicated earlier – to continue to improve in leaps and bounds. We don’t – at least Cisco – don’t see that stopping, right, although, you know, a lot of phone conversations now happen over the network, right? It’s not – it’s no longer the old, you know, Ma Bell separate circuits and so forth. It’s all now running over the Internet. I mean, your home Cox Cable or whoever is running this over the Internet. So it’s all now integrated.

And so the approach to risk, or lack of risk – the ability to combat it as a process and how to manage that – those are all things these organizations and agencies struggle with day in, day out: just grappling with what’s an event and how to respond to it. It’s very defensive, by and large, especially at the civilian level of the federal government – trying to just understand what’s happening. I mean, FISMA is there for a reason. Again, that’s a process that’s there. But they struggle day to day with just trying to pass the checkmark on that. So I think different approaches. And with the paper and the instability, I think it really brings to light the fact that maybe there’s a different approach that we need to grasp from a strategic perspective. It’s a challenging thing.

I kind of equate it to, gosh, you know, if the Titanic could have just turned five degrees when it knew there was a problem, boy, what a different story that would be. But what a challenge it was to change the Titanic just five degrees. And it’s the same type of thing with cybersecurity, right. There’s technologies out there – people are implementing the most fundamental solutions that exist there, and for them to climb up the stack to implement those larger capabilities is really a challenge. You know, the technologies of mining data, alerting data – all good things. But if you don’t know how to implement it or you don’t have people that are educated enough to be able to piece together solutions to try to figure out what’s happening, it doesn’t really buy you a lot.

It’s like the forensic capability of cybersecurity. Those are really high-end people that have kind of eaten the raw meat of cybersecurity to be able to debug and track down what happened on different systems at different times and piece it together. It’s like a supersystem administrator. And to get that level of expertise is really very difficult. It takes a lot of time, special training, and there’s an inherent knowledge of what to do. And I don’t think we can rely on those few people guiding us forward on our defenses.

You know, there’s discussions about deterrence. From what I’ve seen, I’m not so sure a lot of the federal agencies or even commercial companies are in a position to even know what to deter. They’re kind of in this fundamental mode again that I’ve mentioned, of what’s happening on my network. And, you know, with a lot of things that happened with Google two years ago, they didn’t even know what was happening in 30 of the companies. They didn’t know what was happening until, you know, days, weeks later. I think that’s a clear indicator that deterrence is just absent there.

Now, for us to be able to get into that position, I think, is going to take a sweeping change of approach. The technology – more capabilities are being built into it, for good and for bad, right? I mean, however we use them as humans on the planet here – really, I’m not so sure defense – defense is going to get us partway there, but it’s going to have to be a different type of approach to, I think, resolve the problem, right? Turn things off or enable things in a much more real-time fashion.

Now, that’s clearly a struggle today. So I – I’ll stop there because I could certainly go on for a long time, but –

MR. HEALEY: Well, you ended on a really interesting point there, that defense only gets you part of the way, and – so I’d really like to pick up on that. I mean – what are – what exactly did you mean by that, and who – who’s involved in that?

MR. MULVILLE: So, when we – you know, I think a lot of the customers that we deal with, when you kind of read between the lines, they are greatly relying on technology to enable the Internet, the highway, but also to defend it, right? I mean, there are processes of when they see something red on the screen, they call someone in. But if they don’t see something red on the screen, everything’s OK.

So there is this expectation that the technology will mature over time and kind of take care of the problem. And it’s slowly getting there but, again – the years of people trying to implement who’s on the network and what they’re doing – it’s still a void. So in terms of defending, again, I’m not so sure that we – as a company or an agency – are really in a position to take significant steps to be like, hey, we’re 90 percent defended against anything that happens, right? I just don’t think we’re there.

MR. HEALEY: So when you’re saying turning things off, did you mean turning off our own things or turning off systems that are –

MR. MULVILLE: Switch off – I mean, I think there have been different agencies that, about a year ago, got hacked, and they just unplugged the whole agency –
(Cross talk.)

MR. MULVILLE: – from the government or from the Internet. You know, that’s how you stop it, right? It’s not – that’s not a – it is a last line of defense, but it’s not really a reasonable defensive approach.

MR. HEALEY: So what role do you think – so we’ve got an unstable – you know, a domain of unstable cyberspace where we’re seeing conflicts, both nonstates and states that we’ve heard about so far. You’re saying it’s difficult for companies to defend and, when it comes to deterrence, you know, that it’s not really clear how they would deter. What does that mean about the future of nonstate actors and companies in this, and especially with the relationship to your government? So if it’s unstable, if you’re seeing the nonstate and state action, what then?

MR. MULVILLE: Well, in a lot of the customers we have, the solutions are just so vastly different across the board, right? And I’m – it’s not just technologies, it’s the approach they use.

You know, you’d like to kind of standardize everything, and here’s the – you know, here’s the size of the donut that you have to kind of eat, that kind of thing. But that’s just not realistic from where we are today, and it – I think it kind of calls back to, you know, do we just blow up the architecture or try to bolt on at 30,000 feet of what we’re doing to try to change that methodology.

And it’s clearly more than just technology that will get us there. It’s perhaps a different approach, and I certainly don’t have all the answers, but a different approach from the process to get us there. I mean, FISMA’s a step, but it’s a step that’s clearly hard to implement and isn’t totally effective; it’s partially effective.

So how do we kind of take those different steps to get there? I think the challenge is that, you know, banks are an example that was made earlier. There’s great pressure to go online in the mobile world, right? Those pressures are financial pressures, right? They want to keep the customers. And that is true with all companies, right? They are deploying cloud solutions so that they can lower their cost and get more revenue.

It’s the way the world works, but at – along the way, they are accepting the risk unbeknownst to us. And so how do we – how do we kind of take that defense and move it up several categories – like in your web browser, you move up to high security. How do we do that in a general sense and from an agency perspective? I think that’s – you know, technology is part of it, but it’s some type of different approach to the solution, a strategic approach – to the solution.

MR. HEALEY: Yeah, because I’m very skeptical that any collection of individual decisions to run a network more securely is going to change the instability of the cyber domain. You know, we haven’t seen a real cyber war yet. If we were at cyber war, we would get to stab people and, you know, we haven’t crossed that threshold yet. And if it stays this unstable for a long time, if it becomes OK for nonstates to reach out and smack states or states to reach out and smack nonstates, or states to smack states before there’s an actual conflict, it’s – I only see the instability getting worse on the institutional – I mean, on the – in cyberspace, writ large.

MR. MULVILLE: I would say, I mean – and Kris, you’d probably add to this, too – that my sense is that what’s been happening with the different malware and the botnets is that people are kind of dipping their toes in the water to see what happens, right? So you push a little bit, you see what happens and what you can get for something that potentially happens on a grander scale. I’m not trying to be too gloomy, but, you know, it’s like anything else. You’re kind of testing your system to see what’s out there, and everybody’s learning from it. So –

MR. HEALEY: Yeah. Kris, then Greg. Go ahead.

MR. MARTEL: So I also wanted to just comment on what Mike had said about FISMA, for example – that goes back to securing things the traditional way, and understanding and making sure.

FISMA – I mean, it does not mean that you’re secure. It means that you checked some boxes to say that you’re doing something a specific way, and if you think you’re secure because you have a FISMA report that says my systems are secure, you’re very wrong. I mean, there are, you know, breaches every day on FISMA-compliant systems. So FISMA is just a – and we’ve been doing it for years – it’s a way of checking a box to make us feel good, and it’s not very effective in actually addressing what some of the cybersecurity instabilities are.

MR. HEALEY: Right.

MR. RATTRAY: Yeah, Jay. You know, I think we’ve got to turn to the audience pretty quick, right? So there’s a couple of other sort of strategic-level themes I wanted to throw in. You know, if we – James and I painted a sort of dire – and I think it is dire – or a very difficult situation related to instability.

What you’ve heard on the panel so far is what do you do yourself to compete effectively with a focus on – how do I compete effectively by making myself safer. One thing that I have said many times – some in the room have heard me say it more than probably four or five times – is Americans don’t think about collaboration as, you know, naturally part of how we manage this problem. So one of the things that Jay and I and CCSA spend a lot of time on with this study was alternative approaches to looking at how to solve what is a global problem, right?

So we’ve talked about sort of an engineering view of re-architecting the plane at 30,000 feet. Think about the Internet not as an engineering problem but as a biological ecosystem problem and a complex, adaptive system. You basically have an Internet that’s an organism or, more appropriately, an ecosystem of organisms – the behavior of people, the behavior of technologies – and how do you evolve it in the direction that you want to go, right?

So if you get a chance, you know, take a look at the executive summary – and hopefully, when we get the study out in a couple of months, the sort of concluding chapter – think about models related to the environment and public health. I mean, one of the fundamental issues here is if I’m insecure, I pose a problem to everybody else. That’s the same problem as you have with pollution. In economics, it’s called externalities. There are approaches to managing externalities, everything from, you know, getting people to believe that cleaning up the environment is good for everybody.

I mean, you saw a real tipping point in the last decade of companies marketing based on green, right? That’s what we want – you want the technology companies saying, being more secure is actually a sales advantage for us. You know, at the enterprise level, you have companies – and actually, my company does some work in Japan. It was interesting to see Toshiba’s annual cybersecurity document. It was about a healthier cyberspace for Toshiba, right? I mean, elsewhere in the world, these messages resonate a lot more easily than competitive messages.

And I think public health is an even stronger analogy. You’ve got everything from stopping the emergence of things like botnets and disease, or the propagation of code, to monitoring the current health of the system and shining a light on the places where there are problems so you can provide assistance as well as impose accountability. So if, you know, SARS is breaking out in a particular country, every country around the globe wants to know that as soon as possible, and both help and quarantine themselves against, you know, traffic that’s coming from there.

And then I think even more fundamentally, what are the conditions for disease, right? What regulatory structures, what cultural structures actually cause cyberspace to be messy and unstable, and those things are measurable. I mean, there’s a sort of growing energy around this.

The OECD actually has a pretty significant project getting under way on what are the metrics at a policy level for cyber health, and then working with CERTs to actually go measure at a national level, you know, what’s happening. So – because I have come to the conclusion that the current models have failed, having served as a military officer and worked a lot with the law enforcement community, watching the problem worsen. And I won’t name names, but two very senior FBI guys have basically told me in the last two weeks, yup, the problem’s gotten worse on my watch – (chuckles) – you know. Those approaches are not scalable and they’re not collaborative. I do think one of the paths forward to addressing instability is through collaboration.

And even the U.S. and the Chinese can collaborate. Botnets are not good for either – you know, either country. And I’ll just point out one example, sort of at the state level of what traditionally would be called confidence-building measures.

The Chinese, the Japanese and the Koreans – three countries that don’t have a great history with each other over the last hundred years – have an agreement about what to do when botnet attacks emerge, particularly for politically motivated purposes, and about having their ISPs, their search companies and even, where necessary, their political leadership engage in not allowing this to escalate to a state-versus-state conflict. That’s some of the stability we need to achieve.

There are people here in Washington – certainly, the State Department is focused on stability and confidence-building measures in cyberspace; those are very right-minded efforts. So I just want us to kind of come back to some of the strategic-level issues.

MR. HEALEY: Great. Thank you.

I’ve got one substance comment – moderator’s privilege – and one administrative comment. And then we’ll be going to the audience. And we’ll have microphones on each side. So I’ll – please put your hand up and I’ll call on you, when we get to that point.

The substance point is – I’d just like to continue to emphasize this bit on cyber instability, and think about what that means in strategic terms. It means that there are advantages for nations to attack early, because it’s offense-dominated. If it’s unstable, then you want to hit the other guys before they can get their act together. It goes right back to early air-power enthusiasts and striking first.

There are so many issues that come out of this. For example, in early stages of U.S.-China tensions, the U.S. has shown that we’re willing to use cyberstrikes before there’s actually fighting going on. That’s a difficult norm that we’ve just taught the Chinese – that it is OK to strike first before there’s an actual conflict. So regardless of whether it’s new legislation, new military tactics, new military plans, developing norms with other countries – all these issues of traditional national security and international relations are impacted by this issue of instability.

Right now, the U.S. apparently wants norms that make it not OK for Chinese espionage but OK for covert cyberstrikes. That’s a difficult position to put the State Department in – (laughter) – to say spying’s not OK but covert attacks are. But that – if you’re looking at our behavior and what’s bugged us and hasn’t bugged us – that’s what we’re saying.

So it’s in the U.S. interest to shift this to being more stable, not least because we have so much at risk. It’s in our interest to get this from being offense-dominated to more defense-dominated, especially if it can be done in ways that keep our freedoms and the Declaration of Human Rights in place.

So with that, let’s go ahead – and we’ll shift to some questions.

That was the first hand I saw – two, and then three, OK?

Q: (Name inaudible) – New America Foundation. You spoke about hacktivism and the need for education. The Pentagon and its intelligence agencies are recruiting, and companies are always seeking expertise.

But you mentioned education. How do you reach out to the disinterested hacktivists who just play with computers in their basement? How do you get them involved – like were you thinking of, like, police reaching out to people who were involved with crime when they were young or something? You know, just educate them about what else they could do? Or – what did you have – expand on that, please.

And also, how do you plan to adapt security to match these people who just spend all their time figuring out ways to do one over on you. Do you just keep, you know, staying attached to the new?

MR. MARTEL: OK. So I can’t – I can’t tell you, this is how you’re going to get all the disinterested hacktivists or the kids that are borderline – wanting to do the malicious hacking versus being the ethical-hacker type stuff. Inherently, they’re going to go one way or the other; there’s going to be an interest most likely, and they’re going to follow that.

But the – you know, the Department of Defense, the Pentagon, the federal government – they can, you know, put together programs that do incentivize them. And as I’m saying – this is my take on it – you want to allow them to be expressive and think outside the box. Take what they’ve got, take the knowledge that they have, and build upon that. Don’t try to impress upon them, here’s how we’ve always done things; this is the way things are, this is the way you need to learn how to do this and this is how we work. Actually embrace what they have, and work from that.

As far as how do we – how do we get there – and this goes back with the – with like Cisco and other types of commercial products out there – if you can dream it, right, they’re going to build it. That’s how technology works, OK? Apple – you know, your iPad and all that.

You got to – we – again, we have to look at alternative ways; say, OK, well the technology’s not there for us to do that, so let’s make it that way. This is what we want to do. This is the new road we’re going to do. So if we’re going to do cloud, this is how we’re going to secure it. And we’re still going to have all the – we want to keep the capabilities. We want to make it convenient; we don’t want to stifle commerce and we don’t want to do any of that, but we do want to make it secure.

So rather than just rush forward and just do cloud, let’s actually secure it first. Let’s actually build this in, figure out the best way to do this. And it’s going to be expensive; there’s going to be a cost to this. But it’s better than the alternative of not being – of being insecure.

MR. HEALEY: And I would add to that, also. We need to scale. Because so many of our things are – we come and we have great ideas and they’re good small projects. But if we want to shift cyberspace itself to being a more stable domain, that is difficult to do one hacktivist or even one hacktivist group at a time.

And so what my boss, who is the CISO of Goldman Sachs, always came back to was the lowest cost of control. You know, where can you get the most leverage by taking your actions? FISMA, these other areas, aren’t great ways to get the most action using the least amount of effort. I think we’re approaching this, but we need to figure out how best to scale and use the big efforts of – OK.

Go ahead, and we’ll come back to – (inaudible).

Q: I’m Bruce MacDonald, with the U.S. Institute of Peace.

I want to return to the nation-state level again for a moment, and say that what – I think in the discussion, there’s been a – what I would call a blurring of the lines between what I would call strategic stability or instability, and tactical. There’s no doubt about the fact that we are seeing at the technical level – indeed, even at the – at the special operations level – operations such as Stuxnet and so forth. But at – one of the things that scares a lot of people is the question of this huge, catalytic – catastrophic cyberwarfare that leaves smoking ruins in industries around the world – or in, most particularly of course for us, in the United States.

There’s a dimension I share with what James Mulvenon said. I share almost everything with what he said, but I’m a little more optimistic at the strategic level, and I’d be interested in the panel’s comments. And that is, one aspect of cyber that is different from nuclear and space both is the fact that we don’t know – there are no satellites we have that can see what cyberweapons the other side has. And countries, typically speaking, tend to be risk-averse, all things equal, when they have a lot to lose. And if you’re China or the United States, with these vast, huge economies –

So my question is: Isn’t there, in fact, an element of stability in that? Because we don’t know what the other side did – take the U.S. and China – each side is somewhat self-deterred because they don’t know what the other side can do to them, even if both sides have phenomenal abilities to destroy the other with cyber. But aren’t they in fact somewhat deterred by that? And isn’t that stabilizing?

MR. RATTRAY: Right, right. A lot of thoughts come to mind. So I’ll try to parse maybe three levels.

First, there’s the sort of, you know, the nature of how much you know about your adversary’s capabilities, and is that stabilizing or destabilizing? I think everyone from Tom Schelling on – the history of U.S. strategic thought – probably goes the opposite direction from you, Bruce. And certainly the agreements we’ve signed in the nuclear age promoted transparency into each other’s capabilities. As a political scientist I’d say the risks of miscalculation from not knowing what the other side can do generally tend to be higher than, sort of, the self-deterring effect related to, you know, not knowing.

It’s a calculus – a strategic calculus question, and, you know, with my academic background I can say people can come down differently on this sort of fundamental logical proposition.

The other is, you know, there are a lot of intelligence means to sort of make assessments of the other side, right. So we do – both sides probably do know something about the other’s capabilities. And I think, as James has said, certainly over the last five years attribution problems have gone down, and our cognizance of the capabilities of the other side has gone up. Having served as an intelligence officer, including during the fall of the Soviet Union – (chuckles) – we still make miscalculations even when we have very good intelligence. But I sort of disagree with you that opaqueness is a stabilizing force here.

(Chuckles.) I’m looking at General Casciano, who was my mentor as an intelligence officer and also taught in the Political Science Department at the Air Force Academy. John, if you have any thoughts in this regard – you know, I don’t know – I sort of come down differently than you do on this.

Q: Yeah, I think it’s something about which, you know, people can disagree and do disagree. But the lack of knowledge of the other side does have a deterrent effect. I mean, there’s no question about that. And I think we saw that during the nuclear age or the Cold War or whatever we want to call it.

MR. HEALEY: The issue that worries me a lot is that the Chinese don’t have a good way to connect their technical level with the political level. The technical level, their interagency is actually really good at talking amongst themselves. But this mirrors what we’ve seen on – in real world physical events, after the EP-3 crash. You know, we – our senior level calls their senior level and they either don’t take the call or they are not – they can’t find out within their structure what’s happening.

And it is – with cyber in a conflict so unstable – and here one of the other main possible adversaries doesn’t have good positive political control, or can’t get information about what’s happening at the technical level up to the Communist Party level. When you add that instability to this lack of internal transparency even within China – Russia’s much better because they’re like us – that is really unstable.

And OK. And let’s go up here to the front and then Clay (ph) and then Frank (ph) and then Sara (ph).

Q: Ali Durkin from the Medill News Service. How do we deal with balancing the right to privacy with cybersecurity?

MR. MULVILLE: Wow, that’s –

MR. : No, I’ll just take that one.


MR. : (Chuckles).

MR. MULVILLE: That’s a – that’s the million-dollar question and I think every home consumer feels strongly about that. Right?

A quick example: You know, when Google came out and said they were going to now kind of track things, certainly my wife and other relatives I know had a strong opinion against it. I think, like most things that happen in life, the dollar kind of tells the story, and the desire – what people want – kind of speaks with a voice. It may be indirect, but I think that is by and large kind of steering the privacy of what happens, right? If you don’t – if you don’t like what Google’s doing, you go to Bing – right, you use other capabilities. And that is a voice, for good or bad. And I’m talking consumer.

But the consumer from a privacy perspective, I think, has a lot of weight in the fact that they choose what to buy. And those are determining factors, in some ways. Now, you know there are different technologies that we can deploy that do certain things and track users. And even at private companies they employ some of those as part of the employee acknowledgement. But even different countries – you know, Germany’s a lot more stringent in terms of who does what, can you take pictures of people, are you being videoed, right. All those things. Different countries have different levels.

I think America’s a fairly open society in general, especially on the Internet use. I think the economy drives that. And I think that as other companies try to, perhaps, lock that down for general consumer use, that’s going to be a real general challenge. And that I think is the metric for how we balance it.

I’m not so sure there’s direct legislation that I would see coming and maybe you guys have different opinions on the locking down – maybe 10 years from now or 20, but not immediately, I don’t see it.

MR. MARTEL: I also think that – so balancing cybersecurity with privacy – you need to, like I mentioned earlier, bake it in; it needs to be there. You need to account for this. This is something that you need to have, especially as you’re going forward with the cloud and Web 2.0. But then you have your social media, which – I mean, you can put all the security in the world in, but it’s not going to keep the users from posting crazy pictures on Facebook and sharing, or disabling certain types of security because they want a feature; they want a function; they want to be able to do something else.

So there needs to be – the security needs to be there. But it’s ultimately – it’s a pressure point based on the different consumers.

MR. HEALEY: And also I think – I think cybersecurity itself is – it’s such a limited paradigm.

For example, look at what we’re talking about right now in this instability. Most of the issues we’re talking about that would help make it more stable are the kinds of technologies, norms, practices that least affect privacy. For example, if we just set a global goal: zero botnets. Let’s just get rid of them. Just like we said zero nuclear weapons, you know. Let’s just get rid of botnets. Let’s just reduce it as much as we can.

Those tend not to be the technologies that most impinge on you and me and us at home. If we look at it like Greg mentions, as an ecosystem, or – my preference – as an environment, it takes it away from the security-versus-privacy debate and gives us other ways to think about it. Because every time we hear General Alexander say security, we’re all going to be thinking, like hell. And the more we can shift out of that security-versus-privacy frame and give ourselves something else, a different narrative to think about, the better.

Another one I like: sustainable cyberspace. Everything that we could get out of a sustainable cyberspace, we also get out of secure, but it gives us a different mindset to think about it. A sustainable cyberspace is likely to be a stable one.

OK. Clay and then Frank and then the woman here and the red tie and then – OK. So – (chuckles) – I’m trying to keep it straight.

Q: Hi. Clay Wilson, University of Maryland – University College. I’d like to ask about cooperation, international cooperation. How do you get it?

I think back to the Kyoto Protocol, where the United States had a chance to cooperate with the rest of the world. And we had different priorities perhaps than other parts of the world. And so I – that kind of challenges your issue about pollution. What are the advantages and disadvantages for each one of the players? And that may determine how you get cooperation. If you think about health, then maybe the world has something more in common and you get more cooperation.

I would say economic priorities are different for different parts of the world. I don’t want to pick on any country, but let’s take India as an example – they’re interested in growth. China – they’re interested in growth. So to them, the free flow of intellectual property is probably not such a bad thing to lose control of because it allows economic growth, which fits their particular priorities.

MR. HEALEY: And – we’ve got a lot of people waiting, Clay, so –

Q: OK. All right. So I just want to ask that question. How do you foster cooperation?

MR. RATTRAY: So I just want to sort of reinforce Clay’s notion. I had a very interesting discussion about a month ago where two former directors of the National Security Agency were sort of beating the drum about this being the largest wealth transfer in the history of humankind, you know, as a result of cyber-espionage.

And Richard Cooper, who is a very well-known international economist at Harvard, stood up and said, the U.S. economy was fundamentally, you know, built on the back of stealing industrial technology from England, right – (laughs) – this is a natural part of how nations compete. And my personal take is, the cyber-espionage problem is an espionage and counterintelligence problem. It’s a very difficult one. The upside of it is I heard General Hayden say this is the golden age of SIGINT, right. So – (laughter) – to the first point, there is a lot of international dialogue and global dialogue.

I guess I’ll boil down to one point. There is extensive daily global collaboration on the security and health of the network.

If you look at how ISPs work together to avoid disruption when, you know, routes are accidentally injected into the Border Gateway Protocol that manages where Internet traffic goes – (snaps fingers) – they respond very quickly. And I’ll tell you, the guys in the United States know how to call the guys in Pakistan when the wrong YouTube route is put in there. Those things only last for hours. The collaboration actually occurs subgovernmentally, and it’s very strong. The CERT network, which is kind of in between, is also very strong. So we need to figure out at the intergovernmental layer how to leverage that nongovernmental collaboration.
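The YouTube incident referenced here turns on longest-prefix matching: IP forwarding prefers the most specific route, so a bogus, more-specific prefix announced into BGP captures traffic covered by a legitimate, broader one. A minimal Python sketch of that selection logic – the prefixes are those reported from the 2008 incident, but the route table and function are illustrative, not any panelist's material:

```python
# Illustrative sketch of the 2008 Pakistan Telecom / YouTube incident.
# IP forwarding uses longest-prefix match, so a bogus, more-specific /24
# announced into BGP beats YouTube's legitimate /22 and captures traffic.
import ipaddress

# (prefix, announcing origin) pairs a router might have learned via BGP.
routes = [
    (ipaddress.ip_network("208.65.152.0/22"), "YouTube (AS36561)"),           # legitimate
    (ipaddress.ip_network("208.65.153.0/24"), "Pakistan Telecom (AS17557)"),  # hijack
]

def best_route(dest: str) -> str:
    """Return the origin of the longest-prefix match for a destination IP."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, origin) for net, origin in routes if addr in net]
    # The most specific prefix (largest prefixlen) wins.
    net, origin = max(matches, key=lambda m: m[0].prefixlen)
    return origin

print(best_route("208.65.153.238"))  # hijacked: the bogus /24 covers this address
print(best_route("208.65.152.1"))    # unaffected: only the legitimate /22 covers it
```

As reported at the time, the fix worked the same way in reverse – upstream providers filtered the bogus /24 and YouTube announced still-more-specific routes – which is exactly the kind of subgovernmental ISP-to-ISP cooperation being described.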

MR. HEALEY: OK. If our colleague Bill Woodcock was here, he would come back and say, you know, when it comes down to it, the government has very few direct levers to deal with some – you know, to some of these cyberattacks, and it’s the private sector that has the direct levers.

Frank. And then sir, and then you and then in the back.

Q: I just want to go to the risk management point. It seems to me that what Greg said about collaboration is certainly right, and maybe the health model. But you can’t leave out the technology part. And one of the things I think that happens in discussions like this is we don’t hear what technology parts you’d really like. I mean, I keep hoping Cisco will say, well, this is what we’d like to see. And instead I heard, I don’t have all the answers. (Laughter.)

There’s some good work that’s been done with some of the think tanks. MITRE in particular has got some lists of things that can be done, and some others. And I would just love to hear what you guys think about issues like diversity or nonpersistence or, you know, least privilege. I mean, can they really work at scale? Because that’s the question that Jay points out.

The scale issue – and I think somebody else pointed out the – I think you did, sir – with respect to forensics. We’ve got about a hundred – (inaudible) – forensics guys out there; it’s not going to handle the whole Internet, right? So is there a technology scaling-up that’s possible if we did, quote, the Manhattan Project? That’s sort of point one.

And then just a common point: With respect to deterrence – again, in these kind of discussions, one of the things that tends to happen is it’s sort of cyber-on-cyber; I don’t think that deterrence really depends on cyber-on-cyber. And I’d just like to get a comment on that.

MR. HEALEY: And we’ve got about 10 minutes, so let’s try and make sure that we get through – (inaudible).

MR. MULVILLE: I’ll say a couple things real quick, if I can. A couple things: you know, Cisco tends to partner with a lot of companies for different reasons. And we continue to build technologies that are, you know, much more focused on the network and what’s happening and the intrusions and all those types of things. So it’s not that we’re not building these things up; it’s not that the large integrators aren’t coming up with different solutions, right? And there are a lot of companies out there that have point products.

But again, those only address a sliver of the problem. And there’s really no company that does end-to-end cybersecurity that has the 90-percent answer – or even a hundred-percent answer. So as that tends to mature, right, there are new things that we can use to deter and address the advanced persistent threat – or portions of it – what can we do.

But there are also concerns: even if we could shut down a botnet, right, or deter botnets to 90 percent, wouldn’t some other mechanism pop up that is a cybersecurity threat? Right? I mean, you know, botnets are not everything, right? The advanced persistent threat is not everything. There are many different avenues that are yet to be discovered – just, right, the Macintosh, right – or the Apple; they haven’t even gotten to the point where they’re really –

MR. : Or a supply chain.

MR. MULVILLE: Yeah, a supply chain, right? It’s a – it’s a broad spectrum of things that Cisco certainly plays in, in a lot of ways. And I – even on the supply-chain security of how things are manufactured – and you can validate that, right? And we work with federal agencies all the time to try to make sure that they understand what’s happening and what we can do. But those are – again, there’s these little checkboxes you can go through, but there’s a lot of unknowns in cybersecurity that we haven’t seen yet.

So I’ll stop there – (inaudible) – Greg, if you want to – or Kris.

MR. MARTEL: No, I’ll pass on this one.

MR. HEALEY: One, I think it’s going to be difficult to move away from the instability we’ve heard about here by working at the edges. It can get done, but it takes a long time. And the threats tend to evolve as your changes at the edge evolve. I’m most interested in what can happen at the core – you know, at the ISP level, the cloud, these places where you can really achieve effect at scale because you’re working the main channels where the traffic flows.

On deterrence – I’ll pick up your point there – you know, a point that Greg always makes is, when you talk about cyber deterrence, why do we always assume it’s the United States that’s going to be doing the deterring? It’s at least as likely that other nations are going to be trying to deter us. And I’m finding it – I’m still working my way through this. If you are Iran and you wanted to deter the United States from doing another Stuxnet – thinking that through is going to help us think about deterrence. Because we know the U.S. can use other kinds of kinetic force as part of our deterrence posture, we also need to remember that other nations can use kinetic force as part of their cyberdeterrence posture as well.

Sir with the red tie, and then ma’am – the miss, here.

Q: David Hoffman, from The Washington Post. I’d just like to ask a question about offense, especially if all this instability is partially coming from offense. Doesn’t the ambiguity of our own efforts, you know – Stuxnet, or as James said, Hiroshima – I mean, aren’t we contributing somewhat to this instability? And what can we do about it? Should we want a more clear and transparent offensive program, or is that even possible? Or is that perhaps undesirable in the world we live in?

MR. RATTRAY: I’ll give a couple of macro reactions. First, a point that probably resonates with Frank and others – you know, the Stuxnet use was a counterproliferation operation; it wasn’t a cyberattack. It was designed to preserve strategic stability in the Persian Gulf, right? So you get in a cybersecurity dialogue and, you know, it’s an act that causes instability in this realm. But it served greater national security purposes – and if you read Sanger’s work, apparently – you know, taken at face value – the president wrestled with this, right?

So you know, you make this tradeoff – you make the main decisions about, you know, accepting more instability in cyberspace in order to achieve an objective, which in this case was to try to keep Iran from going nuclear sooner, and the Israelis from bombing them, and a war in the Persian Gulf, right? So that, I think, is an important aspect for cybersecurity guys to remember: we’re not the be-all, end-all of sort of these bigger strategic stability discussions.

My sense is – and you know, I sort of pushed this from within the government when I was on the National Security Council, continue to push it and started this organization CCSA in part – is I do think that this is an issue that’s risen high enough in the nation’s concerns that it needs to be a public dialogue, right? You know, and that – yes, like with everything – and there’s secrets around nuclear weapons and, you know, space and space control capabilities. And there’s going to be sensitivities, but we need to have a national dialogue about the tradeoffs here. And again, does the U.S. want to be perceived as militarizing cyberspace, or do we want to be seen as the promulgators of global health of cyberspace in a larger national security sense?

MR. HEALEY: And I would say – I mean, to me it’s a very short answer: Yes.

MR. : Yes, what?

MR. HEALEY: That – (laughter) – yes, the U.S. offensive posture, and our not talking about it and hinting at it, is adding to the instability, without a doubt. And this huge decision that General Hayden called a Rubicon – you know, it wasn’t made in hearing from the people that say, well, what about defense; defense is important, too. I mean, it was made by a small coterie of people as part of a covert action. The president would have been hearing a very small, very limited number of voices. And according to the Sanger article, they thought it was ironic that here we are attacking when, you know, defense is such a big concern.

Now, I hope there is actually more concern and it wasn’t mere irony that they felt. But a second Rubicon was crossed also – and we’re – I’m going to have a blog coming out in the next day or two on this. But this also appears to be the first truly autonomous weapon. After it was launched, it doesn’t appear that it checked back to say, should I start breaking stuff? So that is a huge step – two huge steps in modern warfare, in the history of warfare, that were made behind closed doors as a covert action. And I am not comfortable with that.

This was a small step for covert action. Both of these things made a lot of sense – crossing both of those Rubicons as a covert action. But for the future of cyberconflict in this incredibly unstable domain, those weren’t difficult things to cross for a small number of people – people that I really respect, real patriots – but that’s very dangerous.

Last question, since I was soap-boxing. Ma’am. I want to get some – (inaudible).

Q: Hi. My name is Jeanne Destro; I write a blog called Digital Media Roundup. And my question is about mobile devices that people take to work. And from what I’ve heard, there was a cybersecurity hearing on Capitol Hill a couple of months ago. And the woman from McAfee mentioned that that’s another attack vector, and people aren’t really taking it all that seriously. There’s more of a movement towards, bring your own device. I was just wondering, in terms of national security and best practices, if it might be better and cheaper for companies and the government just to give people their own Blackberry again or – maybe not a Blackberry, but their own device – as opposed to allowing people’s personal devices into the network. Any thoughts on that?

MR. MARTEL: Sure. So BYOD is a big thing. In fact, Obama signed an executive order in April talking about mobility and moving to the cloud, and the capabilities are supposed to be in place for all federal agencies within the next year. But BYOD depends on what environment you’re looking at. So you’ve got your intelligence community, you’ve got DOD, you’ve got some civilian-type agencies, and some of them are already doing that – they actually already have their own BYOD policy. The important thing is to actually define what your capabilities are and what you’re going to allow on the BYOD.

And again, it’s looking at a different paradigm of how you’re going to secure it. So don’t look at it that way – remember I said security is not this monolithic thing? It’s not a laptop anymore, where it can do everything – you know, you’ve got your USB ports, you’ve got all this capability, you have a full operating system on a laptop. These are mobile devices: iPads, iPhones, BlackBerrys.

So think of them – you can lock them down in terms of roles. So say you’re in the intelligence community and you want a mobile device instead of carrying classified papers around, and you just want to use it as a presentation device, or a kind of device that’s like a manual. You can disable all of the wireless capabilities. It’s still a mobile device, by definition, but you’re using it to bring your electronic paperwork and documentation around. It’s a lot easier than carrying massive stacks of papers.

If you want to say, OK, I want to allow my people to do email with this, that’s the role. You disable everything else from your MDM solution. So you’re going to say, these are the applications that are allowed on there, and I want to allow them to do email – that’s their capability.

And then another role could be full desktop, right. So that means you don’t have any data, anything that’s stored there; you’re just accessing the capability. So you need to have virtual desktop capability back to wherever it is that you’re working.

So you can do this. It’s a matter of how you’re going to secure it. It’s easier to do security around a role versus a device, because otherwise you’re going to have to have security specific to all these different devices. So really understand what the role is that you’re trying to fill with a device, and then you can lock down the security based on that.

MR. MULVILLE: I’ll just say, initially – you know, a year or two ago – I think companies started to kind of go down the path of, hey, let’s see if we can lock down the Android or the iPad and stuff like that. But I think a lot of companies have given that up, and the customers we’re talking to, before they allow a bring-your-own-device, a BYOD, onsite, they’re really trying to tackle that identity-access, role-based authentication model. If they have that, then I think there’s greater assurance. It kind of echoes what Kris is saying, but we’re seeing that with customers in DOD and other places: once they kind of get that fundamental baseline down, then they’ll allow these devices to come onsite.

MR. MARTEL: And you can have it where they supply you the device, or you can BYOD – bring your own device. But if they do, users are going to have to sign and understand that there are restrictions based on their capabilities. So if there’s data on there – even email – and the device is lost, they’re going to wipe the device remotely and you’re going to lose everything on it; for example, that could be a policy. So it’s a matter of, you know, getting your policy in place, having your MDM solution and securing by role.

MR. HEALEY: OK. Thank you. My apologies, we had a lot more questions – especially the gentleman in the back I was hoping to call on. My apologies. We are out of time.

The last administrative announcement is that we will be posting this online, both to the CCSA website as well as, I believe, the Atlantic Council website. And the report should probably actually already be up. We didn’t have enough copies, and we’ll be having a press release coming out that will have some more of this information.

So please, we look forward to seeing you at additional Atlantic Council events and future Cyber Conflict Studies Association events. If you have any questions about the report as it comes out, our contact information is all over it, both in the report and at the Atlantic Council. Thank you very much for your participation and your questions, and we really look forward to seeing you again at future events. Thank you very much. Thank you to the panelists. (Applause.)