The Atlantic Council’s Cyber 9/12 Strategy Challenge, a one-of-a-kind cyber policy and strategy competition, is an intensive and exhausting process for student competitors. Before they set off on this demanding journey, these competitors (and their academic advisors) may ask themselves why this competition is a good use of their effort, how it complements their education, why it’s important to their future career, and why a diverse group of organizers and supporters are passionate about helping them succeed. Before dedicating themselves to the competition, these competitors and their advisers deserve to understand (with apologies to film director Frank Capra) “Why we fight.”
I’ve had the privilege to coach teams from Royal Holloway University of London in three Cyber 9/12 Strategy Challenges: London 2018, where the team earned first place; Geneva 2019, where the team was a semi-finalist; and London 2020, where the team earned third place. From this vantage point, I have watched my student competitors and others as they grapple with the competition. I have also seen these same students learn a number of valuable lessons.
Throughout the challenge, teams simulate analyzing and synthesizing information about cybersecurity threats and then briefing senior government officials with findings and policy recommendations—all under high pressure. The competitors delve into information assembled into three separate briefing packs, each called an Intelligence Report, that include real and fictional research, online media, private sector threat analysis, government intelligence documents, and even a television news update.
The competition forces students to use a variety of disciplines they might not otherwise employ on their academic journey. Teams must look beyond their individual domain specialty, whether it be computer science, law, cryptography, political science, risk management, or any of the wide array of educational backgrounds represented. The competitors must be prepared to justify their recommendations within the emerging framework of international law, which increasingly pervades state decision-making on cyber operations. The competitors assess the risks and potential impacts of hostile cyber operations and countermeasures; then, the competitors articulate their assessments to expert judges playing the roles of decision-makers in government.
Throughout the competition, teams are encouraged to think holistically about the needs of an entire society, deliberate on how to prioritize domestic and international responses to the crisis, and consider non-cyber impacts and responses. The competitors’ chances of success in the competition increase tremendously if they exhibit an appreciation of the practicalities needed to implement their recommendations—like the time and resources needed to adopt new laws or procedures, to commission new offensive cyber programs, to task or redeploy limited civil service resources, to leverage support from non-state actors such as the community of CISOs and security vendors, to persuade international partners to participate in multilateral action, or any number of other responses they wish to suggest.
Teams are forced to confront the reality of decision-making in an atmosphere of less-than-complete, potentially inaccurate, and sometimes conflicting information. They must sift through messy and diverse sources of intelligence and synthesize a picture of threats that can be explained to non-expert decision-makers in minutes—all while being careful to assign appropriate degrees of confidence to different elements of their report. They must learn the difference between acting as an honest broker of available evidence (which is the job of an analyst) and acting as an advocate for a specific outcome (which is not).
The best competitors learn and demonstrate good teamwork skills. They face difficult choices in how to allocate tasks among themselves. The time pressure of the competition begins at a relaxed pace, with weeks available to produce and deliver Round 1 submissions. Those selected to advance to Round 2 have a single overnight window to absorb significant new intelligence and revise their view of the situation. The very few teams who advance to the Final Round face the highest-pressure component: they are given only a few minutes in which to absorb critical additional intelligence before briefing the judges who simulate government leaders—judges who have often themselves served in the senior civil service roles the students address.
The competition itself is a labor of love for a large group of volunteers from industry, government, and academia. The organizers and volunteers put in a considerable amount of effort to develop a competition’s intelligence pack and recruit and coordinate the expert judges who simulate decision-makers.
Each competition reflects local values, methods, and standards. Judges in London simulate UK government officials; judges in Washington, DC simulate US government officials; and judges in Geneva simulate a multinational “task force of European leaders” including heads of government and defense. Competitors must be prepared to make recommendations that are most appropriate for the relevant environment.
Of course, no competition is perfect and no simulation is perfect. For that matter, the process being simulated is itself far from perfect. Judges and competition officials must eventually rank teams. Despite tremendous effort from organizers and judges, reasonable people can argue about aspects of the competition process as well as the results.
But I find that the students who take the most from the competition are those who embrace it for the learning opportunity it represents. I’ve watched students climb and conquer steep learning curves. I’ve seen cryptography students gain a better understanding of politics. I’ve watched students of law and international relations learn to appreciate the intimate practicalities of cyber operations. I’ve seen computer science students learn how international law influences operations. And I’ve watched as all of them learn more about how the decision-making “sausage” is made.
Students interested in cybersecurity learn valuable lessons from the Cyber 9/12 Challenges that they are unlikely to encounter anywhere else in academia. Competitors finish the competition better prepared to communicate their ideas to a wide variety of influencers and decision-makers. They all finish better prepared to address the complex threat-filled environment presented by modern cybersecurity. They all finish better prepared to contribute new ideas and new thinking that may someday help to reduce the risks of unnecessary conflict carried out through the cyber domain.
These are all good reasons to compete.
And in the context of this competitive simulation, this is, I believe, why we fight.
Robert Carolina (BA, University of Dayton; JD, Georgetown University Law Center; LL.M-Intl Business Law, London School of Economics) began teaching legal and regulatory aspects of cybersecurity at Royal Holloway University of London in the 1990s. He is the author of the Law and Regulation Knowledge Area of CyBOK: The Cybersecurity Body of Knowledge. This is a revised version of an article originally written for a Royal Holloway newsletter. Correspondence to: [email protected]