The 5×5—Baseball and cybersecurity: Stealing insights from America’s pastime
Whether you have played, watched, hated, or never heard of baseball, lessons from the sport can be applied to many things in life—including cybersecurity.
As it happens, cybersecurity is more relevant to Major League Baseball than you might think. In 2015, The New York Times reported that the Federal Bureau of Investigation was investigating St. Louis Cardinals employees for gaining unauthorized access to the computer networks of the Houston Astros to steal information ranging from scouting reports to internal trade discussions. The incident represented the first known act of team-on-team cyber espionage in professional sports history. While teams hacking each other for advantage on the field may be a rarity, 93 percent of sports franchises have suffered at least one cybersecurity attack in the past three years, according to cybersecurity company Acronis.
Cyber Statecraft Initiative experts go 5×5 to draw parallels between America’s pastime and today’s cybersecurity issues.
#1 Baseball is a game of statistics. What cybersecurity statistic do you think more people should be paying attention to?
Jason Healey, nonresident senior fellow, Cyber Statecraft Initiative; senior research scholar, Columbia University School of International and Public Affairs: “Richard Bejtlich had this right. The main statistics we use measure how fast we throw the ball or how quickly we can run the ninety feet from home to first. None of this tells defenders whether they’re actually winning the game. How many adversaries, right now, are in your network? That’s the actual score. CrowdStrike has operationalized this into specific goals for how long it should take to detect an adversary, identify the malicious activity, and eject them.”
Trey Herr, director, Cyber Statecraft Initiative: “Dwell time—there’s plenty of variation in how different vendors measure this, but the perception that a cyberattack begins when it’s discovered is one of those corrosive assumptions that hold back deeper policy and technology design reform.”
Christopher Porter, nonresident senior fellow, Cyber Statecraft Initiative [Porter’s responses do not represent the opinion of the US government]: “Even the most sophisticated cyber threat groups compromise the overwhelming majority of their targets—let’s say 95 percent as a rough estimate—without using zero days or even necessarily bespoke tools. They live off the land, use social engineering to get a beachhead, or repackage criminal tools. The traditional push to improve intelligence-sharing by pushing indicators won’t keep these groups out. It’s necessary to do a better job there, but we shouldn’t oversell the likely benefits. Indicator sharing is table stakes, not a solution, in this environment.”
Paul Rosenzweig, resident senior fellow, national security & cybersecurity, R Street; founder, Red Branch Consulting: “The lack of statistics. We don’t collect data like in baseball; hence the Solarium call for a Bureau of Cyber Statistics. If you insist on an answer: median dwell time of an intruder in a system before discovery. If it is going down, we are improving.”
Josephine Wolff, nonresident fellow, Cyber Statecraft Initiative; assistant professor of cybersecurity policy, Tufts University Fletcher School of Law and Diplomacy: “Just as baseball players used to be measured in runs-batted-in (RBIs) and home runs, we’ve been weighting the number of breaches and other security incidents firms suffer too heavily as a barometer of their cybersecurity. I’d argue for paying more attention to recovery time and annual losses, the on-base plus slugging (OPS) of cybersecurity if you will, because those often offer a much more nuanced yardstick for what it means to win games in cyberspace.”
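Several of the answers above converge on dwell time as the score that matters. As a purely illustrative sketch (the three incident records and their dates are invented, not drawn from any real breach data), the metric reduces to a median over compromise-to-discovery gaps:

```python
from datetime import date
from statistics import median

# Hypothetical incident records: (initial compromise, discovery).
# Real figures would come from incident-response reports.
incidents = [
    (date(2019, 1, 10), date(2019, 4, 2)),
    (date(2019, 3, 5), date(2019, 3, 20)),
    (date(2019, 6, 1), date(2020, 1, 15)),
]

# Dwell time: days between initial compromise and discovery.
dwell_days = [(found - start).days for start, found in incidents]

# Median rather than mean, because breach data is full of extreme
# outliers (one multi-year intrusion would swamp an average).
print(median(dwell_days))  # prints 82
```

A falling median over successive years would be the improvement signal Rosenzweig describes; a mean would be dragged around by a single long-running intrusion.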
#2 You might not hit a home run during every at bat, but what is the “walk”—unsexy, long unacknowledged, but useful for the team—of cyber policy?
Healey: “For most of us, it’s patching. For technology companies and those who develop software, it is writing secure code to begin with.”
Herr: “Asking the question ‘has this been done/proposed/tried before?’”
Porter: “Diplomatic engagement. There’s little doubt in my mind that bilateral negotiations, trust-building, “gentlemen’s agreements,” and more formal arrangements are among the most effective ways of reducing cyber threats to the United States and its allies. It doesn’t help sell products or justify big budgets, but it works.”
Rosenzweig: “Two-factor authentication; Domain-based Message Authentication, Reporting, and Conformance (DMARC) for detection; Hypertext Transfer Protocol Secure (HTTPS) for web security.”
Wolff: “There’s a lot we can learn from the unforced mental errors by firms—the decisions that exacerbate already bad security situations. For instance, Uber failing to report its 2016 data breach, or the Equifax executive who dumped company stock right before their 2017 breach was announced. Security incidents can’t be eliminated, but these cases highlight the critical importance of how firms respond to them and whether they acquit themselves admirably in the wake of a crisis—or instead make everything worse.”
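DMARC, one of the controls Rosenzweig lists, is among the cheapest of these “walks” to take: it deploys as a single DNS TXT record. A minimal illustrative record (the domain and reporting address below are placeholders, not a real deployment) might look like:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Here `p=quarantine` tells receiving mail servers to treat messages that fail SPF/DKIM alignment as suspect, and `rua=` names the mailbox for aggregate failure reports; `p=reject` is the stricter end state most organizations work toward.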
#3 Strikeouts and errors are inevitable, but hustle and learning can never take a day off. What cybersecurity failure of the last five years holds the most valuable lessons?
Healey: “The United States has learned the hard-won lesson that talking about deterrence is meaningless when other states use cyberspace to undermine our security; only hard decisions to take meaningful action matter. It is very possible, however, that we will over-learn this lesson—forgetting that the United States has itself long used cyberspace to undermine others’ security—and escalate into yet more dangerous kinds of cyber conflict.”
Herr: “The US Office of Personnel Management (OPM) breach—it reminded us how complexity breeds insecurity and how valuable data becomes once aggregated and correlated.”
Porter: “For years, we’ve focused on protecting silos—the government protecting its own networks and eventually a limited number of well-defined industries we deem ‘critical.’ But adversaries keep attacking a wide variety of targets, often specifically because they believe attacking those targets will weaken the United States. So, if they think their activities will hurt us, shouldn’t we be interested in stopping them even if the target doesn’t fall neatly into a predefined bucket? To mix sports metaphors, play man-to-man instead of zone. Remember: Even election infrastructure wasn’t considered critical until after 2016. Looking back, we don’t have a good track record of picking winners in terms of anticipating what will be considered a critical target.”
Wolff: “Collecting better data on which different recommended best practices and security controls are most effective against different types of threats—not flashy, not likely to make headlines, but crucial for trying to develop any kind of reliable, repeatable progress over time.”
Rosenzweig: “Ransomware—learn to back up.”
#4 Everyone is talking about China, Russia, Iran, and North Korea as top US adversaries in cyberspace. Beside those four, what country/entity is the biggest wildcard to look out for over the next five years?
Healey: “The US government itself. The United States has been increasingly perceived as a rogue, thumbing its nose at global organizations and norms. When tied to a more aggressive strategy where the best defense is a good offense, the next few years may be especially dangerous and destabilizing.”
Herr: “There are a handful of well-organized and highly capable groups out there gaining access to sensitive information solely to leak it—think Phineas Phisher, or whoever was behind the Panama Papers leak. The United States hasn’t seen much attention from one of these groups yet, but that could change quickly and painfully for the national security community.”
Porter: “Unlike conventional weapons development, which requires a certain scale of investment not available even to most rich (per-capita) countries, cyber programs can be built on a scale of millions of dollars and capable of doing damage that even superpowers would care about. That’s not to say that they’re on par with those world-class programs, but they’re able to get in the game. So countries with young, educated populations and a sufficient level of information technology (IT) development, with training classes available in their native language, and a strong investment in security services can have important cyber programs if they want them. Vietnam fits the bill, as do a number of countries in Latin America. These countries could end up as allies, adversaries, and every mix in between. I think the biggest wildcard will be that even some countries we think of as close allies today will increasingly conduct operations targeting the United States and its interests—it’s going to get a lot messier out there as countries balance their security and economic interests in less defined alliances.”
Rosenzweig: “Non-state actors. Within five years we will see a catastrophic cyber failure that will come from the deliberate action of irrational actors.”
Wolff: “I think the biggest wildcard may be Ireland because so many US-headquartered multinational tech firms have their European headquarters there, and their data protection authority hasn’t even started to publish General Data Protection Regulation (GDPR) orders yet. So, while they’re certainly not an ‘adversary’ in any military or traditional sense, the Irish government is likely to play a major role in shaping the US tech sector’s data protection practices in the future.”
#5 Some baseball purists claim the designated hitter forever tarnished a beautiful game. Cybersecurity has experienced a few rule changes with negative and unintended consequences: Which has been most impactful?
Healey: “Cybersecurity has the reverse problem: It wasn’t pure and then ruined. It was ruined and now we’re trying to fix it. The Internet was flawed to begin with, by design. Security was left out (at least in small part because the National Security Agency didn’t want widespread encryption) so much of cybersecurity over the past decades has been trying to add Band-Aid rule changes. Now, in the words of Beau Woods, it’s Band-Aids all the way down. This is why many researchers argue for a new Internet 2.0 that is secure from the start.”
Herr: “US intelligence community efforts to standardize intentionally weakened or NOBUS (‘nobody but us’) encryption. They’ve enabled a decade of attacks over the web, undermined trust in US companies and standards processes, and given the contemporary National Security Agency a generational trust deficit to overcome.”
Porter: “Cyber tools were born as tools of espionage, used by secretive intelligence agencies. That still happens obviously, but they’re increasingly used by militaries. A spy agency can justify its budget by just being ready to do its job of collecting intelligence if called upon, but a military needs to show action for all that investment. The incentives are aligned for a lot more aggression in cyberspace, including targeting civilians, and that worries me. It’s a risk when the superpowers undergo this development, but even more worrying when regional rivals start arming themselves in this way.”
Rosenzweig: “The increased role of the US Cyber Command (USCYBERCOM) and the resulting militarization of the problem or, conversely, the failure of the Department of Homeland Security/Cybersecurity and Infrastructure Security Agency to adequately take on the civilian protective/preventative role.”
Wolff: “No question, the GDPR has been the most significant set of rule changes, and we’re still watching the consequences, both positive and negative, play out. One of the unintended consequences I sometimes worry about with the GDPR is whether it will move the needle even further in the direction of equating privacy and data protection with notice and consent click-through agreements that nobody reads.”
Simon Handler is the assistant director of the Atlantic Council’s Cyber Statecraft Initiative under the Scowcroft Center for Strategy and Security, focused on the nexus of geopolitics and international security with cyberspace. He is a former special assistant in the United States Senate. Follow him on Twitter @SimonPHandler.
The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.