The 5×5—The cybersecurity implications of artificial intelligence
The arrival of ChatGPT, a chat interface built atop OpenAI’s GPT-3.5 model, in November 2022 provoked a frenzy of interest and activity in artificial intelligence (AI) from consumers, investors, corporate leaders and policymakers alike. Its uncanny conversational abilities and, later, its ability to write code stoked the collective imagination, as well as predictions about its likely impacts and integration into myriad technology systems and tasks.
The history of the field of AI stretches back to the 1950s, and narrower machine learning models have been solving problems in prediction and analysis for nearly two decades. In fact, these models are already embedded in the cybersecurity lifecycle, most prominently in threat monitoring and detection. Yet the emergence of the current generation of generative AI, powered by large language models, is producing capabilities markedly different from those of previous deep learning systems. Researchers are only beginning to explore the potential uses of generative AI systems in cybersecurity, as well as the potential threats arising from malign use or from cyberattacks against generative AI systems themselves.
With cybersecurity featuring prominently in the voluntary commitments recently announced by leading AI companies, a sweeping Executive Order on AI expected next week, and those same companies allowing their products to be used to construct increasingly autonomous systems, a discussion about the intersection of generative AI and cybersecurity could not be timelier. To that end, we assembled a group with diverse perspectives to discuss how cybersecurity and artificial intelligence intersect.
#1 AI hype has risen and fallen in cycles with breakthrough achievements and paradigm shifts. How do large language models (LLMs), and the associated hype wave, compare to previous AI paradigms?
Harriet Farlow, chief executive officer and founder, Mileva Security Labs; PhD candidate, UNSW Canberra:
“In my opinion, the excitement around large language models (LLMs) is similar [to excitement around past paradigm shifts] in that it showcases remarkable advancements in AI capabilities. It differs in that LLMs are significantly more powerful than the AI technologies of previous hype cycles. The concern I have with this hype—and I believe AI in general is already over-hyped—is that it gives the impression to non-practitioners that LLMs are the primary embodiment of AI. In reality, the natural language processing of LLMs is just one aspect of the myriad capabilities of AI, with other significant capabilities including computer vision and signal processing. My worry is that rapid adoption of AI and increasing trust in these systems, combined with the lack of awareness that AI systems can be hacked, means there are many productionized AI systems that are vulnerable to adversarial attack.”
Tim Fist, fellow, technology & national security, Center for a New American Security:
“While people’s excitement may have a similar character to previous AI ‘booms,’ such as in the 1960s, LLMs and other similar model architectures have some technical properties that together suggest the consequences of the current boom will be, to put it lightly, further-reaching. These properties include task-agnostic learning, in-context learning, and scaling. Unlike the AI models of yore, LLMs have impressive task performance in many domains at once—writing code, solving math problems, verbal reasoning—rather than in one specific domain. Today’s ‘multimodal’ models are the next evolution of these capabilities, bringing the ability to understand and generate both natural language and images, with other modalities in the works. On top of their generality, once trained, LLMs can learn on the fly, allowing them to adapt to and perform reasonably well in novel contexts. LLMs and their multimodal cousins are AI architectures that can successfully leverage exponentially increasing amounts of computing power and data into greater and greater capabilities. This capacity means the basic recipe for more performance and generality is straightforward: just scale the inputs. This trend does not show any clear signs of slowing down.”
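For readers who want the quantitative version of Fist’s “just scale the inputs” point, the empirical scaling-law literature (for example, Kaplan et al., “Scaling Laws for Neural Language Models,” 2020) reports that a language model’s test loss falls roughly as a power law in model size, dataset size, and training compute. The sketch below is an editorial simplification of that reported relationship, not part of Fist’s answer; the constants are fitted quantities rather than universal laws.

```latex
% Simplified empirical scaling relations (after Kaplan et al., 2020).
% L = test loss; N = parameters; D = dataset size; C = training compute.
% N_c, D_c, C_c and the exponents \alpha_N, \alpha_D, \alpha_C are fitted constants.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

In other words, each constant-factor improvement in loss has so far required a multiplicative increase in inputs, which is why the current paradigm rewards whoever can keep scaling.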
Dan Guido, chief executive officer, Trail of Bits:
“It is both the same and different. Like the hype surrounding LLMs, prior hype cycles arose due to the promise of fundamentally new capabilities in artificial intelligence, although not all the promised effects materialized. What is different this time is that the results of AI are immediately available to consumers. Now, AI is doing things that people thought computers could never do, like write stories, tell jokes, draw, or write your high school essays. This has occurred due to both fundamental advances like the Transformer model and Sutton’s bitter lesson that AI becomes better with more computing power. We now have the computation to provide immense scale that was previously unachievable.”
Joshua Saxe, senior staff research scientist, Meta:
“The hype around LLMs rhymes with past hype cycles, but because AI is a real and substantive technology, each wave of hype does change security, even if less than AI boosters have anticipated. The hype wave of the 2010s fueled ideas that AI would fundamentally transform almost every aspect of cybersecurity practice, but, in fact, only disrupted security detection pipelines—for example, machine learning is now ubiquitous in malware and phishing detection pipelines. Similarly broad claims are being made about this current hype wave. Many of the imagined applications of LLMs will fall away, but as the bubble deflates we will see some genuinely new and load-bearing applications of LLMs within security.”
Helen Toner, director of strategy and foundational research grants, Center for Security and Emerging Technology, Georgetown University:
“I believe expectations are too high for what generative AI will be able to do this year or next. But on a slightly longer timeframe, I think the potential of the current deep learning-focused paradigm—LLMs being one of its many faces—is still building. The level of investment and talent going into LLMs and other types of deep learning far outstrips AI waves of previous decades, which is evidence for—and a driver of—this wave being different.”
#2 What potential applications of generative AI in cybersecurity most excite you? Which are over-hyped?
Farlow: “In my experience, most people still use the term ‘AI’ the way they would ‘magic.’ I find too many conversations about how AI should be used in cybersecurity are based on trying to replicate and multiply the human workforce using AI. This is a very hard problem to solve, as most AI technologies are not good at operating autonomously across a range of tasks, especially when there is ambiguity and context-dependence. However, AI technologies are very good at assisting in narrow tasks such as phishing and fraud detection, malware detection, and user and entity behavior analytics. My focus is less on AI for cybersecurity and more on transferring cybersecurity principles into the field of AI to understand and manage the AI attack surface; this is where I think there needs to be more investment.”
Fist: “I predict that most people, including myself, will be surprised about which specific generative AI-powered applications in cybersecurity end up being most important. The capabilities of today’s models suggest a few viable use cases. Proofs of concept exist for offensive tools that use the capabilities of state-of-the-art generative models (e.g., coding expertise, flexibility) to adapt to new environments and write novel attacks on the fly. Attackers could plausibly combine these capabilities with an ‘agentized’ architecture to allow for autonomous vulnerability discovery and attack campaigns. Spearphishing and social engineering attacks are other obvious use cases in the near term. A Center for a New American Security report lays out a few other examples in Section 3.1.2. One important question is whether these capabilities will disproportionately favor attackers or defenders. As of now, the relative ease of generation compared to detection suggests that detectors might not win the arms race.”
Guido: “To judge whether something is overhyped or underhyped, consider whether it is a sustaining innovation or a disruptive innovation. That is, are any fundamental barriers being broken that were not before? Currently overhyped areas of cybersecurity research include crafting exploits, identifying zero-day vulnerabilities, and creating novel strains of malware. Attackers can already do these things very well. While AI will accelerate these activities, it does not offer a fundamentally new capability. AI shines in providing scalability to tasks that previously required an infeasible amount of effort by trained humans, including continuous cybersecurity education (AI is infinitely patient), testing and specification development, and many varieties of security monitoring and analysis. In July, Trail of Bits described for the White House Office of Science and Technology Policy how these capabilities may affect national security.”
Saxe: “Much of what people claim around applications of generative AI in cybersecurity is not substantiated by the underlying capabilities of the technology. LLMs, which are the most important generative AI technology for security, have a few proven application areas: they are good at summarizing technical text (including code), they are good at classifying text (including code and cybersecurity-relevant text), and they are good at auto-completion. They are good at all of this even without task-specific training data. Applications that exploit these core competencies in LLMs, such as detecting spearphishing emails, identifying risky programming practices in code, or detecting exfiltration of sensitive data, are likely to succeed. Applications that imagine LLMs functioning as autonomous agents, solving hard program analysis problems, or configuring security systems are less likely to succeed.”
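To make the first of these proven competencies concrete, here is a minimal, hypothetical sketch (editorial, not drawn from Saxe’s answer) of an LLM used as a zero-shot classifier for suspected phishing emails. The call_llm callable is a placeholder for whatever chat-completion client a team actually uses, and the prompt and parsing are deliberately simplistic.

```python
# Editorial sketch: an LLM as a zero-shot phishing classifier.
# `call_llm` is a placeholder for any chat-completion client; it is not a real API.
from typing import Callable

PROMPT_TEMPLATE = (
    "You are a security analyst. Reply with exactly one word, PHISHING or BENIGN, "
    "for the email below.\n\nEmail:\n{email}\n"
)

def classify_email(email_text: str, call_llm: Callable[[str], str]) -> str:
    """Return a coarse triage label; treat it as a signal for analysts, not a verdict."""
    reply = call_llm(PROMPT_TEMPLATE.format(email=email_text)).strip().upper()
    return "PHISHING" if reply.startswith("PHISHING") else "BENIGN"

if __name__ == "__main__":
    # Stand-in model for demonstration only; swap in a real client in practice.
    fake_llm = lambda prompt: "PHISHING"
    print(classify_email("Your mailbox is full. Verify your password here.", fake_llm))
```

The notable property is that no task-specific training set is needed to stand up a first-pass detector, which illustrates why applications built on these core competencies face little friction, and why their error rates still need independent evaluation.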
Toner: “I am skeptical that deepfake videos are going to upend elections or destroy democracy. More generally, I think many applications are overhyped in terms of their likely effects in the very near term. Over the longer term, though—two-plus years from now—I think plenty of things are under-hyped. One is the possibility of mass spearphishing, highly individualized attacks at large scale. Another is the chance that generative AI could significantly expand the number of groups that are able to successfully hack critical infrastructure. I hope that I am wrong on both counts!”
#3 In what areas of generative AI and cybersecurity do you want to see significant research and development in the next five years?
Farlow: “While there is no denying that generative AI has garnered its fair share of hype, I cannot help but remain somewhat cynical about the singular focus on this technology. There is a vast landscape of AI advancements, including reinforcement learning, robotics, interpretable AI, and adversarial machine learning, that deserves equal attention. I find generative AI fascinating and exciting, but I also like to play devil’s advocate and note that the future of AI is not solely dependent on generative models. We should broaden our discussions to encompass the wider spectrum of AI research and its implications for various fields, as well as its security.”
Fist: “I am excited to see more research and development on AI-driven defenses, especially in the automated discovery and patching of vulnerabilities in AI models themselves. The recent paper ‘Universal and Transferable Adversarial Attacks on Aligned Language Models’ is a great example of this kind of work. This research suggests that discovering jailbreaks for open-source models like Llama is highly automatable and that these attacks transfer to closed-source models like GPT-4. This is an important problem to highlight, and it suggests that AI labs and cybersecurity researchers should work closely together to find vulnerabilities in models, including planned open-source models, and patch them before the models are widely deployed.”
Guido: “In July, Trail of Bits told the Commodity Futures Trading Commission that our top wishlist items are benchmarks and datasets to evaluate AI’s capability in cybersecurity, like the Netflix Prize but for cybersecurity and AI. Like the original ImageNet dataset, such benchmarks would help focus research efforts and drive innovation. The UK recently announced it was funding Trail of Bits to create one such benchmark. Second would be guides, tools, and libraries to help safely use the current generation of generative AI tools. Generative AI’s failure modes are different from those of traditional software and, to avoid a security catastrophe down the road, we should make it easy for developers to do the right thing. The field is progressing so rapidly that the most exciting research and development will likely involve tools that have not been created yet. Right now, most AI deployments implement AI as a feature of existing software. What is coming are new kinds of software where AI is the tool itself—something like an exact decompiler for any programming language, or an AI assistant that crafts specifications or tests for your code as you write.”
Saxe: “I think there are multiple threads here, each with its own risk/reward profile. The low-risk research and development work will be in weaving existing LLM capabilities into security tools and workflows to extract maximal value from what the models already offer. For example, it seems likely that XDR/EDR/SIEM tooling and workflows can be improved by LLM next-token prediction and LLM embeddings at every node in current security workflows, and that what lies ahead is incremental work in iteratively figuring out how. On the higher-risk end of the spectrum, as successor models to LLMs and multimodal LLMs emerge in the next few years that are capable of behaving as agents in the world, we will need to figure out what these models can do autonomously.”
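As one illustration of the low-risk integration Saxe describes, here is a minimal editorial sketch of LLM embeddings slotted into an existing alert-triage step: alerts are ranked by similarity to previously confirmed malicious examples. The embed callable is a placeholder for whatever text-embedding model a team uses; nothing here is drawn from a specific product.

```python
# Editorial sketch: ranking security alerts by embedding similarity to known-bad examples.
# `embed` is a placeholder for any text-embedding model; it is not a real API.
from typing import Callable, List, Tuple
import numpy as np

def rank_alerts(
    alerts: List[str],
    known_bad: List[str],
    embed: Callable[[str], np.ndarray],
) -> List[Tuple[str, float]]:
    """Score each alert by its maximum cosine similarity to known-bad examples,
    so analysts review the most suspicious alerts first."""
    bad = np.stack([embed(text) for text in known_bad])
    bad = bad / np.linalg.norm(bad, axis=1, keepdims=True)  # unit-normalize rows
    scored = []
    for alert in alerts:
        vec = embed(alert)
        vec = vec / np.linalg.norm(vec)
        scored.append((alert, float(np.max(bad @ vec))))  # best match against known-bad set
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Nothing about the surrounding SIEM workflow changes in this sketch; the embedding model is just another scoring node, which is the kind of incremental, iterative work Saxe anticipates.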
Toner: “This is perhaps not directly an area of cybersecurity, but I would love to see more progress in digital identity—in building and deploying systems that allow humans to prove their humanity online. There are some approaches to this under development that use cryptography and clever design to enable you to prove things about your identity online while also preserving your privacy. I expect these kinds of systems to be increasingly important as AI systems become more capable of impersonating human behavior online.”
#4 How can AI policy account for both the technology itself as well as the contexts in which generative AI is developed and deployed?
Farlow: “As I am sure readers are aware, the question of regulating AI has become quite a philosophical debate, with some jurisdictions creating policy aimed at the AI technology itself, and others focusing on policy that regulates how different industries may choose to use that technology. And then within that, some jurisdictions are choosing to regulate only certain kinds of AI, such as generative AI. Given that AI encompasses an incredibly large landscape of technologies across an even broader range of use cases, I would like to see more analysis that explores both angles through a risk lens and can be used to inform internationally recognized and relevant regulation. While some AI applications can be risky and unethical and should be regulated or blocked, such as facial recognition for targeted assassinations, policy should not stifle innovation by constraining research and frontier labs. I would like to see regulation informed by the scientific method, with the intention of being universally applicable and adopted.”
Fist: “End-use-focused policies make sense for technology used in any high-risk domain, and generative AI models should be no different. An additional dedicated regulatory approach is likely required for highly capable general-purpose models at the frontier of research and development, known as ‘frontier models.’ Such systems develop new capabilities in unpredictable ways, are hard to make reliably safe, and are likely to proliferate rapidly due to their multitude of possible uses. These are problems that are difficult to address with sector-specific regulation. Luckily, a dedicated regulatory approach for these models would only affect a handful of models and model developers. The recent voluntary commitments secured by the White House from seven leading companies are a great start. I recently contributed to a paper that goes into some of these considerations in more detail.”
Guido: “In June, Trail of Bits told the National Telecommunications and Information Administration that there can be no AI accountability or regulation without a defined context. An audit of an AI system must be measured against actual verifiable claims about what the system is supposed to do, rather than against narrow AI-related benchmarks. For instance, it would be silly to have the same regulation apply to medical devices, home security systems, automobiles, and smart speakers solely because they all use some form of AI. Conversely, we should not allow the use of AI to become a ‘get out of regulation free’ card because, you see, ‘the AI did it!’”
Toner: “We need some of both. The default starting point should be that existing laws and regulations cover specific use cases within their sectors. But in some areas, we may need broader rules—for instance, requiring AI-generated content to be marked as such, or monitoring the development of potentially dangerous models.”
#5 How far can existing legal structures go in providing guardrails for AI in context? Where will new policy structures be needed?
Farlow: “Making policy for generative AI in context means tailoring regulations to specific industries and applications. A number of challenges associated with AI are not necessarily new—data protection laws, for example, may be quite applicable to uses of AI (or attacks on AI) that expose information. However, AI technology is fundamentally different from the cyber and information systems on which much of existing technology law and policy is based. For example, AI systems are inherently probabilistic, whereas cyber and information systems are rule-based. I believe there need to be new policy structures that can address novel challenges like adversarial attacks, deepfakes, model interpretability, and mandates on secure AI design.”
Fist: “Liability is a clear example of an existing legal approach that will be useful. Model developers should probably be made strictly liable for severe harm caused by their products. For potential future models that pose severe risks, however, after-the-fact remedies like liability may not be adequate. For these kinds of models, ex-ante approaches like licensing could be appropriate. The Food and Drug Administration and Federal Aviation Administration offer interesting case studies, but neither seems like exactly the right approach for frontier AI. In the interim, an information-gathering approach like mandatory registration of frontier models looks promising. One thing is clear: governments will need to build much more expertise than they currently possess to define and update standards for measuring model capabilities and to issue guidance on their oversight.”
Guido: “Existing industries have robust and effective regulatory and rule-setting bodies that work well for specific domains and provide relevant industry context. These same rule-setting bodies are best positioned to assess the impact of AI with the proper context. Some genuinely new emergent technologies may not fit into a current regulatory structure; these should be treated like any other new development and regulated through the legislative process in response to societal needs.”
Toner: “Congress’ first step to manage new concerns from AI, generative and otherwise, should be to ensure that existing sectoral regulators have the resources, personnel, and authorities that they need. Wherever we already have an agency with deep expertise in an area—the Federal Aviation Administration for airplanes, the Food and Drug Administration for medical devices, the financial regulators for banking—we should empower them to handle AI within their wheelhouse. That being said, some of the challenges posed by AI would fall through the cracks of a purely sector-by-sector approach. Areas that may need more cross-cutting policy include protecting civil rights from government use of AI, clarifying liability rules to ensure that AI developers are held accountable when their systems cause harm, and managing novel risks from the most advanced systems at the cutting edge of the field.”
Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.
The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.