AI governance on a global stage: Key themes from the biggest week in AI policy
The week of October 30, 2023, was a monumental one for artificial intelligence (AI) policy globally. As a quick recap: In the United States, President Biden signed one of the longest Executive Orders (EO) in history, aimed at harnessing the opportunities of AI while also addressing the potential risks posed by future evolutions of the technology. In the United Kingdom, international stakeholders came together to discuss risks at the “frontier” of AI and how best to mitigate them, and twenty-eight countries and the European Union signed on to the Bletchley Declaration (“Declaration”). In the midst of all of this, the Hiroshima AI Process, launched by Japan under the Group of Seven (G7), released its International Guiding Principles for Organizations Developing Advanced AI Systems (“G7 Principles”) as well as a voluntary International Code of Conduct for Organizations Developing Advanced AI Systems.
In light of what was arguably one of the busiest (and perhaps most impactful) weeks in AI policy since the public release of ChatGPT thrust AI into the spotlight almost a year ago, there’s a lot to unpack. Below are some key themes that emerged from the conversation and items that will become increasingly relevant as efforts to govern the technology progress globally.
A commitment to taking a risk-based approach to regulation of AI technology
Across last week’s activities, one theme that came through clearly was the continued emphasis on a risk-based approach, as these authors highlighted in their piece on transatlantic cooperation.
While some efforts called this out more directly than others, it was a through line that should rightfully remain top of mind for international policymakers moving forward. For example, the chapeau of the G7 Principles calls on organizations to follow the guidelines set forth in the Principles “in line with a risk-based approach,” and the theme is reiterated in several places throughout the rest of the document. In the Declaration, countries agreed to pursue “risk-based policies…to ensure safety in light of such risks.” The Executive Order was less direct in its commitment to a risk-based approach, though the obligations it lays out for providers of “dual-use foundation models” in Section 4.2 suggest that this was the intent. The application of requirements to this set of models indicates that the Administration sees heightened risk associated with these models, though moving forward a clear articulation of why these obligations are the most appropriate approach to managing that risk will be critical.
In digesting all of last week’s activities, a central theme to note is that the global conversation seems to be moving beyond an approach focused solely on regulating uses of AI toward also regulating the technology itself. Indeed, all of the major efforts last week discussed risks inherent to “frontier models” and/or “advanced AI systems,” suggesting that there are model-level risks that might require regulation, in addition to context-specific, use-case-based governance.
What to look out for:
How the term “frontier models” is formally defined, including whether international counterparts are able to come to agreement on the technical parameters of a “frontier model”
- The Declaration discusses ‘frontier models’ as “those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks—as well as relevant specific narrow AI that could exhibit capabilities that cause harm—which match or exceed the capabilities present in today’s most advanced models.” The Executive Order, for its part, pairs a qualitative definition of “dual-use foundation model” with initial technical thresholds for the models and computing clusters subject to its reporting requirements: “(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and (ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.” The G7 Principles merely discuss “advanced AI systems” as a concept, using “the most advanced foundation models” and “generative AI systems” as illustrative types of these systems.
- With that being said, it will be interesting to see how definitions and technical parameters are established moving forward, particularly because compute thresholds measured in floating-point operations seem to be where the conversation is currently trending, and they are not a particularly future-proof metric (see the rough sketch below).
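To make the thresholds above concrete, here is a minimal, hypothetical sketch, not drawn from any of the documents discussed, of how a developer might estimate whether a training run approaches the EO’s interim 10^26-operation reporting threshold. It assumes the commonly cited rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens floating-point operations; the model size and token count used are purely illustrative.

```python
# Hypothetical illustration only: a back-of-the-envelope check against the EO's
# interim 1e26-operation reporting threshold for models (Section 4.2(b)).
# Assumes the rough heuristic that dense-transformer training costs about
# 6 * parameters * training_tokens floating-point operations.

EO_MODEL_THRESHOLD_FLOP = 1e26  # interim threshold for models in the EO


def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Very rough estimate of total training compute for a dense transformer."""
    return 6.0 * n_parameters * n_training_tokens


def exceeds_eo_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the EO threshold."""
    return estimated_training_flop(n_parameters, n_training_tokens) >= EO_MODEL_THRESHOLD_FLOP


if __name__ == "__main__":
    # Illustrative example: a 70-billion-parameter model trained on 15 trillion tokens.
    params, tokens = 70e9, 15e12
    flop = estimated_training_flop(params, tokens)
    print(f"Estimated training compute: {flop:.2e} FLOP")  # ~6.3e24, below 1e26
    print("Meets or exceeds EO reporting threshold:", exceeds_eo_threshold(params, tokens))
```

The point of the sketch is less the arithmetic than the observation above: a fixed compute number is easy to administer, but hardware and algorithmic efficiency gains can quickly render it outdated.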
Continued conversation about what the right approach is to govern risks related to “frontier” AI systems
- With the introduction of both voluntary agreements (e.g., in the Declaration and in the G7 Code of Conduct) and specific obligations (e.g., in Sections 4.2 and 4.3 of the Executive Order), there is sure to be additional discussion about the right approach to managing risk related to these models. In particular, keep an eye out for conversations about what the right regulatory approach might be, including how responsibilities are allocated between developers and deployers.
Whether specific risks related to these models are clearly articulated by policymakers moving forward
- In some respects, it seems to be a foregone conclusion that “frontier” AI systems will need to be regulated because they present a unique or different set of risks than AI systems that already exist. However, in setting out regulatory approaches, it is important to clearly define the risk that a given regulation seeks to address and to demonstrate why that approach is the most appropriate one. While the EO indicates that the US government is concerned about these models amplifying biosecurity- and cybersecurity-related risks, clearly explaining why the proposed obligations are the right ones for the task is going to be critical. There also continues to be tension between those who are focused on “existential” risks associated with these systems and those who are focused on addressing “short-term” risks.
A major focus on the role of red-teaming in AI risk management
Conversations over the last week focused on red-teaming as a key component of AI risk management. Of course, this was not the first time red-teaming has been highlighted as a method to manage AI risk, but it came through particularly clearly in the EO, the G7 Principles, and the Declaration as a tool of choice. Section 4 of the AI EO directs the National Institute of Standards and Technology (NIST) to develop red-teaming guidelines and requires providers of “dual-use foundation models” to provide information, including the results of red-teaming tests performed, to the US government. Principle 1 of the G7 Principles discusses the importance of managing risk throughout the AI lifecycle and references red-teaming as one of the methods to discover and mitigate identified risks and vulnerabilities. The Declaration does not use the term “red-teaming” specifically but discusses the role of “safety testing” in mitigating risk (though it is not clear from the statement what exactly this testing will look like).
One interesting thing to note is that, in the context of AI systems, the term “red-teaming” seems to indicate a broader set of practices than attacking or hacking a system in an attempt to gain access; it involves testing for flaws and vulnerabilities of an AI system more generally. This is a departure from how red-teaming is typically understood in the cybersecurity context, likely because there is an ongoing discussion about which tools are most appropriate to test for and mitigate a broader range of risks beyond those related to security, and red-teaming presents a useful construct for such testing.
Despite red-teaming being a significant focus of recent conversations, it will be critical for policymakers to avoid overemphasizing it. Red-teaming is one way to mitigate risk, but it is not the only way. It should be undertaken in conjunction with other tools and techniques, such as disclosures, impact assessments, and data input controls, to ensure a holistic and proportionate approach to AI risk management.
What to look out for:
If and how different jurisdictions define “red-teaming” for AI systems moving forward, and whether a common understanding can be reached. Will the definition remain expansive, encompassing all types of testing and evaluation, or will it be tailored to a more specific set of practices?
How red-teaming is incorporated into regulatory efforts moving forward
- While the events of the last week made clear that policymakers are focused on red-teaming as a means by which to pressure test AI systems, the extent to which such requirements are incorporated into regulation remains to be seen. The Executive Order, with its requirement to share the results of red-teaming processes, is perhaps the toothiest obligation coming out of the events of the past week, but as other jurisdictions begin to contemplate their approaches, don’t be surprised if red-teaming takes on a larger role.
How the institutes announced during the UK AI Safety Summit (the US AI Safety Institute and the UK AI Safety Institute) will collaborate with each other
- The United States announced the establishment of the US AI Safety Institute, which will be charged with developing measurement and evaluation standards to advance trustworthy and responsible AI. Because Section 4.1 tasks NIST with developing the guidance that underpins the red-teaming required by Section 4.2 of the Executive Order, this Institute, and its work with similarly situated organizations around the world, will be key to implementing the practices outlined in the EO and beyond.
An emphasis on the importance of relying upon and integrating international standards
A welcome theme that emerged is the essential role that international technical standards, and the organizations that develop them, play in advancing AI policy. Section 11 of the AI Executive Order, focused on advancing US leadership abroad, calls for the United States to collaborate with its partners to develop and implement technical standards and specifically directs the Commerce Department to establish a global engagement plan for promoting and developing international standards. Principle 10 of the G7 Principles likewise emphasizes the importance of advancing and adopting international standards. The Declaration highlights the need to develop “evaluation metrics” and “tools for testing.”
International technical standards will be key to advancing interoperable approaches to AI, especially because we are seeing different jurisdictions contemplate different governance frameworks. They can help provide a consistent framework for developers and deployers to operate within, provide a common way to approach different AI risk management activities, and allow companies to build their products for a global marketplace, reducing the risk of fragmentation.
What to look out for:
Which standards efforts are prioritized by nations moving forward
- As mentioned above, the United States and the United Kingdom both announced their respective AI Safety Institutes during last week’s Summit. The UK’s Institute is tasked with focusing on technical tools to bolster AI safety, while NIST is tasked with a wide range of standards activities in the Executive Order, including developing guidelines for red-teaming, AI system evaluation and auditing, secure software development, and content authentication and provenance.
- Given the plethora of standards that are needed to support the implementation of various risk management practices, which standards nations choose to prioritize is an indicator of how they are thinking about risks related to AI systems, their impact on society, and regulatory efforts more broadly. In general, nations appear to be coalescing around the need to advance standards to support the testing and evaluation of capabilities of advanced AI systems/frontier AI systems/dual-use foundation models.
How individual efforts are mapped to or otherwise brought to international standards development organizations
- In addition to the activities taking place within national standards bodies, standardization activities are also underway at the international level. For example, the International Organization for Standardization/International Electrotechnical Commission Joint Technical Committee 1 Subcommittee 42 (ISO/IEC JTC 1/SC 42) has been hard at work on a variety of standards to help support testing of AI systems and recently completed ISO/IEC 42001, an AI management system standard. Mapping national efforts to these international activities is helpful for fostering consistency and for allowing organizations to understand how one standard relates to another.
- Participating in and/or bringing national standards, guidelines, and best practices to international standards bodies helps create buy-in, facilitates interoperability, and promotes alignment. As individual nations continue to consider how best to implement various risk management practices, continuing to prioritize participation in these efforts will be crucial to a truly international approach.
The events of the last week helped to spotlight several areas that will remain relevant to the global AI policy conversation moving forward. In many ways, this is only the beginning of the conversation, and these efforts offer an initial look at how international collaboration might progress, and in what areas we may see additional discussion in the coming weeks and months.