Last week, industry representatives, researchers, and government officials from twenty-seven countries descended upon Bletchley Park, an estate an hour outside of London where British mathematician Alan Turing and his team broke the Nazis’ Enigma code in World War II. But the group assembled last week sought to break a code of a different kind: how to tackle the risks associated with the latest advancements in artificial intelligence (AI). The inaugural AI Safety Summit, hosted by the UK government, wound up a busy couple of weeks in AI. On October 30, US President Joe Biden issued an executive order on the safe, secure, and trustworthy development and use of AI, and a few days earlier the United Nations launched a high-level advisory body on AI, to which one of the authors has been appointed.
The most notable outcome from the UK gathering was the Bletchley Declaration, which struck all the right chords. It emphasized the importance of multistakeholder involvement and international cooperation, and it underscored the responsibilities of “actors developing frontier AI capabilities.” Headlines have largely focused on the remarkable feat of bringing China, the European Union, and the United States together to sign a declaration on AI governance, at a time when they seem to agree on very little else. The declaration reveals that countries across the ideological spectrum are worried about the potential for harms caused by AI systems and recognize that solutions cannot be built by one country alone. And thus, on the same stage, US Secretary of Commerce Gina Raimondo declared that “even as nations compete vigorously, we can and must search for global solutions to global problems,” while Chinese Vice Minister of Science and Technology Wu Zhaohui called for collaboration to mitigate potential unintended harms of frontier AI models. Wu’s presence at the closed-door government leaders’ meetings on the second day of the summit—which was not publicly acknowledged at the time—was also notable, but necessary given China’s position of influence in AI development and global uptake of the technology outside of Western democracies.
There were other important developments, too. The United Kingdom and the United States signed essential bilateral agreements on shared AI safety standards. On the US side, the National Institute of Standards and Technology, which is already building out standards on AI risk identification and mitigation, will liaise with the UK’s Frontier AI Taskforce going forward. However, given that AI safety is both a technical and a national security issue, the UK’s National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency would also likely need to be involved in discussions. The UK government’s announcement of the creation of an AI Safety Institute, which “will act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast-moving technology,” can also help harmonize the UK and US approaches to AI risk across the board. It might also serve as a model for further engagement with other likeminded partners.
The summit was also remarkable for including low- and middle-income countries in the Global South. Brazil, India, Indonesia, Kenya, Nigeria, the Philippines, Rwanda, and Turkey all participated in deliberations. That said, Global South representation outside of governments was middling at best, with the list of participants crowded, instead, with European and US research institutions—often funded by big tech conglomerates, which were also present in full force at the summit. No doubt the opaque process leading up to the summit, with nongovernment participants being informed of their participation a month or less before the gathering, contributed to the homogeneity of the final list. With UK visa wait times globally averaging three weeks, and the challenge of funding travel being a persistent barrier to Global South participation in key forums, one hopes that future iterations are far more inclusive. This is especially pertinent if the summit is to grow, perhaps in the style of the United Nations Climate Change Conference, also known as COP, into a mix of open dialogues among multiple stakeholders and closed-door multilateral deliberations.
The summit’s limitations in the realm of inclusivity were exacerbated by the agenda. The Bletchley Park gathering reflects a peculiarity of the world since ChatGPT’s release, after which conversations about AI governance have—while gaining greater public attention—shifted sharply toward existential or catastrophic risk. This is in part because of the influence of US tech companies, which are increasingly expanding their executives’ engagement with governance processes and shaping the agenda. This is a boon, because without private tech leadership buy-in, no set of rules is likely to be effective. But it is also a barrier to deeper engagement on the real, near-term risks of AI, such as misinformation, copyright infringement, and election disinformation, as well as other threats to democracy. Nebulous long-term risks may well be more amenable to banding together twenty-seven countries—and to generating positive press and share value—than fractious, seemingly intractable challenges. Future iterations of the summit—in South Korea in six months, and in France in a year—should carve out space within the agenda to engage tech giants, which wield the computing power, data, investment, and agenda-setting influence, on AI safety issues that are affecting communities and nations today.
Ensuring AI safety will be the monumental task of this era. It’s another moon shot. And to paraphrase US President John F. Kennedy about the original moon shot, humanity must find workable solutions to the real and present harms of AI, not because they are easy, but because they are hard.
Dame Wendy Hall is a nonresident senior fellow at the Atlantic Council’s GeoTech Center. She is regius professor of computer science, pro vice-chancellor for international engagement, and the executive director of the Web Science Institute at the University of Southampton.
Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.