Dispatch from Bletchley Park: Where does transatlantic AI cooperation stand?

From the outside, Bletchley Park looks like the setting of a nineteenth-century English costume drama. But the Victorian mansion fifty miles outside London has a claim to being the birthplace of modern computing. During World War II, it was where Alan Turing’s machines helped crack the “unbreakable” Enigma code. This week, it was where politicians and business leaders sought to crack another difficult problem: how to regulate artificial intelligence (AI). 

On November 1-2, the United Kingdom hosted a landmark AI Safety Summit at Bletchley Park. Earlier in the week, US President Joe Biden signed a sweeping executive order seeking to make AI safer and more trustworthy, and the Group of Seven (G7) announced a new code of conduct and international guiding principles on AI. The week proved just how transatlantic the global AI debate has become. 

At the AI Safety Summit, British Prime Minister Rishi Sunak outlined his hope that the United Kingdom will have a lasting impact on the global regulatory debate. The United States, the global leader in AI technologies, has staked out its aim to use its market influence to shape AI rules. In the European Union (EU), negotiations on the AI Act—intended to be the first major AI legislation with global reach—are nearing the finish line, with a final text expected before the end of the year.

As AI advances, it’s essential that like-minded allies and partners continue to work together to seize the enormous opportunities offered by this technology while also mitigating its risks to citizens and democracies. As different regulatory approaches and governance frameworks emerge around the world, achieving commonly held standards and principles on AI becomes increasingly important. The question remains whether voluntary commitments and non-binding principles are as far as a global approach to AI will go.

As transatlantic partners lead the global debate and craft a system of AI governance tools and mechanisms through both individual and multilateral efforts, here are the three main observations the authors took away from the AI Safety Summit about how to move transatlantic AI cooperation to the next level: 

  1. While transatlantic partners widely agree on the need to adopt a “risk-based” approach to global AI governance, there is significant variation in AI taxonomies, including even the definition of “risk-based” itself. Transatlantic partners should seek to standardize language to create a shared “frame of reference” when dealing with scientific concepts. The EU and United States already have an effective mechanism on technical standards through the Trade and Technology Council (TTC). While total harmonization is not achievable across legal systems, shared AI taxonomies lay the groundwork for understanding, measuring, and managing risk together. Moreover, standardization enables greater interoperability, which is crucial for enhanced integration, scalability, and oversight of AI systems. So far, US and EU officials have agreed to sixty-five definitions related to AI through the TTC Joint Roadmap for Trustworthy AI and are now in consultation with stakeholders. The joint roadmap provides a useful model for how transatlantic partners and allies—beyond just the United States and EU—can align on technical standards. 
  2. While the Summit’s focus on “frontier risk”—defined as risk from “general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”—is important, is this really a priority among international players? Is it the most pressing AI risk at the moment? While the Bletchley Declaration on AI was a significant win for the UK government—an unprecedented agreement among twenty-eight signatories, including the United States, EU, and China—future summits should consider the “full spectrum” of potential AI harms (including frontier risk) in order to better address today’s risks to human rights and societies. These harms include AI-enabled disinformation and algorithmic bias.
  3. While countries of the Global South have aligned with the Organisation for Economic Co-operation and Development’s (OECD’s) AI principles at the Group of Twenty (G20), they are more inclined to view AI as an economic and growth opportunity. If transatlantic partners ever hope to “multilateralize” a regulatory approach on AI, they should consider how to engage with like-minded partners in the Global South—in ways that go beyond prescribing rules. The AI Safety Summit was a good start toward a broader international discussion on AI harms, but future summits should do more to address how to leverage AI to tackle some of the biggest challenges of our time, from climate change and food security to access to healthcare and education. With future summits set to take place in France and South Korea, the door should be open for a country in the Global South to host. 

While the United States, United Kingdom, and EU offer different visions on how to regulate AI technologies, transatlantic partners are able to lead the global debate in spite of their differences. Their efforts should continue to focus on gathering a broad international consensus on technical standards and core principles, as these are the building blocks for a more harmonized set of regulatory activities across the board. 

The AI Safety Summit presented an evolution rather than a revolution in the international dialogue on AI governance. It echoed the policy discussions in other fora, including the G7, G20, OECD, and even the TTC, while bringing more nations and other stakeholders into the conversation. How transatlantic partners and allies leverage that momentum will be critical to avoid regulatory fragmentation and to ensure citizens’ trust in technologies that, if leveraged responsibly, can deliver massive improvements to their well-being. While there is a clear need for a coordinated approach to AI governance, transatlantic partners also need to collaborate more closely on innovation policies and investments to ensure that the world’s democracies remain at the forefront of AI for the long term.


Mark Boris Andrijanič is a nonresident senior fellow at the Atlantic Council’s Europe Center.

Nicole Lawler is a program assistant at the Atlantic Council’s Europe Center.

Image: Britain's Prime Minister Rishi Sunak speaks to journalists upon his arrival for the second day of the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, in central England, November 2, 2023. Justin Tallis/Pool via REUTERS