AI safety concerns transcend borders. To meet the challenge, US efforts need to go global.

How will the incoming Trump administration approach artificial intelligence (AI) governance? The answer to that question depends in part on what the outgoing administration does with its remaining time in office.

This month, the US government is moving forward with key international engagements to shape consensus on shared principles of technology development. The US State Department and US Commerce Department will host the first-ever meeting of the International Network of AI Safety Institutes in San Francisco on November 20-21. This event, which aims to shape the future of AI with global cooperation at its core, will assemble technical experts to align on priority research topics, increase knowledge sharing, and improve transparency on AI safety efforts in preparation for the Paris AI Action Summit in February 2025.

The need to verify the robustness and trustworthiness of AI systems continues to grow as the technology rapidly proliferates and cases of model failure multiply. In two recent examples, a New York City chatbot offered unlawful advice to small business owners, and several major large language models revealed protected information through novel “jailbreaking” techniques. Developer codes of conduct and guiding principles call for independent evaluations and risk-mitigation reporting to help identify and prevent similar problems, but such practices remain uncoordinated and inconsistent across the AI industry.

Since the first AI Safety Summit one year ago at Bletchley Park in the United Kingdom, a surge in efforts to make AI safer has turned nascent concerns into a powerful global movement. Today’s network of individuals and institutions working to improve the trustworthiness of new models spans a diverse range of stakeholders, with several promising agreements already in place.

Such an ecosystem holds real promise for safety experts to build awareness of emerging threats and collaborate on shared solutions. Yet this emerging network must address some key challenges to establish a sustainable foothold in the global conversation. The AI governance space already includes layers of interconnected institutions with overlapping mandates, while information asymmetries and contrasting motivations across stakeholders make it challenging to reach consensus, even on broad goals. The International Network of AI Safety Institutes must work toward several crucial objectives as it ventures into these waters.

Short-term priorities

Establish a narrow focus on technical safety measures

The Network should first craft a mandate that steers clear of contentious policy debates and focuses instead on the multitude of technical solutions, including model evaluation methods, red-teaming practices, and other safety mechanisms. Such a scope of work would allow the group to avoid political pitfalls and attract a broader body of contributors to improve information sharing. The Network should advise and assist policymaking institutions with technical reporting, but it should preserve its independence by operating as a community of technical experts rather than regulators.

Build capacity in AI safety research

The November event will convene existing AI safety institutes from the United States, United Kingdom, Japan, Canada, and Singapore, with additional representatives from Australia, France, Kenya, South Korea, and the European Union. The growing list indicates strong potential for the group’s future, but membership remains heavily weighted toward higher-income countries in the West, limiting the Network’s impact.

Moving forward, the Network should establish outreach channels to researchers and developers in regions without existing safety organizations, particularly in Latin America and Africa, while providing technical resources to policymakers in those countries. These efforts could expand the use of effective evaluation and auditing methods, improving the analysis of model performance before market delivery.

Share updates from industry on frontier AI safety commitments

Despite the current attention on responsible AI, global regulatory efforts remain fragmented and uncoordinated, making voluntary commitments by major developers some of the most impactful guardrails in the field. A cohort of major AI companies pledged increased transparency and dedication to risk management at the AI Seoul Summit in May 2024, and leading US firms have established the Frontier Model Forum to publicly share research results. The November convening should include a comprehensive report on their current safety projects and a roadmap for continued research.

Long-term goals

Advise standards-setting bodies

Network participants will likely influence AI standards for their respective governments and can therefore pool a wealth of knowledge to shape common technical guidelines. While this body lacks the mandate to produce those standards on a global scale, it should advise international organizations already working on these frameworks. Similar organizations have done so in the past: the National Institute of Standards and Technology, for example, has collaborated with the International Electrotechnical Commission and the International Organization for Standardization on cybersecurity and privacy, helping to align international approaches and enhance cross-border interoperability.

Establish dialogue channels with major AI policy organizations

Similarly, the Network should engage with other institutions, such as the Global Partnership on Artificial Intelligence and the United Nations’ AI Advisory Body, to ensure their policy recommendations are informed in design and effective in implementation. From cities to multilateral forums, the number of organizations including AI in their scopes of work continues to grow. An integrated engagement process could provide the foundation for consistency and interoperability across the international regulatory spectrum, preventing policy-related conflicts and roadblocks.

Expand membership

While keeping the group small could improve expediency and alignment in decision making, it would limit the scope and scale of the Network’s impact. Kenya’s inclusion in the November event signals a wise intention to expand membership to underrepresented regions. Future iterations of this event should continue to convene more diverse participants, including subject matter experts from academia and civil society, so that key sectors with deep knowledge bases can help accelerate outcomes and drive responsible AI innovation and adoption.

Perhaps the most pressing question is whether to invite a Chinese delegation in the future. While including China in policy-related efforts may prove a bridge too far for some members, the Network’s narrower scientific and technical mandate could allow it to include Chinese AI experts in future conversations, particularly given Beijing’s increased focus on AI safety research. The group will need to weigh the potential benefits of the technical expertise that the leading minds in China’s AI space can offer against the risks of working with a misaligned governance system.

Even as the United States undergoes a political transition in the new year, and even if the mission of the AI Safety Institute changes, AI safety will remain an important and far-reaching issue. The convening later this month is a critical opportunity to establish a clear direction for AI safety research that is bigger than any one country.


Will LaRivee is a resident fellow at the Atlantic Council GeoTech Center.

Image: US Commerce Secretary Gina Raimondo speaks on Day 1 of the AI Safety Summit at Bletchley Park in Bletchley, Britain on November 1, 2023. The UK Government are hosting the AI Safety Summit bringing together international governments, leading AI companies, civil society groups and experts in research to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Leon Neal/Pool via REUTERS