
Issue Brief

March 14, 2024

Leveraging generative artificial intelligence to outcompete strategic rivals

By Jeffrey Cimmino and Andrew A. Michta

The Scowcroft Center’s project on twenty-first-century diplomacy

In 2022, the Scowcroft Center for Strategy and Security launched a project, with funding from Dataminr, to address how US diplomacy should adapt to meet twenty-first-century challenges. The first paper produced as part of this project considered the changing context of twenty-first-century diplomacy and how US diplomacy could begin to adapt. In 2023, the center hosted a workshop that brought together diplomats, scholars, technologists, and other experts to share their insights on the challenges and opportunities presented by artificial intelligence for twenty-first-century diplomacy. This paper benefited greatly from the insights of workshop participants, with the authors drawing on their analysis and recommendations.

Introduction

Three decades after the end of the Cold War, the United States is in a new era of strategic competition. Navigating this era successfully requires the United States to leverage diverse instruments of power—diplomatic, economic, military, etc.—to deter, outcompete, and, if need be, defeat revisionist autocratic rivals.

Much scholarly attention has focused on the potential military flashpoints of strategic competition, such as a potential invasion of Taiwan by the People’s Republic of China. Given Russia’s demonstrated willingness to use force to prosecute its revanchist aims, the risk of direct military conflict between nuclear-armed rivals should not be taken lightly. Nevertheless, the military domain is but one dimension of this competition, and the United States needs to shore up its strength in other areas, including diplomacy.

The purpose of the Scowcroft Center’s project on twenty-first-century diplomacy is to produce the analysis and develop the ideas necessary to strengthen US diplomacy. This project focuses particular attention on technology’s role in bolstering the exercise of diplomacy.

The first issue brief in this series, “Twenty-first-century diplomacy: Strengthening US diplomacy for the challenges of today and tomorrow,” addressed the changing context in which US diplomacy is practiced and began to point toward ways the United States could adapt its diplomacy to the twenty-first century. This issue brief will build on that work by outlining the challenges and opportunities posed by artificial intelligence (AI) in the practice of twenty-first-century diplomacy, and by offering recommendations for how to leverage AI efficiently to strengthen diplomacy. Notably, this paper will focus primarily on generative AI (GAI), the subset of AI oriented toward creating new content.

As argued in that first issue brief, the diplomatic domain has been a key arena of strategic competition. China has taken steps to deepen its influence in international institutions, both to shape norms and to divert attention away from Beijing’s human-rights abuses. China uses public diplomacy to cultivate favorable narratives about the Chinese Communist Party (CCP), including through Confucius Institutes at universities. China has also sought to bolster its “discourse power,” defined by our Digital Forensic Research Lab colleague Kenton Thibaut as “a type of narrative agenda-setting ability focused on reshaping global governance, values, and norms to legitimize and facilitate the expression of state power.” Toward this end, China has worked to increase channels for its messaging through traditional and social media, tailor content to target audiences more effectively, and embed Chinese norms and standards regarding digital connectivity.

This challenge is not limited to China: Russia also uses diplomacy to expand its global influence, especially in the Global South. For example, while a March 2022 United Nations General Assembly resolution condemning Russia’s 2022 invasion of Ukraine received support from more than 140 countries, there were notable pockets of opposition or abstention, including much of Africa. This was due, in part, to the diplomatic overtures Russia has made since its 2014 invasion of Crimea, including the signing of nineteen military cooperation agreements in sub-Saharan Africa over the last ten years and a tour of Africa by Russia’s foreign minister in the weeks leading up to the vote. It is also rooted in historical relationships going back to the Soviet Union, which was, for example, an early supporter of South Africa’s ruling African National Congress.

The salient point is this: Diplomacy is critical in strategic competition.

While AI has received attention for years, interest in it has erupted recently, in both the scholarly and popular imaginations, particularly as tools like ChatGPT have grown ubiquitous. The entire US government, including the executive and legislative branches, must rise to meet this challenge, and fortunately, the State Department has not sat idly by while AI has advanced.

In 2021, Secretary of State Antony Blinken spoke about the modernization of American diplomacy at the Foreign Service Institute, where he highlighted five pillars guiding the transformation of US diplomacy. This included building “capacity and expertise in the areas that will be critical to our national security in the years ahead, particularly . . . emerging technologies.” He noted that the United States has a “major stake in shaping the digital revolution that’s happening around us and making sure that it serves our people, protects our interests, boosts our competitiveness, and upholds our values.”

In October 2023, the White House published an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The executive order offers guiding principles for the development and use of AI and outlines actions for a broad array of departments and agencies. This includes calling on the secretary of state to use “discretionary authorities to support and attract foreign nationals with special skills in AI.”

Also in October 2023, the US State Department released Enterprise Artificial Intelligence Strategy FY2024-FY2025: Empowering Diplomacy Through Responsible AI. The strategy opens by stating outright that the exponential growth of easy-to-use generative AI, and the “once-in-a-generation opportunity” it offers the State Department, must underpin the department’s approach moving forward. It then outlines four objectives for the department: leveraging secure AI infrastructure; fostering a culture that embraces AI; ensuring AI is applied responsibly; and innovating. Each objective emphasizes the importance of the State Department’s staff in this monumental effort and indicates that AI will not replace employees but rather serve as a tool that frees them from rote administrative tasks for higher-level work like personal diplomacy.

These steps demonstrate a recognition on the part of the US government and its diplomatic apparatus that they need to get smart on AI and emerging technologies more broadly. This issue brief will support that effort by explaining both the risks and the prospective benefits of AI for diplomacy, and by recommending a path forward for seizing the latter.

In brief, this paper will recommend the following courses of action for the US government.

  • Look to GAI as a critical asset in engaging the information space;
  • Work with allies and partners to shape norms around AI and cooperate to dominate the commanding heights of this technology;
  • Leverage GAI as a tool of soft power;
  • Establish standards for transparency and ethical guardrails in the use of GAI; and
  • Reorient the State Department’s workforce to integrate AI.

Opportunities presented by GAI for diplomacy

Despite its challenges, GAI holds great promise for the future of diplomacy, and it presents a number of opportunities that should be exploited prudently. Because this issue brief is intended as a summary, the list below is not exhaustive. Some of the opportunities presented by GAI include facilitating communication and enhancing negotiations; assisting in content distillation; augmenting strategic communications; and supporting cultural diplomacy.

  • GAI and negotiations. GAI has a role to play in facilitating interactions between diplomats and enhancing negotiations. Andrew Moore, chief of staff to former Google CEO Eric Schmidt, noted last year that ChatGPT and similar tools are already helping diplomats prepare for negotiations. He writes, “As Nathaniel Fick, the inaugural U.S. ambassador at large for cyberspace and digital policy, recently quipped, briefings generated by the AI-powered ChatGPT are now ‘qualitatively close enough’ to those prepared by his staff.” He also cites the example of IBM’s Cognitive Trade Advisor, which has supported negotiators by answering questions about trade treaties that might otherwise delay talks.

    Looking ahead, Moore argues that AI could play a more direct role in shaping diplomatic agreements, adding, “As more and more parties develop their own AI, we could see AI ‘hagglebots’—computers that identify optimal agreements given a set of trade-offs and interests—take on a key role in negotiations.”
  • GAI and content distillation. GAI has an important role to play in helping diplomats sift through vast quantities of information. This applies even to internal or classified stores of data. As Ilan Manor, senior lecturer at Ben Gurion University of the Negev, has argued, “Instead of ChatGPT, imagine a ‘StateGPT’ able to analyze decades of internal documents generated by the State Department. Diplomats could view this internal AI to track changes in other nation’s policy priorities, identify shifts in foreign public opinion, or even identify changes in how America narrates its policies around the world.” (A minimal sketch of this kind of internal retrieval and summarization appears after this list of opportunities.)

    One caveat to this opportunity: The proliferation of AI-enabled disinformation could complicate the use of GAI in a content-distillation capacity. Outputs generated from AI-sifted datasets should not be treated as foolproof.
  • GAI and strategic communications. GAI’s information-analysis capabilities can be further leveraged to help with strategic communications. As Moore notes, the use of technology to solicit input from citizens is well-developed:

    “More than a decade ago, Indonesia pioneered a platform called UKP4, allowing everyday citizens to submit complaints about anything from damaged infrastructure to absent teachers. Although technology can be misused for manipulation and misinformation, artificial intelligence can also serve as a powerful tool to identify these misbehaviors, creating an ongoing struggle in the race between AI that will help and AI that will harm.”

    The ability to solicit input or review datasets documenting the needs and concerns of a body of citizens can be used to tailor public diplomacy campaigns. Moreover, AI can help diplomats engaged in public diplomacy to test messaging by sorting through reactions to public-facing campaigns from various sources, including news and social media.

    Furthermore, Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, has argued, “We can use AI-enabled sentiment analysis tools to better understand where authoritarian narratives are taking root in target societies around the world, so that we know where to focus our attention and resources.” Does it make sense to respond to a certain bit of critical information, or would doing so “actually give oxygen to something that otherwise would not get much traction?”

    Given scarce resources, knowing where and when to devote energy to messaging and counter-messaging is just as important as knowing how to message. Using the nuance and know-how of diplomats and the data-distillation abilities of GAI in conjunction will be a boon to State Department efforts.
Image: US Secretary of State Antony Blinken meets with Nigerian Minister of Communications, Innovation and Digital Economy Bosun Tijani during the Digital Technology Showcase at 21st Century Technologies in Lagos, Nigeria, January 24, 2024. ANDREW CABALLERO-REYNOLDS/Pool via REUTERS
  • GAI and cultural diplomacy. GAI has a key role to play in strengthening US soft power, as it can support cultural diplomacy, including preserving cultural heritage through digitization of documents, audio enhancement, and content restoration.

    Consider one example of the potential of GAI outside the realm of diplomacy: genealogy. FamilySearch, one of the foremost providers of digitized ancestral records, has collected billions of documents that must be transcribed before they can be searched. As one senior manager said last year, “In just a couple of hours, the computer can index more than you or I could do in a whole lifetime if we did nothing besides indexing for the rest of our lives.”

    The United States is a global leader in the development of GAI technology, and it could take advantage of this to bolster its soft power by enhancing its cultural restoration and preservation efforts. As outlined above, the United States could help states index cultural documents, as well as preserve and restore important audio and video files and items of cultural significance. For example, the United States and Cambodia first signed a cultural cooperation agreement on “preserving and restoring Cambodia’s rich cultural heritage” in 2003, and since then, the United States has helped return more than one hundred antiquities and assisted with preservation efforts. The use of GAI could contribute to this agreement by helping to identify stolen antiquities, determine forgeries, and restore damaged artifacts or sites. Doing so will help the United States form and solidify positive diplomatic relations with countries in the Global South that have been victimized by the looting and trafficking of cultural artifacts.
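
To make the content-distillation opportunity above more concrete, the following is a minimal sketch, not a description of any actual State Department system. It assumes a small, invented set of unclassified internal memos, retrieves the memos most relevant to a diplomat’s query using TF-IDF similarity from scikit-learn, and passes them to a placeholder summarize() function standing in for whatever approved GAI service might eventually be used.

```python
# Minimal sketch: retrieve the internal memos most relevant to a query, then
# hand them to a (placeholder) generative summarizer. Memo texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

MEMOS = [
    "2015 cable: counterpart ministry emphasizes maritime trade corridors.",
    "2019 readout: counterpart shifts messaging toward digital infrastructure financing.",
    "2023 summary: public statements stress technology standards and data governance.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k memos most similar to the query by TF-IDF cosine similarity."""
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(MEMOS)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    top = scores.argsort()[::-1][:k]
    return [MEMOS[i] for i in top]

def summarize(passages: list[str]) -> str:
    # Placeholder for a call to an approved, access-controlled GAI service.
    return " | ".join(passages)

if __name__ == "__main__":
    hits = retrieve("How have the counterpart's technology priorities changed?")
    print(summarize(hits))
```

The design point is that the generative model only ever sees the handful of retrieved passages, which keeps the analysis auditable; in practice the document store, the retrieval step, and the summarizer would all sit behind the department’s own security controls.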

Challenges of GAI for diplomacy

AI holds much promise for the future of diplomacy, but it is important to be clear-eyed about the risks and challenges associated with it. In particular, GAI can be harnessed toward nefarious ends, especially in the realm of disinformation and misinformation. US officials also should be wary of AI’s limits. While it can be a major asset to diplomats, caution and a healthy awareness of where AI might fall short or have negative externalities are essential. Finally, AI, broadly, is predicated on a triad of algorithms, computing power, and data. This paper will not explore all three in a technical manner, but the last of these (data) carries a risk of distrust from diplomats and the public, absent transparency and ethical guardrails.

This section will explore each of these challenges in greater detail.

  • GAI and the information space. GAI can be used to amplify disinformation and misinformation, muddying the environment in which diplomats engage in public diplomacy. For example, in late 2022, Graphika, a research firm that examines social media and disinformation, uncovered a pro-China information campaign leveraging deepfake video technology in the form of fake news anchors. The videos of AI-generated avatars were distributed by pro-China social media bot accounts and disparaged the United States while portraying China as a good geopolitical actor. While these videos were not compelling to audiences and had few clicks, the utilization of this technology quickly accelerated. A year later, The New York Times reported on a pro-China YouTube network of channels that used GAI to cast aspersions on US policy. According to the report, “Some of the videos used artificially generated avatars or voice-overs, making the campaign the first influence operation known to [the Australian Strategic Policy Institute] to pair A.I. voices with video essays.” The clips covered an array of issues in a manner designed to favor China, from advancing “narratives that Chinese technology was superior to America’s, that the United States was doomed to economic collapse, and that China and Russia were responsible geopolitical players” to “fawn[ing] over Chinese companies like Huawei and denigrat[ing] American companies like Apple.” The network was sophisticated, gathering 120 million views and 730,000 subscribers across thirty channels.

    Deepfakes, which rely on AI to generate misleading audio and video content, are proliferating on social media in contexts involving both state and nonstate actors. In volatile situations, GAI can be leveraged to distort the information environment, promote narratives damaging to the United States, and potentially stoke violent action against US diplomats.

    “The affordability and accessibility of generative AI is lowering the barrier of entry for disinformation campaigns,” Allie Funk, co-author of Freedom on the Net 2023: The Repressive Power of Artificial Intelligence, told MIT Technology Review. Funk, research director for technology and democracy at Freedom House, noted that the spread of AI-generated content is “going to allow for political actors to cast doubt about reliable information.”

    GAI, therefore, increases challenges for diplomats operating in the information space, especially as autocratic rivals seek to leverage this technology to cast the United States in a bad light.
  • The limits of GAI. Diplomats should be aware of GAI’s limits and potential negative externalities. Diplomacy, especially when engaged in at a personal level, demands a high degree of skill and psychological competence. This includes understanding the idiosyncrasies that shape human-to-human engagements and gauging the cognitive biases that may influence an interlocutor. Therefore, while GAI can augment the capacity for communication, a human touch remains essential, especially for personal diplomacy.

    There also is a risk of GAI degrading institutional expertise. Quickly generating summaries of news and conversations is a value-added capacity of GAI; however, the ease with which diplomats can scroll through an AI-generated distillation may discourage them from following up with more in-depth primary or secondary sources developed by humans. AI can often be helpful for generating a top-line summary, but it should not be viewed as a substitute for cultivating a 360-degree perspective on an issue.

    In addition, while GAI use for translation can facilitate communication, an overreliance on AI-driven tools may result in a loss of nuance or bad translations. A report by The Guardian in September 2023 observed, “AI-powered translation tools are particularly unreliable for languages that are considerably different from English or are less comprehensively documented.”

    In one example cited by that report, an individual identified as Carlos arrived at a US Immigration and Customs Enforcement (ICE) detention center speaking only Portuguese, which no one on staff spoke. ICE staff used an AI-powered voice-translation tool to communicate, “but the system didn’t pick up or understand his regional accent or dialect. So Carlos spent six months in ICE detention unable to meaningfully communicate with anyone.” In addition, some Afghan refugees have had substantial difficulties with AI translation tools, as Dari, an official language of Afghanistan, is not an option on many of these tools.

    The lesson: while GAI is a powerful tool for facilitating communication, users of the technology should be wary of its limits, especially in life-transforming situations—or high-stakes diplomatic conversations. The State Department will need to use AI alongside its staff of technical experts and translators to ensure that it is used properly.
  • Data and distrust. As noted earlier in this paper, AI is built on a triad of algorithms, computing power, and data. A technical discussion of each of these elements is beyond the scope and purpose of this paper, but a discussion of ethical concerns surrounding the third element is warranted.

    Data is a critical component of GAI, shaping the outputs generated by these systems. In a diplomatic context, data is vital for officials seeking to leverage AI to understand more clearly how residents of a country think, perceive, and make sense of the world around them.

    Nevertheless, data also presents potential pitfalls, absent ethical guardrails. First, illegal or unethical data collection, real or perceived, can foster distrust and backlash. This could include, for example, gathering private information without consent.

    Second, transparency with regard to datasets is important so that diplomats can have confidence in the outputs generated by AI, as well as a healthy awareness of where GAI might be imperfect. The inclusion or exclusion of certain datasets shapes the outputs an AI system generates, biasing results in directions that diplomats will need to recognize and, where necessary, correct. For example, if one created a GAI system intended to generate a top-line summary of US attitudes toward race in the early nineteenth century, and it pulled only from abolitionist newspapers, the output would be misleading. In addition, the State Department should seek tools that have resolved the “black box” problem, in which it is difficult or impossible to determine how a GAI model reached a particular result. Doing so will allow for a better understanding of why errors occurred and how the model can be improved to avoid them in the future. Transparency regarding both the data a model draws on and how it reaches its results will build trust, and appropriate caution, in the use of GAI in diplomacy. (A short, illustrative dataset-provenance check appears after this list.)
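
To illustrate the dataset-transparency point above, here is a deliberately simple provenance check in plain Python. The source labels and corpus are invented to mirror the abolitionist-newspaper example: the function tallies each source’s share of a corpus and warns when one source dominates, the kind of check that could accompany a datasheet for any GAI tool the department adopts.

```python
# Toy dataset-provenance check: tally documents by source and flag skew that
# could bias a model's outputs. Source labels and the threshold are illustrative.
from collections import Counter

CORPUS = [
    {"source": "abolitionist_press", "text": "..."},
    {"source": "abolitionist_press", "text": "..."},
    {"source": "abolitionist_press", "text": "..."},
    {"source": "southern_press", "text": "..."},
]

def audit_sources(docs: list[dict], max_share: float = 0.5) -> dict[str, float]:
    """Return each source's share of the corpus; warn if any source dominates."""
    counts = Counter(doc["source"] for doc in docs)
    total = sum(counts.values())
    shares = {src: n / total for src, n in counts.items()}
    for src, share in shares.items():
        if share > max_share:
            print(f"WARNING: {src} supplies {share:.0%} of the corpus; "
                  "outputs may lean toward its perspective.")
    return shares

if __name__ == "__main__":
    print(audit_sources(CORPUS))
```

A real audit would record far more (time period, language, collection method, consent), but even this level of transparency tells a diplomat which perspectives a summary is likely to overweight.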

Recommendations for US diplomacy

Having explored challenges and opportunities related to GAI and diplomacy, the final section of this paper will offer recommendations for the United States to mitigate the challenges, take advantage of the opportunities, and position itself as a global leader in the integration of GAI and diplomacy. In doing so, the United States will strengthen a key instrument of power, better enabling it to outcompete autocratic rivals.

Toward this end, this issue brief proposes the following recommendations for the US government:

  • Look to GAI as a critical asset in engaging the information space. US diplomats have already started to leverage tools that enable them to attain better situational awareness of changing information environments, as noted earlier.

    Moreover, elements of the State Department that directly engage publics and counter disinformation targeting the United States should see GAI as a critical tool for allocating resources. It is simply not possible to respond to every piece of disinformation proffered by a hostile state actor; the key is to respond quickly and effectively where the risk of damage is highest. GAI can help in this regard. (A toy prioritization sketch appears after this list of recommendations.)
Image: European Commission Executive Vice President Margrethe Vestager delivers remarks as US Secretary of State Antony Blinken hosts the fifth US-EU Trade and Technology Council Ministerial Meeting at the State Department in Washington, DC, January 30, 2024. REUTERS/Leah Millis
  • Work with allies and partners to shape norms around GAI and cooperate to dominate the commanding heights of this technology. This is another pillar of action found in the initial paper in this series. Without recapitulating its arguments, the salient point is that efforts to integrate GAI and diplomacy will be bolstered if the United States works in concert with allies and partners.

    One component of this is working with allies and partners to lead in the development of GAI and innovative applications of GAI for the diplomatic domain. The US-EU Trade and Technology Council is still young and holds promise for overcoming transatlantic differences over technology; meanwhile, countries participating in the Quadrilateral Security Dialogue, known as the Quad, have publicly expressed an interest in working together more closely on emerging technologies. The United States should deepen those and other efforts to work collaboratively with allies and partners to maintain technological leadership in the twenty-first century.

    Another piece of this is shaping norms around emerging technologies, such as AI, and advancing a democracy-affirming approach to these technologies—in contrast to China’s efforts to export autocracy and oppression. We reiterate the call of the first paper for the United States to “export digital information infrastructure that promotes freedom, privacy, and the rule of law.” China has a leg up on the United States in Africa and other parts of the world when it comes to funding information and communication technologies. Now is the time for the United States and its allies and partners to leverage their technical know-how and wealth to become a viable alternative for countries developing their digital infrastructure and integrating AI. This includes continuing to invest in infrastructure partnerships like the Build Back Better World Partnership and the Partnership for Global Infrastructure and Investment, which are transparent and values-driven.
  • Leverage GAI as a tool of soft power. The United States generally ranks highly in assessments of soft power, but there is room for improvement. Taking cultural diplomacy as an example, diplomats ought to take advantage of US technical leadership in AI to bolster the country’s attractiveness to other nations. A concentrated cultural diplomacy campaign focused on helping countries preserve their heritage is a clear way for the United States to increase its soft power.
  • Establish standards for transparency and ethical guardrails. US officials should seek to mitigate potential distrust surrounding their use of GAI. Toward this end, the State Department should develop clear guidelines for how it will leverage GAI in a manner that respects privacy, regularly examines the integrity of datasets, allows for an understanding of errors, and ensures diplomats have sufficient technical know-how to engage GAI critically, understanding bias and other shortfalls.
  • Reorient the department’s workforce to integrate AI. This effort should be pushed at the level of the secretary, with bureau chiefs also accountable for implementing changes in their respective bureaus. This shift will require undertaking careful studies and applying test cases to see how AI augments the work of diplomats—and where it falls short or is limited.

    Congress has a role to play in its capacity as overseer of the government’s purse strings. It should fund initiatives that enable the modernization of the department, while discouraging wasteful spending on legacy equipment and efforts that are reflective of the diplomacy of yesteryear.

    The State Department must take the challenge seriously and work to demonstrate to Congress that it is developing both a diplomatic workforce that can utilize and innovate with AI and a department built to support leadership and staff in pursuit of national priorities. AI needs to be integrated into training, and requisite expertise should be sought through hiring, whether of new employees or through rotational programs with the private sector. The latter approach has the added benefit of fostering public-private cooperation and infusing fresh ways of thinking into the government bureaucracy.

    As noted in the first paper in this series, the State Department is making some progress in this regard. We reiterate calls in the first paper in this series for the State Department to “consider tech knowledge as part of its assessment of Foreign Service Officers and other hires throughout the department,” as well as to “incentivize tech training by making professional development in this space a consideration for promotions.”

    Apart from working with the commercial sector, this space is ripe for collaboration with universities and think tanks. This could take the form of executive or management training initiatives that seek to instill both technical know-how and broader policy awareness of AI, other emerging technologies, and their intersection with diplomacy. These efforts should be undertaken with congressional support and funding and could serve as a model for collaboration in other departments and federal agencies.

    Notably, in October 2023, the State Department released its Enterprise Artificial Intelligence Strategy, which both articulates a strategy for the entire department and explains briefly how the Enterprise Data & AI Council, the AI Steering Committee, and others will work in partnership to execute and modify the strategy moving forward. This includes following the model of the Enterprise Data Strategy’s data campaign approach, which has campaign delivery teams working with staff on one mission and one management priority for a six-month period. These campaigns have served as a “force multiplier” for efforts to innovate and develop the data tools the department needs. This model, with its short timelines and whole-of-department nature, can help create an AI mindset and allow staff to determine how to meld their expertise and diplomatic abilities with the computing and generative power of AI. To truly address the importance of AI in an era of strategic competition, State Department officials at the highest levels, from the secretary on down, will need to use their influence and substantial resources to propel the effort and ensure it remains top-of-mind for the entirety of the department.
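
As a final illustration, and as referenced in the first recommendation above, the sketch below shows triage logic for counter-messaging in its simplest possible form: score monitored items by estimated reach and by a crude keyword match against known hostile narratives, and surface only those above a response threshold. The keywords, reach figures, and threshold are all invented for illustration; a production system would rely on proper narrative-detection and sentiment models rather than keyword matching.

```python
# Toy counter-messaging triage: decide which monitored items merit a response.
# Keywords, reach numbers, and the threshold are illustrative placeholders.
from dataclasses import dataclass

HOSTILE_NARRATIVE_TERMS = {"economic collapse", "doomed", "decline"}

@dataclass
class FeedItem:
    text: str
    estimated_reach: int  # e.g., views or impressions

def priority(item: FeedItem) -> float:
    """Higher score means a stronger case for an active response."""
    matches = sum(term in item.text.lower() for term in HOSTILE_NARRATIVE_TERMS)
    if matches == 0:
        return 0.0  # not a hostile narrative; responding may only give it oxygen
    return matches * item.estimated_reach

def triage(feed: list[FeedItem], threshold: float = 100_000) -> list[FeedItem]:
    """Return only the items worth a diplomat's time, highest priority first."""
    flagged = [item for item in feed if priority(item) >= threshold]
    return sorted(flagged, key=priority, reverse=True)

if __name__ == "__main__":
    feed = [
        FeedItem("Post claiming the United States is doomed to economic collapse", 250_000),
        FeedItem("Local commentary on a routine trade visit", 3_000),
    ]
    for item in triage(feed):
        print(f"{priority(item):>12,.0f}  {item.text}")
```

Even this crude filter captures the logic Brandt describes: decide where responding is worth the effort, and leave low-reach material alone rather than giving it oxygen.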

Conclusion

This issue brief has outlined the opportunities and challenges for AI and diplomacy, with a focus on GAI, and it has sought to build on the initial paper in this series through its analysis and recommended courses of action to integrate AI into US diplomacy.

This paper does not include a complete analysis of AI or GAI, but it highlights the importance of AI for strengthening US diplomacy and offers a way forward for the United States to take advantage of its technical expertise in this domain. Successfully navigating a new era of strategic competition will require diverse instruments of power, including robust diplomacy. AI can help the US State Department become more agile in its exercise of diplomacy, better positioning it to pursue US national interests in an era of strategic competition with China and Russia.

Image: US Secretary of State Antony Blinken delivers remarks at the National Security Commission on Artificial Intelligence (NSCAI) Global Emerging Technology Summit in Washington, DC, July 13, 2021. Jim Watson/Pool via REUTERS