Opinions expressed by invited speakers or program participants do not necessarily reflect the opinion of the US government, its affiliates, or the Atlantic Council GeoTech Center.

AI Connect II’s fifth webinar opened with a keynote from Rumman Chowdhury, CEO of Humane Intelligence and US Science Envoy for Artificial Intelligence. Chowdhury summarized government, industry, and public reactions to the AI revolution, focusing on deepfakes and “soft-fakes” (a term she coined for digitally altered content designed to “soften” the image of a political candidate) in elections worldwide. She also outlined efforts to enable AI assurance through structured public feedback, algorithmic bias bounties, and red teaming.

Following Chowdhury’s keynote, a panel discussion featured insights from experts Merve Hickok, president and research director at the Center for AI and Digital Policy (CAIDP); Claire Leibowicz, head of AI and media integrity at Partnership on AI; and Maggie Munts, director of public affairs and impact at Truepic. The conversation, moderated by GeoTech Center associate director and resident fellow Trisha Ray, centered on the processes needed to understand how AI systems make decisions, and on how organizations must define clear responsibilities and technical measures to ensure AI-based systems are open to inquiry.

The panelists emphasized that although questions of responsibility in AI are not new, there is fresh will and vigor in both private and public organizations around accountability and around making AI systems understandable to a wide range of stakeholders. Munts highlighted the role of transparency technologies and AI literacy, noting that trust is built gradually and can be supported by technologies such as cryptographic provenance, which records how a piece of content was created, whether through AI generation or other means. Leibowicz pointed out that trust in the institutions responsible for labeling AI content is crucial and stressed the need for organizations to invest in this area. The panelists debated whether such measures should be mandatory or voluntary.
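
To make the provenance idea concrete, the sketch below (in Python, using the third-party cryptography package) shows the basic mechanism: a signed manifest that binds a claim about a file’s origin to a hash of the file. It is a minimal illustration of the concept, not the C2PA standard that tools like Truepic’s build on; the sign_manifest and verify_manifest helpers and the manifest fields are hypothetical.

```python
# Illustrative sketch of content provenance via digital signatures.
# NOT the C2PA standard or any vendor's implementation; it only shows
# the core idea: binding a claim about how content was made to the
# content itself with a verifiable signature.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sign_manifest(private_key, content: bytes, generator: str) -> tuple[bytes, bytes]:
    """Hash the content, record how it was created, and sign the record."""
    manifest = json.dumps({
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. "camera-capture" or "ai-generated"
    }, sort_keys=True).encode()
    return manifest, private_key.sign(manifest)

def verify_manifest(public_key, content: bytes, manifest: bytes, signature: bytes) -> bool:
    """Check the signature, then check the manifest matches this content."""
    try:
        public_key.verify(signature, manifest)
    except InvalidSignature:
        return False
    claim = json.loads(manifest)
    return claim["content_sha256"] == hashlib.sha256(content).hexdigest()

# Usage: a signer attests to a file's origin; anyone holding the public
# key can later confirm the claim has not been altered or detached.
key = ed25519.Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
manifest, sig = sign_manifest(key, image, "ai-generated")
print(verify_manifest(key.public_key(), image, manifest, sig))  # True
```

Real provenance schemes add certificate chains and embed the manifest in the file itself, but the trust property is the same: the label travels with the content and any tampering invalidates the signature.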

Hickok shared insights on the universal guidelines for AI developed by CAIDP in 2018, which include principles on individual rights and organizational obligations. She also stressed the environmental impact of generative AI and the importance of academic research into the efficacy of trust-building measures. In particular, Hickok spoke about the cognitive effects of AI labeling: she warned that excessive labeling, unless it comes from a direct, reliable source, could breed a general distrust of all information. Leibowicz noted that even accurate, real images are now being questioned, making it easier for mis- and disinformation to spread. The problem is compounded by the cat-and-mouse dynamic of deepfake detection, which underscores the need for methods such as watermarking. The panelists agreed that AI detection is probabilistic rather than guaranteed, and therefore requires a human component as well; they advocated a layered approach to enhance robustness.
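
The short sketch below illustrates what such layered, human-in-the-loop logic could look like: several probabilistic signals are combined, and ambiguous cases are routed to a human reviewer rather than auto-labeled. The detector names, weights, and thresholds are invented for illustration and do not come from the panel.

```python
# Illustrative sketch of a "layered" approach to AI-content detection:
# combine several imperfect signals and escalate uncertain cases to a
# human reviewer rather than treating any single score as ground truth.
# All detectors, weights, and thresholds here are hypothetical.

def layered_verdict(provenance_ok: bool | None,
                    watermark_score: float,
                    classifier_score: float) -> str:
    # Layer 1: cryptographic provenance, when present, is the strongest signal.
    if provenance_ok is True:
        return "verified-origin"
    # Layer 2: probabilistic detectors. Each score in [0, 1] estimates the
    # likelihood the content is AI-generated; neither is a guarantee.
    combined = 0.5 * watermark_score + 0.5 * classifier_score
    if combined >= 0.9:
        return "likely-ai-generated"
    if combined <= 0.1:
        return "likely-authentic"
    # Layer 3: ambiguous cases go to a person instead of an automated label.
    return "needs-human-review"

print(layered_verdict(None, watermark_score=0.7, classifier_score=0.4))
# -> "needs-human-review"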

The panel concluded that a multifaceted approach is necessary for AI systems to be trustworthy and accountable. This includes robust guidelines, transparency, civil society involvement, and continuous education and literacy efforts. The role of government is crucial in setting these frameworks, but representation and participation from all sectors of society are essential to ensure that AI technologies are developed and deployed responsibly.
