Democracies around the world have a “growing vulnerability surplus” when it comes to protecting their societies against online disinformation and digital electoral interference, Sweden’s ambassador to the United States, Karin Olofsdotter, said on October 2.
Olofsdotter opened the two-day Global Forum on Strategic Communications and Digital Disinformation (StratComDC), hosted in Washington D.C. by the Atlantic Council’s Eurasia Center in partnership with the Embassy of Sweden, Lithuania’s Foreign Affairs Ministry, the United Kingdom’s Foreign and Commonwealth Office, and Twitter. The forum brought together leading experts from government, civil society, and business to discuss how to address online disinformation and organized foreign electoral interference campaigns.
While “deception and the spread of false and misleading information is as old as the human race,” as Olofsdotter said, the effects of organized digital disinformation campaigns have come under far greater scrutiny since Russian efforts to influence the 2016 US presidential election.
As Christopher Krebs, US Department of Homeland Security under secretary for the National Protection and Programs Directorate, explained to the audience, foreign actors are using online disinformation to “[find] lines of divisions and [exacerbate] these divisions” in democracies around the world.
Ben Nimmo, a fellow at the Atlantic Council’s Digital Forensic Research Lab, explained that online disinformation sprouts from three separate types of actors: organized foreign government actors, domestic political activists and individuals, and commercial actors who try to profit from social media sites and online news outlets. These actors form a “triangle of disinformation,” according to Nimmo, that amplifies divisive political rhetoric across online platforms. Often, domestic individuals and commercial actors are unaware of their “unwitting dialogue” with foreign influencers, Nimmo said.
In the case of the 2016 US election, Russian actors took advantage of a “massively fragmented media market” to promote fake news stories and disseminate stolen material, according to Atlantic Council Senior Fellow Laura Galante. “We were vulnerable, and we weren’t ready,” Alina Polyakova of the Brookings Institution added, arguing that the “Kremlin was pushing on an open door.”
The impetus for StratComDC lay not in rehashing the vulnerabilities of the past, however, but in finding new solutions to protect against disinformation campaigns in the future. Krebs highlighted the strides the US government has already taken since the 2016 election. During 2016, when interference was first discovered, government officials “didn’t know who to call,” Krebs said, but now federal and state agencies know “how to talk to each other” and can “share rapidly and share broadly” different pieces of intelligence and information when a cyberattack or disinformation campaign is detected. Dan Eliasson, director general of Sweden’s Civil Contingencies Agency, noted that during elections in his country last month, “we did not see any direct influence” from foreign actors.
Krebs acknowledged that there was still more to do in the United States, especially helping individual states address threats to electoral infrastructure by updating systems and implementing more security checks on election hardware. “No single state… can withstand a direct attack from a nation state with the capabilities of Russia, China, Iran, or North Korea,” Krebs said. He argued that the federal government should provide more funds to take the burden off cash-strapped states, as well as assist political campaigns themselves so they do not have to use sparse donation money to implement often costly security protections.
In addition to protecting hard electoral infrastructure and campaigns, many StratComDC speakers said governments could do more to push back against online disinformation. Nimmo pointed out that intelligence agencies have many more resources to deploy than open-source advocates, making them crucial to determining attribution for online disinformation. Both Sweden’s Eliasson and Arnoldas Pikzirnis, a policy advisor to Lithuania’s prime minister, detailed how their countries are devoting resources to directly improve media literacy among their populaces and launch educational programs for students on online disinformation.
John Herbst, director of the Atlantic Council’s Eurasia Center, suggested governments could require social media companies to adequately “label” clearly fake news and foreign-originated disinformation, while Polyakova added that authorities should pressure social media companies to disclose their algorithms for promoting content.
There was concern, however, that government action could be largely ineffective and even counterproductive. Both Eliasson and Pikzirnis acknowledged that education programs run the risk of being criticized for promoting government bias, and Polyakova argued that these efforts would run into significant scale problems if attempted in the United States.
The specific threat of disinformation from Russia would also need to be addressed as “their behavior hasn’t changed” since 2016, according to Herbst. “Mr. Putin is testing his red lines” to determine how far he can go in weakening Western democracies, Pikzirnis said, and Western governments are simply “not responding in a way that is painful enough for Russia” to stop its behavior.
One reason Russia remains unchecked, Galante argued, is that there is still no understanding of what interference actions “will necessitate a US government response.” The key to traditional military deterrence is clearly outlining to potential adversaries the consequences for certain aggressive actions, something that has yet to be translated to the digital age, Polyakova added. “All we have are tactics,” Polyakova stressed, “we don’t have a strategic view for how to handle Russia.”
Additionally, there was consensus that governments need to be careful not to violate freedom of expression within their countries in the name of countering disinformation. As Herbst noted, even foreign actors, “when they speak in the United States should have first amendment rights.” Polyakova believed this balance could be struck by ensuring that social media companies do not delete content, while at the same time ensuring that they not “prioritize it” either.
Rather than a government-down approach, many of the speakers advocated for a robust role from civil society and individual action. Education and media literacy programs are needed to “[build] critical thinking,” according to Krebs, which is key to “start… building resilience back into the American people.” Nimmo highlighted how easy it can be to train people to be “bot-spotters,” by teaching them the “three A’s” of online bots: activity, anonymity, and amplification. “The more you can teach people that understanding, the more they can do themselves,” Nimmo said.
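Nimmo’s “three A’s” heuristic can be pictured as a simple checklist applied to an account: how much it posts, how little it reveals about itself, and how much of its output merely reshares others’ content. The sketch below is an illustrative operationalization of that idea only; the `Account` fields and the numeric thresholds are assumptions for the example, not criteria published by Nimmo or the Digital Forensic Research Lab.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float    # activity: average posting volume
    has_profile_info: bool  # anonymity: False means no real identity shown
    reshare_ratio: float    # amplification: fraction of posts that reshare others

def three_as_score(acct: Account,
                   activity_threshold: float = 72.0,   # assumed cutoff, posts/day
                   reshare_threshold: float = 0.9) -> int:
    """Count how many of the 'three A's' an account exhibits (0 to 3).

    A higher score suggests the account is worth a closer look;
    it is a teaching heuristic, not proof of automation.
    """
    score = 0
    if acct.posts_per_day > activity_threshold:   # activity
        score += 1
    if not acct.has_profile_info:                 # anonymity
        score += 1
    if acct.reshare_ratio > reshare_threshold:    # amplification
        score += 1
    return score

# Example: a high-volume, anonymous, reshare-heavy account scores 3;
# a low-volume account with a filled-in profile scores 0.
suspect = Account(posts_per_day=200, has_profile_info=False, reshare_ratio=0.95)
regular = Account(posts_per_day=3, has_profile_info=True, reshare_ratio=0.2)
```

The point of keeping the score coarse (0 to 3) mirrors the article’s framing: this is something ordinary users can be taught to eyeball, not a classifier.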
Despite the progress of the last two years in identifying and beginning to push back on disinformation, democracies remain vulnerable. Pere Joan Pons, a Spanish member of parliament, said that throughout Europe “there is a huge concern about what will happen in the next nine months,” as the continent prepares for new European Union elections. He cautioned that the interference his government observed in the Catalan independence vote in 2017 is evidence that democracies around the world remain vulnerable to disinformation.
“The bad news is that this is a really hard problem; there is no silver bullet,” Daniel Fried, a distinguished fellow at the Atlantic Council, said at the end of the day. He added, though, that there is good news: there is now a “political imperative to do something” and finally “some serious work is being done.”
David A. Wemer is assistant director, editorial, at the Atlantic Council. Follow him on Twitter @DavidAWemer.