The United States can learn important lessons from Estonia, Sweden, and France in crafting policy responses to cyberattacks and social media disinformation campaigns, a panel of experts told a conference on addressing foreign interference, co-sponsored by the Atlantic Council and Carnegie Mellon University on December 9, 2019.
Estonia became a harbinger of increasing Russian political interference over the past decade when it was subjected in 2007 to what is widely regarded as the first major cyberattack against an entire country. Although the attack was technologically unsophisticated, it aimed to disrupt one of the world's most advanced internet-based societies by hitting a wide range of targets in the former Soviet republic, from media websites and online bank accounts to email systems.
The key takeaway from the incident for Estonian officials was that “if you want to deal effectively with cyber challenges then you need to address not just the computer part…but the political part as well, the human being behind the attack,” said Jonatan Vseviov, the Estonian ambassador to the United States. “You need to affect the cost/benefit analysis of that person or group of people who have decided to test your society’s resilience.” He added that although “we have been pretty good at reacting” to threats, “a precondition for success is initiative. The question is how do you move from being reactive to presenting some sort of proactive policy line” to counter interference.
Although Vseviov said that many Western countries initially ignored the implications of the Russian cyberattack, one country that took heed was Estonia's neighbor across the Baltic Sea, Sweden, which traditionally has had tense relations with Russia.
Sweden has embarked on an extensive program to counter “influence operations” by Russia and other foreign powers as part of a return to its Cold War-era “Total Defense” doctrine, said Karin Olofsdotter, Sweden’s ambassador to the United States.
These efforts are coordinated by the Swedish Civil Contingencies Agency, known by its Swedish abbreviation MSB, which was established in 2009 and is regarded as one of the world's most effective organizations in building public awareness about influence operations and responding to them. Sweden is also establishing a psychological defense unit to counter disinformation and maintain public morale in periods of crisis.
One result of Sweden’s successful efforts to raise awareness of influence operations was that “we did not see any major foreign interference” in the last elections in 2018, said Olofsdotter. But Russian media outlets continue to try “to discredit our liberal social way of life,” such as targeting Sweden’s policy on migration to create the impression that “Sweden is imploding. This is a way to try to destabilize” the country, she explained.
France, however, did experience disinformation campaigns and “hack and leak” operations conducted by hackers linked to Russian military intelligence, the GRU, during the 2017 presidential campaign. In some respects, these efforts were “sloppy” and inept: they were largely conducted in English, not French, and were tone-deaf to local social attitudes, for example spreading claims that Emmanuel Macron was a homosexual “when you don’t use the gay card in France,” according to Jean-Baptiste Jeangène Vilmer, a nonresident senior fellow at the Atlantic Council and a senior fellow at the French Defense Ministry.
Vilmer said there were several lessons to be drawn from the French experience. There is a need to build awareness about information manipulation both within the government and among the public. Strong central organizations need to be in place to counter disinformation. This should include a strategy of pushing counter-narratives to blunt the effects of disinformation, such as focusing public attention on the leakers rather than the leaks, which he described as “whodunit” stories.
There is also a need for cooperation among countries to share good practices, not just in Europe, but also in Asia and the Middle East. He said that attention should be focused not just on Russia, but also on China, Iran, and extreme right groups in the United States, with the latter appearing to have played a role in the 2017 disinformation campaign in France.
To achieve these goals, several challenges must be overcome. “Bureaucracy is our daily adversary” since it slows the response to threats even as technology such as deep fakes develops faster and spreads more widely, Vilmer said.
The United States lags behind Europe in addressing the threat of foreign interference, said Alina Polyakova, founding director of the Project on Global Democracy and Emerging Technology at the Brookings Institution.
One reason is a slow regulatory process in governing technology policy. “We are constantly in this catch-up game” when it comes to emerging technology, she said, so that policies are often “dead in the water the second they are implemented” since the technology has moved on.
Although Europe has taken a more aggressive approach to regulating technology, its policies “cannot be replicated in the United States,” Polyakova added. Europe has focused on policing content on the Internet, but this would not work in the United States because “we have a more expansive view of protected speech, such as hate speech,” she explained.
Instead of focusing on content controls, she suggested that more attention should be paid to regulating content distributors and those engaged in micro-targeting vulnerable audiences through privacy and antitrust laws to break the transmission chain of disinformation.
John Burton is a writer with the Atlantic Council and a former foreign correspondent for the Financial Times.