Why Talk About Disinformation Now?

Disinformation has become a hot topic since Russia’s interference in the 2016 US presidential election. As the US intelligence community concluded in its January 2017 assessment: “Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election.” This influence operation aimed to undermine faith in democracy and the credibility of Western institutions.

Cyberattacks played a key but relatively small part in this operation. Kremlin-backed hackers used targeted phishing e-mails to steal troves of documents and communications from Democratic Party operatives, but the bigger and more sophisticated part came later. Rather than being used for intelligence gathering, a normal and expected practice in the world of spycraft, the stolen data instead appeared on WikiLeaks and other sites beginning in July 2016. At that point, an intelligence-gathering operation became an influence operation.

The Kremlin’s well-resourced media networks, including RT and Sputnik, and its social bots and troll armies quickly spun narratives about Democratic presidential nominee Hillary Clinton’s campaign. Western media, drawn in by trending topics and dramatic headlines, followed suit. And “fake news entrepreneurs” took advantage of the scandals, turning an easy profit from ad dollars by writing false stories with alarming headlines. As a result, the e-mail hacks, rather than policies, came to dominate our political discourse and polarize our society. The Kremlin effectively exploited the virtues of free and open societies and the plurality of our media space to undermine our electoral process.

But the threat posed by disinformation goes beyond the spread of false stories. Disinformation is more than “fake news.” It is a strategy, primarily deployed by the Kremlin, amplified by Russian-affiliated social media accounts, organizations, and media outlets, and supported by data theft. It is not random; it has an intent and an aim. And it extends beyond the US election, and beyond elections in general.

Disinformation attacks happen on a daily basis in Ukraine, as the journalists and researchers at StopFake.org know all too well. In Germany, the now well-known “Lisa case” demonstrated how quickly a false report (in this case, that a Russian-German girl had been sexually assaulted by Arab immigrants), amplified by Russian media and high-level Russian officials, can prompt thousands of people to take to the streets.

German Chancellor Angela Merkel, who has taken a strong stance against the Kremlin and is up for reelection this fall, is under constant attack. Germany’s spy chief has repeatedly warned of Russian cyberattacks similar to those used in the US election. In Sweden, which is not a NATO member state, Russia has carried out a two-year anti-NATO disinformation campaign that included the spread of false documents by the usual suspects: Russian-backed trolls, political bots, and media outlets. And in France, members of the far-right National Front, which is aligned with Russia, along with bots and trolls, helped propel e-mails allegedly stolen from then presidential candidate Emmanuel Macron’s campaign to the top of Twitter’s worldwide trending topics. There are many more examples from the frontline states of the Kremlin’s political warfare: the Baltics, the Balkans, and Central and Eastern Europe.

A better way to think about disinformation in the digital domain, where this strategy has found its home and breeding ground, is as “computational propaganda.” The Oxford Internet Institute defines computational propaganda as “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.” In a series of recent case studies, the Oxford researchers warn that “computational propaganda is one of the most powerful new tools against democracy.” Originally developed by state actors with the resources to maintain whole troll factories and media networks, computational propaganda is now used by state and non-state actors alike to intentionally manipulate the online information environment. And as artificial intelligence (AI), machine learning, and automation technologies have advanced, they have also become much cheaper. As our interactions and information consumption shift to the digital space, our societies become increasingly vulnerable to manipulation.
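To make the mechanics concrete, here is a minimal sketch, in Python, of how automated amplification can skew a volume-based trending algorithm. Everything in it is a hypothetical assumption for illustration; the account counts, posting rates, and hashtags are invented, not drawn from the Oxford studies or any real platform:

```python
# Toy simulation of automated amplification ("computational propaganda").
# Illustrative only: all account counts, post rates, and topics are hypothetical.
import random
from collections import Counter

random.seed(42)

ORGANIC_USERS = 1000   # genuine accounts, one post each on a topic they care about
BOT_ACCOUNTS = 30      # scripted accounts pushing a single narrative
POSTS_PER_BOT = 50     # automation lets each bot post far more than a person would

organic_topics = ["#election", "#economy", "#healthcare", "#weather", "#sports"]

posts = []

# Organic activity: genuine interest is spread across many topics.
for _ in range(ORGANIC_USERS):
    posts.append(random.choice(organic_topics))

# Automated activity: a small, coordinated network repeats one message.
for _ in range(BOT_ACCOUNTS):
    posts.extend(["#fakescandal"] * POSTS_PER_BOT)

# A naive "trending" algorithm that ranks topics by raw post volume
# will surface the bot-pushed narrative above all organic conversation.
for topic, count in Counter(posts).most_common(3):
    print(f"{topic}: {count} posts")
```

The proportions are the point of the sketch, not the realism: thirty scripted accounts posting fifty times each generate more volume than a thousand genuine users combined, which is why any defense that ranks content by raw volume alone is easy to game.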

It’s no longer just the Russians who are playing in this space, though they certainly lead the pack. Extremist and terrorist groups, other authoritarian states like China, and opportunists are using these tools to influence our politics and manipulate our attitudes. As these technologies evolve, so does the threat posed by malign actors who co-opt and deploy them against democratic societies. And while the time horizon in the digital world is short and technological advancement is fast, our capability to respond is still far too slow.

The threat is real and imminent. Getting ahead of it will require the efforts of governments, civil society, tech firms, and individual citizens. We will need to work together to identify vulnerabilities, test solutions, and ultimately build long-term resilience. This war for our information security, our democracies, and our values will not be won quickly or easily. But the free world is worth fighting for.

Alina Polyakova is director of research for Europe and Eurasia at the Atlantic Council. She tweets at @alinasphere.


Image: Senate Select Committee on Intelligence Chairman Sen. Richard Burr (R-NC) (right) and committee ranking member Sen. Mark Warner (D-VA) conferred during a hearing on Russian interference in the 2016 US presidential elections in Washington on June 21. (Reuters/Joshua Roberts)