Has progress been made in containing disinformation?

The spread of online disinformation during the 2018 election campaigns in Mexico, Colombia, and Brazil demonstrated to social media companies that they need to “make sure that we are not solving just the problems that we saw in the US in 2016, but that we are really thinking steps ahead,” according to Katie Harbath, public policy director of global elections at Facebook.

The three high-profile elections in Latin America made up “one of our very first big test cases” for new measures meant to limit the spread of false information on Facebook, Harbath said at the Atlantic Council in Washington on March 28. But while Facebook has had some success in limiting harmful activity on its platform, Harbath explained, “we have to have different solutions for all of our different platforms.”

Harbath was joined at the Atlantic Council by WhatsApp Director and Head of Communications Carl Woog for an event looking back at 2018’s elections in Latin America. While Facebook is used throughout the region, WhatsApp is even more widely used there and presented unique problems for counter-disinformation efforts.

“Encryption is a headline feature of what we do,” Woog explained, “which means that we can’t see the messages that people send.” The widespread use of WhatsApp during all three electoral contests last year posed a challenge for moderating what Woog called “a digital version of a private space that is much more like a living room than it is a town square.”

The event was held to mark the launch of a new report by the Atlantic Council’s Adrienne Arsht Latin America Center and Digital Forensic Research Lab outlining the polarization, automation, and disinformation online during the 2018 elections and presenting a vision for fostering digital resilience in future elections.

Harbath and Woog both reported on how their companies are making tangible changes to their platforms to prevent their misuse in the future. “Even though we can’t see the content that people are sharing,” Woog said, WhatsApp “can look at an account to see if it is acting in an abnormal fashion,” such as sending thousands of messages per minute. Once abnormal accounts are identified, they can be banned; Woog said that during the Brazilian election WhatsApp banned almost 2 million accounts per month. Woog also said that WhatsApp has limited the ability to mass-forward messages in order to stop the viral spread of content on a platform primarily designed for one-to-one messages and small groups. With new features, Woog said, “as soon as you get a message from someone who is not in your contacts, you get a warning that pops up.”
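As an illustration only, the kind of metadata-based check Woog describes might look something like the following Python sketch. Everything here is an assumption for illustration purposes: the threshold, the window, and the class and function names are invented, and WhatsApp’s actual detection systems are not public. Note that the check relies solely on message counts and timing, never on message content, consistent with end-to-end encryption.

```python
from collections import deque
import time

# Assumed threshold for "abnormal" behavior; purely illustrative,
# not a figure from WhatsApp.
MESSAGES_PER_MINUTE_LIMIT = 1000


class SendRateMonitor:
    """Toy sliding-window send-rate flagger (hypothetical)."""

    def __init__(self, limit=MESSAGES_PER_MINUTE_LIMIT, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()  # send times within the sliding window

    def record_send(self, now=None):
        """Record one outgoing message; return True if the account
        should be flagged for review. Only metadata (timing) is used;
        message content is never inspected."""
        now = time.time() if now is None else now
        self.timestamps.append(now)
        # Drop sends that have fallen out of the sliding window.
        while self.timestamps and self.timestamps[0] < now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.limit


# Example: 2,000 sends in roughly 20 seconds exceeds the assumed limit.
monitor = SendRateMonitor()
flagged = any(monitor.record_send(now=i * 0.01) for i in range(2000))
print(flagged)  # True
```

In a real system, a flag like this would presumably feed a broader review pipeline rather than trigger an automatic ban.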

Harbath said that Facebook is also targeting clearly automated accounts and is focusing on giving “people additional context about who they are seeing this information from.” The company also made a conscious decision last year to change its feed algorithm to prioritize content from family and friends. The decision has hurt the ability of businesses and other pages to grab attention, Harbath conceded, but she argued it was based on the actual preferences of users and the desire to limit the spread of disinformation by bad actors.

Harbath warned that Facebook’s initial attempts to mark potentially fake information had largely backfired on the platform as “it made people believe it even more” by drawing attention to it.

Graham Brookie, director and managing editor of the Digital Forensic Research Lab, explained that “disinformation is an emotional problem” and that marking disinformation as overtly fake often makes people have “a visceral reaction.” He warned that if platforms or fact-checkers get bogged down in “an extended conversation about who is right and who is wrong, then you have already lost. Those who are trying to spread disinformation have already won.”

Harbath said Facebook now includes relevant links below potentially fake content and focuses on putting the source of the information in the proper context.

Both Harbath and Woog stressed the importance of fact-checkers from civil society and the media in helping social media companies identify and then counter disinformation online, an idea shared by a collection of Latin American disinformation experts who also spoke at the Atlantic Council on March 28. Francisco Brito Cruz, director of the InternetLab in Brazil, pushed back on the instinct to blame social media platforms themselves for the promotion of disinformation. “We need to stop thinking that the problem is technology,” he said, but focus instead on how this technology is interacting with “the social and political environment.”

Tania Montalvo, executive editor of Animal Politico in Mexico and coordinator of the Verificado 2018 election reporting and fact-checking initiative, argued that journalists and media companies can play a huge role in pushing back against disinformation, especially if media outlets “put into the center of our work as journalists, the citizen” rather than making money or trying to affect political outcomes.

Andrew Sollinger, publisher of Foreign Policy magazine, argued, on the other hand, that media companies can only play a limited role in pushing back, especially on platforms such as WhatsApp where “it is very difficult for media organizations—trusted media organizations—to publish premium content on that vehicle.” As media companies double down on subscription services in an effort to stay financially solvent, Sollinger said, “information is becoming a luxury commodity… [and] folks living in favelas don’t have not just the presence of mind but also the wallet” to consume quality news.

Carlos Cortés, co-founder of Linterna Verde in Colombia, agreed, saying that fact-checking initiatives by news outlets and social media companies “need to be assessed critically [to see] if they are really working, if they are really having an impact.”

Both Harbath and Woog want to avoid becoming “arbiters of truth” on their platforms, deciding what type of content to allow or not to allow. Harbath said she sees content moderation as following the rule that a user “has the right to say that the sun rises in the West, but he doesn’t have the right for us to amplify it.”

Woog worried that overreaction from policymakers could threaten the very benefits that an open and free Internet provides, especially if it compromises user privacy. “Once you lose privacy, it is very hard to get it back,” he said.

David A. Wemer is assistant director, editorial, at the Atlantic Council. Follow him on Twitter @DavidAWemer.

Image: From left, Digital Forensic Research Lab Director and Managing Editor Graham Brookie, Facebook Public Policy Director of Global Elections Katie Harbath, and WhatsApp Director and Head of Communications Carl Woog speak at the Atlantic Council on March 28, 2019.