From its earliest days, the internet was seen as a bastion for free speech and global education. But few foresaw that social media would transform it into “the [Ku Klux] Klan den of the twenty-first century” as well, as Amy Spitalnick, executive director of the nonprofit Integrity First for America, declared Wednesday during a 360/Open Summit panel on fighting extremism moderated by the Atlantic Council’s Digital Forensic Research Lab Director Graham Brookie.
That observation set the tone for a discussion of the radical new reality of online extremism with four leading experts: Spitalnick; Mary McCord, executive director of the Institute for Constitutional Advocacy and Protection at Georgetown University; Nicholas Rasmussen, executive director of the Global Internet Forum to Counter Terrorism; and Rachel Gillum, a visiting national security scholar at Stanford University.
The January 6 attack on the US Capitol underscored the intense threat of online extremism following a year in which physical violence seemed uniquely fed by pixelated propaganda. Below are some of the panel’s key takeaways on how civil society groups, law enforcement, and policymakers are assessing and combating the threat.
Sprawling, shifting targets
- Spitalnick said that the internet poses a challenge for containing extremism because of its global reach: “In the past, you had… Klansmen sitting in the woods in their white hoods—and their violence, their extremism, would be limited to their specific geographic area.” One example of that reach: “The Taliban was anti-vax before anti-vax was popular,” Rasmussen noted, explaining that opposing vaccines was part of the militant group’s messaging campaign against the West.
- That long reach is particularly troublesome when considering the shapeshifting nature of extreme narratives. The past eighteen months saw a “morphing of grievances,” McCord said, as extremists tried to stay relevant. They began with anti-mask protests related to the COVID-19 pandemic, evolved after the latest Black Lives Matter protests into militias rising to fight off “antifa,” and ended with the attack on the Capitol after allegations of a stolen election. “A lot of the groups were the same groups, they just found the [new] issue of the day,” McCord said.
- Not all disinformation campaigns are aimed at causing violence. Spitalnick noted that many focus instead on rewriting history. She said extremists plot to “dismantle” the narrative around the Unite the Right rally in Charlottesville, Virginia, which led to a white supremacist killing a counterprotester. Extremists say they don’t want Charlottesville to become “a stupid cliché, like Selma,” Spitalnick said. She pointed to mainstream Republican criticism of critical race theory as evidence of how these kinds of online influence efforts can gain credence with the wider public. “It has been used as this red herring,” Spitalnick said, “with the ultimate goal of undermining our reality.”
Broadening the solutions
- Facebook, Twitter, and other major social media platforms are obvious vectors for disinformation, and many of them have hired counterterrorism professionals as a result. But other less obvious forums for radicalization are emerging: Consider how insurrectionists used Airbnb to prepare their assault on the US Capitol. The housing platform responded by canceling and blocking Washington-area reservations during the inauguration. “If you’re one of these other companies and wake up one day seeing your platform exploited, you may not be as well-equipped,” Rasmussen said.
- A global approach to tackling disinformation requires thinking beyond Facebook and other Western brand names. WeChat, for example, is much more popular in China, and preferences differ from Latin America to Southeast Asia. “Each of those geographic regions has preferred social media platforms,” Rasmussen said. “What good are we doing if we solve the problem in Silicon Valley and don’t solve the problem [elsewhere]?”
- While in the Department of Justice, where she was principal deputy assistant attorney general for national security during the Obama administration, McCord used to say that government officials needed to use all tools at their disposal to fight extremism. Now she believes even more is necessary: “I’m now more in favor of the ‘all tools of society’ approach,” McCord said. For governments, that includes consulting tech companies and social scientists on how to act. It also means aiming not just to defeat extremism but also to deal with the grievances that led to those ideas taking root, McCord said.
In the balance
- As with any discussion about policing disinformation, significant free-speech concerns remain. In order to help preserve an open internet, any company Rasmussen’s organization works with must make two pledges: to moderate extremist content on its platform and to do so while respecting human rights—including rights to free speech. “That’s an equally important part of our mission,” Rasmussen said.
- Even with that in mind, several of the experts agreed that aggressive content moderation is needed. Simply put: Deplatforming works. “There is real value in limiting the audience to which these extremists can speak,” Spitalnick said. She pointed out that white nationalist Richard Spencer has admitted that being kicked off mainstream internet platforms has made it exceedingly difficult for him to raise money from supporters: “When a leading neo-Nazi is telling you that the platform has an impact on his ability to plan the sorts of racist, violent events like we saw in Charlottesville, that’s meaningful.”
- Gillum urged that future action be rooted in empirical evidence rather than anecdotal accounts, since links between online engagement and radicalization remain largely unsubstantiated by data. There are some signs that extremists, including QAnon conspiracy theorists, have dwindled in number, although the history of past extremist movements suggests that “the true believers who do remain” will become more entrenched and violent, Gillum said. “We need to put more investment into understanding the problem, how it’s manifesting, where it’s manifesting, to really be strategic in how to approach it.”
Nick Fouriezos is an Atlanta-based writer with bylines from every US state and six continents. Follow him on Twitter @nick4iezos.