FutureSource

August 24, 2017
The single biggest shortcoming of Countering Violent Extremism (CVE) programs is the inability to measure their effectiveness. For this reason, there is much controversy around the concept of CVE and those who practice it, making it difficult for the government to justify funding particular programs. In the past year, based on a now widely held belief that governments cannot counter extremism alone, Silicon Valley's tech giants have attempted to define their own place in the CVE field. Their reactive, tech-driven approach has produced some metrics of success. It has, however, neglected the root causes of violent extremism that CVE-focused organizations aim to address.

The apparent "success" of tech-based CVE programs often creates the perception that the problem is being solved; in reality, these methods are merely "Band-Aids" covering up the failures of community-based CVE programs on the ground. This surface success is cause for concern, and it leads to a key question: What role should a tech company play in countering violent extremism? As that debate plays out in the CVE and tech communities, it is important to remember that the standards for success must be higher than videos watched and ads clicked.

Tech-driven CVE is becoming more prominent, most recently with YouTube's announcement that it will use the Redirect Method. The Redirect Method, created in 2016 by Jigsaw, another Google-owned company, was brought to fruition through a partnership with several CVE organizations and tech companies, such as Quantum Communications and Moonshot CVE. The concept builds on Google's search algorithm in hopes of de-radicalizing individuals who are searching for ISIS material online; it attempts to identify the narratives and keywords that appear both in ISIS propaganda and in online searches for the terrorist group. Once the algorithm detects a search for ISIS material, ads and "related content" that undermine the ISIS message appear alongside the results. When a user searches YouTube for ISIS-related content, the platform redirects the search to videos whose messages run counter to ISIS's. During the eight-week pilot, just over 500,000 minutes of video were watched by close to 321,000 viewers, an average of only about a minute and a half per viewer.

The other social media giants are also playing a part. Facebook has most recently turned to artificial intelligence to fight the ISIS message, using image matching, text-based signals, cluster investigations (examining the online friends of known terrorists), and repeat-offender signaling. Twitter has been removing extremist accounts for the last four years; it recently revealed that it removed 125,000 ISIS accounts in 2016.

But what do all these numbers really mean? Twitter and Facebook are removing extremist pages and flagging ISIS sympathizers, but there is little evidence that these processes actually reduce radicalization. Focusing on ISIS propaganda addresses only a piece of the problem; these methods largely ignore other forms of hate and extremism unrelated to ISIS, such as right-wing and left-wing extremism. For example, the man who shot House Majority Whip Steve Scalise had posted anti-Trump, anti-right, anti-government messages on his Facebook pages, yet that content remained within the company's "acceptable speech" boundary. On Twitter, right-wing extremists have outperformed ISIS in every available metric: they have twenty-two times the mean number of followers, and their content is retweeted twice as often.

The lack of understanding about what CVE programs should aim to do is one of their biggest flaws. There has never been a general consensus among experts or practitioners, and there is no oversight, no rules, and no roadmap for what a CVE program should look like. This makes it increasingly difficult to measure a given program's success, which is the only way a program can hope to receive further (or any) funding.

A major cause of radicalization is the search for identity. When individuals search for an identity, they often become estranged from society, creating a void that allows an extremist narrative to resonate. The tech giants' CVE methods remove person-to-person contact; their solution to radicalization therefore fails to address one of its root causes, withdrawal from society and human interaction. With the Redirect Method, for example, those searching for extremist material are simply forced to look somewhere other than YouTube, a mere inconvenience rather than a deterrent. The tech companies' programs therefore cannot claim to be aimed at de-radicalizing. Rather, they are an electronic gatekeeper, protecting but not effecting change. CVE in its purest form should aim to prevent an individual's loss of identity in the first place, keeping them from reaching the point where they are vulnerable to extremist ideology and actively seek it out.

So, what role should tech companies play in CVE? The answer is: a minor one. The importance of Facebook, Twitter, and Google in countering violent extremism is undeniable, but should be viewed as a small piece of a much larger puzzle. Technology should not be on the front lines of CVE, nor should it be celebrated as a resounding success story. To place importance on tech-driven CVE programs is to fall into an endless pit of reaction-based campaigns, whereas community engagement and societal resilience efforts – proactive, grassroots approaches to a grassroots problem – should be better funded and equipped. In the end, the CVE community, government, and greater society should not settle for programs that provide "surface-success" when, in reality, the root of the problem is going untouched.


Tyler Cote is an intern with the Foresight, Strategy, and Risks Initiative in the Brent Scowcroft Center on International Security. He is currently completing his degree in Homeland Security and Political Science at the University of Massachusetts Lowell. Tyler is also the Co-Founder of a Countering Violent Extremism program, Operation250.
