Moderating non-English content: Transparency and local contexts are critical

Platforms face no shortage of challenges when it comes to moderating non-English content. It’s not an easy task to create automated systems that can understand local contexts, delicate nuances, and different dialects. Yet there is no room to fail in this space: The decisions made by content moderators and algorithms have significant impacts, not only in online information spaces but also offline.

Platforms have a responsibility to their users across the globe. They have a responsibility to rein in hate speech, to not suppress users in authoritarian countries advocating for change, and to support users in conflict zones who rely on social media to document war crimes and human-rights abuses.

At a panel discussion Monday during the Digital Forensic Research Lab’s 360/Open Summit in Brussels, Scott Hale, director of research at Meedan, urged tech platforms to build their moderation systems in partnership with local communities. “This is where community organizations just excel to such a greater extent,” he said. “They have their ear on the ground, they know about the long-running issues in these communities, and those sorts of perspectives should absolutely inform the design of policy and the enforcement of policy adjudication.”

Design is political, but that also means it can be changed for the better. At the moment, platforms are designed to make a profit, noted Dragana Kaurin, founder of Localization Lab. Even though social media companies host the spaces where speech norms are set and political landscapes are shaped, their decision making is driven by profit. For civil society organizations, this profit motive is the largest obstacle to overcome when advocating for more resilient content-moderation systems.

The question remains: How can markets be incentivized to better align with the needs of social media users based outside of the West? Often, when platforms do make changes, it is because of pressure from Western media or advertisers. Reflecting on how design often omits the contexts of people outside the West, Kaurin said: “People find their own ways to adapt [to platform design], but we don’t want people to adapt. We want people to see themselves in the technology.”

Marwa Fatafta, MENA policy manager at Access Now, noted that platforms have shown leniency to Ukrainian activists, allowing them to call for violence against invading Russian forces or Russian President Vladimir Putin. Yet when activists in the Middle East and North Africa advocate violence against an invading force or an authoritarian government, their content is removed.

This imbalance was on display during the Gaza war in May 2021, when Palestinian activists who documented Israeli forces storming Al-Aqsa Mosque, the third-holiest site in Islam, had their content removed from Instagram. Iranian activists faced a similar situation in May 2022, when posts using the phrase “death to” were removed en masse, even though in Iran the phrase has long been used symbolically to call for change rather than as a literal call for violence.

Fatafta noted that many solutions already exist; they just need to be implemented. “There is no shortage of coverage, there is no shortage of human-rights experts [and] content moderation experts that can pinpoint what’s exactly problematic in Mexico, in Palestine, in Myanmar,” she said. “It’s about willpower. It’s a political decision first and foremost and of course a financial one.” For Kaurin, the solution starts with transparency in content-moderation systems, both in policy decisions and in their implementation: “Being accountable to users is the most important thing, and this is how we have trust.”


Layla Mashkoor is an associate editor at the Atlantic Council’s Digital Forensic Research Lab.
