Event recap | Practical steps forward: Improving global efforts to advance digital content safety
On Thursday, December 3, the Atlantic Council’s GeoTech Center and the World Economic Forum partnered to host a private roundtable under the Chatham House Rule to discuss practical policies for improving digital content safety. The following notes summarize the event’s discussion.
Key topics
- How policymakers can best regulate companies to more effectively reduce harmful content online, considering various goals of safety, innovation, competition, privacy, and free expression;
- How regulatory frameworks requiring increased transparency in and consistency of content curation practices can protect users and improve trust; and
- How new methods of collaboration, governance, and measurement can improve safety of spaces online.
Key industry and expert insights to prevent and counter the spread of harmful content online:
1. Regulation and competition should work hand in hand to improve online safety. Enabling consumers to understand the choices they are making by using specific platforms, including through regulation requiring transparency, would help incentivize positive change.
- From an antitrust perspective, promoting competition in the online world means protecting consumer choice. If consumers cannot make realistic choices among social media platforms, competition cannot exist.
- In order for consumers to make educated choices about which platforms to use, platforms must disseminate information regarding their content curation practices so that users can understand what goes on behind the scenes of the content they see (and don’t see).
- Without transparency and consistency in curation practices, consumers cannot fully trust what they are seeing because they cannot trust the platform. Consumers want to feel safe online, but they also want to see and take advantage of free expression and an unrestricted flow of ideas.
- Regulation, in this case, can have positive or negative impacts on competition. Section 230 has long promoted entry into the market for companies of any size. Abrupt regulatory changes to that policy would likely hamper competition as a result. That said, change of some kind is clearly warranted to mitigate the cycle of harm that currently exists.
- Through enhanced competition, users will be empowered to demand features that they want from a platform, including improved curation tools to prevent the spread of harmful or fake content.
- User-originated movements to break down the dangers of an ad-based revenue model could yield substantial progress. However, in the current market, true competition, if it exists at all, is insufficient to give users meaningful sway over company practices.
- Differences in regulatory frameworks across countries largely reflect the different priorities of distinct cultures when it comes to expression and online behavior. From the standpoint of a company or platform, however, these differences make it difficult to operate in a truly global manner.
- Therefore, convergence and synchronization across countries, for example through multilateral organizations like the OECD, is essential to any efforts to change the digital landscape.
2. Considering best practices for content curation from across the social media space can provide a model for future recommendations to be applied more broadly.
- In the current market, each platform maintains its own system of curation and moderation almost entirely distinct from its competitors.
- Companies such as Reddit have demonstrated the effectiveness of a multi-layered, community-led moderation system, through which user-moderators are empowered to restrict content in their communities to keep their peers safe. Moderators create their own set of rules for specific communities, including guidelines that prohibit harmful content or define what content is permissible. Reddit reports that 99 percent of moderation actions are taken by these volunteer user-moderators. Severe cases, especially those involving criminal behavior or moderator misconduct or neglect, can then be escalated to Reddit-employed administrators, and in some cases to government authorities or law enforcement.
- As for curation, Reddit also operates on a democratic model of up- and down-votes, which organically promotes higher-quality content while filtering out harmful, negative, or low-quality posts (a simplified sketch of this kind of vote-based ranking follows this list). Reddit’s curation and moderation algorithms are also open source and available to researchers and investigators alike.
- However, Reddit’s model would not work on all platforms, especially those that do not rely on community or group structures. Nonetheless, recognizing that such models exist and are successful can give users and regulators alike the leverage to demand more and higher quality moderation from other platforms.
- Already, platforms are beginning to emerge that are specifically designed around principles of people-centered curation and moderation, so it is up to incumbents to adapt.
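For readers unfamiliar with vote-based curation, the sketch below shows, under stated assumptions, how a “hot” ranking of the kind described above can be computed. It is a simplified, hypothetical scoring function modeled on Reddit’s publicly documented ranking approach, not the platform’s production code; the function name, constants, and epoch used here are illustrative assumptions.

```python
from datetime import datetime, timezone
from math import log10

# Illustrative sketch only: a simplified vote-based "hot" score modeled on
# Reddit's publicly documented ranking approach, not its production code.
EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)  # arbitrary reference date

def hot_score(upvotes: int, downvotes: int, posted_at: datetime) -> float:
    """Rank a post by net community votes, with newer posts weighted higher."""
    net = upvotes - downvotes
    # Use the order of magnitude of the net votes, so the first votes on a
    # post matter more than the thousandth.
    order = log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    # The age term keeps the feed fresh: newer posts start with a higher
    # baseline, and heavily downvoted posts sink quickly.
    age_seconds = (posted_at - EPOCH).total_seconds()
    return round(sign * order + age_seconds / 45000, 7)

# Example: a well-received post outranks a heavily downvoted one posted at
# the same time.
now = datetime.now(timezone.utc)
ranked = sorted(
    [("helpful explainer", hot_score(120, 4, now)),
     ("misleading claim", hot_score(8, 60, now))],
    key=lambda item: item[1],
    reverse=True,
)
print(ranked)
```

The key design choice in this style of curation is that votes contribute logarithmically while age contributes linearly, so community judgment decides ordering among contemporaneous posts but cannot keep old content pinned to the top indefinitely.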
3. At the same time, the current information ecosystem is clearly broken, especially in terms of enforcement mechanisms and their capacity to compel real change by platform providers. Regulators and social media companies alike must recognize this failure and act.
- Governments must move away from content-based rules for online activities that attempt to address each type of illegal activity separately. Instead, a flexible framework approach can help protect consumers online by ensuring that platforms clearly divulge how they moderate content, where they apply curation methods, how they communicate moderation activities to users, and how users can appeal decisions quickly and efficiently.
- When reporting on moderation activities, companies must take care to disclose not only the impact but also the source code behind their moderation and curation algorithms to researchers, regulators, and auditors. Using this information, independent auditors can verify these reports and note where companies are discriminating illegally or encouraging, intentionally or otherwise, greater access to hateful, false, or dangerous content.
- Companies must also be required to report on moderation trends so that outsiders, including policymakers and researchers, can understand changes within the information ecosystem (a hypothetical reporting schema is sketched after this list).
- With all this in mind, governments must be careful to establish a regulatory regime that does not cross the line into censorship and idea manipulation. Worldwide, many cases exist where social media regulation has been used to mandate which ideas can and cannot be shared online.
- Countries around the world must come together to discern what content must be restricted, and what lines must be drawn to prevent repression.
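To make the reporting idea above concrete, the sketch below outlines one hypothetical shape such a disclosure could take. The field names, categories, and aggregation method are assumptions for illustration only; they are not drawn from any existing regulation or platform report.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical transparency-report schema for illustration only; field names
# and categories are assumptions, not requirements from any regulator.

@dataclass
class ModerationAction:
    content_id: str
    policy_violated: str       # e.g. "hate speech", "spam"
    detected_by: str           # "automated", "user report", or "human reviewer"
    action_taken: str          # "removed", "down-ranked", "labeled"
    appeal_available: bool
    appeal_outcome: str = ""   # filled in once an appeal is resolved

@dataclass
class TransparencyReport:
    platform: str
    reporting_period: str                          # e.g. "2020-Q4"
    actions: List[ModerationAction] = field(default_factory=list)

    def trend_summary(self) -> Dict[str, int]:
        """Aggregate enforcement counts per policy area so researchers and
        policymakers can track how moderation shifts over time."""
        summary: Dict[str, int] = {}
        for action in self.actions:
            summary[action.policy_violated] = summary.get(action.policy_violated, 0) + 1
        return summary
```

A structured, machine-readable report of this kind is one way the trend data described above could be made consistent enough for outside researchers to compare platforms and periods.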
4. Content moderation has become a professionalized global industry. Incidents worldwide have revealed the danger of these human-based systems breaking down. Without proper steps taken to empower curators and reduce vulnerabilities and blind spots, real-world damage will follow.
- Commercial content moderation involves people working in tandem with computational tools and has emerged as a growing industry in the wake of the explosive expansion of social media. The work is often outsourced to other countries, where employees moderate content for communities across a particular country or region. Previously, companies did not even disclose the existence of these workers. Though transparency has improved somewhat, much remains hidden about the practices and governance specific to each platform’s moderation.
- During the COVID-19 crisis, outside observers noted the dangerous outcomes of removing these humans from the moderation equation. In the Philippines, mandated quarantine forced social media moderation to fall wholly onto computational tools, which resulted in a lag in enforcement and failed to restrict the spread of dangerous content for a time. In other cases, algorithms employed to unilaterally enforce moderation have become overzealous, lacking the human input needed to govern properly.
- Overall, the public lacks a good sense of how and where content moderation takes place, while also remaining blind to the human actors within the system. Increased transparency will benefit all and empower content moderators to take bolder action where necessary.
- In the area of child safety, certain regulations mean that reports of child sexual abuse material (CSAM) and other illegal material that are not responded to within ninety days must be purged, lest the company retain illegal content past the legal limit. Legislators are already considering realistic changes to these regulations that would boost law enforcement’s capacity to respond to reports while also lessening the burden on reporting authorities.
In partnership with the World Economic Forum
The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.