In the aftermath of the January 6 riot at the US Capitol, crackdowns on certain social-media accounts and apps involved in the violence have further fueled a debate over reform of Section 230 of the Communications Decency Act, which largely protects online platforms from liability for their users’ content, as well as over other policy options. The conversation about the role of private companies in regulating speech has quickly become a transatlantic one—as well as a test for how free and open societies should best approach a free and open internet. So how exactly can Europe’s experiences with regulating online speech and content inform the debate in the United States? And what should solutions look like? We’re offering a series of perspectives. Below, the Europe Center’s Distinguished Fellow Frances Burwell offers her perspective. Read Europe Center Nonresident Senior Fellow Kenneth Propp’s perspective here.
The decisions by Facebook and Twitter to suspend former US President Donald Trump and thousands of other accounts following the riots at the US Capitol have been criticized by some as trampling on free speech and by others as too little too late. But the real question is why two private companies have been the key decision-makers in this situation. Rather than relying on CEOs Mark Zuckerberg and Jack Dorsey, the US government—especially Congress and the courts—should make clear what type of speech is acceptable online and what type of speech is not.
After the events of January 6, Congress will certainly take on reforming Section 230 of the Communications Decency Act—the 1996 law that allows online platforms, including social-media companies, to escape liability for content posted by their users. When Congress does look at the act, it should not just focus on the companies and their responsibilities. Legislators should take a good, hard look in the mirror. They must provide the guidelines that are central to reducing violent extremist content online: rules on acceptable versus forbidden online speech.
For all Americans, free speech is a sacred right. But social media has demonstrated a tendency to spread and magnify the most hate-filled and conspiracy-based speech at breathtaking speed, with serious consequences for the country’s democratic future. Companies have responded by establishing their own user guidelines and policing content as each sees fit. Legally, they are free to do this, since the First Amendment applies only to government restrictions on speech. But many users regard Facebook and Twitter as essential avenues of communication in the digital age that should not be censored. Should we continue to rely on such an ad-hoc system, based on private-sector interests, to restrain especially violent speech? Or is it time to have a serious debate about how the United States as a nation should define and police the most egregious speech online?
As US lawmakers take on this issue, they might usefully draw some lessons from the experience of European governments in regulating content online. The European Union (EU) is without doubt the “regulatory superpower” of the digital world. Germany and other EU member states have imposed significant obligations on online platforms in terms of monitoring and removing certain content. In some cases, platforms must remove content within twenty-four hours of notification, sometimes less, or face significant fines. For several years, the major social-media companies (including Facebook, Twitter, and YouTube) have participated in a voluntary EU “Code of Conduct,” pledging to remove content deemed illegal hate speech after being notified of its existence on their platforms. A 2019 review showed that 90 percent of the notifications were reviewed within twenty-four hours and 71 percent of the material was removed.
This system is about to get even tougher: A proposed EU Digital Services Act will impose significant reporting requirements on companies regarding content removal and, for some platforms, intrusive inspections designed to change how algorithms recommend certain content. In the wake of the Capitol riots, some European politicians urged the United States to adopt similar rules constraining social media.
Such a content-moderation system is only possible, however, if it is based on a clear definition of unlawful speech—and establishing that definition is not a job for corporations, but for elected representatives. Today in the United States, only a few categories of online speech are prohibited, among them terrorist content and child pornography. Other unprotected speech includes incitement of imminent lawless and violent action and threats against the US president or vice president, prohibitions that Trump may have violated during his speech to supporters before they headed to the Capitol. For the most part, decisions about what is not protected as free speech have been made in the court system, and thus each exception applies in very specific and limited circumstances. Incitement to lawless and violent action may be protected, for example, if the action is not imminent.
In contrast, many European governments have long defined certain categories of illegal speech, many of which pre-date the online world. In Germany, for example, it is illegal to deny that the Holocaust happened. As in the United States, terrorist content and child pornography are illegal, although European attitudes toward material considered obscene in the United States vary widely. Central to European regulation is the idea of illegal hate speech, defined in EU law as “the public incitement to violence or hatred directed to groups or individuals on the basis of certain characteristics, including race, color, religion, descent, and national or ethnic origin.” While this rule does not prohibit racist caricatures of specific groups or individuals, it does ban calls for violence or other injury. Prohibitions on such hate speech have been enforced not only online, but also in magazines, on television, and even in nightclub acts.
If Congress seeks to reduce the liability protections of platforms for user-generated content, it will need to be specific about the nature of proscribed content. Unless that content is clearly defined, companies will simply seek to protect themselves by establishing guidelines that allow only the safest, most mundane material. Any restrictions on online speech should be very limited—perhaps adopting a concept similar to Europe’s “public incitement to violence or hatred” or dropping the requirement that the dangerous incitement in question be “imminent.” Beyond the constitutional considerations, authoritarian governments around the world will see anything more than modest limitations as an opportunity to legitimize their own moves to restrict online speech.
While the EU experience offers some useful lessons, even very strict content-moderation rules will not solve the entire problem. The EU’s definition of illegal hate speech does not address the spread of conspiracy theories and fake news, for example, both of which are detrimental to US and European democracies and which can be found not only online but also in traditional media outlets. And the regulation of larger platforms often pushes hate speech to the wilder reaches of the internet and smaller, more ephemeral platforms.
US President Joe Biden has called for a Summit for Democracy during 2021, with disinformation on the agenda. The United States and Europe should use this meeting to compare their approaches to the dangers some online content presents to our democracies and to work with other democracies to find a common way forward. As a first step, Congress and the Biden administration must consider how best to safeguard US democracy from incitements to violence and hate.
Frances G. Burwell is a distinguished fellow at the Atlantic Council and a senior director at McLarty Associates.