Speech moderation and militant democracy: Should the United States regulate like Europe does?

European Commissioner for Values and Transparency Vera Jourova and European Commissioner for Justice Didier Reynders give a news conference on EU rules on data protection (GDPR) and the EU Strategy on victims' rights, in Brussels, Belgium. Olivier Hoslet/Pool via REUTERS

In the aftermath of the January 6 riot at the US Capitol, crackdowns on certain social-media accounts and apps involved in the violence have further fueled a debate over reform of Section 230 of the Communications Decency Act, which largely protects online platforms from liability for their users’ content, and other policy options. The conversation about the role of private companies in regulating speech has quickly become a transatlantic one—as well as a test for how free and open societies should best approach a free and open internet. So how exactly can Europe’s experiences with regulating online speech and content inform the debate in the United States? And what should solutions look like in the United States? We’re offering a series of perspectives. Below, the Europe Center’s Nonresident Senior Fellow Kenneth Propp offers his perspective. Read Europe Center Distinguished Fellow Frances Burwell’s perspective here.

The insistent requests from the German Foreign Office would land on my desk in the US Embassy in Bonn, Germany, with monotonous regularity: Would the US Postal Service please take steps to block American citizen Gary Lauck from mailing to Germany the neo-Nazi propaganda he published in the United States?

While dissemination of printed materials denying the Holocaust and glorifying the Nazi regime was prohibited in Germany, the materials were protected as free speech under the standards of US constitutional law. Indeed, only a few years earlier, the US Supreme Court had refused to allow the Chicago suburb of Skokie, where many Holocaust survivors lived, to block a march by members of the National Socialist Party of America who intended to wear Nazi-style uniforms. So, as Embassy legal adviser, I had to answer each such German entreaty with a polite explanation of why the US government could not assist.

This carefully scripted diplomatic pas de deux occurred in the 1980s. The German government eventually shifted to other tactics to suppress Lauck’s scurrilous publications. In 1995, Lauck traveled to Denmark, where he was arrested and extradited to Germany to stand trial for distributing neo-Nazi propaganda. Lauck was convicted, served a four-year sentence, and then was deported back to the United States. Today he runs Third Reich Books, an online purveyor of the same material.

Germany now leads the battle against online distribution of hate speech and other forms of illegal content. In 2017, its government proposed the innovative Network Enforcement Act (NetzDG), which obliges large social-network platforms to investigate user complaints and remove “manifestly unlawful” content within twenty-four hours—or risk large fines for non-compliance. The legislation also requires platforms to publish regular reports about their content-moderation practices. The German federal minister of justice at the time, Heiko Maas, alluded to his country’s history during the debate over the legislation, asserting that “freedom of speech has boundaries.” Civil-rights activists sharply criticized the NetzDG proposal for threatening free expression and for placing content-removal decisions in the hands of the large platforms. The German Bundestag nonetheless quickly enacted it—another example of Germany’s commitment to “militant democracy” (wehrhafte Demokratie), where rights are sometimes sacrificed in the interests of the democratic order.

Initial corporate-transparency reports suggest the system established by NetzDG is working, according to a study by the Transatlantic Working Group of the Annenberg Public Policy Center. Google and Twitter each reported scrutinizing hundreds of thousands of pieces of content, removing a small minority, and almost always doing so within the requisite twenty-four-hour period. The largest categories of challenged content involved hate speech and defamation or insult. Most of the takedown decisions are based on the companies’ own internal speech guidelines for users rather than the German speech laws that NetzDG is designed to enforce. This suggests that the NetzDG law encouraged companies to better comply with their own existing internal guidelines.

Other governments across Europe soon became interested in developing their own national versions of NetzDG. In December 2020, the European Commission seized the initiative by proposing to include elements of it in the Digital Services Act (DSA). The DSA would retain, with adjustments, the immunity that internet platforms enjoy under the European Union’s existing e-commerce legislation—its counterpart to Section 230 of the US Communications Decency Act. Platforms still would not be obliged to monitor the information users store on their services, nor to proactively look for illegal content. But they would assume new requirements to act on takedown orders received from judicial or administrative authorities and to document their compliance with such orders.

Germany’s leadership in combating hate speech, now informing EU legislative initiatives, is not the only instance in which it has been a European pioneer in content moderation for political and social ends. Germany also was driven by Nazi-era abuses—in this case of personal privacy—to adopt one of Europe’s earliest data-protection laws, which now finds its EU-wide expression in the General Data Protection Regulation (GDPR). One of the GDPR’s most notable and popular provisions is the individual right to the erasure of collected personal data that is no longer “necessary” in relation to the purposes for which it was collected—the so-called “right to be forgotten.”

In a landmark 2014 case, the Court of Justice of the European Union (CJEU) ordered Google to delink search results for a Spanish lawyer’s name that yielded news articles documenting his failure, sixteen years earlier, to pay his tax debts. The CJEU, sensitive to the “ubiquitous” information available on a search engine, decided that the retained articles were “inadequate, irrelevant or no longer relevant, or excessive” in relation to the original purpose and in light of the time elapsed. Google and other search engines henceforth would have to receive and decide upon erasure requests from individuals across the EU according to this standard. The company’s effort to get governments to assume the burden went nowhere, so the decisions are made internally, according to company guidelines, with the possibility of subsequent appeal to national data-protection authorities and courts.

Google reports that it has received more than a million requests for delinking in the five years since the CJEU judgment and honored slightly more than half of them. The company publicly provides anonymized examples of significant decisions, but no comprehensive public record exists, nor are the specifics of its internal guidelines available. A delisting decision must be applied on every Google domain inside the EU, but not outside, since the GDPR does not impose the right to be forgotten on a global basis.

The most controversial cases have revolved around criminals trying to repair their reputations by removing public evidence of accusations and convictions. Google’s decisions often depend on the severity of the crime and the time that has elapsed, among other factors. National courts reviewing these cases in Europe sometimes take strikingly different positions on the deletion of online records showing convictions for serious crimes, depending upon their attitudes towards social rehabilitation. A Finnish court, for example, ruled that online records of a murder conviction of an autistic man should be expunged from search results.

The reaction of American civil libertarians to the right to be forgotten has been predictably scathing. Prominent First Amendment lawyer Floyd Abrams wrote that “government action limiting the publication of truthful speech, let alone such speech about judicial proceedings, is nothing less than a form of rewriting history.” And, Abrams sharply added, “as is frequently the case with censorship, it is contagious.” Indeed, the right to be forgotten has proven to be a popular EU regulatory export, becoming part of the laws of countries as diverse as Canada, Japan, Russia, and Turkey. It has even attracted interest in the United States, where bills proposing the right to be forgotten have been introduced in several state legislatures. In addition, several major newspapers, including the Boston Globe, have begun to entertain erasure requests for minor crimes as a contribution to redressing past racial inequities.

European laws on internet hate-speech removal and the right to be forgotten regulate objectionable content through a complex combination of legislation, internal corporate standards, and external administrative or judicial review. In coming months, US legislators will examine whether the legal standard for incitement or imminence of violence needs to be revised, as Frances Burwell urges. But the universal character of the internet, the global reach of its Big-Tech purveyors, and the variety in national speech laws mean that the United States needs to look beyond possibly tweaking exceptions to free-speech protections.

Congress especially needs to take a close look at who makes and reviews the case-by-case content-moderation decisions applying the governing legal standard. European leaders were quick to focus on this aspect of the decisions by Twitter, Facebook, Google, and Apple to restrict former US President Donald Trump’s access to social media. German Chancellor Angela Merkel’s spokesman accepted that freedom of expression “can be interfered with, but along the lines of the law and within the framework defined by the lawmakers. Not according to the decision of the management of social media platforms.” French Finance Minister Bruno Le Maire agreed, saying “the regulation of the digital world cannot be done by the digital oligarchy.” European commissioners quickly joined the chorus, with Thierry Breton seeing the corporate decisions as “not only confirmation of the power of these platforms,” but also a display of “deep weaknesses in the way our society is organized in the digital space.”

These European statements, in their zeal to criticize US Big Tech, failed to note that existing and prospective European laws on content moderation leave takedown decisions in private hands to a significant extent. That is too bad, because it is in the interplay of general legislative standards, specific corporate guidelines, transparent decision-making, and independent review that the European model’s potential contribution to the United States lies. Facebook, for example, has taken a valuable first step in this direction with its Oversight Board, which will review the decision to suspend Trump’s account, but that board lacks complete independence from the company.

As the role of online media messages in inspiring the January 6 attack on the US Capitol becomes clearer, many Americans’ sunny faith in a robust media “marketplace of ideas” is being tested. The US Constitution’s free-speech tradition—radical by Western democratic standards—is unlikely to emerge greatly changed, however. The European historical experience that informs “militant democracy” and speech-invasive privacy laws remains largely alien here. But adjustments at the margins, particularly in matters of process, are possible and desirable. Legal traditions may vary across the Atlantic, but democracies are all in this together.

Kenneth Propp is a nonresident senior fellow in the Atlantic Council’s Europe Center and teaches European Union law at Georgetown University Law Center.
