GET UP TO SPEED
It’s become so regular an occurrence these days that you’d be forgiven if you missed it entirely: On Thursday, the CEOs of Facebook, Twitter, and Google headed to the US Congress (virtually, of course) to testify before the House Energy and Commerce Committee. But this wasn’t just another tech hearing. It was the first since the January 6 riot on Capitol Hill. It was focused on extremism and misinformation. And our team at the Digital Forensic Research Lab, which is all over these issues, tells us that lawmakers came at their questioning in a new way that offers insight into what our digital future could look like. Let’s break that down.
TODAY’S EXPERT REACTION COURTESY OF
Why this hearing was different
- What was most surprising about the hearing, according to Rose, was that it “was far more substantive and less partisan than past hearings that have typically devolved into a fight over whether the approaches of social-media platforms to content moderation go too far or not far enough.”
- In fact, Rose adds, the conversation went well beyond how to moderate content: “Republicans—who often accuse platforms of having a bias against conservative ideas when removing anti-vax or election-related disinformation—instead focused their questioning on issues of child protection online and the business models and financial incentives that could lead to real-world harm for kids. Democrats asked how these same incentives contribute to radicalization and disinformation for adults. This shift from a content-focused approach to a look at platform infrastructure suggests a new opening for bipartisan action.”
- Lawmakers clearly “did their homework this time,” Zarine tells us. “A recurring theme across both Republican and Democratic members’ questioning was the relationship between online and offline harms—an important link in understanding domestic extremism and radicalization.”
- As Jacqueline sees it, the hearing only added to “the momentum created in the past year for stronger federal regulation of the internet” based on an “acknowledgment that self-regulation has not worked. Multiple members of Congress bluntly expressed their distrust and dissatisfaction regarding the statements and solutions offered by the CEOs.”
The future of social-media governance in the US
- Here’s how Rose assesses the state of play in terms of policy solutions: “There are more than two dozen legislative proposals floating around” that range from “completely gutting” Section 230 of the Communications Decency Act, which largely protects online platforms from liability for their users’ content, to “making platforms liable for content they algorithmically promote.”
- Zarine thought “some of the best lines of questioning went beyond discussing specific pieces of content to instead address fundamental issues of platform design and the underlying business models. It is increasingly apparent that the largest social-media platforms are not prepared to moderate content at the scale at which they operate, in part because they are not built to prioritize ethical and civic values over maximizing engagement. But it seemed that much of the discussion was still focused on articulating this problem rather than putting forward substantive solutions.”
- As for what all this talk means for what social media will look like in the future, Jacqueline notes that lawmakers “seemed to coalesce around the need for measures to increase transparency and public reporting on moderation decisions and efficacy.”
The international dimensions of the domestic debate
- Jacqueline notes that several lawmakers called for “some sort of federal privacy standard for personal data” like the European Union’s General Data Protection Regulation. As US policymakers develop new rules regarding personal data and internet law more broadly, she advises, they should make sure those rules are interoperable and reciprocal with those of allies.
- Rose zooms out from the hearing on Capitol Hill. Way out: “Though the United States is home to the world’s largest and most impactful [tech] companies, it is far behind the EU, India, Australia, and China in clarifying how it thinks tech should function in society.”
- But while it may seem super-technical, “the regulatory conversation is foreign policy,” Rose argues. “American absence in the conversation means ceding space in what amounts to one of the most significant expressions of power in the modern era—leaving countries like China to shape the digital world” that we all increasingly inhabit.
Thu, Mar 4, 2021
Transcript: YouTube CEO Susan Wojcicki on online speech, government regulation, and Donald Trump’s suspended account
The YouTube CEO talked about the company's responsibilities in the information ecosystem, disinformation, government regulation of speech, and what's next for Donald Trump's account.
Mon, Feb 1, 2021
New Atlanticist By Frances Burwell
Congress will certainly take on reforming Section 230 of the Communications Decency Act, but it should not just focus on the companies and their responsibilities. Legislators should take a good, hard look in the mirror. They must provide the guidelines that are central to reducing violent extremist content online: rules on acceptable versus forbidden online speech.
Mon, Feb 1, 2021
New Atlanticist By Kenneth Propp
Many Americans’ sunny faith in a robust media “marketplace of ideas” is being tested. The European historical experience that informs “militant democracy” and speech-invasive privacy laws remains largely alien here. But adjustments at the margins, particularly in the areas of process, are possible and desirable.