The war in Ukraine shows the disinformation landscape has changed. Here’s what platforms should do about it.

360/Open Summit: Contested Realities | Connected Futures

June 6-7, 2022

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) hosts 360/Open Summit: Contested Realities | Connected Futures in Brussels, Belgium.

Event transcript

Uncorrected transcript: Check against delivery

Speakers

David Agranovich
Director, Global Threat Disruption, Meta

Alicia Wanless
Director, Partnership for Countering Influence Operations, Carnegie Endowment for International Peace

Min Hsuan Wu (aka “Ttcat”)
Co-Founder and CEO, Doublethink Lab

Moderator

Lizza Dwoskin
Silicon Valley Correspondent, The Washington Post

LIZZA DWOSKIN: So first I’d like to introduce—right next to me, I’d like to introduce David Agranovich. He is the director of global threat disruption at Meta, the company formerly known as Facebook. So he coordinates the identification and disruption of influence-operation networks across Facebook. And prior to joining Facebook, he served as the director for intelligence at the National Security Council at the White House, where he led the US government’s efforts to address foreign interference.

Next to him, we have Alicia Wanless, who we’re meeting for the first time. Alicia’s the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. She researches how people shape and are shaped by the changing information space. She conducts content and network analyses and has developed original models for identifying and analyzing digital propaganda campaigns.

And then we have Ttcat. You’ll have to tell me how to pronounce your last name. How do I pronounce your last name?

MIN HSUAN WU: My last name is Wu. It’s kind of easier. But my first name is Min Hsuan.

LIZZA DWOSKIN: Perfect. Better than I.

MIN HSUAN WU: All right, just go back to Ttcat. All right.

LIZZA DWOSKIN: Better than I. He’s the—he’s the co-founder and CEO of Doublethink Lab, and they’re really at the forefront of the effort to track Chinese and Chinese-language disinformation. He’s an activist and a campaigner around a number of social movements in Taiwan, including the anti-nuclear, environmental, LGBTQ, and human rights movements, and as I said is at the forefront of tracking disinformation by China.

So I want to jump in and say when Graham asked me to do this panel, he said it came out of a conversation he and I had last year when I had just come back from Israel. I was tracking a disinformation-for-hire company—not going to say the name because it’s related to a forthcoming article in the Washington Post—but this company is essentially one of many that have proliferated around the world that governments or political actors can hire if they want to run a disinformation campaign and outsource it somewhere. And I called Graham because I was thinking about how much the world has changed since myself and other journalists first started reporting on and uncovering Russian interference in the 2016 election and the platforms’ very weak response to it; they just kind of weren’t prepared for it.

So I wanted to spend some time chatting with you guys today, you guys and gal, about how different the world looks today than the way it did, how different the defenses are, how different the attackers are, and how the landscape has changed. And then what are the responses to that changing landscape, both from governments and from platforms and from civil society?

So I want to start with the question we’re all talking about, the most pressing reality, which is the war in Ukraine, and David, ask you: How does the world look different from where you sit at Meta than it did before February 24, before the war?

DAVID AGRANOVICH: Thanks for kicking us off. I think it’s a really topical question, particularly given how much Ukraine has focused on the conversations here at the conference over the last few days.

Maybe just for a little bit of grounding, my team has been working across the company with our threat investigative teams to look for, identify, disrupt, and then build kind of resilience into our systems around influence operations for the last several years. I joined the company back in mid-2018. That effort was already underway after the 2016 elections.

And so some of the things I’ll talk about in terms of what we saw from particularly Russian influence operations around the February 24 invasion of Ukraine are predicated on the trends we’ve observed over the last four or so years of Russian activity.

And I’ll break this up maybe into three main categories: First, kind of what looks different from a preparation perspective, what looks different from a response perspective, and then what looks different from a capabilities-across-society perspective.

On the preparation piece, I think one of the biggest differences here was in the weeks leading up to the 24th of February, you saw a substantial shift in the ways that both platform companies prepared for, you know, Russia crossing the line of control in eastern Ukraine as well as the way that governments and civil society were engaging around the possibility of influence operations, disinformation, surrounding the crisis.

When I was still at the White House, I was working on the global response to the poisonings in Salisbury in the UK of Sergei Skripal and his daughter. And at the time it was really hard for governments to share information about what we thought people were going to push as disinformation narratives, and it was very difficult to kind of get ahead of what at the time felt like a very agile disinformation apparatus surrounding the Russian government.

Ahead of the 24th of February, you saw these somewhat unprecedented strategic disclosures that narrowed the operating space of Russian disinformation operators by the US government, by NATO, by the Ukrainian government and others.

On the platform side, several platform companies spent the weeks in the run-up to the 24th of February preparing for what we expected to see and what we would need to detect, refreshing our investigations into known Russian-linked disinformation operations we had previously detected. And so when the 24th rolled around, there was already this very constrained operating space. I mean, that was the response piece. And there were platforms ready to look for these operations, and civil-society researchers… who were already out there with capacity to look for this stuff.

And so though we saw several influence operations linked to known Russia-linked disinfo networks, they didn’t seem to get much traction, either on the platform or in the broader media ecosystem. That’s not to say that there isn’t a threat there, but rather that the defenders were more prepared.

The last thing I wanted to touch on was the capabilities piece. The strategic disclosures and the preparation work gave us fertile ground to continue our work in constraining this type of influence-operations activity. But now that we are in the post-initial-invasion phase of the operation, the war isn’t over, right. It’s not over on the ground, and it’s not over in the information space.

And so I think what we’ll need to focus on is ensuring that these early victories of essentially constraining the success of some of these operations aren’t lost as kind of global attention continues to shift from issue to issue. And so that’s an area I think, I hope, we’ll have a chance to focus on a bit here.

LIZZA DWOSKIN: Right, because the world, of course, was actively debunking Russian disinformation in the beginning of the war. And there were so many—you know, the whole of the world was responding. And now that the world isn’t paying as much attention, that’s where perhaps these influence operations then can get more traction.

Alicia, what do you think?

ALICIA WANLESS: Well, I’ve been looking at problems like propaganda and disinformation since about 2014. And so the longer story there is that I think the bigger change now, even since 2014 but not necessarily because of Ukraine, is a greater awareness that we have problems in the information space.

When it comes to Ukraine, I think what it’s demonstrated is the lack of a multistakeholder response: that we really didn’t have a strategy, particularly in the West, that could bridge the gap between, say, industry, civil society, and governments. And in that way they were each working in their own field, their own sector. But even within each one, they tend to work in their own area, broken up by topics. So one team over here might be working on disinformation, which might be foreign-originating. Another might be strategic communications. Another would be cybersecurity. And all of these things are part of the information environment.

And even within companies, they work on single-policy enforcements. They’ve got teams that do singular and different things. And those don’t necessarily come together. But then between those stakeholders, the trust between them and the languages that they’re speaking are not usually the same, and they haven’t really collaborated. There was more tension than not before the conflict.

So what we do have here is a unique opportunity, if there is a will, to build a stakeholder response that actually helps create efficiencies in terms of how things are coming in. So, for example, what we see is maybe governments making multiple requests to companies and not coming together. Well, maybe multilateral institutions would be the better bet to do a singular briefing, but also companies providing greater information to stakeholders like civil society and the government as well in advance, to be able to get ahead of a threat.

But the key here is that we have to find standards and systems that make this safe and collaborative and that there is some sort of an outcome with lines in the sand, because ultimately this is the thing we’re missing the most, rules of engagement and a strategy.

LIZZA DWOSKIN: Well, it’s interesting, because I—you know, I saw, actually—and David and I were talking about this before the panel—that the companies were willing—at least the platforms were willing to draw a line in the sand, you know, and take a side, which is different.

But to your point, you know, you have Google, which decides they’re going to ban any content that distorts real-world events. And Facebook has a different policy, and they’re going to, you know, allow people to criticize Putin and potentially Russians. And, you know, there wasn’t a uniform response from the companies, even though, in some ways, there was maybe a more uniform response than we’ve seen in the past.

What do you think, David?

DAVID AGRANOVICH: So—

LIZZA DWOSKIN: You brought that up with me before.

DAVID AGRANOVICH: I do think that there’s coordination between kind of the threat-investigative sides of companies that’s grown out of the 2016 period. And so you saw this around elections, whether it was in the US or in the Philippines or in Brazil or in India.

But in particular, I think one of the challenges around setting these types of content-moderation policies—and I know Emerson and Katie talked about this yesterday—is in these fast-changing periods of potentially global conflicts or ethnic strife, it’s difficult and, I think, perhaps not always the best position to rely on the platform companies to be the leading indicator of where we want those lines to be drawn.

This is actually, I think, a place where civil society, where governments, particularly can lead, because these types of decisions have effects on people’s lives. And having a clear kind of norm-setting across the industry would be really useful.

LIZZA DWOSKIN: But I feel like in this case there was a war. And pretty much the whole world, civil society and the companies, were against it.

DAVID AGRANOVICH: I think that’s right.

LIZZA DWOSKIN: So is this something that—you know, we talked before about how this is unusual for platforms to draw a line in the sand like this politically.

DAVID AGRANOVICH: I think that—I mean, there’s some helpful guiding principles here. And I’d be interested in kind of Alicia’s take in particular of how we take this from just, like, platform policy to, like, strategic.

But the guiding principle is how do you protect the people who are using your platform. And in the context of people in Ukraine, right, that is how do you protect their accounts? How do you give them tools to lock their profiles down so that if the city that they’re in is taken over by an invader, they can quickly hide the information that might get them into trouble?

But it also means how do you protect, for example, dissenting voices in Russia, where talking openly about the war might result in physical-security risks or risks of imprisonment and the like? And so I think that that guiding principle, that I would argue pretty much all platforms should have—how do you protect the people who are using your platform—can help, you know, bridge some of the differences in how the platforms approach these types of problems.

LIZZA DWOSKIN: Mmm hmm. What do you think?

ALICIA WANLESS: I’m not going to comment on that specifically. But I do think that there are also other areas where it makes it painfully apparent that we aren’t really coordinated. Now, stepping aside from Ukraine, bringing Ttcat in—this is something that we talk about quite a bit—in terms of even just the research community.

So you have a very wide and diverse group of people who are working on research related to influence operations. They might be in civil society, nonprofits, think tanks. They might be academics. But all of them are almost entirely working in isolation, building up their own data pipelines that don’t necessarily get reused. And we’re talking about research that’s really engineering- and resource-heavy, and that’s extremely costly. And we haven’t really found a mechanism to come together to be able to share that type of resource, build up datasets that we can use together, and have representative samples. And this is just one example where we lack coordination.

MIN HSUAN WU: Me? All right.

ALICIA WANLESS: Yeah. Just—I’m giving you the floor.

MIN HSUAN WU: Yeah, all right.

LIZZA DWOSKIN: She’s looking at you.

MIN HSUAN WU: So thank you so much. All right. So especially when you’re talking about the data, it’s not only Facebook or Twitter, which grant certain API access to research groups. We are also talking about platforms like Weibo or WeChat or TikTok or Douyin, where it’s even harder, right? And they are constantly changing their rules for people collecting that data.

And in fact, actually, I know that there is a business model in industry. When companies collect that data, they actually exchange those datasets from company to company so they can build up more data for their other clients. And we don’t have that kind of exchange mechanism in our community. So I think that’s also very hard.

Just quoting one very good Ukrainian partner here whom I just met last night: he said he found four different groups of people at this summit collecting the same dataset as them, right?

LIZZA DWOSKIN: Really?

MIN HSUAN WU: So we are all collecting data and spending money and also building up those dashboards, and we definitely need more coordinated effort from our community.

LIZZA DWOSKIN: So that’s a really interesting idea, like creating a central repository of influence operations and evidence of them across the platforms that any researcher can use. Is it feasible?

MIN HSUAN WU: Yeah, but it’s also that we spend a lot of money collecting that data because we think that collecting it will help us investigate what’s happening inside those datasets in the future. But actually, we are collecting more data than we can analyze, because we are also facing a capacity issue for the analysis of this data. So, yeah.

LIZZA DWOSKIN: This is a question actually on that, which is, you know, Russia will be taken to court for war crimes. And what happens to all the—all the content that platforms deleted because they were fighting these influence operations during the war? Can that be retrieved in legal cases?

ALICIA WANLESS: I was just going to say, this here is another massive gap that we have in terms of regulation that governs how we actually deal with our modern information environment and the information within it and who actually gets to dictate that. Most laws—I’m not a lawyer, so I’m going to qualify this—tend to happen at a national level, but we don’t necessarily even have that in place right now, much less some sort of international agreement of what could happen and where. So, again, we have this massive, gaping hole that we weren’t prepared for. And yes, it takes years to build up for that.

My hope is that with something like Ukraine it’s enough of a force-multiplying factor that we come together and we’re aware of this—we’re aware of the wider information environment, the lack of guidelines that we have, the lack of norms, et cetera, and that we suddenly, hopefully, have impetus from governments to take a charge on that and do something.

DAVID AGRANOVICH: Maybe just to plus-one that, I think it’s—to Ttcat’s point, the industry’s responses to particularly the question of how do you archive and enable research are very different platform by platform, in no small part, as Alicia noted, because there hasn’t been a lot of clear guidance from regulators or democratic governments of, like, what people actually want to see and how they want that data shared and with whom they want that data shared.

Similarly, right—so my background is more on the traditional security/cybersecurity space. The law and the norms around information sharing for, like, cybersecurity threat indicators, as folks who work on the cybersecurity front know, are much clearer, right? There are vehicles explicitly designed to enable companies and research organizations to share information about cybersecurity threats. We don’t have that in any clear form whatsoever around issues like influence operations.
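
By way of illustration, and purely as a hypothetical sketch rather than an existing standard or anything the panelists describe, the snippet below shows what a structured, machine-readable record for sharing an influence-operation indicator might look like if the community borrowed the cybersecurity sharing model David references. All field names and values are invented placeholders.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of a shareable influence-operation record, loosely
# modeled on how cyber threat indicators are exchanged between organizations.
# Every field name and value here is an invented placeholder, not a standard.
io_indicator = {
    "record_type": "influence-operation-asset",
    "shared_by": "example-platform",
    "first_observed": datetime(2022, 2, 20, tzinfo=timezone.utc).isoformat(),
    "asset": {
        "kind": "inauthentic-account-cluster",
        "platforms": ["facebook", "vk", "odnoklassniki"],
        "observed_behaviors": [
            "backstopping off-platform websites",
            "coordinated amplification of war-related narratives",
        ],
    },
    "attribution": {
        "confidence": "moderate",  # hedged, never definitive
        "linked_to": "previously reported Russia-linked network",
    },
    "handling": "TLP:AMBER",  # borrowing a cyber information-sharing convention
}

print(json.dumps(io_indicator, indent=2))
```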

ALICIA WANLESS: We don’t even have it for data sharing for research purposes, although I’m really hopeful for that EDMO report.

MIN HSUAN WU: If I may, the bottom line is that the public opinion, or whatever content is pushed on your platform or other platforms, essentially is not data owned by the tech company. It’s data about our own societies, our countries, what’s happening there, what people are talking about there. So it should be publicly available, or at least available to research groups, to understand what is happening to our citizens, what they are talking about, what they are producing, right?

LIZZA DWOSKIN: You know, you’re right. It’s a societal record. But then it actually comes back to the question that, I think, sparked this panel and my conversation with Graham, which is: OK, we’re just starting to have global frameworks and laws for cyber weapons, but disinformation and influence operations are also weapons, and there’s no framework, both in terms of sharing data and in terms of how governments should handle it. It seems like it’s a void, and this is one example of that.

Do you all think that governments, or international bodies, should therefore come in and, for example, mandate that platforms archive influence operations and share them publicly in uniform ways? Do you think that should be mandated by governments?

ALICIA WANLESS: Can I start?

LIZZA DWOSKIN: Yeah.

ALICIA WANLESS: I think we should start with the first step, which would be transparency or operational reporting by online services, to understand what data they even have beyond that, because influence operations and disinformation are only one part of the problem.

I mean, we—I’m sorry, David, I’m picking on you now. We don’t understand how the policies are developed. Well, the people who didn’t work for the companies don’t understand how the policies were developed. We don’t necessarily understand how they’re enforced or what research is happening, and then this research comes out in leaks and it erodes more trust in our information environment, and these things need to be rectified.

So the first step, I would say, is that governments should require operational reporting that shows how companies are working. It would be ideal if a number of countries came together and broadly harmonized that. Maybe a place like the OECD leads on this; that would be extremely helpful and expedite things.

That would inform researchers on what information is available to research and also inform policymakers on how we can do regulation to actually control things and archive stuff like that.

LIZZA DWOSKIN: Then it wouldn’t be as fun for journalists because we depend on leaks.

What about a question on—what about—you know, we haven’t talked yet about the disinformation-for-hire industry, but it’s something, David, that Meta has actually talked a lot about in your reports: It used to be, you know, that governments would pay for this directly. Now they’re increasingly outsourcing it.

Tell me—tell us about that world, and how does that world get regulated? What can prevent this from happening, this gray space?

DAVID AGRANOVICH: So it’s a difficult question, in no small part, I think, because disinformation-for-hire firms and PR agencies are not hugely different by definition, right. It’s more what those companies end up doing.

But that said, right, we—so our teams put out a report last year about the surveillance-for-hire industry, right—your NSO Groups, your Black Cubes of the world—and one of the things that, I think, worried us the most about these surveillance companies was not only are they engaged in these egregious abuses of people’s privacy by hacking their phones, hacking their accounts, hacking their email addresses; they do so for commercial gain for any customer that’s willing to pay, and in doing so they hide the people behind them, right.

Oftentimes, if you look at our surveillance-for-hire report, you’ll notice that in almost all of those cases we weren’t really able to identify the clients. We could tell you exactly what company was providing the services, but the whole business model is hiding who that ultimate client is.

Whereas if you look at our influence operations reporting going back a few years, there’s a ton of this very specific attribution to governments, to intelligence services, including some of the very sophisticated services in Russia.

And so one of the big risks around disinfo for hire is that it creates this whole industry that essentially hides from all of our views, whether you’re an OSINT researcher or an investigator at a platform company, who is actually paying for it, who is driving these operations, and why they are targeting the people they’re targeting.

How do you regulate them? Some of the challenge here is that we’ve taken down a handful of disinfo for hire firms. We’ve banned them from our platform when we find them because their business model violates our policies.

But I can’t think of a single example where the people who ran the operations at the firms or where the firms themselves faced any meaningful business impact for doing so, right. Those people still work in the PR industry. The firms themselves still have very large clients all over the world.

Until there are some actual costs for engaging in this behavior beyond Facebook taking down your accounts and then trying to embarrass you in a public blog post, it’s hard to imagine that a profitable business model isn’t going to continue driving that type of PR and ad agency activity.

ALICIA WANLESS: And the politicians benefit from using it. This happens quite a lot. Maybe not the politicians, but in Taiwan—

MIN HSUAN WU: Yes. So there are lots of different tactics in the Chinese information-operation model. We published a report last year. There are four different ones that people commonly notice: state-funded media, which do a lot of propaganda work with other media outlets; or you see a lot of patriotic accounts or cyber troops trolling people. But those are the easy ones, right?

But the hard one is the one you just mentioned: when they hire people actually in your society, in your country, so that the people who create the content are Taiwanese and the people who promote the content are also Taiwanese. How do your defenses deal with that? Or are they just people who have a different idea or a different political opinion from you within your democratic society?

So increasing the cost of those activities, and also shrinking their business model or profit model, I think is essential to preventing those things. Because at the end of the day, whatever they do it for, for politicians, for businesses, for makeup products, what they do is inject a lot of inauthentic content and opinions and pretend it’s genuine to the audience in your society. So I think there should at least be a social norm that you don’t engage with those PR firms or marketing companies that provide those services at all.

LIZZA DWOSKIN: Yeah. I’ve done some reporting on this in the Philippines, and I really felt like this disinfo-for-hire work is, like, a hot new job for a twenty-something in the Global South. Because you can make money, you can be online, you can become an influencer. Or, if you were already an influencer, you can get paid for political sponsorships. But, yeah, from what you all are saying, it doesn’t sound like there’s really any incentive from any government to actually stop this.

ALICIA WANLESS: I would like to distinguish between the people who work in the bureaucracies and the politicians. Because my experience has been those inside the government would like to do things, and they would like to clean up the information environment and make it more reliable. Politicians don’t have the vested interest, usually.

LIZZA DWOSKIN: I want to open it up for questions in a minute. So would love to see your questions, if anyone wants to come up to the mic, or you can send a question already. Ttcat, while people are teeing up their questions, I did want to go—I did want to go back to Russia and Ukraine, because you’ve done so much research on China’s involvement in that conflict. And I wanted to ask you about how you see China walking a fine line in terms of the disinformation it will echo, and where it diverges.

MIN HSUAN WU: Right. So ever since the war started—well, going back to February 22nd, our team started a special taskforce. Everybody worked overtime, and we published a digest every day looking at how Chinese state media, influencers, and also those nationalist media outlets are pushing narratives against Ukraine. They copy a lot of things from Russia. They translate a lot of things. And they twist whatever Zelensky says into another meaning, and push that to Chinese-speaking citizens.

First of all, I want to say two things here. One is that oftentimes when you hear something like this, it feels, like, very exhausting, right? It’s like something far away. But actually those disinformation or propaganda campaigns in the Chinese language are not only about people in China. It’s also about the Chinese-speaking world, like in Malaysia, in Singapore, in Taiwan, in Australia, Canada, everywhere there’s a diaspora community. Ask your friends what news outlets they are reading in New York, in Vancouver. All over the place they read the news on WeChat and whatever other Chinese-language news is available there. So, first of all, it’s not only about the people within China.

Second, think about what they have been doing on the war until now; it’s been over one hundred days. They are pushing these narratives, dragging the Chinese audience away from Western countries, Western values. They are attacking that… Whatever they do, they are preparing the environment—the information environment. That’s exactly what Russia did in 2014. They started to demonize Ukraine and prepare that propaganda. Of course, some people don’t believe it. Some people don’t believe it. But that’s just right now, at one hundred days, right? How about two years later? How about four years later, when they keep pushing those narratives?

LIZZA DWOSKIN: So you’re talking about preparing for an invasion of—laying the groundwork for an invasion of Taiwan?

MIN HSUAN WU: I don’t want to jump to that conclusion, but I would say they are preparing for whatever it is they want to do, because it’s all pre-justified. They don’t need to explain to their citizens why we don’t want to help Ukraine anymore, right, or why we want to help Russia today. Yeah, because there’s already a lot of narrative and justification out there from that disinformation.

LIZZA DWOSKIN: And then you—yeah.

ALICIA WANLESS: They see the information environment as a system and have for a long time. They’re not quibbling over definitions like we are and debating this. They have a center of gravity to understand it and they have a strategy. We don’t.

LIZZA DWOSKIN: But, you know, Ttcat, when we were talking earlier, I thought it was really interesting how you said, you know, there are so many limits to where Chinese disinformation will go in support of Russia. How you said that they will not mimic the narrative around independence in the Donbas region.

MIN HSUAN WU: Yeah, right. There’s an ecosystem, right? So there’s an ecosystem where—if you want to make a profit, I can recommend this new gig for you guys, because we have lots of White people here—you make a video or a TikTok video that promotes how great China is. Then you will become an influencer. That’s how it works. So this nationalism has created a huge nationalist interest, and it has become a new business model. The Chinese government doesn’t have to pay you as an influencer. You just follow their narrative, follow their state media, whatever they are talking about today. You open the People’s Daily, CGTN, whatever the hot topic is today, you just follow it, and then you gain followers. You gain traffic. You gain profit. That’s how it works. So this whole bottom-up, decentralized network is what we’re dealing with right now in this space.

LIZZA DWOSKIN: And why is it not as profitable to be an anti-government influencer?

MIN HSUAN WU: Oh, yes. That’s a good question. So I think we don’t have that much of it yet, but I do see a lot of people going in that direction right now in Taiwan or in other places, in the diaspora community. They also do that, but they are not as profitable as the pro-China ones, yes. I don’t know why.

LIZZA DWOSKIN: If no one else is itching to jump in, we can go to a question. So we have a question here which says: It feels like the discussion around accountability by social media platforms happens only in reference to Western companies. What leverage does the democratic world have over platforms like VKontakte, Telegram, and WeChat? Great question.

MIN HSUAN WU: Right. That’s a question I also want to ask. I don’t have the answers, yes.

LIZZA DWOSKIN: Does anyone want to take that?

ALICIA WANLESS: Well, yeah, no, it’s—I mean, it’s the same as, like, GDPR in the EU. It will apply wherever that law is placed. So, I mean, the West has the same options, though I wouldn’t advocate for it, that Russia has taken and China has taken in kicking out companies that don’t comply with the way they’ve decided they’re going to regulate their information space. So it’s possible. It’s there. I think the emphasis for a long time has been on the major American ones because they’re there at home and they’ve taken a central role in our own information ecosystem.

DAVID AGRANOVICH: I do think one thing that can help here is, so, one of the things that we’ve been trying to do more and more of in our own analytical reporting is calling out the platforms that we see content spread to, right? I think more and more—and I imagine most of the Sherlocks in the room would agree—these operations are inherently cross-platform. And so one thing we’ve done, particularly in the operations around Ukraine, is call out the fact that we saw, you know, Facebook profiles that were designed to backstop content written on websites that were primarily amplified on VKontakte and Odnoklassniki, for example. So in some ways, hopefully, just raising some of this awareness of how these other platforms play into the global information ecosystem will then inform some of the regulatory conversations.

ALICIA WANLESS: We need to look at things as a system.

LIZZA DWOSKIN: I’m just laughing because you said that before, so. Because you believe it.

ALICIA WANLESS: Yeah, I do. I think that’s the only way we can get out of this. The information environment is like the physical environment. If we don’t start looking at it systemically, we have no way out of this. We will just constantly be reacting, as we are.

LIZZA DWOSKIN: But what is systemic? You know, WeChat, they’re not going to face pressure from their government the way that the American platforms face pressure from their governments to crack down on this stuff. They’re just not.

ALICIA WANLESS: No, but they may not be able to keep operating in the environments they’re in right now. I mean, they can be banned. We see that things can be banned. Russia has banned platforms. China has banned platforms. I mean, I’m not advocating for—

LIZZA DWOSKIN: Or TikTok could be banned in the United States.

ALICIA WANLESS: Exactly. That’s what I’m saying.

LIZZA DWOSKIN: Yeah.

MIN HSUAN WU: I don’t want to pour cold water on you, but what they can do is separate the company and promote a different version, like what TikTok and Douyin do. And actually, WeChat—Weibo also—they have an international version. So whatever you download—there’s a different version; you probably see different stuff, or you face different content-moderation standards.

ALICIA WANLESS: Yeah. TikTok US is, technically, separate, I believe.

MIN HSUAN WU: Yes.

ALICIA WANLESS: But, again, global information ecosystem.

LIZZA DWOSKIN: There was someone who raised their hand over there. Yes. I think—oh, OK.

Q: Hi. So my name is Omri Preiss. I’m the director of Alliance4Europe and also part of the DISARM Foundation. I want to thank you for the really interesting panel and also a great discussion that we had at a session yesterday.

And DISARM stands for Disinformation Analysis and Risk Management. It’s exactly the kind of framework we’re talking about here, a common language on disinformation, basically applying cybersecurity approaches to sharing information. It’s based on MITRE ATT&CK, for those who are familiar, and it’s something that we’ve been working on to bring stakeholders together around how we get this off the ground in a way that really enables information flows in a way that is, you know, transparent to the community and really is able to engage, you know, those in this space.

Now, Alliance4Europe has been working on this kind of cooperation building for the last several years, and what we see is that there is a reason why everyone wants to have their own thing and wants to invest their resources in one specific space or one specific project.

Everyone wants to have their funding, their branding, and the right to do so. Everyone wants to have their own great idea. And so the genuine question, I think, that we face as an organization and as a community and in establishing these common resources is how do we do that in a way that is a win-win for everyone.

How do we enable everyone to have a common interest to use these tools together, to share information together, and not feel like oh, well, I just lost a bit of funding to that guy because they’re going to steal my idea, or, you know, how do I shine through?

How do we really solve this collective-action problem and show everyone, like, you can buy into this forum and feel that you’re going to gain from it for your own advancement as well as advancing the community and the common cause that we have, which is to have, you know, a democracy that is safe in the digital world and being able to really communicate together?

So over to you.
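
For readers unfamiliar with the ATT&CK-style approach Omri describes, the sketch below illustrates the general idea of a shared tactic-and-technique taxonomy: independent teams tag what they observe with stable IDs so their reports can be merged and compared. The taxonomy entries and IDs are invented placeholders, not actual DISARM identifiers.

```python
from collections import Counter

# Illustrative only: a toy technique taxonomy in the spirit of DISARM and
# MITRE ATT&CK. The IDs and labels are placeholders, not real identifiers.
TAXONOMY = {
    "T-0x01": "Create inauthentic accounts",
    "T-0x02": "Outsource content creation to for-hire firms",
    "T-0x03": "Amplify content across platforms",
}

# Observations reported independently by two hypothetical teams, each
# tagging incidents with the shared technique IDs.
team_a_reports = [("incident-17", "T-0x01"), ("incident-17", "T-0x02")]
team_b_reports = [("incident-17", "T-0x02"), ("incident-22", "T-0x03")]

def merge_reports(*report_sets):
    """Merge independent reports into per-technique counts."""
    counts = Counter(tech for reports in report_sets for _, tech in reports)
    return {TAXONOMY[tech]: n for tech, n in counts.items()}

# A common vocabulary is what makes this kind of cross-team merge possible.
print(merge_reports(team_a_reports, team_b_reports))
```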

LIZZA DWOSKIN: I’m going to ask one person to address that for a minute so we can get to some more questions, whoever wants to take it.

MIN HSUAN WU: I can.

ALICIA WANLESS: If you want.

MIN HSUAN WU: Well, we started our work by—we thought we wanted to build a cross-platform database so that our analysts could just put in a keyword and gather all the data from Weibo, from all these Chinese junk-news sites. And it turns out, well, we did it in just a few months, and then the platforms changed. Then we keep spending money trying to adapt it, and it’s never done.

So I would suggest that maybe we can develop our competitive strengths in analysis or in other ways. If we have a joint effort and we don’t need to bother with collecting that data, we can spend our money and our time on developing algorithms, training our analysts, or building up our capacity, yeah, because we will never be better than whoever owns the data, right? Yeah.
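
As a rough sketch of the shared collection layer Ttcat is arguing for, one common interface with per-platform adapters that every research group currently rebuilds on its own, here is a minimal, hypothetical skeleton. The class names are invented and the adapters are stubs; real ones would call each platform’s frequently changing APIs or scrapers, which is exactly the maintenance burden worth pooling.

```python
from __future__ import annotations
from abc import ABC, abstractmethod

class PlatformCollector(ABC):
    """Common interface a shared collection layer could standardize on."""
    platform: str = "unknown"

    @abstractmethod
    def collect(self, keyword: str) -> list[dict]:
        """Return posts matching a keyword as simple dict records."""

class WeiboCollector(PlatformCollector):
    platform = "weibo"

    def collect(self, keyword: str) -> list[dict]:
        # Stub: a real implementation would query or scrape Weibo here.
        return []

class JunkNewsSiteCollector(PlatformCollector):
    platform = "junk-news-sites"

    def collect(self, keyword: str) -> list[dict]:
        # Stub: a real implementation would crawl a maintained site list here.
        return []

def cross_platform_search(keyword: str, collectors: list[PlatformCollector]) -> dict:
    """Fan a single keyword query out to every registered adapter."""
    return {c.platform: c.collect(keyword) for c in collectors}

if __name__ == "__main__":
    collectors = [WeiboCollector(), JunkNewsSiteCollector()]
    print(cross_platform_search("Ukraine", collectors))
```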

LIZZA DWOSKIN: I see another question on the board, which is: How does the model of surveillance capitalism driving major social media platforms enable the disinformation-for-hire industry, and what challenges does the design of the platforms pose in formulating lasting change? Which, I’m assuming, has to do with the fact that disinformation can be controversial and enraging and get clicks.

Who wants to take that for a minute?

ALICIA WANLESS: That’s a full-on research paper question. To answer in less than three minutes, I think, would be a little bit much.

I mean, I think it’s not just surveillance capitalism. It’s the role of influence in our society that we are just not having a frank conversation about. I mean, this goes beyond influence operations and disinformation to the very fundamental basis of our legitimacy.

I mean, we have influence happening everywhere to sell us things, to get us to vote for somebody, and for some reason in democracies we have not had that moment to come and really discuss how far is too far, at what point do people lose their agency, and to get to that we need to accelerate research around the impact of these things, and we’re not going to do that unless we start to pool resources and have shared engineering infrastructure, something as big as a CERN for the information environment.

LIZZA DWOSKIN: OK. You have had a question for a while.

Q: Hey, yeah. My name’s Justin, Code for Africa. We track a lot of this stuff across twenty-one countries in Africa at the moment, and you’ve hit on a lot of important points that we’ve been trying to hit on with our partners.

Disinformation’s super profitable. It’s a boom industry in places like Kenya. It’s not just disinfo for hire; there’s a whole subset of sub-economies inside there. But we’re seeing the same kind of playbooks being used everywhere from Sudan through to Ethiopia, Burkina Faso, Mali, kind of you name it, regardless of language or audience. It’s cross-platform. Wherever possible it tries to use vernacular languages to avoid algorithmic detection. It’s franchise-driven—specifically in the cases that we monitor, Russian protagonists franchising out to local implementers.

And so I’ve got a quick three-part question. What are we doing to stop the fragmentation that’s happened, where even within the platforms your fact-checking teams and the people who are trying to debunk the misleading information were completely separate from the threat-disruption teams? There’s this firewall between them, and we’re seeing that play out in the rest of the ecosystem now as well. Fact-checkers are not speaking to the guys who are doing, you know, the kind of work that DFRLab or others like ourselves do. So that’s the first question, because the people driving the disinformation don’t see this distinction. They’re leveraging all of that. So that’s the first one.

The second one is that the enablers who are building this wish-fulfillment infrastructure are not just the political kind of PR click-for-hire people. It’s the scams—the scam artists who are building mass audiences, almost like an Amazon delivery service for disinformation operators. What are we doing to take them down, or if not taking them down to map them out? At the moment in Africa, we’re seeing there’s a massive campaign to drive everyone on Facebook and Twitter onto dark social, specifically because enforcement’s getting better.

And then the third question was kind of slightly self-serving. Ttcat mentioned it. It’s local nuance, understanding the local ecosystem. Most of the people doing work in the space are in the North. What are we doing to support kind of in-country, in-region analysts, researchers, and the people who join the dots?

ALICIA WANLESS: I’m not sure those were so much questions as important statements that needed to be heard, because they again reiterate the lack of coordination, the lack of bringing together all of the different bits of knowledge that we have generated, and the lack of an international, interconnected approach to this. I don’t have answers in that amount of time.

LIZZA DWOSKIN: It looks—is anyone else itching to take that?

MIN HSUAN WU: Yeah. I don’t have an answer for the others, but in sum I echo what Justin just said, yes, about the local context. But also in some regions, like the region where I’m from, I feel like we need more digital Sherlocks. We need more capacity building: training more people who understand their local context, local language, and local political context, and who can also do that analysis work.

Frankly, lots of people have asked me: Do you know what information operations China is doing in Thailand or in the Middle East? How am I supposed to know, right? We don’t live there, right? And as long as we don’t have the chance to send people who are actually there, whatever tools or whatever knowledge we have and bring to this community, we will never find out what they do there.

So that’s my kind of response, or, yeah.

LIZZA DWOSKIN: Did you want to?

DAVID AGRANOVICH: Maybe just—knowing that we’re almost out of time.

So I did want to echo, I think, Alicia and Ttcat, right? A lot of those points are really important, particularly the scams piece, the fact that I think we’ve seen this growth in these kind of scam and spam actors trying to get into this business.

But the most important takeaway of those three points is the importance of enabling communities like Sherlocks all over the world, in particular people who have that ability to dive really deep in local context, understand not just what’s happening on the internet in a particular country but what’s happening on the ground.

And I know one of the priorities of the folks on my team is not just building some of the tools that I know some of the folks here are familiar with to archive and share information about influence operations; it was also working directly with some of these teams. So hopefully we’ll have a chance, for those of you I haven’t met, to talk after this panel because it’s something I think we really do want to do more of.

LIZZA DWOSKIN: Well, I just want to thank all of you because I learned so much from the panel. I was thinking very quickly about the theme that we—oh yeah, I want to remind everyone that you can get this content and other relevant event information, the agenda, on the DFRLab website and also their social media accounts, so go check that out.

Yeah, I learned a ton. Going back to the beginning, I asked how the world is different from six years ago, when it was the IRA infiltrating American social media companies in the US election; now it’s like a million small IRAs with all sorts of different motives, paid by different actors. And it’s really fascinating to hear the collective knowledge in this room, actually, about how to tackle this problem; it helps my coverage a lot. So thank you so much.
