Inside a new effort to define and promote tech transparency

Watch the full event

Event transcript

Speakers

Chloe Colliver
Head of Digital Policy and Strategy, Institute for Strategic Dialogue Global

Shashank Mohan
Senior Project Manager, Centre for Communication Governance

Jason Pielemeier
Deputy Director, Global Network Initiative

Alicia Wanless
Director, Partnership for Countering Influence Operations, Carnegie Endowment for International Peace

Introduction

Rose Jackson
Director, Democracy and Tech Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

Moderator

Issie Lapowsky
Chief Correspondent, Protocol

ROSE JACKSON: … Policymakers and civil society leaders alike increasingly point to transparency as a prerequisite for addressing any number of tech-related challenges, and yet very few can define, much less agree on, what transparency means or should look like, which is exactly what our next session will explore.

We’re hosting this panel as part of a collaborative effort we’ll be launching in partnership with the Global Network Initiative, the Partnership for Countering Influence Operations, Institute for Strategic Dialogue, and the Center for Democracy and Technology, among others.

This Action Coalition, organized under the auspices of the Danish government’s Technology for Democracy Initiative, will launch a year-long effort to bring clarity and progress to this urgent topic.

Joining us to moderate this conversation is Issie Lapowsky. She is the chief correspondent for Protocol, and if you follow tech and democracy issues you have likely seen her in-depth reporting and, like us, eagerly picked up her latest piece on the White House’s Summit for Democracy.

So without further ado, Issie, thank you for joining us. Thank you to all of our panelists. Let’s get started.

ISSIE LAPOWSKY: Thank you, guys, so much for having me, and thank you for the great introduction. I am really excited for this opportunity to discuss the topic that seems to be on everyone’s mind: transparency. Obviously, tech companies, academics, governments, they all say they want it, but we’re here to talk about what exactly it means and what barriers are standing in the way of policies like this being passed.

Our panelists today are Chloe Colliver, head of digital policy and strategy at the Institute for Strategic Dialogue; Shashank Mohan, senior project manager at the Centre for Communication Governance; Jason Pielemeier, deputy director at the Global Network Initiative (GNI); and Alicia Wanless, director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace…

So, Jason, I wanted to start with you. There has been just so much talk about transparency. It is becoming something of a buzzword. When Facebook whistleblower Frances Haugen came out with her disclosures, that was really her big ask to Congress: please regulate these companies so that they have to share more of this information that I’m sharing with you now. And it’s almost like transparency is being discussed as a solution in and of itself, and I wanted to get your thoughts on that.

Is transparency really a means to an end or is it an end itself, where, by having more transparency, it actually solves some of these problems that we are facing with regard to technology and the information ecosystem?

JASON PIELEMEIER: Yeah. Thanks, Issie. That’s a great question, a great way to start this off, and I just want to say thank you also to our friends at the Atlantic Council and the DFRLab for putting this together and to all the co-panelists for joining.

I think transparency is a sort of necessary precondition to regulation, because for regulators to effectively govern this particularly dynamic and fast-moving space they need a deeper understanding of how it works and what the evolving patterns of use are. But transparency can also, I think, be a solution, right.

I think there are ways in which deeper and more meaningful transparency can foster greater collaboration across different stakeholders, including the companies and the governments who are in the best position to make changes that can address some of the challenges that we’re seeing in the digital ecosystem. That means fostering collaboration between those actors and the very rich group of civil-society actors, researchers, digital-rights activists, and others who are seeing the impacts that digital technologies are having over time, a number of whom are on this panel today.

And so I think transparency is both a sort of necessary precondition to regulation but can also help develop co-regulatory and other [kinds] of more creative approaches that don’t require the government itself to try and unilaterally solve all of the challenges that we may be facing.

And I think the last thing I’ll just say, and I’d love to hear from others on this as well, is that transparency is, in many ways, a stand-in for understanding, for comprehension, right, and we need to think about it that way. We’ve tried, in this collaboration that we’re putting together, to use the term meaningful transparency to hint at that. Because I think there’s a real risk that transparency that is overly formalistic, whether it’s done voluntarily or through some sort of regulatory framework, becomes a tick box, and actors who are trying to comply with whatever regulatory requirement is in place can just walk down the list and say, OK, yeah, we publish this, we publish that.

But at the end of the day, what’s really coming through is perhaps not super meaningful or useful to the people who are best positioned to sort of collaborate and to really understand and take collaborative action.

On the other end of the spectrum, if transparency is so detailed or, perhaps, under-described, then you might get a whole bunch of data dumps that people have a hard time sifting through to find what’s most useful to them. Those dumps could also end up containing a lot of personal information or other data that people could misuse.

And then, of course, if we sort of extrapolate this to kind of a global arena, if we have lots of different transparency requirements in different governments and different jurisdictions all over the world, it can be burdensome for companies to comply with and it can end up being a barrier for newer companies or smaller companies that are trying to operate a service globally.

So those are some of the risks that we have to be conscious of and address, and that’s ultimately, I think, why we thought it was so important to put this Action Coalition together: to bring together a pretty diverse group of actors, including the governments that are in the business of regulating, the regulators themselves who will have to actually oversee some of these new and emerging regulatory regimes, as well as the researchers, the civil-society activists, and the companies, to share as much information as we can about what we’re already doing, some of the experimentation that’s already taken place, what we’re learning from that, and how we can develop a path forward to get to the right kind of meaningful transparency as quickly as possible.

ISSIE LAPOWSKY: Right. And I noticed that one of the goals of this Action Coalition is also to come to some consensus around definitions: what is transparency, and what are all of its components? I mean, that can seem like a sort of incremental goal. Why are definitions so important here?

JASON PIELEMEIER: Yeah. So I think Alicia has some really good things to say about this, because they’ve been doing some great work looking into this and interviewing a range of folks. The results of that work, I think, are still to come, but she can, hopefully, give us some initial insights.

But I think it’s for the reasons I was alluding to. I think we need to make sure we’re talking about the same things as we sort of move forward into more concrete steps, whether those are regulatory steps or whether those are, you know, the development of technical [application programming interfaces].

You know, there’s a real risk that we sort of all have slightly different understandings of what we’re trying to do, in which case we kind of end up with a bunch of disparate efforts that don’t sort of create the synergies and connections that, ultimately, I think, we’ll need if we want to kind of address the fundamental challenges that we’re facing.

ISSIE LAPOWSKY: So, Alicia, Jason teed you up. Your organization has done a pretty thorough review of a lot of the policies that are out there. So I was wondering if you could tell me a little bit about where you have seen those policy proposals traditionally break down. Why haven’t any of them advanced to the point of becoming law?

ALICIA WANLESS: I think the simple answer [to] that is that the vast majority of policy proposals that are put out don’t contain a detailed roadmap for implementation. So they may, at a high level, call for more transparency from companies and from governments alike, but without the details of how that would work it becomes a challenge to actually put into practice.

Our interest in transparency reporting really came from a bit of a roundabout angle. One of our aims is to foster evidence-based policy development to counter influence operations, and you can’t develop evidence-based policy if you don’t have the measurements to guide it. What we found in looking at the overarching field was that there were at least three key areas where we have very little understanding.

One is how the information environment works as a system. The second is on the known effects of influence operations, particularly longer-term, on social media and in the Global South. And then the third thing is we really know even less about the impact of interventions currently being used to counter influence operations. Most research focuses on fact-checking and “prebunking,” and we don’t have a great handle on what the [impacts] of platform interventions are.

And so as we dug into that, we found three more gaps. Why don’t we have these measurements? They stem [from] the fact that we don’t have transparency reporting to inform what data would even be available to researchers. The second thing is, even if we had that, we don’t really have data-sharing rules to facilitate safe access to that data.

And then, thirdly, we don’t have a mechanism to facilitate independent research using that data. The measurement work often still requires some collaboration with industry and, as we know from the leaks, that is really hard for researchers to achieve while still maintaining independence and credibility.

ISSIE LAPOWSKY: Right. And one thing that I hear a lot from these companies is, oh, we’re more transparent than ever, and particularly Facebook, we’re more transparent [than] all of our peers. Look at this report and look at that report.

You have written about the difference between data sharing and transparency. Since we’re talking about definitions, can you distinguish between those two things? What do we have now in the tech ecosystem and what do we need?

ALICIA WANLESS: That’s a great question. So we just completed an interview process with fifty-five multi-stakeholder participants to try to get a better understanding of what people are thinking about this problem and also whether there are any solutions. We also went through two hundred related documents—my colleague, Elonnai Hickok, scoured through those—and part of the problem is that it’s pretty disparate.

So if we’re counting transparency reporting as simply a blog post that goes up, it can be really challenging for people to track that over time and really make sense of it. The other thing is there’s not a lot of consistency in what’s put out. So this month it may be reporting around some takedowns, but that may change the next month. And, again, as some of the leaked documents were indicating, this may change based on how the companies might feel about the response from media…

But when we really started to dig into it, one of the things that stood out was that there does seem to be a conflation of the terms transparency reporting and data sharing. Increasingly, as we went through the process, we came to see transparency reporting as the consistent and regular reporting by companies of aggregated stats to inform on aspects of their operations, versus data sharing, which would be more the sharing of raw data with researchers, who can then dig into that data and make sense of it themselves.

And sometimes they cross over, as with ad libraries. But, really, that kind of definitional consistency wasn’t really coming out in the interviews or the documents.

ISSIE LAPOWSKY: Got it. And you have also written about one possible solution here, which is the idea of creating multi-stakeholder research and development centers. Can you sort of talk about what that would look like and why that might be a preferable model?

ALICIA WANLESS: Yeah, and this, again, comes back to the challenges of maintaining research independence, writ large. So one thing that we’ve been toying around with is building a sort of CERN [the European Organization for Nuclear Research] for the information environment. This builds on the CERN model: it’s multinational, in that many governments can fund it, which also creates a layer of protection should any one of the democracies who fund it slip into autocracy.

The second thing is you can also draw on companies and philanthropies. The idea here is that if you build up a network of researchers, particularly around the world and in the Global South, and you start plugging them into a center like this where they can actually access data, work with other researchers, [and] potentially collaborate with industry, if those rules are worked out to protect their independence, we could do a lot more at scale and also arrive at more global solutions. A lot of the things that are happening in countries like the Philippines are going to happen in the United States, if they haven’t already, and we’re not learning those lessons. We’re also not necessarily seeing our own Western responsibility to do something about that and to listen to stakeholders there in doing so.

ISSIE LAPOWSKY: Right.

Shashank, I actually want to bring you in to talk about just that, the flip side of this conversation, which is the lack of transparency about the way governments are increasingly imposing their will on tech companies through takedown orders and through other interventions, and it seems sort of unlikely that a government is going to regulate itself into the spotlight. And so I wonder what more can and should the private sector be doing to bring these actions to light?

SHASHANK MOHAN: Thanks, Issie, for that question, and yes, I’d like to talk about how it’s different when we’re talking about transparency from the government. In India, we’re often seeing this trend where governments are not very keen to share a lot of information on the kind of takedown orders they’re sending to internet platforms, and social media platforms in particular, especially as, in countries like India, the number of people using the internet is rising.

A lot of crucial public [debates], including social questions and political questions, are being debated online quite vociferously, and we’re seeing that the government is using the legal apparatus available to it to monitor a lot of content on social media.

And there is no transparency built into law around these orders, the orders under which, specifically under Indian law, the government asks social media companies for takedowns. As researchers, as lawyers, that’s a trend we’d not like to see, and the responsibility, I think, lies on governments to ensure that they’re transparent about what they’re asking social media companies to take down.

Of course, as you said, the governments themselves are not keen to put a lens on themselves through law when they’re drafting it in the legislature, and [we’ve seen] one example of this in India’s data protection bill, which is soon to be tabled in parliament.

Largely, government agencies and entities have kept themselves away from that regulation through clauses of exemption. There are clauses of transparency and accountability, which are loosely based on the European Union’s [General Data Protection Regulation] GDPR, which are not, per se, bad. It’s a good start for India.

But what we’re seeing is a trend where there’s an intent in a lot of government agencies to keep themselves out of the purview of those clauses. There are a bunch of right-to-information applications being filed by various civil-society players, like our colleagues and peers in the tech-policy sector in India, and we’re seeing that it’s becoming very hard for us to get meaningful information.

Another point I’d like to make is about the kind of relationships between governments and private companies. In India especially, and I’m seeing [this] in news across the Global South as well, governments often engage with private entities and private companies for a lot of public-service delivery. Once the pandemic hit, the Indian government’s push on digital solutions became quite strong, and they, like other countries, wanted to come out with a digital contact-tracing app.

And what was making researchers and rights experts feel a bit uncomfortable was [that] there wasn’t hard information coming from the government on the private entities it engaged with to make the software, the code that was used, the kind of security built in. And there’s a case before the Delhi High Court which is trying to get that information from the government: exactly whom did you work with? It’s a public-welfare solution, so it’s pertinent for the public to know whom the government worked with to build this software.

So we’re kind of seeing this trend in India where [the] government really works in close tandem with private entities, and there’s no meaningful transparency—just borrowing that phrase from Jason—around how this works.

So, I’ll just make a final comment saying, again, building on what Jason and Alicia were talking about, meaningful transparency and sort of that intent, I feel that transparency is not very meaningful unless you have other institutions that are strong.

So, for example, if we do not have a judiciary that is willing to be rights-focused and willing to agree that the law should protect citizens’ rights, transparency might not mean a lot, and the same goes for other regulatory institutions as well. So I’ll close with that statement.

ISSIE LAPOWSKY: Right. Well, that’s a great point, and those seem like just gigantic, intractable problems. And so I wonder, for problems that are sort of institutional, is one solution more self-reporting from these companies? And if so, what do you want to see them self-reporting, and do you think there’s an appetite there?

SHASHANK MOHAN: Yeah. Thanks, Issie, for that question. Yes, and… let’s talk about reporting from social-media companies. Reporting from them typically contains numbers on the takedown orders they’ve received from X country, Y country, A country. I think that’s a good start, and we’ve been seeing that for a number of years.

I think what we’d like to see, and I guess it’s a bit ambitious, but what will really help are two things. One is: what are these orders, right? Can we see some information from the actual orders, even if it’s redacted? And we understand that there might be certain orders that relate to national security and terrorist content, which I understand might be very sensitive.

But I’m still talking about some of the defamation suits or the copyright issues. I’m sure that companies can make, you know, that step forward to actually show these orders. The other thing, which I think is very critical for the health of the internet as a whole, and I don’t have a solution for how this will work because of corporate interests, is that until we see and understand how these algorithms that make certain content viral work, which eventually leads to harm on the internet, I’m not sure how researchers will be able to really play a meaningful role or give meaningful solutions. And I think that was also what Alicia was saying, and I’m sort of building on that.

ISSIE LAPOWSKY: Chloe, I want to bring you in, because one other alternative I can imagine for addressing the problem Shashank is describing with global governments is this: what is the role of, say, the US government, which is home to so many of these global tech platforms, in requiring them to report on their interactions with maybe more authoritarian countries?

CHLOE COLLIVER: Thank you so much, Issie.

Yeah, I think you raise a critical problem here, which is that we need to see democratic governments, or the regional institutions that represent democratic governments, really take a leading role here in both defining what we mean by robust transparency and then enforcing that. Enforcement is going to be an enormous part of the problem, as we’ve seen, as Shashank mentioned, with GDPR-esque privacy law, where the enforcement question is often too big for governments to take on and handle. So we haven’t necessarily seen the benefits of that kind of legislation ring true.

What I think is promising, though, is that we are seeing, not in the United States, in fact, but in Europe, some development in meaningful transparency-regulation drafts. Thinking about meaningful transparency not just as sporadic data about the outcomes of certain decisions but actually about the policies, the processes, and the outcomes of company decisions, we’ve seen some really interesting proposals, for example from the European Union recently in the Digital Services Act. There we’re seeing them take a little bit more of a systemic approach: treating social-media companies or information-service providers like other businesses that are regulated and audited, and thinking about risk assessment, risk mitigation, and auditing as the way to get at some of these transparency challenges we’ve been talking about.

So both the Digital Services Act at the EU level and the Online Safety Bill at the United Kingdom level take this quite systemic approach to saying: we’re not that interested in individual pieces of reporting from you on the number of pieces of content you removed or the number of users you’ve blocked, but we actually would like to know who is making decisions at your companies, why they are making them, how those decisions are enforced, where the balance is between automated decision-making and human decision-making, and why that happens.

So this is thinking a bit more about thresholds, about risk mitigation, about safeguards. That kind of systemic form of transparency might actually get us a little bit further towards a preventative approach to thinking about the harm that might be done by certain product decisions.

So I do think we’re seeing some beneficial moves there in how we think about transparency from a regulatory standpoint that might affect the global market of users on these platforms because they’re coming from quite important market bases for some of these companies.

ISSIE LAPOWSKY: Absolutely. And correct me if I’m wrong, but there is an aspect of the Digital Services Act that would require some sort of data sharing with vetted researchers, right?

CHLOE COLLIVER: Absolutely.

ISSIE LAPOWSKY: How does that law consider all of these privacy concerns that these companies are constantly leaning on as an excuse not to have to share this information?

CHLOE COLLIVER: That’s a great question, and I think it’s only because we’re seeing this requirement for data sharing coming from the EU, where GDPR is already in place and enforced, that we’re really able to have confidence that it will be done safely and securely.

There have been a number of academic papers written about how GDPR does not make it impossible to share data safely with researchers and, in fact, actually enables companies to do so fairly securely.

So the provisions in the DSA that will require data access for researchers, I think, [are] quite promising, though they’re currently not particularly well defined and it’s a bit unclear how the EU Commission is going to define what a vetted research organization is and who will have access to that.

So that’s an important question that’s still to be decided. And I think it’s worth stepping back a little bit here and thinking about the bigger role of some of these governments, and also recognizing that we shouldn’t be asking for all of this user data. We shouldn’t want all of this user data, because it shouldn’t exist in the hands of companies in the first place.

And stepping back and thinking in parallel to the data access conversation, the transparency conversation, we need to be thinking bigger about the system that collects all of this data on users that allows this kind of conversation to happen, and I think those two things need to happen in parallel at the kind of democratic governance level.

ISSIE LAPOWSKY: Right. That’s a really important point. And when you made the point about GDPR it made me think: Well, does that mean that all of the transparency bills that are being floated in the United States are in a much worse position because of the lack of a federal privacy law here?

CHLOE COLLIVER: I mean, it’s possible that without the same safeguards in place, there will be bigger risks in the way that that data is shared. But, really, you know, this is both about the risks of the company sharing the data but also about who has access to that and how they are controlling and processing that data. So [there are] lots of risks to be figured out on that side of things.

At the moment, obviously, all of that responsibility really sits with how the companies are structuring and sharing and opening up access to that data themselves. But I think the two things need to come together and in parallel, and if we can ensure there are appropriate safeguards in place, then there should be a multitude of ways in which companies are able already to share data with researchers fairly safely and securely.

ISSIE LAPOWSKY: Jason, because GNI has lots of big tech companies as members, you might have more sympathy with some of their arguments around the risks of this kind of data sharing or transparency. So can you walk us through what industry’s biggest concerns are when it comes to these sorts of regulations?

JASON PIELEMEIER: Yeah. So just to be clear, GNI is multi-stakeholder in its membership so we have [information and communications technology] companies of all stripes. So we have the platform companies, who are often at the center of these discussions around transparency, but also telecoms and [internet service providers], as well as companies that build some of the physical infrastructure and equipment that the internet runs on, as well as cloud companies. And so it’s a pretty diverse group of companies, and then on top of that we have civil-society organizations, we have academics, and we have investors. So it’s a big tent.

I can’t speak on behalf of companies, but I can say that I think there are a number of legitimate concerns that need to be addressed in order to enable companies to more freely share data and engage more broadly on transparency. Some of them are around data protection and privacy, and Chloe mentioned that; I think the DSA is a step in the right direction in terms of thinking about how to provide some reassurance around how that kind of data sharing can be facilitated without risking people’s privacy.

There are other concerns, though. Some of them are competitive concerns, not wanting to show too much leg when it comes to the specific systems that a company may use to curate content and to, you know, identify user preferences and user interests. Those things, as Chloe was saying, do go to the core of the business model, and as long as the business model is what it is (and there are very legitimate questions and concerns about that), there will be sensitivities among companies about sharing that information in a way that their competitors could, potentially, access.

I think there are other concerns when it comes to transparency about the relationship between companies and governments. Obviously, governments are a very critical stakeholder for these companies and there has been, I think, a lot of good work over the years to move companies to provide more transparency as to not only the number of requests that they get from particular governments but the way they respond to those requests, the kinds of systems they have developed to process those requests.

This is very much where GNI sits at this intersection between companies and governments. And it’s just interesting to note that for almost a dozen years now, GNI has actually facilitated a process among our members, that multi-stakeholder body of now eighty-four different organizations, to create a confidential space where companies can share more detailed information about the kinds of government demands and restrictions that they face and how they respond to them [and] what systems and policies they developed. We have a set of principles and implementation guidelines that GNI has put out that are intended to guide companies on what responsible business conduct looks like in the face of these kinds of demands and restrictions.

But, clearly, there are going to be cases where the companies don’t want to publicly say, we got this demand from government X and we told them we’re not going to respond, or we ignored it or, you know, we complied only in a very limited way in order to safeguard our users’ rights, because that could be a sort of thumb in the eye of the very government on the other side of that demand and is likely to lead to some sort of reprisal.

So there is a need in that particular context, which I think everyone appreciates, to be able to share that information confidentially, and GNI, essentially, facilitates that. That’s a very particular place in the overall data-ecosystem map, but I think it’s a critically important one. And consider the example of how GNI works: for over a dozen years now we have been facilitating information exchange about this very sensitive set of facts and information without it leaking to the press and without there being particular blowback, at least in specific instances, for companies. Not only has it avoided those negative outcomes, it has generated positive impacts that are hard to pin down but, for those of us inside GNI, very appreciable. The awareness that’s fostered among our members about the realities of the situations that these companies face generates a lot of collaboration.

It builds a certain degree of trust across the various participants that allows them to work together, both through GNI and then independently of GNI, in ways that, I think, are really critical.

So I think as we think about the different kinds of transparency that are necessary, the different situations and circumstances where we want to have a better understanding of what’s happening in the digital space, we can think about different mechanisms that might work for each particular dynamic, right.

So in some cases, you know, a government regime that creates a facility for data sharing among companies and vetted researchers might make sense. In other situations, there’s public transparency that’s absolutely vital because users need to have access to at least a certain amount of information.

In other situations, still, there may be a need to facilitate actual data sharing between companies for competition reasons, to promote interoperability, and to allow user choice. So my overarching point is that there’s a spectrum of information-sharing avenues that I think we can map, and then there may be slightly different mechanisms that we put in place for each of those different avenues.

And so, again, I think this Action Coalition, we’re hoping that it will be at the very least, an opportunity to kind of come to a shared understanding of that map and then identify where there are some interesting examples, you know, of mechanisms like GNI, mechanisms like Social Science One, other mechanisms that have already been out there for a little bit of time, and we can be very candid about what their limitations are, what’s worked, what hasn’t worked.

But I think if we can, at the very least, paint that picture and get everyone to see it collectively, we will start to see a lot of people saying, oh, you know, why don’t we try this here? Why don’t we collaborate? You know, we’re both doing similar things here. Why don’t we collaborate on that?

That’s the hope. If we can get funders to participate in this and help them see some of the opportunities, I think that will only accelerate the kinds of experimentation and collaboration that we hope to generate.

ISSIE LAPOWSKY: Got it. And I put this question to Chloe, but I’m curious since you’re based in the United States, what role do you think the US government or Congress has in trying to mandate transparency? Not just about what’s happening in the United States. We hear about transparency in the context of, you know, what is Instagram doing to teens and all the things that are very, you know, focused on what is happening to our citizens.

But given that the United States is home to all of these companies, do you think there needs to be a broader view as to what the US government can require these companies to disclose about their activities beyond US borders?

JASON PIELEMEIER: It’s a good question. I mean, I think there is this tendency to see that the largest companies, many of them, are US-based and, therefore, to assume that the regulatory or legislative solutions need to come from the United States.

But I think the GDPR is a good example of how the EU, which has a very large market with jurisdiction over these same companies, has been able to not only shift the way these companies approach data protection within Europe but, honestly, to create a quasi-global regime. And the DSA, you know, it’s still being negotiated, so we’ll see how it ultimately works out.

But I think it has the potential to have a very similar impact. If you’re going to have to create certain mechanisms in terms of how you characterize data and what data you make available and how you design your own internal systems to be able to comply with whatever regulatory requirements are in the DSA, you may as well not just segregate that for EU-specific data; there would be efficiency arguments to just making that a global approach. And like the GDPR, I think the DSA will likely generate a lot of copycat regulation.

And so, you know, I think the DSA is going to happen in one way or another, and it seems to be on a pretty aggressive timeline at the moment. So, in many ways, the United States, I think, is getting beaten to the punch in terms of regulation here. I don’t think that’s a bad thing, necessarily. Again, I echo some of the, I think, generally positive indications that people have expressed about the approach that’s in the DSA.

I do think that the US Congress and the Biden administration have opportunities to build on that and [understand] the DSA’s approach, thinking about ways to make sure that there isn’t anything in US law or in our regulatory approach that would block companies or make compliance with the DSA difficult, or that would create conflicts of law. And I think there’s an equally critical element of international leadership, which mostly falls on the executive branch: to use that kind of bully pulpit to push US companies to be more open to this kind of collaboration and data sharing, as well as to push back against the authoritarian models.

So we talked a lot about the DSA, but there are competing approaches [that] come from countries that are not democracies and are not committed to transparency and, in fact, are generally in the business of being opaque about how they engage with companies, requiring a lot from companies but not being very transparent about how they require it.

And so I think the… Summit for Democracy and the recently concluded Danish Summit on Technology for Democracy, under which this broader Technology for Democracy platform was created that this Action Coalition will be a part of, are really good steps to bring democratic actors together and say, look, you know, we need to acknowledge these real challenges that are taking place in the digital ecosystem, some of which are because of governments, in particular authoritarian governments, and some of which are because of a lack of governance. So what are we doing as a community of democracy-affirming, democracy-supporting states and nonstate actors to be proactive, both in terms of how we push back on authoritarians and how we come up with solutions that uphold human rights rather than threaten them?

ISSIE LAPOWSKY: Right.

Alicia, I know that as part of your research you’ve definitely come across not just, you know, what is the will of governments to act on these issues but what are the impediments they’re facing, and I know that a lot of times tech companies are that impediment.

So what has your research shown you about the actual will among the tech industry to see these regulations pass? They’re sort of waving their hands a lot about seeing more transparency in the world. But are you finding that they’re actually standing in the way of that work?

ALICIA WANLESS: I think it’s a complicated question, given how big the tech companies are. I mean, I can point to certain mid-level people inside companies who definitely want to see more transparency and more data sharing happening with researchers, but they may not necessarily have the political will institutionally to move that forward at a higher level.

I think there’s also a bit of a challenge here in that it would appear that most of the impetus has been put on the companies to do something, and also to determine what that something should be, which is extremely problematic because, on one hand, if we don’t trust what they’re doing, how do we trust that the solution they’re going to develop is going to work? And I think this is where we get into a bit of a quagmire.

I can’t really speak to how they lobby or deal with governments, and I also would say that sometimes in governments perhaps the level of understanding isn’t there. But there’s also a lot of movement that, you know, in the last few years I had not seen and I think that that’s encouraging that this may be a groundswell moment where things will change.

In the United States, I would point to Congresswoman [Lori] Trahan’s office. She was the one that was carrying the data bill and also has committed to being transparent about her own ad campaigns that she’s running in the upcoming election. And so [there are] also these other layers of if more transparency were forthcoming from other actors, too, perhaps we would inch things forward.

On the whole, I remain optimistic that through multi-stakeholder initiatives like the Action Coalition more details can be fleshed out. What we found going through this survey was that instead of finding readymade answers, we found about a hundred more questions that are all kind of thorny and need to be teased out.

So, for example, on the question of vetting researchers, what are the criteria by which that would happen? How do you do that in an inclusive way so that civil society is still engaged? I mean, who even gets to determine that? Because, again, right now, this is left as an ad hoc basis for companies and that means that they hold the power and that’s not a sustainable method.

But also, researchers, especially in academia, were quick to point out that the review boards they use are sufficient for social media data, whereas people in industry or government or even civil society felt that there was a big gap. So we may have to revisit models for oversight of research in ways we haven’t yet.

And so there’s a big body of work that needs to happen and that can really only be done through bigger stakeholder coordination like the Action Coalition.

ISSIE LAPOWSKY: Got it. And I take what you were saying about the fact that we don’t want tech companies writing the rules for themselves, but at the same time, they are the ones who best understand their own platform. So what do you think is the most appropriate role for these big tech companies to play in these policy discussions?

ALICIA WANLESS: That’s similarly challenging to say, considering that we do work with industry and we do receive funding from them. I think it’s important to understand how they work such that the rules that are being drafted are actually functional and can be implemented.

It’s very tricky to be able to discern where, you know, that practicality ends and, say, interference to prevent something from happening begins, and I’m not sure that I would be the best person to provide an answer on that.

But I think that it’s absolutely the case that we’re going to have to engage industry to make this work, even if it’s just because in order to do the level of measurements on interventions that’s required to understand what is working and, potentially, what isn’t, we’d still have to work with them. And so finding means to do that in a way that there are checks and balances and that independence is ensured is pretty key.

ISSIE LAPOWSKY: Got it.

Shashank, I want to see, do you have any thoughts on that, on the role that industry should be playing in these discussions?

SHASHANK MOHAN: Yeah, I think it’s a critical role. One example I’d like to share, picking up on Jason’s point about GNI’s multi-stakeholder coalition format: I’d absolutely forgotten about the ISPs and the [digital signal processors], and I know Jason will agree with me that this is probably not a conversation that often happens in more Western and Global North rooms.

But India is the capital for network shutdowns and internet shutdowns in the world. And I’m taking a slightly different sort of approach to this discussion because we’ve been talking about social-media platforms and sort of the popular big-tech platforms.

But I’m talking about how, especially in countries like India, access to [the] network is generally an oligopolistic or monopolistic situation, where you have around two to three companies at most really controlling the network infrastructure of internet access. And there’s been a lot of litigation in India on internet shutdowns; it’s gone to the supreme court, the supreme court has given an order and asked for transparency from the government, and what we’ve seen is that governments are not very forthcoming even after the apex court of the country has ordered them to be transparent about ordering the shutdowns.

… I’ll explain where it gets tricky: companies depend on the government for licenses to operate on the bandwidths and the airwaves of the country. But we feel it’s better, and that they [have a] responsibility to consumers, to be slightly more transparent around internet shutdowns. That’s just another point I wanted to make.

Where I’d like to comment is that transparency is a very important tenet in the pool of democratic principles, and what comes hand in hand with transparency is accountability. Often we feel, as human-rights researchers in a country like India, how do you get to accountability unless you have the correct information? Because even in some situations where you have the information, it’s been difficult to hold particular people accountable.

And another example that I’d like to give: the Western and global media reporting on Pegasus really helped [human]-rights activists and journalists put the conversation in parliament and take it to court. Some of our peers and colleagues working in the sector really took the baton and took it to court, and we’re seeing, and this is, you know, in the court’s order, that the government was reluctant and actually did not say on an affidavit to the court whether they used Pegasus or not.

In fact, the supreme court of India has given the task to a committee of experts it formed to actually find out whether the government used it. But the interesting thing is that they didn’t even deny it. They didn’t say, we’ve not used Pegasus. So I’m just giving a couple of India-centric examples which point to the importance of transparency and accountability.

I think one point I’d like to make, and this adds to the importance of coalitions like the Action Coalition that GNI is forming, is about copycat legislation from the EU, right. When we talk to bureaucrats in India about, hey, why is this clause in Indian legislation, they’re like, hey, if Germany can do it, if [the] EU can do it, if X Western country can do it, then why not us?

And these [sorts] of laws are not perfect. The European laws are not perfect, and they’re made for a very homogeneous society; they can’t really be transposed to a heterogeneous society like India and various other countries.

So it’s an important global table to come to, and to ensure that benchmark laws, which are of course very helpful, are also made in a particular way, because they are going to get copied by countries like India. So yeah, just a few thoughts to share there.

ISSIE LAPOWSKY: That’s great.

And, Chloe, I wanted to ask you: Shashank talked about the need for more transparency with regard to government takedowns, the need for more transparency with regard to government use of tech.

I wonder, what are the other sort of buckets of information that you think are most ripe for more transparency regulation?

CHLOE COLLIVER: Yeah. So Shashank covered a lot of those [kinds] of transparency lines around government-to-tech company or industry transparency. But I like to bucket this into three transparency areas. So one is policies, and I think it’s fairly self-explanatory, but it’s the way that companies make their own rules and communicate those to their users. That’s not just about the rules themselves. It’s about how they’re communicated, where they’re stored, how they’re formed, who makes those decisions. There’s an enormous amount around that that we don’t always know.

The second bucket would be processes. I think this is the one we have probably the least information on across industry, and actually, to me, it seems like possibly the most important. It’s a lot of what’s been contained in some of these whistleblower leaks about Facebook, in particular, for example.

But what are the human software aspects around the technical software that we talk about? So who is deciding thresholds for accuracy of [artificial intelligence]? Who is making decisions about the engagement metrics that are used to promote or curate content? Who is training content moderators in different languages and contexts? All of these [kinds] of aspects of company behavior and decision-making that are currently pretty much a black box.

So I actually think those are probably the most important and the most applicable globally to help us understand the why of some of these decisions and not just the outcome. The third bucket is outcomes and data, and that’s what a lot of data-access requests focus on. It’s a lot of what the exposing of bad stuff on these platforms relies on.

But the logic of transparency sometimes assumes that the observation of malpractice alone is enough to prompt well-intentioned action off the back of it, when, as Shashank said, there needs to be an accountability measure in place after that.

So I think the policies, processes, and outcomes all need to be part of this discussion. But they really do need to be dealt with specifically and technically, each on their own terms. We can’t develop a kind of catch-all term of transparency that is magically going to encapsulate all of those things across different kinds of services for different kinds of users.

I also think we need to think about the audiences for those different buckets of transparency. So some of them might be really useful for the public so we might expect the public to actually understand the policies. We might want the public to understand the policies. We probably don’t expect the public to understand the raw data and the outcomes of some of those things so we need to think about who we expect to use them and why and, therefore, how they should be communicated and in what format.

ISSIE LAPOWSKY: Also, I feel like I’ve been hearing a lot more about algorithmic transparency and I see these members of Congress sitting up there, and [saying], we need to see your algorithms, not that they would understand them if they did. But is that an important component of this?

CHLOE COLLIVER: It certainly is. And you’re right, there’s an enormous amount of conversation about algorithmic transparency going on at the moment. There are a few multi-stakeholder forums where algorithmic transparency is kind of the new battle being waged across different sectors.

So the Global Internet Forum to Counter Terrorism has a technical working group on this at the moment. The Christchurch Call, which looks at terrorist and violent extremist content online, also has a workstream dedicated to algorithms, algorithmic transparency, and the outcomes of algorithms on that kind of content.

We come across the same issues, though, that we’ve talked about throughout this webinar. One is around definitions. So what do we actually mean by algorithms? What is included within that?

We come across the challenges of safe data access and trade secrets. What are companies able and willing to share about these things? And with algorithms, there’s a really interesting challenge that’s cropping up in these discussions now, which is how do we define the scope of our interest here?

Because if we were just interested in illegal content, or content that violates terms of service, and whether that is promoted through algorithms, for example, that content shouldn’t be on these platforms to begin with if they’re abiding by their own rules.

So are we allowed to scope creep into content that might lead you there? Are we allowed to scope creep into studying and understanding legitimate speech and content on these platforms to see where users are led?

There are some really tricky questions around how you study and understand these things, and we’re really only at the beginning of the discussion around them. They’re going to require a lot of their own transparency to make sure they happen in a way that’s responsible and proportionate.

So there’s, certainly, a lot of discussion, but I think not that many answers as of yet.

ISSIE LAPOWSKY: All right. We did get a question for you, Chloe, from Claire. Claire said that Chloe mentioned rethinking how much data companies can collect from people in the first place. Can privacy-respecting products and services thrive without much tighter privacy laws? This is more of a privacy question, maybe less a transparency question, but, still, I think you’ll be able to answer it.

CHLOE COLLIVER: Yeah. Thanks very much, Claire. I think it’s going to be incredibly difficult for new or emerging companies that try to prioritize privacy to thrive in the current environment without both the legal and also the competitive environment that’s required for them to be able to do that.

So I think we need to see both robust antitrust and competition law applied but also privacy law enacted to enable safety-first products or privacy-first products to be on an equitable footing with some of those companies that have now reached near-monopoly status on a number of services.

As we discussed across this session, I think privacy law comes hand in hand with other forms of transparency that are required.

ISSIE LAPOWSKY: Thank you. We’ve got a few minutes left, and Alicia, I wanted to bring it back to you to talk about this Action Coalition. What needs to happen next to ensure that this actually works and doesn’t break down like some of the other processes that you’ve, I’m sure, come across in your reporting?

ALICIA WANLESS: Well, I think off the top, as we said from the outset, we really need to get the definitions straight. When we look at things like mis-, dis-, malinformation it’s been mired in a definitional challenge that hasn’t really got us out on a footing to do much about it. So that’s the first thing.

I think the second thing is to really understand what’s already being done. Again, if we’re looking at more than a hundred questions that all have to be teased out, we need to be better at identifying who’s best placed to work on a specific problem set in this and support them, and then help connect that data and create those feedback loops.

I think, beyond that, governments are really responsible for at least implementing some regulations in part to make this happen, complete with an auditing process that’s going to ensure that what comes out is actually trusted.

And even that, too, is fraught with peril because, again, you brought up the issue before: if you don’t actually understand how the companies work, how can you do the audit? But if you take people from the companies, will that inadvertently pollute the process?

So it’s fraught with challenges, and that’s why a multi-stakeholder approach is really the best bet here.

ISSIE LAPOWSKY: Got it. And, Jason, I’ll leave it with you. You know, a year on from now, what can we expect to have seen from this Action Coalition? What will be your deliverables, to use government-speak?

JASON PIELEMEIER: Yeah, great question. As I mentioned earlier, we do want to start with some of this definitional clarification and then mapping of existing initiatives, and then I think we’re hopeful that by setting that table a thousand flowers can, hopefully, bloom from it; we don’t necessarily have to micromanage from this Action Coalition, but we can help fertilize.

I think one thing we definitely want to do in the near term is produce some concrete and actionable recommendations, in particular for policymakers. I talked about the DSA, you know, kind of wrapping up next year. The online safety bill in the United Kingdom is also likely to have some transparency provisions, and there are more and more regulatory proposals popping up everywhere with transparency-related provisions.

And so I think the sooner we can come to some sort of broad consensus recommendations on sort of how to regulate transparency effectively, the things to watch out for, the safety mechanisms that we can put in place to address unintended consequences, the better. So that’ll definitely be a priority as well.

I think another thing that will be a marker of success is the inclusivity of the process itself. We need to make sure that this Action Coalition isn’t just a group of Western companies and researchers and civil-society groups and governments, because many of the key insights and challenges that we’re ultimately trying to address are happening in jurisdictions all over the world, and they take different shapes and forms depending on the laws and the cultures and the user bases in those places.

And so we really need to focus on making sure we’re bringing in folks from all over and that we’re not just addressing narrow issues and challenges. Just to very quickly note two pieces of excellent reporting that came out today that made me think about… why it is really important that this be about transparency, not just data access and not just the narrow government piece. There was a great story today in The Markup about Life360, which is this family-safety app, and it turns out that they’ve been selling a bunch of data to data brokers who are then selling it on to others, including governments. And so, again, this is why it’s not enough just to look at what the companies that interface with users and user-generated content are doing; there’s also this whole ecosystem of actors who may have access to data.

And then, separately, this Bloomberg News/Bureau of Investigative Journalism piece about the text message service provider Mitto, a Swiss company, that has apparently, according to the reporting, been selling access to its very sensitive text messaging services to governments to, basically, facilitate government hacking, is another important illustration of just how diverse and complex this data ecosystem is and how we need to think about all the component parts so that we have a comprehensive understanding. And that goes a little bit to Claire’s question, I think, as well.

One thing we can do to ensure that privacy-first or privacy-forward companies can compete is [to] not undermine end-to-end encryption and some of the technical solutions that are out there. And so at the very least, hopefully, we can sort of hold the line there. But, certainly, our agenda is hoping to be much more productive than that.

ISSIE LAPOWSKY: Great. Well, I really hope you guys are successful. I’ll look forward to seeing what you come up with and thank you all for joining this conversation.


360/StratCom 2021
December 6-8, 2021

360/StratCom is DFRLab’s annual, premier government-to-government forum focused on working with allies and partners to align free and open societies in an era of contested information.

Image: An Indian woman talks on her phone in Kolkata, India, on November 8, 2021. Photo by Indranil Aditya/NurPhoto via Reuters.