Report

Jun 21, 2023

Scaling trust on the web

By Digital Forensic Research Lab

The Task Force for a Trustworthy Future Web’s report captures analysis of the systems-level gaps and critical opportunities that will define how the next generation of online spaces will be constructed.

Washington DC: June 21, 2023

Diving in on artificial intelligence, trust and safety, and national security

The Task Force’s first official launch event coincided with the formal release of the Scaling Trust on the Web report and featured introductions from Atlantic Council President Fred Kempe and Democracy + Tech Initiative Director Rose Jackson, and a presentation from Task Force Director Kat Duffy.

The first panel, “Generative AI: Friend or foe of ‘Trust & Safety’?”, featured Matt Soeth (head of trust and safety, Spectrum Labs), Rumman Chowdhury (responsible AI fellow, Berkman Klein Center), and Alex Givens (chief executive officer, Center for Democracy & Technology), with Bertram Lee (senior policy counsel, data, decision making, and artificial intelligence, Future of Privacy Forum) as moderator. Experts discussed generative artificial intelligence as a game-changing technology with the potential to revolutionize how people work, how online spaces function, and how society will evolve in an increasingly digital world. The panelists shared developments in the generative AI space; how companies, civil society, and governments are working to respond; and what to watch moving forward. Two of the three panelists testified on this same topic before Congress in the weeks immediately before and after this launch event.

The event’s second panel, “National security implications of the changing web,” featured Michèle Flournoy (co-founder and managing partner, WestExec Advisors), Lauren Buitta (chief executive officer and founder, Girl Security), and Michael Daniel (chief executive officer and president, Cyber Threat Alliance), with Rose Jackson (director, Democracy + Tech Initiative, Digital Forensic Research Lab) as moderator. Panelists discussed the national security and foreign-policy implications of how digital convening grounds are shifting, and what that means for resilient and democratic societies. This wide-ranging conversation covered everything from privacy protections and immersive technologies to foreign investment and interference and the consequences of gendered harassment for our democracies.

Transcript

FREDERICK KEMPE: Good afternoon. I’m Fred Kempe. I’m president and CEO of the Atlantic Council and it’s my pleasure to welcome you to the launch of Scaling Trust on the Web, the comprehensive final report of the Digital Forensic Research Lab’s Task Force for a Trustworthy Future Web.

The report launches at a moment of great global uncertainty and growing interest in the role technology will play in our future. I actually wrote these comments first through ChatGPT, but because it was so much better than my own comments, I decided to go with mine. The work of this task force and the impressive community of scholars, practitioners, and innovators it represents provides us a road map for navigating this moment in ways that will allow us to build more resilient democracies and brighter futures. It’s no surprise that it was the Digital Forensic Research Lab and its Democracy + Tech initiative that convened this work.

The Atlantic Council’s DFRLab remains a groundbreaking organization with technical and policy expertise on disinformation, connective technologies, democracy, and the future of digital rights. The Atlantic Council prides itself on undertaking innovative new projects; many of them have landed and become one of our sixteen programs and centers. There is none we are more proud of than the Digital Forensic Research Lab as a field builder. It has grown the community of experts around the globe and here in the United States who are able to document and make sense of the information environment and connective technologies through which the world engages. As these technologies become more present and complex, the work of the Lab and its partners will only grow in importance. Certainly, here at the Atlantic Council, we find ourselves grappling with the geopolitical implications of these technological changes in every one of our sixteen programs and centers. Whether looking at the future of defense, the future of energy, the future of the transatlantic relationship, the bilateral China-US future, questions around climate change, or global resilience, it touches on them all. I’m excited to hear from today’s panelists about the findings of this report and how it can be used to help us execute the Atlantic Council’s stated mission, “Shaping the global future together, alongside partners and allies.”

Before we get started, I’d like to take a moment to thank the Atlantic Council leadership responsible for this body of work. First, Kat Duffy, the Task Force’s director and a visiting senior fellow here at the Atlantic Council, who led a fast-paced, timely, and rigorous initiative. DFRLab Senior Director and Atlantic Council Vice President of Technology Programs Graham Brookie deserves particular recognition for growing the DFRLab into the robust center it is today. And finally, the Democracy + Tech Initiative’s Director Rose Jackson, who initiated this task force and leads a team working to ensure that the ways technology is funded, built, and governed reinforce, rather than undermine, open societies. And, of course, thank you to all of you joining us today at our headquarters here at the Atlantic Council in Washington, DC, and all around the world. With that, I’m pleased to pass the floor to Rose Jackson to kick us off. Thanks again for being here.

 

ROSE JACKSON: Thank you, Fred. And to all of you here with us today: we started the Democracy and Tech Initiative just over two years ago because we believe how technology is funded, built, and governed to be one of the most important questions of our generation. We set out to fill what we saw as a gap in how people were approaching questions of technology. That is to say, people were either sitting in siloed conversations, failing to knit together foreign and domestic equities or distinct policy areas like privacy and competition, or simply ignoring the systemic nature of the Internet itself and the impact that choices around it can have on society. Technology mediates nearly every aspect of our lives at this point. It’s how we buy things, learn things, connect with family, access government services, and exercise our democratic rights. To date we’ve done so largely through privately held companies, many of which are headquartered here in the United States. Influencing digital spaces and platforms has also become a key point of competition for governments. And since the Internet is systemic and interconnected, whatever set of interests dominates that single Internet impacts everyone, including here in the United States.

Core to our work is creating new and trusted spaces for difficult and urgent conversations that require the insights of a wide range of people and perspectives. This task force is in many ways the embodiment of what we set out to accomplish through our program. It centers human rights. It focuses on the next phase of technology and the trends that are shaping it. It includes voices and perspectives from around the world.

It breaks down silos of industry and issue areas, and it elevates diverse leaders with cross-cutting expertise. At the DFRLab we spend a lot of time advising democratic governments on how to think about and approach technology, both collectively and within their national borders, but it can sometimes feel like we’re debating yesterday’s challenges while the future is being constructed before us. Now, while we believe that governments must set rules and standards to ensure that the rights we expect offline translate to our networked world, it’s clear that the speed of technological change is outstripping the ability of governments to set those standards. And so, we’ve been searching for opportunities to focus on what can be done to proactively inform this next phase, both within the companies that are making decisions every day about the products and policies that shape our lives and within the external community of civil society experts and researchers that companies and governments alike rely on for their perspective and expertise. I’m grateful to Eli Sugarman and both the Hewlett Foundation and Schmidt Futures for giving us the opportunity to explore these themes in partnership with an exceptional group of people in an action-oriented manner. Our hope is that this report, and the collective insights of our task force, can serve as a jumping-off point for many of you to carry forward this urgent work together.

It’s now my pleasure to introduce you to Kat Duffy, who took on the Herculean task of coordinating the collective wisdom of forty task force members, more than thirty contributing experts, and fifteen partner organizations into tangible, actionable insights. That she did this in less than five months speaks to her superhuman energy and mastery of this endlessly complex topic. Personally, allow me the opportunity to say that to do work of consequence with people you respect and count as friends is a great blessing. Now, before I pass to Kat, a reminder that you can find the full report and annexes on our website and keep updated on subsequent events by following us at DFRLab on Twitter, Facebook, LinkedIn, and Instagram using the hashtag “#ScalingTrust,” and you can find many of our team and task force members talking about this on Bluesky, T2, and Mastodon. Please find us there. Now, without further ado, let’s get this started, Kat.

 

KAT DUFFY: Thanks, Rose. Hi, everyone. Thank you for joining us today. Thanks to everyone who’s joining us online as well. We don’t see you, but we do appreciate you. This has been a very fast-paced initiative and so the one thing I didn’t really have time to do is memorize my points. So, I’m going to apologize to everyone for referring to notes, but I’d like to start today by setting the stage. I’d like us to harken back to a different time, to a different era, a historic age, something that I like to call October of 2022. In October of 2022, these things had not happened. Meta, Accenture, and Microsoft had not announced a massive partnership to establish immersive enterprise systems. Elon Musk had not taken over Twitter. The third largest cryptocurrency exchange in the world had not yet collapsed overnight. The European Union’s Digital Services Act had not come into force. And no one in the public had had any real exposure to generative AI. None of that had happened in October. By the end of November, all of that had happened. In a fifty-day span, in seven weeks, we were able to get a sense of the shape of what we thought was a coming Internet age and realized that it’s now. It’s happening right now. And I don’t know a single person who works in this area or in this space who in November and December wasn’t reeling, wasn’t looking around going, “Oh, my God. What do we do? How do we keep up?” Just across the space there was an onset, overwhelming, [funny noise]. And so, we said, “What do we do? What do we do?” I think the obvious thing is, of course, you create a task force, but the impetus here was really that we needed to take a beat. We needed to take a moment to situate ourselves, to breathe, to look around, to pull people together who have been doing this for long enough that they don’t respond to a hype cycle, that they have seen trends, that they have seen what sticks and what doesn’t, and that they know how to differentiate signal from noise. And we really felt like in this moment, we were at a pivotal point, and we needed to come together, and bring a lot of experts together, and help us understand what are the broader systems-level dynamics that we keep having to navigate. How do we stop playing Whack-A-Mole with all of these different verticals and these different challenges? And how do we start getting at some of the root causes that continue to land us in trouble? And so, with that, we came up with the idea for the Task Force for a Trustworthy Future Web.

Part of the impetus behind how we created this task force is that, at this exact moment, we’ve also seen something emerge from industry, and especially from the American industry that has driven a lot of our online spaces and a lot of our emerging technologies over the past many years: we’ve seen the emergence of a field of trust and safety practitioners. This is how companies have innovated in the way that they deal with harms that are happening on their platforms and harms that emanate from the technologies that they produce. And so, for over a decade now we’ve had these extraordinary teams and people sitting inside of companies who are working so hard to make spaces better, to make technology safer, to make it more accountable. But up until a few years ago, they were more or less kind of trapped inside of companies. Over the past few years, what we’ve seen is the emergence of a field of practitioners that more sectors and more communities of experts can actually engage with. Right?

And that is important because this field of practitioners has really deep expertise and really deep knowledge of the nuts and bolts of how these problems can come to pass inside companies and also how different solutions can be tried out. And so we looked at that, and we also looked at the emergence of the Digital Services Act, which is a massive regulatory evolution, and we saw how C-suites and companies were really shifting their investments and shifting their focus in order to respond to the emergence of the DSA. And those two things, in connection with everything else that has just occurred, really inspired a lot of the focus for the Task Force for a Trustworthy Future Web. We focused very specifically on how online spaces have been constructed and have been built, what we know works and what we know doesn’t work, and we also focused on building a very big tent of experts and perspectives to bring to this equation. And so, we have brought in forty people from technology policy, from AI, from trust and safety, from advertising, from gaming, from civil rights, from human rights, from virtual reality, from children’s rights, from encryption, from information security, community organizing, product design, digital currency, Web3, national security, philanthropy, foreign assistance, and foreign affairs.

We brought all of these people together, and we basically said, “What do we know? What do we actually know at this moment? What has worked, what hasn’t worked, and how could we work more effectively together to ensure that future online spaces can better protect users’ rights, support innovation, and incorporate trust and safety principles? And how do we do this very fast?”

And as I said to all of the task force members when I recruited them, because they didn’t believe that we could launch and close the task force in six months, “You can plan a wedding in two weeks or you can plan a wedding in two years.” And so, we just planned this wedding in two weeks. And so, over the past five months, we’ve had interviews, we’ve had expert roundtables, we have had virtual convenings, we have done literature reviews, and I’m very pleased today to share with you the report of all of that work. What it really reflects is the collective insights and the collective expertise, not the individual opinions of any particular task force member, but really the collective findings of a group of people who have been working at this now for decades.

And so, what you’ll see when you go into Scaling Trust on the Web is an executive report that captures the overarching findings, as well as a collection of six annexes, and those six annexes are a deep dive into some of the areas that the task force felt required specific focus. Those annexes cover: the current trust and safety space and how it’s been evolving; where open-source tooling in trust and safety might be helpful; an examination of the role of children’s rights and the consideration of children’s safety; an introduction to the gaming industry; an assessment of the trust and safety capabilities of federated platforms; and a review of the lessons that can be learned from the cybersecurity industry, as well as how we can think about the nexus between trust and safety and generative AI. And with that I want to take you through, very briefly, some of the fundamentals before we get to our great panel today.

And so, these were the broadest areas of consensus across task force members, and some of these may seem obvious to people who have been in the space, but what was surprising to me is how frequently task force members said, “We just want people to understand this. We just want everyone to understand some of these core issues.”

The first is that that which occurs offline will occur online. There is no magical technology. There is no silver bullet. There is no magical fix that will give us online spaces that are free from racism, that are free from misogyny, that are free from risk, that are free from harm, because we do not live in societies in which that is a possibility. And so, it was striking to me that so many people on the task force felt that this point consistently needs to be reiterated, because techno-solutionism has been the bane of our existence for many, many years now.

Another was that some harms must be accepted as a principle of operating in a democratic society. That if we are going to operate in a free society, we also have to be cognizant that there will be things we don’t like, and harms that emerge from that, that we simply have to accept on the grounds of being in a free society.

But that doesn’t mean that the choices you make when you’re building technologies, and especially when you’re building online spaces, are values neutral. In fact, what it means is exactly the opposite. If you know that the society in which you are deploying a technology that you’ve built is inherently inequitable, is inherently unequal, and contains systemic risk, then you are also on point for thinking about how your technology will scale, how it will scale malignancy, how it will scale marginalization, and how you work from the outset to mitigate those impacts. You are on point, and I think this is another really key finding from the task force. There are no more excuses.

There is no more, “Oh, how could we have known?” No, we know, it’s clear. Everyone is on point. Finally, risk and harm are set to scale at a really exponential pace if we don’t pull together, and we don’t come up with better and stronger systems. And there is no one key sector or key field that can do that on its own. This is going to be a big group effort, and so we need to figure out ways to work together more efficiently and more effectively.

Finally, this is a pivotal moment. This is a task force that is filled with people who, as I said, are immune to hype cycles. Everyone agreed: this is truly a pivotal moment. We have a very narrow window of opportunity in which to build and take on new approaches and bring in new innovation, and we have to seize that moment right now or we are going to be in a world of hurt, because we are not dealing with new problems, but we are dealing with new speed and new scale, and our systems, right now, are not designed to respond to that. And so, we need to focus and we need to get working together much more quickly. And that was really our goal, then, in the key findings and the key recommendations of the task force. I am not going to run everyone through all of our key findings and through all of our key recommendations, because I want you all to go and read the report. But there are two very quick things that I want to highlight. One is this concept of the emerging trust and safety field. For a DC audience, and as a DC person, I don’t think this is an area that is necessarily adequately understood outside of industry and outside of Silicon Valley. We have a really unique opportunity. We have a whole new sector of experts and of allies for civil society, for media, for everyone who cares about accountable tech, for people who care about human rights, for people who care about ethical tech, for people who care about tech and society, tech and democracy, a feminist Internet. We have access now to a new community of practitioners who are forming their own collectives and who are able to engage with their own expertise, and we didn’t have that a couple of years ago. That’s new, it’s different, and it’s important, and it’s something that should be leveraged.

The second is that within this task force there was a baseline understanding and a baseline belief in the inherent importance and expertise of stakeholders whose rights and perspectives have historically been marginalized in the creation of online spaces. This includes marginalized communities in the Global North. It also includes entire populations in the Global Majority, which the tech sector still egregiously refers to as “rest of world.” And it also refers to women and it refers to youth. And so, we didn’t question that in this task force. We took that on as a baseline truth, and we operated from that assumption. And third, as I’ve said before, no one group or sector is able [to take] this on on their own. And really, what this task force report is aiming to do is to give a lot of different individuals and a lot of different experts a way to see an entry point into an area of expertise they may not have or a group of individuals they might not know. Everyone is part of a jigsaw puzzle, and my hope is that in reading this report, in looking through the findings and looking through the recommendations, everyone in this space will be able to situate themselves within it and find other allies, find other people, find other avenues of entry, and begin to go and engage there. And so, with that we have our key findings; we also have the key recommendations.

The key recommendations in the report are very focused on what philanthropy in particular can do in this moment to create actionable, specific work that moves a broad community forward: that’s civil society, that’s academia, that’s folks coming in from industry, and it’s also government. And so, we have a lot of key recommendations there. I think there are sixty-three, sixty-eight, recommendations for specific work that can be done. And finally, I would like to end by saying I am so, so, so grateful to our task force members for their unbelievable contributions over the past five months. It has been a sprint. We have 150 pages of product to show for it. I’m really excited for everyone to engage with it. I hope everyone will check out the website and the reports. I am so deeply grateful to Nikta Khani, the associate director of the task force, who has been tireless. I am also very grateful to Rose Jackson, and Eric Baker, and Graham Brookie of DFRLab, and the whole DFRLab team, for all of their guidance and their support, as well as, of course, Eli Sugarman, Hewlett, and Schmidt Futures. And so with that, I just want to say thank you all for joining. I’m so excited for you to hear our panels today. What we really tried to do with the task force report and with our annexes…

Oh, sorry. I should say what’s next? We’re in New York next week, y’all. You should come; totally forgot about it. I’m not a good hype girl. So we’re in New York next week. Everyone can see it afterwards. And then we’re on the West Coast, which is not the best coast, on Wednesday and Thursday. The best coast is the Midwest. And so on the 28th and 29th, we’re going to be in the Bay Area as well. We’re also going to have an event at TrustCon in July in San Francisco. And so, what I hope everyone will do is follow these engagements, continue the conversation, read the report, and begin connecting with each other on some of these findings and the recommendations.

Naming a problem makes it easier to solve. It’s also the thing that you have to do to find solutions. And once you’ve named some problems, and once you’ve built some foundational consensus, which is what our task force members did, you’re in a much better position to start figuring out how to make things better and how to solve them. And so, with that I’m going to turn it over to our first and awesome panel today, who are going to walk you through how we can think about generative AI, how we think about solutions, and how we think about opportunities. And so with that, our amazing task force member Bertram Lee, moderator, from the Future of Privacy Forum, take it away. Thanks so much.

 

BERTRAM LEE: Thank you so much, Kat. Thank you so much, Kat, for allowing me to feel like Bryant Gumbel for at least a little bit and to also be on a panel with some of my heroes and some really key leaders in not only the responsible tech field but the responsible AI field as well. And so, I’ll start with Dr. Rumman Chowdhury. Dr. Rumman Chowdhury’s passion lies at the intersection of artificial intelligence and humanity. She’s a pioneer in the field of applied algorithmic ethics, and she is an active contributor to discourse around responsible technology with bylines in the Atlantic, Forbes, Harvard Business Review, Sloan Management Review, MIT Technology Review, and VentureBeat. Dr. Chowdhury currently runs Parity Consulting and the Parity Responsible Innovation Fund, and is a Responsible AI Fellow at the Berkman Klein Center for Internet and Society at Harvard University. And she’s also going to be testifying tomorrow in front of the House Science and Technology Committee.

Right next to me is Alexandra Reeve Givens. Alex is the CEO of the Center for Democracy and Technology, a nonpartisan nonprofit organization fighting to protect civil rights and civil liberties in the digital age. She is a frequent public commenter, and one of my favorites to be completely honest, on ways to protect users’ online privacy and access to information and to ensure emerging technologies advance human rights and democratic values. At CDT, Alex leads an international team of lawyers and technologists shaping technology policy, governance, and design. CDT advocates before policymakers and courts in the US and Europe, engages with companies to improve their policies and product designs, and shapes public opinion on major technology policy issues. Alex also recently testified in front of the Senate Judiciary Committee, just last week.

And last, but not least, our digital friend, Matt. Matthew Soeth is the Head of Trust and Safety at Spectrum Labs AI, supporting the “Hashtag TS” collective community, as well as many of our partner companies across gaming, dating, and social apps. He also serves as an adviser for All Tech Is Human, focusing on responsible innovation in technology. As a former member of the global trust and safety team at TikTok, Matt worked on cross-functional teams to develop safety resources on that platform around bullying and harassment, suicide and self-harm, hate speech, election integrity (really needed), and media literacy and education tools. Prior to entering tech, Matt spent fifteen years in education, working with diverse high school populations as a teacher and administrator. He is the co-founder of the nonprofit #ICanHelp and co-creator of #DigitalforGood. And in 2015, Matt helped start the Social Media Helpline, the first social media helpline for schools in the United States.

Please put your hands together for this wonderful panel and I want to start with you, Alex.

 

ALEX GIVENS: Can I just go home after that? Give the bios, cool, take a bow.

 

BERTRAM LEE: I mean, you know, that would be wonderful. But unfortunately, we have questions to ask. I promise not to ask compound questions, as I am notorious for doing, but Alex, CDT has been at the forefront of tech and civil rights for years, especially under your leadership around disability and advocating for marginalized and multi-marginalized communities. Your team recently put out a great report on the limitations of large language models, and you testified to Congress last week on AI. As we see industry rushing to adopt generative AI and policymakers scrambling to determine their response, what are the key points you believe people should be focusing on?

 

ALEX GIVENS: It’s such a great question. We are in a moment, and Kat set this up, Rose set this up: one of the things that we are really focused on as an organization is to say yes, this is an important moment of focused public attention, but also we cannot lose sight of the very real ways in which AI is impacting people’s rights and access to opportunity today. I sometimes worry that in the big kind of rush around existential risk in AI, we’re losing sight of that. And so, a lot of the work that we’re doing is saying yes, focus on those harms, but also let’s make sure that we’re addressing real-world harms today as they manifest. What I mean by that is how AI is used in decision making, impacting people’s access to jobs, to housing, to credit, to insurance. How AI is being used to sort people and to screen people in face recognition systems, law enforcement use, etc. And I think one of the things that’s important is that it doesn’t need to be an either-or conversation. But actually, when we start thinking about these very specific use cases happening right now, if we can think about a responsible governance approach to those, it lays a really essential foundation and groundwork to tackle some of the longer-term issues as well. Happy to go into what some of those solutions are, but I think that framing really matters because it also makes it a problem we can address today, right? Things that we can focus on right now.

 

BERTRAM LEE: Dr. Chowdhury, you are an innovator in the responsible AI field and someone who I had the privilege of working with when you were at your previous employer. I don’t know if you want to name names out here, but… the bird app. And honestly, you were one of my favorite folks in industry to work with because of not only your honesty, but the way in which you conducted your research. Because of your innovation, and because you also worked closely with trust and safety teams inside of industry, you bring a unique perspective to the challenges and opportunities that generative AI presents and how it may interconnect with many of the issues raised in the Task Force report. What in the current dialogue strikes you as hype cycle and what is mission critical? Help us differentiate between the signal and the noise.

 

RUMMAN CHOWDHURY: That’s a great question and it is a compound question. But I think it’s important to talk about what is mission critical. There’s been a lot of conversation about elections and the impact on elections is actually explicitly talked about in the report that Chuck Schumer came out with today and his guidelines. I actually share that concern. It is reflected in the report, in the DFRLab report, as well.

All social media companies, any company that’s taking information seriously and responsibly, is very concerned when it comes to election mis- and disinformation. This is something that will scale due to generative AI. We have already seen false images being perpetuated intentionally and unintentionally. So now the distinction between mis- and disinformation becomes very, very important. It’s not just malicious actors making bad things or misleading things. It is also unknowing people looking at something that looks and feels and smells and sounds very, very, very real and sharing it, and even if they find out later that this is false information, it still has shaped their perspective. So, I do worry quite a bit about upcoming elections and the role of generative AI and mis- and disinformation.

In terms of hype, and I’m going to talk about some of this tomorrow in the testimony, I think there’s a lot of concern about emergence, or emergent skills and emergent talents of generative AI, and about anthropomorphizing the technology to sound bigger than humans, smarter than humans, better than humans. At the end of the day, people make this technology. People are making decisions. And as Kat stated, one of the things that we as a community tried to impart to people is that all technologies are values driven. These values are also created by the people who make the tech. So there is no… I coined the phrase years ago, “moral outsourcing.” It’s in our language; it’s how we talk: AI does X, AI does Y. It actually doesn’t. People make a technology that does a thing, so it’s important to parse out when we are attributing an outcome to a technology. We actually truly should attribute it to the people who made the technology.

 

BERTRAM LEE: Thank you. I think about the Pope in the puffer jacket.

 

RUMMAN CHOWDHURY: Everybody brings up Balenciaga Pope.

 

BERTRAM LEE: But Balenciaga Pope is actually really interesting. If you didn’t know it was a generative model, or even something that was Photoshopped, how would you be able to tell the difference?

And so that leads me to your work, Matt, as our digital companion. And you have such a phenomenal Zoom background, by the way.

 

MATTHEW SOETH: Thank you.

 

BERTRAM LEE: Matt, you work for a company that offers trust and safety services to a range of companies. Can you explain Spectrum’s role for a bit to our DC audience who may be less familiar with the role vendors play in the space? And can you tell us, without exposing any trade secrets, how you are dealing with the influx of Generative AI content? Is there anything that you are seeing in the field that gives you pause?

 

MATTHEW SOETH: Yeah. I mean, speaking of AI, I am definitely having my own Max Headroom moment right here, right? So, a nice little eighties throwback to conceptualize AI. In terms of what we’re seeing, well, let me backtrack. Two parts of this question, yeah?

Part one, who is Spectrum? So, we started using AI almost four years ago, looking at and helping clients around gaming, dating, and social. Think Riot Games, Grindr, Udemy, Together Labs, Wildlife, some of our partners: gaming, dating, social, marketplace, that kind of stuff. We really protect against threats, hate speech, profanity, inappropriate conversations on apps, age verification, grooming, etcetera. We soon discovered that the higher-risk behaviors, think radicalization and grooming, are all about people trying to deceive one another over time.

So, using AI natural language processing, you can look at stuff in threads versus just keywords or other signals. You might see a sentence, or comment, or statement of a threat, but there are indicators up to that point that say, hey, do you think something really bad is happening here, and AI can be very good at detecting those high-risk behaviors. It does this by looking at metadata, user-level information, right? So, rather than looking at broad behaviors over the course of a platform, it will look at individual behavior, so moderators, humans, have the ability to go in and make a better determination. Is this person a bad actor? Are they having a bad day? Are they trying to deceive? Etcetera. We found that what worked even better were aspect models that could even score users on past behaviors, right?

So, tracking this user-level data over time is very important, and doing so in a way that’s ethical, that understands that person’s life and experience on that particular platform. All of that has helped us create very good behavior libraries, intent detection, and AI tools that have helped keep billions of people safe. You know, we process billions of data points a day pretty rapidly and really help reduce exposure to toxic and hateful content, and that’s kind of the dream of AI, right? I think, on the one hand, can we keep users safer in order to have a better experience online, and then two, can we really limit the amount of content that moderators and others are exposed to that has been defined as being very toxic and harmful as well? So, in order to do that, we’ve built our very own, essentially, large language model of data, working meticulously to label it, working with a lot of academics, developing processes, working with native language speakers to localize, so we can have the context in place. And our role for platforms is taking this layer of protection and just adding it to what they’re already doing, right? So, whether it’s detection models or other stuff, we’re really trying to increase accuracy and source a better signal.
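To make the kind of thread-aware, user-level scoring Soeth describes a bit more concrete for readers outside the field, here is a minimal Python sketch. The keyword heuristic, the phrases, the decay factor, and the review threshold are all hypothetical stand-ins rather than Spectrum’s actual models; the point is only that a pattern of messages from one user, rather than any single message, drives the flag for human review.

```python
# A toy sketch of user-level risk scoring over a message thread. The "classifier"
# is a keyword heuristic standing in for a real trained NLP model.
from dataclasses import dataclass, field

RISKY_TERMS = {"meet me alone", "don't tell anyone", "send a photo"}  # hypothetical phrases

def message_risk(text: str) -> float:
    """Toy 0-to-1 risk score for one message."""
    text = text.lower()
    return min(1.0, sum(term in text for term in RISKY_TERMS) / 2)

@dataclass
class UserRiskProfile:
    """Keeps per-user message scores so a pattern, not one message, drives the flag."""
    history: list = field(default_factory=list)

    def update(self, text: str, decay: float = 0.8) -> float:
        """Score a new message, then return a decayed running score over past behavior."""
        self.history.append(message_risk(text))
        score, weight = 0.0, 1.0
        for s in reversed(self.history):  # most recent messages weigh the most
            score += weight * s
            weight *= decay
        return score

profile = UserRiskProfile()
for msg in ["hey, how was school?", "don't tell anyone we talk", "can you send a photo?"]:
    running = profile.update(msg)

# The threshold is arbitrary here; in practice a human moderator reviews the flag.
print(f"queue for human review: {running > 0.75}")
```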

When you look at AI, ChatGPT, general models, and how people communicate, we’re rethinking that, you know. Dr. Chowdhury and I, and Alex as well, hit on some of the big ones, right? Looking at mis- and disinfo, we know that has been an existing problem. What we’re really anticipating and starting to see is the scale at which that stuff replicates and gets distributed, right? And there’s a myriad of challenges there. I’m hopeful in this sense because, A, if you look at who’s in the room and connected to this report, we have government, we have industry, we have content experts working together, which is what it’s going to take in order to make sure that AI is deployed not only effectively but does what it intends to do. And the other part, you know, is I’ve yet to come across anything in AI, at some level, that hasn’t had to have a human review it, evaluate it, or have some kind of eyes on it, right? So, we’re not exactly free. You know, I have yet to be on an AI panel where someone doesn’t make a joke about the machines taking over, right? Like that conversation always comes up, but there’s a strong human element in thinking about how we produce that data, label that data, and effectively pull it together. And at the end of the day, the strength of the data will determine the strength of the model, what it’s able to do, detect, and otherwise execute in terms of function.

 

BERTRAM LEE: Absolutely, and the data issue goes so much farther than I think people realize, particularly around data provenance: whose data is it, where did the data come from, who has access to the data, where is the data stored, how was that data created in the first place, and which communities is it representative of? So I really appreciate that data point, Matt.

Dr. Chowdhury, teams inside of companies are often those working most closely with companies like Spectrum, as well as with organizations like my own, Alex’s, or DFRLab. I really miss working with you and your team at Twitter. I bring this up because there is a lack of folks in the AI ethics, compliance, and fairness space; companies are using more AI, not less, and yet there seems to be a real pullback of those teams. Can you talk about the need for internal AI fairness and ethics teams in companies and the important role they play from a trust and safety perspective?

 

RUMMAN CHOWDHURY: Gosh, yes. So, it takes multiple… there are multiple levels of impact. There’s the first, very visible part of the work that my team did, and other like-minded teams, right. There are the audits that we published. We did the first algorithmic bias bounty, opening up our models for public scrutiny. But then there’s all the work that people didn’t see, because bad things didn’t happen. And that’s fundamentally what a lot of trust and safety people, ML ethics people, and privacy and security people do. It’s hard to show value at a company when we don’t have a shiny widget at the end of our year, right? What we can show is that nothing bad happened, and that’s something to be very, very, very proud of. People work incredibly hard so that nothing happens. So, at the end of the day, it’s hard to show and hard to demonstrate sometimes, because the more effective you are at your job, the less people need you. When I say “need you,” I mean, something isn’t directly on fire. So, you know, what I worry about in today’s AI fever pitch around generative AI is, first, as I mentioned in my earlier comments, that people are attributing a lot to what are actually quite limited models, because they seem very fancy.

And the second part is, trust and safety and ML ethics and all the fairness people, we actually have to rethink some of the things we do, because a lot of our approaches worked for traditional machine learning models. They don’t actually, necessarily, work for this new wave of highly technical, massively sourced LLMs. The other part I’ll add is, I actually think now we’re reaching a world of two levels of trust and safety. So, there are these companies that are making these core models. Currently, these are companies like OpenAI, Anthropic, Google DeepMind, etcetera. There’s one layer of trust and safety that happens there.

There’s also another layer that happens with these vendor partners. So, you know, the companies that will build models off of these core models, the fine-tuning that they do on those models, will also have to have trust and safety as well. So, there’s actually this dual layer of trust and safety that’s happening, and I don’t know if as an industry we have really organized ourselves that way yet. Traditionally, it has been models, algorithms or code, your values, and your data; so any given company’s deployment of an AI model was simply that you took code that was publicly out there and you put your own data into it. Now there’s a layer of obscurity, and then a second layer of obscurity, before it even goes out into the public.

 

BERTRAM LEE: Go ahead, Alex.

 

ALEX GIVENS: I was going to say, the other thing that I would add in is the role and the importance of internal trust and safety teams. Matt alluded to the increasing power of technical tools to help support content moderation, and obviously that is only going to grow in importance as online content proliferates, as generative AI helps there be even more kinds of inputs to the system. But the human review is a feature, not a bug, right? It’s hugely important, particularly when we’re talking about contextual analysis. There are some types of either unlawful content or undesirable content where you know it when you see it. You know, that is an easy decision, but there are so many more where it really needs to be a contextually based decision, and that’s where the human element is essential. So, finding that right balance and making sure that even as we move forward in technological developments, we’re not losing the really important touch of the human-driven trust and safety team.

The other piece I’ll add in this space is that, when I look at the landscape today, we are an organization that historically has cautioned against a heavy regulatory thumb when it comes to online content moderation, and the reason is that, as a user-focused organization, we worry about how companies sometimes overreact to the heavy hand of legislation and as a result over-moderate or take a simplistic, reductionist approach: if that’s going to be remotely controversial, we’re just not going to let it appear on our platform at all. And our answer in those conversations with regulators has been: don’t do that. Let’s find other levers to make sure that the companies are acting responsibly and with nuance. And I have to say, when you look at the market right now, and the investment interest in safety teams, that argument is getting harder and harder to make.

So, it is so important that trust and safety teams stay strong and that this field continues to grow, and the recommendations of the report that’s been put out really, I think, lean into that: why we need to embrace this field, because these questions aren’t binary. You need people who can approach these questions with the nuance of skilled practitioners looking at these questions on the ground.

 

BERTRAM LEE: And Senator Warren highlighted those exact same issues about SESTA/FOSTA recently, and kind of the backlash, and what we didn’t think of as follow-on effects from the implementation of SESTA/FOSTA, and what would happen to the digital ecosystem there. And so, as a follow-up question: everything you both spoke about, and Matt, you were talking about it as well, investing inside of trust and safety can be seen as a form of voluntary self-governance. But you know, you both, and both of your organizations, have spoken about the need for independent verification of AI systems and independent governance of AI. So could you all on the panel, and please you all decide who goes first, or I can pick, tell us where you think we need to go in terms of external governance of AI systems, particularly around AI audits?

 

RUMMAN CHOWDHURY: I can start. Actually, I didn’t even send you my testimony for tomorrow, but you may as well have read it; this is pretty much just a preview of it. I actually say the phrase “governance is a spectrum,” and right now we’re at this pendulum swing where people are, like, really, really saying we need hard regulation, etcetera. And this is not to say we don’t need regulation, but what’s worse than no regulation is poorly written regulation, as you mentioned with SESTA/FOSTA. So what we don’t need is mandatory algorithmic audits with not enough people who have the skill set to do them, which is the world we live in today. What we don’t need are mandatory algorithmic audits that are so lightweight that they’re actually rather useless, which some would argue that certain audit bills are.

What we do need is an investment in the community. So, governance takes the forms of standards, codes of conduct, norms, education, legal protections. I’m part of a team running the largest-ever generative AI red-teaming exercise. Red-teaming is really fascinating because it is how companies bring in third parties; currently, the way it works is under their purview, invitation only. Third parties get closed access to models, and the ask of them is, you know, break this, and then help us fix it. So, we’re actually taking that process and we’re opening the doors to anybody. We’re allowing thousands and thousands of people at DEF CON to come and try to hack at our models, at all of the major large language models, with the companies’ approval, and they’re also working with us on it. And what we’re trying to do with that is to educate people, to demonstrate that you don’t need to be a coder or programmer to be somebody who has an opinion or thoughts that are of value to identifying the harms in AI models, and importantly, we want to demonstrate that we do need an independent and vibrant community. I think that often goes overlooked. So much of this work is often volunteer based, and that’s very, very difficult. But what we actually do need is to protect the people who do this kind of work from litigation and harm, which we’ve seen in the cybersecurity community. But we also need to cultivate, create, and fund organizations that allow people like myself, frankly, to be independent, if that’s where we are of the most value.

 

ALEX GIVENS: So, I strongly associate myself with all of those comments, and I’ll add a couple more. One of the things that I worry about: we talked about the need for there to be meaningful auditing, right? And no matter what regulatory framework you’re talking about in any region of the world, this is going to be the key lever that moves forward, right? We might not be able to agree on exactly what, but the notion that there should be auditing is, kind of, clearly a common denominator. But who is doing it, and to what standards are they doing it, to make sure that it’s not just a race to the bottom and a really easy seal of approval that you get stamped on? This is where tying the conversation around generative AI to some of the longer-standing conversations around other AI harms is useful, because they serve as an important warning, I would say, as to how we need to approach this auditing question.

What do I mean by this? One of the things that we are seeing right now, in the move for auditing of algorithmic systems used in employment, for example, is a whole ecosystem of people popping up saying, “Hey, we’re going to do that audit for you. You know, we’re going to give the seal of approval on this tool.” But what are they measuring? The things that are really easy to do a quick statistical analysis for. So in the hiring context, we see people do a really quick statistical analysis for discrimination on the basis of race using the traditional US census categories and gender using the two traditional gender binaries in the US. They don’t measure or audit for discrimination on the basis of disability, for example. But you know what? If you use that AI hiring tool and it’s discriminating against disabled people, you are in violation of the Americans with Disabilities Act. You’re still going to face legal liability. But because that auditing is a little harder, or it’s qualitative, not quantitative, the auditing services haven’t stepped up. Why am I spending a minute talking about this? It’s a good example of why we need the race to the top, not the race to the bottom, and we can’t just let auditing be defined by what’s easy to measure. We need to be more nuanced and more sophisticated about that.
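As a rough illustration of the kind of quick statistical check Givens describes auditors running on hiring tools, a selection-rate comparison can be sketched in a few lines. The groups, counts, and the familiar four-fifths rule of thumb below are illustrative assumptions, not a legal standard or any auditor’s actual methodology.

```python
# Hypothetical selection-rate ("adverse impact") check of the kind quick audits rely on.
# Group names and counts are invented for illustration.
screened = {
    "group_a": {"applied": 800, "advanced": 240},
    "group_b": {"applied": 500, "advanced": 100},
}

rates = {g: d["advanced"] / d["applied"] for g, d in screened.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    # The common "four-fifths" rule of thumb flags selection-rate ratios below 0.8.
    status = "potential adverse impact" if ratio < 0.8 else "passes this crude test"
    print(f"{group}: selection rate {rate:.2f}, ratio vs. highest {ratio:.2f} -> {status}")

# A check like this says nothing about harms that resist a simple ratio test,
# such as discrimination on the basis of disability, which is Givens's point.
```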

 

BERTRAM LEE: Matt?

 

MATT SOETH: Yeah. You all have given me so much to think about. I’m trying to figure out where to jump off first. There are several things going on here. When it comes to trust and safety, I always think of trust and safety professionals as, like, the social workers of the Internet, right? It’s not only understanding human behavior but understanding systems and human behavior. So, as we think about generative AI and rolling that out, and potential implications, the value that trust and safety brings is really being able to look at data and anticipate where harms could occur. And I think a lot of the big backlash to the layoffs coming within trust and safety is: who’s thinking about these things, right? Because we know, in any institution, if you have an issue owner, you have someone who can work towards solving the problem and advocate for it, and that’s the challenge looking forward.

Another big part of this: I love the call-outs on the data. Who’s thinking about this? How are we thinking about it? It’s so important to me to have different groups in this conversation, which we’ve all kind of hit on, right? You know, thinking about government’s role; thinking about different NGOs’ and institutions’ roles. To Alex’s point, we’ve seen this with previous regulation where it’s come out and auditing bodies come in and help, like, “Hey, we’ll get you compliant,” right? This has been consistent for a very long time, but think about what that compliance looks like. What is the standard of data? You know, there’s been a big call to action in the past year or two for more transparency from tech platforms, but even if you look at the transparency reports that are currently published, the data that is being referred to, or the terms that are being used, are not consistent at all across platforms. So there isn’t a sort of unifying identifier. You know, one example within tech is referring to behaviors as being “toxic,” which is so open ended and just kind of bland. It tells you everything’s bad without telling you anything at all.

So, we’re trying to figure out how to break this down further, think about these steps, and look at it. And then just data science, you know, I’m a really big believer not only in understanding data but in what narrative the data is telling us, right? And I think having people who are well informed, who kind of understand the work behind it and the goals of what we’re trying to accomplish, is going to be critical. And trust and safety just kind of brings that, right? It’s a different lens coming to the table, particularly if those trust and safety professionals have some tech or policy background. These teams tend to do a really good job of bridging those gaps.

 

RUMMAN CHOWDHURY: Can I give one example that actually ties in everyone’s points? Really, really briefly. So, my team at Twitter actually did an analysis of our toxic speech identification model. And, you know, toxic speech, as was accurately mentioned, is a very vague term. So, you know, companies will build machine learning models to say, based on this list of things we know to be, quote, “toxic,” is this thing toxic, yes or no?

We also know that African American Vernacular English tends to be overwhelmingly tagged as toxic speech when it is not. So, to your point about nuance, what we did was an analysis of the false positives, to demonstrate what kinds of speech are incorrectly being labeled toxic, and what we found was that actually it’s not just African American Vernacular English, but reclaimed speech in general. In general, minoritized communities tend to reclaim the language that is used to marginalize them in order to empower themselves. That is a societal nuance that the machine learning model will not understand: it does understand that this word equals bad, but it doesn’t understand the context in which a word may be used to uplift or, you know, to elevate versus being used harmfully. So that’s just an illustrative example of yes, we can use machine learning, but we still need humans to have the nuance to understand how to make these models better.
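For readers less familiar with model evaluation, the false-positive breakdown Chowdhury describes can be illustrated with a toy calculation. The records and group labels below are invented and are not Twitter’s actual data or analysis; the sketch only shows how comparing false-positive rates across groups surfaces the kind of bias she is describing.

```python
from collections import defaultdict

# Toy records: (group, human_label_is_toxic, model_flagged_toxic). Invented data.
records = [
    ("reclaimed_speech", False, True),
    ("reclaimed_speech", False, True),
    ("reclaimed_speech", False, False),
    ("reclaimed_speech", True,  True),
    ("other_speech",     False, False),
    ("other_speech",     False, True),
    ("other_speech",     False, False),
    ("other_speech",     True,  True),
]

false_positives = defaultdict(int)  # benign posts the model flagged as toxic
benign_total = defaultdict(int)     # all benign posts, per group

for group, is_toxic, flagged in records:
    if not is_toxic:
        benign_total[group] += 1
        if flagged:
            false_positives[group] += 1

for group in benign_total:
    rate = false_positives[group] / benign_total[group]
    print(f"{group}: false positive rate {rate:.2f}")

# A large gap between the groups' false positive rates is the signal that context
# (reclaimed or dialect speech) is being misread as toxicity and needs human review.
```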

 

ALEX GIVENS: It prompts, like, one more consideration, which is that there’s a reason why auditing in the generative AI space, or just content moderation in general, is particularly complicated. We could have agreed methodologies, but the agreed standard, like the “what”: do you allow that reclaimed speech, or do you not? We probably want companies to land in slightly different places around that, right? We’re seeing it in the deepfakes conversations. Some generative AI companies today will say, “We will not allow you to upload and manipulate images of a political figure currently running for office in the United States,” because they’re worried about political deepfakes. There are others who may well make the normative decision, like, “Hey, we should let that happen because people might want to use it for parody.” So, I actually think this is where it gets really complicated, right? I can jump on my high horse around auditing in hiring because we have employment discrimination laws, and we can tell when you are in violation of the law and when you are not. But when it comes to this expressive conduct, actually knowing what is the right place where we should be landing on these is really hard. And, as I think of it, you know, it’s a nuanced conversation that’s going to differ depending on the outlet. That’s an additional complicating factor here, too.

 

BERTRAM LEE: So, to add context to what you all are doing: when I was at [the US House Committee on Education and Labor], this was the exact conversation I had with a company that was looking at people’s social media for toxic speech and flagging them for employment. And so that context is, I think, really important, because I brought up that exact example, with more colorful language, and it made everybody uncomfortable. But it brought me great joy. So, it’s something to keep in mind. And then also, within the context of auditing, Alex, one thing I just want to flag for folks is that the civil rights community for a very long time has used testing as a critical method of ensuring that civil rights are being abided by in private industry. Testing is a critical measure of how civil rights is enforced and how civil rights is tested. And it is functionally a third-party audit, and so thinking through how different organizations and different communities think about what third-party standards are is also incredibly interesting in that context as well.

So, I think we have time for one more question. There was recently an ABC report about deepfake porn and the impact that it has on women online. And Congressman Joe Morelle and Congresswoman Yvette Clarke have been at the vanguard of this issue in Congress, but we are still at the beginning of this crisis. What are some solutions that you all think are feasible to deal with this kind of generative AI? And as a follow-up, I promised I wouldn’t ask compound questions, but here we are: what are some pitfalls that we should avoid when regulating generative AI in the context of US legislation?

 

RUMMAN CHOWDHURY: I can start with the first part. What's really interesting is that we have actually had the capability to generate deepfake porn for many years, and there have been some amazing journalists covering this issue. There have also been multiple attempts at creating deepfake porn generators. I think the most recent one was back in 2018, I believe it was pre-pandemic, where a Russia-based individual was actually selling the code to make deepfake porn for $50. So, it's very interesting that this is coming to the forefront as if it is a new issue, when it's not a new issue. The question to ask ourselves is, "Why haven't we been inundated with it previously?"

So, one answer to that question is yes, generative AI makes things a lot easier to build. Another, maybe to put a more optimistic spin on it, is to ask what has stopped these people in the past, and I know we've talked about governance being a spectrum. One thing I find really fascinating is that the open-source community actually does have strong standards of governance. They have strong community norms, and bad actors are kicked out of communities very often. And I think it's worth thinking through the ways in which we can carry these standards over as we look at more open-source and open technologies. Deepfake porn is one aspect of what can happen. We talked about misinformation. These are all different manifestations of a similar phenomenon, right?

But in particular, related to your point about deepfake pornography: gender-based violence online due to generative AI is also increasing. We are seeing individuals like Maria Ressa, Jacinda Ardern… people getting attacked online disproportionately because of their gender if they're a prominent woman, whether in journalism, some other field, and especially in politics. So, I think it is certainly worth thinking through what the impact is on these individuals and what can be done to protect them. And it's really hard to think of an answer. Maybe one of Kat's points that you brought up earlier from the report is that some harms will exist if we live in a society that is not overly regulated or overly punitive. However, that is not an excuse for them to exist. So, what can we do to stop it in a way that does not create a surveillance state? That's a difficult question to answer.

 

ALEX GIVENS: Yeah, again, do I just get to copy-paste? I mean, this is one of the hardest tradeoff questions in our entire field, right? It's completely abhorrent conduct, but how do you crack down on it in a way that doesn't impinge on lawful speech? A couple of things that I turn to. One, I think this is a really important forum where we're going to see how well existing law stacks up. A lot of deepfakes involve celebrities or other known figures, where copyright law and rights of publicity can actually give a cause of action and can allow the DMCA to be a mechanism. How well does that stack up as a strategy here? How does that play out?

Two, how responsive are the companies to taking down this content? Again, this problem isn't new; many of them have been active in this space for a long time. But if we see the uptick that many people are already reporting, a significant uptick due to the ease of access, affordability, and quick scale of these generative AI tools, how quickly are the platforms responding when these images are being posted? And what about the makers of the generative AI tools themselves? There are safety measures they can take to limit the creation of nude images or celebrity images through their tools. How many are deploying those, and how does the market pressure on them to do so play out over the coming months?

 

BERTRAM LEE: Matt?

 

MATT SOETH: Yeah, I mean, all great points. You know, when you think about deploying AI, whether it's us or other platforms, you're really trying to accomplish something, whether it's keeping kids safe, data safe, users safe, and privacy protected; those are the goals. When it comes to online safety, Alice Hunsberger has a great triangle: you're trying to balance user safety with human rights and privacy, and you're going to land somewhere in that triangle. Sometimes you'll land more on the privacy side, sometimes more on the safety side, sometimes more on the human rights side, whether that's speech or free expression. And trying to balance that, as a policy, as a platform, is a constant struggle, because it takes a lot of thoughtful consideration and writing.

You know, we talked a little bit about machine learning, and one thing AI can do, when we think about the technology, is look at conversations in context. It allows for that nuance. And that's something that's new for platforms in the industry: the ability to go beyond looking at a single sentence and deciding, "Well, we're not sure if that's reclaimed language that violates our policy, so we're going to shut this account down." If the data work is done well, it gives you the ability to allow for that nuance, and then it's more about a platform's decision: do we allow this kind of language on our platform? Does it fit within our community norms? The other piece I love is healthy communities. This has become one of my favorite stats, and it comes from my school days; I'm obsessed with culture and climate. Really, if you have a healthy online community, which a lot of platforms are very protective of, it will help regulate itself. It's not perfect. You still need systems and tools in place to detect unhealthy behavior, but within that, those norms will start to solidify.

So, looking beyond AI: platform policies, relationships with regulators and law enforcement, policy writing, how you structure the types of engagement that happen on a platform, these are all things to consider that go well beyond gen AI. Gen AI, any type of AI, is a tool, and a tool, when deployed correctly, can do very good things when it works in concert with all of these other resources. And the other part, beyond the doom and gloom: we've had a few years in tech so far, right? When the metaverse came out, everyone was freaking out, saying we have these new things to worry about and these bad things are going to happen. In reality, we have a lot of principles, thinking around safety by design, and risk-assessment frameworks; the World Economic Forum, the Atlantic Council, and others have pulled together resources that platforms can use to evaluate and predict where potential risk and harm are going to occur, how to plan for that, and what tools to put in place. And the reality is, this is a moving target, right?

We know that eighty percent of problems are caused by about five percent of users. So, looking at the rest of that group, we could probably shift a good sixty percent of them in the right direction, right? When we think about healthy behaviors, we can get really good at detecting bad, and we can also get really good at detecting good, so as we structure these platforms, we can really start to guide people toward making good things happen. The community helps with that, and we have opportunities to make good things happen. But again, it's going to come back to that ecosystem of resources and really leaning into our institutional history and saying, "Hey, we've tackled some of this before. Now it's just the data, the scale, and the rate at which this is all happening."

 

BERTRAM LEE: So, I want everyone to put their hands together to thank this wonderful panel. It’s truly an honor to be here and to be Bryant Gumbel for at least an hour. And with that, I’m going to pass it to Kat. Kat?

 

KAT DUFFY: Thank you so much to Bertram, thank you to Alex, to Rumman, to Matthew; we are so grateful to all of you, and thank you for that wonderful panel. I want to give our audience online, and our audience here in the room, a chance to take a break. We have coffee and water outside if anybody wants to get up and stretch. We're going to reconvene in about five minutes with a fantastic panel on national security and its implications for trust and safety. So, we'll see you all in about five minutes. Thanks so much.

ROSE JACKSON: Thank you, Bertram, Alex, Rumman, and Matt for that fascinating conversation. The Task Force report identified three areas of emerging technology with the potential to significantly impact the future web: generative AI, immersive and experiential technologies, and decentralized systems. You just heard a panel focused on generative AI. In this next panel we're going to touch on the second of those trends, the increasing use of immersive technologies, built in large part by the gaming industry. I'm thrilled to welcome three panelists here on stage to help us explore this and many other national security implications of the changing web.

Today we're honored to be joined first by Michèle Flournoy, the former undersecretary of defense for policy, co-founder of the Center for a New American Security, and co-founder and managing partner of WestExec Advisors, a strategic advisory firm helping businesses, including many tech startups, navigate an increasingly complex and volatile international landscape.

We next have Lauren Buitta, the CEO and founder of Girl Security, which is the only organization dedicated to advancing girls, women, and gender minorities in national security, which includes many aspects of technology. Thank you.

Finally, Michael Daniel, who is the president and CEO of the Cyber Threat Alliance, a first-of-its-kind organization incentivizing essential cybersecurity-related information sharing across companies. Michael also previously served as the US cybersecurity coordinator in the White House.

I'm thrilled to be joined by such an illustrious panel. Thank you all for making time to come here today. We've just heard a lot about generative AI, but I think sometimes people struggle to understand why we talk about national security in the context of social media and the internet and, frankly, even games; words like Roblox usually don't go in the same sentence as the Defense Department, although maybe now with Discord they do.

I want to start with you, Michael. One of the task force's key findings was that a lot can be learned from mature adjacent fields, in particular by looking at cybersecurity as a possible model for the evolution of the trust and safety field, which is still nascent. There's actually an entire annex on that, which you helped us draft. Thank you so much, and I hope people will take a look. You spent your career focused on cybersecurity, which is now pretty readily accepted as relevant to national security. But that wasn't always the case. So, as someone who helped to define that field and that nexus, what can we learn from that experience? And how do you think the issues we're calling trust and safety are relevant to national security?

 

MICHAEL DANIEL: Sure. Well, first of all, thank you for having me today. I really appreciate the opportunity. When you look back at where cybersecurity started, that was definitely true. Michèle and I were talking about this earlier: in the first few meetings in the Situation Room where cyber would come up, everybody had their heads down, reading their notes, rather than focused on talking to each other. There were several reasons for that, which you were alluding to. One of them was that we treated cybersecurity as if it were purely a technical computer issue, which it obviously is way more than, and so part of getting it accepted as a national security issue was recognizing its multifaceted dimensions. A second part was the realization that it was deeply embedded in other issues. There was also an assumption from a lot of cybersecurity people that if you didn't take cybersecurity seriously and do those sorts of things, then you were just obviously stupid, which is not the case; there are other reasons why people made the decisions they did. And lastly, it wasn't as obvious what the harms were until we started having multiple instances of cyber incidents affecting other things, and you can see that play out all the way through: in 2021 you had things like Colonial Pipeline finally driving home that a cyber incident can have an effect in the real world.

 

ROSE JACKSON: That makes a ton of sense. I think what's interesting too, Michèle, is that some of what we're talking about really is core geopolitics; as the report said, what happens offline is going to happen online. But I think there's also this new sphere of competition over the tech platforms and tools themselves, as something that countries are trying to own, shape, and control. You now advise companies that find themselves caught in this increasingly complex world. I'm curious how you've seen the role of tech change, and the conversation around national security change with it.

 

MICHÈLE FLOURNOY: Well, I think there's a broad recognition that we are in a much more competitive era. There's economic competition, there's security competition, but there's also competition across a number of critical technologies. Most of the focus lately has been on semiconductors, but it's quantum, it's AI, it's biotech, it's so many other areas. And a lot of companies spent the last several decades being told: this is an integrated global economy, we want to make China a responsible stakeholder, we want you to invest, we want you to bring them in, and so on. Now they've done that, and they find themselves completely integrated and dependent on either Chinese supply chains or the Chinese market. A lot of US companies are trying to decide: am I a US company acting globally, or am I a global company trying not to take sides in this competition? But I think they've also realized, particularly the internet companies, that the platforms are not owned and operated by governments; they're owned and operated by private-sector players, and those players have a responsibility to try to shape the standards, the rules of the road. In many cases they're trying to figure out how to ensure that the internet reflects openness, transparency, privacy, all of these things that are so precious to our democracy. When you show up at an international meeting, you have the Chinese, the Russians, and others showing up and arguing for a very different kind of internet. So suddenly these companies find themselves not just as private providers of services but as public actors in a very public and very substantial high-stakes policy discussion.

 

ROSE JACKSON: Certainly, and if I might jump around a little bit from what we had discussed, I'm glad you raised the crowded regulatory environment a company walks into, with China saying you have to do it this way, Russia another way, India another way, and, frankly, even democratic countries requiring disparate versions. One thing that was really interesting, and we've talked about this in the task force, is that in almost every conversation we had, someone would raise that the lack of privacy protections in the United States made it really hard to address certain harms people were worried about. It was almost across the board; it didn't matter whether they were industry people, academics, civil society, international, or domestic. So I'm curious, and this is a question for everyone on the panel: how does the US having a more hands-off approach to that rule-setting affect our national security, given that many companies, some of them American, are walking into international spaces where other countries, some of them quite undemocratic, are setting the rules of the road? What does that mean for our national security? Michèle, we can start with you.

 

MICHÈLE FLOURNOY: Sure. I mean, first of all, I think it's a real challenge for our companies if they have to try to adhere to multiple different sets of standards or rules. I think most of them would much rather say, okay, if GDPR is going to be the standard for us, let it be the global standard; we're fine with that. It's the unpredictability, the feeling of being pulled in multiple different directions and trying to actually manage that, that's really, really tough. But I also think in the national security space it becomes important because it leads into other issues: how do we protect the information of people who are in the national security enterprise? How do we protect the information of Americans that, in the wrong hands, could be used for information purposes or in other ways that could affect national security? And how do we feel about foreign ownership of certain platforms that could be used to influence American public opinion in ways that might be detrimental to our security in the future? There are a lot of questions that come up. The obvious example right now is the TikTok issue and whether it should be banned or regulated. But whatever you think of TikTok, the deeper issues it's putting on the table are really, really important for us to address.

 

MICHAEL DANIEL: And just to build on that a little bit. Part of the issue is that because we don't have many of those standards set up for ourselves, when we start picking on TikTok for privacy concerns, there's a legitimate response of, wait a minute, what standard are you holding your own companies to? So is this really just about the fact that you hate China, or is there in fact a real privacy issue here? I think that makes it much harder for us internationally to have those conversations. The lack of some of those general privacy standards also puts us at a disadvantage when we have discussions with our European allies, and it means we have difficulties in dealing with data transfers and other things that serve a very legitimate purpose. It puts us at a disadvantage on that playing field. And I also think we have to get out of the mindset that everybody thinks of the US and the US government as the good guys. We have to set that aside, and we're not very good at doing that.

ROSE JACKSON: Certainly. I want to turn to Lauren in a second on that point, but you raised one thing about information sharing, which relates to another really interesting recommendation that came out of the report, looking at the experience of the cybersecurity community. They faced what seemed like an intractable problem: everyone agreed that data about Americans using credit cards was getting taken right and left, that anyone could grab it, and that this felt unsafe and problematic. Great, we all agree, let's do something about it. Next step: private company, could you let us know when there's been a breach? And also, could you give us the information of private citizens? You can imagine how that conversation went. So, what was the solution to navigating those tension points between the necessity of privacy and the need for standards so that government can be trusted in a conversation like that? And how might that look for the trust and safety discussion we're having here?

MICHAEL DANIEL: Some of it really gets down to what information you are actually trying to share, and for what purposes. I have found this to be true in a lot of different policy areas: when you deal with large policy issues, particularly when you're trying to have discussions about them, staying in the abstract actually hurts you, because what you have to get down to is the granular. You go, okay, I want to share this piece of information. Is there anything about this that says anything other than something about the bad guy? No. Okay. Does anybody care if we share this? No. Okay, then go ahead. And you literally almost have to go information type by information type. Once you actually start having those conversations, it starts to dawn on people that you're not really talking about some abstract sharing of stuff; it's about very specific kinds of information for very specific purposes. When you get to that level, that's how you can actually make progress. And you also set up rules about it: we are going to use the information for these purposes, not for those purposes over there, and you set up the guardrails. That proves true whether you're talking about sharing between the private sector and government or between private-sector companies, which is what my own organization does. So I think you have to drive it down to the specifics instead of staying up at the abstract level.

 

ROSE JACKSON: Absolutely. And I think right now the conversation in Europe around the Digital Services Act is creating that kind of opening for everyone on connective technologies. We just talked a little bit about TikTok, and I think it's hard to have a conversation about TikTok and not observe the insane gulf between what that conversation looks like here in Washington, DC, and what it looks like in some of the boardrooms in Silicon Valley. Lauren, you work often with youth who are coming into this ever-present digital world and who will be the next generation of people making decisions about our national security. What do you think are the greatest moments of opportunity right now? And are there things that the national security community is getting wrong?

 

LAUREN BUITTA: Right now? I think there is more opportunity than there is possibility of error, especially if you believe that securing the blessings of liberty for our posterity and promoting the general welfare are integral to providing for the common defense. And I think the national security community has long missed an opportunity to engage youth. Movements around climate change and gun violence are an indicator that there is a really crucial space for young people to be involved in national security. Certainly, it didn't hurt that the Defense Department was also concerned about climate change, but I think youth have a really vital role to play in this conversation, and they have a very unique perspective.

And we talked a little bit about this before. I'm reticent to call young people digital natives, because I think they really lack a lot of constructs and frameworks for the digital landscape. However, they understand democracy. One of the refrains we hear a lot is that the American people aren't focused, or aren't focused enough, on Russia and China because they're preoccupied with domestic issues. Well, we can prioritize the intellectual property threat posed by China, as well as threats from Russia, while also dealing with the fissures of our own democracy, of which data and the internet are a huge aspect. So there's a lot of opportunity around training up that future workforce, which we were talking about, as well as education initiatives, which shouldn't be borne solely by civil society organizations. I also think there's a great opportunity to engage the media, including the national security media sector, around how we report on national security issues, so that we can broaden the aperture of the public and so that young people can feel like they're part of the discourse as well.

 

ROSE JACKSON: Absolutely. And a little bit later in the conversation we'll have some more entry points on youth, particularly in the gaming sector. But you brought up the question of workforce, so, Michèle, I want to turn to you briefly. Prior to your current role helping companies understand how to grapple with geopolitics, you served in very senior roles in the US Defense Department, and I'm assuming plenty of tech issues came across your plate. Looking at this next phase of the digital world, what do you think leaders should be thinking about that they're not, in particular about how we staff and resource our government to serve the interests of the American people?

 

MICHÈLE FLOURNOY: I mean, given that we are in an era where there is a true revolution going on, we've talked about the digital revolution, we're now talking about the AI revolution, and cybersecurity continues to evolve as it starts to leverage AI and move at machine speed; it's already happening. One of the challenges in government is that we don't have the tech literacy in general among policymakers, and we don't have tech talent in sufficient quantities to be smart policymakers, smart advisers, smart procurers of technology, smart managers of technology, smart users of technology. There have been some wonderful pilot efforts, fellowships that bring tech talent in for a period, or that take military and civilian personnel and put them in tech companies for a number of weeks. But it's kind of a soda straw, and we really need to open up the floodgates.

So, I actually think this is part of the National Security Commission on Artificial Intelligence recommendations that still need to be implemented: creating a digital reserve. Going out to the digital ecosystems that are at the absolute cutting edge in this country and modifying the rules, you don't have to pass a PT test or get rid of your tattoos or whatever it is, but you can serve your country as a civilian reservist in tech. That would be huge. A digital service academy, like we do with the military academies: you get a free, great, high-tech education, and in exchange you owe the government a period of service to start your career. That may become a full career, or it could just be five years. So I think we need to create some more institutional superhighways to incentivize and bring tech talent in, and then, once it's in, to manage and retain it. We've got to have real career paths for technologists in government, even if we acknowledge that government is not going to be developing cutting-edge technology. To use it, to govern it, to get the governance right, the testing and evaluation, the trust and safety, the policy framework: for all of that, you need that tech advice and digital literacy. So I think this is an area where we could make some significant progress, and people have laid out some really great ideas. We just need to get Congress and the administration on board to implement some of them.

 

ROSE JACKSON: Can I ask you to add to that as well, Lauren? You've done a lot of work on how we even define what workforce means.

 

LAUREN BUITTA: I think part of it, too, is getting the private sector and the national security community to have an open conversation about the extent to which our national security relies on the private sector, especially in the trust and safety space, where you have such a significant flow of professionals from the federal government into the private sector.

And I think trust and safety is such a unique space in the sense that it brings a social justice component into play, which so many young people are activated by. So, once again, I think if we can get the sectors talking and being, as Michèle said, intentional about how we define pathways in and out of government, we'll be able to create that sustainable pipeline. And as we learned from cybersecurity, if we want a diverse pipeline of skilled people, especially those who are often the canary in the coal mine and who experience digital harms firsthand, we have to start quickly and make those pathways as accessible as possible.

 

ROSE JACKSON: I'm so glad you raised the canary-in-the-coal-mine dynamic, because one of the things we certainly know is that the most vulnerable in society are not only the most affected by harms online, they're usually the first to experience them. And so one of the things that came out of the task force was that even if you don't care about what's happening in the rest of the world, even if you don't care about human rights, even if you have a cold, cold heart, you still need to have some interest in knowing what's happening elsewhere, because it will eventually manifest for you. The internet is systemic.

Michael, we've talked a little bit about how important it was in the cybersecurity world to really build an ecosystem, and not to assume that the government alone, or a single company alone, was going to be able to solve the problem. Can you talk a little bit about what that looks like in practice and what it means to have a full community capable of playing its role?

 

MICHAEL DANIEL: Yes. So, when you look at the issues that we deal with in cybersecurity, and at the way the entire ecosystem is put together, the first thing you realize is that if your image of the bad-guy hacker is some disaffected white dude still living in his mother's basement, wearing a hoodie, that is not the adversary we face. There are still a few of them, but they're not the total of the adversaries we face. The adversaries we face are highly organized. They've read their Adam Smith. They've read their Harvard Business School cases. On the criminal side, they run a lot of these operations like a business. They've got extended supply chains. They've got vertical integration and horizontal distribution. It really is an entire ecosystem. And so the idea that any single entity is going to have a view and an understanding of that entire ecosystem all by itself is just ludicrous. Nobody has the resources or the analytic capability to understand the entirety of that complex ecosystem.

And so, if you want to disrupt that ecosystem, whether you want to make it less profitable for the criminals, make it harder for the intelligence operations of our adversaries, or dampen down those using it for disinformation or other purposes, you need to understand how to affect the ecosystem as a whole. To do that, you need to combine the insights and understanding from a whole bunch of different sources. That's really where the cybersecurity industry is evolving now, trying to understand how you put together those comprehensive views of the entire ecosystem, and it's still a work in progress, because culturally it's very hard to let go: I've got this awesome source of information, and now I'm going to share it with other people? But the ones that have fully embraced that transition have realized that it actually makes them more effective at whatever part of the cybersecurity ecosystem they're working on. And that's really the driver behind it: the need to actually have that impact on the adversary. But I'd say these lessons have been hard learned over fifteen or twenty years.

 

ROSE JACKSON: Lauren, I think that also speaks to some of the work that you've been doing. It's not just that a diverse community of people should be doing this work because that's the right normative value; it's because that's the only path to success. Do you want to speak a little bit to why that is?

 

LAUREN BUITTA: I mean, back to the privacy issue: we have an entire generation of young people who are exploring. There are norm shifts happening, where their subjective expectation of privacy looks very different from what it did for prior generations. And so I think we make a lot of assumptions about the extent to which the next generation understands digital technology. There's an opportunity around sourcing and understanding what those norm shifts look like, because usually in national security, when norms are shifting, we start passing laws and making policies, when in fact I think this is a moment in time when we need to hit pause to really understand what the experiences of diverse communities online are. You know, the latest statistics were from 2020, when only two-thirds of the world's school-age students had access to the internet. So we're making assumptions about how youth are engaging, what their perspectives are on power and privacy, and who has access around the world, at least from our perspective. Going back again to posterity, this is about the future of the workforce, the future of humanity. It means doing a deep dive, doing analytical research to understand what those norm shifts look like and making sure we're actually developing solutions that can leverage the unique experiences of future generations, who are going to exist in a very different world based on all of the things we're talking about today, which were also included in the report.

 

ROSE JACKSON: Absolutely.

 

MICHAEL DANIEL: Yeah, one more thing. In this space we see this all the time, so it's not just a theoretical point that diverse teams are more effective. You actually see it in the ability of cybersecurity researchers to understand what the bad guys are doing; the more diverse teams are more effective at that. At one of the companies that we work with, a huge number of their really, really good penetration testers come from Hollywood, literally from the acting industry. Why? Because actors put themselves in the mindset of other people all the time. That's what they do. And so it turns out they're really good at doing penetration testing; it's weaponized empathy that you can actually draw on. So this company has made a great business out of hiring people who went to Hollywood thinking maybe they'd be actors but didn't quite make it and were looking for an alternative career path. It's sort of funny to say it like that, but it's a very concrete example of how nontraditional career paths can play a huge role in cybersecurity and make a meaningful difference. Nobody would have thought of looking for that skill set in that community even a few years ago.

 

ROSE JACKSON: That's interesting. With the time that we have remaining, I want to turn to a bit of an emerging-tech theme, and I'm glad you brought up the question of how many people in the world are actually not connected to the internet. It's been a topic of geopolitical conversation for some time now that countries like China, with an authoritarian view of the world, are investing significantly in bringing more people online into a version of the internet and connected world that will be, at best, unfriendly to rights, if not outright antagonistic to them. But it goes beyond just the question of whose fiber the internet is running on, or the Digital Silk Road.

I'm going to claim a little bit of moderator's prerogative here for a minute to set the scene, because I think this is an area that's a little newer to most people. We're seeing persistent, systematic, and pretty significant investments from Saudi Arabia and China into Western gaming companies, which, if you don't know a lot about gaming, probably seems like a weird thing for me to mention here or to say you should care about. But not only do three billion people play games around the world; it's estimated that by 2026 the market for games will be about $300 billion. That's just a lot of money. It sounds like funny money to me; I have trouble with numbers that large. But the report actually has an entire section deep-diving into this and trying to demystify the industry. What's really interesting is that a lot of the virtual technology sits here: if you remember, Kat referenced the fifty days in the fall when everything was happening, and prior to the release of ChatGPT, the number one thing everyone wanted to talk about was the metaverse, whatever that was supposed to mean. So the conversation was quite focused on what we should think about immersive technology and an immersive world. Most of that tech is built by gaming companies, and across the board you have companies like Tencent, the Saudi sovereign wealth fund, and a number of other Chinese-backed firms taking majority stakes, if not full ownership, of much of that infrastructure. That is an interesting dynamic for us to watch. So we put it in the report. We think it's an under-examined sector in which actors are trying to build influence, and we should be paying attention.

I wonder if we can have a little bit of fun conversation on the basis of that information. Michèle, starting with you: what should we make of these moves? As someone who has spent a lot of time thinking about the geopolitical implications of investment and how technology interacts with our national interest, what might it mean for American companies to be answerable to government-backed investors or, frankly, to governments themselves?

 

MICHÈLE FLOURNOY: So, you're going to inspire my inner paranoid to come out. I think there are several things at once. Games are all about narratives, so you can imagine a state-backed company using games to inject narratives into American society at a very broad level and influence public opinion on other issues. But the thing that really worries me, and my youngest son became quite a gamer during the pandemic, is that it's not just about the playing of the game. It's the chat that happens during the game, and the fact that a lot of times you don't really know who you're chatting with. I do worry that, combined with other information, a foreign government could try to identify US military members, or members of the intelligence community or law enforcement, and use a gaming platform to try to engage them personally and develop them. It's another platform for espionage, or for recruiting. This is the inner paranoid part. But combined with other information, it is another way of targeting people you want to target, by learning about them and observing their behavior in the context of a game.

 

ROSE JACKSON: Absolutely. Lauren, one of the interesting things is that it sounds crazy for me to start talking about gaming in an international security conversation, because people mistakenly think that games are for kids. I can't remember the exact number, but the majority of people who play digital games online are actually adults. Still, working with youth in particular, I assume the conversation about gaming, how people are interacting with it, and what it might mean looks a little bit different.

 

LAUREN BUITTA: We talked a lot about TikTok. I never thought I'd say TikTok so much in my life. But I think young people understood when the federal government explained that TikTok posed a threat because of the potential for algorithms to be manipulated and for information to be weaponized, especially around identity-based groups. And I really don't mean this as a zinger, but I feel it's important to point out because a lot of youth in our program mentioned it: the White House was also hosting TikTok influencers. So we had a lot of young people saying, all right, what's all this TikTok nonsense about? But the reality is, again, that youth have the capacity to understand national security challenges, and I think it really is a matter of engaging them and empowering them with information about what those challenges look like, because they do occupy the gaming space. If we want to build up societal resilience around these issues, if we want to grow trust in institutions, which is declining, we need to provide young people, who are predominant in the gaming community, with information. Also, a large percentage of gamers are girls or female-identifying individuals. As we always say at Girl Security, girls are born into a world where they're taught to fear everything but are often secured from nothing. So once again, you start to see the same types of gendered threats online, which adds a layer to the experiences of already marginalized communities, not only within the gaming context but also within the context of how we communicate and socialize online as part of a global community. So there are a lot of layers here, and we're just big proponents of giving youth the credit they deserve. They can have hard conversations, and we owe them, for posterity, the opportunity to learn about these issues. Gaming is the perfect platform, because it's where they coexist.

 

ROSE JACKSON: I want to pick up two things that you just said, and then we'll come back to you, Michael, on this question. The first is that you raised the tension between one part of the government focused on the national security threat of TikTok and another part focused on the political potential of leveraging TikTok. This is a question for anyone here: do we think the US government right now has the structure and capacity it needs to bridge conversations about the same platforms that are happening in very different spaces, whether in domestic or international policy?

 

LAUREN BUITTA: The question is, does the government have the capacity? I think the government certainly has the capacity to do many things, but with respect to bridging these platforms or these different issue areas, I think it would take a lot of strategic thought, a lot of investment, and, again, a lot of expanding how we define our national security priorities, which requires breaking down silos. It requires crosscutting training. It requires convenings and a high level of investment. So I think we have the capacity to do it; it's more about the time and the will to actually commit to it, because we often seek short-term solutions when we're working on the long game in the digital space. It's going to require a heavy level of commitment across potential administrations amid already politically divisive times, and again, that's where I think youth come in. As we've seen in our program, they are so civically minded; even if statistics around their civic understanding are low, they are civically oriented. And so they provide the perfect lever to empower in these spaces, to help drive government and the private sector to make these types of changes.

 

MICHÈLE FLOURNOY: Sorry, I think there is a huge educational component, though, to your point: how do we help not just young people, but any user of the internet, become more educated? How do we teach people to recognize malign behavior, to recognize someone pretending to be someone they're not, to look at the DNA of their data? With all the potential for misinformation, disinformation, and now deepfakes, are there approaches, such as tagging, that can give much greater transparency into the source of the data, the journey of the data, the journey of the image, the journey of the voice print? We need to educate people to be much more sophisticated consumers of what they're getting off the web and to recognize that, okay, this piece of information I'm reading is in an American newspaper, but actually it came out of a planted Sputnik article.

You know, twenty iterations ago. Or, this video is probably fake because it has these various attributes. I think there's a huge public information component to getting all of us to be much more discerning, sophisticated users of the internet.

 

ROSE JACKSON: I can promise everyone I did not pay Michèle Flournoy to say DFRLab’s work is an essential part of our national security. But thank you for that.

Michael, I wonder if I can turn to you real quick on the question of augmented and virtual reality and what it means for how that technology is built. It is likely to amass more, and frankly more sensitive, kinds of data, moving into an ecosystem that, as we discussed, doesn't have a lot of clarity on protections for people. You work with companies every day on data security and on requests from governments. What do you think we should be paying attention to? And is that a space we're ready for?

 

MICHAEL DANIEL: So, I think one of the things we should be paying attention to goes back to something Kat said at the very beginning, which is that technology is not neutral, right? There's a lot of what I refer to as digital utopianism baked into the way we have thought about technology. A lot of times, the people developing this technology are motivated because they see an opportunity, in many cases a huge opportunity to do something good. And as a result, it's very hard for them to imagine that anybody could put their technology to malicious use.

I see this over and over and over again with some technologies. We would have a discussion, and the CEO would be talking to us about the data they were amassing and all the things they could do with it, and I would have to say, wow, that's a really powerful counterintelligence tool; I'd really like to know that about a person in order to recruit them. And the guy is sitting there, and it had never occurred to him that anybody would misuse the platform for that. So I think the fundamental thing is that we have to go into this knowing there are people who will misuse and abuse those platforms and that data to do bad things. Not might happen, not could happen: it will happen. And so you have to bake the protections in. We talked about the lessons from cybersecurity; what you don't do is bake it in later or try to bolt it on top, to mix metaphors, after the cow has already left the barn. You want to build the security protections in. You want to be asking those questions early: how is the data going to be used, what is it going to be used for, how long are you going to keep it, how are you going to protect it, and what are you going to do to get rid of it when you don't need it anymore? And don't fall into the "we could just keep it because someday we might use it" fallacy. So a lot of it is really that mindset of going in knowing that these technologies can be used for ill and planning accordingly. Yeah.

 

ROSE JACKSON: I appreciate that you referred to it as your inner paranoid because I have…

 

MICHÈLE FLOURNOY: Fellow travelers.

 

ROSE JACKSON: Yeah, no. I always joke that this is mostly my job in tech: to be the red-teaming person telling you all of the ways in which something bad could happen. But I think it's a helpful way to get people focused on where there may be holes in how we're having these conversations. Whether it's Amazon purchasing One Medical, or people wearing ever more biometric-gathering devices, things like the ring that I have, I am not immune to it, or the Apple Watch that I wear almost every single day: what are the rules around them? Where does HIPAA extend to protect my data privacy from an Apple Watch? If I use an Oculus Rift and start playing a virtual reality game, and it measures my temperature and my pupil dilation, is that allowed to go into my medical history? Can insurance companies make decisions off of that? As much as those might sound like inner-paranoid questions, we have to ask them in order to set rules and have answers for them. And I think that's why I'm so grateful to this group for being willing to have these discussions across boundaries, where we often get stuck in silos.

I do want to come back to one thing that you said when we were talking about gaming, which is that you mentioned gendered harassment and the experience that girls often have walking into a digital space in which they have no protection. That really resonated, because it's not just girls. One of the things we talk about a lot within the DFRLab, and with our partners all around the world, is the consequence of digital spaces that are hostile to half of the world's population participating, particularly when those spaces are essential for democratic participation. They're essential for elected officials to be able to use, and I think this conversation brings forward that they may also be essential pathways to serving in government and making decisions about our societies. What have you seen as promising steps that can be taken, and what did you learn through participating in this report about what we should be doing about that problem, as something that affects not only our democracy but our national security?

 

LAUREN BUITTA: I mean, I think of a group of girls from Afghanistan who we provided cyber training for last summer. They were brought here, they were given cellphones, and within a week of having them they were removing their jobs and posting locational images, and it was really challenging for the organization that was shepherding them. So we were working with these young people to help them understand: how do you exist in an online space, coming from an oppressive place, when you want to flourish and be able to explore this new aspect of your identity online? And how quickly they were able to balance those concerns. One of the things we always talk about with certain populations we work with, especially those who have suffered a lot of trauma, is not disempowering targeted populations, because the last thing you want is for them to be scared online. That's probably why I'm a bit more of a Pollyanna idealist: we have to send the message that there are levers of change for them, that they can be activated. And one of the aspects of the child safety report that I felt so strongly about was that young people need to engage in risk-taking behavior; if we buy into the social science, it is a crucial aspect of their development. So if we remove those spaces for them to engage in that risk-taking behavior, what does that look like for historically targeted and marginalized populations who are harassed online? I felt really inspired by that conversation, because we are big proponents of preparing people who are targeted, but also of creating those spaces where they can engage in risk-taking behavior, because it is so vital for adolescent development. So I'm really looking forward to following up on parts of the Child Rights Annex, because that child rights perspective is so vital for vulnerable populations; the last thing we want to do is disempower them online.

 

ROSE JACKSON: That's really, really wonderful. I think we're coming toward the end of our time, so what I want to do is give each of you the chance to share any final thoughts: things you think we haven't considered, key takeaways about why this matters for national security, or what you hope people walking away from this conversation will start thinking about or engaging with. If I can start with you, Michael.

 

MICHAEL DANIEL: I mean, I think it's rooted in what we mean by safety, right? You're talking about protecting people, whether it's individuals or groups or whole societies, from harm, whether that's mental harm or emotional harm or physical harm, and I can't think of anything that's more core to national security than that. So, to me, the connection to national security is… a very clear path.

 

ROSE JACKSON: Wonderful, Michèle.

 

MICHÈLE FLOURNOY: Well, as we obsess in this town right now about regulation, I would actually suggest that we start at a higher level with a set of principles and norms. If we can agree on those and have them designed into the technology as it gets developed, I think the regulation will come. But you've got to know: What problem are you trying to solve? What norm are you trying to reinforce? What kinds of behaviors are you trying to encourage and discourage, if you're going to regulate well? So I'd elevate the fundamental conversation to those questions, and I think there's a lot of good grist for the mill in your report: what are the key principles, what are the normative guardrails, if you will, that we're trying to enshrine, and then figure out how.

 

ROSE JACKSON: Excellent, thank you. Lauren?

 

LAUREN BUITTA: It's hard to follow up on those two comments. With the words trust and safety, I sometimes feel like Tom Hanks in Big, like my brain is always stuck in adolescent mode. But the concept of trust is so simple, and it's such a foundational aspect not only of adolescent development, of having those trusted relationships, but also of our democracy. So I think if you believe that a strong democracy is vital to our national security, then you should pay attention to the issues impacting youth and make them part of that conversation, because I think they're the greatest untapped national security resource that we have. I think it would be a gross miscalculation to continue to leave them out of the discussion tables and decision-making tables as well.

 

ROSE JACKSON: Thank you. I think if there's one thing to take away from this entire report, and certainly from this conversation, it is that the strength of any solution to these problems is going to depend on many different people, with many different perspectives and areas of expertise, coming together. One of the worst things I've witnessed in tech conversations is people's belief that they don't know enough to participate or aren't technical enough to have a contribution. I really hope this conversation brought home how essential it is that all of us are part of this; it is an urgent conversation to have. I hope that all of you will join us again for more of the work to follow, and I really appreciate you taking the time to help us launch this and to write the entire report to begin with. Thank you. I'm going to turn to Graham Brookie, the senior director of the DFRLab, to close us out, and a giant thank you to all of you for staying with us for so long today. Thank you!

 

GRAHAM BROOKIE: Thank you to my stalwart colleague, Rose Jackson, and thank you so much to the panelists, Lauren, Michèle, and Michael, and to the panelists in the previous session, Bertram, Alexandra, [Matt], and Rumman. My job is purely logistical. My name is Graham Brookie. I'm the vice president of tech programs here at the Atlantic Council, and my job is to be thankful for the opportunity to learn from this community every day. So, thank you to the panelists; thank you to the forty-plus task force members who have spent time with us, and to the advisors, I would be remiss not to mention the advisors. But thank you most particularly to the staff, who have done an enormous amount of work in the last five months, as Kat laid out the stakes. When we launched this Task Force, any number of things had not happened yet, and by the time we're done with this conversation today, we're going to be moving from ChatGPT 5 to ChatGPT 7, or 8, or 9, or 10, or something like that, so we'll check in on that as a Task Force as soon as we wrap today's event. A huge thank you to Kat Duffy, the Task Force director; a huge thank you to Nikta Khani, the associate director of the Task Force; and to the entire DFRLab team that made this possible today, including the global team who are not with us here in Washington, DC. A huge thank you to the supporters of this Task Force: Eli Sugarman, the Hewlett Foundation, and Schmidt Futures. And most particularly, thank you to all of you who have joined us either online or in person today. You're now part of a growing community, and there is clearly a good amount of work to do going forward, including checking which version of ChatGPT we're on after this event.

So, thank you for spending the time today, and thank you so much for your continued commitment to being a part of this group. We’re in Washington, DC today; at the beginning of next week we’ll be in New York, and at the end of next week we’ll be in San Francisco. But today we’re in Washington, DC, so I would be remiss if I didn’t say that the Trust & Safety teams, the Trust & Safety community that you all are a part of, are, in my opinion, those kinds of jump teams for democracy. The role that community plays in national security and foreign policy, and the reason we’re having today’s conversation in Washington, DC at a foreign policy think tank, is that this is the team committed to the systemic promotion and protection of rights at, as Fred said at the very beginning of today’s event, a critical inflection point in which we get to decide what technology’s role, and increasing role, in society looks like, especially in an era of global competition for information around the world. So now, we’re all a part of it. Congratulations, thank you for your work. Thank you for your continued work.

The last logistical note for today is that I hope you get to spend some time with the report on the internet, which is where we’re consummately at. If you’re engaging with us online, the hashtag is #ScalingTrust, and the report can be found at atlanticcouncil.org. For those of you in person, most importantly, there is a reception with refreshments next door afterwards, and we will see you there. Thank you so much.

Speakers

  • Frederick Kempe, President and CEO, Atlantic Council
  • Kat Duffy, Task Force Director, Task Force for a Trustworthy Future Web
  • Rumman Chowdhury, Responsible AI Fellow, Berkman Klein Center
  • Alex Givens, CEO, Center for Democracy & Technology
  • Matt Soeth, Head of Trust & Safety, Spectrum Labs
  • Bertram Lee, Senior Policy Counsel, Data, Decision Making, and Artificial Intelligence, Future of Privacy Forum
  • Michèle Flournoy, Co-Founder and Managing Partner, WestExec Advisors
  • Lauren Buitta, CEO and Founder, Girl Security
  • Michael Daniel, CEO and President, Cyber Threat Alliance
  • Rose Jackson, Director, Democracy + Tech Initiative, Digital Forensic Research Lab
  • Graham Brookie, Vice President and Senior Director, Digital Forensic Research Lab

New York City: June 26, 2023

Protecting user rights and supporting innovation online

The Task Force collaborated with All Tech Is Human to host a lunch event in New York City at Betaworks exploring the “responsible tech” implications of Scaling Trust on the Web.

The event featured a panel with Task Force members Yoel Roth (technology policy fellow at UC Berkeley; nonresident scholar at the Carnegie Endowment for International Peace), Maya Wiley (president and chief executive officer, the Leadership Conference on Civil and Human Rights and the Leadership Conference Education Fund), Mike Masnick (founder of the Copia Institute and the blog Techdirt), and Kat Duffy (director, Task Force for a Trustworthy Future Web) as moderator.

The panel provided a broad overview of the task force report’s key findings and recommendations, with a particular focus on illustrating for multiple stakeholder groups how their expertise and goals are intrinsic to informing safer, more useful online spaces, and on sharing the knowledge necessary to help ensure that digital innovation builds more resilient societies instead of scaling marginalization. Each panelist reflected on how their respective experiences in tech, politics, and beyond are represented in the Task Force’s final report. Roth shared his perspective on the Task Force’s recommendations based on his background as a trust and safety practitioner; for Wiley, “every recommendation is a civil rights recommendation.”

Speakers

  • Kat Duffy, Task Force Director, Task Force for a Trustworthy Future Web
  • Yoel Roth, Technology Policy Fellow, UC Berkeley
  • Mike Masnick, Editor, Techdirt
  • Maya Wiley, President and CEO, Leadership Conference on Civil and Human Rights

Menlo Park: June 29, 2023

The future of trust and safety

The Task Force hosted its final launch event at the William and Flora Hewlett Foundation in Menlo Park, California, on June 29. The all-day event brought together industry, venture capital, philanthropy, and civil-society leaders for a series of four panels, conversations, and exchanges. Duco Advisors Founder and CEO Sidney Olinyk moderated the day-long session, which included introductions from Eli Sugarman of the Hewlett Foundation and Rose Jackson, lunchtime group discussions, a demo of a new trust and safety game to be released by Mike Masnick, and closing remarks by Kat Duffy.

The first formal panel, The future of trust and safety tooling, featured Alex Feerst (chief executive officer, Murmuration Labs), Sarah Oh (co-founder, T2), Hannah Fornero (group product manager, Discord), and Dave Anderson (founding partner, Beat Ventures), and was moderated by Clint Smith (chief legal officer, Discord). Panelists discussed the challenges and evolution of implementing trust and safety tooling, trade-offs for trust and safety teams, and harm prioritization. Fornero spoke about tooling considerations for audiences within companies, product users, and regulators. Leveraging her experience as a startup founder, Oh provided perspectives on the tooling vendor space and the “buy vs. build” dilemma. Feerst shared how tooling problems might become more complex in the wake of emerging technologies.

The second panel, What generative AI means for trust and safety, featured Kat Duffy (director, Task Force for a Trustworthy Future Web), Dave Willner (head of trust and safety, OpenAI), and Alex Stamos (director, Stanford Internet Observatory), and was moderated by Zoë Schiffer (managing editor, Platformer). Experts from academia, industry, and media discussed how generative AI might create new harms and exacerbate existing ones in online spaces, with a particular focus on how the speed and scale that generative AI enables will outstrip existing societal guardrails and governance structures. Experts forecast how generative AI may affect the 2024 elections and spoke about how to prevent AI systems trained on synthetic data from degrading and collapsing. Stamos outlined three major takeaways on AI: “One, people use AI, and there are good outcomes, and it improves their life. Two, people use AI, and it messes up. Three, the intentional use of AI to cause harm. A majority of this will be open-source.”

The third panel, The role of private investment in building the trust and safety market, featured Sara Ittelson (partner, Accel), Jamil Jaffer (venture partner and strategic advisor, Paladin Capital), and Dror Nahumi (general partner, Norwest Venture Partners), and was moderated by Shu Dar Yao (founder and managing partner, Lucid Capitalism). This timely discussion brought together venture capital professionals to dissect market trajectories for trust and safety and to explore how to drive major investment in this area, especially in the start-up space. All panelists agreed that trust and safety should be a consideration for every company, at any stage. How to incentivize funding for trust and safety within companies, especially at the early stage, remains a key area of concern among the panelists. “Trust and safety no longer just [hits] top-tier companies,” Ittelson said. “Any company of reasonable scale should expect to be a target.”

The day’s final panel, Where trust and safety is headed: A view from leadership, featured Karen Courington (vice president, Trust & Safety Trusted Experiences, Consumer, Google), Charlotte Willner (executive director, Trust and Safety Professional Association), and Victoire Rio (digital rights activist), and was moderated by Katherine Maher (former chief executive officer and executive director, Wikimedia Foundation). Leaders in industry, nongovernmental organizations, and civil society gave their perspectives on the evolution of trust and safety, trust and safety investments as drivers of organizational brand safety, and the state of trust and safety teams in the face of regulatory compliance. As tech regulations mount globally, companies, trust and safety practitioners, and civil society share a concern that trust and safety will be reduced to a checkbox. As Rio said regarding the state of trust and safety today, “something we are very concerned about in these contexts is that it’s becoming less about human rights and more about regulatory compliance.”

Speakers

  • Kat Duffy, Task Force Director, Task Force for a Trustworthy Future Web
  • Sidney Olinyk, Founder and CEO, Duco Advisors
  • Eli Sugarman, William and Flora Hewlett Foundation
  • Rose Jackson, Director, Democracy + Tech Initiative, Digital Forensic Research Lab
  • Alex Feerst, CEO, Murmuration Labs
  • Sarah Oh, Co-Founder, T2
  • Hannah Fornero, Group Product Manager, Discord
  • Dave Anderson, Founding Partner, Beat Ventures
  • Clint Smith, Chief Legal Officer, Discord
  • Dave Willner, Head of Trust and Safety, OpenAI
  • Alex Stamos, Director, Stanford Internet Observatory
  • Zoë Schiffer, Managing Editor, Platformer
  • Sara Ittelson, Partner, Accel
  • Jamil Jaffer, Venture Partner and Strategic Advisor, Paladin Capital
  • Dror Nahumi, General Partner, Norwest Venture Partners
  • Shu Dar Yao, Founder and Managing Partner, Lucid Capitalism
  • Karen Courington, Vice President, Trust & Safety Trusted Experiences, Consumer, Google
  • Charlotte Willner, Executive Director, Trust and Safety Professional Association
  • Victoire Rio, Digital Rights Activist
  • Katherine Maher, Former CEO and Executive Director, Wikimedia Foundation

Additional Task Force launch and preview events

RightsCon 2023

At RightsCon 2023 in San Jose, Costa Rica, Kat Duffy (director, Task Force for a Trustworthy Future Web) was joined by Agustina del Campo (director, Center for Studies on Freedom of Expression), Nighat Dad (executive director, Digital Rights Foundation), and Victoire Rio (digital rights advocate) to discuss the Task Force’s then-forthcoming major findings, namely the importance of inclusionary design in product, policy, and regulatory development. The conversation covered how government, civil society, and private industry must be involved; the entrenchment of self-regulation as an industry norm; and the growing professionalization of the trust and safety industry. Significantly, all of the panelists highlighted the limitations of trust and safety as a framing mechanism, noting its deep roots in American tech-industry cultural norms and flagging how governments have become more sophisticated in co-opting trust and safety mechanisms in order to increase repression at local and regional levels.

TrustCon 2023

The Task Force hosted a briefing on Scaling Trust on the Web at TrustCon, focusing on the implications for the trust and safety field and the community of practitioners gathered at the San Francisco conference. Task Force Director Kat Duffy moderated the session, which featured insights from Alex Feerst (chief executive officer, Murmuration Labs), Sarah Oh (co-founder, T2), and Nighat Dad (executive director, Digital Rights Foundation). Emphasizing the need for cross-sectoral collaboration and the task force report’s key finding on the importance of media, academic, and civil society expertise in trust and safety, Duffy outlined the intersectional dynamics at play for the future of trust and safety. Feerst provided historical context for how private-sector entities have come to understand trust and safety, largely through the lens of social media and American cultural norms. Oh and Dad spoke about the role of civil society in tech, especially in surfacing harms and risks that affect communities and individuals in global-majority countries and that might otherwise go unnoticed by Western companies and audiences. They also highlighted the significant benefits of spotting those harms early in order to ensure safer online spaces for all users.


Task Force for a Trustworthy Future Web: Rose Jackson speaks at the Washington, DC launch of Scaling Trust on the Web.
Task Force for a Trustworthy Future Web: Kat Duffy presents the report and Task Force's findings at the Washington, DC launch of Scaling Trust on the Web.
First panel at the Washington, DC launch of Scaling Trust on the Web. The panel featured Bertram Lee, Alex Givens, Rumman Chowdhury, and Matt Soeth.
Second panel at the Washington, DC launch of Scaling Trust on the Web. The panel featured Rose Jackson, Lauren Buitta, Michèle Flournoy, and Michael Daniel.
Senior Director and Vice President Graham Brookie closes out the Washington, DC launch of Scaling Trust on the Web.
All Tech is Human hosts Mike Masnick, Maya Wiley, Yoel Roth, and Kat Duffy at the Responsible Tech Mixer and New York City launch of Scaling Trust on the Web.
Task Force Director Kat Duffy moderates a panel featuring Task Force members Alex Feerst, Nighat Dad, and Sarah Oh at TrustCon 2023 in San Francisco.
Kat Duffy, Alex Stamos, Dave Willner, and Zoë Schiffer discuss what generative AI means for trust and safety.

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) has operationalized the study of disinformation by exposing falsehoods and fake news, documenting human rights abuses, and building digital resilience worldwide.