Atlantic Council

#DisinfoWeek Athens 2019

 

Introduction:

Geysha González, Deputy Director, Eurasia Center, Atlantic Council

 

Welcome Remarks:

The Hon. Geoffrey Pyatt,

U.S. Ambassador to Greece,

U.S. Embassy in Greece

 

Kyriakos Pierrakakis,

Research Director,

diaNEOsis

 

Damon Wilson,

Executive Vice President,

Atlantic Council

 

Storyteller:

Valentinos Tzekas,

Founder and CEO,

FightHoax

 

Panel: “Disinformation: Actors, Tools, and Solutions”

Panelists:

Ambassador Daniel Fried,

Distinguished Fellow, Eurasia Center,

Atlantic Council

 

Thodoris Georgakopoulos,

Content Director,

diaNEOsis

 

Samantha Bradshaw,

Researcher,

Oxford Internet Institute

 

Moderated By:

Marianna Kakaounaki,

Reporter,

Kathimerini

 

 

Location:  American School of Classical Studies, Athens, Greece

 

Time:  6:30 p.m. Local

Date:  Monday, March 4, 2019

 

GEYSHA GONZÁLEZ:  (Speaks in Greek.)

Good evening, everyone.  My name is Geysha González, and I’m the deputy director of the Eurasia Center at the Atlantic Council.

It is my immense pleasure to welcome you as we kick off #DisinfoWeek Europe.

Throughout the week, we’re going to be around Europe holding a series of events, talking about the common challenge of disinformation and strategizing with our partners and allies on how to best counter and fight back against malicious actors that seek to disrupt, distort and destabilize our democracies.

#DisinfoWeek Europe is a labor of love.  The Atlantic Council believes that we are stronger with allies.  And therefore, to manage one of the most pressing challenges, we must work on this together.

First, I’d like to thank the U.S. embassy in Greece and the U.S. mission to the EU for their kind support of this event.

I would also like to thank diaNEOsis for being an excellent partner and for making sure that this is a true transatlantic dialogue.

I’d also like to thank the American School of Classical Studies for hosting us, and Aegean Airlines. 

Lastly, I’d like to recognize #DisinfoWeek Europe partner Politico Europe.

Before I welcome Ambassador Pyatt to the stage, let me take this opportunity to talk a little bit more about why we’re here.  Since 2016, we’ve been facing a challenge to democracies worldwide.  Disinformation is nothing new.  It is a challenge that has been around as long as there has been a sharing of information, but new technologies have made it so that it is easier and faster to spread malicious information across platforms.  So this discussion really is about, how do we work together to address this challenge?

For those of you watching at home, please use #DisinfoWeek or follow DisinfoPortal. 

I would also like to remind everyone that we have notecards circulating in case you have any questions that you would like to ask at any point during the discussion.

Ambassador Pyatt has long been a friend to the Atlantic Council.  We used to work with him very closely back when he was ambassador to Ukraine.  His time there from 2013 to 2016 gave him firsthand experience of the challenge of hybrid threats.

Ambassador Pyatt, I cede the floor to you.  (Applause.)

AMBASSADOR GEOFFREY PYATT:  Thank you.  Kalispera, everybody.

I’m delighted to welcome you all here tonight as we kick off a series of #DisinfoWeek conferences that the Atlantic Council is organizing across Europe.

But before I get to my remarks, I want to in particular acknowledge the presence here this evening of the Atlantic Council’s senior fellow Ambassador Dan Fried, who will speak later this evening.  As ambassador and assistant secretary of state, Dan has educated generations of diplomats, including me, regarding the importance of our transatlantic community and the need to stand up strongly to defend the West.  I can think of no one in America better suited to speak on the challenge of disinformation as Europe heads into a year of elections.  And I’m delighted that he is with us here in Greece for a couple of days.

Thank you, Dan.

Many of you have heard me speak about the pivotal geography of Greece, a country that has been at the crossroads of great-power competition for centuries.  So it’s only fitting that Greece is one of the hosts for a Europe-wide discussion on a topic that is dominating headlines and geopolitics around the world.

As Western democracies, both as governments and as citizens, we face an unprecedented onslaught of disinformation, accelerated by social media, that threatens to distort the open market of ideas by promoting lies and stoking hatred.  My own country has been and continues to be a prime target for disinformation, as illustrated by Russia’s manipulation of social media around the 2016 presidential elections.

Disinformation comes in many forms, but the most malicious is when disinformation is used as a tool of one state to destabilize another.  I lived this firsthand from 2013 to 2016, when Russia weaponized information to complement its military and hybrid warfare strategies, invading Crimea and attempting to dismember eastern Ukraine.  I regret that in responding to the crisis in Ukraine, we in the West were sometimes slow to recognize this hybrid warfare strategy for what it was.  But today there’s no excuse.

Almost everyone I talk to in Greece now understands the well-documented Russian malign influence operations in the 2016 United States presidential election.  Russian attempts to use propaganda in the United States are nothing new.  They existed long before Facebook or Twitter.  But they are much more pernicious today because social media has a multiplier impact and can spread so fast in our open societies.

What we saw in the United States in 2016 is an interference model that should concern all Western democracies.  In this model, the Russian state promotes fringe voices across the political spectrum from the far left to the far right, including groups who advocate violence or the overthrow of governments.  Russia attempts to foment and fund controversial causes and then foments and funds the opposing causes.  As my friend, colleague and recently departed Assistant Secretary of State Wes Mitchell phrased it, Russia tries to systematically inflame the perceived fault lines that exist within our society.  It tries to use an essential feature of our democracies, our openness and the free flow of information, to destabilize us.

Recognizing these challenges, we need to work together to develop comprehensive strategies to identify and combat disinformation.  That is why discussions like the one here tonight are so important. 

The U.S. government is committed to fight malign influence and disinformation in our transatlantic community.  One of the best defenses against disinformation is a free and transparent news media.  In Europe, the United States is engaging with partners like the Atlantic Council and diaNEOsis to promote healthy and robust public debates based on facts, evidence and reason.  We believe a well-informed citizenry is key to the strength of our democratic institutions.

As most of you know, I’m a fairly prolific user of social media to support my diplomatic work.  And while I believe that we can and should employ technological tools to make our democracies work better and to communicate more broadly, we also have to grapple with managing the firehose of information that these tools can produce.  As technologies allow us to share information more easily, these same technologies also make it easier to manipulate information.

But despite these challenges, I am certain that freedom of expression and freedom of speech will remain a bedrock of our democratic societies.  It is one of the values that binds us together.  And that technology will serve over the long term to strengthen and to deepen our democracies.

So tonight, here in the birthplace of democracy,  I hope your discussion will generate concrete ideas for how to increase media literacy, promote professional journalism and create even stronger networks of open dialogue, fact checking and resource sharing in the fight against misinformation.

Thank you all so much for joining us.  And I look forward to a productive and lively discussion.  Efcharistó polý.  (Applause.)

GONZÁLEZ:  And just a brief reminder, please silence your phones for the discussion.  And again, follow along using #DisinfoWeek.

It is now my pleasure to introduce our partner Kyriakos Pierrakakis – close, close – who is the director of research at diaNEOsis.  If you could join us up here.  (Applause.)

KYRIAKOS PIERRAKAKIS:  Thank you very much.

Ambassador Pyatt, Ambassador Fried, distinguished guests, let me begin by expressing our gratitude to the Atlantic Council and to the U.S. embassy in Athens for choosing diaNEOsis as a partner for tonight’s event.

As most of you know, diaNEOsis is a research and policy organization.  We’re a bit more than three years old.  Our focus is on creating an economic and social reconstruction plan for the country, for Greece, in a technocratic way.  We want the series of plans we create and publish to cumulatively shape a technocratic road map for the country.

And we focus a lot on policy.  As you know, most of you, even the non-Greek speakers, in Greek, we don’t necessarily distinguish between policy and politics.  We use the same word, we use the word “politiki.”  So what we wanted to do, in a sense, was to create a space for technocratic solutions for policy.

And while we believe in that strongly, what we have been experiencing is that those lines are sometimes a little bit blurry; they’re not really well defined, because the evolution of policy is shaped by politics, which is in turn shaped by the media ecosystem.  And this ecosystem has obviously changed dramatically as a function of technology.

And before touching upon disinformation, let me attempt to connect technology a little bit with politics.  I remember from when I was a student in college that, when studying campaigns, the idea was that the campaigns that won, the successful politicians, were the ones who mastered the technologies of their times.  This was the case with Lincoln and the newspapers.  This was the case with FDR and the radio, his famous fireside chats.  This was the case with television, with Kennedy and with Ronald Reagan.  This was the case later on with data and technology; Barack Obama’s second campaign was famous for that.  And this was the case with the latest campaign of Donald Trump.

And especially with regards to the second Obama campaign, which was supposed to be transformative in terms of using data and technology, one remembers how campaigning had entered this world.  Political ad campaigns had enough accumulated data to target you, to tell you what you wanted to hear and what you needed to know, depending on the candidate.  I remember particularly that if you were a woman living on the West Coast in the 40-to-49-year-old demographic, the person to carry the pro-Obama message was none other than George Clooney.  The campaigns knew that the best person to talk to your friends about their voting preferences was none other than yourself, and told you which friends to talk to.

And these technologies became step by step even more sophisticated and a new set of issues emerged.  You have probably seen the latest movie with Benedict Cumberbatch about Brexit and the Out campaign.  You have heard obviously the news vis-à-vis the result of the latest U.S. presidential election.  So in politics, before we enter geopolitics, mastery over the technology effectively constitutes mastery over the political.  And in this sense, the medium often becomes the message, as the old McLuhan quote said.

We not only communicate in 140 characters, but we have started, in terms of the policy landscape, to think in 140 characters.  And this is certainly the case about the average voter as well. 

Obviously, this type of public discourse loses sophistication and depth, and it’s not necessarily commensurate with the complexity of the issues and the necessities of the times.  But the nature of the medium of our times, the internet, is such that every content consumer has the capacity to become a content producer.  And this type of medium, as I have described it, can be particularly enabling for nonstate actors and for states with geopolitically revisionist agendas, allowing them to punch significantly above their respective weights.

In a world where the barriers to entry continuously drop because of the nature of these technologies, the promotion of such agendas becomes a much easier task.  Broadly speaking, it’s much easier to destroy rather than create when you use 140 characters.  It’s easier to deconstruct rather than construct.  It’s easier to break down rather than build.

And anybody whose aspirations involve the disintegration of the current system and the injection of an element of instability, both on a domestic and on an international level, stands a lot to gain.  We have seen those types of initiatives in national elections, we have seen them even at the senatorial level in the United States, and we have seen them in regional elections all over the world.

Especially as what we describe now as the fourth industrial revolution, the convergence between the digital, the physical and the biological, takes shape, disruption will obviously become the new norm.  And in such a context, many might flirt with even Luddite and technophobic aspirations.  However, in my view, this is the wrong approach.

When it comes to the future, it is the following statement that holds:  You either shape it yourself or you end up being shaped by it.  Disinformation campaigns and information warfare more broadly constitute an existential threat, not only to the national sovereignty of states, but to the stability of the international system overall.  If democracy can be hacked, then the responsibility of us, both as democracies and as citizens, is grave.  And we must guard against this emerging environment with the proper policies and initiatives.  And the ground is fertile for problematic approaches. 

At diaNEOsis we do a big poll every year vis-à-vis fake news, and we try to measure conspiracy theories.  The most famous conspiracy theory, as most of you know, is the airplane fumes: that they are spraying us with some type of gas.  Apparently, 28.5 percent of Greeks believe that, and another 10 percent feel that it is a possibility.  So the total number is big, and it is fertile ground for fake news to exploit.

So in this sense, our responsibility is to create the necessary mechanisms to monitor and intervene when necessary.  And we need to take into account that, even though we don’t necessarily connect those issues with the Greek context, 2019 is going to be a year of multiple elections.  Greeks haven’t voted in the last four years.  It’s the first time in the last 20 years, I think, that we haven’t had an intervening election between national elections, so the possibilities are out there.

And the obvious first step in such an environment is to raise awareness.  And in this regard, we’re delighted to be cohosting tonight’s event.  Thank you very much.  (Applause.)

GONZÁLEZ:  I will try your last name again next year.  (Laughter.)  Thank you so much.  We are really truly delighted.  DiaNEOsis is a great partner.

I’d like to introduce my colleague, Executive Vice President Damon Wilson.  (Applause.)

DAMON WILSON:  Thank you very much, Geysha.

Good evening everyone.  Thank you so much for being with us.

I’m Damon Wilson, the executive vice president at the Atlantic Council.  Just really delighted to be able to kick off this event and to have Ambassador Geoff Pyatt here with us to open Disinformation Week.

And our partners at diaNEOsis, we’re really delighted to have you with us, Kyriakos.  It’s a fantastic turnout, fantastic group.

We’re kicking off Disinformation Week here in Athens because this reflects the ethos of the Atlantic Council.  You’ve heard Kyriakos describe the intent of how his organization is trying to be a different kind of actor and institution here in Greece.  Well, in the United States, the Atlantic Council isn’t just a think tank.  We’re an organization, a community of people committed to the United States working with our allies and friends on solving the biggest global challenges we face.  We’re a group of people that want to figure out, what do we do about these issues?  It’s a community that wants to defend core principles and solve problems based on evidence-based analysis.

And we see a world right now that’s in a little bit of turmoil.  We fear – and in many respects, these challenges are how we define the strategy of what the Atlantic Council community is doing – the fraying of Western democracies, the rise of autocracies, and so we must go on offense to defend democracies and to check the authoritarians.

The pace and change of technology that’s disrupted many of our societies, we feel we have to figure out how to harness technology for good and ensure that the free world gets there before the authoritarians.

The erosion of a rules-based international order that literally the founders of the Atlantic Council helped build after World War II, where we don’t just defend the past order, but we’ve got to play an active role in working with stakeholders and working with you to shape the global system and make it fit for purpose for the future.

The issue of the return of great-power conflict and what it means for our countries, so we have to help navigate this to avoid a World War III.

And an uncertain role for our country, the United States, and its position in the world.  And it’s why the Atlantic Council is committed to constructive American engagement with its allies and partners and to fostering public support for that position.

And finally, to deal with the challenge we face, what you could arguably say is a leadership deficit in the transatlantic community, the leadership that is required to tackle these issues.  And it’s why the Atlantic Council has made such a deep investment in the next generation of leaders that will have the capability and the networks to solve these problems.  We’re so honored we’ve got so many of them with us here in Athens as part of our Millennium Fellowship program.

So that’s how Disinformation Week fits into our strategy.  This is a weeklong series of convenings across Europe that seek to advance the transatlantic partnership on countering disinformation.  And we are glad to be here with so many friends in Athens to launch this important effort.

I’m going to echo, as Geysha did, a real thank-you to the extraordinary team at the U.S. mission to the EU, and particularly the team here in Athens at the U.S. embassy in Greece, who have been fantastic.  And we’re really delighted to begin a partnership with diaNEOsis, a really remarkable organization here in Greece, as well as with Politico Europe.

We couldn’t have done this without the support of Aegean Airlines.  We’re grateful for their partnership.  And we’re delighted to have so many folks from Kathimerini here to help cover our programming.

Some of you may have noticed that we’ve been in Greece for a few days.  The Atlantic Council turned out in force at the Delphi Economic Forum, where earlier today Ambassador Richard Morningstar, who is with us, accepted the Prometheus Award and gave a lecture on energy security, with Ambassador Fried and Geysha leading our team on disinformation.  This is part of the Atlantic Council’s commitment to bolstering the U.S.-Greek relationship at a time, after a tough decade, when there are extraordinary opportunities for Greece and the United States working together within NATO and within the U.S.-EU relationship.

And it’s why we’re so pleased to have our own Greek team, our board member Harry Sachinis, who is here in Athens, and one of our fellows Artemis Seaford, who is with us tonight as well.

So this event is just one part of the Atlantic Council’s body of work in identifying, exposing and fighting disinformation campaigns.  And as you heard from Geysha, the Council has been working on Kremlin-linked disinformation certainly since 2014.  That’s when the Kremlin began to use a massive campaign of distraction and denial to obfuscate its aggression in eastern Ukraine.  And since then, we’ve seen cyberattacks and disinformation campaigns to disrupt, distort and destabilize democracies worldwide.

So in response, the Atlantic Council hasn’t just analyzed this, but has structured ourselves to go on offense to defend our democracies.  We’ve published numerous reports and analytical papers on these issues, convened key stakeholders to strengthen our efforts across lines, launched the Disinformation Portal to create a platform for partners and allies to coordinate these efforts, and created the Digital Forensic Research Lab to help identify and expose disinformation in real time as it sows division in our societies.

And while the Kremlin is still a major actor in the disinformation space, it’s not the only one.  Today, both state and nonstate actors can employ a range of tools to conduct malign influence operations.  And that’s why it’s key that we work with our partners and allies to address this challenge that is affecting democracies worldwide.

So tonight, this conversation will kick off a weeklong series of strategic dialogues.  We’re going to hear from some incredible experts tonight as they lay out the major malicious actors and tools in the disinformation space and discuss potential solutions.  Ambassador Dan Fried, who is one of my mentors, if not my actual boss in the U.S. government, and now a colleague at the Atlantic Council, will be guiding this discussion throughout Europe, continuing on tomorrow to Madrid, Spain, for a similar discussion, and then on Thursday and Friday to Brussels for a major capstone forum in the heart of Europe’s capital that will touch on these topics, from election interference to the Balkans, to bots and trolls, deep fakes, and the debate over government regulation.

So along the way and tonight, I hope that we can turn to these experts to dig deep, getting past the what, the why and even the how of the issue, and driving us towards realistic, genuine solutions to the problems and threats that disinformation presents.  We extend an invitation to those of you who are active in this field to rejoin us in London on June 20th and 21st, when we convene our 360/OS Open Source Summit, bringing together what we call Digital Sherlocks to help deepen the networks of people combating this type of disinformation.

So we hope you enjoy tonight’s conversation, and we’re really looking forward to the rest of this.

I’m going to turn it back over to Geysha to lead us through the evening.  Thank you very much.  (Applause.)

GONZÁLEZ:  So we’ve welcomed you.  You’ve silenced your phones.  You know that the hashtag is #DisinfoWeek.  You know that you can follow us at DisinfoPortal.

And now we can get started.

And again, please do note that we have notecards circulating for you to ask any questions that you may have throughout the session.

It is my absolute pleasure to welcome Valentinos Tzekas for his discussion on fighting hoaxes in Greece.

VALENTINOS TZEKAS:  Hi, everyone.

So before I explain what our job at FightHoax is and present two interesting studies that we conducted together with our partners, I want to take a step back and reminisce about a time when I was a kid.  My father was, and still is, a photojournalist at a local newspaper back in my hometown in Greece.

For countless days and nights, I remember him taking me to the newsroom so I could observe how information is gathered and how everything comes together to create a really interesting paper that people are going to buy and discuss.  And, of course, it was fascinating, and I was really eager to observe how all these journalists act in this jungle, how they can grab your attention and profit from it and, at the same time, form our reading habits.  And I did remain curious.

So fast forward to 2016.  We all know that the U.S. elections basically shook the world.  For me, at least, it was the very first time that I saw how technology can really impact the way that humans think, the way that humans debate, the way that humans vote.  And I remember that hoaxes and extreme opinions started to emerge from basically everywhere.  We hit all-time-high records, and I was really disturbed by all these hoaxes and the misdirected articles that went viral.

So one day, I sat down and tried a brainteaser.  I was thinking that we were losing this war for knowledge, losing the war for common logic, and I was feeling really bad.  So how could I create an automated algorithm to perform all the complex steps that humans, that journalists, that fact-seekers do in order to extract actionable insights and, at the same time, filter out harmful content, extreme opinion, disinformation and potential misinformation articles?

At first, it seemed really impossible.  I mean, think about this.  We have super-limited human capabilities compared with machines.  And we humans can’t even agree on what is true, what is false, what is an opinion, what is a fact.  But then I realized that machines, with the proper datasets and the proper training, have no feelings and no biases and, of course, could judge some aspects of news articles actually better than humans do, with the right training, of course.
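
A minimal sketch of the kind of classifier he describes: train a text model on labeled articles, then score new ones.  The articles and labels below are hypothetical placeholders, not FightHoax’s actual pipeline.

```python
# Toy hoax classifier: TF-IDF features plus logistic regression.
# The training articles and their labels (1 = hoax, 0 = credible) are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Miracle cure doctors don't want you to know about",
    "Parliament passed the budget bill on Tuesday after a lengthy debate",
    "Celebrity secretly replaced by clone, insider claims",
    "The central bank left interest rates unchanged, citing stable inflation",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Score an unseen article: probability that it is a hoax (class 1).
print(model.predict_proba(["Shocking photo proves the moon landing was staged"])[0][1])
```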

So after almost three years of hard work, FightHoax is capable of reading, analyzing and deeply understanding thousands of news articles in just a few minutes.  We are on a mission to help brands and advertising companies filter harmful news content out of their campaigns, such as fake news, extreme opinion and low-quality content.  And in this way, we are able to cut off the advertising revenue motives of the bad actors that create low-quality content or fake news.

We have recently raised our second round of investment, so I suggest reading the interview we did with Forbes.

So over the past few years, we have been working closely with news agencies, academic institutions and organizations all over the world, doing some large-scale media landscape research on a number of important issues.  In the very first case study, conducted together with the Ethnos newspaper in Greece, we analyzed a number of both foreign and local news articles that covered the FYROM name change negotiations during 2018.  The majority of the FYROM name change coverage came from center-left-leaning political sources.  And the reportage started mainly from well-known sources and publications.  But as time passed, we saw more small, unknown and local news organizations covering the topic deeply.

The rise of reportage from many unknown sources means that the danger of fake news, rumors and extreme opinions is really high.  And this theory was confirmed, basically, because at the same time 48 percent of the authors had an unknown digital footprint.  This means that the names were completely made up, with no social media, Wikipedia or historical footprint.
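
A toy version of that author-footprint check: count what share of bylines have no known digital presence.  The lookup table and bylines below are hypothetical.

```python
# Hypothetical set of author names with a verifiable public presence
# (social media, Wikipedia, prior publication history).
known_footprints = {"maria papadopoulou", "nikos ioannou"}

bylines = ["Maria Papadopoulou", "John Qwerty", "Nikos Ioannou",
           "Alex Asdfgh", "Maria Papadopoulou", "Pat Zxcvb"]

# Flag bylines with no known footprint and report their share.
unknown = [b for b in bylines if b.lower() not in known_footprints]
print(f"{100 * len(unknown) / len(bylines):.0f}% of bylines have an unknown footprint")
```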

After that, we discovered that the very first actions toward solving the FYROM name issue started around January 2018.  The articles had an overall feeling of joy.  And I do remember a lot of people saying, all right, we’re going to solve this problem, we’re going to find a name, and everyone is going to be happy.  But as the months passed, feelings of sadness and anger conquered the landscape.  People got angry and protested.  We all remember that.

Finally, we tracked the most-mentioned entities and keywords so we could actually see how the topic evolved month by month.  The interesting part here is that the journalists from the Ethnos newspaper observed the chart and tried to predict what the proposed name was going to be.  They said, all right, we see a spike of mentions right before June, so the name is definitely going to have the word “Macedonia” in it.  And they were right.

So this is a really interesting study that shows how technology can analyze thousands and thousands of news articles day by day, assist humans in their research, and try to predict what is coming next.
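
A toy sketch of the month-by-month mention tracking the study describes; the articles and keywords are hypothetical placeholders.

```python
# Count keyword mentions per month to see how a topic evolves over time.
from collections import Counter, defaultdict

articles = [
    {"month": "2018-01", "text": "Talks on the name dispute resumed in January"},
    {"month": "2018-05", "text": "Sources say Macedonia will appear in the name"},
    {"month": "2018-06", "text": "North Macedonia emerges as the leading proposal"},
]
keywords = ["macedonia", "name", "dispute"]

mentions = defaultdict(Counter)
for article in articles:
    words = article["text"].lower().split()
    for kw in keywords:
        mentions[article["month"]][kw] += words.count(kw)

# A spike in a keyword's count just before a decision is the kind of
# signal the Ethnos journalists read off the chart.
for month in sorted(mentions):
    print(month, dict(mentions[month]))
```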

And the second study that we conducted was a really interesting one, because we had discovered just 10 articles with more than 3 billion impressions combined across several news networks.  All these articles were completely fabricated fake news, with completely fabricated information.  They had 3 billion impressions in total, and no fact-checking organization or journalist has come out to this day to expose them as complete hoaxes.  So to this day, all 10 articles are still out there, and people keep sharing them.

Some details about these articles: they were all super short in length, they had no author byline, and most of them used extreme language that provoked feelings of anger, sadness and disgust.  And, of course, almost every article talked about politicians, political developments and huge companies like Apple and Tesla.

So what we observed here is that misinformation, propaganda, disinformation, extreme opinions and hate speech do not only affect minds and votes; at the same time, they affect the lives of many people, their health, and even the stock market.  The huge issue that we all face right now in the age of the internet is that we are up against enormous amounts of data and very limited human time.

Back in the old days, people relied on journalists to discard fake content and properly report the news that people most wanted to read.  Nowadays, I believe anyone can be considered a journalist.  Anyone can write a small blog piece, an article, even a social network post, advertise it with little or no money, and get thousands or even millions of clicks in less than 24 hours.

So, to conclude: the true meaning of the teacher is lost nowadays.  Journalists, media companies and social networks are the new teachers of the masses, of this new generation.  And just think about this.  Right now, right here, we’re about 400 people talking about ideas, discussing social issues that we all face.  But when a journalist appears on TV or even on the social networks, millions and millions of people listen to whatever he or she is saying.  And, of course, technology is amplifying this effect.

So keeping this as food for thought, I would really like to thank you and proceed to the next panel.  Thank you.  (Applause.)

GONZÁLEZ:  Thank you, Valentinos.  Thank you so much for that.  I thought that was excellent.

It is my pleasure to introduce Marianna Kakaounaki – close, yeah – from Kathimerini, one of the reporters, to moderate our discussion, and the rest of the panelists as well.

MARIANNA KAKAOUNAKI:  OK.  Good afternoon, everyone.  It’s a great pleasure to be here tonight.

I would like to thank the Atlantic Council, the U.S. embassy in Athens and diaNEOsis for bringing us together for this conversation.  I really feel it’s both very timely and very useful.  We’re only two months ahead of the European Parliament elections, we have Greek elections coming up this year, and actually, I think we have national elections in eight different countries in the region.  And we have established that disinformation goes far beyond just fake news or, you know, undermining our faith in the press or the political system.  It actually influences elections.  I think we saw that in the U.S. elections in 2016.

And for a while, I felt that we were kind of unprepared, kind of observing the problem and not knowing how to deal with it.  So that’s why I’m really looking forward to tonight’s panel.  We’re going to try to focus on solutions and, obviously, on the many challenges surrounding those.

With us tonight is Samantha Bradshaw.  She’s a researcher at the Oxford Internet Institute. 

I was stuck in bed for two weeks with my leg in a cast, so I read most of your work and it’s really interesting stuff, a lot about how different governments use social media for disinformation campaigns.

So Samantha is going to focus on the tools and the tactics, how they are used around the world.

Thodoris Georgakopoulos, he’s also a journalist.  He’s a writer and the editorial director for diaNEOsis.  And what I love most about his work, apart from the fact that he has a great sense of humor – and I’m sure you’ll figure that out very soon, no pressure there –

THODORIS GEORGAKOPOULOS:  Let me change with her.  (Laughter.)

KAKAOUNAKI:  – is that really, you know, they have their eyes and heart on the ground.  They’re not just locked up in a room analyzing statistics and doing theoretical reports.

I really feel like your work is very important at this critical time for the country.

GEORGAKOPOULOS:  Thank you.
KAKAOUNAKI:  And Ambassador Daniel Fried, he’s a distinguished fellow at the Atlantic Council and, I read, one of America’s longest-serving diplomats.  For 40 years he was a Foreign Service officer, so I’m sure he comes with a lot of insight, you know, and lessons learned from the Cold War at its coldest, probably, and what came after that.  And always a policymaker, so hopefully with a bias for solutions.  And that’s where he’s going to focus.

So as Geysha mentioned, you have some notecards; you can fill them in with questions, and we can address them at the end of our opening remarks.  Then we’ll have a moderated discussion.

Samantha, would you like to start by giving us an overview?

SAMANTHA BRADSHAW:  Great.  Yeah, thank you.  I would love to.

So I’ve been asked to paint some of the broad-stroke ideas around how different state actors are leveraging social media to automate suppression, to manufacture consensus and to undermine trust in our democracy and the vital institutions that support it.

And I wanted to start my talk tonight by reminding us all of how the internet was once celebrated as a metaphor for democracy.  It was supposed to be the medium that would allow us to communicate with one another, to express our fundamental human rights online, to mobilize political action and political protest.  And it held a lot of promise.

But fast forward to 2018.  Now we’re seeing genocide in Myanmar fueled by hate speech spread on Facebook.  We’re seeing people dying in India because of rumors and conspiracies shared on WhatsApp.  And, you know, we’ve talked about the U.S. election a lot, but those polarizing, populist sentiments that were present in the U.S. election are very present in many countries in Europe and in other places around the world, like Brazil, which has just elected a very populist president on the back of a very successful disinformation campaign on WhatsApp.

So I think this idea that technology is inherently liberating or inherently going to bring us democracy is not necessarily true.  It creates affordances that can support democracy, but it also has features that undermine it.  And there are three particular things about social media that I think are undermining democracy today.  Because of the characteristics of this technology, these are what we’re seeing hostile state actors or hostile domestic political groups use and exploit to undermine a lot of our debates online today.

And so the first of these affordances has to do with the algorithms and the automation that support how we find information, and also how information goes viral.  I think a lot of people here are familiar with the idea of a political bot on social media.  For those of you who aren’t, a bot is essentially a fake account, a piece of script designed to mimic human behavior.  It might like, share, retweet, even interact with real people online, but it’s not actually a real person.  The entire thing just runs automatically on social media.

And what it does is amplify content or ideas that might not actually be true.  But because it operates much faster, and might be operating throughout the entire day rather than just in set periods like you or I would, it can give a false sense of popularity, momentum or relevance around a particular idea or a particular person.  And so that might make a lot of these ideas and voices at the fringe seem much louder than they actually are.
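
One of the behavioral tells she mentions, round-the-clock posting, can be illustrated with a toy heuristic.  This is a hypothetical signal for illustration, not any platform’s actual detection method.

```python
# Compare the spread of posting hours: humans cluster in set periods,
# while simple bots post around the clock.
from math import log2

def hour_entropy(post_hours):
    """Shannon entropy of an account's posting-hour distribution
    (max ~4.58 bits for 24 uniformly used hours)."""
    counts = {}
    for h in post_hours:
        counts[h] = counts.get(h, 0) + 1
    total = len(post_hours)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical accounts: a human posting in the evening, a bot posting every hour.
human = [19, 20, 20, 21, 19, 22, 20, 21]
bot = list(range(24)) * 4

for name, hours in [("human", human), ("bot", bot)]:
    print(name, round(hour_entropy(hours), 2))  # the bot scores near the maximum
```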

The second characteristic of technology that I think changes the propaganda and persuasion game has to do with the data richness and the personalization that happen.  We all know that data forms the currency of our information economy.  So every time we interact with a post or a person online, social media platforms collect information about these actions, and they generate profiles about who we are as people, beyond just basic demographic information about our gender or where we live.  They have deep insights into what appeals to us, what interests us.  And when you combine this with propaganda through targeted advertisements, you can send very specific messages to very specific individuals or groups of individuals that appeal to them based on their preexisting beliefs and who they are as people.
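
A minimal sketch of the targeting logic she describes; the user profiles and the ad’s criteria are hypothetical.

```python
# Deliver a message only to users whose inferred profile matches its criteria.
profiles = [
    {"user": "a", "region": "west", "interests": {"energy", "film"}},
    {"user": "b", "region": "north", "interests": {"immigration", "sports"}},
    {"user": "c", "region": "west", "interests": {"film", "cooking"}},
]

ad = {"message": "Candidate X supports the film industry",
      "target": {"region": "west", "interest": "film"}}

audience = [p["user"] for p in profiles
            if p["region"] == ad["target"]["region"]
            and ad["target"]["interest"] in p["interests"]]
print(audience)  # ['a', 'c'] — everyone else never sees this message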

And this, of course, raises really critical concerns for democracy, because how can you hold political parties accountable if they’re sending different messages to different people based on who they are?  What I see on my social media feed is going to be completely different from what every single one of you in this audience sees because, you know, I’m so different from everyone, and that’s what makes us great.  Right?  But if there is no consistent message to hold onto, then how can we have a healthy public debate?

And so the third and final characteristic of technology that I’d like to point out has to do with the fact that we now have a multitude of voices online.  Unlike propaganda in the past, in which there was very clear messaging from particular sides, the narratives aren’t so clear anymore; it’s not about pushing one ideology or one political agenda, it’s about pushing multiple.  We already heard about how the Russians spent a lot of money supporting one side of an issue and then equally supporting the other side.  And there’s lots of evidence of that around Black Lives Matter issues in the U.S., particularly on race.  That’s one of the unique things that has changed about technology and the disinformation campaigns happening today.  We’re seeing different platforms being used to spread these messages as well.  It’s not only Twitter or Facebook anymore; we’re seeing these campaigns move into WhatsApp, and not just using text but also images, which can be a very powerful medium for making people remember something, making them feel emotionally connected to a particular issue.

So those, I think, are three fundamental changes in the technology today.  And, I guess to wrap this up, when I’m reflecting on some of these issues and the way that hostile state actors might leverage these technologies to sow division and distrust between us, I think they make one fundamental mistake: they assume that our differences are our weakness.  But we all know that diversity is the strength of democracy.

So using this as an opportunity to talk about some of these really challenging issues that often come up in disinformation campaigns around immigration, around inequality, I think this is a great opportunity for democracies to sort of wake up and learn to use these digital public spaces and have a healthy conversation about these technologies and how they can once again become that metaphor for democracy.  How can we start designing our social media algorithms better?  How can we start dealing with targeted advertisements or maybe funding these business models differently so that we can strengthen our digital public sphere?  Thank you.

KAKAOUNAKI:  Thank you very much.

Thodoris?

GEORGAKOPOULOS:  Thank you.  I will start by saying a few words about fake news in general.  Fake news, as you’ve gathered by now, is not any type of false information that exists online.  It’s not journalists’ mistakes.  It’s the intentional dissemination of false information in order to gain something.  According to this definition, the tabloid press has been doing it for decades, and political propaganda has used it since ancient times.  In a sense, fake news has existed forever.

And the reason why this type of disinformation is so successful and gains a lot of traction is that the human brain has inherent deficiencies, and that’s important to remember.  These deficiencies make us susceptible to lies and falsehoods.  Psychologists have studied those traits and have given them colorful or difficult names like cognitive dissonance, the backfire effect, anchoring or attribute substitution.

(Off mic.)  I hope you’ve heard it because I’m not going to repeat anything.  (Laughter.)  Those attributes – oh, wow.  (Laughter.)  Should I go again?  OK.  So I’ll assume that you’ve heard everything.

Those traits make us make stupid decisions, essentially.  They make it easy for us to fall for hoaxes.  And they generally make it easy for us to make a mess out of our lives.  It’s just part of being human.  We are vulnerable and susceptible to lies and these types of messages and disinformation.

The reason the problem is more acute now, and the reason we’re talking about it, is what Samantha touched upon: the way fake news is disseminated in our time.  In the past, as some of you remember, information traveled a very different way, through heavily controlled channels.  People consumed carefully curated packages of information, a newspaper, a 30-minute news program; they encountered a limited number of selected information bites every day.  Those bites were selected for them by other people, qualified people most of the time.  And among those information bites, they would invariably find things that they didn’t care for, weren’t interested in, or even disagreed with.

Fast forward to now.  Now the impact and the power of the traditional media companies that used to produce those packages of information has diminished greatly.  The purveyors of those curated packages no longer influence public opinion as much as they used to.  The majority of Greeks, for example, get their news from social media and the internet.  According to a recent study, in 2018, for the first time, more internet users in this country got their news from social media than from TV.  So people now get to curate their own package of information, and the algorithms make sure that they never, or almost never, see things that they don’t care about or that they disagree with.  And that’s essentially what the platforms were built for.  So that’s what has changed.

It’s not fake news per se; that has always existed.  Human beings are the same; we’re still just as vulnerable as we were before.  But now we have the social media platforms.  And suddenly, a lonely, troubled guy living in a basement or his mom’s spare bedroom can come across a wild conspiracy theory about a pedophile pizza parlor.  And not only that, but also a community that believes that conspiracy theory as well, which validates the veracity of this extraordinary lie.

And the effects are obvious.  My colleague Kyriakos Pierrakakis – Geysha – (laughter) – mentioned that 29 percent of Greeks believe that the white plumes behind airplanes are chemtrails meant to change the way our minds work, or something.  Up to 40 percent of Americans agree with that.

So now it seems that the platforms are, if not a big part of the problem, maybe even the entire problem.  So what can we do about it?

When I started talking about this issue at events a couple of years ago, and actually until very recently, I used to argue that one possible solution is to flood the internet with truth.  And I thought an organization like ours would have this mandate as well.  We could answer everything, provide accessible, credible information to everyone who runs across lies online.  However, many people think that this does not work very well, because even when it does work, it brings additional attention to the lies.

One of the reasons anti-vaxxers are a thing is the fact that their bizarre beliefs are refuted so loudly by everyone else.  This brings attention to them.  A lot of people in the media who would otherwise not be aware of antivaccine campaigns suddenly get the message that there is a controversy there.  So there’s a difficult balance to maintain when you try to answer disinformation directly.

Another solution – and I’ll try to be brief about this – is regulation.  It’s very complicated, but it’s a conversation that needs to start.  I was talking the other day at Delphi with Christopher Wylie, the Cambridge Analytica whistleblower.  He’s the guy who exposed the way his company used Facebook data to track vulnerable users and bombard them with messages before the 2016 election in the U.S.  And he told me many interesting things.  The most important thing he said is that the platforms themselves can absolutely build technical solutions to the problem.  They can find the trolls, they can track the bots, they can track those who are using their platforms as weapons.  Everything that Samantha described, they can probably solve technically if they want to, if they dedicate enough resources to that end.

I mean, Valentinos has built a fact-checking tool that works, by himself.  He’s, like, 12 years old.  (Laughter.)  I’m sure Google or Facebook can do it. 

You’re 22, I’m joking.

But the thing is that they don’t seem to want to.  Judging by the way they approach each scandal, it seems that they don’t really care.  It’s like it’s not part of their mandate.

Recently, a couple of Facebook executives came to Greece on a fact-finding mission.  Supposedly, the company is trying to form some sort of response and safeguard itself ahead of the European election.  The EU has forced these platforms – and when I say these platforms, I mean Google, Facebook and Twitter, those are the main platforms of disinformation, the most effective ones – the EU has forced them to submit monthly reports explaining what they are doing exactly, explicitly, to combat disinformation that takes place ahead of the election.  So I guess their trip was part of this effort.

And they asked for a meeting with me.  And I met them.  I can’t say exactly what we talked about because it was a closed meeting, but I can say that I was baffled.  The whistleblower, Christopher Wylie, was able, just by looking at my 80 most recent likes, to tell whether I’m married, where I live, what my interests are.  He was able to know everything about me.  And Facebook itself supposedly doesn’t know what Greek political parties think about the European Union, and they have to fly to Athens and ask me about it.  (Laughter.)  I mean, why don’t they look at my Facebook page?  It’s all on there.

So I think that this is a problem with the platforms themselves.  And perhaps the only solution can come from the platforms.  When I asked the whistleblower what can we as citizens do to protect ourselves from disinformation online, he said:  nothing.  We’re not supposed to do anything.  We shouldn’t have to do anything to stay safe online.  We should be able to go online, do anything we want without having our data weaponized and our timelines hijacked.  The platforms can do it.

The only thing that we can do is to demand that they implement innovative technical solutions to limit the effects of this problem, if not eliminate it outright.  And if they continue refusing to acknowledge the magnitude of the problem, to be held accountable and to take responsibility; if they refuse to build technical solutions and alter their infrastructure, then maybe they should be forced to.  It’s not going to be easy.  There are many caveats, the protection of freedom of speech being the most important one.  But, and by this I’m concluding these remarks, it’s about time we started talking about this seriously.

Thank you very much.  (Applause.)

KAKAOUNAKI:  Thank you, Thodoris.

Ambassador?

DANIEL FRIED:  Social media companies.  Your experience illustrates the problem, but also the direction of the solution; I’ll get to that.  I could not agree more.  That’s an insight.  But I want to work toward it.

Solutions exist to the problem of disinformation.  We are not hopeless.  We don’t have to throw up our hands and say that there’s nothing we can do.  And more importantly, solutions exist working within the framework of our tradition of freedom of speech and free expression.

Now, the set of policy solutions that I’ve been working on – I and Geysha and our colleagues have been working on at the Atlantic Council – focus on foreign-based disinformation, not domestic-based.  But there’s an overlap between the solutions.  Let’s not kid ourselves.  If we get the policies right to deal with foreign-based forms of disinformation, we will have helped our information ecosystems enormously.

I chose foreign-origin disinformation because, frankly, it’s easier for national governments and for the European Union to tackle foreign-based disinformation than it is to go immediately into the whole disinformation problem.  As an old policy guy, I’m a great believer in dividing up the problem and tackling it in digestible bits rather than trying to go up against everything at once and end up doing nothing except admiring the problem.

Work within democratic norms.  That means, first of all – and I’ll start with what we shouldn’t do – we should not be tempted to race to content controls.  Now, I know that in fighting terrorist – you know, ISIS – propaganda there have been some content controls.  Beheading videos are treated like child pornography; they are simply eliminated.  But this is not a tempting solution for foreign-based disinformation, because to do so would put governments in the position of being arbiters of truth.  I mean, who wants that?  All right, since I’m an American I can speak to my own government.  Do I want the Trump administration deciding what is true and what is fake news?  You know, probably not.  I don’t think any national government should be given that responsibility, nor is it necessary.

The solution for foreign-based disinformation, rather, lies in the direction of transparency and integrity, and possibly regulation along those lines.  Now, what do I mean by transparency and integrity?  Transparency means disclosing who actually is online.  In the United States, if there is a site called “Concerned Patriots from Texas” with extreme political views, they have every right as Americans to be online espousing their political views.  But if they are, in fact, not Americans from Texas but an arm of the Russian-controlled Internet Research Agency in St. Petersburg, I want them identified as such, OK?  Fake accounts and misleading accounts ought to be labeled.  We need IDs to drive a car.  Should we not be required to actually be the people we purport to be when we go online?

Integrity is related.  It means that if you are an account – you know, we’re in Greece – called “Greeks Concerned About FYROM,” I think Greek listeners would probably be interested to learn if that site were actually controlled by the Kremlin.  There is a difference between being American and purporting to be American when you are not.  Integrity and transparency.

Now, it follows from this that governments ought to have the ability to regulate political ads online.  Regulating commercial speech is a long tradition.  Cigarette advertisements have been banned in the United States for a generation.  There is nothing wrong with regulating commercial speech.

Bots.  A bot is a robot, OK?  Samantha gave an excellent description.  Human beings have freedom of speech rights.  Do bots?  My guess is probably not.  At least they can be identified.  And why can’t they simply be removed?  This is the point about technical capabilities.

Now, one problem with removing bots is that our companies in the West all use bots to create the impression of interest in a commercial product that may or may not exist.  It is part of the advertising model that social media companies sell to corporations.  But there is nothing inimical to freedom of speech and freedom of expression in regulating commercial speech.  So I would argue for the regulation of bots.

Algorithmic bias is tough to regulate.  But in the United States we used to have something called the Fairness Doctrine, which meant that national television networks needed to at least make an honest effort to show both sides of the same issues and not be partisan.  What does that mean with respect to algorithms?  Well, maybe it means that I in the United States should not be sent just confirmational news items telling me what I already believe.  Maybe there ought to be an algorithmic requirement to send me stuff on all sides of whatever issue I happen to be interested in.  Is this consistent with freedom of expression?  I think arguably it is.
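
One way to picture that “all sides” requirement: a toy re-ranker that interleaves feed items across viewpoint buckets instead of ranking purely by engagement.  The items, stance labels and scores below are hypothetical.

```python
# Round-robin across stance buckets so no single viewpoint dominates the top.
from itertools import zip_longest

feed = [
    {"id": 1, "stance": "pro", "score": 0.9},
    {"id": 2, "stance": "pro", "score": 0.8},
    {"id": 3, "stance": "anti", "score": 0.7},
    {"id": 4, "stance": "neutral", "score": 0.6},
    {"id": 5, "stance": "anti", "score": 0.5},
]

# Bucket by stance, keeping each bucket sorted by engagement score.
buckets = {}
for item in sorted(feed, key=lambda x: -x["score"]):
    buckets.setdefault(item["stance"], []).append(item)

# Interleave the buckets: the top of the feed now shows each viewpoint once
# before any viewpoint repeats.
diversified = [item for group in zip_longest(*buckets.values())
               for item in group if item is not None]
print([(i["id"], i["stance"]) for i in diversified])
```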

Then we get into the area of standard terms of service and definitions.  That sounds technical and wonky and fine print, but bear with me for a second.  If we want the social media companies to behave honestly and root out the trolls, the imposters, the fake accounts, the impersonators, maybe there need to be common terms of reference mandated by governments and the European Union.  Right now social media companies actually cannot talk to each other or develop common standards, because the definitions are all different.  Maybe we ought to mandate common definitions and then start regulating the issues of transparency and integrity.

Now, this process has actually already started.  As an American, at this point I’m supposed to say something bad about the bureaucracy in Brussels and the Commission, but in fact the European Commission is way ahead of the United States government in beginning to tackle the problem.  They have negotiated with social media companies a code of practice.  It’s vague.  It’s a little bit soft.  It’s capable of being interpreted in different ways, but at least they’ve got something out there.  And if you read the fine print of the code of practice, they say that they will judge social media companies by how much they do to remove foreign disinformation.  And if the social media companies don’t step up, the next step may be regulation.

Now, the combined power of the European Commission and potentially the United States to regulate social media companies together is pretty powerful.  The social media companies are going to listen to us – the “us” being the Europeans and Americans – if we get our act together, if we come up with a common story, and if we start talking to the social media companies from a position of strength but rooted in our common defense and adherence to freedom of expression.

Finally, the heroes of this story are not going to be governments or old policy people like myself.  They’re going to be 20-somethings, young people like the people at FightHoax, or the Digital Forensic Research Lab in the United States, or the Baltic Elves, or StopFake in Ukraine, or the EU DisinfoLab, a private group operating out of Belgium.  The heroes of this story are going to be the young activists who, in their own countries, are far more capable than foreigners of spotting foreign-based disinformation campaigns.  We should put our faith in them.

There was a question of – now, there was a question of what individuals should do.  Yeah, individuals should be able to go online and not have to worry about fake news.  But come on, let’s be real.  Since the beginning – since the printing press, new information technology has been exploited by extremists.  And they have usually moved faster.  The printing press gave us the Gutenberg Bible, but also all kinds of inflammatory religious tracts that ended up helping foment religious wars.  Radio, right, wonderful invention.  In the hands of Goebbels, a less-wonderful tool. 

How did we manage to deal with information technology revolutions in the past?  Well, we don’t worry about the printing press and the radio because over time and a couple of generations social and sometimes legal norms developed, and we learned how to live with these things within our – within our framework of freedom of expression.  The same will be true of the internet.  But our job is to foreshorten the period of adjustment and limit the damage. 

Now, I said at the beginning that the policy focus – or, at least my own policy focus – was on fighting foreign-based disinformation.  But if we do it right, it will create new norms which will affect and improve the social media information climate in all of our countries.  Now, America came to this problem late, and we did so only after we got hit on the head hard in 2016.  And I remember my Estonian friends used to – said to me after 2016:  Yeah, we’ve been telling you this for how many years?  Maybe now you’ll listen.

We can deal with this problem if we act together.  By together, I mean starting with the core of the world democratic community on both sides of the Atlantic – the EU and the United States.  Learn from each other, stop pretending the problem doesn’t exist, and remember to follow our own best traditions, and we can handle it.  Thank you.  (Applause.)

KAKAOUNAKI: The first thing that I take away is that we’re all in it together and that we need to work together to deal with this.  And I heard your opening remarks, I hear what Christopher Wylie told Thodoris at Delphi about, you know, how there are technical solutions.  And I know, for example, that Twitter once in a while suspends accounts until you give them, you know, a number, and you have to verify it.

But I still have a lot of concern and a lot of questions about how we deal with organic content.  And especially at a time when, you know, politicians use the notion of fake news even when, you know, we do reporting that affects them, like, how do we deal with it, without jeopardizing freedom of expression?  I know you went into it, but it’s – I think it’s my biggest concern and the next steps.  And if anyone wants to take –

GEORGAKOPOULOS: Well, it’s a major concern.  It’s maybe the most difficult and sensitive aspect.  The Chinese have solved it, but in the European Union, in the United States, we can’t take the same approach.  And that’s very good.  Not all cases are the same.  As Ambassador Fried said, when you have Russian trolls you can find them, the protection of their freedom of speech does not apply, you can eliminate them relatively easily.  When you come to issues like the anti-vaccine movement, at first sight it may seem like a freedom of expression issue, but a recent study showed that over 20 percent of anti-vaccine content on Facebook comes from just seven Facebook pages. 

Most of these hoaxes – the more virulent ones, the ones that survive for a very long time online – come from concentrated campaigns run from specific sources.  Those sources are easy to find.  When it comes to issues that are maybe not protected by freedom of expression – issues about public health, for example, where regulations in the European Union are quite strict in some countries – maybe there’s a real way to do something there.  Find the seven pages.  Maybe Facebook can tell us something about their ownership, how they operate.  Maybe some of them are clearly bots and not operated by humans.
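Finding such concentrated sources is, computationally, a simple aggregation.  A minimal sketch, on entirely made-up data, of how a handful of pages stands out once flagged posts are counted per source:

```python
from collections import Counter

# Hypothetical dataset: one (source_page, post_id) pair per flagged post.
flagged_posts = [
    ("Page A", 101), ("Page A", 102), ("Page B", 103),
    ("Page A", 104), ("Page C", 105), ("Page B", 106),
]

def top_sources(posts, n=7):
    """Count flagged posts per page. Concentrated campaigns show up as a
    handful of pages producing most of the content."""
    counts = Counter(page for page, _ in posts)
    total = len(posts)
    return [(page, c, c / total) for page, c in counts.most_common(n)]

for page, count, share in top_sources(flagged_posts):
    print(f"{page}: {count} flagged posts ({share:.0%} of the total)")
```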

So the thing is to make a decision that something has to be done.  And whatever the platforms decide to build as a technical solution to the problem, it needs to be implemented transparently.  And regulators need to be able to see what they are doing, to understand the rules, maybe have a say in what the rules should be.  And they should be watched – they should be watched over.  As it has turned out, these giant corporations are monopolies.  They have vast resources.  They have virtually no competition anymore.  They’re pretty much not regulated at all.  It’s a very complicated issue, but I think some steps can be taken.  Let’s start with something.  Let’s pick an issue and start with that.  Let’s have them eliminate bots, and then we’ll see.  We’ll move to more challenging issues.

BRADSHAW: I mean, if I could jump in for a second on this point, on eliminating bots, I think it’s really important to differentiate, first of all, between the kinds of bots.  As Ambassador Fried noted, there are many different kinds of bots online.  We’re not necessarily worried about The New York Times bot that automatically pushes stories that are published to people’s Twitter feeds.  It’s performing a fairly harmless function. 

Another great example of a bot that I like to tell people about was a bot created by an activist when Trump announced the first ban on immigrants.  And what this bot did was look at the record of some Jewish immigrants who had tried to come to America on a boat during World War II and were turned away.  And every minute, this bot would tweet the name of one of these people on the boat who was not granted asylum in the U.S.  And I thought this was a very powerful way of making a really important expression of political speech.
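Mechanically, a bot like that is tiny.  A minimal sketch – with a stubbed `post` function standing in for whatever platform API the real bot used, and placeholder data – of the timed loop at its core:

```python
import time

def post(message: str) -> None:
    """Stand-in for a real platform API call; the actual bot posted to Twitter."""
    print(message)

def run_memorial_bot(names: list[str]) -> None:
    """The whole mechanism is a timed loop over a fixed historical record:
    one name posted per minute."""
    for name in names:
        post(f"{name} was turned away and denied asylum.")
        time.sleep(60)  # one post per minute

run_memorial_bot(["Example Name One", "Example Name Two"])  # placeholder data
```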

So bots are not necessarily the problem in and of themselves.  The technology is not necessarily the problem.  It’s the way that it’s being used.  And so we see a lot of hostile state actors using bots to push all kinds of disinformation and amplify these voices at the fringe.  Those are the bots that we really want to take down.  One of the interesting anecdotes, though, from the research that we’ve been doing at the Computational Propaganda Project – and we were some of the first researchers to start studying this whole bot phenomenon back during Brexit, when no one actually knew what a bot was at the time.

A lot of the bot developers that we’ve interviewed, they pay attention to our methodological footnotes, and I’m sure they are doing the same with Twitter as well, to avoid detection and to make these accounts more resilient.  So they might not just purely or crudely automate posts anymore.  They might add this element of human curation to the account.  They might engage with real people online every now and then.  So it’s not a fully automated account.  When we were interviewing this bot developer, for example, we set our very basic bot identification threshold at 50 tweets per day. 

So if you tweeted more than 50 times, we considered that not really normal.  Most people don’t tweet that often.  And so you were considered a bot or a highly automated account.  One of our interviewees said, oh, yeah, I read that study and I took mine down to 45.  So this is a great example of how it’s a constant game of whac-a-mole when we look for purely technical solutions to these technical problems.
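That threshold rule, and why it is so easy to game, fits in a few lines.  A sketch, using the 50-tweets-a-day cutoff from the study as described above:

```python
THRESHOLD = 50  # the study's published cutoff: tweets per day

def is_highly_automated(tweets_per_day: int) -> bool:
    """The naive rule described above: more than 50 tweets a day gets
    flagged as a bot or highly automated account."""
    return tweets_per_day > THRESHOLD

print(is_highly_automated(120))  # True: crude, purely automated posting
print(is_highly_automated(45))   # False: the tuned-down bot slips through
```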

FRIED: There is no – in public policy, as in life, there is no such thing as purity and 100 percent.  All solutions are partial.  If we get everything right in the public policy that we’re discussing, some disinformation – a significant portion of disinformation – will get through.  So we’re talking about reducing it at the margin.  And in the meantime, educating our society so we develop a certain tolerance and ability to automatically discriminate.  There’s a certain percentage of the population which will believe anything.  Can’t help it.  There’s a certain percentage that won’t believe any garbage because they can make their own discrimination, they don’t need any help.  But most people in the middle are going to benefit from sound public policy.

Now, I think we could probably come up with public policy solutions to eliminate bots, particularly impersonator bots and foreign-based bots.  Bots are yesterday’s challenge.  Tomorrow’s challenge is going to be the deep fake.  What’s a deep fake?  Well, that’s a video of a major politician giving a speech where he says horrible things, or a video of a politician having a conversation saying disgusting things, that that politician actually never said.  It’s a complete fabrication.  But it looks like him, or her, sounds like the politician, and at first glance is indistinguishable from the real thing.  So deep fakes are tomorrow’s problem.  Well, maybe today’s.  But they’re the problem that is going to be cresting.  And we need technical solutions to be able to identify and remove them, particularly if they are foreign-based.
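One common technical approach – offered here only as a sketch, not as what any platform actually deploys – is to score sampled video frames with a trained manipulation detector and average the result.  The `score_frame` function below is a placeholder for such a model:

```python
import cv2  # OpenCV, for decoding video frames

def score_frame(frame) -> float:
    """Placeholder for a trained manipulation detector that returns the
    probability a frame is synthetic; a real system would load a model here."""
    return 0.0

def deepfake_score(video_path: str, sample_every: int = 30) -> float:
    """Average per-frame manipulation scores over sampled frames of a video."""
    cap = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(score_frame(frame))
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```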

But my point is that – you said whac-a-mole, and that’s absolutely right.  We need to step back and think of broader policy solutions – algorithmic bias, for example – to make sure that the social media companies’ algorithms don’t drive us into our respective ideological corners.  And we have to start looking at information.  The commodity that is sold on the internet is you and me.  Our profiles are sold to advertisers.  Now, we may not realize this – and maybe the Millennials don’t mind – but the fact is that means that social media companies are in fact research agencies for the Russian intelligence services, and the Chinese, and God knows who else.  We may want to explore – and I say this cautiously rather than pushing it forward as an immediate policy objective – applying information fiduciary duties to social media companies, to limit their ability to sell us and our profiles online. 

You know, of course the social media companies know more than those – pardon me – idiot researchers who came out and pretended not to know anything about Greece.  But they probably weren’t lying.  They probably are just not applying their own techniques to the job that they’re supposedly doing when they come out here – which, by the way, leads to the suspicion that we have to change the incentive structure for social media companies, so they get a little more serious, OK?  Which is well within our power.  But it’s this kind of conversation which suggests to me that the policy answers do exist.  If we’re already talking about the different ways to approach the problem of bots, or the modes of approaching algorithmic bias, we’re already halfway there.

And then we need to get serious about the solutions, put them into effect, and make sure that the – that we’re supporting the EU and that we, Americans, are insisting that our own government take this seriously.  And a lot of people in the U.S. – don’t mistake my point.  There are a lot of people in the U.S. government, in the current administration, who are taking it seriously and are trying to do the right thing.  We’re just having trouble as a government getting organized, but we’ll get there, I suspect.  (Laughter.)

KAKAOUNAKI: I’m glad you mentioned deep fakes because I couldn’t believe it when I saw, you know, the things they are doing with deep fakes, and the technology that imitates voice.  You know, those actors are just – they’re moving really fast, the digital actors.
FRIED: Well, the good civil society groups are just – are going to be brilliant at detecting the deep fakes, and then exposing them.  Geoff Pyatt is here.  I remember when he was concerned about the Russians lying about the shootdown of the Malaysian airliner over Ukraine.  And it turned out that the Atlantic Council’s own Digital Forensic Research Lab was able to demonstrate not only which units had done it, but which officers had done it, because they were as careless on their own personal social media accounts as Millennials everywhere, as it turns out.  (Laughter.)  And they got nailed, OK?  They were nailed. 

The heroes are going to be the 20-somethings that are doing this and will probably be able to identify deep fakes and expose them in real time.  And then we need to deal with the bots, and the trolls, and the algorithmic bias which drives forward a deep fake piece of content so far that everybody reads about it before the truth exposing it comes out.

KAKAOUNAKI: Yeah.  I like how he’s very positive, huh?  Optimistic.  (Laughs.)
FRIED: Well, it’s the American deformación, you know?  I can’t help it.  (Laughter.)
KAKAOUNAKI: And we – let me remind you, you have these cards.  We’re more than happy to take your questions as well.

While George collects them, another question that has to do with, you know, regulating social media.  Europe’s security commissioner criticized the lack of effort in these monthly reports they have to submit – they probably didn’t do a very good job with them.  (Laughs.)  And actually, the U.S. Senate Intelligence Committee also told them off – you know, the social media companies: your platforms are being misused, do something or we’re going to do something about it.  And from one point of view, you know, you think that those are platforms, they’re not publishers.  But then, for example, we have this amazing investigative work from The New York Times that showed that Facebook actually knew a lot more about Russian interference in the U.S. elections, and they hid it. 

So what’s the situation now?  Like, are they helpful?  Are they pretending to be helpful?  Maybe Samantha with her research has an insight.

BRADSHAW: Yeah.  I can jump on this one or start off – start off the conversation.  So I think I hold a little bit of a controversial view on this panel.  But I think the platforms have done a lot around content.  The platforms, through their terms of service, already have a much wider scope for regulating content than any government.  Where they have really failed is in two areas.  The first has to do with actually enforcing the terms that already existed – so actually taking down the content before it became problematic and became this major issue that we’re now still grappling with. 

The second area where I think they failed is not around the actual content itself, but around, you know, this idea of information fiduciaries – doing good with our data, and actually developing business models that protect personal privacy.  I think the content is almost a symptom of some of these underlying problems that have to do with the way that algorithms structure and incentivize content to go viral, the way that our data is used to target us with particular messages.  This fundamental question of how the business model actually works is the systemic challenge underlying a lot of the bad that we’re seeing today.

And actually, I’d like to add a third point in terms of where I think they failed, because I just thought of it now.  I think they failed around the areas of transparency, like some of my panelists have already noted.  When we’re thinking about regulation and how to proceed, I don’t think it has to do with regulating the content, right.  Since 2016, we’ve already seen more than 42 different governments implement new laws designed to tackle some aspect of fake news. 

A lot of these laws have been in authoritarian regimes that are using fake news as a way to legitimize further censorship and further control over their populations.  It’s not about regulating content.  It’s about developing some kind of procedural accountability within these platforms.  So not what the rules should say but how were these rules developed.  Are the rules successful at actually tackling the problem and how do we measure that?  How adaptable are they to some of these unanticipated consequences? 

It’s developing transparency in the structures of these platforms, and allowing governments access to the way that these platforms are making these really important, critical decisions about our content and our use, that is really, really important here – and, I think, the area that regulators should really be focused on when they’re thinking about this problem.  Because if we start debating speech, we’re going to be debating it until the end of time, because speech is such a contentious issue. 

Every government has a different point of view as to how that should be regulated, even within democracies.  The U.S. model and Europe’s model of regulating speech are very different, right?  For very extreme cases on social media, that’s why we have national courts to deal with these issues around speech and around content.  And – yeah, I guess that’s where they should stay.  When we’re thinking about new regulatory models and new regulatory approaches, we need to look at the underlying business models.  We need to look at procedural accountability.  We need to look at information fiduciary duties.  And we need to avoid content.

GEORGAKOPOULOS: I agree.  I’ve talked to some people who work at Twitter.  Apparently, the company culture in those platforms really doesn’t focus on the effects that the content has on societies – or at least until very recently these companies simply didn’t care about these effects, to some extent.  And that’s maybe part of the reason why they don’t enforce their own terms of use – because they do have terms of use, and some of them are quite strict.

Someone could argue that the president of the United States has violated Twitter’s terms of use several times – terms about hate speech, about other things.  It’s problematic content.  But, of course, Twitter would never consider deleting his account, even for a minute, for a timeout.  So the companies really don’t have the incentives to enforce their own rules.  Regulation is very problematic even when it comes to transparency, business-model issues, and economic issues.  It’s even more problematic when it comes to content – virtually impossible.  But as Ambassador Fried said, the European Union, which is very strict in other parts of the digital realm, is probably the most appropriate venue for regulation to take place – or maybe the United States could take the lead, but the European Union could be a positive venue for that.  But I don’t see it happening, because there needs to be some sort of positive cooperation from the platforms themselves, and I think that their culture prevents them from approaching it in the same way.

FRIED: I don’t dispute at all that there’s a cultural resistance in the social media companies.  They have done a pretty good job with content but that’s pornography and beheading videos – that sort of stuff.  I think that their behavior will change as the U.S. – if and as the United States and Europe combine their leverage and make it clear that the social media companies have to up their game.  Their business model – they’re operating within the culture of Silicon Valley and they think themselves masters of the universe in a post-national paradise and they have no idea the way they look to, for example, the Russian intelligence service.  Suckers.  OK.  Useful idiots. 

But the point is not to get mad at the social media companies.  The point is to channel that sense of frustration into something positive.  The European Union is ahead of the United States in regulation, or at least in the code of practice, which is not yet regulation.  But the basis for common action is there, and I can easily imagine the United States and Europe informally developing common standards and then setting up – I mean, as an old diplomat, the thing to do is to set up an informal mechanism – maybe formal, but start informal – between the U.S., the EU, key stakeholders, and bringing in civil society, and then use that to have a conversation with the social media companies.  Like, we’ve got a lot of leverage – we can use it, and they will adjust.  Their culture is malleable.  They will respond to the incentive structure that we set up if we do our job.

KAKAOUNAKI: There is actually a question from the audience about that, about this common EU-USA regulatory framework for disinformation.  Someone – he didn’t identify himself – is wondering whether the First Amendment and the U.S. Supreme Court case law can be an impediment to the prospect of such a thing.
FRIED: It depends on what the thing is.  If it is content control, no.  If it is regulation of ads, certainly there’s ample precedent.  If there – if it is removal of foreign-based bots, maybe there will be a court case, but I suspect that foreign – that the First Amendment does not – the First Amendment most strongly applies to human beings and probably not bots and programs. 

The precedent for algorithms would be in the old Fairness Doctrine of the Federal Communications Commission – the regulation of American television networks.  None of that is perfect.  I am not a lawyer at all, much less a legal specialist, but I’m convinced you could find within the rubric of transparency, integrity, and information fiduciaries enough space to work on public policy issues in this direction, I suspect.  I don’t know, but I think so.

KAKAOUNAKI: Ayeiz Koumandakos (ph) from the University of Piraeus is wondering if face recognition is part of the solution to fake accounts and bot identification, and he’s wondering why they haven’t used it more in platforms and social media.
BRADSHAW: I can answer this one.  So there isn’t a lot of information in the public about how social media companies actually detect these accounts, and part of that is for good reason: because if we knew how they were catching the accounts, then bad actors would be able to use that information to avoid detection and last longer in the ecosystem. 

So there’s a little bit of a trade-off there in terms of, you know, keeping up certain kinds of walls to how these platforms operate but also providing transparent access to that so that they can be held accountable for those decisions.  These are kind of like the tricky debates that we’re navigating right now as we think about regulation. 

I, personally, think that using facial recognition to determine the authenticity of these accounts is not necessarily a positive step, and that’s because I support the idea of anonymity online.  I think it’s a really important feature of our digital ecosystem and, yes, bad people, bad actors, have leveraged anonymity to create fake accounts.  It is one of the underlying features of the technology that has led to a lot of good but also a lot of bad.  But I don’t think the case for removing anonymity outweighs all the good that it does, especially in a lot of countries where you need to be anonymous to express yourself or else you could get put into prison or, even worse, killed. 

Some research has shown that if an anonymous account is sharing something, people are likely to be more skeptical and check that source, because we know instinctively not to trust these kinds of accounts.  So I don’t think removing anonymity from the platforms is an appropriate solution, and, therefore, implementing facial recognition technology to identify the real people running these accounts is not necessarily a good use of the technology either.

KAKAOUNAKI: This was actually another question, whether we should remove anonymity from Twitter.  Ambassador?
FRIED: I agree that anonymity is important for civil society activists and human rights activists in authoritarian countries.  But anonymity should not mean license for imposters, and there’s a difference.  That is, Billy Bob from Arkansas ought to be Billy Bob from Arkansas and not Yvonne from the St. Petersburg troll farm – (laughter) – OK?  So there’s, like, it – now, it’s not perfect but that’s where I would try to draw the line.
BRADSHAW: Yeah.  And, I mean, I take your point and I agree with you – platforms should definitely remove accounts that impersonate other accounts.  It’s already part of their terms of service.  But I guess what I’m trying to say is that if we have a verified system for accounts – like, on Twitter, if you get that little checkmark next to your name, you’re a verified account. 

If people had that, and a bad actor was able to get an account through that verification process, it intrinsically becomes more trustworthy.  And there’s a risk there in having those kinds of verified systems: a lot of these accounts that have evaded detection for a very long time, if they all of a sudden become verified, then they could become a major source of disinformation that actually has more of an effect on people than the account that’s not verified, that’s anonymous, and that’s just spewing garbage.

FRIED: Sure.
GEORGAKOPOULOS: The verification process, by the way, is a very good example of a proactive step to do something about the issue of anonymity and trolls and bots.  But here’s the thing: they didn’t implement it perfectly, or even successfully, because it became a status symbol and they stopped giving out verifications.  It could become a program of providing verification to anyone who can verify their identity.  That would work, and you would also safeguard the anonymity of people who don’t want to say who they are to begin with. 

But they haven’t done it as they could have done it.  They haven’t implemented this tool.  They started doing it, but it seems that they don’t have an incentive, something to gain, from implementing it fully.  And I think that goes for everything else that has to do with fake news and disinformation on these platforms.

KAKAOUNAKI: Another question, from Demetria Mikhaili (ph) – she’s a researcher in strategic communication and news at the Piraeus University – is to what extent the definition of the term propaganda in public discourse functions as a tool for an actor to carry out character assassination against the actors opposing him.  And to what extent, in your opinion, are these strategies implemented by ISIS in the context of digital communities for militarizing purposes?
FRIED: I am skeptical of efforts to outlaw or limit anything called, quote, “propaganda,” because defining that gets us into content control and makes governments arbiters of truth.  I am a believer in labeling.  For example, RT – the Russian propaganda station – would, you know, fit the definition of propaganda, but I don’t want to outlaw them.  Do not ban them.  But in the United States they could be labeled, in fact, a foreign agent.  So that’s one way to go.  But I don’t want – I’m not tempted by taking overt, known media and trying to ban them.  That, I think, is a mistake.  I don’t have to like them.  But that’s not the problem we’re dealing with.
GEORGAKOPOULOS: Yeah.  I agree.
BRADSHAW: Well said.
KAKAOUNAKI: Another question, from Christos Gavalas.  He’s a fellow journalist from CNN Greece.

We’ve heard about surveys indicating it’s harder for real news to reach Twitter users or be retweeted, but are we maybe talking about the human willingness to believe sensational information and is it maybe that even our efforts to debunk false information fall short of changing people’s minds?

BRADSHAW: Go ahead.
FRIED: There is a human propensity to be focused on the sensational.  That is, “Earth is round” sites don’t get a lot of readership, but “proof the Earth is flat” – now, people are going to tune in to that.  So that’s why I’m more tempted by transparency and integrity.  That is, one of the ways deliberate disinformation campaigns work is to start you on some nonpolitical conspiracy theory.  The moon landing was faked.  The Earth is flat.  Then the people who are interested in this are demonstrated to be capable of believing anything, and then you start hitting them with your stories that are slanted. 

The purveyors of such information may be domestic, but they may also be foreign, and that’s where transparency can work: because they can be labeled as foreign, or, if they are operating under false pretenses as domestic when they are in fact foreign, they can be removed.  So that’s the construct – the policy construct – I’m looking for.  And then, as the discussion suggests, the details are pretty complicated.  But if we’re able to get into the details – you know, three panelists are already able to talk about the details of how the policy would work – we’re not that far away from solutions.  I mean, two years ago we would have been waving our arms and screaming that the world is coming to an end and there’s nothing we can do about it – or at least I would have.

BRADSHAW: Yeah, and I totally agree, and I think this is also a great question because it touches on another dimension of this challenge that we haven’t really talked about as a panel yet, which is sustaining journalism in the social media age, where information is free.  How do we continue to support media organizations that are promoting and producing quality information that’s incredibly important to democracy when, you know, someone can write up a blog post that’s full of disinformation, get it online for free, and have it spread much more quickly because it doesn’t have to sit behind a paywall – because they don’t have to get paid for it like a journalist does? 

And so there are these questions around the demand side for disinformation, and the way that people want, and are more likely, to consume negative stories and conspiracy theories because we think it’s fun, right?  We don’t always necessarily think it’s serious.  Some of us certainly do.  But those questions are also very closely linked to these questions of sustaining our fundamental journalistic models in the Digital Age.

GEORGAKOPOULOS: That’s exactly right.  And beyond sustaining the journalistic organizations of today, there’s a fundamental discussion – a whole separate issue, but very closely linked with this one – about the way that we have been cultured to understand what news is, what is newsworthy, what type of information we are accustomed to finding interesting.  This is a discussion that hasn’t taken place yet.  It’s boring, on a more fundamental level, but I think it should take place at some point, because that’s the fertile ground on which disinformation spreads. 

When we regard negative news stories as more noteworthy than just news, or just noteworthy information, then we are naturally predisposed to accept other negative, sensational news more easily.  We are able to absorb this kind of information.  We are accustomed to absorbing this kind of information, so we are more susceptible and more vulnerable to this type of disinformation.

KAKAOUNAKI: And touching on what you said, I think another dimension is that – you know, call it bots, call it fake news, call it whatever it is – those things are usually pressing on real issues, and I think it’s wrong of us not to pay attention to those issues, and to the press.  And I don’t know if you agree with me.  It’s not like in Greece we have a media landscape that has been the pillar of investigative journalism and –
FRIED: No.  Exactly.
KAKAOUNAKI: – fake news came to distract.  You know, this – I mean, there is a question whether we have addressed those issues and made people feel that, you know, democracy is for everyone and not just parts of a society or – I don’t know.
FRIED: That’s a larger question.
KAKAOUNAKI: A huge question.  I know.
FRIED: Happily, it is beyond the scope of this panel.
KAKAOUNAKI: It is.
FRIED: Or any panel.  But let’s just –
KAKAOUNAKI: Reflect.
FRIED: You know, I’ll stipulate that the democratic world is going through a period – a sag – of self-confidence, and it manifests itself in different ways in different countries.  And although – this is – Geoff Pyatt, I’m about to say something controversial – Greece has shown enormous political courage in a democratic context – that is, statesmanship through the deal with North Macedonia.  I mean, I mention this because the cliché is that nationalism is sweeping everything aside and things will get worse. 

Well, I know it’s not a popular deal, and so I’m only speaking for myself, but this was an act of statesmanship and far-reaching strategic thinking – at least in my view as an American, you know, who lived in the Balkans once upon a time – in a context where this is supposed to be impossible, right?  It’s all Brexit, right?  Short-term political thinking as the country marches over a cliff.  Again, I’m betraying my own views here. 

But I’m not certain – in fact, I am confident that the current cliché is not the end of the story.  We’re going through a sag.  We are going to get out of it.  The question is how much damage is done before we do.  OK.  So, look, you can erase what I said if you don’t like it.  It’s not the subject of this panel.  Geoff is going to kill me for saying it.  But, you know, I come to Greece and I’m, frankly, inspired by an example of political courage.  OK.  You don’t see that everywhere.  I think it’s sort of the best thing ever.  Doing the right thing, not for short-term profit but because it’s the right thing to do – and it profits the country in the long run.  OK.  So I just think that’s great.  OK.  Sorry.

KAKAOUNAKI: OK.  I think we may have some time for one more question from the audience.  Kostadinos Papamikhalikis (ph) is asking, as – (inaudible) – was saying, as you acknowledge, the average person will know what’s true and what’s not.  The question is, are we attacking the problem or are we looking to regulate again?
FRIED: Yes.  (Laughter.)  (Inaudible.)
BRADSHAW: Yeah.  And, I mean, I think, you know, the individual who commented pointed out that, yeah, we can generally tell the difference between true and false information.  But a lot of the information that’s out there that’s fueling the actual disinformation is not necessarily fake or false information – oftentimes it’s true information.  If you look at the U.S. elections, for example, a lot of the hacked information from the DNC was what fueled a lot of the conspiracy theories that were eventually spread on social media.  And so there isn’t necessarily a clear-cut line between true and false.  That’s not the only kind of bad information that we’re talking about here. 

So I guess I’d just like to point out the definitional complexity that our question asker identified, because it’s what makes this challenge so challenging, for lack of a better word – because we’re not dealing with something that’s simply black and white.  There’s a lot of gray area and a lot of overlapping issues that go along with this – you know, as many of my panelists have mentioned: the data, the business models, the algorithms, journalism models, media literacy, right? 

These issues are all entwined together.  And so, yes, we are looking to regulate and we’re looking to do something about it.  But chopping the problem up into very distinct challenges and then regulating there is, I think, the solution that we’re all sort of alluding to.

FRIED: I agree.
KAKAOUNAKI: Michael Gekas (ph).  He’s an undergraduate student at the Department of International European Studies.

How will we ensure that our efforts to protect our democracy from fake news will not actually harm it by leading us to an Orwellian control of expression and who will assume the duty to define which opinion should be uploaded online and which not?

FRIED: Nobody should be given that responsibility, and that was the whole point.  You don’t want to give governments that authority.  You don’t want to set up a ministry of truth.  You want to – you want to work always within the framework of freedom of expression, which makes it more challenging, makes it better.
GEORGAKOPOULOS: China has solved it that way.  (Laughter.)  We don’t want to do the same –
KAKAOUNAKI: Yeah.  We don’t.
GEORGAKOPOULOS: So we’ll avoid doing that.  But I think the idea of an institutional panel, formal or informal, that will look into it – transatlantic cooperation on the issue, people starting to talk about this – that would be a great start.  I don’t know what institutions you have in mind.  Would it be the European Commission?  Who else would it be?  And we have someone from the European Commission here.  I think we have the ambassador.  Let’s start something.  So elaborate a little bit, if you may – if I may.
FRIED: That is the easiest problem to have – like, how do you construct it.  But you want to have the different players in the room.  You want the Commission.  You want the U.S. government to decide who’s going to own that policy space – probably a little bit the State Department, a little bit the Department of Homeland Security.  You probably want a mixed public-private group, at least for informal purposes, because you want the social media companies in the room, at least for some of it, not all of it.  You want civil society groups in the room for some of it. 

Some of the national governments with a particular interest ought to be invited and this – you can do this formally.  You can do this informally.  There is no shortage of ways this can be organized and parked.  The G-7 could do it.  The U.S.-EU could do it.  There are lots of frameworks.  It’d be nice if my own government were a little less ambivalent about the European Union but that – there’s a lot of cooperation going on behind – you know, underneath the rhetoric.  So it can be done.  That’s not – that’s a good problem to have.  You know, any – you know, ask Geoff Pyatt in his next job to be in charge of it and set it up and it’ll get done right.  It’ll be fine.

GEORGAKOPOULOS: Would you?  He’ll think about it.
FRIED: If I had anything to do with it, I’d name him – I mean, he kept the Ukrainians on their feet through the war.  He can handle this.
GEORGAKOPOULOS: He can do it.
KAKAOUNAKI: One last question from the audience.  How far away are we from fully transparent governance?  How far away are we from a widespread social demand for credible, integral, open, and valid data?  And, as both problems are education-related, do you see any business-driven demand to explore artificial intelligence in bridging, instead of widening, the gap between the two during highly turbulent times?
GEORGAKOPOULOS: A lot of questions.  You’re both looking at me.  (Laughter.)  Who else is going to start?  Well, I didn’t get the entire question, actually, but I think the last part is very important.
KAKAOUNAKI: There was a long introduction as well, so maybe that’s why.
GEORGAKOPOULOS: The last part, about AI, is very important there.
KAKAOUNAKI: But I think –
GEORGAKOPOULOS: The technical solutions that came up – that can come up in the following decade – may render this entire discussion moot.  But I think that those initiatives – the technical initiatives – would start within these platforms, and very soon.  Education efforts and activism efforts should be targeted towards that end.  We should put pressure on the platforms not just to regulate themselves but to implement technical solutions within their infrastructure to alleviate the problem, if not eliminate it completely – because eliminating it completely is impossible.  So the educational issue has to do with people becoming aware of the problem and also becoming more efficient at telling true from false information online. 

But that cannot be our focus, because that’s a very long-term – it should be a focus, but not the main focus, because it’s a very long-term strategy.  It applies to everything, all kinds of information, not just online information on social media platforms.  So it should be part of the mix, but not the main part, because it would take 10 years to show results.  And 10 years from now, the AIs may be able to do these things for us anyway.

BRADSHAW: For me – I mean, I started my remarks off today, and alluded to this idea throughout, with the point that the technology is not neutral: it has a lot to do with how people use it, as much as with how the technology itself both enhances and constrains different elements of the social media manipulation campaigns that we see today.  With that said, I don’t think that a technical solution is the only solution to this problem, because disinformation is also very much a human problem, right? 

So we can use social media as a mirror to reflect back some of the big challenges that we’ve been facing as a society – as a collective society – since the 2008 financial crisis.  I think a lot of the issues around inequality, and the refugee and immigration crisis that’s currently happening, are fueling a lot of the disinformation that we’re seeing being used in elections around the world.  And so simply sprinkling some AI pixie dust to identify disinformation isn’t going to make the problem go away.  At the end of the day, we need to take a step back and recognize the human aspects of these problems as well.

KAKAOUNAKI: Perfect.  Thank you, everyone.  I hope that if we take one thing from tonight’s panel, it is that there are solutions, one step at a time, and that, you know, we should all work together to deal with this.  Thank you very much, and see you in Madrid.  (Applause.)

(END)