Atlantic Council

2015 Global Strategy Forum:
New Approaches to a Changing World
Welcome and Enhancing Foresight
Daniel Y. Chiu,
Deputy Director, Brent Scowcroft Center on International Security,
Atlantic Council
Jon M. Huntsman, Jr.,
Chairman,
Atlantic Council
Scene Setter:
Peter Schwartz,
Senior Vice President for Government Relations and Strategic Planning,
Vijay Kumar,
UPS Foundation Professor,
University of Pennsylvania
Jamie Metzl,
Nonresident Senior Fellow for Technology and National Security,
Brent Scowcroft Center on International Security,
Atlantic Council
Location: Atlantic Council, Washington, D.C.
Time: 2:00 p.m. EDT
Date: Wednesday, April 29, 2015
Transcript By
Superior Transcriptions LLC

DANIEL Y. CHIU: Good afternoon, ladies and gentlemen. Could I ask you to please take your seats as we get underway here at the first-ever Atlantic Council Global Strategy Forum?

So my name is Daniel Chiu. I am the deputy director of the Scowcroft Center on International Security here at the Atlantic Council, and I am leading the Strategy Initiative that is here to kick off this first-ever Global Strategy Forum here at the Atlantic Council. I’m the former deputy assistant secretary of Defense for strategy in the Pentagon. So obviously I have a very strong interest and a passion for this, and I hope to share some of that with you today and tomorrow.

The Council is very pleased to be kicking off this event not only to really tackle questions of global strategy in a new way, as the title of the conference suggests, but also through some new formats. We hope you’ll be seeing some interesting presentations here. We hope this will be engaging and fun, and really encourage you all to participate as actively as you possibly can.

I also want to take this time to welcome our Millennium Fellows from our Millennium Leadership Program. We’re very pleased to welcome them here as very promising up-and-coming professionals in the field that we think will be critical to this discussion about global strategy as we go forward.

I’ll leave it to others to introduce the panels and the subject here. Let me just give you a few administrative points as we go forward. First, to just remind everyone this is a public, open event. We are webcasting and videotaping as we go along. You can tell by this somewhat intimidating tweet wall that we have behind here that we will be on social media hopefully very actively as we go through here. I encourage you to participate there as well. We do have, on that note, a hashtag for this particular conference, which is #ACStrategy. We hope you will use it liberally on this and any other tweets you would like to put forward.

And with that, I will leave you to hopefully enjoy this conference that will be kicked off by the former governor of Utah, the former U.S. ambassador to China, my current very great boss and mentor, and a strong supporter of this Strategy Initiative, Governor Jon Huntsman.

JON M. HUNTSMAN: Thanks, Dan. Thank you. Thank you. Thank you, Dan.

Dan is truly representative of the kind of talent that this organization brings on thanks to Fred Kempe and others, notwithstanding the foibles of the chair of the board. Dan, we’re lucky to have you, and you’ve done an absolutely superb job.

Welcome, one and all. Thank you for attending the Council’s first-ever Global Strategy Forum. This is a big deal. In fact, we’ve changed the room – the format of the room – just to accommodate today’s session. And I’d like to extend a special welcome, of course, to those who will participate, our speakers, our board members, our partners who are here to attend today’s event, as well as our friends from the U.S. government and numerous foreign embassies who are represented as well. We’re honored to have one and all.

Now, there’s no doubt that today’s international order is alarmingly different from the one we knew 20, 10 or even five years ago. Relationships between states and non-state actors are evolving at a shocking pace. Advances in technology are skyrocketing. And new sectors unheard of not so long ago, like biotechnology and cyber, are replacing traditional security threats at the forefront of today’s strategic climate.

The driving theme for this inaugural forum is that the world is changing in ways that require us to rethink how we navigate global challenges and rising uncertainty. In particular, our government structures and processes are increasingly disconnected from the complexity and actual pace of events. It is becoming more and more clear that we need to think innovatively about strategy rather than look backwards for historic analogies that may be more familiar to us but less applicable.

In order to look forward, this conference takes a novel approach in terms of speakers, format and focus. The speakers are a more diverse set of experts from a broader range of countries, fields of study and generations. The format is intended to stimulate imagination, thinking and action, rather than provide more academic treatises on the topic. The focus is on change.

So, over the next two days, subject matter experts from the worlds of artificial intelligence, urbanization, foreign policy, energy and climate will come together to discuss strategic thinking in their respective domains. This forum is a culmination of the Council’s work on strategy thus far, and it is our goal to make this an annual event which will showcase the Atlantic Council as the go-to organization for this type of work.

The Global Strategy Forum will focus basically on three themes.

The first theme is enhancing foresight. As I just mentioned, everyone is aware that the world is becoming more complex, dynamic and interconnected, more so than ever before. Therefore, it’s absolutely critical to identify key global trends in order to avoid strategic shocks, as well as look for opportunities amidst continuous change.

The second theme will focus on preventing failures. In today’s rapidly changing environment, approaching problems in the same way that they’ve always been approached simply will not work. And current problem-solving tools in government and in the private sector are becoming increasingly obsolete. The gap between how the world works and how institutions work is becoming wider, and this gap must be bridged quickly so as not to lead to strategic failure.

The third theme will focus on imagining solutions. After discussing disruptive change and enhancing foresight, this panel’s task will be to imagine new solutions for thinking about global strategy that can lead to actionable results.

We will also host a formal debate tomorrow morning on America’s role in the world. It is more important than ever to determine U.S. strategy in today’s climate, and it’s our goal to elevate that conversation and to really get people thinking about the role the U.S. should play.

As you can see, the Global Strategy Forum is unlike anything that’s ever been done by the Atlantic Council, and as far as we know is the first conference of its kind by a think tank in Washington. We’re honored to take the lead on strategic thinking, and as I mentioned before, we hope that this forum will serve as a flagship event each year going forward.

Again, I want to thank each and every one of you for attending today. Those who are participating, a special thanks. We hope that you enjoy your time with us, and leave this forum inspired with brand-new ideas and approaches to the world – and the word strategy. Thank you again.

And it’s now a great honor to be able to hand the floor over to Peter Schwartz, senior vice president for government relations and strategic planning, who will officially kick off the session. Peter, the floor is yours. We’re delighted to have you. (Applause.)

PETER SCHWARTZ: Thank you. It’s a real privilege for me to be here. This is an organization I admire enormously, as is the team that put together this event. So it’s my pleasure to participate in three of the four sessions – four out of five, now that I think about it – that we’ll be holding over the next day or so.

And my job is actually to help set the stage for the conversation. The real participants are yet to come – the people who are actually going to give you some real meat. But the first subject is foresight and why it is such a challenge. That’s what I want to talk about briefly: why it’s a challenge and why we need new ways to think about it in this context.

I want to point out that I think this is an audience that’s extremely sophisticated about these subjects. I have many friends in the audience. So, you know, a lot of you could give this talk just as well as I could – like Mat Burrows over there, for example.

So what I want to say is that there are a couple of really big structural things that have changed that make foresight much more challenging. And the first, I think, is an obvious one. I might point to, as it happens, today’s New York Times, where Tom Friedman wrote about what I’m about to say; and in today’s Washington Post, David Ignatius wrote about essentially the same subject. So it’s very much on the minds of at least some of our political journalists who think about this. And it’s the breakdown of the post-World War II order.

You know, I think we’re all very familiar with the idea that the United States actually framed the world and shaped a system of international institutions that produced a half-century of relative peace and prosperity – the system of the United Nations, the IMF, the World Bank, NATO, et cetera – all of which helped establish a relative order that persisted for half a century. And I had the privilege a few years ago of debating this with Niall Ferguson, and I made the point that in the first half of the 20th century we killed 180 million people in war, and in the second half only 20 million. Now, that’s a pretty terrible number, but it’s a whole lot less than 180 million. And I think one of the big reasons for that was the framework of institutions that the U.S. worked through with its allies and collaborators over the last 50 years.

But beginning sometime in the mid-1990s, I would say, that order began to break down, as the U.S. mostly walked away from it. We have yet to ratify any treaties in the last 20 years, including treaties that we wrote ourselves – like the treaty we wrote on the rights of disabled people, essentially a map of our own laws that the rest of the world should follow, which we managed to get everybody else to sign and then wouldn’t sign ourselves. And in fact, today’s columns dealt with this. The high point, I would say, was the establishment of NAFTA, which was the last kind of treaty that really helped establish U.S. leadership. Now virtually every treaty we’ve walked away from, whether it’s landmines or the Criminal Court or the Law of the Sea. We got the Law of the Sea rewritten in our favor and then wouldn’t ratify it.

So the U.S. now faces a world in which we have essentially abandoned the framework of institutions. We wouldn’t reform the IMF. We wouldn’t reform the World Bank. And we’ve now seen China set in motion a new framework of institutions because they couldn’t get their way at the IMF and the World Bank – they said, all right, we’re not going to play by those rules, we’re going to write our own rules. So what we have seen is us consistently stepping off the world stage in that respect, and as a result the most powerful force for order, stability and peace has now abdicated power, and that role is going to be filled by various other actors in a world of increasing chaos. And so I’m going to come back to the theme of chaos and why that’s important.

The second big thing I’ll touch on is technological change. And the big one that has already happened is the scale of interconnectivity, as we see on this screen over here to our right, right? Everything is transparent. Everything is commented on in real time.

I was struck by the fact that Hillary Clinton made her announcement for president on Twitter, right? Now, I will tell you in all candor, when Twitter was first launched, I said, this is silly, right? A hundred and forty characters? I mean, come on, what can you say in 140 characters worth saying? Well, in fact, one of my son’s classmates at Swarthmore wrote a book on writing for Twitter – he was one of the founders of Twitter – and it’s a great guide to writing succinctly. I highly recommend it.

But the point is it actually turns out to be a really powerful medium, and all the various social media that have now connected the world in a world of increasing transparency, where everybody knows everything all the time, is a really quite dramatic change. And that is combined with, I think, an increasing pace of technological change. And we’re going to – the next two speakers are going to talk, first of all, about robotics and big events changing there, and secondly genetic engineering and what that might mean and the future of genetics.

But what this leads us to, I think, most fundamentally is a world of inherent uncertainty. I spent a number of years as a director of the Santa Fe Institute, where we studied complexity and the mathematics of complexity. And one of the things that we’ve learned about very complex systems is that they generate what is called path dependence; that is, everything along the way triggers other things along the way. In other words, if you do not have a structure that establishes a persistent order, like the one that existed before, and if you have a high level of interconnectivity of the sort we now have, then the inevitable result is increasing uncertainty. And it isn’t that we’re not smart enough to figure it out; it actually is unpredictable. It isn’t that we could build a better model and make the world more predictable in that kind of an environment; the outcome is actually indeterminate. And that is the world we are in now – a world of fundamental indeterminacy.

We don’t know what’s going to happen five years, 10 years, 15 years down the road as a result of the fundamental nature of the new circumstance, the absence of structure, the levels of interconnectivity, and the introduction of new technologies like genetics and robotics that are going to transform how we actually think about those things. So we have this world of irreducible uncertainty.

And so the other side of this, which is also very important, is that once that happens, what you have are multiple paradigms of action that result from different cultures and different theories of the situation, and different people address those situations very differently. If you think about the Arab world, the Asian world, the European world, the Russian, the Latin American – all of these think about the world in very different terms and have different models of action, different pictures of what is happening and why. And therefore, what you would do in the face of that uncertainty is also very different.

You know, this is mostly an American room. There’s a few Brits in the room and one or two others, but it’s mostly Americans. And we have our own ways of thinking, and they are not the same as all of our friends, allies and adversaries. And so, in the face of that uncertainty, you wind up with really very different desires for action – what you would do, what the consequences of policy are likely to be, and the appropriate strategies for nations. So not only do we have a different context, but we have as a result different ways of acting in that context. That introduces another dimension of uncertainty.

So finally, the last point I’d make – and it fits very well with where we’re going with this conversation, I learned over lunch – that is, how do you deal with this? And what you need is a sense of both rigor, to understand the forces – the economic, the political, the technological and so on – as well as imagination. It’s consistently a failure of imagination that is really at work. It’s hard to imagine what is likely to come.

And so, you know, I think – in fact, last night I went to see “Ex Mechanica” (sic; “Ex Machina”). I don’t know if any of you’ve seen it yet – a movie about robotics, a very good movie. And “Her” is another very good movie, on artificial intelligence. I mention them because it may be that in the worlds of fiction we can deal more thoughtfully with this uncertain, unpredictable, multicultural world that we now have to think about as the context in which we actually have to act. And so, in fact, we’re going to explore that theme this afternoon as well, in the second session – as well as a little bit in the first session, but it’ll be at the heart of the second session – what can good fiction bring to thinking about it?

And if you look in the guide at our various bios – they did a lovely thing. They asked all of the speakers, what are our favorite books and what are we reading now. And I’m a book junkie, you know. Much to my disappointment, nobody named any of my books – (laughter) – as their favorite books, but that’s OK. But one of my favorite books is Isaiah Berlin’s “The Hedgehog and the Fox.” And the essence of the debate in the book was which was the better history of the Napoleonic Wars – Tolstoy’s “War and Peace” or de Maistre’s (ph) masterful history, right? And you can make both arguments. If you’re a hedgehog, what you see is the historian’s tightly defined history as the right way to approach it. If you’re a fox, a bit like me, then you like fiction and all the different ways that it can actually get at the essence of the experience of history. So we’re going to explore that debate this afternoon as well, in terms of thinking about the future and what the best way is to think about it.

So with that to set the stage, it’s my pleasure to invite the first of our presenters this afternoon, Vijay Kumar, to come up and talk a bit about robotics and what it might mean for our future. (Applause.)

VIJAY KUMAR: Thank you, Peter.

It’s my pleasure to be here, and I’d like to take this opportunity to tell you a little bit about the work that my students and I do at the University of Pennsylvania on aerial robot swarms, and also to shine a spotlight on issues at the intersection of technology and policy.

I’m waiting for the slides to come up. There’s actually three ways you can view the slides, and it’s important you look at the slides and not me because that’s where the fun is.

So in my lab, we build robots like the ones you see in this picture here. So these are like drones, except with one big difference. Most commercial drones use GPS for navigating, because that’s how they get information about their position and their velocity. This robot, by contrast, can function in environments like this, indoors and completely autonomously. It uses a system of sensing which involves cameras on top and laser scanners that allow it to detect features in the environment and triangulate its position with respect to those features, and it does this autonomously. So a human with limited interaction with the robot can command it to go down the hallway and look around corners. And I’m going to show you a video of a recent experiment we did in our lab that shows how this robot works.

So in the top right-hand corner you’ll see what the robot actually sees, but in the main panel you will see the map that it’s capable of building. This video is played at about four times regular speed, but you will see the resolution with which the robot can build a map – this is at a five-centimeter resolution – going through hallways and then ending up in our lab at the bottom. And you can recognize it by the clutter, with all the junk lying around. That’s our lab. But the main point is that you can deploy robots like this today to autonomously explore buildings without actually entering the buildings, and this is the kind of technology we’d like to build.

Now, there’s one big problem with these kinds of robots. So if you look at this picture, you will see that it’s actually got four rotors, but it’s carrying a lot of junk on board – a lot of sensing, a lot of computing. And because of that, we have two problems. First, it’s heavy. Robots like this burn about 100 watts per pound, which makes for a very short flight time. This robot can operate for a maximum of 10 minutes. Second, by the time we put all the sensors and computers on board, this thing ends up being quite expensive. This probably costs about $20,000 for us to build with off-the-shelf components.
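The power figure just mentioned pins down the endurance almost directly. Here is a back-of-the-envelope sketch – the all-up weight and battery capacity are illustrative assumptions, not numbers from the talk:

```python
# Rough endurance estimate for a sensor-laden quadrotor.
# From the talk: ~100 watts per pound of weight, ~10-minute endurance.
# Assumed for illustration: a 1 kg (2.2 lb) robot with a 40 Wh battery.

weight_lb = 2.2              # ~1 kg all-up weight (assumption)
power_w = 100 * weight_lb    # ~100 W/lb hover draw, per the talk
battery_wh = 40.0            # small LiPo pack (assumption)

endurance_min = battery_wh / power_w * 60
print(f"hover power ~{power_w:.0f} W, endurance ~{endurance_min:.0f} min")
```

With these assumed numbers the estimate comes out at roughly 11 minutes, consistent with the 10-minute figure quoted for the lab’s robot.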

So we asked ourselves the question: what lightweight device can we buy in an electronics store that is capable of sensing and capable of computation? And you all have one in your pockets or your handbags or purses or what have you. It’s a smartphone. So an iPhone or a Samsung Galaxy phone, which is what we use, actually has all the hardware we need to operate this robot.

So in a partnership with Qualcomm, we invented what we call the “flone” – the flying phone. (Laughter.) And all we do is build a robot like this, which you can buy off the shelf, take a Samsung Galaxy smartphone and plug it into the robot with a USB cable, and then download our app, and that app basically does what you saw the other robot do. And let me show you a video that illustrates this robot flying around in our lab.

So it flies at about three meters per second, and you’ll see it rolling and pitching aggressively to change directions on the fly. And all of this is done with COTS – commercial off-the-shelf – components. The only innovation here is really in the software, in the app, which is designed by my students. So the main point is that you can actually buy things off the shelf and create the intelligence that you need to run robots like this autonomously, and make the packages really small.

So what can you do with small packages like this? Let me show you a couple of things we’ve been working on recently in our laboratory. We have always been inspired by nature, and I’m going to show you a video of an eagle fishing for prey. This bird is able to coordinate different parts of its body to swoop down and catch fish. So we designed our robot based on similar principles – in Philadelphia you try to catch cheesesteak hoagies, and that’s what that is. (Laughter.) So this robot can plunge down at three meters per second, coordinating its gripper and its eyes with the same kind of precision that you see in the bird, again because of this tightly integrated sensing and computation package in a small payload.

To show you another example, our robot is able to learn how to fly through a narrow window. Here, the robot is carrying a suspended payload, and the window opening is actually smaller than the robot and payload together. So it has to figure out how to pitch itself in just the right way to swing the payload through the window. So we can do things like this with really small robots and have them aggressively perform fairly complicated maneuvers.

But we want to get even smaller. I want to show you a video of a swarm of honeybees, which is really interesting to see. If you look at the individuals here, they don’t care about avoiding collisions. This is not like the drone that crashed on the White House lawn, where you have to worry about safety. Because these are so small, they can collide with the environment, they can collide with each other – as you will see in a minute – and they can recover from these collisions. The small size gives them the agility and the robustness to survive collisions and to be safe.

So in our lab, we have been trying to build robots that are smaller and smaller – not just because we want them to be safe, but also because they’re more maneuverable. When we started doing this work in 2007-2008, we had to buy first-aid kits because students would get their hands nicked. Now, if you plot a histogram of the Band-Aids we’ve been ordering, it has virtually tailed off with these smaller robots.

And I want to show you one of these really small robots we built. This is a 25-gram robot, and you have to see this video in slow motion. It’s the first, I believe, planned midair collision between two robots, and you’ll see our robots recovering from it. This is, again, at one-twentieth the normal speed. You will see these little robots bang into each other and then recover autonomously. Again, there’s no human control. If you and I were to control it, no matter how expert we were, we could not recover from these kinds of collisions, because you really need split-second coordination and reaction times.

So we build small and safe robots. Of course, there are some disadvantages to small size. And in fact nature has found many different ways of compensating for the limitations of small size, and mostly they all involve aggregation into large groups – building swarms.

So we’re interested in building swarms of robots, and what does it take to create these artificial swarms? Well, it turns out the challenges really lie in modeling how individual robots sense the environment, reason about it and take actions, and then in how they communicate and interact with their neighbors. So if you can understand the mathematics of networks and how to capture that model in software – that’s really the key challenge, and this is what we work on.

So you’ll see here three organizing principles that we use, which are borrowed from nature, and this video illustrates one of them. You’ll basically see the robots essentially recognizing their neighbors and reacting to their positions to maintain a safe separation. So here you have a human operator that’s literally hijacked one of the robots and is able to manipulate the swarm because there are leader-follower interactions that essentially constrain the other robots to follow the leader. Now, in this case it’s just a single leader, but you can imagine a larger swarm in which there are lots and lots of leaders manipulating this entire group, and that allows these very, very small robots with very limited payloads to essentially behave collectively as a large swarm. And you might imagine lots of applications, and I’m going to tell you a little bit about two of them shortly. So this is one fundamental idea.
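The leader-follower interaction described here can be sketched in a few lines. This is a minimal illustrative model, not the lab’s software: robot 0 plays the “hijacked” leader, and each follower reacts only to its predecessor’s position plus a fixed desired offset, so commanding the leader alone drags the whole chain into formation.

```python
# Chain of leader-follower interactions: commanding robot 0 pulls the
# whole swarm into a line formation. The gain, spacing, and starting
# positions are illustrative choices, not values from the lab's controllers.

N = 5
OFFSET = 1.0    # desired x-spacing between successive robots (assumption)
GAIN = 0.3      # fraction of the position error corrected each step

def step(xs, ys, tx, ty):
    nxs, nys = xs[:], ys[:]
    nxs[0] += GAIN * (tx - xs[0])    # the leader tracks the human command
    nys[0] += GAIN * (ty - ys[0])
    for i in range(1, N):            # each follower tracks its predecessor
        nxs[i] += GAIN * (xs[i - 1] + OFFSET - xs[i])
        nys[i] += GAIN * (ys[i - 1] - ys[i])
    return nxs, nys

xs, ys = [0.0, 2.0, 5.0, 1.0, 4.0], [0.0, 1.0, 3.0, 2.0, 1.0]
for _ in range(300):
    xs, ys = step(xs, ys, 10.0, 3.0)
# the swarm settles into a line formation anchored at the commanded point
```

The same structure scales to many leaders: any robot that is commanded directly simply stops tracking its predecessor, and the constraints propagate through the rest of the group.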

The second idea is this notion of anonymity, which you see a lot in nature, especially in smaller organisms. We don’t want to worry about individuals and their identities. We want the software, in fact, to be anonymous, in the sense that we want to be agnostic of the identities of the individuals. So here you’ll see the robots commanded to maintain a circular pattern. And as you remove one or more of these robots and add additional robots, the team essentially recognizes its neighbors and quickly reassembles to form the same pattern. All that’s required for running this particular experiment is just a mathematical description of the swarm, and the robots recognize that, interpret that and take actions.

So now what you can do is have these swarms essentially change shape. Again, the key thing is to specify the mathematics of the swarm and how that shape should change, and the robots essentially change from a rectangular formation into a circular formation, stretching out into a straight line and then back into an ellipse. And all of these things happen seamlessly, without the robots being aware of how many members there are in the team, who they are, or what their identities are – all simply based on this high-level mathematical abstraction.
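One way to picture the identity-free, shape-level specification in the last two demonstrations: the pattern exists only as a set of target slots, and an assignment step decides – without regard to identity – which robot fills which slot. The greedy nearest-slot matching below is an illustrative stand-in for whatever assignment mathematics the lab actually uses.

```python
import math

# A formation specified only as geometry: n slots on a circle. Robots are
# anonymous; a greedy matcher pairs each robot with the nearest unclaimed
# slot, so removing or adding robots just triggers a re-assignment and
# the same pattern re-forms.

def circle_slots(n, radius=5.0):
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def assign(robots, slots):
    """Greedily pair each robot with the nearest unclaimed slot."""
    free = list(slots)
    pairing = {}
    for i, (rx, ry) in enumerate(robots):
        j = min(range(len(free)),
                key=lambda k: (free[k][0] - rx) ** 2 + (free[k][1] - ry) ** 2)
        pairing[i] = free.pop(j)
    return pairing

robots = [(0.0, 0.0), (1.0, 1.0), (6.0, 0.0), (-2.0, 3.0)]
goals = assign(robots, circle_slots(len(robots)))
# each robot now flies toward its own slot; no slot is claimed twice
```

Morphing between shapes then amounts to swapping in a different slot generator – a rectangle, a line, an ellipse – and letting the robots re-assign and track the new slots.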

So what do we do with things like this? Besides making cool videos, we want to solve real problems. And the two applications I want to tell you a little bit about have to do first with agriculture and second with first response.

So let’s first talk about agriculture, which I think is the biggest problem facing our society. There are a number of ways you can characterize it, but the fact is one in every seven of us is malnourished, and most of the land that’s available to us is already cultivated. And the situation is getting worse for a variety of reasons – water shortages, climate change and so on – that are making our production system less and less efficient, in contrast to other production systems that are actually doing better. So how can we fix this problem?

So one approach is to actually use robots for precision farming. Here you’ll see one of our robots flying autonomously through an apple orchard. They fly between the rows of trees, and they’re able to build models of each individual tree and essentially operate the machinery surrounding the plants in much the same way you’d operate a factory. Specifically, they’re creating a model of the health of each plant. As in personalized medicine, you want to know every patient and how he or she is doing, and that’s what the robot is doing – it’s measuring vital signs. In a minute I’ll tell you what kinds of vital signs we’re able to extract on the fly, but you’ll also see in this picture that there are a couple of other robots traveling in formation, because individual robots, as I told you before, have limited flight time, so we want a swarm of these to cover large areas.

So in this next video you should see some examples of the data we’re able to collect in real time. On the top left you see essentially the video imagery that you might expect to collect. In the center-left you see infrared imagery, and at the bottom-left is a thermal image that the robot collects. And in the center panel you’ll see the robot building three-dimensional models of every tree and every fruit on the tree in an online fashion, and this data is then available for analysis.

So what can we do with this data? One very simple thing that farmers are very interested in is estimating the yield of every plant in the orchard. If you ask a farmer today how many apples he or she has in the orchard, chances are they’d be off by at least a hundred percent. So what we’d like to do is something very simple: we’d like to go in in February or March, before the apples actually turn red or golden or whatever color you expect them to be, and be able to count them and allow the farmer to optimize the downstream machinery of harvesting, distribution, and so on and so forth. That would improve efficiency by up to 50 percent, at least for apple farming.

The second thing we’d like to do is to take these models of plants, build three-dimensional quantitative models, and from that extract the volume and correlate that to what is called a leaf-area index, which in turn measures the capacity of individual plants to conduct photosynthesis. So once I know which plant is capable of how much photosynthesis, I essentially have some measure of efficiency of individual plants. That can then, again, be used to determine what inputs to apply to the plant to maximize the efficiency, and in this case the inputs would mean water, fertilizer and pesticide.

The third thing we can do is use various indices. One commonly used index, which is based on ordinary visual imagery and infrared imagery, is called NDVI, the normalized difference vegetation index. Using NDVI it’s very easy – in this case, you see the one plant that’s obviously not doing as well as the other plants; this plant is diseased and it’s dying. And you can actually pick up these plants just with a flight over the orchard – in this case, it’s a pepper crop in California.
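The NDVI just mentioned is a standard published index, computed per pixel from red and near-infrared reflectance: healthy leaves reflect strongly in the near-infrared and absorb red, so they score high, while stressed or dying plants score noticeably lower. The sample reflectance values below are illustrative, not measurements from these flights.

```python
# NDVI = (NIR - Red) / (NIR + Red), per pixel, in the range [-1, 1].
# Dense healthy vegetation typically lands around 0.6-0.9; stressed or
# chlorotic plants drop well below that.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.08)    # strong NIR reflectance
stressed = ndvi(nir=0.30, red=0.15)   # weaker NIR, more red reflected
```

Flagging a sick plant from a flight over the orchard then reduces to thresholding, or comparing each plant’s score against its neighbors’.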

We can also detect the onset of diseases. So here’s a flight over orange trees that allows us to detect chlorotic plants – a condition characterized by yellowing of the leaves. It’s hard for us to pick this out by eye, but robots with sophisticated cameras can immediately pick up these kinds of problems.

So with swarms of robots like this, you can really try to improve the efficiency. Our conservative estimates are that you can actually improve the yield of orchards by at least 10 percent and decrease inputs that you use – particularly water – by at least 25 percent.

I want to tell you in the next couple of minutes another idea we have been pursuing, which is to imagine first responders consisting of robots and maybe a couple of humans. So this is a picture of the Philadelphia area with our campus on the top-right. So I want you to imagine a building which is shown in that red circle – imagine there’s a 9-1-1 call there. Imagine there’s an emergency of some sort. Way before the Philadelphia police force can respond to this, we can actually dispatch robots, and the robots can easily be hooked up to the 9-1-1 dispatcher. And you might imagine a swarm of robots flying in – and these are experiments that we now do routinely – which then respond to the source of the call or to a human-identified disaster location.

I hope nobody from the FAA is here, by the way. (Laughter.) If it is, it was done in Colombia, actually. (Laughter.)

MR. : If you’re under 400 feet, you’re OK.

MR. KUMAR: No, no, not anymore, because the FAA actually requires you to have one robot per operator. If you’re talking about swarms, we don’t have swarms of people operating swarms of robots, so we – so we are – we are actually violating some of their guidelines. But anyway, we – (laughter) – we’d love to have these robots respond.

And now, the interesting thing about this is each of these robots has a downward-facing camera that then allows the team as a whole to reconstruct what is going on on the ground. So on the left side you essentially see what the operator can see or the dispatcher can see – the robots essentially being dispatched automatically to occupy ingress and egress points, with again the downward-facing cameras, and on the right side you see a mosaic being built in an autonomous fashion, and in the bottom you can see the kinds of 3-D reconstructions that are possible, again, built on the fly – all with a single operator. I have a graduate student who actually operates this entire team, and this is his experiment. So things like this are possible if you essentially embed the autonomy in individual devices and then network this team together.
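The automatic dispatch described here amounts to a task-assignment problem: pair each ingress or egress point with a nearby robot. A minimal greedy sketch – all names and coordinates are hypothetical, and a production system would solve the assignment optimally (e.g., with the Hungarian algorithm) rather than this one-pass greedy:

```python
import math

def assign_robots(robots, points):
    """Greedily give each ingress/egress point the nearest free robot.
    robots, points: dicts of name -> (x, y). Returns {point: robot}."""
    free = dict(robots)
    assignment = {}
    for pname, p in points.items():
        # pick the closest robot that hasn't been assigned yet
        best = min(free, key=lambda r: math.dist(free[r], p))
        assignment[pname] = best
        del free[best]
    return assignment

# Hypothetical positions around the building in the red circle
robots = {"r1": (0, 0), "r2": (10, 0), "r3": (5, 8)}
points = {"front-door": (9, 1), "back-door": (1, 1), "loading-bay": (5, 9)}
print(assign_robots(robots, points))
```

Each robot then holds its point with its downward-facing camera while the mosaic and 3-D reconstruction are built from the combined views.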

Another thing we’ve been doing – and this is with Japanese collaborators, and this was done shortly after the Fukushima earthquake – is to build robots that can collaborate with ground robots. So the bottom robot that you see is actually a ground robot provided by collaborators at Tohoku University. Our aerial robot is hitching a ride, and it does that because it’s programmed to be lazy. I already told you that the mission life is short, so it tries to hitch a ride whenever possible. But when the team confronts this collapsed doorway, then the ground platform releases our aerial robot, it takes off, and along with its sensors it flies in and flies over the top of this bookcase. And it’s able to construct a map, and in three dimensions it’s able to essentially recreate what is behind that bookcase, something that a human could not have done in a collapsed building.

And you’ll see this team now exploring this collapsed building, and it takes us two-and-a-half hours to do this experiment – and don’t worry; this video won’t play for two-and-a-half hours – but the main point I want to make is that these robots were able to autonomously create maps like the ones that you see over here. And this was, again, four years ago. This was a map of the 9th, 8th and 7th stories of a building that had collapsed, and we were able to build this autonomously in two-and-a-half hours with just two robots.

But of course, if you’re a first responder, the last thing you want to hear is wait for two-and-a-half hours while I go build my map. You might wait for two-and-a-half minutes or two-and-a-half seconds, and that’s really what we want to do. We want to send in swarms of robots that go in and create maps like this way before the human first responders come to the scene, so when they get there they have situational awareness; they know exactly what to do. And this way you don’t put humans in harm’s way and you do a more efficient job of finding victims and then doing what needs to be done.

So let me just conclude. This is the future we’d like to build, and I want to conclude with a poster of an upcoming movie, “The Swarm.” I know a lot of you read books. I don’t know if you have read this book. It’s an old Arthur Herzog novel. I don’t know if you’ve seen the movie. If you have, you are dating yourself. If you haven’t, I encourage you not to see it; it’s a terrible movie. (Laughter.) But I love the poster. Everything about this poster is true: “its size is immeasurable, its power is limitless.” Even this last bit is true: “its enemy is man.” I might edit it and say the enemy is humans, because it’s us that’s standing between this technology and applications such as agriculture, first response, and a whole variety of other things that I think we can use this technology to solve.

So in conclusion, I just want to make a few comments and maybe shine the spotlight on five sets of questions that this kind of technology should raise.

The first question is about how easy it is to build something like this. So every one of the robots I showed you could be built by one of my graduate students in about two-and-a-half hours from scratch using off-the-shelf components. So this is great. This is an opportunity because it’s lowering the barrier to entry, it’s increasing the number of players doing robotics, and if you increase the number of players you get more shots at goal and more chances to score. So that’s great.

But it’s also a threat because the barrier to entry is low, which means other people can compete in the field, and that’s something we want to be careful about. The innovation here, though, is actually in how we have developed software and how we think about mathematics and how we reduce it to practice in these complicated networks. But here, too, there’s an asymmetry. While you might need talented people to write these kinds of programs and to build these kinds of systems, it’s actually very easy to write bad programs. It’s hard to write good programs, which is what we focus on – things that are safe, things that are provably convergent, things that do interesting things and solve the problem. But it’s relatively easy to write bad programs – especially if you have an adversarial intent. And so these threats are intrinsically asymmetric, and something we have to be vigilant about.

We obviously have to think about safety. And again, if you have sensors on board that are visualizing the environment – I know the FAA is thinking a lot about this – but you should be able to sense and avoid. So the FAA actually has a tough job because it really has to walk this fine line between ensuring the safety of humans, the various assets we have around us, and at the same time allowing people to do the right thing, which is develop technology and innovate.

Then there’s the question of security. So most people talk about cybersecurity, but the security we’re more concerned with is cyber-physical system security. So it’s not just that somebody can hack the cyber; someone can actually hack the way the cyber interacts with the physical. So, for instance, I might have a robot that you might not be able to hack, but you can hack into the GPS system and spoof that, and that can cause my robot to crash in a place where we don’t want it to crash. So this is a very complicated set of issues that I don’t think we as a society are paying attention to.

And then, lastly, everybody worries about privacy. To me, this is the least of the questions here because privacy is not specific to robot swarms or drones. It is an issue that we’re grappling with in many, many different areas, but this is something else that actually stands squarely between what we’re trying to do and important applications.

But one thing is clear: The swarm is coming. Thank you very much. (Applause.)

MR. SCHWARTZ: Thanks, Vijay.

We’ll go to the next talk in a moment, but I just wanted to say, you know, as we were talking over lunch, I’m involved with a little company called 3D Robotics, which is the largest manufacturer of robots of the sort you – they’re not autonomous the way you’ve described it, so not as far along as you are. But having said that, you’re absolutely right: agriculture’s our biggest market, and secondly that software is our biggest differentiator.

You know, anybody can build these quadcopters today. You know, they’re cheap and easy. I’m an aeronautical engineer. They’re nothing to – there’s not much engineering at that level. It’s all in the software, and that’s the real trick here. And this stuff is burgeoning like mad in – both among hobbyists, among companies, among practical applications, so we’re seeing this take off in a very big way. I think it’s – take off literally.

MR. KUMAR: (Laughs.) Absolutely.

MR. SCHWARTZ: So we’ll come back to you.

The other arena of science we’re going to touch on is one in which we have really radically transformed the world already. I graduated with a degree in aeronautical engineering in 1968 and nothing worth knowing in biology did we know then. Biology was souped-up taxonomy then, right? We knew there was DNA, and that was about it. Now biology is an engineering field. We’re actually changing the world of biology.

So Jamie Metzl, who I have known for quite a long time, is going to talk to us about the future of biology. (Applause.)

JAMIE METZL: Well, thank you very much. And I think you guys, both of you, teed it – teed up my talk very well because you, Peter, said it’s all about the software; and you, Vijay, said it’s all about the code. And I’m talking about the future of genetic engineering. And if you believe in the spirit and you believe that human beings are infinitely complex beings and that you could never understand the contents of the human soul, you probably believe that no matter how much we know about genetics or the genome, we’re never going to understand at a core how the human being operates. But if you believe, as I do, that we are related to single-cell organisms – and we can understand single-cell organisms pretty well – and that we have a lot of genetic similarities with roundworms or other simple organisms – and we can build computer models – robotic roundworms that behave exactly like a real roundworm – then you think, well, are we – are human beings this kind of totally different type of entity, completely different from single-cell organisms and completely different from roundworms, or are we just really, really complex single-cell organisms with a lot of cells? And if you believe that we are that, then – and you believe in Moore’s Law and the expansion of knowledge – which we have seen continuously for a long, long time, but it’s now increasing exponentially – then that would lead you to believe that at some point our machines are going to understand us, or we, through our machines, are going to understand how we operate, and we are going to understand the source code of the human being – just like with these robotics, the essence of the machines is in the code. And so my talk today is about where we are, where we’re going, and what are the implications, not just for societies but for all of us as individuals and for us as a species, as human beings.

So first let me say I’m absolutely thrilled to be here. The Atlantic Council is just a really wonderful and very special and very unique organization, and that when we talk about the growth in robotics and the growth of biology we also have to talk about the other great growth story of the 21st century, which is the growth of the Atlantic Council – (laughter) – which, under Fred’s leadership, has just expanded in a really remarkable way. And I’ve – and in my time as a fellow here, as a nonresident fellow, I’ve interacted with many people on the team, and the level of excellence and passion and commitment of this organization is really unparalleled. So kudos to Fred and to Barry and to Governor Huntsman and to all of the – all of the members of the team.

And Peter, in his remarks, talked about kind of where fiction and nonfiction come together. And when we talk about these kinds of topics, the fiction and the nonfiction really are merging, in a way; that it’s – you really have to be thinking fantastical thoughts to think of things that aren’t in one way or another, in one form or another going to be possible, either now or over the course of our – of our lifetimes. And when we think about change and how change happens, because we’re humans, because we are descended from these single-cell organisms and related to these roundworms, we don’t tend to think exponentially. We tend to think linearly.

And so when you think about the exponential rate of technological progress – it’s incredible, with Moore’s Law, if you keep doubling the power of your processors, how quickly you can have incredibly strong computing power. But for us, because we are humans, because of our evolutionary history, when we think about change, we think about what’s a 10-year unit of change. Well, a 10-year unit of change just somewhere in our minds is, well, now is 2015, 10 years ago was 2005. And so from 2005 to 2015, that’s one 10-year unit of change. And so we just think that in 2025 it ought to be 1X, kind of like that unit. But exponential change means that it could be 3X or 4X or 5X. And so it’s very – it requires thinking, all of us, like fiction writers to get our heads around this world that already exists, but is being born every moment.
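The linear-versus-exponential point can be made concrete with a toy projection. The “capability” index and the 4x-per-decade growth factor below are illustrative assumptions, not measurements:

```python
def linear_projection(base, unit_per_decade, decades):
    """Linear intuition: one fixed 'unit of change' accrues per decade."""
    return base + unit_per_decade * decades

def exponential_projection(base, factor_per_decade, decades):
    """Exponential reality: each decade multiplies capability by a factor."""
    return base * factor_per_decade ** decades

# Normalize "capability" to 1.0 in 2015 and project forward.
for decades in (1, 2, 3):  # 2025, 2035, 2045
    lin = linear_projection(1.0, 1.0, decades)
    exp = exponential_projection(1.0, 4.0, decades)
    print(2015 + 10 * decades, lin, exp)
```

By the third decade the two intuitions disagree by more than an order of magnitude (4.0 versus 64.0), which is the gap Metzl is describing.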

So let me take a step back and then come back to this issue of genetic engineering. So in this year, in my belief, something happened in this world that a hundred years from now or a thousand years from now people will look back and say, wow, that was probably the biggest thing, at least in my biased view, that happened in a long, long time. And if you read the newspapers, you would think, oh yeah, it’s ISIS, or it’s even the rise of China, which is something that I – that I know Governor Huntsman focuses on and that I think and speak a lot about. But what I think people are going to remember a hundred years from now about what happened this year will be a vote that happened in the British Parliament, in both houses of the British Parliament earlier this year, over a process called mitochondrial transfer.


MR. METZL: And so – I like it when you get – when you get that kind of response. (Laughter.)

And so what mitochondrial transfer is is that a very small proportion – well, let’s start with mitochondria. Mitochondria are the power packs of your cells. And historically, billions of years ago, they were bacteria that developed a symbiotic relationship with the early single-cell organisms and that merged into them. And so now if you – if you have healthy mitochondria, you have power for the parts of your body that require power, like your brain and your other organs. But a small percentage of women – and mitochondria pass from mother to child, so you all have almost entirely your mother’s mitochondria and almost none of your father’s mitochondria – but a small number of women have mitochondrial disease. And if you have mitochondrial disease, it means that you, and potentially your offspring, won’t get the – your cells won’t have the energy that they need to power themselves. And so that means that if you have a bad case of mitochondrial disease, you’re going to start having organ failures among the organs that require the most energy, starting with your brain. So it’s an absolutely terrible, absolutely debilitating disease.

And researchers in the United States and Britain separately came up with two different ways of doing mitochondrial transfer. And so – but I’ll just use an egg analogy. So 99 – more than 99.9 percent of your DNA comes through the nucleus, which is the egg yolk. In the protoplasm, which is the egg white, is where almost all of the mitochondria exist. So the idea of mitochondrial transfer is you take the parents, if – in in-vitro fertilization, you take the parents’ egg yolk and you take a donor’s egg white – and you can do this either in an egg cell or in an early-stage embryo, a zygote – and basically then you have the parents’ egg yolk and the donor’s egg white, and now you have a cell that doesn’t have mitochondrial disease.

And it’s very controversial because if you have a child with that, the child technically has three genetic parents – the two parents who provided the nuclear DNA and the donor mother who provided the protoplasm and the mitochondrial DNA. And it’s very controversial, and in the United States there is a regulatory process in the Food and Drug Administration right now to decide whether we’re going to go forward with clinical trials in the United States. In Britain, they had three years of public dialogues, public forums, scientific forums. They had a very, very well-developed public engagement process. And then they had a full vote of both houses of Parliament, where they voted on the question of should the state, through the Human Fertilisation and Embryology Authority, authorize clinical trials to go forward in mitochondrial transfer. And that vote was a resounding yes earlier this year in both houses of Parliament. And this is a procedure that a relatively small number of people are going to benefit from.

So why is this such a watershed moment? And in my view, it’s a watershed moment because, first, it’s the first time in history – in the history of our species that a state has authorized heritable – meaning passed on across generations – human genetic manipulation, human genetic engineering. So that’s the first thing.

The second thing is we have used this genetic engineering to eliminate one disease. But we know a lot about how we can use genetic engineering to eliminate lots of diseases, because right now you all know about the Human Genome Project, which concluded a little more than 10 years ago. So the Human Genome Project took 10 years to complete and cost about $3 billion, and what it did is it sequenced the first genome. Now, to do genome sequencing, it takes a few hours and it costs a thousand dollars. Because of Moore’s Law, in five years or 10 years it’s just going to take a few minutes and it’s going to cost relatively nothing. And we already know, based on the analysis that we’ve done of genomes to date, that there are a lot of diseases that are just single-gene mutation diseases, like Huntington’s chorea and Tay-Sachs and sickle-cell anemia. There’s a list of these single-gene mutation diseases.

And so right now we have all this technology. Everybody knows about in-vitro fertilization. And maybe some of you know about a process called preimplantation genetic diagnosis, PGD. But basically, what it means is that when you’re doing IVF, you impregnate – you fertilize the eggs outside of the woman’s body and then you grow it. And you all know this from biology; it’s one cell and then two cells and then four cells, and in about five days the cell is – you have eight cells, which are the blastocyst, the early-stage embryo. And in PGD, what you do is you take away two of those cells – you extract two of those cells and you sequence them. You sequence their genomes. And right now we’re in a very early stage of understanding what the genome says, but we certainly know that there are a lot of things, like the diseases I mentioned a moment ago and eye color and some other things, that are just – and gender – that are switches: if it’s one thing it’s yes, if it’s another thing it’s no. So right now we have the ability to screen any embryo that’s of a – that’s being processed through IVF and PGD, and to be able to determine, does this child have Huntington’s chorea, yes or no?

And so then – all right, so now you’re doing IVF, you’re doing PGD. You already have this example of mitochondrial disease, which is a disease that can be cured through genetic engineering and very likely will be, starting in the U.K. And if you’re a carrier of Huntington’s chorea, of course you’re going to want your embryos, your potential – your embryos to be screened to know which of them have that disease, because if you’re going through IVF, an average woman has about 15 eggs extracted, and you’ll have those 15 options, and every one of them would or could be your natural-born child. And so people with all of these diseases are affirmatively going to want to screen those embryos to make sure which are the carrying embryos and which are the non-carrying embryos, so you can easily imagine how we’ll go from mitochondrial disease to other single-gene mutation diseases.

But we’re learning more and more every day about what the genome actually means. And so right now we have a relatively small number of people who have had their genome sequenced, but we talked earlier about personalized medicine. The whole point of personalized medicine is that, unlike generalized medicine – meaning you have cancer, you get chemotherapy, even though you may be one of the 5 percent of people who will die from the treatment. What personalized medicine will mean is that your genome will be sequenced. It’ll be digitized. And when – before any kind of treatment is given, you’re going to test whether this treatment works on a person with a genome kind of like yours.

And so what does that mean, that if more and more – first millions and then ultimately billions of people will have their genome sequenced, and then we’re going to be able to compare the sequence of their genome with their actual life experience – how tall were they, what was their IQ, if they – let’s say they got Alzheimer’s, how old were they when Alzheimer’s set in. And through these genome-wide association studies, we’re going to know more and more and more about what the genome actually means, and we’re going to go far, far beyond these single-gene mutations.

And I talk with scientists who are working on this, and there’s an estimate – and let’s say this is off by 1X, 2X, 5X, it doesn’t really matter – but that in about two years we’re going to be able to look at that five-day-old embryo and we’re going to be able to estimate within an inch or so how tall that child is going to be, provided adequate nutrition. And in about 10 years, we’re going to be able to do that same process and be able to say, within let’s say five or 10 points, what the IQ of that child will be. So when you’re five days old, your genetic IQ will be available. And so what’s that going to mean? Because many people are going to – if you’re somebody like me, an Ashkenazi Jew and just historically carriers of a lot of genetic diseases – there are certain people who are carriers, certain people who are part of higher risk groups, certain people who are older parents – of course people are going to want to screen out these genetic diseases, which at that time will be seen as diseases of choice: you’ll get them because you didn’t screen them out. And so people are going to go through that process, but the very same process that determines what disease genes you may be carrying will give us all of the information that we need about all kinds of positive traits that we’re going to be able to select for. And some societies will want that and some – and some won’t, but that’s certainly going to be available.

So that’s kind of step one of the – of what this genetic revolution will look like. It will be – and it’s, again, all possible now: embryo selection based on more and more information about what the genome is saying.

But then what’s step two? Again, technology that already exists. And I know this is a family-friendly Atlantic Council event, and I know that the Millennium Fellows are here, who are of a younger generation, but forgive me in advance for saying that, in an average male ejaculation – are we allowed to say that, Fred?

MR. : Ejaculation. (Laughter.)

MR. METZL: Yeah. (Laughs.)

MR. : You can.

MR. METZL: There are – there are hundreds of millions of sperm. But the limitation in IVF is that in – as I mentioned before, in the average extraction of eggs from a woman, there are only 15 eggs. But we already have technologies using stem cells to do all kinds of things. We can take cells – ovarian cells and turn them into egg cells. We can take embryonic stem cells and turn them into eggs. And so now, instead of having these 15 eggs, you can have a hundred eggs, a thousand eggs, whatever you can afford. So now you don’t just have these 15 options. Let’s just call it a hundred options. So you have a hundred of your own fertilized – of your own fertilized eggs, all of which are your potential, quote/unquote, “natural children.” And now, because the genome sequencing costs close to nothing, you can do the – gee, the sequencing of each one of those hundred. And then you’ll get a spreadsheet and you’ll say, of your hundred, these 40 have Down syndrome, these 25 have a 75 percent chance of being geniuses, these five have the same genetic markers as everyone who’s ever won the hundred-meter race in the Olympics. Anything that’s knowable we’ll be able to know. But you’re going to be able to choose from a much larger set.
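The “spreadsheet” idea reduces to a simple screen over genome reports. Everything below – the embryo identifiers and the marker name – is a hypothetical illustration, not real genomic data:

```python
# Given per-embryo genome reports, split the set into carriers and
# non-carriers of a single-gene mutation marker.

def screen_embryos(reports, disease_marker):
    """Return (carrier_ids, clear_ids) for one screened marker."""
    carriers = [e["id"] for e in reports if disease_marker in e["markers"]]
    clear = [e["id"] for e in reports if disease_marker not in e["markers"]]
    return carriers, clear

# Hypothetical reports for four of the hundred sequenced embryos
reports = [
    {"id": "embryo-01", "markers": {"HTT-expansion"}},  # carrier
    {"id": "embryo-02", "markers": set()},              # clear
    {"id": "embryo-03", "markers": {"HTT-expansion"}},  # carrier
    {"id": "embryo-04", "markers": set()},              # clear
]
carriers, clear = screen_embryos(reports, "HTT-expansion")
print(carriers)  # ['embryo-01', 'embryo-03']
print(clear)     # ['embryo-02', 'embryo-04']
```

The same screen generalizes from one disease marker to any trait the genome-wide studies eventually make legible, which is exactly the slope Metzl is describing.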

And I don’t know if there’s – I know that it’s – you talk about old movies or old TV commercials, it dates you, but it’s just a fact. So I’ll – this is a reference that only maybe Peter and me and maybe Fred will get, but it’s from an old commercial: “But wait, there’s still more!” Do you remember that? (Laughter.) OK. Anyway, not many people. Anyway, it was an old commercial. Forget it. Sorry.

And then – but the next phase after that will be, again, technology that already exists, which is gene editing. And I mean, I’ll talk a little – I’m going to talk a little bit more about it in a moment, but we have technology now, and one of the technologies is called CRISPR-Cas9, but it’s precision gene editing. So you know there’s the genome sequence, the ability to go in, to identify one gene that you’d like to change, and you can just – essentially just swap it out using enzymes and manipulating the RNA. And we haven’t perfected this, but this is a technology, again, that already exists. And we will – either this or other approaches – we will perfect the technology. So now you’re going to be able to do the same embryo selection, but let’s just say that you and your partner are both carriers of a disease. You’re going to be able to go in, ultimately, and swap out this strand of DNA that indicates for that disease and replace it with something else. But once we can do that, it opens up a whole world of potential replacements. We can use synthetic DNA. We can use animal DNA. And these – again, these are all things that we can do now in preliminary experiments – I would say not on humans, but I’m going to talk about that in a moment. And this world is very quickly coming into being.

And the challenge that we’re – that we’re facing, as I mentioned before, the science is advancing exponentially, our thinking is only increasing linearly, and the policy and regulatory framework is only inching forward glacially. So there’s this total mismatch between what’s possible – what’s technologically possible and how prepared we are to figure out what to do with these technologies. And to make matters worse, and to make matters even more complicated, different societies have different views on how – if and how we should engage these technologies.

And so what happens – and first I’ll talk about this theoretically and then I’ll talk about the news from the last couple of weeks – what happens if one society – and let’s say ours – is uncomfortable with using these kinds of technologies and another – we can just call them China – is more comfortable? And we saw that with George W. Bush, with the stem-cell debate. We had a moral debate in this country and we decided that – our government decided that we weren’t comfortable with federal funding for a certain type of stem-cell research, and so we cut off funding. It didn’t slow stem-cell research in any way. It just meant that our best scientists left and went to Britain and went to – went to Singapore. And so every country will have the possibility to opt out, every individual will have the possibility to opt out, but that’s not going to slow the advance of the technologies.

And so what happens if we – let’s just say the United States decides to put a ban on using these genetic technologies on – heritable genetic technologies on humans and another society decides that they want to go ahead? Will we screen people at our borders? Will we give – will we do a full genome sequence of every person who’s trying to come into the United States? Will we make it illegal for an American citizen to procreate with a genetically enhanced person? And when you think about the debate that exists in the world today about genetically modified crops – and this is after 30 years of scientific research has shown that genetically modified crops are no more dangerous to humans than normal crops – but we can’t even have an informed conversation about GMO crops. Imagine how people are going to go crazy when genetically modified people start showing up. And again, I talked about the British example, so the first genetically modified humans will be crawling amongst us at earliest by the end of this year and latest by the beginning of next year, and so it’s only going to go from there.

And when we talk about these societal differences, they’re already bursting forth into the – into the media and into the world. And in the United States, over the last couple of months there’s been a debate where there have been two groups of scientists who’ve sent letters to the editors of Nature and Science. And so the group that sent into Nature talked about this CRISPR-Cas9 gene editing technology, and they said this stuff is so profound and it has such an ability to affect the genetic code of human beings forever there should be an absolute moratorium on the use of CRISPR-Cas9 on humans. That was one group. Another group, which included the professor at Berkeley who had been the lead developer of CRISPR, said this stuff is really dangerous and really powerful; we need to be extremely cautious and we should – we should wait a long, long time and be very careful before applying this to human beings. So that was the debate that we were having here in the United States.

In China, earlier this month a report – not a report, a research paper in a journal talked about CRISPR-Cas9 research that was done on non-viable but – non-viable human embryos. So while we were having this debate about should we have a complete moratorium or just be extremely cautious, in China not just one lab, but reportedly four labs are now doing experiments on human – non-viable, but still human embryos, where they are doing precision gene editing to alter the genomes of these – of these embryos.

And I’m not saying that there’s – that that’s absolutely the wrong thing to do. I think there is a debate, and these are non-viable embryos. But we are right on the verge – we have all of the tools to completely alter our genes. We’ve had 4 billion years of evolution which has happened one way, and that way is the way that you all learned, or I hope you all learned, in biology class. But now we are at the cusp of this Promethean ability to write our own future, to remake our own genome. And we’re seeing the difference. We’ve seen different societies going at it in different ways. And when you just think of where this is going to go, faster and faster and faster, it’s astonishing to me that we’re not having a meaningful national debate and an international debate on the implications of these technologies, because my biggest fear is that the conversation about genetic engineering will follow the path of the conversation about Ebola.

Twenty years ago, when there was the outbreak of Ebola in Africa, there was a story in the New Yorker and then a book called The Hot Zone, and everybody read it and everyone kind of registered Ebola: oh, that’s really bad. And then people went on with their lives, and all of a sudden last year there were these stories of Ebola happening in Africa, and most people in the developed world thought, wow, that really sucks for them; another one of these stories about bad things happening in Africa. And then we had three cases come to the United States, and all of a sudden all these people who hadn’t really registered Ebola completely freaked out. It was a media frenzy. And at that point we couldn’t have a meaningful conversation about what to do about Ebola, because the right thing to do about Ebola was to invest in infrastructure in West Africa, to do all these basic things that needed to be done. But nobody could have that conversation once Ebola landed here.

And when genetically modified people start showing up, that’s what’s going to happen. People are going to run to the barricades. And it’s not just a theoretical point. I mean, we’re talking about the core of what it means to be a human being. And because of that, now is the time that we need to be having these kinds of conversations.

And so kudos to the Atlantic Council for trying to raise this and these other very challenging questions. And I think the challenge for all of us, and maybe even especially the Millennial Fellows, is to think about, how do we integrate these – the lessons, and how do we think about a framework for applying our best values to deal with these challenges that are coming faster than ever before? So thank you very much. (Applause.)

MR. SCHWARTZ: Thank you, Jamie.

And yeah, you guys really picked two great themes. I should say I have an IVF child. He’s 25. He’s a great kid – smart, sweet, lovely. But you know, if I could have given him the Roger Federer gene – (laughter) – that would have been my choice, but it wasn’t an option when I saw his four cells in the microscope; they didn’t give me that option.

And as it happens, I have written a television event that will appear on Fox about a year from now. I help write movies and things like that. The next one takes place in 2050, where the big challenge is: what is human? Because it turns out you were both right at this moment in time – indeed, robots are a challenge and enhanced human beings are a challenge.

So we have here a room full of normals, right? Any enhanced in the room? No? Well, I’ve got an artificial hip, so I’m already part cyborg. And I’ve got artificial eyes; I had cataracts, and I now have the eyes of a 5-year-old. So I’m on my way. Any other cyborgs in the room? OK, well, I’m one of the first. No electronics yet, but I’m working on that as the next thing.

But let me ask you both quite seriously. I’m part of a team in Silicon Valley working on the employment implications of robotic technology and so on. There’s a very good new book out by Andy McAfee and Erik Brynjolfsson, “The Second Machine Age,” that deals with all this stuff and what it means for employment. There have been several books on the subject.

No good fiction yet on yours. Yours is the next one. You know –

MR. METZL: You know, mine is already out, so – (laughs) – I’m glad –

MR. SCHWARTZ: Oh, it’s out already? OK. All right. Well, we’ll have to make sure we read your book.

But the point is, should all these people be worried? Should they be worried about their children? So far none of you have enhanced your kids yet, I would guess. So are your kids going to be uncompetitive with the robots on the one hand, or with the enhanced human beings on the other? Should they be worried?

MR. KUMAR: Well, one question you could ask is, are robots better than humans? And in fact, there are many axes along which robots are superhuman, right? Think of a perfect hitter in baseball – a robot could do a better job. If I wanted to throw the perfect curveball, a robot could do a better job, right? So there are lots of things that robots could do better, and I think we should feel a little threatened by that.

The real question is, can you train people? And I think, you know, the comment about linear growth, exponential growth, superlinear growth – in the past, technological innovation has been happening at a certain rate, and education and training has kept pace with that. So when people kicked out the typewriters and replaced them with word processors and then eventually iPads and so on, we were able to catch up. My mother uses an iPad. This is no problem. But the next generation of jobs will require, I think, a level of investment in education and training that’s commensurate with the rapid increase in technology. And I think that’s the piece – from a policy standpoint, that’s what we have to struggle with and what we have to make sure we think through, particularly from the standpoint of investment.

MR. SCHWARTZ: So we need to learn how to be competitive with the machines we’re creating.

MR. KUMAR: Yeah. If the machines are becoming exponentially better, well, our educational programs have to be exponentially better, too. And that’s something no one wants to pay for.

MR. SCHWARTZ: On the other hand, on the genetics side, you know, are we going to have Olympics for the normal and Olympics for the enhanced? Is that where we’re headed?

MR. METZL: Well, let me talk about your first question and that one together.

When we look back at photographs of our parents in whatever era it was, you think, like, look at those hairstyles, look at those funny ties, look at the shoes, whatever. Twenty years from now, when we look back at today, people will say: you see that little rectangular thing on the table? What is that? It was called an iPhone, and it was this technology that you carried around like it was a separate thing and put in your pocket. And they’ll say, what, you mean technology wasn’t always a part of you? And so I just think that we and our machines, in fundamental ways, are going to be merging.

And so these adoption questions – whether it’s how we’re merging with our technology or how we are genetically selected or, in the more distant future, genetically engineered – will, for some group of us, be part of what it means to be a human being. But just as H.G. Wells predicted, there are going to be divisions between the people who have access to these technologies and enhancing capabilities – embryo selection is one, but there will be lots of others – and those who don’t.

And that’s why, for me, it comes back to a values question. But it’s the exact same values question that we have today, because the difference between this genetically enhanced person of the future, or technologically enhanced person of the future, and somebody who doesn’t have those things will never be as great as the difference between every single person in this room and the average person in the Central African Republic today. So we’ve already, as a civilization, come to terms with and accepted these massive differences between people.

And so there is a possibility that all of these technologies will be distributed at least sufficiently enough so that, on average, people will benefit. But there’s also a possibility, if we don’t use the right values framework in thinking about every one of these technologies, that it’s going to be more and more centralized. And that’s why I was saying now is the time when we need to be having these conversations.

MR. SCHWARTZ: And of course, it is very hard to have those conversations outside the context of our history, our values, our culture, our religions, et cetera.

Well, we want to open up the floor for debate, conversation, challenge. You don’t have to ask a question. You can state your point of view or you can make it a question. So who would like to speak first from the floor? There’s a hand back there. Yes, please. Please identify yourself.

Q: Sure. Hi. I’m Daryl Sng from Deloitte Consulting.

And my question is: you’ve said now is the time to have this conversation, but as you know, public policy discussions don’t just happen. Often, unfortunately, it’s what I call panic-propelled public policy: we only have those discussions when something becomes a crisis – whether it’s Ebola, whether it’s whether we allow kids to walk home from the park by themselves. So how do we get the ball rolling on these conversations about the vital importance of robotics and genetic engineering?

MR. KUMAR: Well, so you’re right. I think, you know, as Rahm Emanuel said, never waste a good crisis. And I think crises are important because they then spark intellectual discussions – not always, but often.

But I think we as technologists have to bear some responsibility. Again – and I very much appreciate this comment about linear and exponential – technologists are the ones who hopefully are not thinking linearly and who can anticipate some of these things going forward. And in some sense we are to blame, because until now we’ve been sort of holed up in our own little shell doing our own thing. I think we need to reach out and engage people in these kinds of discussions, rather than be blamed for the things we come up with and get criticized later on. So that’s one way of doing it.

I also think that, at least in Washington, the intersection between technology and policy is increasingly being appreciated. I see more and more people willing to serve in that capacity, and I see staffers on both sides of the aisle being receptive to these kinds of views. I think we’re going in the right direction. We might not be doing it fast enough, but it’s happening slowly.

MR. SCHWARTZ: You know – I’ll ask you to comment in a moment – it is interesting and sad that in, I think it was 1995, as part of a budget act, Congress wiped out an institution that was designed to do precisely that: the Office of Technology Assessment. Newt Gingrich, who actually deserved credit for helping create it, then eliminated it in a fit of budget stupidity, basically saying we don’t really need foresight, thoughtful consideration of these technologies, and an informed Congress; we’d rather look at the future this way. And so we took what was, in fact, a very good institution and wiped it out. And I’d love to see us bring it back for precisely that reason.

MR. METZL: Yeah, yeah. No, I think it’s a great point, because before the crisis happens is the time when you can have the thoughtful conversation, but people aren’t paying attention. And then, when the crisis happens, you can’t have a thoughtful conversation anymore. So all I can say is: keep having these kinds of dialogues, and have everybody here go to other places – whether it’s your homes, whether it’s tweeting – and say, hey, I heard these important ideas.

I mean, I think there are a lot of things we can do. For me, one of the reasons I wrote my novel “Genesis Code,” which deals with all of the issues I was talking about today, is that I kept writing policy articles and, you know, Fred and Barry and a couple other people read them. I thought, well, I need to find some other way of reaching people. And so I just think that in the beginning we all have to try our best and hope that we can lay enough of a foundation so that when the crisis comes – and it inevitably will – at least the framework is set for a more rather than less meaningful conversation.

MR. SCHWARTZ: Yes? Did you have your hand up, Carmen, too?

Q: I did.

MR. SCHWARTZ: And Mat, you did, too? OK. We’ll take in that order.

Q: Hi. I’m Eli Wine (ph) with the RAND Corporation. Thank you both for amazing presentations, and really, really thought-provoking.

I think you made the comment that science and technology are progressing at an exponential pace, while our policy and regulatory responses are progressing at a glacial pace.

MR. METZL: Right.

Q: And when you consider swarms of drones and the future of genetic engineering, are there self-evident or basic steps that the United States government could take to reorganize itself, to make itself more adaptable to changes in science and technology? Or is that a far-fetched dream?

MR. METZL: I think Vijay’s best placed to answer, because you were serving in this capacity in the White House, yeah.

MR. KUMAR: Well, one thing I realized after two years in Washington is that we have lots of constraints – budgetary constraints, political pressures and so on and so forth – but fundamentally we’re talent-starved. If you can attract the right people to Washington and if you can empower them, I think government will start to play an active role. That’s my firm belief.

You know, in my two years I was very impressed with the quality of the people I met, and I think we just need to double the number of people who come to Washington, work for peanuts, work for the government, work for our society, and actually bring these issues to life. To me it’s all about the people; that’s what America has, and that’s what we need to exploit.

MR. METZL: And I would just add to that – and I know you think this, Vijay, because we talked about it just before coming in – that we also have issues of political paralysis that interfere with our ability to have some of these conversations. But with a lot of these more revolutionary technologies, I think there are maybe two or three things that we can do.

One is to begin a conversation just about what our values are. Sometimes it may be too early to have specific laws – and I know with drones, if we regulate too early, we can actually inadvertently squash innovation – but I think we can determine what our values are. What are the red lines that we don’t want to cross? I think that’s one thing.

And a second thing we can do: a lot of these topics – robotics, artificial intelligence, genetics – just require a lot of conversation, and I think that government can play a catalytic role in promoting inclusive dialogues, so that when the you-know-what hits the fan at some point later, people will have thought through these issues. They will have heard opposing views. That’s something the government – and universities and organizations like the Atlantic Council – can play an active role in promoting.

MR. SCHWARTZ: So I’ve just been given the 10-minute sign, so what I’m going to ask is that all – we’ve got Carmen, we’ve got Mat, we’ve got Paula. Did I miss anybody? OK. So I’m going to ask all three of you to make your comment or question, and then I’ll ask our two speakers to respond, OK?

Carmen, over to you.

Q: So Carmen Medina, Deloitte Consulting.

Tongue in cheek, sometimes I think that the Unabomber was right – (laughter) – if you read his manifesto. My question: what are the implications of all this for unequal opportunity and a sort of technological poverty, and what will that mean for the social stability of countries, of societies?

MR. SCHWARTZ: Yeah. What was the – for the younger ones in the room – (laughter) – yes, you and I remember, Carmen, but what did the Unabomber say that was –

Q: The Unabomber’s manifesto argued that technological change was denying humans the life that humans were supposed to lead and was sort of eviscerating humans, if that’s the right word. Is that fair?

MR. SCHWARTZ: Yeah. Yeah, and he chose to live a life without technology, et cetera.

Q: Without technology, yes.

MR. SCHWARTZ: You want to add something to it, Fred? OK.

Q: Mat Burrows, Atlantic Council.

Do we have the international structures in place to really talk about these technological issues? You don’t see much of this in the G-20, for example. (Laughter.)

MR. SCHWARTZ: Really? I’m surprised.

OK. Paula and then Fred.

Q: Thank you. Oh.

MR. SCHWARTZ: Well, just –

Q: Go ahead.

Q: All right.

Thank you very much. My question or comment concerns the role of those individuals in the medical schools who are teaching these public policy questions and dealing with these ethical questions. There must be advisory groups which, if they have not been set up nationally, should be formally set up nationally and/or internationally now. Now may even be too late.

And the other two points I wanted to make: my daughter, when she was in high school – and she’s now a doctor – dealt with both of these issues in two different plays, “Twilight of the Golds,” if you know that one, and “Frankenstein in Love.” So there have been people dealing with these things, but I think it really does require authority and leadership at the national and international level.

MR. SCHWARTZ: Yeah. Yeah, Mary Shelley did put it on the agenda, in a sense.

Q: Since you’re doing novels, Jamie, be George Orwell, maybe 2034, and particularly on the future of what does warfare look like in that year. And you may want to jump in as well with your robots.

MR. SCHWARTZ: Yeah, we’re going to get into that in the next session as well – future warfare. So feel free to comment on any of those: the international dimension; whether the medical schools are right; whether maybe the Unabomber was right and we should be fundamentally cautious, because human life ought to be what it once was rather than technologically mediated; and the future of warfare.

MR. METZL: All right. I’m going to – because I can’t resist, I’m going to quickly respond to all of them.

So, Carmen, of course we all know that the Unabomber was a fool. The issue of technological division is a real one. But I think that every generation thinks of natural as just what they grew up with, and so it’s idiotic of the Unabomber to say, oh, I’m for nature, when what he means is the way things used to be in the 1920s. It’s the same in every area. Even the anti-GMO people – what, do they want to have food like you would have found at the end of the Ice Age? Everyone would starve to death. So we think of these things as natural, but what we really mean is things that are familiar to us. And those familiar things are often things that have been completely manipulated by our ancestors. I mean, the Unabomber wasn’t going around assassinating all dogs, but dogs are just this human creation that we’ve made out of wolves. So I just think this whole idea of nature is a fiction, but it’s a fiction that’s very meaningful to us.

For Mat’s question, that’s an easy, quick one: There’s no meaningful international structure to discuss and explore these issues. And there really needs to be.

And then, Paula, I completely agree – bioethics is a growing field. In some places – in the United States and elsewhere – it’s actually pretty well developed, and there are structures for oversight of all kinds of research. China, for example, as the governor knows, has quite good laws, but enforcement is spotty at best. And so a lot of this research, this human embryonic research, may have been against the law in some way, but there have to be mechanisms – and we have to globalize those mechanisms because, from a species perspective, it doesn’t matter what happens where. And on these issues, I think the Atlantic Council can play a meaningful role in putting forward an agenda for what needs to happen.

And then Fred’s question – I was just asked to do a story for this future-of-warfare project. Some people talk about using genetic engineering to create super soldiers, people who can, you know, stay up for a week and have big muscles. But I believe that generalized intelligence across a population will be the greatest asset. Right now, if the United States fights a war with some country and every one of their soldiers is some kind of super Olympic-level athlete, they won’t have a chance against us because of the way that we integrate all of the elements of power. So I think that warfare, as it is today, will be one form of societal organization competing against another form of societal organization. And the societies that use these technologies most creatively will be most advanced, not just in warfare but in everything else.

MR. KUMAR: I just had a couple of comments.

So robots will, unfortunately, fight wars for us. My only hope is that maybe wars can be fought by robots alone: if country A’s robots beat country B’s robots, country A wins and that’s it, right? (Laughter.) So that could be –

MR. METZL: Rock ’em, sock ’em.

MR. KUMAR: Yeah, that’s right. (Laughs.) It’s like “BattleBots” except on a bigger scale.

I think this question – going back to the Unabomber – so on the one hand I talked about how technology is being democratized and we’re actually making it easy for people to enter the field, lowering the barrier to entry. At the same time, we’re also making it harder for people to learn the basics, right? If you look at Philadelphia, despite the proliferation of these massive open online courses, we’re unable to reach 40 percent of our high school kids because they just don’t have the Internet access they need. So what does that say about our society? I think there’s a deeper question here. And again, to me, it goes back to investments in the future of our country.

MR. SCHWARTZ: With that, that’s a great place to end. Thank you to you both.

And how long is our break?

FREDERICK KEMPE: So we’ll take a quick break for about 20 minutes, if it’s all right with everyone. We had such an interesting first panel we ran a little bit over. But thanks very much to our speakers. Thank you, Peter. We’ll reconvene here at I guess about five minutes after four for the very next panel. Thanks very much. (Applause.)