December 10, 2013
Transcript: 2013 Strategic Foresight Forum - The Bio-ICT Convergence: Increasing Risks, Ubiquitous Vulnerability, Promising Breakthroughs
Welcome and Moderator:
Dina Fine Maron

Speakers:
Ramez Naam,
Author of "The Infinite Resource: The Power of Ideas on a Finite Planet"
Richard Danzig,
Vice Chair, Board of Trustees, RAND Corporation
Michael R. Nelson,
Principal Technology Strategist, Technology Policy Group, Microsoft

Location: Atlantic Council, Washington, D.C.
Transcript by Federal News Service
MS. MARON: I just want to frame the issue for us before we dive in a little deeper. We're seeing so much on the front lines of science right now, both in our headlines and in our research journals – everything from our ability to create test-tube meat that one day we might see on our plates, to 3-D printing that is opening up new frontiers for building prosthetics and even cell parts, to tools that could really transform the future of health care for us.
And then more on the tech side, looking at how mobile apps are really changing the face of health care, our ability to quantify for ourselves what's going on in our daily lives, how much exercise we're getting, how much sleep, how we're processing pharmaceuticals, asthma risk, things like that that are really going to potentially fundamentally change what's going forward for us. And even in areas like disaster relief, mobile phones are really changing our abilities to track and respond.
But with all of these new breakthroughs, there are also new challenges and ethical dilemmas that we hadn't really anticipated. 3-D printing is raising new questions about counterfeiting and how to regulate it. With better electronic medical records and health IT, we still don't have much regulation in that space. In fact, this January the Department of Health and Human Services is congressionally required to issue its first regulations looking at health IT questions – how we are going to regulate them and what we can do going forward. So we'll be keeping our eyes peeled for that.
To tackle some of these questions, I have this great panel I'd just like to introduce to you briefly. In the center, we have Ramez Naam. He's a technologist and author who helped create Microsoft Internet Explorer and Microsoft Outlook. He is currently CEO of Apex Nanotechnologies, which is a nanotech software company. And he's the author of numerous nonfiction and science fiction books. So you can take a look at that.
Directly next to me is Richard Danzig. He's the vice chair of the board of trustees at the RAND Corporation. He also holds other roles, including membership on the Defense Policy Board and the President's Intelligence Advisory Board. Notably, he was the 71st secretary of the Navy from 1998 to 2001, and a senior adviser to then-Senator Obama from 2007 through the 2008 presidential campaign.
And to the far side is Mike Nelson. He's a principal technology policy strategist at Microsoft. He works on issues that define the future of technology. So it's good that he's on our panel today. And he's also been a professor at Georgetown looking at these questions and helping shape future minds. So it's great that we have all three of them today.
Before we get started, they're each going to do about five minutes of response. I'd really love it if within that you could try to keep your comments a little brief and we're just going to ask a little bit about some of the opportunities you see going forward as well as how we can prepare for them. Ramez, you want to begin?
RAMEZ NAAM: Thanks, Dina. So obviously with the use of IT in medicine and health care there are tremendous opportunities – better diagnostic systems, health records, all sorts of technology we can use there. But I want to talk about a different phenomenon at the intersection of biotech and IT. Richard Dawkins was once asked if he could summarize biology in one word. And the word he chose was digital. What he meant was that the genome itself is information. It's encoded in quaternary, if you will – each base pair of our genome is one of four possible letters.
And you're all familiar with the concept of Moore's law, the exponential progress in computing – we get about 100 times more computing per dollar each decade. Well, over the last decade we've gotten about 1 million times faster at genome sequencing per dollar, about a million times better at printing genomes per dollar, and similar improvements – maybe not quite that radical – in editing genomes. So that's a fundamental thing, a different slant on ICT and biotech intersecting.
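[Editor's note: the two improvement rates Naam compares can be put side by side with a little arithmetic. The sketch below is illustrative only, using the round numbers he cites – roughly 100x per decade for computing and a million-fold per decade for sequencing.]

```python
# Illustrative arithmetic for the rates cited above: computing improves
# ~100x per dollar per decade (Moore's law); genome sequencing improved
# ~1,000,000x per dollar over the last decade.

def annual_factor(per_decade: float) -> float:
    """Convert a per-decade improvement factor to a per-year factor."""
    return per_decade ** (1 / 10)

moore = annual_factor(100)        # ~1.58x per year
sequencing = annual_factor(1e6)   # ~3.98x per year

print(f"computing:  ~{moore:.2f}x per year")
print(f"sequencing: ~{sequencing:.2f}x per year")
```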
And that's a platform technology that I think has three big opportunities for us. The first of those is entirely outside of health care, because so much of the world that we care about outside of our bodies is actually biological. Earlier we had a session about food, water and energy access. And a lot of those problems are actually problems that biotech is being applied to. We have to grow about 70 percent more food by 2050 to feed the world. We don't have the luxury of chopping down a lot more forest to do it. And so biotech applied to crops is one of the big ways we can do that.
We know that we have to reduce fresh water use while growing those crops. And again, reducing water use per calorie grown is a big frontier in agronomy, and biotech is key to that. For all that we were talking about transport improvements and so on, there are about 800 million vehicles with internal combustion engines on the road today. And the only viable path we see today to fueling those vehicles in a carbon-neutral way is next-generation biofuels, which are the sorts of things that genomic pioneers like Craig Venter and George Church are now devoting themselves to. So that's one big platform direction that we can take this.
Second, in health care, is diagnostics. Right now we do most diagnostics via symptoms, or via markers in your bloodstream or other sorts of biological markers. But we don't actually get inside the cell. Increasingly, as the cost of gene sequencing has dropped from the $3 billion the first genome cost to now approaching $1,000 – and in another decade perhaps $5 or $10 – we're looking at the ability to see what genes you have expressed: not just what genes you have, but what genes are active in a healthy cell or an unhealthy cell, and actually know the root cause of an illness you're experiencing. And that's a potential breakthrough in medicine.
And the third, which is possibly the hardest, is actually applying genomic technology to therapies. It's maybe the hardest because we are, for a variety of reasons, very, very risk averse to messing with the human body. Our tolerance of error in energy technologies or food is much, much higher. You know, you mess something up, it's OK. Our tolerance for error in applying a therapy to a human is very, very, very low.
But even so, there's a potential there to not just treat a disease in a chronic sense, but to go in and actually rewire the genetic problems that are in the body and fix diseases – not even just the diseases that we think of as being genetically based, but problems like the fact that as you age, heart tissue starts to express a certain gene less, which makes your heart less efficient. That's the sort of thing that we could tweak later on. And those sorts of developments will take decades to come to fruition.
And there are, of course, risks with this: lower and lower costs for biotech mean that things it used to take a major research university to do can now be done in a garage. We have the emergence of the DIY bio movement, the do-it-yourself bio movement. In Seattle, where I live, there are now multiple biohacker spaces where you can genetically engineer individual bacteria or viruses in what was a garage. High school students are doing genetic engineering.
And that means that there is the real possibility that at some point we'll face a national security threat where someone could have engineered a pathogen. Now, I think we'll find that's actually much, much harder to do in an effective way that makes an effective weapon than the scary fiction will make it sound. Most of these things will simply just fail to work. But it is something we have to keep our eyes on. So with that, I'll stop and pass it off to Richard.
RICHARD DANZIG: Well, thank you. I should begin by apologizing for my informal look. As a national security type, the phrase Snowden had new meaning to me today. (Laughter.) Oh.
I do want to speak, though, as a national security type on this panel because I think my role is to suggest something of the darker side here, the kinds of risks associated with the wonderful upside of the technologies we're talking about. And I think you've said it well, Ramez, when you talk about the – our tolerance for error is lower in the health arena than it is many other arenas. So is our tolerance for insecurity dramatically lower.
I like the combination in the title for this panel of the information age technologies and the biological technologies. They're siblings – maybe even twins, born around the same time, blossoming through their childhoods in the '50s. And the convergence of these twins almost, if you'll forgive the metaphor, in an incestuous way now is quite striking, as we harness in a hundred ways the information revolution to the biological revolution and the biological revolution to the information revolution.
They traffic in many of the same kinds of things. But this opens up a set of problems. As we move towards electronic records and the kinds of dependence on sensors and installed bodyware that change our biological makeup, the same kinds of problems that afflict the hacker world – that afflict us through hacking or through state or criminal interventions in the computer world – will come to trouble us in the biological world.
As one who's looked some at both these worlds, I'm struck by the frequency with which, in this new world of applying IT to biology, we're repeating some of the same kinds of errors – the built-in vulnerabilities, the insensitivity to security – that we have encountered in the more purely classical cyber world. And we can talk more about that if you'd like.
I ask myself about this, as a security type, what are my security concerns? And I think the answer is that I have lots of them in an everyday kind of way – it's easy to imagine changing a blood type in a hospital record and assassinating somebody that way. And if you want to find examples of criminality in the realm of possibility here, take a look at some of the fiction, like Suarez's book "Daemon" and the volumes that follow it. It conveys some of the issues here well. But I must say, I'm not terribly concerned about that. We confront criminality all the time, and we confront even murder, et cetera, with some frequency in day-to-day society.
The central question, I think, is the catastrophic one: Are there, in the new world we're creating, potentials for attackers – because they're vandals, or essentially anarchists, or working for groups, or have some ideological agenda, even agendas like environmental improvement – to do things that essentially hold the rest of us hostage or prisoner? Can the state – the U.S. in this context – be rendered unable to exercise its will because of attacks using these new technologies, or because people undermine our dependence on these technologies?
And it seems to me those possibilities exist and they exist in the context of the biology, principally in the consequence of the fact that DNA is information, that this information can be passed, that synthetic biology opens up the opportunity for individuals to create new pathogens that then can be used as instruments of attack against which we're poorly defended. We can say more about this in the context of the panel. I would just conclude by observing this is not a new risk. We have lived with the risk of use of biological weapons. And some of us have been quite concerned with this over the years. These risks have not materialized to the extent that many of us feared.
There are reasons why that's so, and there are brakes on the propensity of the system to run out of control this way. But with the coming of synthetic biology, we have more democratization of the technology, more efficacy available to would-be attackers and the potential for the creation of illnesses against which we have no existing defenses. Those open up a different set of security challenges, which we can talk about. But I'll stop at this point because it's Michael's turn.
MICHAEL NELSON: Well, thank you very much. I'm very glad to be here. When Barry invited me to speak about five or six months ago, I had a different job. I was a professor of Internet studies and innovation at Georgetown. And I guess I'm going to speak mostly in that role. I'm still teaching classes about how to predict the future and still helping the master's students in our program figure out what's coming. That's why I'm wearing the tweed jacket and looking academic.
That's one way of saying I'm not presenting Microsoft's vision of the future. This is Mike Nelson's vision of the future. I am going to be a little bit different from both of you. First off, I'm going to be a pathological optimist on the technology but I'm going to be a total pessimist on the regulation and I'm going to talk to both of them in the course of this discussion. And I'm also going to redefine the question a little bit. When I was on Capitol Hill working with Senator Gore, I learned the importance of redefining the questions.
And I like the question as it is – information – info, bio. But I think bio is a lot more than just biotech. So info, bio, psycho and socio, and I want to touch on each. I brought a prop along to talk about the quantified self. How many people consider themselves part of the quantified self movement? I knew you would be, Peter. (Laughter.) OK. This is mostly athletes, people who are trying to track their own body's behavior, trying to understand what's going on, what their weight is, what their exercise routine is, what their sleep patterns are.
Has anyone seen one of these? This is a Zeo sleep monitor. So you wear it on your head at night. It's a wireless device. It attaches –
MR. : Very kind of you not to wear it while I was speaking. (Laughter.)
MR. NELSON: If I had been, you might have known that I was generating REM waves in deep sleep. But over the course of the evening, it will record whether you're in deep sleep, in REM sleep or awake. And it actually is attached to, by wireless links, to a device that looks like an alarm clock. And you can adjust the alarm clock so that it won't wake you up out of a good dream. (Laughter.) Or if you want to remember your dreams, it will wake you up in the middle of a dream.
What's exciting is it will track your sleep behavior from night to night. And you can post these and share these with friends. (Laughter.) Larry Smarr, one of the pioneers of supercomputing, posted an entire year's worth of his sleep data, showing how he had slept in different places and looking at all the different things that affected his sleep. And it was a very effective way for him to improve his sleep. A lot of us swear by these devices. The only problem is the company is now bankrupt, but – (laughter) – there weren't enough of us.
But it's a great example of the kind of better health through better data. And I'm going to focus on that primarily, because you both have talked about better technology for curing people. I want to talk more about better data for keeping people from getting sick in the first place. Another example in this area is 23andMe. How many people have tried that out? Wonderful service that does over 70 different genetic tests just from a little bit of spit that you send to them. They were shut down by the FDA a week and a half ago.
But it's a great service because you get these tests – most of which are for incredibly obscure diseases – one in a million, one in 10 million. But knowing that you had that possibility in your genetic code means that if you do start showing strange symptoms and show up at the doctor, they know to look into the possibility that you're showing symptoms of that genetic disease.
It's also reassuring if you, like me, find that on the big four – diabetes, cancer, Parkinson's and Alzheimer's – you don't have a higher than average propensity to those diseases. Unfortunately, the FDA has decided they weren't doing enough to inform people about what this information meant and the possibility of error. And so for now they've been shut down.
Another example of this is in the area of social media where people are forming communities to get healthier. Pew Internet and American Life Project, which I serve on the advisory committee for, has done a lot of surveys looking at how people use the net to get information about their health and to monitor their health. More than 30 percent of Americans have gone out and shared information about their health with others online.
I have something called the intercontinental Facebook weight loss challenge. A couple dozen friends and I all decided we needed to lose some weight – we usually do this over the holidays – and we weigh in every week, and just the peer pressure has helped us lose a total of about 210 pounds across 20 people and, more importantly, keep it off for two years. This is another example of how online communities can help people become healthier.
Another example: they're finding more and more that what your social networks do influences what you do, whether it's smoking, overeating or overdrinking. And by understanding how these communities work together – by understanding the sociology of health using big data – we have a chance to really build a better, safer, healthier environment. You mentioned the incredible progress we've made on genetic testing.
There's also Zuckerberg's law – anybody familiar with Zuckerberg's law? Zuckerberg's law, and this is validated through the data, is that every year Americans will overshare twice as much information as they did last year. That's a gold mine if you're trying to track healthy behaviors, trying to understand what causes people to not be healthy. We have a huge opportunity in this area of big data to improve the health of Americans. And this is genetic data. It's exercise data. It's diet data.
And my other prop I just wanted to throw up here, this is the big data of – the big book of big data. It came out last year just in time for Christmas. I don't know how many of you already have one. If you're a real geek you got this under the Christmas tree last year. But it's by Rick Smolan, who's the guy who did a day in the life of America, a day in the life of the Internet, a day in the life of the Soviet Union. And he – this is full of beautiful images of all the different ways in which people are using data, many for health and medicine.
I urge you to get this. You don't have to buy the book, although I understand it's discounted because the app is even better. So you can – the human face of big data, it's a great way also of – if you have a CEO who doesn't understand big data, you buy a copy and put it on the coffee table in his waiting area – or her waiting area. And out of curiosity they will open it and every page has a wonderful examination of how data is changing the world and about a third of them are related to health. So that's my take on this. And happy to go deeper –
MS. MARON: Thanks so much. Those were great opening comments, thank you. I noticed you each touched on aspects of the democratization of health and the democratization of technology. And what I'm really interested in hearing, to pick up a thread from the other panels earlier, is how you think this is really going to transform inequality and the widening gap between the rich and the poor.
Certainly, we have more opportunities with do-it-yourself things like 23andMe – or had, in that case – and have more opportunities with 3-D printing, if you can get a 3-D printer or access to one. The D.C. public library, for example, has a 3-D printer, and you can sign up on a list to go in and use it. Or, if you have a smartphone, you can gather more quantitative information on yourself – but not everyone has a smartphone. Would one of you like to start responding to that question?
MR. NELSON: One thing we've started to see is that governments are paying for Medicaid and Medicare recipients to have a smartphone, because the smartphone is the most powerful medical device you can have for many patients – it can track whether they're taking their medicine. And for people who have mental health problems in particular, having that daily reminder to take the pill can really make a difference.
Having people check in on a regular basis so you know if things are OK. That – (audio break) – in the long term that it's well worth the few hundred dollars a year to provide these phones. Microsoft's been doing some experiments with various state health care providers to do just this. And the apps are very simple. Again, they're sometimes just collecting data about whether people move around in the course of a day. If somebody stays at home all day for three days, maybe somebody should check on that.
MR. DANZIG: You know – sorry, after you, Ramez.
MR. NAAM: Well, I would say that in general what we see throughout history is that when technologies are centralized under tight control, that leads to more inequality, and that the democratization of technology usually lifts more boats than not. That's sort of the general rule. We used to have a great deal of worry about the digital divide. That was a very real concern. Would poor people, even in the U.S., get access to digital technology?
And as a previous panelist mentioned, now we're approaching 7 billion cellphones in the world, and we see clear data that farmers in Africa use cellphones to make more money, because they can text-message someone, find out what the price for their crops is at market and see if they should make the day-long trip, and so on. That's a very good thing.
Then, to dovetail off of what Mike was just saying, there is huge potential for digital technology to save money on health care. We spend $2 trillion a year on health care in the U.S. – that's about $7,000 per person – and a smartphone costs a lot less than that. If you get down to the manufacturing cost, you've got a hundred bucks, maybe, for a smartphone. That can easily pay for itself. But a lot of the ways it could pay for itself might be things that Americans are not necessarily ready for.
So let me give you a couple of examples. The number one drivers of health care costs that Americans could control easily are behavioral. So what you eat, do you smoke, do you exercise? So how many people would be excited about an app or a thing that you wore – it might be an implanted sensor in you – that gave you little feedback and maybe gave you real-time discount on your insurance rates or a real-time kickback every time you ate healthy; penalized you when you had the foie gras and so on.
So under the ACA, the only thing besides age and a couple of rare conditions that insurers can use to change your health care costs is whether or not you smoke. And that's based on voluntary reporting – you filling out a form saying, yes, I smoke, or no, I don't. Sensors could actually tell your insurer whether or not you smoked. Sensors could tell your insurer how fatty the meat you're eating is – or whatever your meal is – how often you went to the gym, and so on. Right now the law does not allow them to base their pricing decisions on that.
Maybe it should. Those sorts of behavioral signals would be potentially a huge savings medically on – financially, but also a huge benefit to quality of life for Americans. But I'm not sure how many of us would be excited.
MR. NELSON: The numbers are huge. It's, like, 700 billion bucks we could save per year. And that's the most conservative estimate.
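[Editor's note: a quick back-of-the-envelope check on the figures in this exchange – $2 trillion in total spending, roughly $7,000 per person, a ~$100 smartphone, and Nelson's ~$700 billion savings estimate – all taken from the speakers above.]

```python
# Back-of-the-envelope check on the spending figures the panelists cite.
total_spend = 2_000_000_000_000    # ~$2 trillion/year on U.S. health care
per_person = 7_000                 # ~$7,000 per person, per Naam

implied_population = total_spend / per_person
print(f"implied population: ~{implied_population / 1e6:.0f} million")

smartphone = 100                   # rough manufacturing cost Naam mentions
print(f"smartphone vs. annual spend per person: {smartphone / per_person:.1%}")

savings = 700_000_000_000          # Nelson's ~$700 billion/year estimate
print(f"potential savings: {savings / total_spend:.0%} of total spending")
```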
MR. DANZIG: I would just add, though, that the technologies are so wonderful, their proliferation so rapid and the democratization so striking that I think we risk seeing their power as greater than it is and seeing them as a panacea. For sure, historically, with the industrial revolution, for example, people could wear better clothing that approached the elite's – not really a good example of this, but still. In that sense, the inequality gap narrowed. They eventually became able to drive automobiles, and mobility that hadn't been so available in the horse-and-carriage era became available.
But would any of us say that we don't live in a fundamentally unequal society? I don't think so. I think the inequality is still very large. So I think we ought to recognize as we celebrate these achievements that inequality is not something that is likely to be ultimately addressed by technology, however powerful the technology may be. It can chip away at it in the margins, but a large number of cultural and other kinds of variables are causing that and are likely to persist.
MR. NELSON: But the healthy and the wealthy have a real interest in making sure that the poorest and the sickest are taken care of. I mean, you look at the numbers and it's astonishing how much money is spent on the sickest 1 percent in the country – it's the other 1 percent. And this is where technology could really make a huge difference. These are often homeless people who are running up $200,000, $300,000 a year in hospital expenses. They're in the hospital, maybe two months of the year, in the emergency room.
And having the technology to, you know, again, know where these people are, know that they're taken care of at night so they're not ending up in the hospital for a week with frostbite. There's a lot of places where you and I as taxpayers could have a lower tax bill if we deployed technology to take care of these people.
MR. DANZIG: And I think that's –
MR. NELSON: And the good news is that the data's coming out now about these people. In law enforcement, you have a similar problem. Have you heard of the million-dollar blocks? In New York City you have single blocks where we're spending a million dollars a year to incarcerate people who came from that neighborhood. And this is a health issue. These people are not being taken care of. They're often suicidal, or homicidal, because of mental health problems that are not being treated.
MR. DANZIG: Mike, I think this is exactly right. But – and I think – my own argument, I suspect you might agree, is that the thing that has best reduced inequality is the public health system.
MR. NELSON: Yes.
MR. DANZIG: But the reduction in inequality in these domains should not lead us, I think, to believe that if a panel were meeting 50 years from now to discuss this subject, they would be saying, well, inequality in our societies has been very substantially eradicated, any more than we now say, as compared with 1900, well, inequality has been dramatically eradicated. And that's all I'm –
MR. NAAM: Yeah. Well, I think we have a whole set of other challenges. As Andrew McAfee was talking about this morning, automation and a lot of people – you know, the robot – the Google self-driving car is amazing. There's also about 4 million people who drive cars for a living in the U.S. The Google self-driving car will save lives, will save money, about 50 billion hours a year that Americans spend driving, 8 percent of GDP by some estimates – awesome. Four million people drive cars. So there's a lot of issues outside of health care on that.
I do want to add one point though, which is not strictly about inequality but a little bit related, which is about price and price of these technologies. I was just talking about the health care savings potential of changing behaviors, but when we talk about these digital technologies – especially the kind that Mike was talking about – that are sold to consumers right now, we see very aggressive price trends on them, they're sort of dropping in price. In fact, frankly, most of them you simply could not buy for any price five years ago. So it's actually really quite amazing, right?
But when you look overall at the market, the only places you see prices dropping in health care are the things bought out of pocket, where there is price competition between vendors and the consumer has to make a choice: Which of these is cheaper? And that exerts huge pressure on vendors to lower their costs. In the rest of the market, we don't see that. Health care prices have slowed in their rise a little the last few years, but they're still rising faster than GDP. Even in Europe, where we admire that costs are so much lower, costs are still rising at twice the rate of GDP.
So independent of the things we can do by gathering all this information, there's still the matter of getting the right incentives into the system. Overall, there's a correlation between more technology use in health care and higher prices. Technology can bring prices down, but only if it's mated with the right incentives – incentives that encourage use of the highest-ROI services and encourage technology providers and service providers to innovate in bringing prices down. And that's something that no developed country, the U.S. or Europe, has yet figured out. So that's another wave of health care reform hanging out there, 10, 20 years in the future, I think.
MR. NELSON: I'm really glad you raised that, because big data can help us fix the institutional problems. There's a project from ProPublica called Dollars for Docs. They tracked all the payments that pharmaceutical companies were making to doctors and created an app that lets you go into your doctor's office and find out how much money they are getting from different companies – and see whether they're giving you a $25-a-pill drug, perhaps because they're getting a reward or a speaking fee from a certain pharmaceutical company, even though the $2 or 50-cent pill would work even better.
MS. MARON: Touching on a point that Ramez made in his opening statement, I'm really interested in what's going to be a frontier problem going forward with better genetic testing: incidental findings and how to handle them. You're looking for one thing – and from a cost perspective, an insurance perspective, it was cheaper to do a bundled test, let's say for something genetic – and you stumble upon something else that you weren't looking for.
And maybe it's something that there isn't actually a treatment for yet. Maybe it's something that would never really develop into a problem – let's say, a marker for Alzheimer's. Right now, professional organizations are the ones that are starting to talk about making some sorts of guidelines for what to do in those sorts of situations. But we don't have anything steady and we don't have any state or federal mandates or laws in those perspectives. Ramez, can you speak to what you think we can do?
MR. NAAM: So there is one law that mandates how insurers and employers can use this data. It's called GINA, the Genetic Information Nondiscrimination Act. You've all probably seen "Gattaca," where your genetic information controls everything about you. So we've basically passed the anti-Gattaca act. It says that whatever your genes say, your health insurer cannot use it against you. They cannot use it in any decision making in any way. And your employer cannot use it in decision making in any way. Life insurers can. There are a few other exceptions. But for the big ones, you're immune from that.
The questions right now are mostly ethical. So a doctor, in testing you to find out what kind of tumor you have for a curable cancer, discovers that you're likely to get Alzheimer's 20 years down the road. Should the doctor tell you or not? That's actually part of the subtext of why the FDA is regulating 23andMe, and why some states, like New York and California, tried to ban 23andMe from ever providing genetic data directly to consumers and tried to force all genetic information companies to route through doctors instead: the fear that if consumers got access to this information about themselves, they would not be able to handle it in some way.
I fundamentally reject that. You can see that just in how I phrased it, but the you-can't-handle-the-truth thing is, I think, condescending to consumers. I do think we have to develop best practices for how to communicate this information. When you communicate it, you probably do want information right there as to how to contact a genetic counselor to talk about the issue. You want some good framing of what that means for risk and how far away this is or not. But I think that putting more information directly in the hands of consumers is more empowering.
MR. DANZIG: Ramez, is this different, or is it just an intensification of something we've been familiar with before – as, for example, PSA information?
MR. NAAM: Yeah, it's true. In many states you can't go in and get a blood test for your cholesterol or for your blood iron without a doctor's prescription – that is the state of affairs in most of the country. And that, frankly, also seems sort of backwards to me for things that are not that hard to interpret. We'll see if that is ever reversed. But this is a new frontier. The argument that genetic information providers are making is: This isn't even necessarily health information. Some of this is ancestry information. Some of it is just interesting information about you. And I hope that they can win that fight, but there is so much precedent that you can't, you know, get a normal blood test –
MR. DANZIG: But isn't this another example of democratization in a different context?
MR. NAAM: Yes.
MR. DANZIG: The professional groups want to retain their monopolies and the laymen struggle for access. And now we have more information, better ability to communicate. So it intensifies that struggle.
MR. NAAM: I'll give you another example of that. We've known for a while now – this is another tangent, but it's a topic on ICT and health care – that in some cases expert systems, AI written for health care, can do diagnostics with more accuracy than doctors. You all know Watson, the IBM AI that won on "Jeopardy"? It didn't just win on "Jeopardy;" it beat the two highest-grossing human players on "Jeopardy" of all time – the top two human champions of all time – and it crushed them. It made some errors, but overall it wiped the floor with them. So IBM sees health care as its big business for this.
MR. DANZIG: You got to wonder how much –
MR. NELSON: Based on big data.
MR. NAAM: Based on big data.
MR. DANZIG: You got to wonder how much Watson would upgrade this panel. (Laughter.)
MR. NELSON: He had some pretty funny –
MR. NAAM: Well, Richard, I think that – (laughter). But the real barriers to Watson in health care are going to be institutional. Liability will be used as an excuse, but it's really going to be about whether doctors are willing to accept the use of a tool that, to their minds, sort of undermines their specialness in this area. That's what will get in the way of a lot of uses of ICT in a lot of industries, actually.
MS. MARON: Dovetailing off of that, Richard, I'm really interested – you talked a little bit about cybersecurity before. And there's still this sort of ephemeral idea that a lot of medical devices are at risk going forward with cybersecurity. And as we integrate more and more for telehealth purposes – let's say you get a second opinion from a doctor far away, so they upload your exam results, your CT, so people can see it – there is always a risk that a virus, indeed, could come in. When you put in a patch to update a system, that could be a risk. But you talked before about how, in other areas of cybersecurity, risk that we've been concerned about has in some cases never really presented itself in the fashion that we were concerned about. What do you see going forward here?
MR. DANZIG: Well, the historic concern in the cyber world is that as we got the developments of the '60s and the '70s and the '80s, people saw the opportunities in the technology and the economic incentives to produce the results, and the effect was that not enough attention was paid to the security risks that came with them. And in some ways this was abetted by the legal system, the incentive structures, et cetera, all of which could be talked about.
The effect is that in the cyber world, we've created architectures that need to catch up, but they're very difficult to transform because of the existing legacy systems. And the concern that many people have – I think it's well-founded – is that we're now expanding cyber capabilities in many domains. The health system is one of those. But as we do that expansion, if we pay no more attention to the security implications than we historically did with respect to those systems of the '70s and the '80s, people are going to be vulnerable.
I am not phenomenally concerned about those vulnerabilities in a day-to-day way. It worries me, but 25,000 people die from automobiles in the United States every year. I can accept a measure of insecurity. What worries me is if the cases become so dramatic in these areas that they undermine people's willingness to use these systems or they create such an incentive for regulation that we dampen the innovation dramatically.
Better to come to grips with these issues early and leave the scope for innovation relatively untrammeled than to do so hastily and reactively later. And I don't think we yet are really grasping that lesson.
MS. MARON: I see you nodding, Mike. Did you want to chime in?
MR. NELSON: Just wanted to say that the most unfortunate thing about these regulatory barriers is that we're slowing down the collection of the big data. We're slowing down the development of very cheap and easy tests that could actually help us deal with an epidemic caused by either a naturally occurring virus or a bioengineered virus. The curves are still moving. We're going to be able to track antibodies in people's systems. Just a single drop of blood now can be tested for dozens of different antibodies and viruses and contaminants.
One of the other things we haven't talked about is drug testing. I don't know how many people followed the travails of Rob Ford up in Toronto, but one of the funny twists there was that one day he challenged the entire city council to get a drug test on their hair, because, you know, you can now do a drug test and find out what people have inhaled or taken over the last couple of months. At one point he was thinking about shaving his head so this could not be done to him. But we can learn a lot now from these tests applied to everybody.
MR. NAAM (?): I think I should shave my head.
MR. NELSON: You better, yeah. (Laughs.)
MR. DANZIG: Well, this is why historically we had politicians called Whigs. Never mind. (Laughter.) I'm just trying to contribute to the discussion.
MR. NELSON: Tweetable moment. (Laughs.)
MR. DANZIG: But just to stay with the point about the regulations, this comes back to something you said in the opening, Ramez. This is – well, a striking difference between the cyber world and the health care world is that the health care world is, as Ramez said, heavily regulated. It is a much more controlled world. It also has more developed norms, professional norms, as, for example, amongst the doctors. It has an entrenched elite of the doctors and the professionals.
The cyber world didn't have, really, these attributes. There is a science of biology in a way that computer science hasn't quite achieved, and this will cause the trajectory of regulation and of control and the balance between innovation and regulation to look different in the health care world than it does in the other arenas.
The same underlying forces will get refracted through that prism. It's just that it will be a different prism.
MR. NELSON: We talked earlier in the conference about the democratization of innovation. We really need to have a democratization of diagnosis and treatment. You know, this thing is not just for diagnosing your own problems. Has anyone heard of spousal arousal syndrome? It sounds like a good thing, right – to arouse your spouse? It actually isn't. It's what happens when somebody who snores a lot or has sleep apnea disrupts the sleep of their spouse in bed. This can also diagnose your partner, who may not be getting enough sleep even though they stay in bed for eight or nine hours. Spousal arousal syndrome causes depression, it causes divorce – incredibly expensive consequences that could be detected with a hundred-dollar device.
MR. NAAM: And really, when two-thirds of all deaths in the U.S. come down to heart disease, cancer, diabetes, those are all diseases that are heavily amenable to lifestyle. So, like, any of these things that can help you get data on your lifestyle, whether or not the incentives for the insurance companies and so on get put in place, give you more control of your health than a doctor really can have, to a certain extent.
MR. NELSON: And a lot of the other problems are mental health problems – people getting depressed, isolated. And again, these technologies can help people take advantage of the caring economy, which is starting to develop, where people take care of each other and know about each other. People can extend their social networks more easily now, which is particularly important for people in their 70s and 80s and 90s.
MS. MARON: And with that, let's open it up to the audience. If anyone would like to ask a question, please identify yourself, tell us your affiliation, and try to put a question mark at the end of your statement. (Laughter.) Anybody? Barry, yeah?
MR. NELSON: We weren't scary enough, I guess.
Q: No, I mean I'm really interested in this do-it-yourself movement. You know, 20, 30 years later, we understand what it was to write code for computers and to hack cyberspace. But if any of you could give us sort of some vignettes, for good and for bad, maybe, what kinds of things are we talking about? What could be really promising? You know, is there a biological Steve Jobs somewhere? And then what kinds of things might be really concerning, as well?
MS. MARON: Good question.
MR. NAAM: There are a few things happening out there in the DIY bio movement. One sort of thing that you can actually access right now – there was a Kickstarter for a glowing plants project. It might yet be regulated out of existence, but I signed up for it. You'll get some plants where they've taken an enzyme, luciferase – it's what allows fireflies to glow – and inserted it into plants, so you can have a plant that will glow a little bit at night. So mostly they're sort of toys.
But there's a movement called BioBricks, or iGEM, I-G-E-M, that has a competition every year where students try to build biological circuits that you can plug and play with each other. And mostly what they can do really comes down to sensors. You can make a biological circuit that has a receptor that responds to some other molecule – either something in the body or some environmental toxin, let's say – and then causes something else to happen, usually telling you that it's there. That could lead to some interesting actual applications, and there's also potential for applications in fuels and so on.
We're a long way from being able to use that to do bioterrorism, which is the real big risk. I saw Peter Schwartz nod when I said that I think bioterrorism will be a lot harder than it's made out to be. The reason for that is that in biology, there are a billion more ways to fail at something than there are to succeed, especially with diseases.
Let's say you want to make a communicable disease. And I'm talking about the bad stuff here, but I just want to explain how hard the bad stuff is. You're making a communicable disease. You have a very narrow range of parameters that it has to fit in to actually kill a lot of people. It has to transmit to somebody easily, then it has to stick around being undetected but causing enough symptoms that it gets them to transmit it, then it has to kill them. It can't kill them too fast, because if it kills them too fast, they won't be alive long enough to transmit it and it will just end up killing dozens of people instead of the millions that you want. It has to actually be lethal and not be curable by any existing antibiotics that are out there or antivirals that are out there. It can't be too easily detectable or they'll catch it too fast.
So with all of these things, the reality is that a lot of places have the code for smallpox today, and you can insert various genes into it to make it more virulent. But if you tried to spread one of these things today, most likely you would kill one or two people, if that, and then it would just fizzle out. And the problem for anybody trying to do this is that they have to test it in some way. Just like we can catch somebody making a nuke by the fact that they have to have large enrichment facilities, we would probably catch anyone trying to make an effective bioweapon by the fact that they would have to have a large testing facility and actually be killing lots of human beings to see whether their virus is going to be effective as a major bioweapon rather than some minor scare.
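[The tradeoff Mr. Naam describes – a pathogen that kills its hosts too quickly removes them before they can transmit, so the outbreak fizzles – can be illustrated with a toy SIR-style simulation. All parameters here are illustrative assumptions, not data from the discussion:]

```python
def outbreak_size(beta, infectious_days, population=1_000_000, seed_cases=10):
    """Toy SIR-style model: hosts transmit at rate `beta` per day while
    infectious; the infectious period ends when the pathogen kills the
    host. Numbers are purely illustrative."""
    S, I, dead = population - seed_cases, float(seed_cases), 0.0
    gamma = 1.0 / infectious_days            # removal (death) rate per day
    for _ in range(365):                     # step one day at a time
        new_infections = beta * I * S / population
        removed = gamma * I
        S -= new_infections
        I += new_infections - removed
        dead += removed
        if I < 1:                            # outbreak has died out
            break
    return int(dead)

# Same per-day transmission rate; only the time-to-kill differs.
fast_killer = outbreak_size(beta=0.3, infectious_days=2)   # R0 = 0.6: fizzles
slow_killer = outbreak_size(beta=0.3, infectious_days=10)  # R0 = 3.0: epidemic
```

[Because the basic reproduction number is roughly `beta * infectious_days`, the fast killer never sustains transmission and kills on the order of dozens, while the slower pathogen infects most of the population – which is the narrow parameter window the panelist is pointing at.]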
MR. DANZIG: You know, I largely agree with that, and I just want to say two things, one to reinforce it and then a second to qualify it. On the reinforcement side, as some of you know, it occurred to me that there was a group that had actually worked hard at developing a biological weapon and that was accessible to us, and that was the Aum Shinrikyo terrorist group in Japan. So I spent a few summers with a team of people – while the rest of you were choosing better vacation spots, I visited the Tokyo Metropolitan Detention Center – and talked a lot with the terrorists in Aum Shinrikyo and wrote a report on this, which is accessible on the Center for a New American Security website.
And what was striking was the difficulties they had in the mid-'90s developing a biological weapon, for some of the kinds of reasons that you've said and a number of others, which I can walk through in some real detail.
Having said that, my qualification is that the democratization that we're talking about and the proliferation of availability run counter to that. So whatever we think the difficulty of making biological weapons was for Aum Shinrikyo in the mid-'90s – let's call it X – it's now some fraction of X. And 10 years from now it will be some smaller fraction of X. And as more and more people may be tempted to attempt it, I think clearly that risk goes up.
I think that testing is a relevant variable, but people relatively indifferent to human life may test, by actual application, in ways that are pretty low visibility – so you see Aum Shinrikyo actually spraying anthrax, spraying botulinum or, in the end, spraying sarin, in contexts where it's ascribed to other things, even though they killed judges and so forth.
So these questions are, I think, among the most difficult to assess: What weight do you give to different variables that are changing over time? The variables of constraint are very strong, but they are eroding, and the incentives to do it are also very strong.
MS. MARON: Great. Did you have a question?
MR. NELSON: I have a couple of nightmare scenarios here. One of them has to do with, again, mental health and the fact that these crazy people are finding each other online and building their own little filter bubbles, like the fellow in Norway who killed dozens of people. He actually had his own little support network of people who believed the same thing he did. We have filter bubbles of people who don't think vaccinations should be used, and that's a public health problem as well.
On the other side, one of the things we haven't mentioned is the human microbiome and this real revolution in how we understand our health. They now realize that the three or four pounds of bacteria, viruses and fungi in our gut and on our skin influence all sorts of aspects of our health. And with biotech now, the ability to do these mass sequencings means we can actually get a genetic map of our gut. So rather than just the genetic code of our own DNA, we can find out what bacteria live inside our stomachs and see if we have a healthy mix or an unhealthy mix. And most Americans apparently have an unhealthy mix, which leads to obesity, in some cases diabetes, in some cases depression.
So this is a good-news story: Using the power of genetic sequencing, we'll be able to better understand the full extent of our system. The bad news is we've been using antibiotics for years that completely screw up the human microbiome, and we'll have to learn more about that, too.
MS. MARON: We'll circle back to a related question with that, but first –
Q: Stan Schmidt, Scientific American. I just wanted to hear your thoughts on – you mentioned the sort of institutional pushback against a DIY or hacker diagnostic movement, if you will. And I just wonder, especially for some of the more high-touch and less diagnostic areas – I'm thinking gerontology, in particular, where you don't really need high-end medical care but just basic informatics would help – where you see that sort of playing out. And is this the kind of thing that can happen on a massive scale for an aging West, for instance?
MS. MARON: And I'm going to couple that with this question from Twitter, as well. Katie Putz (sp) asks: There have been many negative mentions of health care regulation, the medical elite and FDA. Does the risk of bad self-diagnosis – with the do-it-yourself movement, I think – also concern you?
MR. NAAM: Well, I'll add one example, to get to your question, because as you're pointing out, a huge fraction of our economy actually gets spent on elder care. There's an interesting photograph I saw just the other day of a pair of slippers for the elderly to wear that, just by measuring gait and speed and variability, is diagnostic as to health, as to decline of mental function and so on. Innovations like that – it's cheap, it's digital, it's not bio in any way, but it's a great health analytics tool that can provide a great deal of help and maybe prolong the amount of time someone can stay in their own home rather than in nursing care, or maybe help in a nursing environment as well.
MR. NELSON: And the problem is that grandma doesn't want to wear those, so there's actually another solution, which is a sensor carpet, so when grandma comes down to make her tea in the morning and walks across the living room carpet, you actually can detect the gait and all the same things that the slippers would detect.
MR. NAAM: And I also don't want to sound negative on the FDA. The FDA serves an absolutely vital role. Health care is very, very complex. Therapeutics, pharmaceuticals and medical devices are incredibly complex, and we desperately need somebody playing that role of testing to see whether these things are safe and perform as promised. I think even with informatics – (inaudible) – there is a role in just double-checking whether the data they're providing is legit. I think that is worthwhile. As long as that doesn't get to the point of saying, no, you can't get that data directly to the consumer, I'm pretty happy with it.
MR. NELSON: One of the great things about the big data movement is we now have a double-check on some of these drug evaluations. How many people in here have been affected by an adverse drug reaction? I mean, I personally was sicker than I've ever been in my life because of a drug that had a side effect that apparently was well-known – (inaudible) – they tripled the dose, and I was out of commission for a week and a half.
Q: (Off mic.)
MR. NELSON: Yeah. So this is the kind of thing that we need to track better. And if we're monitoring that after people take drugs, we have the chance to detect some of these strange interactions or even just unrecognized side effects.
MR. DANZIG: I just want to pause to underscore your references, Mike, several references, to big data, because the tendency – as, for example, in my comments – is to talk about the information revolution in terms of the information technologies – Moore's law, et cetera – and then the biological revolution. Big data is a critical third piece of that puzzle and is more transformative in some ways than people think about now in everyday ways. And it ultimately is transformative for both the biology and the computer world. So it deserves to be featured on its own but is relatively neglected, except in Mike's comments.
MS. MARON: Great. Other questions? Yes.
Q: So in late 2011, it was published – I think Reuters first had it – that the H5N1 virus had been sequenced, but both Science and Nature were requested not to publish that data, for the obvious reason that it could be used to, you know, recreate or re-engineer the virus to make it even more virulent. So if we consider big data the ability to draw meaning out of these very large sequences, and then the ability to essentially nano-3-D-print viruses – which is a capability that, although crude, has been demonstrated – do you think that we're going to be seeing more action like that, where we need to tread carefully and, you know, literally limit the publication of some of these discoveries because of the downside potential?
MS. MARON: I'll let Richard start with that.
MR. DANZIG: (Chuckles.) Thanks a lot, Dina.
MS. MARON: You're welcome. (Chuckles.)
MR. DANZIG: Well, putting aside the particulars, because there have been several different kinds of cases where this issue's come up in recent years: I understand and appreciate the impetus toward control, and we have an easy example of efforts in that regard that yielded some benefits, in the classification of a substantial amount of nuclear physics in the period surrounding the atomic bomb. The reality, though, is that that information also has gotten out, and our capabilities to control information seem inadequate, so the distortive effects of those kinds of controls tend, I think, to outweigh the positive effects. I find it extremely tempting to control this – why would we want this information to be out there? But it is always the case that there are ambiguities about the positive value as compared with the downside negative value, and that when you try to control the information, it turns out that the information leaks in a hundred ways or is independently recreated in a global world outside the reach of your jurisdiction.
So on balance, I have a pretty heavy presumption against any controls of that type. And a number of the cases that you're referring to are in fact illustrative of that, where panels and others backed away from their initial impetus to control because it was judged, on reflection, that the game wasn't worth the candle. Tough question. That's where I am at the moment.
MR. NELSON: Me too. And you scare the good guys out of the field, and you need to have expertise to understand what pathogens might come out of these deadly viruses. If you're telling them they can't publish, they're going to go find something else to work on. We also need to encourage a lot more work on the defensive side, so that we have ways to build responses – vaccines, antibodies, things that could be used to treat these kinds of bioweapons.
MR. NAAM: I agree with all of that. So if you were a bioterrorist and you really wanted a sample of H5N1, you could get one. You know, it might take a little bit of work and some patience – wait till the next flu season, go to Asia and say, if you're sick with the flu, we'll pay you 500 bucks to come in, and we'll get some samples. So holding that data back isn't going to save you a whole lot.
But to what Mike just said about defense – we need agility and resilience. When SARS happened, within 36 hours of getting a positive sample we had SARS sequenced. But we still don't have an effective SARS vaccine. So what we really need to be investing in is increasing our agility. How fast can we make that pipeline from a genetic sample of a new pathogen to an effective vaccine at small scale to mass production of it? That's the sort of thing that we ought to be working on. How large is our sensor network, our public health network? That's big data – wiring hospitals together around the world so that we know when there's a new outbreak of a disease. The CDC has done some of that in the U.S., but not enough. So: early detection, early isolation of samples, and then a rapid pipeline to actual vaccines or therapeutics. We have a little bit of that in place, but it's far more profitable for NIH and DOD, perhaps, or DHS to be investing in that than to be trying to hold back the details of a sequence of a newly found virus.
MS. MARON: All while states and the federal government have been complaining about cuts to the budgets for those kinds of preparedness strategies.
Yeah, a question in the front.
Q: Yeah, two questions. One is the issue of privacy. Health data is the most private data you have, and do you want to have it in the cloud? I think there is a reluctance today that was not there a year ago. I wonder how you see it. The other one is, how reliable is the research, actually? It moves so fast, and The Economist a few weeks ago addressed the problem of verification. I would like to hear your comments on the fact that a rather shrinking part of the scientific data can actually be verified, or at least verified in a reasonable time.
MS. MARON: And can you just identify yourself, please?
Q: Olaf Araknos (ph), Swedish Foreign Ministry.
MS. MARON: Thank you.
MR. NELSON: I'm really glad you raised both those issues. That article in The Economist is required reading for anybody who cares about science and anybody who tries to use scientific results.
I think, though, there is a huge difference between the different fields. Some of the most high-profile examinations of bogus research have actually been in experimental psychology. A lot of small studies have been done, and every week or so you come up with these new results that show that, you know, of 35 people surveyed, this was the result – and when they try to reproduce those, lo and behold, it doesn't work, because it's such a small sample set. I think in this area, in genetic testing and things, there's been less of a problem. There have been cases of outright fraud – people in pursuit of the Nobel Prize fabricating their data and the like – but when there's a big breakthrough, there are always a lot of people coming behind trying to replicate it, and because this is pretty replicable – it's not based on some random selection of 35 undergraduates – there tends to be a self-correction process.
I think your first question is even more important, though, and that is: How can we get the big data if people are not willing to share it? With 23andMe, they have over a million different analyses, and some of the people who did the tests shared theirs entirely – I call those people digital exhibitionists – (laughter) – they were willing to share, you know, what genetic diseases they had, what markers they had. Significantly, they were sharing that information for themselves and also, in some cases, for their family members. And that's one of the interesting things about this: You can violate other people's privacy by sharing your own genetic data.
But the main thing that people did is what I did, which is to say, I will make this available anonymously, and I will provide answers to a 40-question survey on, you know, where I come from, what my family history is. And with that data, they were able to make some very interesting correlations. But we have to provide that security, and that means better encryption. I think we can store the data in the cloud, but we have to provide better security and show people that their data is protected. And we also have to solve this problem of reidentification. A lot of data is anonymized, but if you get enough pieces of data, you can track back and figure out, OK, that really is Ramez. And for sensitive data like this, we have to avoid that.
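[The reidentification problem Mr. Nelson raises can be sketched in a few lines: an "anonymized" record with no name still carries quasi-identifiers – ZIP code, birth year, sex – that can be joined against a public roster to recover identities. Every name, field and record below is invented for illustration:]

```python
# "Anonymized" study records: no names, but quasi-identifiers remain.
study_records = [
    {"zip": "20005", "birth_year": 1972, "sex": "M", "marker": "APOE-e4"},
    {"zip": "98101", "birth_year": 1985, "sex": "F", "marker": "BRCA1"},
]

# A hypothetical public roster (voter rolls, social profiles, etc.).
public_roster = [
    {"name": "A. Example", "zip": "20005", "birth_year": 1972, "sex": "M"},
    {"name": "B. Sample",  "zip": "98101", "birth_year": 1985, "sex": "F"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(records, roster):
    """Link each 'anonymous' record to roster entries sharing its
    quasi-identifiers. A unique match deanonymizes the record."""
    hits = []
    for rec in records:
        matches = [p["name"] for p in roster
                   if all(p[k] == rec[k] for k in QUASI_IDS)]
        if len(matches) == 1:            # unique match: reidentified
            hits.append((matches[0], rec["marker"]))
    return hits
```

[The defense is to coarsen or suppress quasi-identifiers until no record matches uniquely – the idea behind k-anonymity – which is the kind of "tracking back" protection being called for here.]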
MR. DANZIG: I'd just – sorry, Ramez, if –
MR. NAAM: You go ahead, and I'll just add one brief thing.
MR. DANZIG: I just wanted to hitchhike on the second question. Scientific knowledge is really like other knowledge. It's frequently better derived, but it ought always to be questioned – indeed, this is a basic tenet of science – yet it is accepted too readily by the public without that kind of questioning. And for me, there are two clear markers that bring that point home.
One was a wonderful experience I had when I was the Navy secretary, and someone the Navy sponsored for research won the Nobel Prize for chemistry. I asked around and said, how many people has the Navy supported who went on to win Nobel Prizes? And the answer turned out to be some number like 35. So I had the bright idea, well, let's bring them all in for lunch to celebrate – of course, all I really wanted to do was meet them myself. (Laughter.) We got some 18 in. And after lunch, I asked the sort of obvious question when you have a group like that: Suppose we get together in 2100 – what's different? And one physicist said, the laws of physics. And I said, no, no, no, you don't understand; I'm not asking what's the same; what's different? And he said, the laws of physics. And I said, well, why do you think that? And he said, well, look, in 1900 we thought the laws of physics were one thing, and in 2000 we think they're quite another; why would we think it would be different in 2100?
Now, there's obviously a sort of linguistic gambit in there, because he's talking about our perceptions of the laws of physics. But the basic point is, I think, extremely powerful and right. The second marker is this: As some of you will know more accurately than I, the medical journals carried, some years ago, some very nice studies in which an astute researcher went back and looked at the leading peer-reviewed articles in the eminent publications – the most-cited ones – and asked, how did they look 10 years later in terms of their accuracy? These were the best articles in the best journals. And what he found was that for those where control groups were run and there were fastidiously matched groups, the results stood up very well; but for those where there wasn't that control, which was a large number of the articles – something like half; you may remember the numbers better than I do – the results turned out to be inaccurate, to point in the wrong direction.
So we need to view all knowledge with a degree of skepticism, and science knowledge is no exception.
MR. NELSON: One piece of good news is that they're starting to publish the raw data with studies now, so people can go and look in more detail at what really was done in the analysis. And even studies that didn't prove anything – the ones that were not able to make any correlations – are now being put up online, so you can go and mine that data and maybe use it in a totally new way. "Data exhaust" is the buzzword.
MR. NAAM: So I think these guys both nailed it, but I'll just add a couple of really quick concepts. One is something called single study syndrome, or SSS. (Laughter.) If you ever see something based off of one study, just assume that it's not true yet; wait till a second or a third study done by different labs verifies it; then you can say, oh, maybe this is true. One study doesn't mean anything in this day and age.
MR. DANZIG: It does mean headlines for the newspapers.
MR. NAAM: It means a great headline and maybe another grant for the authors.
MR. NELSON: Could I just footnote that you have just usefully explained why all the generalizations that occur in security studies from history and the like are not valid? (Laughter.)
MR. NAAM: It's true in cancer studies too, it turns out.
Two, bioscience does have a replication problem. But before it hit The Economist, that discussion had been going around in the bioscience journals for a couple of years, and so it's gotten better; there are a lot of initiatives underway to make it better. And just in the last month there's been another set of replications done of classic studies, and mostly they hold up – not all, but mostly.
And three, the best person to read on this, for a lay audience, is Gary Marcus. He's a neuroscientist. He writes a blog for The New Yorker. And he is amazingly good at dispelling hype – and then, when someone is too critical of something new, bringing it back up and saying, actually, there is something cool and new here. He has specifically addressed this issue of the reproducibility of science and has intelligent, balanced things to say about it.
MR. NELSON (?): A hype filter. Yes, he's really good.
MS. MARON: To Richard's point about how we always need to be skeptical, and circling back to your 23andMe statement – I remember there was a really famous case in, like, 2009, 2010, where 23andMe results said that a brother and sister were not in fact brother and sister – that one was, I think, the father and the other the daughter – and of course that was very confusing to see and looked inaccurate. And it turns out that, from a genetic perspective, they were in fact half-siblings, and that's why their genes were the way they were. But it goes to show you – it's another example of how, when you get do-it-yourself data, you have to think about how you're going to interpret it, and the risk, again, of incidental findings.
MR. DANZIG: One of the most fascinating things about 23andMe is that when you get to the big four – do you have the gene for Alzheimer's – the lawyers have inserted three warnings. The first warning is, do you really want to see this? The second one: If you see this, you might get suicidal. Do you really want to see this? And then the third one was: If you see this and you tell your relatives that you have this gene, they might get suicidal. Do you still want to see this? So they tried to do a little bit of genetic counseling online.
MS. MARON: We're going to group a couple questions. You first.
Q: First, thanks for an excellent panel. Carlos from Strategic Foresight, Atlantic Council. Having Ramez here – he's a prominent sci-fi author – I wanted to get your take on a utopian and a dystopian future. You've written about human augmentation. We talked, during this forum, about inequality, the haves and have-nots. So I was wondering if you could do a quick take on what could happen, biotech-related, to create this divide – or whether you actually see biotech being transferred to developing countries and getting them up to speed? Because depending on where we go, we could have a utopia of fantastically enhanced humans and a beautiful future, or a really dystopian one with, like – you know it. (Chuckles.) Thank you.
MS. MARON: Great. Other questions? We – this is our last round. Towards the back.
Q: Thank you. Ina Monte (ph), Booz Allen Hamilton. We discussed printing viruses. On the other end of it, I was wondering if there's experimentation with printing drugs, because that would revolutionize drug distribution – including the drugs we use for treatment, but also more nefarious drugs. If you could address that, that would be great. Thanks.
MS. MARON: Other questions? OK, so the questions are biotech and transcountry transfer – will it lead to utopia or dystopia? And the other question, printing drugs, risks and promises.
MR. NAAM: So on the human augmentation side, what Carlos is talking about is that I've written both nonfiction and science fiction about using biotech to enhance human abilities. And it's somewhat scientifically plausible. We know of genes where, in animals, we can do gene therapy and boost their learning rate – take a mouse that takes five tries to learn something and reduce it to one. We've made primates and other mammals stronger, faster, more athletic and so on, for life, with single injections of gene therapy. It's quite possible that athletes have done this already. It's actually not detected by the current types of testing used by the Olympic Committee.
So the question is, will that sort of thing lead to utopia or dystopia? Well, neither, really. The biggest concern with inequality is if a type of enhancement is very expensive, gives a large benefit, and stays very expensive – then it pulls the rich away from everybody else, all right? If only the rich can afford enhancement X, whether it's a smartphone or glasses or the best school for their kids or a brain implant that makes you 10 times as smart, then society diverges. On the other hand, if it gets cheaper over time, then it spreads out, and it has an equalizing effect.
And overall, what we've seen for most types of technology, mostly outside of health care, is, A, the technologies have gotten cheap fast – digital tech has gotten incredibly cheap over time – and, B, the price-to-quality curve is incredibly steep, so to get something twice as good, you may be spending 10 or 20 times as much. If you want an 80-inch screen, you're not spending, you know, 10 percent more than for a 70-inch screen; you're spending maybe twice as much. That's just how it is. So the very richest spend a lot more, usually on an early version that is less good than what somebody spends less to get five years down the road. There are early adopters who are testing the safety of new enhancements and paying through the nose for less enhancement. So that's generally what we see.
Health care is different because the economics are different, and so we don't see the same price declines in health care. That's what we'd like to see. We do see price declines in health care where it's paid for out of pocket, as I've said: Lasik went from $5,000 an eye to $300 an eye; cosmetic surgery, we see, drops in price over time, whereas everything paid for by insurance does not. So that gives us some hope that it's actually not a function of the technology; it's a function of the systems that we have. And I'm not, by the way, proposing that we pay for everything in health care out of pocket – that has its own problems. But it says that something about the technology should make it possible to make these things cheaper and cheaper and available to a wide slice of humanity over time, if and when they're actually possible and safe. They're possible today; they just might give you cancer on the way to making you smarter.
MR. NELSON: I would just add a couple of things in response to your question, Carlos. I also have to note that I very much appreciate Carlos' praise of the panel – but since he put the panel together – (laughter) – you might view this as an example of why skepticism is well-founded when you hear that.
On the general point, though, it's very striking how this democratization of technology that we're talking about occurs – for example, with cellphones, the most rapidly proliferating invention in the history of mankind, no question about it. It's an example of the kind of movement Ramez was talking about in terms of the increase in performance – very, very dramatic. You all are carrying around in your pockets more computing power than existed in 1985 in the highest-end stuff. The effect of this is indeed a democratization. But the cellphone is different from other things in that if you've got cellphone communicative power with others in the network, you by and large are as well-off in that domain as others who have that power. In the health care world, if I've got the ability to check myself for a particular disease early on but don't have the ability to cure it because I don't have the economics but other people do, or I have the ability to cure it but there are secondary costs associated with time in a hospital or whatever, or my quality of life is going to be very low anyway, or my life expectancy is going to be low because I'm going to be killed off by other things, I don't feel nearly as equal as I did when I benefited from the cellphone revolution.
So I think this is a part of that complexity about utopia and dystopia that we're touching on and why I think it's right to recognize that it's not likely to wind up at one of these extremes. The picture is likely to be more muddled.
MR. DANZIG: Just to play my role as the pathological optimist here – I earlier mentioned the bottom 1 percent, the people who are really badly off. If you look around the world, you have 1, 2, maybe 5 percent in some countries where people are actually stunted; the kids didn't get enough nutrition; they lacked a particular nutrient; in some cases, they were exposed to heavy metals at a critical point in their development. These technologies are now going to help us detect those problems before they do permanent damage. The child who has an IQ of 70 when they could have had an IQ of 110 is going to be a burden on society for decades and might grow up to be a terrorist or a child soldier or whatever and cause even more problems. So I'm hoping, in my utopian vision, that with the data, with the sensors, with the cellphones, we can start addressing the problems of that bottom 1, 2, 5 percent.
MR. NELSON: Could we just do 30 seconds on drug printing?
MS. MARON: Sure, and then we should really wrap it up.
MR. NELSON: OK. I'm happy to do these, but you're going to do them better.
MR. NAAM (?): Well, you start.
MR. NELSON: Yes, I think we can print drugs in a way that has the revolutionary kinds of effects you're describing. Since 3-D printing essentially lays down composites, there's no reason why ultimately it can't work for DNA and for the components of drugs, whatever they may be.
But I'd just note, there isn't anything so spectacularly unusual about this technology. Along with our skepticism, we always need to ask: in what way could we not accomplish these things with other modes as well? So, for example, using bacteria or viruses or yeast to produce drugs is a very effective modality too, one we're only beginning to experiment with – and one where we're going to see, I think, these kinds of revolutionary changes.
MR. NAAM: So I think the potential is there, and we see progress towards these sorts of technologies. Molecular printers are actually happening; they're probably still a couple of decades out from being totally viable. I think it has a couple of interesting implications.
One is that it's already the case that the cost of manufacturing drugs is trivial. A drug company spends a billion dollars on the R&D for a drug, and then it usually spends sub-1 cent per dose produced. That's why when you buy aspirin, you buy, you know, a hundred-count jar of 325-mg tablets, and it's a buck or something, right? Mostly you're paying for the jar and the transport, I think; the drug itself is basically free. But for drugs that are still on patent, that have no competitors, where they're charging you $20 a dose – or, if it's a chemotherapy drug, $500 a dose – this has the chance to disrupt their economics, which is good or bad. It's good in the short term for the consumer, but if it saps all of the profits the drug company can make, then why will they invest in doing more drug R&D? So if that becomes a reality and drug pirating becomes a possibility, then we have to find an alternate economic model for the pharma companies to incent them to do the R&D.
On the illegal drug side, yes, if anybody could print their own cocaine or heroin, that would have certain social ills. It would also cut organized crime completely out of the picture. So I think it would actually have way more social benefits than ills. (Laughter.) I'm all for it.
MS. MARON: Great. And we're going to end with Mike's comments if he has anything to add. Great. Well, thank you so much, and thanks to our panelists.
MR. : Thank you. (Applause.)