Every industry today is awash with talk about “innovation” and how organizations can do it better. There are countless studies on the factors underpinning innovation and the key elements needed to incubate it. The defense industry is no different.
While definitions of innovation often differ, we can broadly describe it as the process of transforming an idea, concept, or knowledge into a product or service that delivers significant new value. It is the combination of structured process with creative and diverse perspectives that creates the best chance of “innovative” thinking.
The Atlantic Council has long understood the value of non-traditional and creative perspectives for new thinking about the complexities of the future, embodied in its Art of the Future project. It is a view increasingly embraced by the defense and national security community.
It was in this spirit that in April 2018 the Atlantic Council’s network of creative futurists joined military officers, business leaders, and prominent technologists in Silicon Valley in support of Johns Hopkins University Applied Physics Lab’s (JHU APL) “Tactical Advancements of the Next Generation” (TANG) event on future submarine designs.
The event was not intended to produce a new submarine design, but rather to understand how society and technology may change over the next twenty years, giving engineers the context to anticipate how tomorrow’s military personnel will engage with technology and interact with each other.
There are plenty of existing examples of military equipment being built to be more intuitive to young soldiers—from touch screens in armored vehicles to commercial console controllers being used for unmanned vehicles, sensors, and even weapon systems. Anyone who has observed today’s toddlers interacting with touchscreens and smart-voice assistants like Alexa can readily understand how drastically this will change again when they are tomorrow’s soldiers—within the timeframe of this TANG’s focus.
The TANG process leverages a diverse group of perspectives to ideate on aspects of future society; in this case, the event focused on mobility, communication, education, and knowledge transfer. Through an iterative design process, a multitude of ideas was created, evolved, and down-selected to identify possible deflection points in the future: moments where new cultural forces and technologies could divert current trendlines.
Through this process, twenty-seven short narratives of the future were created. These were then down-selected to the six most compelling, which were eventually combined into three stories of the future that encapsulated the breadth of thinking from the previous two days.
Across the many narratives, several key themes emerged: the perils and promises of genetics; the issues, implications, and pitfalls of artificial intelligence (AI) making ethical decisions; the risks of stateless technology; balancing technology and responsibility; the partnership between humans and technology, particularly regarding AI; the hazards and opportunity of virtual reality and virtual presence; the future of work; and the implications of a gamified world.
Three compelling stories came from this. The first explored how genetic modifications could provide an edge to those who embrace them, particularly in education and in developing smarter workers. This narrative is poignant considering the ethical and values-based concerns many in the West have about such human enhancement or modification. These values will likely change over time, but in the meantime such concerns will not constrain potential adversaries. In this narrative, the advantage of gene manipulation was offset by radical new machine-assisted learning techniques.
The second story explored how virtual presence using unmanned systems and virtual reality could enable faster response to natural disasters and project advanced medical capabilities to remote locations. Given the increasing occurrence and severity of natural disasters, it is a vignette that resonates powerfully.
The final story explored the moral quandaries of AI: what happens if artificial intelligence does exactly what we programmed it to, but still makes decisions we are morally uncomfortable with? In this case, an AI system that was designed to help people improve their lives befriended a young girl and made decisions about her family and living arrangements that created more moral questions than solutions. It also explored how human relationships with technology, and AI systems in particular, could shape humans’ future decisions about their lives.
The outcomes of this design workshop were used to help military designers think about the ways technology will change our society: how that change will shape the employment of future capabilities, and how humans will expect to interact both with those capabilities and with each other.
There is an old adage about planning: the value lies not in the outcome but in undergoing the process. The same is true of thinking about the future. No single event is likely to perceive a future state perfectly, but the process itself generates ideas and concepts that can be transformed into new value. It is a process worth repeating often.
John Watts is a nonresident senior fellow in the Atlantic Council’s Scowcroft Center for Strategy and Security. Follow him on Twitter @John_T_Watts.