In 1965, Dr. Gordon E. Moore wrote an article based on a trend he had noticed: the number of transistors in a dense integrated circuit (IC) doubles approximately every two years. This observation, later dubbed Moore’s Law, has held true for nearly 50 years, fueled by unrelenting demand for more complex software, faster games, and higher-bandwidth video. It became the de facto roadmap against which the semiconductor industry drives its research and development. But that roadmap may now be faltering due to fundamental physics limitations at the incredibly small scales at which we fabricate chips. Can we find novel ways to circumvent these limits and thereby achieve a Moore’s Law 2.0? If we are successful, what implications might such computational capacity have for society?
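To put the doubling rule in concrete terms, here is a quick back-of-the-envelope calculation in Python; the 50-year span is the figure cited above, and everything else follows from the doubling rule itself:

```python
# Moore's Law as arithmetic: doubling every two years is a factor of
# 2 ** (years / 2). Over the ~50 years cited above, that compounds to
# roughly 2 ** 25 -- about a 33-million-fold increase in transistor count.
years = 50
doublings = years / 2
growth_factor = 2 ** doublings
print(f"{doublings:.0f} doublings -> {growth_factor:,.0f}x more transistors")
```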

Limits to Moore’s Law

People have been heralding the end of Moore’s Law for decades. For example, in the early 2000s, some technical pundits were convinced that we would never be able to lay down more transistors in a given IC than was possible at that time. One challenge was reducing the wavelength of light used in the photolithography systems that pattern the chips – the shorter the wavelength, the smaller the features, and thus the more transistors. However, new optical designs soon made it into production with exotic approaches such as quartz glass lenses immersed in liquid; these techniques shrank the smallest transistor feature size and thus increased the number of transistors on a single IC. Essentially, it seems that every time we hit a technical wall, someone somewhere concocts a way to blast through it with some new material or process. The semiconductor industry is highly motivated to keep on track with Moore’s Law as technologies such as wearables and smartphones demand ever more computational capacity.

But in recent years, it has become clear that we may have truly reached a limit with Moore’s Law, at least for silicon chip architectures as they exist today. Transistors are now approaching the scale of individual atoms and their quantum interactions. “Individual device features in the latest chips are as small as 14 nanometers across – the width of fewer than 100 atoms, and close to the limits set by physics.” So many electrons are flying around in our ICs that an entire field of the semiconductor industry is devoted just to chip cooling; without it, the ICs simply fry and fail. However, new research is pushing to extend Moore’s Law, albeit perhaps without our classic silicon. A suite of technologies is being pursued either to maximize IC capacity through new circuit designs or to break away entirely to new materials and IC architectures.
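As a rough sanity check on that quoted figure, consider the arithmetic below; the atomic diameter used is an approximate value for silicon assumed for illustration, not a number from the article:

```python
# Sanity check of the "fewer than 100 atoms" claim: a silicon atom's
# covalent diameter is on the order of 0.2 nm (an assumed, approximate
# value), so a 14 nm feature spans only a few dozen atoms.
feature_size_nm = 14.0    # smallest feature size quoted above
atom_diameter_nm = 0.22   # approximate silicon atomic diameter (assumption)
print(f"~{feature_size_nm / atom_diameter_nm:.0f} atoms across")  # ~64 atoms
```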

More Moore?

A compelling idea comes from IBM: neuromorphic computing, or chips that mimic the human brain. The chip, recently described in Science and widely reported by the mainstream press, uses a new, ‘brain-like’ approach to IC design. Today’s chips rely on a sequential, centralized architecture for their calculations. While reliable, such designs are nevertheless slow and consume vast amounts of energy for a given calculation compared to the biologically evolved brain. We routinely use almost all of our neurons in countless waves of electrical signals to process information, all while consuming only tens of watts (compared with hundreds or even thousands of watts for conventional computing hardware). A chip mimicking the brain’s architecture has the potential to be both more capable and far less power-hungry than even the most advanced present-day chip. IBM therefore designed and built one – “The new chip, called TrueNorth, marks a radical departure in chip design and promises to make computers better able to handle complex tasks such as image and voice recognition – jobs at which conventional chips struggle… Each TrueNorth chip contains 5.4 billion transistors wired into an array of 1 million neurons and 256 million synapses. Efforts are already underway to tile more than a dozen such chips together.” IBM has also coded entirely new software and is encouraging computer scientists to create new applications for its neuromorphic chip.
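To give a flavor of the event-driven, ‘spiking’ style of computation such chips embody, here is a minimal textbook sketch of a leaky integrate-and-fire neuron in Python. It illustrates the general principle only and is not IBM’s TrueNorth design; the threshold, leak, and input values are illustrative assumptions:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: it accumulates input current,
# "leaks" charge each time step, and emits a spike when its membrane
# potential crosses a threshold (a generic model, not TrueNorth's).
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current   # integrate input, with leak
        if potential >= threshold:               # fire once threshold is crossed
            spike_times.append(t)
            potential = 0.0                      # reset after the spike
    return spike_times

rng = np.random.default_rng(seed=0)
drive = rng.uniform(0.0, 0.3, size=100)          # noisy input signal
print(simulate_lif(drive))                       # time steps at which spikes fired
```

Unlike a sequential CPU, a neuromorphic chip runs enormous numbers of such units in parallel, and they draw significant power only when they actually spike.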

Nanomaterials are also being researched as a potential saving grace for Moore’s Law. Materials such as carbon nanotubes are being explored to replace silicon and thus enable a whole new class of ICs. A carbon nanotube is essentially a single atom-thick sheet of carbon (graphene), its hexagonal lattice resembling chicken wire, rolled up into a tube. A useful property of carbon nanotubes is that they transport electrons extremely rapidly. Recently, groups such as those at IBM have fabricated ICs from carbon nanotubes. “Chips made with nanotube transistors, which could be five times faster, should be ready around 2020,” says IBM. A major challenge is incorporating nanotube chip production into existing fab plants; the semiconductor industry is loath to write off the billions of dollars it has already invested in silicon.

In short, it remains to be seen which of these or other technologies will reach high-volume production and extend Moore’s Law. Whichever approach is chosen, it will have to address fundamental limitations in materials, devices, circuit design, systems, and software.

Beyond Moore

Robust functionality, optimization, and production-scale volumes for other chip technologies are perhaps a little further out (2030+?), but they have the potential to disrupt Moore’s Law itself. One emerging technology is optical computing. Instead of electrons, light (photons) could be used as the information-carrying medium between transistors. Computations could then occur at the speed of light, outpacing conventional electron-driven chips. Groups at MIT, the University of Arizona, and the University of Alberta in Canada are driving research in advanced optical switches, lasers, and nano-optic cables to further optical computing.

Quantum computing is another potentially disruptive technology for Moore’s Law. Traditional computers operate with bits that are either ‘on’ or ‘off’ depending on their electrical signaling. Quantum computing operates with qubits, which can be in a superposition of both states simultaneously, as in the sketch below. As with other advanced computational architectures, quantum computing is still in the early stages of development. Aside from hardware issues, it also requires a wholly new approach to software, since the binary on/off paradigm no longer applies. Dr. Robert Meagley, founder, chief executive officer, and chief technology officer of ONE Nanotechnologies, LLC, says quantum computing could have significant impact in several areas:

  • Energy – accelerating discovery in materials chemistry and the modeling of optimization problems.
  • Medicine – modeling complex biochemical systems to accelerate drug discovery.
  • Information – enabling big data analytics and new encryption approaches.
  • Materials – facilitating the modeling of complex materials systems.

The company D-Wave Systems claims it has built a quantum computer; its machines have been purchased by Google, Lockheed Martin, and others. Great potential exists with quantum computing, but substantial research remains necessary.
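To make the on-and-off-at-once idea concrete, here is a minimal classical simulation of a single qubit in an equal superposition; this is generic textbook quantum mechanics (the Born rule), not a model of D-Wave’s hardware:

```python
import numpy as np

# A qubit is a two-component vector of complex amplitudes. In an equal
# superposition, the amplitudes for |0> and |1> are both 1/sqrt(2), so a
# measurement returns 0 or 1 with 50% probability each (Born rule: |amp|^2).
state = np.array([1.0, 1.0]) / np.sqrt(2)   # amplitudes for |0> and |1>
probabilities = np.abs(state) ** 2          # -> [0.5, 0.5]

rng = np.random.default_rng(seed=42)
outcomes = rng.choice([0, 1], size=1000, p=probabilities)
print(np.bincount(outcomes))                # roughly [500, 500]
```

The power of a real quantum machine comes from scaling this up: n qubits carry 2**n amplitudes at once, which is why simulating them classically quickly becomes intractable.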

Implications

Chips that mimic human thought, achieve speed-of-light calculations, or operate at quantum levels will have profound societal implications. National security and defense rely on ICs. Autonomous vehicles (from drones to self-driving cars), traditional military transportation and weapons (from submarines to jets to missiles), and general communications on the battlefield (from 4G to 5G+ smartphones) are all wholly dependent on the next generation of ICs for faster, more efficient processing power. In the consumer sphere, advanced technologies such as always-on wearables, augmented reality, and artificial intelligence (imagine Apple’s Siri being able to accurately answer any question) will reach their full potential only with more powerful chips. Technology trackers anticipate a time soon when computing is as pervasive as breathing. For that to happen, we simply cannot let the upward curve of Moore’s Law plateau.

In short, yes, Moore’s Law is topping out with our current technologies. However, a range of other technical capabilities may soon push us onto the next computational platform. The potential implications of bringing supercomputer capabilities to the masses are profound – for example:

  • Will the Internet of Things become the Internet of Everything?
  • Could we finally achieve instantaneous, accurate machine language translation?
  • Could diseases be routinely cured through the modeling of new drugs or treatments?
  • What new weapons might be developed?
  • Could big data analytics become automated?

Answers remain to be seen, but some version of Moore’s Law will surely take us headlong into the future soon enough.


Acknowledgements

I gratefully acknowledge Dr. Banning Garrett, independent consultant; Dr. Robert Meagley, chief technology officer of ONE Nanotechnologies; and Ms. Emily Kale, ICTAS writer, for their reviews.


Thomas A. Campbell, Ph.D., is a research associate professor at the Institute for Critical Technology and Applied Science (ICTAS) at Virginia Tech, and a senior fellow (non-resident) at the Atlantic Council.