GeoTech Cues June 21, 2024

The sustainability questions policymakers should be asking about AI

By Tiffany J. Vora, Kathryn Thomas, Anna Ferré-Mateu, Catherine Lopes, and Marissa Giustina

Advances in artificial intelligence (AI) promise efficiency and progress across a wide range of domains, from cutting-edge research to business and entire industries. However, a major gap has opened: the need for transparency around the sustainability of AI initiatives throughout their entire lifecycle.

“Sustainability” is not just an environmental concern. In a broader sense, such as that employed by the United Nations Sustainable Development Goals (SDGs), sustainability requires improving human health, prosperity, and economic growth. And in discussing sustainability in AI, following a framing described by the Sustainable AI Lab at the University of Bonn, it is important to discuss not only AI applications for sustainability, but also the sustainability of the AI industry itself.

The Organisation for Economic Co-operation and Development pointed out in November 2022 that it is important to consider both the direct sustainability impacts of computing and the indirect impacts of AI applications. However, the sustainability of computing is rarely mentioned in current conversations about the governance of AI development and deployment or in new legislation or guidance such as the European Union (EU) AI Act, United Nations resolution A/78/L.49, Canada’s C27 bill, the Australian government’s interim response report, the White House executive order on AI and follow-on actions, or requirements in various US states. Instead, these and many other conversations around the world focus primarily on the also-critical topics of trustworthy AI, data privacy, alignment, and ethics.

If policymakers close this gap and focus today on the sustainability of the AI industry, they will have the opportunity to steer entire industries toward contributing to a positive future for both people and the planet.

To develop and leverage AI at the scale imagined by researchers, businesses, and governments, significant physical resources will be required for the design and deployment of the requisite computing hardware and software. While all AI approaches merit attention regarding their sustainability, generative AI is particularly resource-intensive: One such AI-powered chatbot is reportedly consuming the energy equivalent of 33,000 homes. (Note that while it is complicated to estimate such equivalences—given variations in operational timescales and details, home location, user numbers, etc.—various calculations have yielded estimated energy use equivalent to that of tens to hundreds of thousands of US homes.)

In addition, new data centers are being designed and built at a rapid pace in response to high demand, new AI-critical hardware components are being designed and fabricated, and organizations large and small feel urgency in setting their short-term tactics and long-term strategies for AI. Demands on data centers will only continue to grow as AI-powered applications spread through industries and around the world. For example, a recent International Energy Agency report projected that data center energy consumption could grow by 2026 by an amount equivalent to the total energy consumption of Japan.

Sustainability-focused regulation of AI, if deployed in a timely manner, can incentivize further improvements in the efficiency of data center operation and even the efficiency of software itself. Unfortunately, in the past, similar opportunities to promote the sustainable development of emerging technologies across industries have been missed. Failure to act during the rise of cryptocurrency mining has led to concerns today about the industry’s electricity and water use and to tension—internationally and domestically—around regulation and resource accessibility. For example, blockchain advocates filed a lawsuit against the US Department of Energy after the agency attempted to conduct an emergency survey of energy use by crypto miners, with the advocates arguing that it forced businesses to divulge sensitive information.

More broadly, global digitization and its associated technologies have spurred crises in e-waste, supply-chain fragility, and human rights, to name a few. Early consideration and prioritization of these issues could have prevented harmful patterns from becoming embedded in today’s systems and processes. Crucially, the projected demands on data centers in the coming years due to the rise of AI—in terms of hardware, power, cooling, land and water use, and access to physical infrastructure and network bandwidth (a particular concern in growing urban areas)—are likely to far outstrip demands associated with other technologies. The potential cumulative impacts of the AI revolution, including resource consumption and byproduct production, underscore the urgency of acting today.

Questions for a sustainable industry

In order for policymakers to introduce measures that encourage AI initiatives (and the entire AI industry) to be more sustainable—and to enable consumers to choose sustainable AI tools—there needs to be more transparency around the sustainability of developing, training (including storing data), and deploying AI models, as well as around the lifecycle of attendant hardware and other infrastructure. Policymakers should require that any new AI initiative, early in planning, complete sustainability reporting that helps estimate the proposed initiative’s physical impact on the planet and people, both now and in the future. This transparency is not only necessary for guiding future regulation and consumer choice; it is also a crucial part of fostering a culture that prioritizes developing and regulating technology with the future in mind.

The questions that policymakers should require organizations developing and deploying AI initiatives to answer should, to use a metaphor, address the entire “iceberg.” In other words, these questions should inquire about visible sustainability issues (such as the production of carbon dioxide) as well as less-visible issues below the “waterline” (such as whether the land underlying physical infrastructure could have been used for food production). These questions should cover three overarching categories:

  1. The consumption of readily detectable resources,
  2. The production of byproducts, and
  3. The achievement of broader sustainability goals.

In developing the questions for reporting, policymakers should gather insights from regulators, AI technologists, environmental scientists, businesses, communities near AI infrastructure, and end users. The questions should be useful (easily interpretable, with insights that point to potential areas of improvement), extensible (applicable across current and future AI models), and reliable (yielding roughly repeatable answers using distinct tools). Framing questions in a way that results in the reporting of concrete and preferably quantitative answers can set the stage for organizations to implement internal, dashboard-style approaches to sustainable AI development and deployment.

Beyond the wording of such questions, their timing matters as well. Answers to these questions should be reported in the earliest stages of an AI initiative’s planning, as they will help organizations conduct cost/benefit analyses and assess their return on investment. Real-time insights gathered during the operational lifetime of an AI initiative would enable not only monitoring of the project’s sustainability, but also execution of in silico experiments that could reveal novel operational, budgetary, and sustainability benefits. The questions should apply equally to all organizations in the public and private sectors using AI. Finally, policymakers should revisit the questions regularly as AI technologies continue to develop and be deployed—and as user needs and geopolitics change.

To capture these broad considerations in a concise set of questions, policymakers should look to the following key sustainability questions as a starting point.

What resources (inputs) are being consumed, directly and indirectly, throughout the lifecycle of an AI initiative?

  • How much energy is required? What are the sources of this energy? What percentage of this energy is renewable? What is the Power Usage Effectiveness for the initiative?
  • How much water is required, for example for cooling? What are the sources of this water and, for example, is it recycled water? How much of this water could have been suitable for human consumption or agricultural use? What is the Water Usage Effectiveness for the initiative?
  • How much land is required, for example for physical infrastructure? How close is each land parcel to human habitation? How much of this land is appropriate for food production or human habitation? How has local biodiversity been impacted by the use of this land for AI initiatives?
  • What rare metals are used and what are their sources? What are the sources of all metals required for hardware (such as graphics processing units, also known as GPUs)—land, ocean, or recycled? How are local communities and workers, in areas where these metals are procured, engaged or affected?
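The two efficiency metrics named above have standard definitions published by The Green Grid: Power Usage Effectiveness (PUE) is the ratio of total facility energy to IT equipment energy, and Water Usage Effectiveness (WUE) is site water use per unit of IT equipment energy. A minimal sketch of how an organization might compute them for its own reporting—all annual figures below are hypothetical, for illustration only:

```python
# Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE),
# as defined by The Green Grid. All figures below are hypothetical.

def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal = 1.0)."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

def wue(site_water_liters: float, it_equipment_energy_kwh: float) -> float:
    """WUE = site water use (liters) / IT equipment energy (kWh)."""
    return site_water_liters / it_equipment_energy_kwh

# Hypothetical annual figures for a single data center:
total_energy = 120_000_000   # kWh consumed by the whole facility
it_energy = 100_000_000      # kWh consumed by servers, storage, and networking
water = 180_000_000          # liters of water used, largely for cooling

print(f"PUE: {pue(total_energy, it_energy):.2f}")   # 1.20
print(f"WUE: {wue(water, it_energy):.2f} L/kWh")    # 1.80
```

A PUE of 1.0 would mean every kilowatt-hour goes to computation rather than cooling and other overhead, which is why the metric rewards efficiency improvements in facility design.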

What byproducts (outputs) are being produced, directly and indirectly, throughout the lifecycle of an AI initiative?

  • How much greenhouse gas (embodied carbon) is produced, in metric tons of carbon dioxide equivalent?
  • What is the projected functional lifetime of each of the top five most abundant hardware components (such as central processing units—also known as CPUs—or GPUs)?
  • How much hardware waste is generated each year? How much of this waste is recycled effectively? How much of this waste will go to the landfill? How much waste pollutes the air and water? How much of this waste is toxic to human health and to the environment?
  • How much wastewater is produced, where does it go, and what can it be used for? Does it require further treatment? Can it be released back into the environment, and how would its release impact the environment (e.g., changing the water temperature of an ecosystem)? Is it used as gray water for other applications?
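For the first byproduct question, the operational portion of emissions is commonly estimated by multiplying electricity consumption by a grid emission factor; embodied carbon from hardware manufacturing requires a fuller lifecycle assessment and is not covered by this sketch. The emission factor below is a hypothetical placeholder—real factors vary by grid, region, and year:

```python
# Estimating operational greenhouse-gas emissions in metric tons of CO2
# equivalent (tCO2e) from electricity use. The grid emission factor is a
# hypothetical placeholder; real factors vary by grid, region, and year.

def operational_co2e_tons(energy_kwh: float, grid_factor_kg_per_kwh: float) -> float:
    """tCO2e = energy (kWh) x grid emission factor (kg CO2e/kWh) / 1000."""
    return energy_kwh * grid_factor_kg_per_kwh / 1000.0

# Hypothetical: 100 GWh consumed on a grid emitting 0.4 kg CO2e per kWh.
print(f"{operational_co2e_tons(100_000_000, 0.4):,.0f} tCO2e")  # 40,000 tCO2e
```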

What broader sustainability opportunities are being harnessed through each AI initiative, using the United Nations’ SDGs as a framework?

  • How resilient is the associated physical infrastructure to earthquakes, floods, droughts, fires, storms, and other disasters? (SDGs 9 and 11)
  • How much of the broader labor force is local to the land and community being used for an AI initiative? How competitive are wages relative to the industry? (SDGs 1 and 8; broader questions around AI and labor disruption are critical but go beyond the scope of the current discussion)
  • How safe and healthy are working conditions for all contributing employees and contractors, both local and remote to the physical infrastructure of the initiative? (SDG 3)
  • How many educational opportunities are being produced by, and contributing to, the AI initiative? (SDG 4)
  • Regarding gender equality and broader inclusivity, what percentage of the workforce, both full-time and contract, identifies as a member of a marginalized group? Are efforts being made to reduce inequality within and between the countries that provide the AI workforce? (SDGs 5, 10, and 11)

Sticking the landing

Any organization working with AI—whether it is using in-house compute resources or external (cloud) service providers to develop and deploy AI models—should report its answers to the above sustainability questions yearly. Several tools and frameworks for reporting and answering some sustainability questions already exist; adopting new policies such as required reporting will spur the development of further tools.
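To illustrate what yearly, machine-readable reporting might look like in practice, here is a minimal sketch of a report record covering a few of the questions above. The schema, field names, and values are all hypothetical illustrations, not an established standard:

```python
# A minimal sketch of a machine-readable annual sustainability report for an
# AI initiative. All field names and values are hypothetical illustrations,
# not an established reporting schema.

import json
from dataclasses import dataclass, asdict

@dataclass
class AISustainabilityReport:
    initiative: str
    reporting_year: int
    energy_kwh: float            # total energy consumed
    renewable_share: float       # fraction of energy from renewables (0-1)
    pue: float                   # Power Usage Effectiveness
    water_liters: float          # water consumed, e.g., for cooling
    co2e_tons: float             # greenhouse-gas emissions, tCO2e
    hardware_waste_kg: float     # hardware waste generated
    waste_recycled_share: float  # fraction of waste recycled (0-1)

report = AISustainabilityReport(
    initiative="example-chatbot",  # hypothetical initiative name
    reporting_year=2024,
    energy_kwh=120_000_000,
    renewable_share=0.65,
    pue=1.2,
    water_liters=180_000_000,
    co2e_tons=40_000,
    hardware_waste_kg=250_000,
    waste_recycled_share=0.4,
)

# Serialize for submission to an open-access database or external auditor.
print(json.dumps(asdict(report), indent=2))
```

Structured records like this are what would make a single, open-access database (and external auditing of it) feasible.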

For the time being, transparency obligations should fall on the organizations that are developing and deploying AI models—not on consumers who are only end users of AI models. That may change if large numbers of end users themselves end up training and developing their own models, causing a rapid expansion in AI-associated resource consumption and byproduct production. However, the question about where transparency obligations fall must be revisited regularly as AI technologies continue to develop rapidly and increasingly resource-intensive queries by users become possible. Crucially, hypothetical future affordances of AI must not be factored into the answers to these sustainability questions. For example, if the goal of an AI initiative is to help an end user reduce their carbon emissions, then that hypothetical future reduction must not be factored into the organization’s assessment of the carbon emissions of this AI initiative this year.

Policymakers should promote the monitoring and reporting of accurate information, rather than define “good” answers to these questions and penalize companies that do not meet those benchmarks. The EU’s Sustainable Finance Disclosure Regulation framework, with its emphasis on the power of transparency to shape and amplify market forces, can serve as a model for such an approach. If reported data were gathered in a single, open-access database (perhaps analogous to the European Single Access Point), then regulators, investors, technology companies, nonprofits, and the general public would be able to reward progress toward sustainability goals, over various time horizons, through a variety of mechanisms. External auditors will be important for ensuring the credibility of reported data, as they already are in sustainable finance.

Authority to penalize nonreporting should be assigned to a designated agency. In the United States, for example, the Securities and Exchange Commission and environmental protection agencies at the federal and state levels could be logical candidates for this authority, but such an environment-centered approach overlooks the larger definitions of sustainability that regulation could encompass. The Office of Science and Technology Policy at the White House may be more appropriate as a centralizing point, given this entity’s mandate to pursue “bold visions” and “unified plans” for US science and technology, as well as its ability to engage with external partners in industry, government, academia, and civil society. The agencies selected to carry out this responsibility should have direct lines of communication with their counterparts in other countries, enabling an agile and coordinated international response to rapid advances in AI.

Critically, international regulators, researchers, businesses, and other developers and users of AI should maintain a collaborative—rather than adversarial—relationship, as doing so could position sustainability as an investment in the future that delivers returns in the near to medium term. Subsidies from federal, state, or local governments could be used to assist small and medium-sized enterprises with the administrative and other financial burdens of this reporting, as mentioned in the EU’s AI Act. To ease the burden on organizations as they comply with potential future reporting and auditing requirements about the sustainability of their AI operations, policymakers should identify metrics and processes that can be used for parallel disclosures—for example, by requiring data that a single company could use to fulfill its transparency obligations for sustainable AI, sustainable finance, and sustainable corporate reporting such as the EU’s Corporate Sustainability Reporting Directive. Policymakers should also strive to maintain consistency internationally, perhaps following the EU’s lead in sustainability policy to date. Ultimately, the International Organization for Standardization should expand its current AI offerings to include standards for the transparency of AI sustainability (such as the questions suggested above), in alignment with its current standards addressing environmental management, energy management, social responsibility, and more.

A unique moment

The sustainability of AI is an urgent issue with long-lasting, global impacts. Today, the world still dedicates a great deal of attention to AI; the technology has not yet faded into the background or become ubiquitous and invisible, much like electricity has. However, the current moment—of unprecedented demand for the extraction and deployment of AI-enabling physical resources—is a crucial turning point.

Current and future generations depend on policymakers to steward the world’s resources sustainably, especially as a wave of global resource expenditure—with an anticipated long tail—approaches. In light of this impending growth, the opportunity for action is brief and the need is immediate. Although the scale of the challenge is daunting, international responses to ozone depletion and Antarctic geopolitical tension showcase the power of international collaboration for rapid and high-impact action.

With the framing of key sustainability questions, policymakers can gather the insights they need to build a regulatory framework that encourages responsible resource expenditure and adapts to the inevitable shifts in a nascent industry. Transparency can empower consumers and investors to incentivize sustainable AI development. International cooperation on this effort can foster transparency and inspire collaborative action to build a future that is sustainable in many senses of the word.

Tiffany J. Vora is a nonresident senior fellow at the Atlantic Council’s GeoTech Center. She has a PhD in molecular biology from Princeton University.

Kathryn Thomas is the chief operating officer of Blue Lion. She has a PhD in water quality and monitoring from the University of Waterloo.

Anna Ferré-Mateu is a Ramón y Cajal fellow at the Instituto de Astrofísica de Canarias and an adjunct fellow at the Center of Astronomy and Supercomputing of the Swinburne University of Technology. She has a PhD in astrophysics from the Instituto de Astrofísica de Canarias.

Catherine Lopes is the chief data and AI strategist of Opsdo Analytics. She has a PhD in machine learning from Monash University.

Marissa Giustina is a research scientist and quantum electronics engineer. She has a PhD in physics from the University of Vienna. She conducted the research for this article outside of her employment with Google DeepMind and this article represents her own views and those of her coauthors.

The authors gratefully acknowledge David Rae of EY for fruitful discussions. The authors also acknowledge Homeward Bound Projects, which hosted the initial working session that led to the ideas in this article.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.
