Artificial Intelligence Internet Politics & Diplomacy Security & Defense Technology & Innovation United States and Canada

New Atlanticist

October 30, 2023

Experts react: What does Biden’s new executive order mean for the future of AI?

By Atlantic Council experts

“Can machines think?” The mathematician Alan Turing posed this question in 1950, imagining a future human-like machine that observed the results of its own behavior and modified itself to be more effective. After observing the rapid development of artificial intelligence (AI) in recent months, US President Joe Biden issued an executive order on Monday intended to modify how humans use these “thinking machines.” The thinking behind the order is to make AI safer, more secure, and more trustworthy. Will it be effective? Below, our own “thinking machines”—that is, Atlantic Council experts—share their insights.

Click to jump to an expert analysis:

Graham Brookie: What stands out are the implications for AI use in the US government

Lloyd Whitman: Executive action alone won’t get the job done

Rose Jackson: The US still must have hard conversations about AI

Trisha Ray: Establishing AI ethics is a task the US must tackle with allies and partners

Newton H. Campbell: This aggressive but necessary order will introduce regulatory burdens on AI

Frances G. Burwell: The order lacks the legislation with enforcement of Europe’s AI Act

Maia Hamin: A one-two punch to put the US on a path toward standardized testing of AI models

Rachel Gillum: A potential catalyst for responsible private sector innovation

Ramayya Krishnan: US leadership on AI will create new opportunities for workers and businesses

Steven Tiell: The executive order is vast in scope—and the equivalent of vaporware

Carole House: A bold, comprehensive vision facing potential challenges with implementation


What stands out are the implications for AI use in the US government

The Biden administration’s executive order on AI is a simple, pragmatic step forward in coherent and connective tech policy. The proliferation of AI governance efforts this year at nearly every level, including local, national, multinational, multi-stakeholder, and more, has been a natural extension of the rapid deployment of AI and industry reorientation around it. This executive order is an opening salvo not meant to be comprehensive or final, but it sets a significant policy agenda as other bodies—including Congress and aligned international partners—consider next steps. It is a clear signal from the United States ahead of the AI Safety Summit in the United Kingdom later this week. 

What stands out the most is not necessarily the rules set out for industry or broader society, but rather the rules for how the government itself will begin to consider the deployment of AI, with security being at the core. As policy is set, it will be extremely important for government bodies to “walk the walk” as well.

Graham Brookie is the vice president and senior director of the Atlantic Council’s Digital Forensic Research Lab.


Executive action alone won’t get the job done

The Biden-Harris administration has taken strong action with the comprehensive executive order on safe, secure, and trustworthy AI. But an executive order can only do so much, limited by the existing authorities and appropriations of the executive branch agencies. While priority-setting, principles and best practices, frameworks, and guidance across the federal AI landscape are important, much of the teeth of this order will require rule-making and other administrative actions that take time, are subject to judicial review, and can be revoked by a future administration. US leadership on AI will require bipartisan recognition of the opportunities and challenges AI presents for our economic security and national security, and thoughtful legislation ensuring a balanced, transparent, and accountable approach to promoting and protecting this critical emerging technology.

Lloyd Whitman is the senior director of the Atlantic Council’s GeoTech Center. He previously served at the National Science Foundation as assistant to the director for science policy and planning. He also held senior positions at the White House Office of Science and Technology Policy in the Obama and Trump administrations.


The US still must have hard conversations about AI

The White House’s executive order comes days before world leaders head to the United Kingdom for a major summit on “AI Safety.” Amid a flurry of partner government and multilateral regulation, convenings, and conversations, the administration is clearly trying to both make its mark in a crowded space and begin to make sense of the AI landscape within the powers it has. It’s worth noting that this massive executive order builds on a few years of action from the administration, including the Commerce Department’s release of a Risk Management Framework, the more recent voluntary principles negotiated with major AI companies, and the White House’s Blueprint for an AI Bill of Rights. 

We’ve seen these existing actions serve as the basis for US engagement on the Group of Seven’s (G7’s) Guiding Principles and Code of Conduct on Artificial Intelligence, which was released just this morning. We should expect to see echoes of the same in the commitments to come out of the AI Safety Summit in London later this week. 

However, this executive order is more than just posturing. By requiring every government agency to examine how and where AI is relevant to their jurisdictions of policy and regulation, the United States is taking a major step in advancing a sectoral approach to AI governance. With nods to data privacy action and a clear call for Congress to pass legislation, there are plenty of hooks for meaningful action here. This is a substantive move that sets up the United States to have the hard conversations required to ensure AI is leveraged toward a better future. 

Rose Jackson is the director of the Democracy + Tech Initiative at the Atlantic Council’s Digital Forensic Research Lab. She previously served as the chief of staff to the Bureau of Democracy, Human Rights, and Labor at the State Department.


Decoding Artificial Intelligence


Establishing AI ethics is a task the US must tackle with allies and partners

The Biden administration’s executive order is a timely signal of the United States’ intent to lead the global conversation on AI ethics by example. The order’s emphasis on international engagement is welcome, given the current moment of convergence of several trends in AI development and geopolitical tensions. In this vein, the US government should prioritize supporting existing multilateral and multi-stakeholder processes and recommendations. With the United States having rejoined the United Nations Educational, Scientific, and Cultural Organization (UNESCO) earlier this year, this includes UNESCO’s “Recommendation on the Ethics of Artificial Intelligence,” adopted in November 2021. Similarly, the executive order also calls for “the development of a National Security Memorandum that directs further actions on AI and security.” In doing so, the order finally, albeit only partially, addresses the void left by the 2023 US policy on “Autonomy in Weapons Systems” regarding the use of AI in law enforcement and border control, among other applications outside conflict. This memorandum could serve as an important signal to democratic allies and partners in a sphere that is often treated as an exception to broader principles of AI ethics.

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.


This aggressive but necessary order will introduce regulatory burdens on AI

Today’s executive order from Biden on a safe, secure, and trustworthy artificial intelligence is quite aggressive and will likely encounter some hurdles and court challenges. Nonetheless, direction was needed from the executive branch. The order is necessary to strike a balance between AI innovation and responsible use in the federal government, where new AI models, applications, and safeguards are constantly being developed. It emphasizes safety, privacy, equity, and consumer protection, which are essential for building trust in AI technologies. I see the emphasis on privacy-preserving technologies and the focus on establishing new international frameworks as a positive step for global AI governance.

The order directs every federal agency to regulate and shape AI’s growth to protect the public, national security, and the economy. But with limited power (the improbability of Congress passing any real laws that align funded activity to accommodate these new constraints and responsibilities), the order will introduce regulatory burdens, potentially slowing AI development and other AI-impacted processes due to an evolving skills gap in the government. The potential misalignment of new government programs and funding is of significant concern, and will likely be used to reinforce political narratives of government inefficiency.

Newton H. Campbell is a nonresident fellow at the Atlantic Council’s Digital Forensic Research Lab and the director of space programs at the Australian Remote Operations for Space and Earth Consortium.


The order lacks the legislation with enforcement of Europe’s AI Act

The new White House executive order is a notable step forward toward protecting Americans from the biggest risks of advanced AI. The European Union (EU) is about to conclude negotiations over its own AI Act, and the similarity in ambitions between the two initiatives is remarkable. Both call for testing and documentation, greater security against cyberattacks, safeguards against discrimination and deception, and transparency for consumers, along with other measures. But the EU AI Act is legislation with enforcement, including significant fines, while the executive order depends on the market influence of the federal government. 

Will developing standards and best practices aimed at preventing algorithmic discrimination, for example, and pushing these through federal programs and procurement, be sufficient? It will be some time before we know, but it is a worthwhile experiment. In the meantime, this executive order gives the US administration credibility as it works with other countries, in the G7 and elsewhere, to ameliorate the risks of AI and focus on the opportunities.

Frances G. Burwell is a distinguished fellow at the Atlantic Council’s Europe Center and a senior director at McLarty Associates.


A one-two punch to put the US on a path toward standardized testing of AI models

The executive order directs the National Institute of Standards and Technology (NIST) to develop standards for red-teaming (adversarial testing for risks and bad behavior in AI models), and then separately proposes using the Defense Production Act to compel AI companies to disclose the results of their own red-teaming to the government. This one-two punch could be a path to getting something like a regime for pre-release testing for highly capable models without needing to wait on congressional action. Hopefully, the NIST standards will encompass both the cybersecurity of the model (e.g., its susceptibility to malicious attacks and circumvention) and its usefulness for malicious cyber activity. It will also be important to test models as integrated with other systems, such as code interpreters or autonomous agent frameworks, that give AI systems additional capabilities, such as executing code or taking actions autonomously.

The direction for the Department of Commerce to develop standards for detecting AI-generated content is important: any regime for AI content labeling that can be used by many different AI companies and communications platforms will rely on standardization. I’m glad to see the executive order mention both the watermarking of AI-generated content and authentication of real, non-AI generated content, as I suspect both may be necessary in the future. 

I admire the White House’s goal to build AI to detect and fix software vulnerabilities, but I’ll be curious to see how they think about managing risks that could arise from powerful AI systems custom-built to hunt for vulnerabilities. I also hope they’ll tie new tools into existing efforts to “shift the burden of responsibility” in cyberspace to ensure AI vulnerability finders create secure-by-design software rather than endless patches.

It’s good to see privacy mentioned, but, as always, painful that no path appears but the congressional one, which has remained at an impasse for years now. However, the presence of privacy-preserving technologies is excitingthese technologies may help secure a policy that balances painful tradeoffs between individual privacy and innovation in data-hungry spaces like AI.

Maia Hamin is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab.


A potential catalyst for responsible private sector innovation

The Biden administration’s executive order on AI is an important step toward steering the fast-moving AI sector toward responsible development. Its impact will largely depend on how the private sector reacts to its incentives and enforceability.

The order rightly focuses on safeguarding societal and consumer interests, such as identifying misleading or deceptive AI-generated content. However, an effective technological solution to this critical issue is still needed. Ideally, this directive will serve as a catalyst for investments in this space. Similarly, the inclusion of the National AI Research Resource pilot has the potential to democratize AI advancements, reducing reliance on major tech companies and encouraging innovations that prioritize societal benefits.

I welcome the executive order’s focus on immediate-term societal risks, especially its efforts to empower the government to enforce existing anti-discrimination laws. These efforts should incentivize developers to build these protections into their systems by design rather than consider them after the fact. However, effective enforcement will only be feasible if agencies are adequately equipped for this work. The executive order attempts to address this by attracting the desperately needed AI talent to government positions, but more needs to be done to facilitate interagency coordination to avoid fragmented policymaking and inconsistent enforcement.

Lastly, the order wisely aims to relax immigration barriers for skilled AI professionals, a bipartisan issue often overlooked yet strongly advocated for by the private sector. Nevertheless, equal emphasis should be placed on domestic education and retraining programs to create a comprehensive talent pipeline and support today’s workforce.

Rachel Gillum is a nonresident fellow at the Atlantic Council’s Digital Forensic Research Lab. She is also vice president of Ethical and Humane Use of Technology at Salesforce and served as a commissioner on the Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation.


US leadership on AI will create new opportunities for workers and businesses

This executive order is comprehensive, and it establishes an important first step in US leadership in AI policy and governance to go hand in hand with our leadership in AI innovation and technology. Its timing right before the United Kingdom’s AI Safety Summit signals the US approach to lead in AI policy. The use of the Defense Production Act to get major model developers to share their internal red teaming AI safety data with the government prior to release is an important step beyond securing voluntary commitments from Model Developing Firms such as Open AI and Google. The executive order correctly calls for assessments by federal agencies in their use of AI, and that will require investments in building capability, tools, and technology, along with accountable AI methods and processes. All in all, it is an important step and I look forward to the rule-making to follow, as well as legislation from Congress aligned with the themes highlighted in this order. It is essential for both our economic and national security.

In particular, the focus on AI talent and making the United States the destination of choice, as well as directing improvements and changes in visas and green cards, will help to ensure that the US leads globally in AI innovation. It also has the opportunity to lead in using AI to improve skills development for both future and current workers, and to assess how AI can be used to augment the skills of current workers. 

The commitments on AI safety, security, and reliability are the strongest I have seen globally, and the commitment to privacy and accountable use of AI will result in the United States becoming the leader in trustworthy and responsible AI. US AI leadership will create new opportunities for business and civil society to use AI to support economic opportunity and improve the quality of life for Americans. The parts of the order that deal with AI talent and the use by federal agencies of AI procurement to shape responsible AI will help US firms build the competitive advantage in operationalizing trustworthy AI. It is this operationalization that is the “know-how” required to go from policy to practice, and which will be a comparative advantage to US firms.

Ramayya Krishnan is a member of the Geotech Commission of the Atlantic Council. He is also the W. W. Cooper and Ruth F. Cooper Professor of Management Science and Information Systems at Heinz College and the Department of Engineering and Public Policy at Carnegie Mellon University.


The executive order is vast in scope—and the equivalent of vaporware

This executive order is vast in scope, addressing multiple very difficult problems in responsible AI. It will be good for driving dialogue and investigation at agencies. But this executive order is the equivalent of vaporware in software—something that sounds nice, doesn’t exist, and likely never will (at least in the form it was presented). While it’s clear there is a strong appetite for AI regulation in the United States, it’s likely several years away. That said, there are signals from this administration for what it could include. What that regulation will look like will surely evolve yet again. 

In October 2022, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights. The Blueprint suggested that the United States would drive toward a rights-based approach to regulating AI. The new executive order, however, departs from this philosophy and focuses squarely on a hybrid policy and risk-based approach to regulation. In fact, there’s no mention of notice, consent, opt-in, opt-out, recourse, redress, transparency, or explainability in the executive order, while these topics comprised two of the five pillars in the AI Bill of Rights. 

Before federal AI regulation comes to fruition, the United States has an opportunity it shouldn’t miss to pivot back to making humans—and their rights—the drivers of AI regulation. 

Steven Tiell is a nonresident senior fellow with the Atlantic Council’s GeoTech Center. He is also a strategy executive with wide technology expertise and particular depth in data ethics and responsible innovation for artificial intelligence.


A bold, comprehensive vision facing potential challenges with implementation

Biden’s executive order sets out an extremely comprehensive vision on ensuring responsible developments in AI systems. It includes measures to ensure their safety and security in defense of the United States and Americans, ranging from initiatives on safeguarding US national security and leadership in technological innovation, to ensuring equity and privacy in these systems, to addressing issues for recruitment and retention of top AI talent in the United States, including in the federal government. Future digital economies will rely on the use of AI, and it will generate many higher-order commercial and technological developments. The Biden administration is taking steps here to ensure that the future is one that leverages the benefits of AI but also safeguards the significant exploitation that this technology could potentially bring.

The executive order thoughtfully builds on prior administration efforts, such as its Blueprint for an AI Bill of Rights, tech sprints focused on AI and cybersecurity, and a previous executive order on advancing racial equity and support for underserved communities. It also inherently acknowledges that the government must lead by example but also cannot drive or subsidize responsible AI development on its own. While Biden’s directives are pointed at agencies, the principles that he outlines and the broader impact of these initiatives are pointed at an entire ecosystem of public, private, academic, and international stakeholders via directed regulatory actions, standards efforts, and research and development promotion.

The executive order prioritizes efforts on transparency, content authenticity, and cybersecurity and privacy, and these are especially important in driving competitive and democratic uses of AI. Transparency and explainability are some of the most critical features needed for AI systems (or really any emerging technologies) since they create the foundation for assessing a system’s ability to meet other objectives like equity and bias minimization, as well as safety and soundness of systems. Driving research and development and developing guidance for content authentication and watermarking to clearly label AI-generated content can provide huge steps forward on enabling trust in AI ecosystems and combating significant threats to national security, the economy, and public trust, including foreign malign influence, fraud, and cybercrime. Cybersecurity and privacy are also critical efforts reflected in the executive order that demand that developers and those who use AI systems understand and account for the security of sensitive data and functions to prevent exploitation.

The emphasis on standards and best practice efforts, as well as the comprehensive accounting for democratic principles and policy goals, is laudable. However, the White House will likely face challenges in the actual implementation of this ambitious initiative and in balancing restrictive controls and positive promoting efforts. The administration’s efforts really rely on cooperation by tech companies, and in my view those who invest in them, to meet these objectives. The US government still faces significant limitations and fragmentation of some of the regulatory authorities referenced, such as over critical infrastructure sectors. Other issues may challenge the government’s ability to meet the aspirations and timelines set out in the order. These issues include federal agency lags in implementing privacy goals set since the Obama administration, an extended timeline of ongoing regulatory efforts for infrastructure services referenced in the order, and a significant number of interagency asks, potentially without meeting resourcing needs via appropriations.

Despite these challenges, a bold vision and outline of multipronged, mutually reinforcing efforts to address these complex issues is likely what the government and industry need as a north star at this time, especially given likely challenges with any near-term meaningful legislation in this space. Biden has set forth a vision for responsible development of AI that can guide interagency, industry, and international cooperative efforts for years to come.

Carole House is a nonresident senior fellow at the Atlantic Council GeoEconomics Center.

Further reading

Related Experts: Graham Brookie, Lloyd Whitman, Rose Jackson, Trisha Ray, Newton Henry Campbell Jr., Frances Burwell, and Maia Hamin

Image: Microsoft Bing Chat and AI chat applications are seen on a mobile device in this photo illustration in Warsaw, Poland on 21 July, 2023. (Photo by Jaap Arriens/NurPhoto)