The world’s regulatory superpower is taking on a regulatory nightmare: artificial intelligence

The humans are still in charge—for now. The European Parliament, the legislative branch of the European Union (EU), passed a draft law on Wednesday intended to restrict the use of artificial intelligence (AI) in the twenty-seven-member bloc and to impose transparency requirements on it. In the AI Act, lawmakers zeroed in on concerns about biometric surveillance and disclosures for generative AI such as ChatGPT. The legislation is not final. But it could have far-reaching implications, since the EU’s large size and single market can affect business decisions for companies based elsewhere—a phenomenon known as “the Brussels effect.”

Below, Atlantic Council experts share their genuine intelligence by answering the pressing questions about what’s in the legislation and what’s next. 

1. What are the most significant aspects of this draft law? 

The European Parliament’s version of the AI Act would prohibit the use of the technology within the EU for controversial purposes such as real-time remote biometric identification in public places and predictive policing. Member state law enforcement agencies are sure to push back against aspects of these bans, since some agencies already use these technologies for public security reasons. The final version could well be more accommodating of member states’ security interests.

Kenneth Propp is a nonresident senior fellow with the Atlantic Council’s Europe Center and former legal counselor at the US Mission to the European Union in Brussels.

The most significant aspect of the draft AI Act is that it exists and has been approved by the European Parliament. This is the only serious legislative attempt to date to deal with the rapidly evolving technology of AI, and specifically to address some of its anticipated risks, both from the technology itself and from the ways people use it. For example, a government agency might use AI to identify wrongdoing among welfare recipients, but due to learned bias it misidentifies thousands of people as participating in welfare fraud (this happened in the Netherlands in 2020). Or a fake video showing a political candidate in a compromising position is released just prior to an election. Or a government uses AI to track citizens and determine whether they exhibit “disloyal” behavior.

To address these concerns, EU policymakers have designed a risk-management framework in which higher-risk applications would receive more scrutiny. A few uses of AI—social scoring, real-time facial recognition surveillance—would be banned outright, while most companies deploying AI, even in the higher-risk cases, would instead have to maintain extensive records on training and use. Above all, this is a law about transparency and redress: humans should know when they are interacting with AI, and if AI makes decisions about them, they should have a right of redress to a fellow human. In the case of generative AI, such as ChatGPT, the act requires that images be labeled as AI-generated and that developers disclose the copyrighted works on which their models were trained.

Of course, the act is not yet finished. Next come negotiations between the parliament and the EU member states, and we can expect significant opposition to certain bans from European law enforcement institutions. Implementation will bring other challenges, especially in protecting trade secrets while examining how algorithms might steer users toward extreme views or into the hands of fraudsters. But if expectations hold, by the end of 2023 Europe will have the world’s first substantive law on AI.

Frances Burwell is a distinguished fellow at the Atlantic Council’s Europe Center and a senior director at McLarty Associates.

There are numerous significant aspects of this law, but two and a half really stand out. The first is establishing a risk-based policy in which lawmakers identify certain uses as presenting unacceptable risk (for example, social scoring, behavioral manipulation of certain groups, and biometric identification by groups including police). Second, generative AI systems would be regulated and required to disclose any copyrighted data used to train the generative model, and any content the AI outputs would need to carry a notice or label indicating that it was created with AI. It’s also interesting what’s included as guidance for parliament to “ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.” This gives parliament a wide mandate that could see everything from data provenance to data center energy use regulated under this draft law.

Steven Tiell is a nonresident senior fellow with the Atlantic Council’s GeoTech Center. He is a strategy executive with wide technology expertise and particular depth in data ethics and responsible innovation for artificial intelligence.

2. What impact would it have on the industry?

Much as the EU’s General Data Protection Regulation (GDPR) became a motivating force for the global business community, this law will do the same. The burden on companies of maintaining separate infrastructure exclusively for the EU is much higher than the cost of compliance. And the cost (and range) of noncompliance for companies (and individuals) has risen—prohibited uses, those deemed to pose unacceptable risk, will incur a fine of up to forty million euros or 7 percent of worldwide annual turnover (total global revenue) for the preceding financial year, whichever is greater. Violations of human-rights laws or any type of discrimination perpetrated by an AI will incur fines of up to twenty million euros or 4 percent of worldwide turnover. Other noncompliance offenses, including those involving foundation models (the category that covers generative AI), are subject to fines of up to ten million euros or 2 percent of worldwide annual turnover. And those supplying false, incomplete, or misleading information to regulators can be fined up to five million euros or 1 percent of worldwide annual turnover. These fines are a big stick to encourage compliance.

—Steven Tiell 
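
To make the tiered “whichever is greater” penalty arithmetic concrete, here is a minimal sketch in Python. The tier labels and function name are hypothetical illustrations; the euro caps and turnover percentages are the draft figures cited above.

```python
# Illustrative sketch of the draft AI Act's tiered penalty arithmetic.
# Tier labels are hypothetical; the euro caps and turnover percentages
# reflect the draft figures described above.

TIERS = {
    "prohibited_use": (40_000_000, 0.07),        # unacceptable-risk uses
    "rights_violation": (20_000_000, 0.04),      # human-rights/discrimination breaches
    "other_noncompliance": (10_000_000, 0.02),   # incl. foundation-model obligations
    "misleading_regulators": (5_000_000, 0.01),  # false or incomplete information
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum fine: the flat euro cap or the turnover share,
    whichever is greater."""
    flat_cap, turnover_share = TIERS[tier]
    return max(flat_cap, turnover_share * worldwide_annual_turnover_eur)

# A firm with EUR 10 billion in annual turnover would face up to EUR 700
# million for a prohibited use, since 7 percent of turnover exceeds the
# forty-million-euro floor.
print(max_fine("prohibited_use", 10_000_000_000))  # 700000000.0
```

The arithmetic shows why the law has global bite: for smaller firms the flat euro caps dominate, while for large multinationals the turnover-based percentages do.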

As I wrote for Lawfare when the European Commission proposed the AI Act two years ago, the regulation is “a direct challenge to Silicon Valley’s common view that law should leave emerging technology alone.” At the same time, though the legislation is lengthy and complex, it is far from the traditional caricature of EU measures as heavy-handed, top-down enactments. Rather, as I wrote then, the proposal “sets out a nuanced regulatory structure that bans some uses of AI, heavily regulates high-risk uses, and lightly regulates less risky AI systems.” The European Parliament has added some onerous requirements, such as a murky human-rights impact assessment of AI systems, but my earlier assessment remains generally true.

It’s also worth noting that other EU laws, such as the GDPR adopted in 2016, will have an important and still-evolving impact on the deployment of AI within EU territory. For example, earlier this week Ireland’s Data Protection Commission delayed Google’s request to deploy Bard, its AI chatbot, because the company had failed to file a data protection impact assessment, as required by the GDPR. Scrutiny by multiple European regulatory authorities employing precautionary approaches likely will mean that Europe lags in seeing some new AI products.

—Kenneth Propp

3. How might this process shape how the rest of the world regulates AI?

It will have an impact on the rest of the world, but not simply by becoming the foundation for other AI acts. Most significantly, the EU act puts certain restrictions on governmental use of AI in order to protect democracy and citizens’ fundamental rights. Authoritarian regimes will not follow this path. The AI Act is thus likely to become a marker, differentiating governments that value democracy more than technology from those that seek to use technology to control their publics.

—Frances Burwell

Major countries across the globe, from Brazil to South Korea, are in the process of developing their own AI legislation. The US Congress is slowly moving in the same direction, with a forthcoming bill being developed by Senate Majority Leader Chuck Schumer likely to have an important influence. If the EU sticks to its timetable of adopting the AI Act by the end of the year, its legislation could shape other countries’ efforts significantly by virtue of being early out of the gate and comprehensive in nature. Countries more concerned with promoting AI innovation, such as the United Kingdom, may stake out a lighter-touch approach than the EU, however.

—Kenneth Propp

The world’s businesses will comply with the EU’s AI Act if they have any meaningful amount of business in the EU, and governments in the rest of the world are aware of this. Compliance with the EU’s AI Act will be table stakes. Many future regulations can be expected to mimic components of the EU’s AI Act, big and small, but where they deviate will be interesting. Expect other regulators, emboldened by these fines, to seek commensurate penalties for violations in their countries. Other countries might extend more of the auditing requirements to things such as retaining outputs from generative models. Consumer protections will also vary from country to country. And it will be interesting to see whether countries such as the United States and the United Kingdom pivot their legislation toward being more risk-based as opposed to principles-based.

—Steven Tiell 

4. What are the chances of this becoming law, and how long will it take? 

Unlike in the United States, where congressional passage of legislation is typically the decisive step, the European Parliament’s adoption on Wednesday of the AI Act only prepares the way for a negotiation with the EU’s member states to arrive at the final text. Legislative proposals can shift substantially during such closed-door “trilogues” (so named because the European Commission as well as the Council of the European Union also participate). The institutions aim for a final result by the end of 2023, during Spain’s presidency of the Council, but legislation of this complexity and impact easily could take longer to finalize.

—Kenneth Propp

Based on this week’s vote, there are strong signals of overwhelming support for this draft law. The next step is trilogue negotiations among the parliament, the Council of the European Union, and the European Commission, and these negotiations will determine the law’s final form. There are strong odds these negotiations will finish by the end of the year. At that point, the act will likely allow a transition period of about two years before it takes full effect in member states, similar to what happened with the GDPR. Also similar to the GDPR, it could take at least that long for member states to develop the expertise to assume their role as market regulators.

—Steven Tiell 

5. What are some alternative visions for regulating AI that we may see?

In general, we see principles-based, risk-based, and rights-based legislation. Depending on the government and the significance of the law, different approaches might be applied. The EU’s AI Act is unusual and interesting in that it started life as a principles-based approach but became primarily risk-based as it evolved. Draft legislation in the United States and the United Kingdom is principles-based today. Time will tell whether these governments are influenced by the EU’s approach.

—Steven Tiell
