
GeoTech Cues

April 22, 2024

EU AI Act sets the stage for global AI governance: Implications for US companies and policymakers

By Mohamed Elbashir

The European Union (EU) has taken a significant step toward shaping the future of artificial intelligence (AI) with the recent approval of the EU Artificial Intelligence Act (EU AI Act) by the European Parliament. This historic legislation, passed by an overwhelming margin of 523-46 on March 13, 2024, creates the world's first comprehensive framework for AI regulation. The EU will now roll out the new rules in a phased approach through 2027. The bloc has taken a risk-based approach to AI governance: it strictly prohibits AI practices deemed unacceptable, imposes stringent obligations on AI systems classified as high-risk, and encourages responsible innovation elsewhere.

The law is expected to enter into force between May and June, following approval by the Council of the European Union. Its impact will extend far beyond the EU's borders, reshaping the global AI landscape and establishing a new standard for AI governance around the world.

When reviewing the EU AI Act's requirements, tech companies should distinguish between core obligations that will have the greatest impact on AI development and deployment and those that are more peripheral.

Tech companies should prioritize transparency obligations such as disclosing AI system use, clearly indicating AI-generated content, maintaining detailed technical documentation, and reporting serious incidents or malfunctions. These transparency measures are critical for ensuring AI systems’ trustworthiness, accountability, and explainability, which are the Act’s primary goals.
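To make these obligations concrete, the sketch below shows one hypothetical way a provider might track the transparency items listed above. The `TransparencyRecord` class and its field names are illustrative assumptions for this article, not terminology or a format prescribed by the Act.

```python
from dataclasses import dataclass, field

# Hypothetical compliance record; field names are illustrative
# assumptions, not terminology prescribed by the EU AI Act.
@dataclass
class TransparencyRecord:
    system_name: str
    discloses_ai_use: bool          # users are told they are interacting with AI
    labels_generated_content: bool  # AI-generated output is clearly marked
    documentation_uri: str          # location of the technical documentation
    serious_incidents: list[str] = field(default_factory=list)

    def open_gaps(self) -> list[str]:
        """Return the transparency obligations this record does not yet meet."""
        gaps = []
        if not self.discloses_ai_use:
            gaps.append("disclose AI system use")
        if not self.labels_generated_content:
            gaps.append("clearly indicate AI-generated content")
        if not self.documentation_uri:
            gaps.append("maintain detailed technical documentation")
        return gaps

record = TransparencyRecord("support-chatbot", True, False, "docs/tech-file.md")
print(record.open_gaps())  # ['clearly indicate AI-generated content']
```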

Other requirements are more peripheral, such as registering high-risk AI systems in a public EU database or establishing specific conformity assessment procedures. Prioritizing the core obligations allows tech companies to demonstrate their commitment to responsible AI development while ensuring compliance with the most important aspects of the EU AI Act.

The Act strictly prohibits certain AI practices deemed to pose unacceptable risk. These prohibited practices include using subliminal techniques or exploiting vulnerabilities to materially distort human behavior in ways that can cause physical or psychological harm, particularly to vulnerable groups such as children or the elderly. The Act also prohibits social scoring systems, which rate individuals or groups based on social behavior and interactions. Such systems can be harmful, discriminatory, and racially biased.

Certain AI systems are classified as high-risk under the EU AI Act due to their potential to have a significant or severe impact on people and society. These high-risk AI systems include those used in critical infrastructure like transportation, energy, and water supply, where failures could endanger citizens' lives and health. AI systems used in educational or vocational training that affect access to learning and professional development, such as those used to score exams or evaluate candidates, are also considered high-risk. The Act further classifies as high-risk AI systems used as safety components in products, such as robot-assisted surgery or autonomous vehicles. The same applies to systems used in employment, worker management, and access to self-employment, such as resume-sorting software for recruitment or employee performance monitoring and evaluation systems.

Furthermore, AI systems used in critical private and public services, such as credit scoring or determining access to public benefits, as well as those used in law enforcement, migration, asylum, border control management, and the administration of justice and democratic processes, are classified as high-risk under the EU AI Act.
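For illustration, the snippet below sketches how the Act's risk-based structure might be expressed in code. The tier names follow the Act's broad categories, but the mapping of example use cases and the lookup function are hypothetical simplifications for this article, not a compliance tool: real classification under the Act depends on detailed legal criteria.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: stringent requirements apply"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: largely unregulated"

# Illustrative mapping of use cases to tiers, loosely based on the
# examples discussed above; an actual assessment is far more nuanced.
USE_CASE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "subliminal manipulation": RiskTier.PROHIBITED,
    "exam scoring": RiskTier.HIGH,
    "resume sorting": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "border control management": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for a given use case."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("credit scoring"))  # RiskTier.HIGH
```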

The Act sets stringent requirements for these systems, including thorough risk assessments, high-quality datasets, traceability measures, detailed documentation, human oversight, and robustness standards. Companies running afoul of the new rules could face fines of up to 7 percent of global annual revenue or €35 million (roughly $38 million), whichever is higher.
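As a rough worked example of the "whichever is higher" rule (the revenue figures below are invented for illustration, and amounts are in euros):

```python
# Maximum fine: the greater of 7% of global annual revenue or EUR 35 million.
FLAT_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07

def max_fine(global_revenue_eur: float) -> float:
    """Upper bound of the fine under the 'whichever is higher' rule."""
    return max(REVENUE_SHARE * global_revenue_eur, FLAT_CAP_EUR)

# A firm with EUR 100 million in revenue: the flat cap dominates.
print(max_fine(100e6))  # 35000000.0
# A firm with EUR 10 billion in revenue: the revenue share dominates.
print(max_fine(10e9))   # 700000000.0
```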

The Act classifies all remote biometric identification systems as high-risk and generally prohibits their use in publicly accessible areas for law enforcement purposes, with only narrow exceptions. The Act's national security exemption has raised concerns among civil society and human rights groups: it creates a double standard between private tech companies and government agencies with respect to AI systems used for national security, potentially allowing agencies to use the same technologies without the same oversight and accountability.

The EU AI Act has far-reaching implications for US AI companies and policymakers. Companies developing or deploying AI systems in or for the EU market will have to navigate the Act's strict requirements, which may demand significant changes to their AI development and governance practices. This will likely involve investments to improve risk assessment and mitigation processes, ensure the quality and representativeness of training data, implement comprehensive policies and documentation procedures, and establish strong human oversight mechanisms. Beyond significant financial penalties, noncompliance can cause severe and long-lasting reputational damage: loss of trust and credibility, public backlash, negative media coverage, lost customers, partnerships, and investment opportunities, and calls for boycotts.

The AI Act's extraterritorial reach means that US companies will be affected if their AI systems are used by EU customers. It is therefore important for US AI companies to closely monitor and adapt to the changing regulatory landscape in the EU, regardless of their primary market focus.

As Thierry Breton, the European Commissioner for Internal Market, said on X (formerly Twitter), "Europe is NOW a global standard-setter in AI." The EU AI Act will likely shape AI legislation in other countries by setting a risk-based standard for AI governance. Many countries are already considering the EU AI Act as they formulate their AI policies. François-Philippe Champagne, Canada's Minister of Innovation, Science, and Industry, has stated that the country is closely following the development of the EU AI Act as it works on its own AI legislation. That relationship is already strong, bolstered by the EU-Canada Digital Partnership, through which the two sides jointly address AI challenges.

Similarly, the Japanese government has expressed interest in aligning its AI governance framework with the EU's approach, and Japan's ruling party is expected to push for AI legislation in 2024. As more countries find inspiration in the EU AI Act, similar provisions, penalties included, are likely to become the de facto global standard for AI regulation.

The impact of the EU AI Act on the technology industry is expected to be significant: companies developing and deploying AI systems will need to devote resources to compliance, which will raise costs and may slow innovation in the short term, especially for startups. However, the Act's emphasis on responsible AI development and the protection of fundamental rights is the region's first attempt to set up guardrails and increase public trust in AI technologies, with the overall goal of promoting long-term growth and adoption.

Tech industry leaders, including Bill Gates, Elon Musk, Mark Zuckerberg, and Sam Altman, have repeatedly asked governments to regulate AI. Sundar Pichai, CEO of Google and Alphabet, stated last year that "AI is too important not to regulate," and the EU AI Act is an important step toward ensuring that AI is developed and used in a way that benefits society at large.

As other countries look to the EU AI Act as a model for their own legislation, US policymakers should continue engaging in international dialogues to ensure consistent approaches to AI governance globally, helping to ease regulatory fragmentation.

The EU AI Act is a watershed moment in the global AI governance and regulatory landscape, with far-reaching implications for US AI companies and policymakers. As the Act approaches implementation, it is critical for US stakeholders to proactively engage with the changing regulatory environment, adapt their practices to ensure compliance, and contribute to the development of responsible AI governance frameworks that balance innovation, competitiveness, and fundamental rights.


Image credit: Guillaume Perigois via Unsplash