Ten billion dollars. That’s how much the United States’ largest generative artificial intelligence (AI) firm, OpenAI, raised in private funding rounds between 2022 and 2023. While the makers of ChatGPT are in a league of their own, it’s clear that US-based firms have raised substantially more capital than their European counterparts.
Missing from this comparison is China. While there is little data on Chinese private funding for generative AI, a comparison of broader AI-related venture capital deals places it third, after the United States and the European Union.
Policymakers in the United States and European Union increasingly view generative AI, which can produce text, images, or other data from user-generated prompts, as one of the technological “commanding heights” of the coming decade. The increase in productivity from widespread adoption could add up to $4.4 trillion to the global economy annually, according to a McKinsey estimate—a figure comparable to the entire GDP of Germany. However, the technology has also raised new concerns over privacy, election misinformation, and cybersecurity. Likewise, the ability to produce advanced foundation models (large, general-purpose models that underlie generative AI) has implications for national security, where such models may be used for military training, cybersecurity, and autonomous or biological weapons systems.
Like earlier waves of startups, many small tech firms rely on venture capital (VC) to scale their operations. Transatlantic divergence in this respect is stark. Last year, over 90 percent of venture capital dedicated to generative AI was concentrated in the United States. In similar fashion, nearly twice as many generative AI startups were founded in the United States as in the European Union and UK combined.
More broadly, these figures reflect a smaller European VC market. The United States has just 23 startups per VC firm, with an average of $4.9 million in funding available to each. The typical EU startup has less than one-fourth that amount available, and shares each VC firm with 198 other startups. In tech, the gulf widens further. When it comes to private funding for these new commanding heights, the Rockies reach far higher than the Alps.
To some, this disparity in funding can be attributed to differences in regulation. In December, EU lawmakers reached agreement on the final text of the EU AI Act, a sweeping set of regulations on general-purpose AI models intended to encourage transparency and protect copyright holders. Earlier versions drew opposition from France, Germany, and Italy, along with warnings from the US, that the legislation would stifle the growth of continental competitors in AI. (While the United States has not passed comparable legislation, the Biden administration released an executive order on AI in October.)
Others may recall earlier tech waves (think Amazon, Alphabet, and Apple, and the rest of the “Magnificent Seven”) in which the European Union produced few startups but many standards, including on privacy. In the optimistic view, Europe’s policies, such as the General Data Protection Regulation (GDPR), Digital Markets Act (DMA), and Digital Services Act (DSA), have helped shape the standards of foreign tech giants—a so-called “Brussels Effect.” In the pessimistic view, they have engendered long-running disputes and created serious compliance (and competitiveness) challenges for the continent’s youngest firms.
Today, however, the new EU and US approaches to AI bear significant similarities. To be sure, the US executive order on AI lacks the strong enforcement mechanisms of the AI Act, which imposes substantial fines (up to 7 percent of global turnover) on non-compliant firms. Nevertheless, both adopt a similar focus on risk-based approaches, transparency requirements, and testing. More broadly, the United States and the EU have coordinated their approaches through the G7 Hiroshima AI Process, the UK AI Safety Summit, the Administrative Arrangement on Artificial Intelligence, and the Trade and Technology Council (TTC).
One contrast with previous tech waves is that the European Union is increasingly pairing injunctions with incentives. Shortly after EU institutions reached agreement on the AI Act, the European Commission announced new measures to assist AI startups, including dedicated access to supercomputers (“AI Factories”) and other financial support expected to raise $4 billion across the sector by 2027.
While the two jurisdictions are increasingly aligned on regulation, such measures aim to overcome the more enduring disparity in private funding between them. For now, while Europe is trying to catch up in the innovation race when it comes to the newest chatbots, the United States still looks more, well, generative.
Ryan Murphy is a program assistant at the Atlantic Council’s GeoEconomics Center. He works within the Center’s Economic Statecraft Initiative, supporting events and research on economic security, sanctions, and illicit finance.
This post is adapted from the GeoEconomics Center’s weekly Guide to the Global Economy newsletter. If you are interested in getting the newsletter, email SBusch@atlanticcouncil.org
At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.