What drives the divide in transatlantic AI strategy?
As the United States and the European Union unveiled their respective AI strategies this summer, a paradox emerged: despite sharing broadly similar objectives—boosting domestic AI capabilities, maintaining technological leadership, and managing AI risks—the two allies find themselves increasingly at odds over how to achieve these goals. The divergence reflects fundamental differences in regulatory philosophy, economic structure, and geopolitical positioning—all of which threaten to fragment what should be a unified Western approach to AI governance at a critical moment of competition with China.
The Donald Trump administration’s “Winning the Race: America’s AI Action Plan” outlines a vision of AI as a decisive frontier of global economic and security competition. The first pillar advocates for a deregulated, private-sector-led environment by reducing regulations, promoting open-source AI models, and fast-tracking AI deployment in industries such as healthcare, while tackling some questions about workforce transition. The second pillar addresses energy capacity by upgrading the electric grid, restoring domestic semiconductor manufacturing, building secure data centers, and establishing cybersecurity measures including incident response capabilities. The third pillar, on international diplomacy and security, seeks to counter Beijing’s growing influence in international governance bodies and export the full stack of US AI to allies and partners. The plan also identifies financial services as both an opportunity and a vulnerability. AI is viewed as a driver of financial innovation and efficiency, but also as a channel for risks including misinformation, cyber fraud, and systemic instability.
The European Commission’s AI Continent Action Plan, unveiled in April 2025, is part of a long series of reports and regulations through which the EU has sought to bolster its competitiveness in AI. It lays out a five-pronged plan: scale up computing capacity through new AI factories, innovation hubs, and pooled resources; improve access to and availability of high-quality data; accelerate the application of AI in public services and industrial activities; realize the Draghi report’s ambition to “exceed the US in education” when it comes to training and retaining skilled talent; and further fortify the European single market for AI.
Both approaches aim to buttress domestic adoption and application of AI—often through nudges from the state to explore applications in public services, and encouragement for many kinds of commercial activity. China has come to a similar conclusion, with its continual emphasis on using local government action plans to diffuse AI into public service provision and all kinds of industrial activity through its “AI Plus” initiative. There are few references to China in the EU’s latest AI document, while Washington’s approach, both implicitly and explicitly, frames AI as a largely two-way race between itself and Beijing.
The US and EU approaches are both likely to face issues with capital and financing for these action plans. While US private-sector investment in AI is many times that of the EU and China, the scale and focus of spending make a big difference. In the United States, the Trump administration has put AI contracts front and center in its broader deregulation approach—recent quarters have seen dozens of venture capital rounds above $100 million, and megadeals, including one of about $40 billion in the first quarter of 2025 alone, are becoming more common. Major players like Microsoft have committed $80 billion this year to AI-capable data centers, and overall US tech capital expenditure on AI and infrastructure is projected to reach hundreds of billions of dollars over the next few years.
Meanwhile, across the EU, fiscal rules constrain deficit and debt levels: member states are required to keep deficits below 3 percent of gross domestic product (GDP), though some exceed this threshold, and debt below 60 percent of GDP. The EU’s budget amounts to about 1 percent of GDP, and key instruments such as the Recovery and Resilience Facility are set to expire in 2026—leaving a gap in large-scale funding. The EU is currently negotiating its next seven-year budget (2028–2034), which is expected to place strong emphasis on large-scale investments, including a proposed Competitiveness Fund. In China, while growth targets remain and fiscal policy is being kept “flexible,” debt burdens, weak investment returns in sectors such as property and manufacturing, and slowing external demand limit what Beijing can unilaterally spend without risking macroeconomic instability.
These differences mean that even when headline figures like “$500 billion investments” are floated, much of that money tends to flow into private capital for infrastructure, cloud and chip production, startup rounds, and acquisitions. Such funds are not distributed evenly or necessarily aimed at building strategic domestic capabilities. Europe and China risk being unable to match the pace of US capital expenditure, not only because of absolute capital constraints but because of institutional, regulatory, and macro-fiscal drags.
Challenges to US-EU alignment on AI
These structural spending imbalances are compounded by inconsistent US policy decisions that leave European partners scrambling to adapt. For example, the Joe Biden administration’s AI diffusion rule of January 2025 left many countries in Europe restricted from importing advanced chips from the United States, and prompted a call for maintaining a “secure transatlantic supply chain on AI technology and super computers, for the benefit of our companies and citizens on both sides of the Atlantic.” The Trump administration repealed this rule and, in its place, the EU committed to purchasing $40 billion of US-made chips as part of its trade agreement with the United States.
This interaction lays bare the two tensions complicating US-EU alignment on AI strategies. The first concerns the strategies’ time horizons and the enabling actions undertaken by each jurisdiction. The EU’s approach has been solidified through years of iterative public discussion amid the market transformation from AI—starting with the Draghi report, the AI Act, and even Ursula von der Leyen’s European Commission presidency campaign. In contrast, the US AI strategy has seemed reactive and temperamental—shifting focus between administrations on important issues such as risk and safety, open-source models, and export controls. Recent partnerships with the Gulf states and the lifting of controls on sales of NVIDIA’s H20 chips to China have also demonstrated a deal-making approach to AI, which is often at odds with the stated US strategy.
The EU has embraced binding rules such as the AI Act, in line with its broader tradition of digital regulation. By contrast, US administrations have favored light-touch, voluntary frameworks and sectoral oversight rather than comprehensive law, reflecting a bipartisan reluctance to over-regulate the industry. This divergence in regulatory culture means that even when Washington and Brussels agree on broad goals, they often diverge on the instruments used to achieve them.
The second tension in the US and EU strategies concerns the EU’s own complicated motivations in the context of its present economic dependence on the United States and China. This reliance is visible across the entire AI input stack. At the software level, European firms overwhelmingly depend on US-developed foundational models, cloud platforms, and AI tools provided by companies such as Microsoft, Google, and OpenAI, reflecting the absence of a globally competitive European alternative. In 2025, the United States produced about forty large foundation models, China around fifteen, and the EU only about three. At the infrastructure and cloud level, the “big three” US cloud hyperscalers are estimated to power about 70 percent of European digital services. At the hardware level, the EU remains structurally reliant on advanced semiconductors designed in the United States and fabricated in Asia, with Europe’s domestic semiconductor sector making up less than 10 percent of global production. Supply chains for critical minerals and legacy chips further reinforce exposure to Chinese producers, which control a significant share of upstream inputs and mid-tier manufacturing. Chinese companies dominate the refining of critical minerals such as rare earths and graphite, essential for chipmaking and AI data center equipment. They are also leading suppliers of mid-range GPUs, networking hardware, and AI server components, which European firms may increasingly source to diversify away from US vendors. Chinese technology companies, including Baidu and Alibaba, are also emerging players in foundation model training and deployment, reinforcing Europe’s reliance on external providers. These dependencies complicate the EU’s sovereignty ambitions and its ability to balance relations with the United States.
Recognizing these vulnerabilities, the EU launched initiatives to expand domestic capacity, raising about €20 billion to build “AI gigafactories.” These factories would be capable of hosting large-scale compute infrastructure, with the aim of catching up with the United States and China. While these projects signal a commitment to reduce dependency, they remain long-term efforts. Even as Europe invests in its own infrastructure, there is still high exposure to non-EU supply chains for the critical inputs into AI. The European Central Bank noted that about half of euro area manufacturers sourcing critical inputs from China report being exposed to supply chain risk.
These two tensions—uncertainty in US policy actions and the gap between the EU’s ambitions of sovereignty and its reliance on US and China for critical inputs—will continue to play out over the next few years.
The financial services sector and AI action plans
For financial services in particular, AI adoption is accelerating—banks now flag AI as core to transformation. JPMorgan reports hundreds of production use cases across fraud, marketing, and risk in its shareholder communications, while Bank of America’s “Erica” virtual assistant has logged more than 2 billion client interactions—evidence that AI is reshaping front-, middle-, and back-office processes from customer service to underwriting to treasury operations. This brings opportunities including cost and error reduction, real-time risk sensing, and new AI-enabled products like cash flow intelligence for corporate treasurers.
But financial services also represent one of the highest-risk sectors for AI adoption, given the direct societal impact of errors or bias in lending, risk modeling, or compliance monitoring. The AI Index 2025 shows that measurable gains remain modest, with most firms reporting cost savings of less than 10 percent or revenue growth below 10 percent. AI adoption in financial services also lags in key areas. Many institutions remain in pilot phases, data quality and legacy infrastructure limit deployment, and regulatory uncertainty combined with talent shortages slows uptake in high-risk applications such as credit scoring and underwriting. Regulatory divergence sharpens these trade-offs: The United States leans on voluntary risk-management tooling (the National Institute of Standards and Technology’s AI Risk Management Framework) that gives firms latitude to innovate, whereas the EU’s binding AI Act and sectoral guidance from the European Securities and Markets Authority impose high-risk classifications and board-level accountability for AI in investment services—raising documentation, testing, and oversight burdens for cross-border finance.
Ultimately, the private sector and businesses in both jurisdictions need to adapt to these tensions and, in some cases, even begin to view them as productive in their journey of AI adoption and diffusion across various functions. What the AI action plans have done is provide a broad framework of AI strategy. But for financial services companies and the broader commercial sector, the devil is in the details, and working them out will require closing the transatlantic gap in the regulatory approach to AI. This seems more difficult than it did a year ago.
About the authors
Ananya Kumar is the deputy director, Future of Money, at the GeoEconomics Center.
Alisha Chhangani is an assistant director at the GeoEconomics Center.