The AI Supply Chain

The companies, chokepoints, and capital flows behind AI infrastructure

Ricardo Ledan · March 2026

This report provides a layer-by-layer breakdown of the AI infrastructure supply chain — who manufactures the chips, who designs the processors, who hosts the compute, and who funds whom. It maps market concentration at each layer, identifies the single points of failure, and traces the circular capital flows between the companies that control the stack.

Key Findings

  • ~70% — pure-play foundry revenue held by TSMC, which dominates leading-edge logic manufacturing
  • ~80% — AI training GPUs designed by NVIDIA, locked in by 6M+ developers on CUDA
  • ~63% — cloud infrastructure services controlled by three US companies
  • Majority — share of global GPU cluster performance hosted in the United States
  • $140B — raised by OpenAI ($110B) and Anthropic ($30B) in February 2026, from many of the same investors
  • A handful — the number of organizations worldwide that can afford to train frontier-scale models, concentrated in the US and China

The dependency chain

Every AI application sits on an eight-layer supply chain. The stack below traces the full chain from lithography equipment to end-user applications, with market-share data and chokepoint analysis at each level.

THE DEPENDENCY CHAIN — 8 LAYERS

  • Lithography & equipment (who makes the machines that make chips) — ASML ~80%; the remainder split among a few smaller equipment makers
  • Silicon fabrication (who manufactures the physical chips) — TSMC ~70%; several smaller foundries and others ~30%
  • High-bandwidth memory (who makes the memory AI chips need) — SK Hynix 53%, Samsung 38%, others 9%
  • AI chip design (who designs the processors) — NVIDIA ~80%; others ~20%
  • Cloud infrastructure (who hosts the compute) — AWS 30%, Azure 20%, GCP 13%, others ~37%
  • Foundation models (who trains the AI) — OpenAI 28%, DeepMind 20%, Anthropic 18%, Meta AI 15%, others 19%
  • Software & frameworks (who builds the developer ecosystem) — CUDA 35%, PyTorch 25%, Hugging Face 15%, LangChain 10%, others 15%
  • Applications (who builds on top) — enterprise 35%, consumer 25%, vertical 20%, open/local 20%

Every layer depends on the layers below.

Foundry, cloud, and chip shares are from TrendForce, Synergy Research, and company filings; foundation model, framework, and application shares are editorial estimates of ecosystem presence, not measured revenue.

Vertical integration

What makes this supply chain unusual is not its depth — every technology has dependencies. What makes it unusual is that a small set of US companies recur across many layers.

NVIDIA designs the GPUs, controls the dominant software platform (CUDA with 6M+ developers), invested $30B in OpenAI's February 2026 round, and has a previously announced strategic partnership with Anthropic — then sells GPUs to both. Microsoft invests in both OpenAI and Anthropic, serves as OpenAI's primary cloud partner, and resells their models as Copilot. Amazon funds both labs, hosts Anthropic's training on AWS, and competes against them with Bedrock.

Who appears where in the stack:

  • Chips — NVIDIA, Google
  • Cloud — Microsoft, Amazon, Google
  • OpenAI — NVIDIA, Microsoft, Amazon
  • Anthropic — NVIDIA, Microsoft, Amazon, Google
  • Llama — Meta
  • Software — NVIDIA, Google, Meta
  • Products — NVIDIA, Microsoft, Amazon, Google, Meta

The relationships span designing/building, providing cloud infrastructure, and investing/funding — with the same company often playing multiple roles at once.

This is not a market with independent competitors. It is a set of interlinked positions held by the same capital — where the investors, the cloud providers, the chip designers, and the customers are often the same companies.

Single points of failure

Every layer of the AI stack has a chokepoint — a company, facility, or technology whose disruption cascades through the entire chain. The scenario below traces the impact of a TSMC disruption at each layer.

TSMC holds ~70% of pure-play foundry revenue and dominates leading-edge logic manufacturing. Overseas buildout is underway — some sites in Arizona and Japan are already operating — but Taiwan remains the center of gravity, and the overseas fabs are far from matching its scale. A disruption there would mean no new AI GPUs for anyone.

NVIDIA's moat is not hardware — it is software. Even if AMD matched GPU performance tomorrow, migrating six million developers off CUDA takes two or more years. The ecosystem lock-in is deeper than any hardware advantage.

  • Lithography — ASML unaffected; equipment already delivered (estimated impact: 0%)
  • Fabrication — ~70% of foundry revenue share lost; leading-edge logic halted; no new NVIDIA, AMD, or Apple silicon (70%)
  • Memory (HBM) — SK Hynix and Samsung unaffected; separate fabs (0%)
  • AI chip supply — no new H100/H200/B200 GPUs, no new TPUs, AMD MI series halted (90%)
  • Cloud compute — existing GPUs keep running; no new capacity; prices spike 3–5× (60%)
  • Model training — frontier model training frozen; no new GPT-5 or Claude 5-class models (80%)
  • Applications — inference continues on existing hardware; costs rise; innovation stalls (40%)
A TSMC disruption (earthquake, invasion, blockade) would freeze AI hardware production worldwide within weeks. There is no backup — Samsung and Intel cannot fabricate at TSMC's volume or node density. Existing GPUs in data centers keep running, but no new capacity enters the market.
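The cascade above can be sketched as a toy model — this is an editorial illustration, not an analysis from the report. The layer ordering comes from the stack diagram; the per-step attenuation factor is an assumption standing in for inventory and already-deployed hardware, and a strictly linear stack ignores parallel branches such as HBM, which the scenario table treats as unaffected.

```python
# Illustrative sketch: the eight-layer chain as an ordered list, with a toy
# propagation rule for a disruption at one layer. Figures produced here are
# not the article's estimates.
LAYERS = ["lithography", "fabrication", "hbm_memory", "chip_design",
          "cloud", "models", "frameworks", "applications"]

def cascade(disrupted: str, severity: float, attenuation: float = 0.8):
    """Layers upstream of the disruption are untouched; each downstream
    layer inherits the shock, damped by `attenuation` per step (an assumed
    stand-in for buffers like inventory and deployed hardware)."""
    start = LAYERS.index(disrupted)
    return {
        layer: 0.0 if i < start else round(severity * attenuation ** (i - start), 2)
        for i, layer in enumerate(LAYERS)
    }

impacts = cascade("fabrication", 0.7)
# lithography stays at 0.0; every layer above fabrication takes a damped hit
```

Even this crude model reproduces the qualitative shape of the scenario table: upstream equipment makers are untouched, while everything downstream of the fabs degrades.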

Follow the money

The funding relationships between these companies reveal a circular structure. NVIDIA invests $30B in OpenAI. OpenAI spends that money buying NVIDIA GPUs. Microsoft invests $13B+ in OpenAI and $5B in Anthropic, charges both Azure fees, and resells their models as Copilot — three revenue events from a single technology. The money flows in a circle, and it tightens with every round.
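The circular structure can be made explicit by treating the relationships named in this report as edges in a directed graph and searching for loops. This is an illustrative sketch, not an analysis tool: the edge list is a simplification of the flows described here (amounts omitted, indirect relationships collapsed), and the DFS is deliberately naive.

```python
# Capital and revenue flows named in this report, as a directed graph.
# Edges are simplified: investment rounds, cloud fees, and GPU purchases
# are all reduced to "money flows from A to B".
FLOWS = {
    "NVIDIA":    ["OpenAI"],                        # $30B investment
    "Microsoft": ["OpenAI", "Anthropic"],           # equity + strategic commitments
    "Amazon":    ["OpenAI", "Anthropic"],           # round participation / equity
    "Google":    ["Anthropic"],                     # prior equity
    "OpenAI":    ["NVIDIA", "Microsoft"],           # GPU purchases, Azure fees
    "Anthropic": ["Amazon", "Google", "Microsoft"], # cloud spend
}

def find_cycles(graph):
    """Depth-first search for simple cycles; each loop is reported once,
    anchored at its alphabetically-first member."""
    cycles = []
    def dfs(node, path):
        for nxt in graph.get(node, []):
            if nxt == path[0] and path[0] == min(path):
                cycles.append(path + [nxt])
            elif nxt not in path:
                dfs(nxt, path + [nxt])
    for start in graph:
        dfs(start, [start])
    return cycles

for cycle in find_cycles(FLOWS):
    print(" -> ".join(cycle))
```

The search surfaces the two-hop loops the paragraph describes (NVIDIA → OpenAI → NVIDIA, Microsoft → OpenAI → Microsoft, Amazon → Anthropic → Amazon) as well as longer loops that route through several companies at once.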

Sovereign wealth funds — GIC Singapore, MGX Abu Dhabi — are beginning to invest directly, hedging the concentration risk by buying into the oligopoly itself. In February 2026 alone, OpenAI raised $110B and Anthropic raised $30B. While the named round participants differ, the broader capital web — NVIDIA, Microsoft, Amazon, Google — touches both companies through overlapping investment and cloud partnerships.

The flows fall into three types — investment, revenue/purchases, and supply-chain contracts — among these players:

  • NVIDIA — chip designer · investor
  • Microsoft — cloud · investor
  • Amazon — cloud · investor
  • Google — cloud · chips · investor
  • OpenAI — $110B raise · $730B pre-money valuation
  • Anthropic — $30B raise · $380B post-money valuation
  • Meta — open-weight models · 1B+ downloads
  • SoftBank — investor · Stargate
  • TSMC — 70% foundry share

Funding flows and sources:
  • 🇯🇵 SoftBank → OpenAI, $30B — part of the $110B round (Feb 2026); 11% stake. [src]
  • 🇺🇸 Amazon → OpenAI, $50B — $110B round (Feb 2026); $35B milestone-conditioned. [src]
  • 🇺🇸 NVIDIA → OpenAI, $30B — $110B round (Feb 2026); OpenAI is a major GPU customer. [src]
  • 🇺🇸 Microsoft → OpenAI, $13B+ — 49% profit share; Azure is the primary cloud partner. [src]
  • 🇺🇸 Amazon → Anthropic, $8B — AWS is Anthropic's primary cloud and training partner. [src]
  • 🇺🇸 Google → Anthropic, $3B+ — prior equity plus Oct 2025 TPU expansion (up to 1M TPUs, tens of billions in cloud spend). [src]
  • 🇺🇸 Microsoft → Anthropic, strategic — previously announced partnership partially rolled into Series G; $30B Azure compute commitment. [src]
  • 🇺🇸 NVIDIA → Anthropic, strategic — previously announced partnership partially rolled into Series G; GPU supplier via cloud. [src]
  • 🇸🇬 GIC Singapore → Anthropic, co-led $30B round — sovereign wealth fund co-led the Feb 2026 round. [src]
  • 🇺🇸 NVIDIA → Intel, $5B equity — Sep 2025; ~4% ownership of Intel Corp. [src]
  • 🇺🇸 US Government → Intel, $7.86B grants — CHIPS Act direct funding (Nov 2024) plus up to $11B in federal loans. [src]
  • 🇹🇼 TSMC → all NVIDIA GPUs, fab contract — TSMC manufactures every NVIDIA AI GPU. [src]
  • 🇳🇱 ASML → TSMC + Samsung, equipment — sole EUV supplier; no EUV, no advanced fabrication. [src]
  • 🇰🇷/🇺🇸 SK Hynix + Micron → NVIDIA GPUs, HBM supply — HBM3E for the H200; a three-supplier ecosystem forming for HBM4. [src]

What to Watch

Dynamics that could reshape the supply chain over the next 12–24 months.

  • TSMC geographic risk — overseas buildout underway with some sites operating, but years from matching Taiwan's scale
  • Intel 18A — whether Intel Foundry becomes a viable second source for advanced chips
  • CUDA alternatives — AMD ROCm, OpenAI Triton, and custom silicon efforts chipping at NVIDIA's software moat
  • US export controls — restrictions on China cascading through the supply chain, reshaping global AI access
  • Sovereign compute — nations beginning to treat AI compute as national infrastructure, not a market commodity
  • Open-weight models — Llama, DeepSeek, Qwen narrowing the gap with closed frontier models
  • Self-hosting economics — the cost of running inference locally continues to drop, expanding who can build

Citation

Ricardo Ledan. “The AI Supply Chain.” ledan.ai, 2026-03-27.