The companies, chokepoints, and capital flows behind AI infrastructure
Ricardo Ledan · March 2026
This report provides a layer-by-layer breakdown of the AI infrastructure supply chain — who manufactures the chips, who designs the processors, who hosts the compute, and who funds whom. It maps market concentration at each layer, identifies the single points of failure, and traces the circular capital flows between the companies that control the stack.
Key Findings
~70%
of pure-play foundry revenue held by TSMC — dominates leading-edge logic manufacturing
~80%
of AI training GPUs designed by NVIDIA, locked in by 6M+ developers on CUDA
~63%
of cloud infrastructure services controlled by three US companies
Most
of global GPU cluster performance hosted in the United States
$140B
raised by OpenAI ($110B) and Anthropic ($30B) in February 2026 — from many of the same investors
Few
organizations worldwide can afford to train frontier-scale models — concentrated in the US and China
The dependency chain
Every AI application sits on an 8-layer supply chain. The stack below traces the full chain from lithography equipment to end-user applications, with market share data and chokepoint analysis at each level.
THE DEPENDENCY CHAIN — 8 LAYERS
LITHOGRAPHY & EQUIPMENT: Who makes the machines that make chips
ASML 80% · Others 18%
SILICON FABRICATION: Who manufactures the physical chips
TSMC 70% · Others 30%
HIGH-BANDWIDTH MEMORY: Who makes the memory AI chips need
SK Hynix 53% · Samsung 38% · Others 9%
AI CHIP DESIGN: Who designs the processors
NVIDIA 80% · Others 20%
CLOUD INFRASTRUCTURE: Who hosts the compute
AWS 30% · Azure 20% · GCP 13% · Others 37%
FOUNDATION MODELS: Who trains the AI
OpenAI 28% · DeepMind 20% · Anthropic 18% · Meta AI 15% · Others 19%
SOFTWARE & FRAMEWORKS: Who builds the developer ecosystem
CUDA 35% · PyTorch 25% · HuggingFace 15% · LangChain 10% · Others 15%
APPLICATIONS: Who builds on top
Enterprise 35% · Consumer 25% · Vertical 20% · Open/Local 20%
↑ EVERY LAYER DEPENDS ON THE LAYERS BELOW ↑
Foundry, cloud, and chip shares from TrendForce, Synergy Research, and company filings. Foundation model, framework, and application shares are editorial estimates of ecosystem presence, not measured revenue.
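One way to make the concentration claims above comparable across layers is the Herfindahl-Hirschman Index (HHI), the standard antitrust measure: the sum of squared market shares. The sketch below uses only the named-firm shares from the stack above (the unlabeled and "Others" segments are excluded), so each result is a lower bound on true concentration. The layer keys and the `hhi` helper are illustrative, not from the report.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares.

    Under US DOJ/FTC merger guidelines, a market with HHI above
    2,500 is considered highly concentrated.
    """
    return sum(s * s for s in shares)

# Named-firm shares from the dependency-chain data above.
# "Others" and unlabeled segments are omitted, so these are lower bounds.
layers = {
    "foundry":  [70],          # TSMC
    "ai_chips": [80],          # NVIDIA
    "hbm":      [53, 38, 9],   # SK Hynix, Samsung, remaining 9%
    "cloud":    [30, 20, 13],  # AWS, Azure, GCP
}

for name, shares in layers.items():
    print(f"{name}: HHI >= {hhi(shares)}")
```

Even as lower bounds, foundry (4,900) and AI chip design (6,400) sit far above the 2,500 "highly concentrated" threshold; cloud (1,469 from the top three alone) is the least concentrated layer in the stack.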
Vertical integration
What makes this supply chain unusual is not its depth — every technology has dependencies. What makes it unusual is that a small set of US companies recur across many layers.
NVIDIA designs the GPUs, controls the dominant software platform (CUDA with 6M+ developers), invested $30B in OpenAI's February 2026 round, and has a previously announced strategic partnership with Anthropic — then sells GPUs to both. Microsoft invests in both OpenAI and Anthropic, serves as OpenAI's primary cloud partner, and resells their models as Copilot. Amazon funds both labs, hosts Anthropic's training on AWS, and competes against them with Bedrock.
This is not a market with independent competitors. It is a set of interlinked positions held by the same capital — where the investors, the cloud providers, the chip designers, and the customers are often the same companies.
Single points of failure
Every layer of the AI stack has a chokepoint — a company, facility, or technology whose disruption cascades through the entire chain. The breakdown below traces the impact of a TSMC disruption at each layer.
TSMC holds ~70% of pure-play foundry revenue and dominates leading-edge logic manufacturing. Overseas buildout is underway — some sites in Arizona and Japan are already operating — but Taiwan remains the center of gravity and is far from matched in scale. A disruption there means no new AI GPUs for anyone.
NVIDIA's moat is not hardware — it is software. Even if AMD matched GPU performance tomorrow, migrating six million developers off CUDA takes two or more years. The ecosystem lock-in is deeper than any hardware advantage.
Lithography: 0% impact. ASML unaffected; equipment already delivered.
Fabrication: 70% impact. ~70% of foundry revenue share lost; leading-edge logic halted; no new NVIDIA, AMD, or Apple chips.
Memory (HBM): 0% impact. SK Hynix and Samsung unaffected; separate fabs.
AI chip supply: 90% impact. No new H100/H200/B200 GPUs, no new TPUs; AMD MI series halted.
Cloud compute: 60% impact. Existing GPUs keep running; no new capacity; prices spike 3–5×.
Model training: 80% impact. Frontier model training frozen; no new GPT-5 or Claude 5 class models.
Applications: 40% impact. Inference continues on existing hardware; costs rise; innovation stalls.
A TSMC disruption (earthquake, invasion, blockade) would freeze AI hardware production worldwide within weeks. There is no backup — Samsung and Intel cannot fabricate at TSMC's volume or node density. Existing GPUs in data centers keep running, but no new capacity enters the market.
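The scenario above can be encoded as structured data, which makes the asymmetry easy to see: the layers physically inside Taiwan take the direct hit, while everything downstream degrades rather than stops. This is a toy encoding of the report's own impact estimates, not a computed model; the `TSMC_SCENARIO` name and `worst_hit` helper are illustrative.

```python
# Impact estimates for a TSMC disruption, bottom of the stack first.
# Percentages are the report's editorial figures, not model output.
TSMC_SCENARIO = [
    # (layer, impact %, effect)
    ("Lithography",     0, "ASML unaffected; equipment already delivered"),
    ("Fabrication",    70, "leading-edge logic halted"),
    ("Memory (HBM)",    0, "SK Hynix/Samsung fabs separate"),
    ("AI chip supply", 90, "no new NVIDIA/AMD/TPU silicon"),
    ("Cloud compute",  60, "no new capacity; prices spike 3-5x"),
    ("Model training", 80, "frontier training frozen"),
    ("Applications",   40, "inference continues; costs rise"),
]

def worst_hit(scenario):
    """Return the (layer, impact, effect) row with the highest impact."""
    return max(scenario, key=lambda row: row[1])

for layer, impact, effect in TSMC_SCENARIO:
    print(f"{layer:15s} {impact:3d}%  {effect}")
print("worst hit:", worst_hit(TSMC_SCENARIO)[0])
```

Note that the worst-hit layer is not fabrication itself but AI chip supply, one level up: chip designers lose ~90% of output from a ~70% foundry loss because the lost share is precisely the leading-edge capacity their products require.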
Follow the money
The funding relationships between these companies reveal a circular structure. NVIDIA invests $30B in OpenAI. OpenAI spends that money buying NVIDIA GPUs. Microsoft invests $13B+ in OpenAI and $5B in Anthropic, charges both Azure fees, and resells their models as Copilot — three revenue events from a single technology. The money flows in a circle, and it tightens with every round.
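The circularity described above can be checked mechanically by treating each funding or revenue relationship as a directed edge and looking for pairs where money flows both ways. The edge list below contains only relationships named in this section; the graph representation and the `find_cycles` helper are an illustrative sketch, not the report's methodology.

```python
# Directed capital/revenue flows named in this section.
# An edge (a, b) means "money flows from a to b".
FLOWS = [
    ("NVIDIA", "OpenAI"),        # $30B investment
    ("OpenAI", "NVIDIA"),        # GPU purchases
    ("Microsoft", "OpenAI"),     # $13B+ investment
    ("OpenAI", "Microsoft"),     # Azure fees
    ("Microsoft", "Anthropic"),  # $5B investment
    ("Amazon", "Anthropic"),     # funding
    ("Anthropic", "Amazon"),     # AWS training spend
]

def find_cycles(edges):
    """Return the two-party cycles: pairs where money flows both ways."""
    edge_set = set(edges)
    return sorted({tuple(sorted((a, b)))
                   for a, b in edges if (b, a) in edge_set})

print(find_cycles(FLOWS))
# Three of the four relationships named above close into a loop.
```

Every investor in this list is also a vendor to, or customer of, the company it funds; the only one-way edge is Microsoft's stake in Anthropic.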
Sovereign wealth funds — GIC Singapore, MGX Abu Dhabi — are beginning to invest directly, hedging the concentration risk by buying into the oligopoly itself. In February 2026 alone, OpenAI raised $110B and Anthropic raised $30B. While the named round participants differ, the broader capital web — NVIDIA, Microsoft, Amazon, Google — touches both companies through overlapping investment and cloud partnerships.