How Nvidia Makes its Money: Revenue Breakdown
A breakdown of Nvidia (NVDA) financials. See how Nvidia makes money from GPUs, data center chips, and AI infrastructure using their latest annual report.
How Does Nvidia Make its Money?
Nvidia designs graphics processing units (GPUs) and AI accelerator chips, then sells them at enormous margins to cloud providers, enterprises, gamers, and automakers. The company does not manufacture its own chips — it designs the architecture and TSMC fabricates the silicon — but Nvidia captures the bulk of the value chain through its proprietary chip designs and the CUDA software ecosystem that makes its hardware indispensable for AI workloads.
Nvidia’s revenue more than doubled in FY2025 (ending January 2025) to $130.5 billion, driven almost entirely by the explosive demand for AI training and inference infrastructure. To appreciate the scale of this growth: Nvidia generated more revenue in FY2025 than Intel and AMD combined. Four years earlier, in FY2021, the company’s total revenue was $16.7 billion.
The AI boom has transformed Nvidia from a gaming chip company into the most important supplier of AI infrastructure in the world. Every major AI model — ChatGPT, Gemini, Claude, Llama — was trained on Nvidia GPUs. This near-monopoly in AI compute is the foundation of the company’s extraordinary financial results.
Nvidia (NVDA) Business Model
Nvidia’s business model is often called “fabless semiconductor” because the company designs chips but outsources manufacturing to TSMC. This model has three key advantages. First, it avoids the $20-50 billion cost of building and operating leading-edge chip fabrication plants. Second, it allows Nvidia to focus all engineering resources on chip design and software, where its competitive advantage is strongest. Third, it provides manufacturing flexibility — Nvidia can ramp production up or down without being locked into fixed factory costs.
The real moat, however, is software. CUDA (Compute Unified Device Architecture) is Nvidia’s parallel computing platform, launched in 2006. Over nearly two decades, millions of developers have learned to write code in CUDA, thousands of scientific applications have been optimized for it, and the entire machine learning software stack (PyTorch, TensorFlow) runs most efficiently on CUDA-compatible GPUs. This creates an enormous switching cost: even if a competitor offers faster hardware, the effort to rewrite software and retrain development teams makes switching prohibitively expensive for most organizations.
Nvidia further reinforces this lock-in through full-stack AI offerings. DGX systems are complete AI supercomputers (servers packed with eight GPUs, networking, and pre-installed software). Nvidia AI Enterprise provides a software subscription layer. Nvidia Networking (InfiniBand and Spectrum-X) handles the high-bandwidth connections between thousands of GPUs in a data center cluster. By selling the full stack, Nvidia captures revenue at multiple points per deployment.
Nvidia Competitors
Nvidia’s dominance in AI chips is real but not unchallenged. AMD’s MI300X accelerator is the most credible direct GPU competitor, offering similar raw compute performance and better memory capacity at lower prices. AMD has won meaningful data center GPU market share, though it remains a distant second. Broadcom competes indirectly through its custom chip (ASIC) business — it designs specialized AI accelerators for hyperscalers like Google (which uses Broadcom-designed TPUs).
The longer-term competitive threat comes from vertical integration. Google trains its models on in-house TPUs. Amazon has its Trainium and Inferentia chips for AWS. Microsoft and Meta are both developing custom AI silicon. If these hyperscale customers — which collectively represent the majority of Nvidia’s Data Center revenue — shift significant workloads to their own chips, Nvidia’s growth rate could decelerate meaningfully. TSMC is not a competitor but rather a critical partner, as it manufactures nearly all of Nvidia’s chips on its most advanced process nodes.
Revenue Breakdown
| Revenue Stream | FY2025 (Jan) | FY2024 (Jan) | YoY Growth |
|---|---|---|---|
| Data Center | $115.2B | $47.5B | +142.5% |
| Gaming | $11.4B | $10.4B | +9.6% |
| Professional Visualization | $2.1B | $1.6B | +31.3% |
| Automotive | $1.7B | $1.1B | +54.5% |
| Total Revenue | $130.5B | $60.9B | +114.2% |
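Every growth and share figure in the table can be reproduced from the segment numbers themselves. A minimal Python sketch (figures hard-coded from the table above; segments sum to $130.4B versus the reported $130.5B total because each figure is rounded to one decimal):

```python
# Segment revenue from Nvidia's FY2025 and FY2024 annual reports (billions USD),
# as listed in the table above.
segments = {
    "Data Center": (115.2, 47.5),
    "Gaming": (11.4, 10.4),
    "Professional Visualization": (2.1, 1.6),
    "Automotive": (1.7, 1.1),
}

TOTAL_FY2025 = 130.5  # reported total revenue

for name, (current, prior) in segments.items():
    yoy = (current / prior - 1) * 100       # year-over-year growth, %
    share = current / TOTAL_FY2025 * 100    # share of total FY2025 revenue, %
    print(f"{name}: +{yoy:.1f}% YoY, {share:.0f}% of revenue")
```

Running this reproduces the segment shares used as section headings below (Data Center ~88%, Gaming ~9%, Professional Visualization ~2%, Automotive ~1%).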
How Nvidia’s Most Recent Quarter Flows to Profit
The Sankey chart below uses Nvidia’s latest reported quarter (Q4 FY2026, released on February 25, 2026) and traces revenue by segment to gross profit, operating income, and net income.
Source: NVIDIA Announces Financial Results for Fourth Quarter and Fiscal 2026.
Data Center — 88% of Revenue
The Data Center segment is the reason Nvidia became one of the most valuable companies on earth. Revenue reached $115.2 billion in FY2025, up 142% year-over-year, as every major technology company raced to build AI infrastructure. This segment now represents nearly 9 out of every 10 dollars Nvidia earns — a dramatic shift from 2020 when Gaming was the majority of revenue.
The primary products are AI accelerator GPUs (H100, H200, B100, B200). These chips, which can sell for $25,000-40,000 each, are purchased in quantities of thousands by hyperscale cloud providers — Microsoft (for Azure), Alphabet (for Google Cloud), Amazon (for AWS), Meta (for AI research), and Oracle (for OCI). Microsoft alone is estimated to have purchased over $10 billion in Nvidia GPUs in a single year.
Beyond GPUs, this segment includes Nvidia Networking (InfiniBand and Spectrum-X switches and cables), which connect thousands of GPUs in a single data center cluster. Networking has become increasingly important because AI training involves massive data transfers between GPUs. DGX systems — pre-built AI servers containing eight GPUs priced at $200,000-500,000 each — provide turnkey solutions for enterprises that lack the expertise to build custom GPU clusters.
Software contributes a smaller but growing share of Data Center revenue. Nvidia AI Enterprise is a $4,500/GPU/year software subscription that provides optimized AI frameworks, pretrained models, and enterprise support. CUDA, the foundational GPU programming platform, remains free but creates the software dependency that drives hardware sales.
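At the stated $4,500 per GPU per year, subscription revenue scales linearly with deployment size. A back-of-envelope sketch (the cluster sizes below are hypothetical illustrations, not Nvidia disclosures):

```python
# Nvidia AI Enterprise list price, per the figure cited above.
PRICE_PER_GPU_PER_YEAR = 4_500  # USD

# Hypothetical cluster sizes, for illustration only.
for gpus in (1_000, 16_000, 100_000):
    annual_revenue = gpus * PRICE_PER_GPU_PER_YEAR
    print(f"{gpus:>7,} GPUs -> ${annual_revenue / 1e6:,.1f}M/year in software revenue")
```

Even a hypothetical 100,000-GPU deployment would generate $450M per year at list price, which is why software remains a small share of the segment next to hardware sales.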
Gaming — 9% of Revenue
Gaming generated $11.4 billion in FY2025, growing 10% year-over-year. This was Nvidia’s original business and remains important even as Data Center dominates. GeForce RTX 40-series GPUs (RTX 4070, 4080, 4090) serve PC gamers, while Nvidia also provides custom chips for Nintendo’s consoles.
Gaming growth has moderated because supply allocation has shifted toward Data Center GPUs, which carry higher margins and face higher demand. The upcoming GeForce RTX 50-series (Blackwell-based) consumer GPUs are expected to reinvigorate this segment, though at higher price points than previous generations.
Professional Visualization — 2% of Revenue
Revenue of $2.1 billion from RTX GPUs sold for professional workstations used in architecture, engineering, movie production, and CAD/CAM applications. This niche segment is growing 31% year-over-year as ray tracing and AI-assisted rendering become standard in professional workflows, but it is too small to materially impact Nvidia’s overall results.
Automotive — 1% of Revenue
Automotive contributed $1.7 billion, growing 55% year-over-year. Nvidia DRIVE Orin and the next-generation DRIVE Thor platforms provide the computing brains for autonomous driving and advanced driver-assistance systems (ADAS). Nearly every major automaker and autonomous vehicle developer (Mercedes-Benz, BMW, BYD, Waymo) has adopted or is testing Nvidia’s automotive platform. While still small, Nvidia has an automotive design pipeline worth several billion in future annual revenue as new vehicle models ship.
Income Statement Overview
| Metric | FY2025 | FY2024 |
|---|---|---|
| Total Revenue | $130.5B | $60.9B |
| Cost of Revenue | $29.5B | $16.6B |
| Gross Profit | $101.0B | $44.3B |
| Operating Expenses | $17.4B | $12.2B |
| Operating Income | $83.6B | $32.1B |
| Net Income | $72.9B | $29.8B |
Nvidia’s income statement reveals extraordinarily efficient economics. The company spent $29.5 billion on cost of revenue (primarily TSMC manufacturing fees, packaging, and testing) to generate $130.5 billion in sales — meaning each dollar of production cost generated $4.42 in revenue. Operating expenses ($17.4B) are dominated by R&D ($12.9B), which funds the architectural innovations that sustain Nvidia’s technology lead.
The $72.9 billion in net income means Nvidia earned more profit in FY2025 than the total revenue of 470 out of 500 S&P 500 companies. This profit scale, achieved at a 55.9% net margin, is unprecedented in the semiconductor industry.
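The margin figures quoted throughout this article all derive from the income statement above, and are easy to verify:

```python
# FY2025 income statement figures (billions USD) from the table above.
revenue = 130.5
cost_of_revenue = 29.5
operating_expenses = 17.4
net_income = 72.9

gross_profit = revenue - cost_of_revenue            # 101.0
operating_income = gross_profit - operating_expenses  # 83.6

print(f"Gross margin:     {gross_profit / revenue:.1%}")      # 77.4%
print(f"Operating margin: {operating_income / revenue:.1%}")  # 64.1%
print(f"Net margin:       {net_income / revenue:.1%}")        # 55.9%
print(f"Revenue per $1 of cost of revenue: ${revenue / cost_of_revenue:.2f}")  # $4.42
```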
Financial data sourced from Nvidia SEC Filings.
Key Financial Metrics
- Gross Margin: 77.4% — Nearly unheard of for a semiconductor company. For comparison, AMD operates at roughly 50% gross margins, and Intel at about 40%. Nvidia’s premium reflects a near-monopoly in high-end AI accelerators where demand far exceeds supply, allowing the company to command pricing with minimal discounting.
- Operating Margin: 64.1% — An operating margin above 60% at $130 billion in revenue is one of the highest in the history of public companies. For context, even high-margin software companies like Microsoft (44.6%) and Alphabet (32%) operate well below this level.
- Revenue Growth: 114.2% — Doubling revenue at a $130 billion base is historically extraordinary. The closest comparison might be the early iPhone years at Apple, but even that was at a fraction of Nvidia’s current scale.
- R&D Spending: $12.9B — Nvidia reinvests about 10% of revenue into R&D, a relatively modest ratio that still represents an enormous absolute dollar amount. This funds next-generation architectures (Blackwell, Rubin) and software platform extensions.
Is Nvidia Profitable?
Nvidia is not just profitable — it earned $72.9 billion in net income in FY2025, making it one of the most profitable companies in the world in absolute terms. Only a handful of companies, among them Apple, Microsoft, and Alphabet, earn more. The 55.9% net margin means more than half of every dollar of revenue becomes profit, a ratio more typical of luxury goods or monopoly utilities than semiconductor companies.
The profitability is driven by two factors working in tandem: extreme demand for AI compute infrastructure, and Nvidia’s near-monopoly position as the supplier. When a customer needs GPU clusters for AI training and CUDA is the required software platform, there is essentially no substitute — giving Nvidia the pricing power to maintain 77% gross margins. As long as AI infrastructure spending continues to accelerate and no competitive alternative achieves software parity, these margins are likely sustainable.
Where Does Nvidia Spend its Money?
Nvidia’s cost structure is relatively lean for a company this large:
- Cost of Revenue ($29.5B): Manufacturing costs paid to TSMC, packaging, testing, and warranty. Nvidia doesn’t own fabs, which keeps capital expenditure low.
- Research & Development ($12.9B): The largest operating expense. Nvidia employs ~32,000 people, many of them chip architects and software engineers building the next generation of GPU architectures and AI frameworks.
- Sales, General & Administrative ($4.5B): Marketing, sales teams, corporate overhead.
What to Watch
- Blackwell architecture ramp — Nvidia’s next-generation Blackwell GPUs (B100, B200, GB200) began shipping in late FY2025 and will dominate FY2026 revenue. Blackwell offers roughly 2.5x the AI training performance and 5x the inference performance of the prior Hopper generation. The production ramp — particularly in new form factors like the GB200 NVL72 (a rack-scale system with 72 GPUs and liquid cooling) — will determine whether Nvidia’s growth trajectory continues.
- Customer concentration risk — An estimated 40-50% of Nvidia’s Data Center revenue comes from just four customers: Microsoft, Meta, Google, and Amazon. Any single customer reducing spending or shifting to custom silicon could have an outsized impact. Microsoft has publicly stated plans to spend $80 billion on AI infrastructure in FY2025, but capex cycles are inherently variable.
- Custom silicon encroachment — Google’s TPUs, Amazon’s Trainium 2, and Meta’s MTIA chips represent serious vertical integration efforts by Nvidia’s largest customers. While these custom chips are unlikely to completely replace Nvidia GPUs (they are typically designed for inference rather than training), they could cap Nvidia’s market share in the fastest-growing use cases.
- China export controls — U.S. government restrictions have effectively blocked Nvidia from selling advanced GPUs to Chinese customers. Before the controls, China represented 20-25% of Nvidia’s Data Center revenue. Nvidia has created compliance-specific chips (the H20), but they generate significantly less revenue per unit. Huawei’s Ascend 910B chip is gaining traction domestically as a substitute.
- CUDA moat durability — Nvidia’s software ecosystem moat is arguably stronger than its hardware advantage because it is harder to replicate. However, open-source alternatives (AMD’s ROCm, Triton by OpenAI, and PyTorch’s expanding multi-backend support) are slowly reducing CUDA lock-in. If AI developers become genuinely hardware-agnostic, Nvidia would need to compete more on price.
Nvidia (NVDA) Financial Summary
Nvidia (NVDA) generated $130.5 billion in total revenue in fiscal year 2025 (ending January 2025), more than doubling year-over-year with 114.2% growth. Net income reached $72.9 billion, and gross margin came in at 77.4%, the highest among large-cap semiconductor companies by a wide margin. The Data Center segment accounted for 88% of revenue, driven by insatiable demand for AI training infrastructure from hyperscale cloud providers. Nvidia’s combination of GPU hardware dominance, CUDA software lock-in, and networking infrastructure control gives it a comprehensive AI platform position that no competitor currently matches.