How Does Nvidia Make Its Money?

Nvidia designs and sells graphics processing units (GPUs), data center accelerators, and related software for AI, gaming, automotive, and professional visualization. The company doesn’t manufacture its own chips — Taiwan Semiconductor (TSMC) fabricates them — but Nvidia designs the architecture and sells the finished products along with software platforms like CUDA and AI Enterprise.

Nvidia formally reports two segments, Compute & Networking and Graphics, but it discloses revenue across four market platforms: Data Center, Gaming, Professional Visualization, and Automotive — with Data Center by far the largest.

Revenue Breakdown

| Revenue Stream | FY2025 (ended Jan 2025) | FY2024 (ended Jan 2024) | YoY Growth |
| --- | --- | --- | --- |
| Data Center | $115.2B | $47.5B | +142.5% |
| Gaming | $11.4B | $10.4B | +9.6% |
| Professional Visualization | $2.1B | $1.6B | +31.3% |
| Automotive | $1.7B | $1.1B | +54.5% |
| Total Revenue | $130.5B | $60.9B | +114.2% |

(Segment figures are rounded, so they may not sum exactly to the total.)
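As a sanity check, the growth and revenue-mix percentages can be recomputed from the table's figures. A minimal Python sketch (all values in billions of USD, taken from the table; recomputed percentages may differ from the stated ones by roughly ±0.1 point because the segment figures are rounded):

```python
# Recompute YoY growth and revenue mix from the (rounded) table figures.
# All values in billions of USD; (FY2025, FY2024) per segment.
segments = {
    "Data Center": (115.2, 47.5),
    "Gaming": (11.4, 10.4),
    "Professional Visualization": (2.1, 1.6),
    "Automotive": (1.7, 1.1),
}
total_fy25, total_fy24 = 130.5, 60.9  # reported totals

for name, (fy25, fy24) in segments.items():
    growth = (fy25 / fy24 - 1) * 100  # year-over-year growth, %
    share = fy25 / total_fy25 * 100   # share of total FY2025 revenue, %
    print(f"{name}: +{growth:.1f}% YoY, {share:.1f}% of revenue")

# Rounded segment figures sum to slightly less than the reported total.
print(f"Segment sum: ${sum(v[0] for v in segments.values()):.1f}B")
```

Running this reproduces Data Center at roughly +142.5% growth and about 88% of total revenue, matching the discussion below.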

Data Center — 88% of Revenue

The Data Center segment is Nvidia’s growth engine and now accounts for nearly 9 out of every 10 dollars the company earns. This includes:

  • AI accelerators (H100, H200, B100, B200 GPUs) sold to hyperscalers like Microsoft, Google, Amazon, Meta, and Oracle
  • Networking equipment (InfiniBand, Spectrum-X) connecting GPU clusters
  • DGX systems — turnkey AI supercomputers
  • Software and cloud services including CUDA, AI Enterprise, and Nvidia DGX Cloud

The $115.2 billion in Data Center revenue represents a staggering 142% increase year-over-year, driven by the global AI infrastructure buildout. Nvidia’s GPUs are the de facto standard for training and running large language models.

Gaming — 9% of Revenue

Nvidia’s original business. The Gaming segment sells GeForce GPUs for PC gaming and supplies the system-on-chip that powers Nintendo’s Switch consoles. Revenue grew a modest 10% as the segment takes a back seat to the AI supercycle.

Professional Visualization — 2% of Revenue

RTX GPUs for workstations used in architecture, engineering, media production, and design. Growing steadily as ray tracing and AI-assisted rendering gain adoption.

Automotive — 1% of Revenue

DRIVE Orin and next-generation DRIVE Thor platforms for autonomous vehicles and in-car computing. Still small but growing 55% year-over-year as automakers integrate more AI.

Income Statement Overview

| Metric | FY2025 | FY2024 |
| --- | --- | --- |
| Total Revenue | $130.5B | $60.9B |
| Cost of Revenue | $29.5B | $16.6B |
| Gross Profit | $101.0B | $44.3B |
| Operating Expenses | $17.4B | $12.2B |
| Operating Income | $83.6B | $32.1B |
| Net Income | $72.9B | $29.8B |

Key Financial Metrics

  • Gross Margin: 77.4% — Extremely high for a semiconductor company, reflecting Nvidia’s pricing power in AI chips where demand far outstrips supply.
  • Operating Margin: 64.1% — An operating margin above 60% is almost unheard of at this scale. It demonstrates how little competition Nvidia faces in high-end AI accelerators.
  • Revenue Growth: +114.2% — Doubling revenue at a $130 billion base is historic. For context, Nvidia’s FY2025 revenue exceeds Intel and AMD combined.
  • Net Income: $72.9B — Nvidia generated more net profit in a single year than most S&P 500 companies generate in revenue.
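The margin figures above follow directly from the income statement. A minimal Python sketch of the arithmetic (figures in billions of USD; the net-margin line is derived here and is not stated above):

```python
# Derive headline margins from the FY2025 income statement (USD billions).
revenue = 130.5
cost_of_revenue = 29.5
operating_expenses = 17.4
net_income = 72.9

gross_profit = revenue - cost_of_revenue              # $101.0B
operating_income = gross_profit - operating_expenses  # $83.6B

gross_margin = gross_profit / revenue * 100           # ~77.4%
operating_margin = operating_income / revenue * 100   # ~64.1%
net_margin = net_income / revenue * 100               # ~55.9%

print(f"Gross margin:     {gross_margin:.1f}%")
print(f"Operating margin: {operating_margin:.1f}%")
print(f"Net margin:       {net_margin:.1f}%")
```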

Where Does Nvidia Spend Its Money?

Nvidia’s cost structure is relatively lean for a company this large:

  • Cost of Revenue ($29.5B): Manufacturing costs paid to TSMC, packaging, testing, and warranty. Nvidia doesn’t own fabs, which keeps capital expenditure low.
  • Research & Development ($12.9B): The largest operating expense. Nvidia employs ~32,000 people, many of them chip architects and software engineers building the next generation of GPU architectures and AI frameworks.
  • Sales, General & Administrative ($4.5B): Marketing, sales teams, corporate overhead.
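A quick consistency check on this cost structure, as a Python sketch (figures in billions of USD from the bullets above; the percentages of revenue are derived here, not stated in the text):

```python
# Express each FY2025 cost line as a share of revenue (USD billions).
revenue = 130.5
costs = {
    "Cost of Revenue": 29.5,
    "Research & Development": 12.9,
    "Sales, General & Administrative": 4.5,
}

for name, amount in costs.items():
    print(f"{name}: ${amount}B ({amount / revenue * 100:.1f}% of revenue)")

# R&D + SG&A should match the $17.4B operating-expense line in the
# income statement above.
opex = costs["Research & Development"] + costs["Sales, General & Administrative"]
print(f"Operating expenses: ${opex:.1f}B")
```

R&D works out to roughly 10% of revenue — a low ratio only because the revenue base has grown so fast, not because Nvidia skimps on engineering.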

What to Watch

  1. Blackwell ramp — Nvidia’s next-generation Blackwell architecture (B100/B200) began shipping in late FY2025. These chips offer significantly better performance per watt, and the ramp into FY2026 will determine whether the growth trajectory continues.
  2. Customer concentration — A significant portion of Data Center revenue comes from 4-5 hyperscalers. Any pullback in cloud capital spending would directly impact Nvidia.
  3. China restrictions — U.S. export controls limit Nvidia’s ability to sell advanced chips to Chinese customers, cutting off a market that previously generated billions in revenue.
  4. Competition — AMD’s MI300X, Google’s TPUs, and Amazon’s Trainium chips are all being positioned as alternatives to Nvidia’s GPUs. Custom silicon (ASICs) from hyperscalers could erode Nvidia’s market share over time.
  5. Software moat — CUDA’s dominance as the AI programming platform is arguably Nvidia’s deepest competitive advantage. As long as developers and researchers default to CUDA, Nvidia’s hardware flywheel continues.