Cost of Compute 2026: H200, B200, MI325X, and the TPU v6 Trendline
Per-FLOP and per-token cost trends across NVIDIA H200/B200, AMD MI325X, and Google TPU v6 in 2026 — and what the curve says about 2027.
What's Cheap and What's Not
Compute costs for AI workloads in 2026 are dropping fast for inference and roughly flat per-FLOP for training. The mix of available hardware has broadened: NVIDIA still dominates but AMD and Google have gained share. This piece walks through the 2026 numbers and where the curves are heading.
The Hardware Lineup
flowchart TB
NV[NVIDIA] --> H100[H100<br/>2022-2024 mainstream]
NV --> H200[H200<br/>2024-2026 mainstream]
NV --> B200[Blackwell B200<br/>2025-2026 frontier]
NV --> GB[GB200 NVL72<br/>rack-scale]
AMD[AMD] --> MI300[MI300X<br/>2024]
AMD --> MI325[MI325X<br/>2025]
AMD --> MI355[MI355X<br/>2026]
Goo[Google] --> TPU5[TPU v5p<br/>2024]
Goo --> TPU6[TPU v6 'Trillium'<br/>2025-2026]
Per-FLOP Trends
For BF16/FP8 throughput per dollar, the rough 2026 picture (numbers are approximate and vary by deal):
- H100: baseline (1.0x)
- H200: ~1.2x
- B200: ~2.5-3x H100 per dollar at FP8
- MI355X: ~2-2.5x H100 per dollar at FP8, FP4 native
- TPU v6: ~2x TPU v5p, comparable to mid-tier GPU economics
For FP4 training and inference (native on Blackwell and MI355X, partial on TPU v6), per-FLOP cost drops by roughly another 2-3x.
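A useful sanity check on ratios like these is to divide sustained throughput by effective hourly price. The sketch below does exactly that; the sustained-PFLOPS figures and $/GPU-hour prices in it are placeholder assumptions chosen to illustrate the method, not measured benchmarks or vendor quotes.

```python
# Illustrative FLOPs-per-dollar comparison. Sustained FP8 throughput (PFLOPS)
# and $/GPU-hour below are placeholder assumptions, not benchmarks or quotes.
ACCELERATORS = {
    "H100":   (0.7, 2.50),   # (assumed sustained FP8 PFLOPS, assumed $/GPU-hour)
    "B200":   (2.8, 4.00),
    "MI355X": (2.2, 3.50),
}

def pflops_per_dollar(sustained_pflops: float, price_per_hour: float) -> float:
    """Sustained PFLOP-hours delivered per dollar of rental spend."""
    return sustained_pflops / price_per_hour

baseline = pflops_per_dollar(*ACCELERATORS["H100"])
for name, (pflops, price) in ACCELERATORS.items():
    ratio = pflops_per_dollar(pflops, price) / baseline
    print(f"{name:7s} {ratio:.1f}x H100 per dollar (under these assumptions)")
```

Swap in your own negotiated prices and measured utilization; the ranking can flip depending on the deal.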
Per-Token Inference Cost
Per-million-token inference cost for a 70B-class model in 2026:
- $0.10-$0.40 per million tokens from hosted providers
- Self-hosted on Blackwell: comparable when amortized
- Open-source models on cheap GPU rentals (Lambda, RunPod): substantially cheaper for batch workloads
The 2026 inference cost curve sits roughly 5-10x below 2024 levels for comparable quality. Training costs have fallen less, perhaps 2x for like-for-like compute.
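For the self-hosted row, the arithmetic is just GPU-hours divided by tokens served. A minimal sketch, assuming a hypothetical 4-GPU node at $5/GPU-hour sustaining 20,000 tokens/s across batched requests; swap in your own measured throughput and prices.

```python
# Back-of-envelope amortized serving cost. All inputs are assumptions you
# would replace with your own measured throughput and negotiated prices.
def cost_per_million_tokens(gpu_hour_price: float,
                            gpus: int,
                            tokens_per_second: float) -> float:
    """Dollars per 1M tokens for a self-hosted deployment."""
    tokens_per_hour = tokens_per_second * 3600
    hourly_cost = gpu_hour_price * gpus
    return hourly_cost / tokens_per_hour * 1_000_000

# Example: a 70B-class model on a hypothetical 4-GPU node at $5/GPU-hour,
# sustaining 20,000 tokens/s across batched requests.
print(f"${cost_per_million_tokens(5.0, 4, 20_000):.2f} per 1M tokens")
```

At those assumed numbers the result lands around $0.28 per million tokens, inside the hosted range above; the figure is dominated by achievable batch throughput.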
Memory Is the Constraint
flowchart LR
Param[Parameters] --> Mem[Memory required]
Mem -->|HBM3e is fast and expensive| Cost[Cost dominated by memory]
The dominant cost in 2026 inference is memory bandwidth, not raw compute. HBM3e capacity per GPU varies:
- H100: 80GB HBM3
- H200: 141GB HBM3e
- B200: 192GB HBM3e
- MI355X: 288GB HBM3e
- TPU v6: 32GB per chip but liquid-cooled clusters scale memory across many chips
Larger memory per GPU lets you fit bigger models and longer contexts on fewer cards, which lowers serving cost.
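To see why capacity matters, size a deployment: weights plus KV cache, divided by per-GPU HBM, gives the minimum card count. The sketch below assumes an FP8-quantized 70B dense model and roughly 160 KB of KV cache per token; both constants are assumptions to adjust for your own model and precision.

```python
import math

# Rough memory sizing: model weights plus KV cache, divided by per-GPU HBM.
# The constants in the example (1 byte/param for FP8 weights, ~160 KB of KV
# cache per token) are assumptions for a 70B dense model; adjust for yours.
def min_gpus(params_b: float, bytes_per_param: float,
             kv_bytes_per_token: float, context_tokens: int,
             concurrent_requests: int, hbm_gb: float) -> int:
    weights_gb = params_b * bytes_per_param                 # params in billions
    kv_gb = kv_bytes_per_token * context_tokens * concurrent_requests / 1e9
    return math.ceil((weights_gb + kv_gb) / hbm_gb)

# 70B model, FP8 weights and KV cache, 8k context, 128 concurrent requests:
for name, hbm in [("H200", 141), ("B200", 192), ("MI355X", 288)]:
    print(f"{name}: at least {min_gpus(70, 1.0, 160_000, 8_192, 128, hbm)} GPU(s)")
```

Under these assumptions the KV cache, not the weights, is what pushes the deployment past a single mid-capacity card.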
The Power Wall
By 2026, data center power is the binding constraint on new AI capacity in many regions. A Blackwell-class rack draws 120-140 kW, a density most existing data centers were not built for. The capex shift toward purpose-built AI campuses (Microsoft, Meta, Amazon, OpenAI, xAI) is partly a response.
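The rough arithmetic behind that constraint, with assumed rack power, PUE, and electricity price:

```python
# Back-of-envelope annual energy bill for one high-density rack.
# Rack power, PUE, and electricity price are assumptions; plug in your own.
rack_kw = 130          # assumed Blackwell-class rack draw
pue = 1.2              # assumed facility overhead
price_per_kwh = 0.08   # assumed industrial rate, $/kWh
hours_per_year = 24 * 365

annual_kwh = rack_kw * pue * hours_per_year
print(f"~{annual_kwh:,.0f} kWh/year, ~${annual_kwh * price_per_kwh:,.0f}/year per rack")
```

At those assumptions a single rack draws on the order of 1.4 GWh and roughly $110k of electricity per year, before cooling capex.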
This is a topic in itself; covered in the next article.
Choosing Hardware in 2026
flowchart TD
Q1{New deployment?} -->|Yes| Q2
Q1 -->|No, existing H100 fleet| Keep[Keep until H100 depreciated]
Q2{Frontier training?} -->|Yes| BTPU[B200 or TPU v6]
Q2 -->|No, inference| Q3{Cost optimized?}
Q3 -->|Yes| MI[MI355X or hosted]
Q3 -->|No, lowest latency| B[B200]
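The same decision tree as a function, if you want to embed it in planning scripts; the return values are shorthand labels, not specific SKUs or an endorsement.

```python
# Direct translation of the flowchart above into code. Labels are shorthand.
def pick_hardware(new_deployment: bool,
                  frontier_training: bool,
                  cost_optimized: bool) -> str:
    if not new_deployment:
        return "keep existing H100 fleet until depreciated"
    if frontier_training:
        return "B200 or TPU v6"
    # inference path
    return "MI355X or hosted" if cost_optimized else "B200 (lowest latency)"

print(pick_hardware(new_deployment=True, frontier_training=False, cost_optimized=True))
```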
What's Coming
- B300 / Rubin (NVIDIA): expected late 2026 / 2027
- MI400 series (AMD): expected late 2026
- TPU v7: expected 2027
- More aggressive low-precision formats (FP4 and, speculatively, sub-4-bit) on next-generation hardware
The doubling cadence of AI compute capacity per dollar that drove 2022-2025 is showing signs of slowing as we approach physical limits. 2026-2028 will be a slower curve than 2022-2025.
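The difference a slower cadence makes compounds quickly. A quick illustration with assumed doubling times for compute-per-dollar:

```python
# How much the doubling cadence matters: compute-per-dollar multiplier over a
# three-year window at different assumed doubling times (in months).
for doubling_months in (12, 18, 24):
    multiplier = 2 ** (36 / doubling_months)
    print(f"doubling every {doubling_months} mo -> {multiplier:.1f}x over 3 years")
```

Going from a 12-month to a 24-month doubling time cuts the three-year improvement from roughly 8x to under 3x.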
What This Means for Builders
For most teams, the action is straightforward:
- Use the best inference hardware your provider offers (B200, MI355X, TPU v6)
- Quantize to FP4 / FP8 wherever quality allows
- Cache aggressively (prompt caching, KV-cache, response caching)
- Monitor per-task cost; the trend is your friend
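Monitoring per-task cost can be as simple as multiplying token counts by your provider's per-million-token rates. A minimal sketch with placeholder prices:

```python
# Minimal per-task cost tracking: multiply token counts by per-million-token
# prices. The prices here are placeholders; use your provider's actual rates.
PRICES = {"input_per_m": 0.15, "output_per_m": 0.60}   # assumed $/1M tokens

def task_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * PRICES["input_per_m"] + \
           (output_tokens / 1e6) * PRICES["output_per_m"]

# Example: a task that reads 12k tokens of context and writes 800 tokens.
print(f"${task_cost(12_000, 800):.4f} per task")
```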
For teams that own infrastructure: 2024-era H100s are still useful, but depreciation schedules should reflect that B200 / MI355X deliver equivalent throughput at 2-3x lower cost. New deployments should default to the current generation.
Sources
- NVIDIA GTC announcements — https://www.nvidia.com/gtc
- AMD Instinct roadmap — https://www.amd.com/en/products/instinct
- Google TPU v6 — https://cloud.google.com/tpu
- "AI compute trends" Epoch AI — https://epochai.org
- "Artificial Analysis" benchmarks — https://artificialanalysis.ai