MXFP4 Quantization Explained: The Microscaling Format Behind 2026 Inference
MXFP4 is the quantization format powering 2026 inference on NVIDIA Blackwell, AMD MI355X, and Intel Gaudi 3. What it does, why it works, and what it costs.
What MXFP4 Is
MXFP4 (Microscaling FP4) is a 4-bit floating-point quantization format from the Open Compute Project's Microscaling specification. It is the format that NVIDIA Blackwell, AMD MI355X, and Intel Gaudi 3 all natively accelerate, and it is the format most 2026 inference servers ship as default for new deployments. If you are running a frontier model in 2026, you are very likely running MXFP4 weights with MXFP6 or MXFP8 activations.
This is what MXFP4 does, why it works, and where it breaks.
The Microscaling Idea
```mermaid
flowchart LR
    Block[32 elements<br/>group] --> Scale[1 shared<br/>scale factor: E8M0]
    Scale --> Quant[32 elements<br/>each 4 bits]
    Quant --> Total[Total: 32 × 4 + 8 = 136 bits]
    Total --> Avg[Avg: 4.25 bits/element]
```
A microscaling block is a group of 32 values that share a single E8M0 (8-bit exponent, 0 mantissa) scale factor. Each value is then stored in 4 bits using the E2M1 element format (1 sign bit, 2 exponent bits, 1 mantissa bit); the same block-scaling scheme backs the wider MXFP6 and MXFP8 element formats in the spec.
The per-element cost works out to exactly 4.25 bits (136 bits / 32 elements), much closer to true 4-bit storage than older group-quantization formats that carried heavier scale metadata.
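As a concrete sketch (NumPy, nearest-value rounding; an illustration of the layout rather than the spec's exact rounding and special-value rules), the block quantizer picks a power-of-two scale from the block's max magnitude and snaps each element onto the E2M1 grid:

```python
import numpy as np

# Non-negative magnitudes representable by E2M1 (1 sign, 2 exponent, 1 mantissa bits).
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block(block: np.ndarray):
    """Quantize a 32-element block to a shared power-of-two scale plus E2M1 values."""
    assert block.size == 32
    amax = np.max(np.abs(block))
    # Shared E8M0 scale: a power of two chosen so the block's max lands near
    # E2M1's top value (6.0 = 1.5 * 2^2); anything still above 6.0 saturates.
    exp = 0 if amax == 0 else int(np.floor(np.log2(amax))) - 2
    scale = 2.0 ** exp
    scaled = block / scale
    # Nearest-value rounding onto the E2M1 grid (sign handled separately).
    idx = np.argmin(np.abs(np.abs(scaled)[:, None] - E2M1_GRID[None, :]), axis=1)
    return scale, np.sign(scaled) * E2M1_GRID[idx]

def dequantize_block(scale, quantized):
    return scale * quantized

# Round-trip a random block and look at the error quantization introduces.
rng = np.random.default_rng(0)
block = rng.normal(scale=0.02, size=32)
scale, q = quantize_block(block)
print("max abs error:", np.max(np.abs(block - dequantize_block(scale, q))))
```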
Why It Beats INT4 in Practice
- Wider dynamic range: floating-point formats handle activations with extreme values better than integer formats; LLMs have those.
- Block-level scaling: aligns with the natural distribution of weight magnitudes across rows of a matrix.
- Hardware native: tensor cores on Blackwell, MI355X, and Gaudi 3 execute MXFP4 multiplies at full speed.
- Open standard: vendors implement to the same spec, so portability is real.
The quality gap between MXFP4 and INT4 is small but measurable on most LLMs: at the same bit count, MXFP4 typically cuts the quantization perplexity penalty by 30-50 percent relative to INT4.
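A toy illustration of the dynamic-range point (not a benchmark): both grids below have 16 codes, but E2M1 spaces its levels geometrically, so with absmax scaling a heavy-tailed sample keeps fine resolution near zero while outliers remain representable. The helper name and the Student-t stand-in for activation statistics are illustrative assumptions.

```python
import numpy as np

e2m1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
e2m1 = np.concatenate([-e2m1[:0:-1], e2m1])   # full signed E2M1 code set
int4 = np.arange(-7, 8, dtype=float)          # symmetric INT4 levels

def mse_after_quant(x, grid, scale):
    # Snap each value to the nearest grid point after dividing by the scale.
    q = grid[np.argmin(np.abs(x[:, None] / scale - grid[None, :]), axis=1)] * scale
    return np.mean((x - q) ** 2)

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=4096)           # heavy-tailed sample, outlier-prone

# Scale each grid so its largest code matches the data's max magnitude.
amax = np.abs(x).max()
print("E2M1 MSE:", mse_after_quant(x, e2m1, amax / 6.0))
print("INT4 MSE:", mse_after_quant(x, int4, amax / 7.0))
```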
Where MXFP4 Lives in the Stack
```mermaid
flowchart TB
    Train[Training in BF16 / FP8] --> Calib[Calibration on small dataset]
    Calib --> Convert[Quantization conversion]
    Convert --> Weights[MXFP4 weights]
    Weights --> Serve[Inference server]
    Serve --> Tensor[Tensor cores execute MXFP4]
```
Training is still typically in BF16 or FP8 (FP4 training is emerging — see DeepSeek V4). Inference is increasingly MXFP4.
What You Lose
The honest tradeoffs:
- Tail-token quality: the rarest tokens lose more accuracy than common tokens. Code, math, and multilingual benchmarks show small but consistent regressions.
- Long-context behavior: at extreme context lengths, MXFP4 KV-caches accumulate quantization error.
- Distillation sensitivity: models distilled into smaller architectures sometimes need MXFP6 weights to retain quality; MXFP4 can be too aggressive.
For a typical chat or agentic workload at 4-32K context, MXFP4 is essentially free quality-wise. For research-grade math or long-context retrieval, you may want MXFP6 weights.
Cost Math
For a 70B parameter model:
- BF16: 140 GB
- FP8: 70 GB
- MXFP4: ~37 GB
The 37 GB figure means a 70B model fits on a single 48 GB GPU where BF16 needed multiple cards, cutting weight memory, and with it per-replica inference cost, roughly 4x relative to a BF16 baseline.
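Those figures are just bits per parameter times parameter count; a quick check:

```python
# Weight memory = parameter count × bits per parameter / 8.
# MXFP4's 4.25 bits/param includes the shared 8-bit scale per 32-element block.
params = 70e9
for name, bits in [("BF16", 16), ("FP8", 8), ("MXFP4", 4.25)]:
    print(f"{name:6s} {params * bits / 8 / 1e9:6.1f} GB")   # 140.0, 70.0, 37.2
```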
How to Adopt It
Most users do not have to do anything: vLLM, TensorRT-LLM, SGLang, and TGI all ship MXFP4 support, and providers like Together, Fireworks, and DeepInfra serve MXFP4 by default in 2026.
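For a local sanity check, a minimal offline sketch with vLLM's Python API looks like the following. The model id is a placeholder for any checkpoint published with MXFP4 weights, and the assumption is that the checkpoint declares its quantization config so a recent vLLM release picks it up without extra flags.

```python
from vllm import LLM, SamplingParams

# Hypothetical model id: substitute any checkpoint shipped with MXFP4 weights.
llm = LLM(model="org/model-70b-mxfp4")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(
    ["Explain microscaling quantization in one paragraph."], params
)
print(outputs[0].outputs[0].text)
```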
If you are quantizing your own model:
- Use the Hugging Face compressed-tensors library or NVIDIA's TRT-LLM quantization toolkit
- Calibrate on a representative dataset (~512 sequences typically suffices)
- Verify quality on a held-out task suite, not just perplexity (a quick layer-level sanity check is sketched after this list)
- For activations, MXFP6 is the safe default; drop to MXFP4 only if benchmarks confirm quality
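A cheap pre-flight check before the full task suite: fake-quantize one layer's weights and compare its outputs against the original on calibration inputs. This is a PyTorch sketch; mxfp4_roundtrip is a hypothetical helper reusing the block logic from earlier, not a library API, and real toolchains fold this into the conversion step.

```python
import torch

E2M1_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_roundtrip(w: torch.Tensor, block: int = 32) -> torch.Tensor:
    """Quantize-dequantize a weight tensor in 32-element blocks (fake quantization)."""
    flat = w.reshape(-1, block)
    amax = flat.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)
    scale = torch.exp2(torch.floor(torch.log2(amax)) - 2)   # shared power-of-two scale
    scaled = flat / scale
    idx = (scaled.abs().unsqueeze(-1) - E2M1_GRID).abs().argmin(dim=-1)
    return (scaled.sign() * E2M1_GRID[idx] * scale).reshape(w.shape)

# Stand-in for one Linear layer of the model plus a small calibration batch.
layer = torch.nn.Linear(1024, 1024, bias=False)
x = torch.randn(512, 1024)

with torch.no_grad():
    ref = layer(x)
    layer.weight.copy_(mxfp4_roundtrip(layer.weight))
    quant = layer(x)

rel_err = (ref - quant).norm() / ref.norm()
print(f"relative output error: {rel_err.item():.4f}")
```

Layers with unusually high relative error are the first candidates for keeping in MXFP6.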
The 2026 Adoption Curve
By April 2026, public model APIs from OpenAI, Anthropic, Google, and most open-source-as-a-service providers run MXFP4 as the default inference format. Self-hosted deployments are split: large enterprises run BF16 or FP8 on H200/H100 fleets they bought before Blackwell; new deployments are largely MXFP4 on Blackwell.
Sources
- OCP Microscaling specification — https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final.pdf
- "MX-FP4: efficient inference" NVIDIA — https://developer.nvidia.com/blog
- "Microscaling formats for AI" research — https://arxiv.org/abs/2310.10537
- Hugging Face compressed-tensors — https://github.com/neuralmagic/compressed-tensors
- AMD MI355X MXFP4 documentation — https://www.amd.com/en/products/instinct