Quantization-Aware Training in PyTorch: FP4, INT8, and BF16 Mixed
QAT is how you get small models without big quality regressions. Here are the 2026 PyTorch patterns for FP4, INT8, and BF16 mixed-precision training.
What QAT Does
Post-training quantization (PTQ) takes a trained full-precision model and quantizes it after the fact. Quality often drops. Quantization-aware training (QAT) bakes quantization into training: the model learns to be robust to it, so quality regressions are typically smaller.
By 2026 QAT in PyTorch is well-supported for INT8 and increasingly for FP4 and FP8.
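For a sense of the contrast, PTQ is usually a single post-hoc call. A minimal sketch using torchao's weight-only INT8 path (assuming the `quantize_` / `int8_weight_only` API; names shift between torchao releases):

```python
import torch.nn as nn
from torchao.quantization import quantize_, int8_weight_only  # assumed torchao API

# Stand-in for a model that has already finished full-precision training.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))

# PTQ: quantize the trained weights after the fact; no further training happens.
quantize_(model, int8_weight_only())
```

QAT instead keeps training going with simulated quantization in the loop, which is what the rest of this post covers.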
When QAT Pays Off
flowchart TD
Q1{PTQ accuracy regressing?} -->|Yes| QAT[Use QAT]
Q1 -->|No| Skip[PTQ is enough]
Q2{Aggressive quantization needed?} -->|FP4 / sub-INT8| QAT2[QAT recommended]
Q3{Model size critical?} -->|Yes| QAT3[QAT often necessary for FP4]
For modest quantization (BF16, or FP8 inference of a BF16-trained model), PTQ is usually fine. For aggressive targets (FP4, INT4), QAT typically restores most of the lost quality.
How QAT Works
During training, fake quantization layers simulate the rounding errors of low-precision inference. Gradients flow through them. The model learns to produce values that are robust to rounding.
flowchart LR
Forward[BF16 forward] --> Sim[Simulate FP4 rounding]
Sim --> Loss[Loss computed with rounded values]
Loss --> Back[Backward pass]
Back --> Update[Update master weights]
Master weights are kept in higher precision; only the simulated rounding affects loss.
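As a minimal sketch of the idea, here is a hand-rolled symmetric INT8 fake-quant with a straight-through estimator; production QAT uses observer-backed fake-quant modules from torchao or torch.ao.quantization instead:

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    """Simulate symmetric INT8 rounding on the forward pass; let gradients
    pass through unchanged on the backward pass (straight-through estimator)."""

    @staticmethod
    def forward(ctx, x, scale):
        q = torch.clamp(torch.round(x / scale), -127, 127)  # quantize to the INT8 grid
        return q * scale                                     # dequantize back to float

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # pretend the rounding never happened


def fake_quant_int8(x: torch.Tensor) -> torch.Tensor:
    # Per-tensor symmetric scale from the current absmax; real QAT layers
    # track ranges with observers instead of recomputing them each call.
    scale = x.detach().abs().max().clamp(min=1e-8) / 127
    return FakeQuantSTE.apply(x, scale)


# The master weight stays in full precision; only the values the loss sees are rounded.
w = torch.randn(4, 4, requires_grad=True)
loss = fake_quant_int8(w).sum()
loss.backward()
print(w.grad)  # all ones: the STE ignored the rounding in the backward pass
```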
PyTorch Tooling
- torch.ao.quantization: PyTorch's native quantization
- torchao: newer, more comprehensive (FP4, FP8, INT4)
- bitsandbytes: practical INT8 / INT4 fine-tuning
- Hugging Face PEFT + bnb: end-to-end QAT workflows
- NVIDIA TensorRT Model Optimizer (ModelOpt): vendor-aligned quantization tooling
For most 2026 production training, torchao + Hugging Face conventions is the workflow.
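As a reference point, the native eager-mode INT8 QAT flow in torch.ao.quantization follows a prepare / train / convert pattern (a minimal sketch on a toy model; torchao's QAT API differs in names but not in shape):

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert
)

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks where tensors enter the quantized region
        self.fc = nn.Linear(16, 4)
        self.dequant = DeQuantStub()  # marks where tensors leave it

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = TinyNet().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
prepare_qat(model, inplace=True)       # inserts fake-quant modules and observers

opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(10):                    # normal training loop, now with simulated rounding
    x = torch.randn(8, 16)
    loss = model(x).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

int8_model = convert(model.eval())     # swap in real INT8 kernels for deployment
```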
FP8 Mixed-Precision Training
Keeps master weights in BF16 while the forward/backward matmuls run in FP8. Stable on H200/B200-class hardware, and can speed up training by up to roughly 2x versus pure BF16, depending on model size and kernel coverage.
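A sketch of what enabling it looks like with torchao's float8 training API (assuming `convert_to_float8_training`; module paths and scaling options vary by version, and the kernels need Hopper/Blackwell-class GPUs):

```python
import torch
import torch.nn as nn
from torchao.float8 import convert_to_float8_training  # assumed torchao float8 API

model = nn.Sequential(
    nn.Linear(4096, 4096, bias=False),
    nn.GELU(),
    nn.Linear(4096, 4096, bias=False),
).to("cuda", dtype=torch.bfloat16)

# Swap nn.Linear modules for float8 variants: matmuls run in FP8 while the
# master weights and optimizer state stay in BF16/FP32.
convert_to_float8_training(model)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(32, 4096, device="cuda", dtype=torch.bfloat16)
loss = model(x).float().pow(2).mean()
loss.backward()
opt.step()
```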
FP4 Training
Newer (demonstrated by DeepSeek V4). Stable in mixed precision with careful loss scaling, microscaling (block-wise scales), and selective high-precision layers (norms, embeddings).
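Framework support here is still moving, so the sketch below hand-rolls the core trick: block-wise (microscaled) scales plus rounding onto the E2M1 FP4 value grid. The 32-element block size is an assumption, not any particular library's format:

```python
import torch

# E2M1 FP4 value grid (sign handled separately): the 8 representable magnitudes.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fake_quant_fp4_microscaled(x: torch.Tensor, block: int = 32) -> torch.Tensor:
    """Simulate FP4 rounding with one shared scale per `block` elements,
    as in microscaling formats. Returns dequantized values."""
    orig_shape = x.shape
    flat = x.reshape(-1, block)                     # assumes numel is divisible by block
    scale = flat.abs().amax(dim=1, keepdim=True) / FP4_GRID.max()
    scale = scale.clamp(min=1e-8)
    scaled = flat / scale                           # map each block into FP4 range
    # Round each magnitude to the nearest FP4 grid point, keep the sign.
    idx = torch.argmin((scaled.abs().unsqueeze(-1) - FP4_GRID).abs(), dim=-1)
    deq = FP4_GRID[idx] * scaled.sign() * scale
    return deq.reshape(orig_shape)

w = torch.randn(4, 64)
w_q = fake_quant_fp4_microscaled(w)
print((w - w_q).abs().mean())   # average rounding error introduced by simulated FP4
```

In a QAT forward pass this rounding would be wrapped in a straight-through estimator, like the INT8 sketch earlier, so gradients still flow to the master weights.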
Calibration
QAT needs calibration data: representative inputs that the simulated quantization layers see. Common patterns:
- Calibrate on the actual training distribution
- Calibrate per-channel (per-row of weight matrices)
- Use enough samples (typically 256-1024)
Bad calibration data produces poorly quantized models.
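A hand-rolled sketch of those patterns: per-channel scales taken straight from the weight rows, and activation ranges collected with forward hooks over a few hundred representative batches (helper names here are illustrative, not a library API):

```python
import torch
import torch.nn as nn

def per_channel_weight_scales(linear: nn.Linear, qmax: int = 127) -> torch.Tensor:
    # One scale per output row of the weight matrix (per-channel quantization).
    return linear.weight.detach().abs().amax(dim=1) / qmax

@torch.no_grad()
def calibrate_activations(model: nn.Module, loader, num_batches: int = 512):
    """Run representative batches through the model and record per-layer
    activation ranges via forward hooks. `loader` yields input tensors."""
    stats, hooks = {}, []

    def make_hook(name):
        def hook(_mod, _inp, out):
            amax = out.detach().abs().amax().item()
            stats[name] = max(stats.get(name, 0.0), amax)
        return hook

    for name, mod in model.named_modules():
        if isinstance(mod, nn.Linear):
            hooks.append(mod.register_forward_hook(make_hook(name)))

    for i, batch in enumerate(loader):
        if i >= num_batches:
            break
        model(batch)

    for h in hooks:
        h.remove()
    return stats  # layer name -> activation absmax, used to pick activation scales
```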
Validation
After QAT, validate:
- Quality on held-out test set vs unquantized baseline
- Quality on edge cases (long-tail tokens, rare inputs)
- Inference speed on target hardware
- Memory consumption
A quality regression of >2 percent typically requires re-tuning.
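A small harness sketch for that checklist; `eval_fn` is a placeholder for your own quality metric, and the 2 percent threshold mirrors the rule of thumb above:

```python
import time
import torch

def compare_quantized(baseline, quantized, eval_fn, test_batches, tol=0.02):
    """Compare a quantized model against its unquantized baseline.
    `eval_fn(model, batches)` returns a scalar quality metric (higher is better)."""
    base_q = eval_fn(baseline, test_batches)
    quant_q = eval_fn(quantized, test_batches)
    regression = (base_q - quant_q) / base_q

    # Rough latency check on the quantized model; run on the target hardware.
    start = time.perf_counter()
    with torch.no_grad():
        for x in test_batches:
            quantized(x)
    elapsed = time.perf_counter() - start

    print(f"quality: {base_q:.4f} -> {quant_q:.4f} ({regression:.1%} regression)")
    print(f"quantized inference over test set: {elapsed:.2f}s")
    return regression <= tol  # above the tolerance: go back and re-tune
```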
Common Failure Modes
- Calibration data too small or unrepresentative
- Over-aggressive quantization (e.g., pushing FP4 onto small models)
- Numerical instability without proper master weights
- Quantization breaking specific layer types (BatchNorm and LayerNorm are common culprits and usually stay in higher precision)
A Production Workflow
flowchart LR
Train[BF16 train] --> Cal[Calibrate]
Cal --> QAT2[QAT fine-tune in target precision]
QAT2 --> Val[Validate]
Val --> Export[Export quantized weights]
End-to-end this is a 1-2 week effort for a typical mid-sized model. The payback: smaller deployment artifacts and faster inference.
What QAT Cannot Fix
- Model architecture inappropriate for the target precision
- Training data quality problems
- Fundamental capability gaps
QAT preserves quality; it does not create it.
Sources
- PyTorch quantization documentation — https://pytorch.org/docs/stable/quantization.html
- torchao — https://github.com/pytorch/ao
- bitsandbytes — https://github.com/bitsandbytes-foundation/bitsandbytes
- "MX-FP4 training" research — https://arxiv.org
- DeepSeek V4 technical report — https://github.com/deepseek-ai