Quantizing Embeddings: int8, Binary, and Matryoshka
Embedding quantization cuts storage 4-32x at a modest recall cost. Here are the quantization techniques in production use in 2026, and where each one wins.
Why Quantize
A 1024-dim float32 embedding is 4 KB. Ten million of them is 40 GB. Quantization reduces this dramatically with modest recall impact:
- int8 quantization: 4x smaller (~1 KB per vector)
- Binary quantization: 32x smaller (~128 bytes per vector)
- Matryoshka: configurable, often 2-4x smaller
For large corpora, quantization is the difference between fitting in RAM and not.
The Three Approaches
```mermaid
flowchart TB
    Quant[Quantization] --> Q1[int8: scale to 8 bits per dim]
    Quant --> Q2[Binary: 1 bit per dim]
    Quant --> Q3[Matryoshka: truncate to fewer dims]
```
int8 Quantization
Each float32 dimension is mapped to an int8. A scale factor and zero point are stored per vector or per group.
- Recall impact: typically 1-3 percent drop
- Storage: 4x smaller
- Compute: SIMD-friendly; often faster
The 2026 default for cost-conscious deployments.
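The mapping above can be sketched in a few lines of plain Python. This is a minimal per-vector scheme with illustrative function names; production systems often fit the scale per group or per dimension instead:

```python
def quantize_int8(vec):
    """Map each float dimension into [-128, 127] with a per-vector scale and zero point."""
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant vectors
    q = [round((x - lo) / scale) - 128 for x in vec]  # int8 codes
    return q, scale, lo  # the scale and zero point are stored alongside the codes

def dequantize_int8(q, scale, zero_point):
    """Approximate reconstruction used at scoring time."""
    return [(v + 128) * scale + zero_point for v in q]

vec = [0.12, -0.48, 0.83, 0.05]
q, scale, zp = quantize_int8(vec)
approx = dequantize_int8(q, scale, zp)  # each value is within one scale step of the original
```

Each dimension now costs 1 byte instead of 4, plus a small per-vector overhead for the scale and zero point.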
Binary Quantization
Each dimension is reduced to 1 bit (the sign of the value). Distances are computed via Hamming distance.
- Recall impact: can be substantial (5-15 percent)
- Storage: 32x smaller
- Compute: very fast (XOR popcount)
- Re-ranking: typically rerank the top candidates with full-precision vectors
Binary works best with rerank: candidate generation in binary; final scoring in full precision.
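The sign-bit scheme and the two-stage search it enables can be sketched in plain Python. The names are illustrative, and real systems pack bits into machine words so the XOR/popcount runs on SIMD registers:

```python
def binarize(vec):
    """1 bit per dimension: set bit i when dimension i is positive."""
    bits = 0
    for i, x in enumerate(vec):
        if x > 0:
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Number of differing bits: XOR, then popcount."""
    return bin(a ^ b).count("1")

def search(query_vec, corpus, k=10, rerank=100):
    """Binary candidate generation, then full-precision rerank (dot product)."""
    qb = binarize(query_vec)
    # cheap pass: Hamming distance on 1-bit codes
    candidates = sorted(corpus, key=lambda v: hamming(qb, binarize(v)))[:rerank]
    # expensive pass: exact scoring on the shortlist only
    score = lambda v: sum(q * x for q, x in zip(query_vec, v))
    return sorted(candidates, key=score, reverse=True)[:k]
```

In practice the corpus codes are precomputed and the full-precision vectors live on slower storage, since only the `rerank` shortlist ever needs them.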
Matryoshka Embeddings
The embedding model is trained so that truncating to fewer dimensions still produces useful vectors. Truncate to 256 or 512 dims for storage savings; rehydrate to full dims for accurate scoring.
- Recall impact: small if the model is Matryoshka-trained
- Storage: configurable
- Best with models that explicitly support it: OpenAI text-embedding-3, Cohere embed-v4
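Truncation itself is trivial; the detail that matters is renormalizing after cutting dimensions so cosine similarity stays meaningful. A minimal sketch, assuming a Matryoshka-trained model with unit-normalized outputs:

```python
import math

def truncate(vec, dims):
    """Keep the first `dims` dimensions and renormalize to unit length."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0  # guard against zero vectors
    return [x / norm for x in head]
```

With a non-Matryoshka model the same code runs, but the leading dimensions carry no special weight and recall degrades sharply.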
Decision Matrix
```mermaid
flowchart TD
    Q1{Storage critical?} -->|Yes, extreme| Bin[Binary]
    Q1 -->|Yes, modest| int8[int8]
    Q1 -->|No| Q2{Model supports Matryoshka?}
    Q2 -->|Yes| Mat[Matryoshka truncation]
    Q2 -->|No| Full[Keep full precision]
```
For most cost-sensitive deployments in 2026, int8 is the sweet spot: substantial savings, small recall impact.
Combining Approaches
You can combine:
- Matryoshka truncation to 512 dims, then int8 quantization → 8x storage saved
- Binary for top-K candidate generation, full precision for top-N rerank → 30x candidate-stage savings, full quality at top
These compositions are how 2026 production systems hit 1B+ vector scale on reasonable hardware.
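The first composition can be sketched end to end in plain Python (illustrative names; truncate 1024 dims to 512, then quantize to int8, for the combined 8x saving):

```python
import math

def truncate(vec, dims):
    """Matryoshka step: keep the first `dims` dimensions, renormalized to unit length."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

def quantize_int8(vec):
    """int8 step: per-vector scale and zero point, codes in [-128, 127]."""
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) / 255.0 or 1.0
    return [round((x - lo) / scale) - 128 for x in vec], scale, lo

def compress(vec, dims=512):
    """2x from truncation, 4x from int8: 8x total vs float32 at full dims."""
    return quantize_int8(truncate(vec, dims))
```

The order matters: quantize after truncation, so the scale is fitted to the renormalized values that will actually be stored.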
What Quantization Does Not Help
- Very small corpora, where the absolute savings amount to a few dollars
- Latency-bound workloads that already fit comfortably in RAM, where footprint is not the constraint
- Recall-critical use cases that cannot tolerate even 1 percent drop
Implementation in 2026
- pgvector: half-precision (halfvec) and binary (bit) vector types in recent versions
- Qdrant: int8 + binary + Matryoshka support
- Milvus: native quantization support
- Pinecone: int8 / binary modes available
- FAISS: extensive quantization options (PQ, OPQ, etc.)
For most cloud vector DBs, quantization is a configuration toggle, not custom code.
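As an example of that toggle, enabling int8 scalar quantization in Qdrant is a few lines of collection config. This is a sketch using the qdrant-client Python SDK; the collection name and URL are placeholders, and field details may vary across client versions:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # placeholder URL

client.create_collection(
    collection_name="docs",  # placeholder name
    vectors_config=models.VectorParams(size=1024, distance=models.Distance.COSINE),
    # the quantization toggle: store int8 codes in RAM, originals on disk
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            quantile=0.99,      # clip outliers before fitting the scale
            always_ram=True,    # keep the compact codes memory-resident
        )
    ),
)
```

No application code changes: queries against the collection transparently use the quantized index, with optional full-precision rescoring.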
Recall vs Storage Curve
Empirical 2026 numbers (varies by domain):
| Setting | Storage | Recall@10 vs full |
|---|---|---|
| float32 | 1x | 100% |
| int8 | 0.25x | 98-99% |
| Matryoshka 512 | 0.5x | 99% |
| Matryoshka 256 | 0.25x | 96-98% |
| Binary | 0.03x | 85-92% |
| Binary + rerank | 0.03x | 96-98% |
The "binary + rerank" combination is especially compelling.
Common Gotchas
- Mixing quantized and unquantized vectors in the same query — distances are not comparable
- Re-quantizing on small data — quantization is fitted on the data; small samples produce poor mappings
- Forgetting to renormalize after quantization where it matters
- Comparing quantized vectors with different schemes
Sources
- "Matryoshka Representation Learning" — https://arxiv.org/abs/2205.13147
- Cohere embed quantization — https://docs.cohere.com
- pgvector quantization — https://github.com/pgvector/pgvector
- Qdrant quantization documentation — https://qdrant.tech/documentation
- "Binary embeddings" research — https://arxiv.org