Mixture of Depths: Adaptive Compute per Token for Cost-Efficient LLMs
Mixture of Depths lets models skip layers for easy tokens and spend compute on hard tokens. The 2026 implementations and what they save.
The Insight
Standard transformers spend the same compute on every token regardless of difficulty: the article "the" gets the same number of layer passes as a complex named entity. Mixture of Depths (MoD), DeepMind's 2024 contribution to the architecture toolkit, lets a model skip layers for easy tokens and spend more compute on hard ones.
By 2026, MoD has shown up in several production stacks (sometimes alongside MoE), and the cost savings are real.
How MoD Works
```mermaid
flowchart LR
    Tok[Token] --> Router["Router decides:<br/>compute or skip"]
    Router -->|compute| Layer["Pass through layer"]
    Router -->|skip| Bypass["Skip layer, pass embedding through"]
    Layer --> Next[Next layer]
    Bypass --> Next
```
At each layer, a learned router decides which tokens get the full layer compute. The skipped tokens carry their previous embedding forward. The decision is made per-token, per-layer.
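A minimal PyTorch sketch may help make the routing step concrete. Everything here (the `MoDBlock` name, the default capacity fraction, the sigmoid gate) is illustrative rather than the paper's released code:

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """One transformer block with Mixture-of-Depths routing (sketch).

    A linear router scores every token; the top-k tokens (a fixed
    capacity fraction of the sequence) pass through the block, and
    the rest carry their embedding forward unchanged.
    """
    def __init__(self, block: nn.Module, d_model: int, capacity: float = 0.125):
        super().__init__()
        self.block = block            # the full attention + MLP sub-block
        self.router = nn.Linear(d_model, 1)
        self.capacity = capacity      # fraction of tokens that get compute

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq_len, d_model = x.shape
        k = max(1, int(seq_len * self.capacity))

        scores = self.router(x).squeeze(-1)         # (batch, seq_len)
        _, top_idx = torch.topk(scores, k, dim=-1)  # which tokens get compute

        # Gather the selected tokens and run the block on just those.
        idx = top_idx.unsqueeze(-1).expand(-1, -1, d_model)
        selected = torch.gather(x, 1, idx)          # (batch, k, d_model)

        # Scale outputs by the routing weight so the router itself
        # receives gradient through the tokens it selects.
        gate = torch.sigmoid(torch.gather(scores, 1, top_idx)).unsqueeze(-1)
        processed = self.block(selected) * gate

        # Scatter results back; unselected tokens keep their embedding.
        out = x.clone()
        out.scatter_(1, idx, processed)
        return out
```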
The naming nods to Mixture of Experts (MoE) but the mechanism is different: MoE picks a subset of experts per token; MoD picks a subset of tokens per layer.
Why It's Compatible With MoE
The two compose. A model can be a Mixture-of-Experts AND Mixture-of-Depths simultaneously: at each layer, a subset of tokens are computed (MoD), and for those that are computed, a subset of experts is activated (MoE).
This is the configuration that delivers the largest cost savings while preserving quality.
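Mechanically, the composition is just nesting: the MoD router wraps the whole layer, and the layer it wraps happens to be an MoE. A sketch under the same assumptions as the hypothetical `MoDBlock` above, with an equally illustrative toy `MoELayer`:

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Toy top-1 MoE: each surviving token is sent to its best expert."""
    def __init__(self, d_model: int, n_experts: int = 8):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        expert_idx = self.gate(x).argmax(dim=-1)   # (batch, tokens)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():
                out[mask] = expert(x[mask])
        return out

# MoD decides WHICH TOKENS run the layer; MoE decides WHICH EXPERTS
# run for the tokens that survive the depth router.
mod_moe_layer = MoDBlock(block=MoELayer(d_model=512), d_model=512)
```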
Production Performance
The 2024-2025 papers and 2026 follow-up reports show:
- ~50 percent reduction in FLOPs at training time
- ~30-50 percent reduction at inference time
- Minimal quality loss on standard benchmarks (within 0.5 percent on most)
- Larger savings at long context (where many tokens are routine)
The savings come from the fact that on most natural language, only a fraction of tokens at any layer need full compute.
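Back-of-the-envelope arithmetic shows where the headline number comes from. Assuming a paper-style configuration (every other block routed, 12.5 percent capacity) and treating a routed block's cost as linear in the tokens it keeps:

```python
def mod_flop_ratio(capacity: float, routed_fraction: float) -> float:
    """Approximate FLOPs of an MoD model relative to a dense baseline.

    Dense blocks cost 1.0; routed blocks cost `capacity` (linear layers
    scale with tokens kept; attention actually shrinks faster than
    linearly, so the true ratio is somewhat lower than this estimate).
    """
    return (1.0 - routed_fraction) + routed_fraction * capacity

# Every other block routed at 12.5% capacity:
print(mod_flop_ratio(capacity=0.125, routed_fraction=0.5))  # 0.5625
```

That is roughly a 44 percent reduction before counting the superlinear attention savings, which is consistent with the figures above.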
What This Looks Like in Practice
```mermaid
flowchart TB
    Sentence["Sentence: 'The cat sat on the mat under the tree'"]
    Sentence --> L1["Layer 1: route 'cat', 'mat', 'tree' for compute;<br/>others skip"]
    L1 --> L2["Layer 2: route 'sat' (verb predicting where);<br/>others skip"]
    L2 --> L3["Layer 3: route relations"]
    L3 --> Out[Output]
```
Easy tokens (the, on, under) skip many layers. Content words and harder tokens get fuller compute. Across an entire document, total FLOPs drop significantly.
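Using the hypothetical `MoDBlock` from earlier, the per-sequence effect is easy to see:

```python
import torch
import torch.nn as nn

layer = MoDBlock(block=nn.Identity(), d_model=512, capacity=0.25)
x = torch.randn(1, 12, 512)   # a 12-token sentence
y = layer(x)
# With capacity 0.25, only 3 of the 12 tokens pass through the block
# at this layer; the other 9 carry their embeddings forward unchanged.
```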
Where It Underperforms
- Tasks where every token's representation matters equally (some downstream classification heads)
- Very short sequences where the router overhead is not amortized
- Adversarial inputs that try to confuse the router
Implementation Considerations
Three practical notes:
- The router needs a fixed capacity (top-k) or a budget penalty during training so it does not degenerate into skipping (or computing) every token; a sketch follows this list
- Per-batch routing requires careful kernel-level work for efficiency
- Inference frameworks (vLLM, TensorRT-LLM) ship MoD support in 2026 for models trained with it
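On the first point, implementations differ: top-k routing (as in the earlier sketch) enforces the capacity by construction, while threshold-style routers need a regularizer. A minimal, assumed form of such a penalty:

```python
import torch

def router_budget_loss(scores: torch.Tensor,
                       target_capacity: float = 0.125) -> torch.Tensor:
    """Penalize the router when its average keep-probability drifts
    from the target capacity, so it neither computes nor skips every
    token. Illustrative only; real losses vary by implementation.
    """
    keep_prob = torch.sigmoid(scores)   # scores: (batch, seq_len)
    return (keep_prob.mean() - target_capacity) ** 2
```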
What This Means for Builders
For most teams using LLM APIs, MoD is invisible — a frontier provider may use it but you do not see it. For teams self-hosting, MoD-trained models give you cheaper inference at comparable quality. For researchers, it is a clean axis to explore independently of MoE and quantization.
The 2026 frontier-model trend is clear: any new architecture that combines MoD + MoE + FP4 + speculative decoding gets multiple multiplicative cost wins. The composite is what makes very large models economical to serve.
Sources
- "Mixture of Depths" paper (Raposo et al.) — https://arxiv.org/abs/2404.02258
- "MoE + MoD combined" 2025 — https://arxiv.org
- DeepMind technical blog — https://deepmind.google/discover
- "Adaptive computation in transformers" — https://arxiv.org/abs/2308.05772
- "Token-level routing" survey — https://arxiv.org