---
title: "Mixture of Depths: Adaptive Compute per Token for Cost-Efficient LLMs"
description: "Mixture of Depths lets models skip layers for easy tokens and spend compute on hard tokens. The 2026 implementations and what they save."
canonical: https://callsphere.ai/blog/mixture-of-depths-adaptive-compute-per-token-2026
category: "Large Language Models"
tags: ["Mixture of Depths", "MoD", "LLM Optimization", "Adaptive Compute"]
author: "CallSphere Team"
published: 2026-04-25T00:00:00.000Z
updated: 2026-05-08T17:27:37.123Z
---

# Mixture of Depths: Adaptive Compute per Token for Cost-Efficient LLMs

> Mixture of Depths lets models skip layers for easy tokens and spend compute on hard tokens. The 2026 implementations and what they save.

## The Insight

Standard transformers spend the same compute on every token regardless of difficulty: a function word like `the` gets the same number of layer passes as a complex named entity. Mixture of Depths (MoD), DeepMind's 2024 contribution to the architecture toolkit, lets a model skip layers for easy tokens and spend more compute on hard ones.

By 2026 MoD has shown up in several production stacks (sometimes alongside MoE) and the cost savings are real.

## How MoD Works

```mermaid
flowchart LR
    Tok[Token] --> Router["Router decides:<br/>compute or skip"]
    Router -->|compute| Layer["Pass through layer"]
    Router -->|skip| Bypass["Skip layer, pass embedding through"]
    Layer --> Next[Next layer]
    Bypass --> Next
```

At each layer, a learned router decides which tokens get the full layer compute. Skipped tokens bypass the block and carry their previous embedding forward through the residual connection. The decision is made per token, per layer.

The naming nods to Mixture of Experts (MoE) but the mechanism is different: MoE picks a subset of experts per token; MoD picks a subset of tokens per layer.
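To make the mechanic concrete, here is a minimal PyTorch-style sketch of a MoD-wrapped block, assuming expert-choice routing with a fixed capacity. The names (`MoDBlock`, `capacity_ratio`) and the sigmoid weighting are illustrative choices, not the paper's reference implementation.

```python
# Minimal sketch of a Mixture-of-Depths wrapper (illustrative, not the
# paper's reference code). A linear router scores every token; only the
# top-k tokens per sequence pass through the wrapped block, and the rest
# ride the residual stream unchanged.
import torch
import torch.nn as nn


class MoDBlock(nn.Module):
    def __init__(self, block: nn.Module, d_model: int, capacity_ratio: float = 0.125):
        super().__init__()
        self.block = block                    # any attention/MLP sub-block f(x)
        self.router = nn.Linear(d_model, 1)   # one scalar score per token
        self.capacity_ratio = capacity_ratio  # fraction of tokens that get compute

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, d_model = x.shape
        k = max(1, int(seq_len * self.capacity_ratio))

        scores = self.router(x).squeeze(-1)                     # (batch, seq_len)
        topk = scores.topk(k, dim=-1).indices.sort(-1).values   # keep sequence order

        idx = topk.unsqueeze(-1).expand(-1, -1, d_model)
        selected = x.gather(1, idx)                              # (batch, k, d_model)

        # Scale the block output by the router score so the routing decision
        # stays differentiable end to end.
        weights = torch.sigmoid(scores.gather(1, topk)).unsqueeze(-1)
        computed = selected + weights * self.block(selected)

        # Skipped tokens simply keep their incoming embedding.
        out = x.clone()
        out.scatter_(1, idx, computed)
        return out
```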

## Why It's Compatible With MoE

The two compose. A model can be a Mixture-of-Experts AND Mixture-of-Depths simultaneously: at each layer, a subset of tokens are computed (MoD), and for those that are computed, a subset of experts is activated (MoE).

This is the configuration that delivers the largest cost savings while preserving quality.
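As a rough sketch of that composition, the snippet below reuses the `MoDBlock` from the previous example and wraps a toy token-choice MoE; the expert count, sizes, and single-expert gating are illustrative assumptions, not a shipped configuration.

```python
# Illustrative MoD + MoE composition. MoD picks *which tokens* reach the
# layer; the MoE inside picks *which expert* each surviving token uses.
# Sizes, expert counts, and routing are toy choices, not a production config.
import torch
import torch.nn as nn


class ToyMoE(nn.Module):
    """Token-choice MoE: each token goes to its single best-scoring expert."""

    def __init__(self, d_model: int, n_experts: int = 8):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, k, d_model) -- only the tokens MoD routed into this layer
        expert_idx = self.gate(x).argmax(dim=-1)   # (batch, k)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():
                out[mask] = expert(x[mask])
        return out


# Only ~12.5% of tokens reach the MoE, and each of those activates one
# expert of eight, so the two savings multiply.
d_model = 512
layer = MoDBlock(block=ToyMoE(d_model), d_model=d_model, capacity_ratio=0.125)
```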

## Production Performance

The 2024-2025 papers and 2026 follow-up reports show:

- ~50 percent reduction in FLOPs at training time
- ~30-50 percent reduction at inference time
- Minimal quality loss on standard benchmarks (within 0.5 percent on most)
- Larger savings at long context (where many tokens are routine)

The savings come from the fact that on most natural language, only a fraction of tokens at any layer need full compute.
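Back-of-the-envelope arithmetic shows where numbers in that range can come from. The 12.5 percent capacity and the every-other-layer schedule below are assumptions borrowed from common MoD configurations, not measurements of any particular model.

```python
# Rough FLOPs estimate for MoD on alternating layers with 12.5% capacity.
# This is illustrative arithmetic, not a benchmark of any shipped model.
n_layers = 48
capacity = 0.125      # fraction of tokens computed in a routed layer
routed_every = 2      # MoD on every other layer; the rest stay dense

routed_layers = n_layers // routed_every
dense_layers = n_layers - routed_layers

# Treat per-layer FLOPs as proportional to the number of tokens processed.
# (Attention inside a routed layer actually scales better than linearly,
# since the k selected tokens only attend to each other, so real savings
# land somewhat higher than this linear estimate.)
relative = (dense_layers * 1.0 + routed_layers * capacity) / n_layers
print(f"FLOPs vs dense baseline: {relative:.1%}")   # ~56% of baseline, ~44% saved
```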

## What This Looks Like in Practice

```mermaid
flowchart TB
    Sentence["Sentence: 'The cat sat on the mat under the tree'"]
    Sentence --> L1["Layer 1: route 'cat', 'mat', 'tree' for compute; others skip"]
    L1 --> L2["Layer 2: route 'sat' (the verb predicting where); others skip"]
    L2 --> L3["Layer 3: route relations"]
    L3 --> Out[Output]
```

Easy tokens (the, on, under) skip many layers. Content words and harder tokens get fuller compute. Across an entire document, total FLOPs drop significantly.
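A toy illustration of a single layer's selection over that sentence, with hand-picked router scores (a real router learns these from the hidden states):

```python
# Toy per-layer selection for the example sentence. The scores are invented
# for illustration; a trained router produces them, they are not hand-set.
import torch

tokens = ["The", "cat", "sat", "on", "the", "mat", "under", "the", "tree"]
scores = torch.tensor([0.10, 0.90, 0.70, 0.20, 0.10, 0.80, 0.30, 0.10, 0.85])

capacity = 3  # this layer computes only 3 of the 9 tokens
selected = scores.topk(capacity).indices.sort().values
print([tokens[i] for i in selected])   # ['cat', 'mat', 'tree'] get full compute
```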

## Where It Underperforms

- Tasks where every token's representation matters equally (some downstream classification heads)
- Very short sequences where the router overhead is not amortized
- Adversarial inputs that try to confuse the router

## Implementation Considerations

Three practical notes:

- The router needs either a fixed per-layer capacity or a budget penalty so it does not collapse into skipping (or computing) every token; a minimal sketch of such a penalty follows this list
- Per-token routing makes each layer's batch ragged, which takes careful kernel-level work (gather/scatter into dense sub-batches) to run efficiently
- Inference frameworks (vLLM, TensorRT-LLM) ship MoD support in 2026 for models trained with it
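
On the first point, one simple form such a budget penalty could take, assuming a sigmoid router, is sketched below; this is a generic regularizer, not the loss from the MoD paper.

```python
# Generic router-budget regularizer (illustrative, not the MoD paper's loss):
# nudge the router's average "compute me" probability toward the target
# capacity so it neither computes every token nor skips every token.
import torch


def router_budget_loss(router_logits: torch.Tensor, capacity: float = 0.125) -> torch.Tensor:
    # router_logits: (batch, seq_len) raw per-token scores from one layer's router
    probs = torch.sigmoid(router_logits)
    return (probs.mean() - capacity).pow(2)
```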

## What This Means for Builders

For most teams using LLM APIs, MoD is invisible — a frontier provider may use it but you do not see it. For teams self-hosting, MoD-trained models give you cheaper inference at comparable quality. For researchers, it is a clean axis to explore independently of MoE and quantization.

The 2026 frontier-model trend is clear: any new architecture that combines MoD + MoE + FP4 + speculative decoding gets multiple multiplicative cost wins. The composite is what makes very large models economical to serve.

## Sources

- "Mixture of Depths" paper (Raposo et al.) — [https://arxiv.org/abs/2404.02258](https://arxiv.org/abs/2404.02258)
- "MoE + MoD combined" 2025 — [https://arxiv.org](https://arxiv.org)
- DeepMind technical blog — [https://deepmind.google/discover](https://deepmind.google/discover)
- "Adaptive computation in transformers" — [https://arxiv.org/abs/2308.05772](https://arxiv.org/abs/2308.05772)
- "Token-level routing" survey — [https://arxiv.org](https://arxiv.org)

## Mixture of Depths: Adaptive Compute per Token for Cost-Efficient LLMs — operator perspective

Mixture of Depths: Adaptive Compute per Token for Cost-Efficient LLMs is the kind of news that lives or dies on second-week behavior. The first benchmark is marketing. The eval suite a week later is the truth. The CallSphere stack treats announcements as input to an evals queue, not a product roadmap. Production agents stay pinned; new releases earn their slot only after a regression suite confirms cost, latency, and tool-call reliability move the right way.

## Base model vs. production LLM stack — the gap that costs you uptime

A base model is a checkpoint. A production LLM stack is a whole different artifact: eval gates that fail the build on regression, prompt caching that cuts repeated-system-prompt cost by 40-70%, structured outputs that prevent JSON drift on tool calls, fallback chains that route to a smaller-model retry when the primary times out, and request-side guardrails that cap tool calls per session before the loop spirals. CallSphere runs LLMs in tandem on purpose: `gpt-4o-realtime` for the live call (streaming audio in and out, tool calls inline) and `gpt-4o-mini` for post-call analytics (sentiment scoring, lead qualification, summary generation, and the lower-stakes async work that doesn't need realtime). That split is not a cost optimization — it's a reliability decision. Realtime is optimized for low-latency turn-taking; mini is optimized for cheap, deterministic batch scoring. Mixing them lets each do what it's good at without one regressing the other. The teams that struggle with LLMs in production almost always made the same mistake: they treated "the model" as a single dependency, instead of as a small portfolio of models, each pinned to a job, each behind its own eval suite, each with a documented fallback.

## FAQs

**Q: Why isn't Mixture of Depths an automatic upgrade for a live call agent?**

A: Most of the time it isn't, and that's the right starting assumption. The relevant test is whether it improves at least one of: p95 first-token latency, tool-call argument accuracy on noisy inputs, multi-turn handoff stability, or per-session cost. CallSphere runs 37 specialized AI agents wired to 90+ function tools across 115+ database tables in 6 live verticals.

**Q: How do you sanity-check Mixture of Depths before pinning the model version?**

A: The eval gate is unsentimental — a regression suite that simulates real call traffic (noisy ASR, partial inputs, tool-call timeouts) measures four numbers, and a candidate has to win on three of four without losing badly on the fourth. Anything else is treated as a blog post, not a stack change.

**Q: Where does Mixture of Depths fit in CallSphere's 37-agent setup?**

A: In a CallSphere deployment, new model and API capabilities land first in the post-call analytics pipeline (lower stakes, async, easy to roll back) and only later in the live realtime path. Today the verticals most likely to absorb new capability first are Sales and IT Helpdesk, which already run the largest share of production traffic.

## See it live

Want to see healthcare agents handle real traffic? Walk through https://healthcare.callsphere.tech or grab 20 minutes with the founder: https://calendly.com/sagar-callsphere/new-meeting.

---

Source: https://callsphere.ai/blog/mixture-of-depths-adaptive-compute-per-token-2026
