
Mixture of Depths: Adaptive Compute per Token for Cost-Efficient LLMs

Mixture of Depths lets models skip layers for easy tokens and spend compute on hard tokens. The 2026 implementations and what they save.

The Insight

Standard transformers spend the same compute on every token regardless of difficulty. The article "the" gets the same number of layer passes as a complex named entity. Mixture of Depths (MoD) — DeepMind's 2024 contribution to the architecture toolkit — lets a model skip layers for easy tokens and spend more compute on hard ones.

By 2026, MoD has shown up in several production stacks (sometimes alongside MoE), and the cost savings are real.

How MoD Works

```mermaid
flowchart LR
    Tok[Token] --> Router["Router decides:<br/>compute or skip"]
    Router -->|compute| Layer[Pass through layer]
    Router -->|skip| Bypass["Skip layer, pass embedding through"]
    Layer --> Next[Next layer]
    Bypass --> Next
```

At each layer, a learned router decides which tokens get the full layer compute. The skipped tokens carry their previous embedding forward. The decision is made per-token, per-layer.

The naming nods to Mixture of Experts (MoE) but the mechanism is different: MoE picks a subset of experts per token; MoD picks a subset of tokens per layer.
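
A minimal sketch of this per-token, per-layer routing in PyTorch (the class, parameter names, and 25 percent capacity below are illustrative assumptions, not the reference implementation): a linear router scores every token, the top-k tokens pass through the block, the rest carry their embedding forward unchanged, and the router score gates the update so routing stays trainable.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """One Mixture-of-Depths block: only the top-k scoring tokens get full compute.

    `inner` is assumed to be the block's attention + MLP function *without* its own
    residual connection; skipped tokens simply keep their incoming embedding.
    """

    def __init__(self, d_model: int, inner: nn.Module, capacity: float = 0.25):
        super().__init__()
        self.inner = inner                          # maps (B, k, D) -> (B, k, D)
        self.router = nn.Linear(d_model, 1)         # scalar routing score per token
        self.capacity = capacity                    # fraction of tokens that get compute

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        k = max(1, int(T * self.capacity))

        scores = self.router(x).squeeze(-1)         # (B, T) per-token scores
        top = scores.topk(k, dim=-1).indices
        top = top.sort(dim=-1).values               # keep sequence order for causal attention
        idx = top.unsqueeze(-1).expand(-1, -1, D)   # (B, k, D) gather/scatter index

        selected = x.gather(1, idx)                 # tokens chosen for full compute
        gate = torch.sigmoid(scores.gather(1, top)).unsqueeze(-1)
        update = gate * self.inner(selected)        # gating keeps the router differentiable

        # Skipped tokens pass through unchanged; routed tokens get a residual update.
        return x.scatter_add(1, idx, update)
```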

Why It's Compatible With MoE

The two compose. A model can be a Mixture-of-Experts AND Mixture-of-Depths simultaneously: at each layer, a subset of tokens is computed (MoD), and for those that are computed, a subset of experts is activated (MoE).

This is the configuration that delivers the largest cost savings while preserving quality.
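
A rough sketch of that composition, reusing the hypothetical MoDBlock from the earlier example (the toy top-1 MoE below is an illustration, not a production expert layer): the MoD router decides which tokens enter the block at all, and the wrapped block is itself an MoE that picks an expert for each token that was routed in.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy top-1 MoE feed-forward: each routed token picks a single expert (illustration only)."""

    def __init__(self, d_model: int, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x holds only the tokens the MoD router selected, shape (B, k, D).
        expert_id = self.gate(x).argmax(dim=-1)            # (B, k) top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (expert_id == e).unsqueeze(-1)          # which tokens chose expert e
            out = out + mask * expert(x)                   # real kernels dispatch sparsely instead
        return out

# MoD picks which tokens are computed; MoE picks which experts compute them.
layer = MoDBlock(d_model=512, inner=TinyMoE(512), capacity=0.25)
```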

Production Performance

The 2024-2025 papers and 2026 follow-up reports show:

  • ~50 percent reduction in FLOPs at training time
  • ~30-50 percent reduction at inference time
  • Minimal quality loss on standard benchmarks (within 0.5 percent on most)
  • Larger savings at long context (where many tokens are routine)

The savings come from the fact that on most natural language, only a fraction of tokens at any layer need full compute.
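
As a back-of-envelope illustration of why (the layer split and capacity below are assumptions chosen to land near the figures above, not measurements): if half the blocks stay dense and the other half route only 12.5 percent of tokens, block-level compute drops to a bit over half the dense baseline.

```python
# Rough FLOP arithmetic for MoD (illustrative assumptions, not benchmark numbers).
dense_blocks  = 0.5    # fraction of blocks left fully dense
routed_blocks = 0.5    # fraction of blocks with MoD routing
capacity      = 0.125  # fraction of tokens computed inside a routed block

relative_flops = dense_blocks * 1.0 + routed_blocks * capacity
print(f"block compute vs. dense baseline: {relative_flops:.0%}")  # ~56%
```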

What This Looks Like in Practice

```mermaid
flowchart TB
    Sentence["Sentence: 'The cat sat on the mat under the tree'"]
    Sentence --> L1["Layer 1: route 'cat', 'mat', 'tree' for compute;<br/>others skip"]
    L1 --> L2["Layer 2: route 'sat' (verb predicting where);<br/>others skip"]
    L2 --> L3["Layer 3: route relations"]
    L3 --> Out[Output]
```

Easy tokens ("the", "on", "under") skip many layers. Content words and harder tokens get fuller compute. Across an entire document, total FLOPs drop significantly.

Where It Underperforms

  • Tasks where every token's representation matters equally (some downstream classification heads)
  • Very short sequences where the router overhead is not amortized
  • Adversarial inputs that try to confuse the router

Implementation Considerations

Three practical notes:

  • The router needs to be trained with a load-balancing penalty so it does not learn to skip every token (a minimal sketch of one such penalty follows this list)
  • Per-batch routing requires careful kernel-level work for efficiency
  • Inference frameworks (vLLM, TensorRT-LLM) ship MoD support in 2026 for models trained with it
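
On the first point, here is one possible balancing term (this target-utilization penalty is an assumption for illustration, not the loss used in the MoD paper): penalize the router whenever its average routing probability drifts away from the intended capacity.

```python
import torch

def router_balance_loss(scores: torch.Tensor, capacity: float = 0.25) -> torch.Tensor:
    """Penalize the router when its average routing probability drifts from `capacity`.

    scores: raw router logits of shape (B, T). Without some regularizer, the router
    can drift toward routing everything (no savings) or nothing (quality loss).
    """
    p_route = torch.sigmoid(scores).mean()
    return (p_route - capacity) ** 2

# Usage sketch: total_loss = lm_loss + aux_weight * router_balance_loss(scores, capacity=0.25)
```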

What This Means for Builders

For most teams using LLM APIs, MoD is invisible — a frontier provider may use it but you do not see it. For teams self-hosting, MoD-trained models give you cheaper inference at comparable quality. For researchers, it is a clean axis to explore independently of MoE and quantization.

The 2026 frontier-model trend is clear: any new architecture that combines MoD + MoE + FP4 + speculative decoding gets multiple multiplicative cost wins. The composite is what makes very large models economical to serve.
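
A toy illustration of how those multipliers compound (every factor below is an assumed round number, not a measured result for any particular model):

```python
# Illustrative-only multipliers; real savings depend heavily on the model and workload.
mod  = 0.6   # MoD: fraction of dense FLOPs actually spent
moe  = 0.4   # MoE: active-parameter fraction per token
fp4  = 0.5   # FP4 vs. higher precision: relative compute/memory cost
spec = 0.55  # speculative decoding: relative decode wall-clock

print(f"compound relative cost: {mod * moe * fp4 * spec:.3f}")  # roughly 0.07x the baseline
```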

Mixture of Depths: Adaptive Compute per Token for Cost-Efficient LLMs — operator perspective

Mixture of Depths is the kind of news that lives or dies on second-week behavior. The first benchmark is marketing. The eval suite a week later is the truth. The CallSphere stack treats announcements as input to an evals queue, not a product roadmap. Production agents stay pinned; new releases earn their slot only after a regression suite confirms cost, latency, and tool-call reliability move the right way.

Base model vs. production LLM stack — the gap that costs you uptime

A base model is a checkpoint. A production LLM stack is a whole different artifact: eval gates that fail the build on regression, prompt caching that cuts repeated-system-prompt cost by 40-70%, structured outputs that prevent JSON drift on tool calls, fallback chains that route to a smaller-model retry when the primary times out, and request-side guardrails that cap tool calls per session before the loop spirals.

CallSphere runs LLMs in tandem on purpose: `gpt-4o-realtime` for the live call (streaming audio in and out, tool calls inline) and `gpt-4o-mini` for post-call analytics (sentiment scoring, lead qualification, summary generation, and the lower-stakes async work that doesn't need realtime). That split is not a cost optimization — it's a reliability decision. Realtime is optimized for low-latency turn-taking; mini is optimized for cheap, deterministic batch scoring. Mixing them lets each do what it's good at without one regressing the other.

The teams that struggle with LLMs in production almost always made the same mistake: they treated "the model" as a single dependency instead of as a small portfolio of models, each pinned to a job, each behind its own eval suite, each with a documented fallback.

FAQs

Q: Why isn't Mixture of Depths an automatic upgrade for a live call agent?

A: Because most of the time it doesn't move the numbers that matter, and that's the right starting assumption. The relevant test is whether it improves at least one of: p95 first-token latency, tool-call argument accuracy on noisy inputs, multi-turn handoff stability, or per-session cost. CallSphere runs 37 specialized AI agents wired to 90+ function tools across 115+ database tables in 6 live verticals.

Q: How do you sanity-check Mixture of Depths before pinning the model version?

A: The eval gate is unsentimental — a regression suite that simulates real call traffic (noisy ASR, partial inputs, tool-call timeouts) measures those four numbers, and a candidate has to win on three of four without losing badly on the fourth. Anything else is treated as a blog post, not a stack change.

Q: Where does Mixture of Depths fit in CallSphere's 37-agent setup?

A: In a CallSphere deployment, new model and API capabilities land first in the post-call analytics pipeline (lower stakes, async, easy to roll back) and only later in the live realtime path. Today the verticals most likely to absorb new capability first are Sales and IT Helpdesk, which already run the largest share of production traffic.

See it live

Want to see healthcare agents handle real traffic? Walk through https://healthcare.callsphere.tech or grab 20 minutes with the founder: https://calendly.com/sagar-callsphere/new-meeting.