
Paged Attention and Its Descendants: Memory-Efficient LLM Serving in 2026

PagedAttention launched a family of memory-management techniques that make modern LLM serving possible. The 2026 descendants and what they fix.

What PagedAttention Solved

In 2023, the dominant problem in LLM serving was KV-cache memory fragmentation. Traditional implementations allocated contiguous KV-cache slots per sequence, sized for the maximum context the sequence might reach. Most of that allocation was wasted, and external fragmentation made it hard to fit more sequences.

PagedAttention (Kwon et al., 2023, "Efficient Memory Management for Large Language Model Serving with PagedAttention") fixed this by paging the KV-cache: sequences allocate fixed-size blocks on demand, so waste is capped at one partially filled block per sequence. Memory utilization went from ~30 percent to ~95 percent. vLLM ate the world.

Three years later, the family has grown. This piece walks through what shipped after PagedAttention and what each addition fixes.

The Original Idea

flowchart LR
    Seq[Sequence's KV cache] --> P1[Block 1: 16 tokens]
    Seq --> P2[Block 2: 16 tokens]
    Seq --> P3[Block 3: 16 tokens]
    Seq --> Tab[Block table:<br/>logical to physical]
    Tab --> Pool[(Physical block pool)]

The KV-cache is split into fixed-size blocks (typically 16 tokens each). A per-sequence block table maps logical positions to physical blocks. Unused logical positions consume zero physical memory.
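A minimal sketch of the mechanism in Python, assuming a pool of plain integer block IDs instead of real GPU memory; BlockPool and Sequence are illustrative names, not vLLM's actual classes:

BLOCK_SIZE = 16  # tokens per block, the typical default

class BlockPool:
    """Pool of physical block IDs; a real engine backs these with GPU memory."""
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))

    def allocate(self) -> int:
        if not self.free:
            # a real engine would preempt or swap a sequence here
            raise MemoryError("no free KV-cache blocks")
        return self.free.pop()

    def release(self, block_id: int) -> None:
        self.free.append(block_id)

class Sequence:
    """Per-sequence block table: logical block index -> physical block ID."""
    def __init__(self, pool: BlockPool):
        self.pool = pool
        self.block_table: list[int] = []
        self.num_tokens = 0

    def append_token(self) -> None:
        # allocate a new physical block only when the next logical slot needs one
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.pool.allocate())
        self.num_tokens += 1

pool = BlockPool(num_blocks=1024)
seq = Sequence(pool)
for _ in range(40):           # 40 tokens -> 3 physical blocks, nothing reserved up front
    seq.append_token()
print(len(seq.block_table))   # 3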


RadixAttention (SGLang)

The first big extension: blocks are deduplicated across sequences. If two sequences share a prefix, they share the same physical blocks for the prefix tokens. The data structure is a radix tree of prefix → block list.

flowchart TB
    R[Root] --> P1[Prefix: 'You are a helpful...']
    P1 --> B1[Sequence A continuation]
    P1 --> B2[Sequence B continuation]
    P1 --> B3[Sequence C continuation]

This was the unlock for chat and RAG workloads in 2024-2025. Common system prompts, retrieved documents, and conversation prefixes are physically shared.
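A sketch of the same idea with the tree keyed on block-sized token chunks; RadixNode, RadixCache, and the reference counting shown here are illustrative assumptions, not SGLang's implementation:

from __future__ import annotations
from dataclasses import dataclass, field

BLOCK_SIZE = 16

@dataclass
class RadixNode:
    physical_block: int | None = None         # None only at the root
    ref_count: int = 0
    children: dict[tuple, RadixNode] = field(default_factory=dict)

class RadixCache:
    def __init__(self) -> None:
        self.root = RadixNode()
        self.next_block = 0

    def match_or_insert(self, tokens: list[int]) -> list[int]:
        """Return physical blocks for a prompt, reusing any shared prefix blocks."""
        blocks, node = [], self.root
        for i in range(0, len(tokens) - len(tokens) % BLOCK_SIZE, BLOCK_SIZE):
            chunk = tuple(tokens[i:i + BLOCK_SIZE])
            child = node.children.get(chunk)
            if child is None:                  # miss: allocate a fresh physical block
                child = RadixNode(physical_block=self.next_block)
                self.next_block += 1
                node.children[chunk] = child
            child.ref_count += 1               # one more sequence is pinning this block
            blocks.append(child.physical_block)
            node = child
        return blocks

cache = RadixCache()
system_prompt = list(range(32))                # two full blocks of shared prefix
a = cache.match_or_insert(system_prompt + [101] * 16)
b = cache.match_or_insert(system_prompt + [202] * 16)
assert a[:2] == b[:2]                          # the prefix blocks are physically shared

The reference counts are what make eviction safe: a prefix block can only be freed once every sequence pinning it has finished.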

Prefix Caching (vLLM, others)

vLLM's "prefix caching" is a simpler version of the same idea: hash incoming prompts, look up matching cached blocks, reuse them. Less elegant than RadixAttention but lower-overhead. By default in vLLM 2026.

Disk-Backed KV Cache (2025-2026)

For very long-running sessions or massive prefix reuse, hot blocks live on GPU, warm on CPU, cold on NVMe. The block table is extended to include "where does this block currently live?" The block manager swaps blocks in and out as needed.
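A sketch of what the extended block table might track; Tier, TieredBlockManager, and the elided copy logic are assumptions for illustration:

from enum import Enum

class Tier(Enum):
    GPU = 0      # hot
    CPU = 1      # warm
    NVME = 2     # cold

class TieredBlockManager:
    def __init__(self) -> None:
        self.location: dict[int, Tier] = {}   # physical block ID -> current tier

    def register(self, block_id: int) -> None:
        self.location[block_id] = Tier.GPU    # blocks are born hot

    def demote(self, block_id: int, to: Tier) -> None:
        """Evict a cold block down the hierarchy to free GPU memory."""
        self.location[block_id] = to          # real engines copy the data, then free it

    def ensure_on_gpu(self, block_id: int) -> None:
        """Swap a block back in before attention reads it."""
        if self.location[block_id] is not Tier.GPU:
            # real engines issue async copies here (NVMe -> CPU -> GPU)
            self.location[block_id] = Tier.GPU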

Distributed Block Pool (2026)

For multi-GPU deployments, the block pool spans GPUs connected via NVLink. The block manager picks the closest copy when serving a token. NVIDIA Blackwell's NVLink Switch makes this practical at the 72-GPU rack scale.
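One way to express the replica choice, with a hop-count table standing in for real NVLink topology discovery (an assumption for illustration):

def closest_replica(holders: set[int], requesting_gpu: int,
                    hop_count: dict[tuple[int, int], int]) -> int:
    """Pick the GPU holding a replica of the block that is cheapest to read from."""
    return min(holders, key=lambda gpu: hop_count[(requesting_gpu, gpu)])

replicas = {7: {0, 3}}                         # block 7 lives on GPUs 0 and 3
hops = {(1, 0): 1, (1, 3): 2}                  # link distance from GPU 1
assert closest_replica(replicas[7], requesting_gpu=1, hop_count=hops) == 0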


Speculative Block Allocation

When a sequence's KV-cache nears the end of its current allocation, the engine speculatively pre-allocates the next block. Reduces stalls on the hot path. Implemented in vLLM 0.7 and TRT-LLM.
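A sketch of the pre-allocation check; the lookahead threshold and function names are illustrative, not vLLM's or TensorRT-LLM's actual code:

import math

BLOCK_SIZE = 16
LOOKAHEAD = 4    # pre-allocate once 4 or fewer slots remain in the current block

def maybe_preallocate(num_tokens: int, block_table: list[int], allocate) -> None:
    """Grab the next block off the decode hot path, before the sequence needs it."""
    blocks_in_use = math.ceil(num_tokens / BLOCK_SIZE)
    slots_left = blocks_in_use * BLOCK_SIZE - num_tokens
    if slots_left <= LOOKAHEAD and len(block_table) <= blocks_in_use:
        block_table.append(allocate())    # done ahead of time, so decode never stalls

table = [0]                               # the sequence currently owns physical block 0
maybe_preallocate(num_tokens=14, block_table=table, allocate=lambda: 1)
assert table == [0, 1]                    # the next block was grabbed two tokens early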

Block Reuse Across Models (Experimental)

If two models share architecture (or relevant prefixes), can their KV-caches share blocks? Research-stage in 2026; partial results from Berkeley and CMU.

A Modern Memory Manager

flowchart TB
    Req[New Request] --> Hash[Hash prompt prefix]
    Hash --> Lookup[Lookup in radix tree]
    Lookup -->|Hit| Reuse[Reuse blocks]
    Lookup -->|Miss| Alloc[Allocate fresh blocks]
    Reuse --> Sched[Scheduler]
    Alloc --> Sched
    Sched --> Run[Run forward pass]
    Run --> Update[Update cache]

vLLM 0.7 and SGLang 0.4 both implement essentially this pipeline by default in 2026.

What This Means for Workloads

Three workload shapes get the largest wins:

  • Heavy prefix reuse: chat with system prompts, RAG with shared retrieved docs, agentic loops with shared scratchpads. 5-10x cost reduction.
  • Long-tail context lengths: when sequences vary widely in context, paged allocation captures the savings naive contiguous allocation cannot.
  • High concurrency, modest context: more sequences fit in memory, throughput rises.

For workloads with little prefix reuse and uniform context lengths (some batch-inference jobs), the gains are smaller but still positive.

Where It Hurts

  • Tiny block sizes: paging overhead becomes noticeable at very small block sizes; 16 is roughly optimal
  • Very high turnover: short, one-shot requests with no prefix reuse get less benefit
  • Interaction with quantization: paged KV combined with FP4 KV quantization is still actively researched; the interaction between the paging block size and the microscaling block size needs tuning

