GPT-Realtime-2 128K Context: What It Unlocks for Voice Agents
OpenAI's GPT-Realtime-2 quadruples voice context to 128K tokens. Here is exactly what the 32K-to-128K jump changes for production phone agents.
The Announcement, Plain English
On May 7, 2026, OpenAI shipped three new realtime voice models: GPT-Realtime-2, GPT-Realtime-Translate, and GPT-Realtime-Whisper. The headline change for voice teams is the context window on GPT-Realtime-2 — it jumps from the prior 32K tokens to 128K, with a 32K max output. That is a 4x increase in how much conversation, instructions, and tool history the model can keep live in a single voice session.
For anyone who has shipped a phone agent on the previous Realtime API, 32K was the ceiling that quietly broke long calls. 128K removes that ceiling for almost every real-world use case.
Why The Window Mattered More Than People Realized
Voice traffic is denser than text traffic. A 12-minute call produces roughly 1,800 spoken words. With system prompt, JSON tool schemas, function-call results, RAG snippets, and per-turn audio transcripts, a moderately complex healthcare or sales agent could exhaust 32K by minute 15 — and silently start losing earlier context.
The visible symptoms looked like model regressions: the agent forgets the caller's name, re-asks for the appointment time, repeats the same disclaimer twice, or loses track of which slot was already offered. They were not regressions. They were context truncation.
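The density math above can be sketched as a back-of-envelope budget. Every rate below is an assumption for illustration (word pace, tokens per word, audio token rates, tool-result volume), not a measured or published figure:

```python
# Rough estimate of how fast a voice session fills its context window.
# All rates are illustrative assumptions, not measured values.

OVERHEAD = 12_000            # assumed: system prompt + tool schemas + RAG snippets
TRANSCRIPT_PER_MIN = 195     # ~150 wpm at ~1.3 tokens per word
AUDIO_PER_MIN = 600          # assumed ~10 audio tokens per second of speech
TOOL_RESULTS_PER_MIN = 400   # assumed JSON payloads from function calls

def context_after(minutes: int) -> int:
    """Estimated tokens held in context after `minutes` of call time."""
    per_min = TRANSCRIPT_PER_MIN + AUDIO_PER_MIN + TOOL_RESULTS_PER_MIN
    return OVERHEAD + minutes * per_min

for m in (5, 10, 15, 20):
    print(f"{m:>2} min: ~{context_after(m):,} tokens")
```

Under these assumptions the estimate crosses 32K shortly after the quarter-hour mark, which matches the "broke at minute 15" symptom teams saw in production.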
Hear it before you finish reading
Talk to a live CallSphere AI voice agent in your browser — 60 seconds, no signup.
128K turns those 12–15 minute walls into 45–50 minute walls. For most B2C voice work, that is the difference between "almost always fits" and "always fits."
The Real Numbers
From OpenAI's May 7 launch:
- Context window: 128K tokens (up from 32K)
- Max output: 32K tokens
- Audio input: $32 per 1M tokens
- Audio output: $64 per 1M tokens
- Cached input: $0.40 per 1M tokens
- Capabilities: interruptions, tool use, longer multi-turn conversations
- GPT-5-class reasoning in the realtime stack
Cached input at $0.40 per 1M is the line item to internalize. If your system prompt is 6K tokens and you serve 50,000 calls a month, the system-prompt portion of every cache-hit call costs one-eightieth of what it would at the $32 per 1M uncached rate. That is what makes long, instruction-heavy voice agents actually affordable at 128K.
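The cache math works out like this, using the launch prices above (the uncached side is shown at the $32/1M audio-input rate for comparison):

```python
# 6K-token system prompt across 50,000 calls/month,
# cached at $0.40/1M vs the $32/1M uncached rate.

PROMPT_TOKENS = 6_000
CALLS_PER_MONTH = 50_000
monthly_prompt_tokens = PROMPT_TOKENS * CALLS_PER_MONTH   # 300M tokens/month

cached = monthly_prompt_tokens / 1_000_000 * 0.40         # ~$120/mo
uncached = monthly_prompt_tokens / 1_000_000 * 32.00      # ~$9,600/mo

print(f"cached: ${cached:,.2f}  uncached: ${uncached:,.2f}  "
      f"ratio: {uncached / cached:.0f}x")
```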
What 128K Unlocks In Production
Six concrete things change once you have 128K:
- End-to-end calls without summarization passes. Teams previously ran a mid-call "summarize and compact" pass to free tokens. Most of that machinery can go away.
- Richer system prompts. You can keep full policy text, escalation rules, full FAQ, and per-vertical playbooks inline instead of RAG-ing fragments mid-call.
- More tool schemas. Function-calling shines when the model sees the whole tool surface at once. 128K lets you expose 30–50 tools without trimming.
- In-context customer history. The last 5 calls, the last 20 SMS messages, and the last 3 tickets can sit inside the window — no retrieval needed.
- Longer transfers. Warm transfers to a second agent can carry the full prior conversation rather than a summary.
- Multi-task calls. "Reschedule my appointment, then refill my prescription, then update my address" no longer needs careful state machines — the model holds it.
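The "richer system prompts" and "in-context customer history" items boil down to packing everything inline and verifying it still leaves room for the call. A minimal sketch, assuming a 4-characters-per-token rule of thumb (illustrative, not an official tokenizer) and made-up section names:

```python
# Sketch: inline the full playbook and customer history instead of
# retrieving fragments mid-call. Names and budgets are illustrative.

CONTEXT_WINDOW = 128_000
RESERVE_FOR_CALL = 80_000   # keep most of the window free for the conversation

def build_instructions(policy: str, faq: str, tool_notes: str,
                       history_lines: list[str]) -> str:
    sections = [
        "## Policy\n" + policy,
        "## FAQ\n" + faq,
        "## Tools\n" + tool_notes,
        "## Recent customer history\n" + "\n".join(history_lines),
    ]
    text = "\n\n".join(sections)
    est_tokens = len(text) // 4          # rough English estimate
    if est_tokens > CONTEXT_WINDOW - RESERVE_FOR_CALL:
        raise ValueError(f"prompt too large: ~{est_tokens:,} tokens")
    return text
```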
Production Tradeoffs Teams Are Already Hitting
A bigger window does not automatically mean better answers. Three caveats that came up within 72 hours of launch:
- Latency creeps with context size. First-token latency on a 100K-loaded session is materially higher than on a 5K one. Pin only what you need.
- Cache invalidation is the new bug class. Tiny edits to the top of your prompt blow the cache. Stable, ordered prefixes matter more than ever.
- More context is not more obedience. Long instructions still benefit from being chunked, labeled, and prioritized. 128K is a budget, not a strategy.
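The cache-invalidation caveat is mechanical: prompt caches typically match on an exact token prefix, so anything that changes per call has to sit below the stable material. A sketch (`STATIC_PREFIX` is an illustrative stand-in for your real policy text and tool schemas):

```python
from datetime import date

STATIC_PREFIX = "You are a scheduling agent.\n## Policy\n...\n## Tools\n..."

def cache_friendly(caller_name: str) -> str:
    # Volatile fields appended last: every call shares the same prefix.
    return f"{STATIC_PREFIX}\n\nToday: {date.today()}\nCaller: {caller_name}"

def cache_hostile(caller_name: str) -> str:
    # A date on line one means no two days (or callers) share a prefix,
    # so every call pays the uncached rate for the whole prompt.
    return f"Today: {date.today()}\nCaller: {caller_name}\n\n{STATIC_PREFIX}"
```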
Where CallSphere Fits
CallSphere is a managed AI voice and chat agent platform. Teams that do not want to build directly against the raw Realtime API — wiring up tool schemas, prompt caching, telemetry, HIPAA-safe storage, and 57+ language voice routing themselves — buy CallSphere instead. We run 6 live verticals (healthcare, real estate, sales, salon/beauty, IT helpdesk, after-hours escalation) with ~14 function tools and a managed memory layer that already maps to longer windows. Pricing starts at $149/mo Starter (2,000 interactions) and scales to $1,499/mo Scale (50,000). Most customers go live in 3–5 business days.
Still reading? Stop comparing — try CallSphere live.
CallSphere ships complete AI voice agents per industry — 14 tools for healthcare, 10 agents for real estate, 4 specialists for salons. See how it actually handles a call before you book a demo.
See it live: callsphere.ai/demo.
What To Do This Week
If you already run a voice agent, three concrete actions:
- Audit your current context usage. If you ever hit the 32K ceiling, design the next iteration assuming 128K is the default.
- Stand up prompt caching. At $0.40 per 1M for cached input, this is the single largest cost lever.
- Re-evaluate any mid-call summarization passes — you may be able to delete them outright.
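The audit step can be a one-pass scan over call telemetry. `peak_context_tokens` is an illustrative field name; substitute whatever your logs actually record:

```python
# Count sessions that hit (or neared) the old 32K ceiling, and how many
# would still press against 128K. Thresholds use a 90% "near the wall" cutoff.

def audit(sessions: list[dict], old_limit: int = 32_000,
          new_limit: int = 128_000) -> dict:
    near_old = sum(1 for s in sessions
                   if s["peak_context_tokens"] >= 0.9 * old_limit)
    near_new = sum(1 for s in sessions
                   if s["peak_context_tokens"] >= 0.9 * new_limit)
    return {"calls": len(sessions),
            "hit_32k_ceiling": near_old,
            "would_still_hit_128k": near_new}
```

If `hit_32k_ceiling` is nonzero, design the next iteration assuming 128K; if `would_still_hit_128k` is nonzero, you still need summarization for that tail.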
FAQ
Q: Is GPT-Realtime-2 backward compatible with the old Realtime API? A: Mostly. The endpoint changed and a few event names shifted. Plan a half-day migration, not a rewrite.
Q: Does 128K mean I can stop using RAG? A: No. RAG still beats stuffing everything in context for cost, recency, and access control. 128K just lets you stop micro-trimming.
Q: How does cost scale on a 10-minute call now? A: Audio in/out dominates. A 10-minute call is roughly $0.30–$0.60 in model spend depending on how chatty the agent is; system prompt becomes negligible once cached.
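The arithmetic behind that range, under assumed audio token rates (roughly 10 tokens/sec of input speech and 20 tokens/sec of output speech, both assumptions, not published figures):

```python
# Model spend for a 10-minute call at launch pricing, assuming each side
# speaks about half the time. Token-per-second rates are assumptions.

IN_PRICE, OUT_PRICE = 32.0, 64.0     # $ per 1M audio tokens
caller_secs = agent_secs = 300       # ~5 of the 10 minutes each

audio_in = caller_secs * 10          # 3,000 input tokens
audio_out = agent_secs * 20          # 6,000 output tokens

cost = audio_in / 1e6 * IN_PRICE + audio_out / 1e6 * OUT_PRICE
print(f"~${cost:.2f} per 10-minute call")   # lands inside the $0.30-$0.60 range
```

A chattier agent shifts spend toward the $64/1M output side, which is why the range is wide.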
Try CallSphere AI Voice Agents
See how AI voice agents work for your industry. Live demo available — no signup required.