Capacity Planning for Voice AI: Concurrent Calls, GPU Headroom, and Burst Math
Concurrent-call capacity is the single most important scalability metric for voice AI. Here's the math behind sizing a fleet that survives Monday-morning peaks without overpaying on GPUs.
TL;DR — Forecast peak concurrent calls, multiply by a 1.5x burst factor, then size workers, model TPM, and GPU memory against that number. Most teams undersize by 3x.
What goes wrong
```mermaid
flowchart TD
    Client[Client] --> Edge[Cloudflare Worker]
    Edge -->|WS upgrade| DO[Durable Object]
    DO --> AI[(OpenAI Realtime WS)]
    AI --> DO
    DO --> Client
    DO -.hibernation.-> Storage[(Persisted state)]
```

Voice AI concurrent-call capacity is the metric that decides whether your customers hit busy signals on a Monday morning. Different platforms in 2026 land at wildly different ceilings: LuMay at 10k concurrent, Cognigy at tens of thousands across 100+ languages, Bland AI at 20k calls/hour, Synthflow's starter tier at 5 concurrent. A single agency client running a campaign generates 20–50 concurrent calls in a peak hour; ten clients means hundreds.
Three things break under capacity pressure (the sketch after this list shows which one binds first):
- WebSocket layer — limits on FDs per process, ulimits, kernel TCP backlogs.
- Model TPM — OpenAI's tokens-per-minute caps; Realtime sessions count against them differently.
- GPU memory — for self-hosted ASR/TTS, each session pins KV cache and audio buffers.
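Here's that sketch: compute each layer's ceiling and take the minimum. Every input below is an illustrative assumption, not a measured value.

```python
# Which layer caps concurrency first? All inputs are assumed for illustration.

def websocket_ceiling(fd_ulimit: int, fds_per_session: int = 2) -> int:
    return fd_ulimit // fds_per_session       # client socket + upstream socket

def tpm_ceiling(org_tpm: int, tokens_per_call_minute: int) -> int:
    return org_tpm // tokens_per_call_minute  # full-duplex voice burns TPM continuously

def gpu_ceiling(vram_gb: float, gb_per_session: float) -> int:
    return int(vram_gb / gb_per_session)      # KV cache + audio buffers pinned per session

ceilings = {
    "websocket": websocket_ceiling(65_000),       # a 65k FD ulimit
    "model_tpm": tpm_ceiling(20_000_000, 6_000),  # 20M TPM, ~6k tokens/voice-minute
    "gpu_memory": gpu_ceiling(80.0, 0.5),         # one 80GB card, assumed 0.5GB/session
}
binding = min(ceilings, key=ceilings.get)         # the constraint you actually hit first
print(binding, ceilings[binding])
```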
How to monitor
Track three capacity metrics:
- Active concurrent sessions — gauge per vertical.
- Headroom — 1 - (concurrent / capacity_ceiling). Alert when utilization holds above 70% for 10 min.
- Burst factor — peak / median over 28 days. Use as multiplier for capacity planning.
Plan for a 1.5x burst over forecast peak, with a 2x growth buffer for the next quarter.
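Worked out as a sketch (the forecast number is hypothetical; the multipliers are the ones above):

```python
# Sizing from the three monitored metrics. forecast_peak is hypothetical;
# the 1.5x burst and 2x growth multipliers are the rules stated above.
forecast_peak = 1_200   # weekly p99 concurrent from 13 weeks of history
burst_factor  = 1.5     # floor, even if your 28-day peak/median is lower
growth_buffer = 2.0     # next quarter's onboarding pipeline

provisioned = int(forecast_peak * burst_factor * growth_buffer)  # 3600
alert_at = int(provisioned * 0.70)                               # 2520: page here
```

Multiplied out, burst times growth is the 3x gap the TL;DR says most teams leave uncovered.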
Hear it before you finish reading
Talk to a live CallSphere AI voice agent in your browser — 60 seconds, no signup.
CallSphere stack
CallSphere is sized for ~3000 concurrent sessions across 6 verticals on a k3s cluster behind Cloudflare Tunnel. Per-vertical ceilings:
- Healthcare FastAPI (:8084) — 800 concurrent. Each session uses ~120MB RAM, ~0.05 vCPU. 4 replicas × 4 vCPU × 16GB.
- Real Estate 6-container NATS pod — 600 concurrent (NATS message-per-call cap is the binding constraint). 4 replicas of the pod.
- Sales WebSocket + PM2 — 1000 concurrent. 8 PM2 workers per node × 4 nodes. ulimit set to 65k FDs.
- After-hours Bull/Redis queue — async; capacity is queue depth (cap 50k jobs). It scales differently because it isn't real-time.
OpenAI Realtime caps: we're at 20M tokens/min on the production org; one minute of voice is ~6000 tokens, so the theoretical ceiling is ~3300 concurrent at full duplex. The real ceiling is lower once you hold back headroom.
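The arithmetic, applying the 70% utilization alert threshold from the monitoring section as the planning cut:

```python
TPM_CAP = 20_000_000            # production org tokens/min
TOKENS_PER_VOICE_MIN = 6_000    # ~1 minute of full-duplex audio
UTILIZATION_CEILING = 0.70      # the alert threshold; don't plan past it

theoretical = TPM_CAP // TOKENS_PER_VOICE_MIN     # 3333 concurrent
usable = int(theoretical * UTILIZATION_CEILING)   # ~2333 after headroom
```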
37 agents and 90+ tools mean each session may invoke 5+ tool calls; we run tool executors as a horizontally scalable service (the Sales WebSocket layer doubles as the tool runner).
$149 plan = 50 concurrent; $499 = 200; $1499 = 1000+ with reserved capacity. Try the 14-day trial.
Implementation
**Forecast peak from history.** Pull 13 weeks of `call_started` events; compute the weekly p99 concurrent. That's your forecast peak.
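A sketch of that computation, assuming each `call_started` event has been joined to its call's end timestamp (the tuple shape is hypothetical):

```python
# Weekly p99 concurrency from (started_at, ended_at) datetime pairs.
# Event-sampled rather than time-weighted, which is close enough for sizing.
def p99_concurrent(events):
    deltas = []
    for started_at, ended_at in events:
        deltas.append((started_at, +1))
        deltas.append((ended_at, -1))
    deltas.sort()
    level, samples = 0, []
    for _, step in deltas:
        level += step
        samples.append(level)   # concurrency after each call transition
    samples.sort()
    return samples[int(0.99 * (len(samples) - 1))]

# Run once per ISO week over the 13 weeks; the max weekly p99 is the forecast peak.
```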
Still reading? Stop comparing — try CallSphere live.
CallSphere ships complete AI voice agents per industry — 14 tools for healthcare, 10 agents for real estate, 4 specialists for salons. See how it actually handles a call before you book a demo.
**Right-size per-pod.** Work backward from per-session RAM and CPU to pod requests and limits:

```yaml
# k3s deployment (healthcare vertical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: healthcare
spec:
  replicas: 4
  template:
    spec:
      containers:
        - name: healthcare
          resources:
            requests: { cpu: "2", memory: "8Gi" }
            limits: { cpu: "4", memory: "16Gi" }
```
**Scale on sessions, not CPU.** HPA on the active-sessions metric:

```yaml
metrics:
  - type: Pods
    pods:
      metric: { name: active_sessions_per_pod }
      target: { type: AverageValue, averageValue: "200" }
```
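For that metric to exist, each pod has to export it. A minimal sketch with `prometheus_client`, assuming something like prometheus-adapter bridges Prometheus into the Kubernetes custom metrics API:

```python
# Export the per-pod session gauge the HPA above scales on.
from prometheus_client import Gauge, start_http_server

ACTIVE_SESSIONS = Gauge(
    "active_sessions_per_pod",
    "Live voice sessions currently pinned to this pod",
)

def on_session_start() -> None:
    ACTIVE_SESSIONS.inc()

def on_session_end() -> None:
    ACTIVE_SESSIONS.dec()

start_http_server(9100)  # Prometheus scrapes this port
```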
**Pre-provision burst.** During known peaks (Monday 9–11am for Sales), pre-scale via a CronJob to 1.5x baseline.
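One way to implement the pre-scale, sketched with the official Kubernetes Python client running inside the CronJob; the HPA name and namespace are hypothetical:

```python
# Raise the HPA floor ahead of the Monday 9-11am Sales peak.
# "sales-ws" and "voice" are hypothetical object names.
from kubernetes import client, config

config.load_incluster_config()   # the CronJob pod runs in-cluster
hpa = client.AutoscalingV2Api()

hpa.patch_namespaced_horizontal_pod_autoscaler(
    name="sales-ws",
    namespace="voice",
    body={"spec": {"minReplicas": 6}},   # 1.5x the 4-replica baseline
)
# A mirror CronJob after the peak patches minReplicas back to 4.
```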
**Reserve model capacity.** OpenAI Provisioned Throughput (PTU) for the $1499 enterprise tier guarantees TPM; the $149 and $499 tiers share burst capacity.
FAQ
Q: How do I forecast new growth? A: Forecast at the customer-onboarding level, not aggregate. Each new customer ships an estimated load profile.
Q: What's the right HPA metric? A: Active session count, not CPU. CPU lags reality by 30 seconds in voice.
Q: Can I oversubscribe model TPM? A: A little (10%). Beyond that you risk 429s mid-call.
Q: How do I cap burst per customer? A: Per-tenant rate limit at the WebSocket gateway. It returns a polite message and falls back to a queue (see the sketch after this FAQ).
Q: When do self-hosted GPUs beat the API? A: Use the API below ~1k concurrent; move to GPUs when reserved capacity becomes cheaper than per-token pricing. At our scale the cross-over lands around 2.5k sustained concurrent.
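The burst-cap answer, made concrete: a minimal sketch of per-tenant admission at the gateway. Tier ceilings mirror the plans above; the function names are illustrative.

```python
# Per-tenant concurrency cap at the WebSocket gateway (a sketch, not the
# actual gateway). Tier ceilings mirror the published plan limits.
import asyncio
from collections import defaultdict

TIER_LIMITS = {"starter": 50, "growth": 200, "enterprise": 1000}
active = defaultdict(int)
admit_lock = asyncio.Lock()

async def try_admit(tenant_id: str, tier: str) -> bool:
    async with admit_lock:
        if active[tenant_id] >= TIER_LIMITS[tier]:
            return False   # caller gets a polite message and a queue slot
        active[tenant_id] += 1
        return True

async def release(tenant_id: str) -> None:
    async with admit_lock:
        active[tenant_id] -= 1
```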
## Capacity planning in production

Capacity planning for voice AI sounds like a single decision, but in production it splits into eval design, prompt cost, and observability. The deeper you push toward live traffic, the more those three pull against each other — better evals catch silent failures, prompt cost limits how often you can re-run them, and weak observability hides which retries are actually saving conversations versus burning latency budget.

## Serving stack tradeoffs

The big fork is managed (OpenAI Realtime, ElevenLabs Conversational AI) versus self-hosted on GPUs you operate. Managed wins on cold-start, model freshness, and zero-ops; self-hosted wins on unit economics past a certain conversation volume and on data residency for regulated verticals. CallSphere runs hybrid: Realtime for live calls, self-hosted Whisper + a hosted LLM for async, both routed through a Go gateway that enforces per-tenant rate limits.

Latency budgets are non-negotiable on voice. The end-to-end target is sub-800ms ASR-to-first-token and sub-1.4s first-audio-out; anything beyond that and turn-taking feels stilted. GPU residency in the same region as your TURN servers matters more than choosing a slightly bigger model.

Observability is the unglamorous backbone — every conversation produces logs, traces, sentiment scoring, and cost attribution piped to a per-tenant dashboard. **HIPAA + SOC 2 aligned** isolation keeps healthcare traffic separated from salon traffic at the storage layer, not just the API.

## FAQ

**How does this apply to a CallSphere pilot specifically?** CallSphere runs 37 production agents and 90+ function tools across 115+ database tables in 6 verticals, so most workflows you'd want already have a template. For capacity planning, that means you're not starting from scratch — you're configuring an agent template that's already been hardened across thousands of conversations.

**What does the typical first-week implementation look like?** Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Days two through five are shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side-by-side. Go-live is the moment your eval pass-rate clears your internal bar.

**Where does this break down at scale?** The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [healthcare.callsphere.tech](https://healthcare.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.