AI Engineering

Opus Codec Tuning for AI Voice Agents: 2026 Production Defaults That Actually Help

Most teams ship Opus at WebRTC defaults. AI voice agents need different defaults — tighter jitter buffers, FEC on, DTX off, and a bitrate that respects STT.

Opus at WebRTC defaults is tuned for human-to-human chat. AI voice agents have a different listener — an STT model — and a different deadline (sub-200 ms first turn). Tune accordingly.

What it is and why now

```mermaid
flowchart TD
  Client[Browser] --> Sig[Signaling /ws]
  Sig --> Peer[RTCPeerConnection]
  Peer --> SRTP[(SRTP audio)]
  SRTP --> Edge[Edge node]
  Edge --> LLM[Voice LLM]
  LLM --> Edge
  Edge --> SRTP
```
CallSphere reference architecture

Opus is WebRTC's mandatory wideband codec (RFC 7874 also mandates G.711, but you don't want to feed narrowband G.711 to an STT). It runs from 6 kbps narrowband up to 510 kbps fullband and switches dynamically. The browser pipeline before Opus is: capture → resample to 48 kHz mono → WebRTC APM (AEC, AGC, NS, HPF) → encode.

For AI voice agents in 2026, the defaults that need adjustment are: jitter buffer length (lower), FEC (on), DTX (off for STT), and bitrate (respect downstream model preferences).
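Pinned down as a config sketch — the type and names here are ours, purely illustrative, not a standard API:

```ts
// Illustrative shape for the four defaults above.
interface OpusAgentDefaults {
  maxBitrateBps: number;       // encoder cap, both directions
  inbandFec: boolean;          // useinbandfec on the Opus fmtp line
  dtx: boolean;                // usedtx — keep off so STT sees a continuous stream
  jitterBufferTargetMs: number;
}

const AGENT_DEFAULTS: OpusAgentDefaults = {
  maxBitrateBps: 24_000,
  inbandFec: true,
  dtx: false,
  jitterBufferTargetMs: 30,
};
```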

How WebRTC fits AI voice (architecture)

The Opus path matters at three stages:

  1. Browser → Server: encoder bitrate, FEC, DTX. Tune for STT robustness.
  2. Server → STT: decode to PCM at 16 kHz; most STTs (Deepgram, Whisper) want 16 kHz mono PCM — see the decode sketch after this list.
  3. TTS → Server → Browser: re-encode TTS audio at 24 kHz Opus; matches our `gpt-realtime` output and avoids double-resample artifacts.
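The decode step is a few lines with any libopus binding. A minimal sketch using `@discordjs/opus` (one binding among several; the frame math assumes 20 ms packets):

```ts
import { OpusEncoder } from "@discordjs/opus";

// libopus can decode straight to 16 kHz mono — no separate resample pass.
const decoder = new OpusEncoder(16_000, 1);

// One 20 ms Opus packet in → 320 samples (640 bytes of 16-bit PCM) out.
function opusPacketToPcm16k(packet: Buffer): Buffer {
  return decoder.decode(packet);
}
```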

A common mistake: leaving DTX (Discontinuous Transmission) on means silence is replaced by tiny "comfort noise" packets. STT models hate that — they may interpret comfort noise as speech onsets.
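Concretely, the goal is an Opus `fmtp` line like this in the negotiated SDP (payload type 111 is Chrome's usual Opus mapping; yours may differ):

```
a=rtpmap:111 opus/48000/2
a=fmtp:111 minptime=10;useinbandfec=1;usedtx=0
```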

CallSphere implementation

CallSphere defaults across all 37 agents:

  • Bitrate: 24 kbps both directions; HD enough for STT, cheap on the wire.
  • FEC: enabled in-band; tolerates 5–10% loss without retransmits.
  • DTX: disabled — we feed STT a continuous stream.
  • Jitter buffer: 30 ms target (vs the WebRTC default of 60–100 ms). Shaves a noticeable chunk off perceived latency; see the receiver-side sketch below.
  • WebRTC APM: AEC + NS on; AGC off — our TTS output is already level-consistent, and auto-gain fights it.
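On the browser receive side there is a matching knob — a minimal sketch, assuming Chromium's `jitterBufferTarget` receiver extension (a cast is needed where the TypeScript DOM lib lags the spec):

```ts
// Hint a 30 ms target on every inbound audio receiver. Where the extension
// is unsupported, the property is simply absent and nothing happens.
function hintJitterBuffer(pc: RTCPeerConnection, targetMs = 30): void {
  for (const receiver of pc.getReceivers()) {
    if (receiver.track.kind !== "audio") continue;
    const r = receiver as RTCRtpReceiver & { jitterBufferTarget?: number };
    if ("jitterBufferTarget" in r) r.jitterBufferTarget = targetMs;
  }
}
```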

The 6-container pod (CRM writer, calendar, MLS lookup, SMS, audit, transcript) sees PCM after the gateway decodes Opus. NATS carries 16 kHz frames between containers.
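A sketch of that hop with nats.js (the subject naming is ours, purely illustrative):

```ts
import { connect } from "nats";

// 20 ms of 16 kHz mono 16-bit PCM = 320 samples = 640 bytes per frame.
const FRAME_BYTES = 640;

const nc = await connect({ servers: "nats://localhost:4222" });

function publishPcmFrame(callId: string, frame: Uint8Array): void {
  if (frame.byteLength !== FRAME_BYTES) throw new Error("expected one 20 ms frame");
  nc.publish(`audio.pcm16k.${callId}`, frame);
}
```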

Code snippet (TypeScript, Opus tuning)

```ts
// Capture with AEC + NS on; AGC off — it fights TTS playback levels.
const stream = await navigator.mediaDevices.getUserMedia({
  audio: {
    echoCancellation: true,
    noiseSuppression: true,
    autoGainControl: false,
  },
});
const track = stream.getAudioTracks()[0];

// The original snippet assumed an existing RTCPeerConnection; create one here.
const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver(track, { direction: "sendonly" });

// Cap the encoder at 24 kbps and mark the stream high priority.
const sender = transceiver.sender;
const params = sender.getParameters();
if (!params.encodings?.length) params.encodings = [{}]; // some browsers start empty
Object.assign(params.encodings[0], {
  maxBitrate: 24_000,
  priority: "high",
  networkPriority: "high",
});
await sender.setParameters(params);

// Munge the SDP to force in-band FEC on and DTX off. The regexes assume the
// Opus fmtp line already carries these params; append them if it does not.
const offer = await pc.createOffer();
offer.sdp = offer.sdp!
  .replace(/useinbandfec=0/g, "useinbandfec=1")
  .replace(/usedtx=1/g, "usedtx=0");
await pc.setLocalDescription(offer);
```

Build / migration steps

  1. In `getUserMedia`, enable AEC + NS; turn off AGC unless you need it.
  2. Set encoder `maxBitrate` to 24 kbps for voice-only AI agents.
  3. SDP-munge to set `useinbandfec=1; usedtx=0`.
  4. On the server, decode Opus to 16 kHz PCM before STT; do not pass Opus directly to most STTs.
  5. Aim for a 30 ms jitter buffer target on the server (most SFUs expose a knob).
  6. Measure STT word error rate against a fixed prompt before and after — expect a 5–10% relative improvement. (A minimal WER sketch follows.)
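Step 6 doesn't need a framework — a minimal WER check over a fixed prompt, assuming you already have the reference and hypothesis transcripts as strings:

```ts
// Word error rate: Levenshtein distance over word tokens, divided by
// reference length. Enough to compare before/after codec settings.
function wer(reference: string, hypothesis: string): number {
  const ref = reference.toLowerCase().split(/\s+/).filter(Boolean);
  const hyp = hypothesis.toLowerCase().split(/\s+/).filter(Boolean);
  // dp[i][j] = edits to turn ref[0..i) into hyp[0..j)
  const dp = Array.from({ length: ref.length + 1 }, (_, i) =>
    Array.from({ length: hyp.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0,
    ),
  );
  for (let i = 1; i <= ref.length; i++) {
    for (let j = 1; j <= hyp.length; j++) {
      const sub = ref[i - 1] === hyp[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + sub);
    }
  }
  return dp[ref.length][hyp.length] / ref.length;
}

console.log(wer("book a showing at noon", "book a shoeing at noon")); // 0.2
```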

FAQ

**Will lower bitrate hurt STT?** Below ~16 kbps, yes. 24 kbps is the sweet spot.

**Should I disable AEC for AI?** No — your TTS audio leaks back into the mic without it.

**What about AGC?** Auto-gain often fights TTS levels; disable it in production.

**Does `gpt-realtime` use Opus?** Yes, both directions, 24 kHz on the WebRTC path.

**Can I use a different codec?** Opus is the only wideband codec WebRTC mandates — there is no good reason to leave it.

Hear the tuning in action on /demo. Pricing on /pricing.

## Production view

Opus tuning sounds like a single decision, but in production it splits into eval design, prompt cost, and observability. The deeper you push toward live traffic, the more those three pull against each other — better evals catch silent failures, prompt cost limits how often you can re-run them, and weak observability hides which retries are actually saving conversations versus burning latency budget.

## Shipping the agent to production

Production AI agents live or die on three loops: evals, retries, and handoff state. CallSphere runs **37 agents** across 6 verticals, each with its own eval suite — synthetic call transcripts replayed nightly with assertion checks on extracted entities (date, time, party size, insurance, address). Without that loop, prompt regressions ship silently and you only find out when bookings drop.

Structured tools beat free-form text every time. Our **90+ function tools** all enforce JSON schemas validated server-side; if the model hallucinates an integer where a string is required, we retry with a corrective system message before falling back to a deterministic path. For long-running flows, we treat agent handoffs as a state machine — booking → confirmation → SMS — so context survives turn boundaries.

The Realtime API vs. async decision usually comes down to "is the user holding the phone right now?" If yes, Realtime; if no (callback queue, after-hours voicemail), async wins on cost-per-conversation, which we track per agent in **115+ database tables** spanning all 6 verticals.

## FAQ

**How does this apply to a CallSphere pilot specifically?** CallSphere runs 37 production agents and 90+ function tools across 115+ database tables in 6 verticals, so most workflows you'd want already have a template. For a topic like Opus tuning, that means you're not starting from scratch — you're configuring an agent template that's already been hardened across thousands of conversations.

**What does the typical first-week implementation look like?** Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Days two through five are shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side by side. Go-live is the moment your eval pass rate clears your internal bar.

**Where does this break down at scale?** The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [healthcare.callsphere.tech](https://healthcare.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.