---
title: "Tuning VAD for AI Voice Agents: Thresholds, Hangover, Hysteresis (2026)"
description: "VAD doesn't make ASR more accurate — it controls endpointing latency, barge-in feel, and turn-taking. We tune Silero VAD threshold, prefix_padding, and silence_duration_ms with real production traces."
canonical: https://callsphere.ai/blog/vw8c-vad-voice-activity-detection-tuning-2026
category: "AI Engineering"
tags: ["VAD", "Endpointing", "Voice AI", "Silero", "Latency"]
author: "CallSphere Team"
published: 2026-03-25T00:00:00.000Z
updated: 2026-05-08T17:26:02.471Z
---

# Tuning VAD for AI Voice Agents: Thresholds, Hangover, Hysteresis (2026)

> VAD doesn't make ASR more accurate — it controls endpointing latency, barge-in feel, and turn-taking. We tune Silero VAD threshold, prefix_padding, and silence_duration_ms with real production traces.

> **TL;DR** — VAD's job is not transcription — it's deciding *when the user stopped*. Tune `threshold` for noise, `silence_duration_ms` for endpoint latency, `prefix_padding_ms` for capture safety. Default settings rarely work; build a harness that measures FAR/h and end-of-speech delay distributions per vertical.

## The latency problem

VAD lives at the front of every voice turn. A too-eager VAD endpoints mid-word and you cut the caller off. A too-lazy VAD waits an extra 500ms after silence and you blow the latency budget. Most platforms ship a default that is wrong for *your* acoustic profile.

## Where the milliseconds come from

The VAD adds three controllable delays:

- **Activation threshold** (0.0-1.0) — higher = more noise tolerance, less false-positive triggering
- **prefix_padding_ms** — audio captured *before* speech detection (ensures you don't clip the first phoneme)
- **silence_duration_ms** — silence required before declaring end-of-speech (the hangover)

Typical OpenAI Realtime defaults: threshold 0.5, prefix_padding 300ms, silence_duration 500ms. That 500ms is where most of the "feels slow" complaints originate.
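
For reference, here is roughly how those knobs appear in a Realtime `session.update` event. The field names follow the OpenAI Realtime docs, but treat the payload shape as a sketch to check against the current API version.

```python
import json

# Server-VAD turn detection with the default values quoted above.
session_update = {
    "type": "session.update",
    "session": {
        "turn_detection": {
            "type": "server_vad",
            "threshold": 0.5,            # activation threshold (0.0-1.0)
            "prefix_padding_ms": 300,    # audio kept before speech onset
            "silence_duration_ms": 500,  # hangover before end-of-speech
        }
    }
}
# ws.send(json.dumps(session_update))  # over an open Realtime websocket
```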

```mermaid
flowchart LR
  AUDIO[Audio frames] --> NN[Silero NN]
  NN --> PROB[Speech prob]
  PROB --> THR{"prob > threshold?"}
  THR -->|Yes| ON["Speech ON<br/>capture w/ prefix_padding"]
  THR -->|No| HOLD{"silence > silence_duration_ms?"}
  HOLD -->|Yes| EOS[End-of-speech]
  HOLD -->|No| ON
```

## CallSphere stack

CallSphere uses **server-side VAD on OpenAI Realtime PCM16 24kHz** for Healthcare, plus **Silero VAD** in the FastAPI :8084 path for other verticals. Each of the **6 verticals** ships with a tuned VAD profile (e.g. salons run noisy, so threshold 0.55; legal intake runs quiet, so threshold 0.4). The platform spans **37 agents**, **90+ tools**, and **115+ DB tables**, with pricing at **$149/$499/$1,499**, a **14-day trial**, and a **22% affiliate** program.
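
A per-vertical profile can be as small as a keyed config. In this hypothetical sketch the salon and legal thresholds match the values above; everything else is a placeholder, not a production setting.

```python
# Illustrative per-vertical VAD profiles; only the salon and legal
# thresholds come from the text above, the rest are placeholders.
VAD_PROFILES = {
    "salon": {"threshold": 0.55, "prefix_padding_ms": 300, "silence_duration_ms": 300},
    "legal": {"threshold": 0.40, "prefix_padding_ms": 300, "silence_duration_ms": 500},
    # ...remaining verticals follow the same shape
}

def vad_config(vertical: str) -> dict:
    # fall back to a conservative default profile for unknown verticals
    return VAD_PROFILES.get(vertical, {"threshold": 0.5,
                                       "prefix_padding_ms": 300,
                                       "silence_duration_ms": 500})
```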

[Test the VAD](/demo) or [start a trial](/trial).

## Optimization steps

1. Build a per-vertical labeled dataset of ~500 calls. Measure FAR/h (false-activation rate per hour) and end-of-speech delay.
2. Tune `threshold` for FAR < 5/h on the noisy vertical, then check ASR WER didn't degrade.
3. Drop `silence_duration_ms` from 500ms to 250-300ms for fast-feeling agents; raise it for callers who pause mid-sentence (elderly callers, complex intake).
4. Increase `prefix_padding_ms` if you hear clipped first words; decrease if it's bloating buffers.
5. Add hysteresis (different on/off thresholds) to stop chatter at the boundary; a minimal sketch follows this list.
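
Here is a minimal endpointer sketch covering steps 3-5, assuming ~32ms frames and a per-frame speech probability (e.g. from Silero, which emits one probability per 512-sample chunk at 16kHz). The thresholds and durations are illustrative, not CallSphere's production values.

```python
from collections import deque

FRAME_MS = 32              # assumed frame: 512 samples @ 16 kHz
ON_THRESHOLD = 0.55        # prob must exceed this to flip speech ON
OFF_THRESHOLD = 0.35       # prob must drop below this to count as silence
PREFIX_PADDING_MS = 300    # audio kept from before the ON edge
SILENCE_DURATION_MS = 300  # hangover before declaring end-of-speech

class Endpointer:
    def __init__(self):
        self.speaking = False
        self.silence_ms = 0
        # ring buffer of recent frames so the first phoneme isn't clipped
        self.prefix = deque(maxlen=PREFIX_PADDING_MS // FRAME_MS)
        self.utterance = []

    def push(self, frame_pcm: bytes, prob: float):
        """Feed one frame and its speech probability. Returns the finished
        utterance (list of frames) on end-of-speech, else None."""
        if not self.speaking:
            self.prefix.append(frame_pcm)
            if prob > ON_THRESHOLD:                 # speech onset
                self.speaking = True
                self.silence_ms = 0
                self.utterance = list(self.prefix)  # prefix padding included
            return None
        self.utterance.append(frame_pcm)
        if prob < OFF_THRESHOLD:                    # hysteresis: lower OFF bar
            self.silence_ms += FRAME_MS
            if self.silence_ms >= SILENCE_DURATION_MS:
                done = self.utterance
                self.speaking = False
                self.silence_ms = 0
                self.utterance = []
                self.prefix.clear()
                return done                         # end-of-speech declared
        else:
            self.silence_ms = 0                     # speech resumed; reset hangover
        return None
```

The gap between `ON_THRESHOLD` and `OFF_THRESHOLD` is the hysteresis: a frame must clear 0.55 to start speech but fall below 0.35 to count toward the hangover, which is what stops chatter at the boundary.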

## FAQ

**Q: Does better VAD mean better ASR?**
No. ASR has its own VAD internally. Better VAD = better feel.

**Q: Should I use Silero or WebRTC's built-in VAD?**
Silero — neural, far more robust to noise and music.

**Q: What's a good silence_duration_ms?**
250-400ms for transactional voice agents, 500-700ms for support/intake.

**Q: Does VAD help with barge-in?**
Yes — it's the trigger that detects the user starting to speak during agent TTS.

**Q: How does CallSphere update VAD profiles?**
Weekly recalibration job re-fits thresholds against the last 10k labeled calls per vertical.

## Sources

- [Picovoice — Complete Guide to VAD 2026](https://picovoice.ai/blog/complete-guide-voice-activity-detection-vad/)
- [LiveKit — Turn Detection: VAD, Endpointing, Model-Based](https://livekit.com/blog/turn-detection-voice-agents-vad-endpointing-model-based-detection)
- [OpenAI Realtime VAD Docs](https://developers.openai.com/api/docs/guides/realtime-vad)
- [Silero VAD GitHub](https://github.com/snakers4/silero-vad)

## VAD tuning in production: the cost view

VAD tuning is also a cost-per-conversation problem hiding in plain sight. Once you instrument tokens-in, tokens-out, tool calls, ASR seconds, and TTS seconds against booked revenue per call, the right tradeoff between the Realtime API and an async ASR + LLM + TTS pipeline becomes obvious, and it's almost never the same answer for healthcare as it is for salons.
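
A minimal sketch of that instrumentation, with placeholder rates (plug in your providers' actual pricing):

```python
# Hypothetical per-call cost model; every rate below is a placeholder.
RATES = {
    "tokens_in": 5e-6,    # $/input token
    "tokens_out": 2e-5,   # $/output token
    "asr_sec": 1e-4,      # $/second of transcribed audio
    "tts_sec": 3e-4,      # $/second of synthesized audio
}

def cost_per_call(usage: dict) -> float:
    """usage maps each metered quantity to its count for one call."""
    return sum(usage[k] * RATES[k] for k in RATES)

# Compare pipelines against booked revenue per call:
realtime = cost_per_call({"tokens_in": 6000, "tokens_out": 2000,
                          "asr_sec": 180, "tts_sec": 90})
print(f"realtime path: ${realtime:.4f} per call")
```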

## Shipping the agent to production

Production AI agents live or die on three loops: evals, retries, and handoff state. CallSphere runs **37 agents** across 6 verticals, each with its own eval suite — synthetic call transcripts replayed nightly with assertion checks on extracted entities (date, time, party size, insurance, address). Without that loop, prompt regressions ship silently and you only find out when bookings drop.
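
A single eval case can be as small as the function below; `run_agent` is a stand-in for your replay harness, and the expected entities come from the labeled transcript.

```python
# Hypothetical eval case: replay one synthetic transcript, assert entities.
def eval_case(run_agent, transcript: str, expected: dict) -> bool:
    extracted = run_agent(transcript)  # dict of extracted entities
    return all(extracted.get(k) == v for k, v in expected.items())

# Example assertion for a booking transcript:
# eval_case(run_agent, "Hi, table for four tomorrow at 6:30pm...",
#           {"date": "2026-03-26", "time": "18:30", "party_size": 4})
```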

Structured tools beat free-form text every time. Our **90+ function tools** all enforce JSON schemas validated server-side; if the model hallucinates an integer where a string is required, we retry with a corrective system message before falling back to a deterministic path. For long-running flows, we treat agent handoffs as a state machine — booking → confirmation → SMS — so context survives turn boundaries.
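
A hedged sketch of that validate-then-retry loop, using `jsonschema`; the booking schema and corrective message are illustrative, not CallSphere's actual tool definitions.

```python
from jsonschema import validate, ValidationError

# Illustrative tool-argument schema
BOOKING_SCHEMA = {
    "type": "object",
    "properties": {
        "date": {"type": "string"},
        "time": {"type": "string"},
        "party_size": {"type": "integer"},
    },
    "required": ["date", "time", "party_size"],
}

def call_tool_with_retry(llm, messages, max_retries=2):
    """llm is a stand-in: it returns parsed tool arguments as a dict."""
    for _ in range(max_retries + 1):
        args = llm(messages)
        try:
            validate(instance=args, schema=BOOKING_SCHEMA)
            return args  # schema-clean: safe to execute the tool
        except ValidationError as err:
            # corrective system message, then retry
            messages.append({
                "role": "system",
                "content": f"Tool arguments failed validation: {err.message}. "
                           "Return corrected JSON matching the schema.",
            })
    return None  # caller falls back to a deterministic path
```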

The Realtime API vs. async decision usually comes down to "is the user holding the phone right now?" If yes, Realtime; if no (callback queue, after-hours voicemail), async wins on cost-per-conversation, which we track per agent in **115+ database tables** spanning all 6 verticals.

## FAQ

**How does this apply to a CallSphere pilot specifically?**
Setup runs 3–5 business days, the trial is 14 days with no credit card, and pricing tiers are $149, $499, and $1,499, so a vertical-specific pilot is a same-week decision, not a quarterly project. For a topic like VAD tuning, that means you're not starting from scratch: you're configuring an agent template that's already been hardened across thousands of conversations.

**What does the typical first-week implementation look like?**
Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Days two through five are shadow mode, where the agent transcribes and recommends but a human still answers, so you can compare side by side. Go-live is the moment your eval pass rate clears your internal bar.

**Where does this break down at scale?**
The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [escalation.callsphere.tech](https://escalation.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.

