---
title: "POLQA and PESQ for AI Voice Quality Monitoring in 2026"
description: "POLQA (ITU-T P.863) replaced PESQ for wideband and super-wideband AI voice. Here is how we score MOS in production, why 4.0+ is table stakes, and how to wire perceptual quality into a Twilio call pipeline without a phone lab."
canonical: https://callsphere.ai/blog/vw6d-polqa-pesq-voice-quality-monitoring-2026
category: "AI Infrastructure"
tags: ["POLQA", "PESQ", "MOS", "Voice Quality", "AI Voice", "Twilio"]
author: "CallSphere Team"
published: 2026-03-15T00:00:00.000Z
updated: 2026-05-08T17:26:02.807Z
---

# POLQA and PESQ for AI Voice Quality Monitoring in 2026

> POLQA (ITU-T P.863) replaced PESQ for wideband and super-wideband AI voice. Here is how we score MOS in production, why 4.0+ is table stakes, and how to wire perceptual quality into a Twilio call pipeline without a phone lab.

> Vendors quote MOS like it is a single number. In reality, the perceptual algorithms behind it - PESQ from 2001 and POLQA from 2011 - measure different bandwidths, score differently on VoIP, and disagree by 0.3 to 0.5 points on the same call. If you are running an AI voice agent in 2026 with Opus, EVS, or G.722 wideband audio, PESQ will lie to you. POLQA P.863 v3 is the standard, and you should be sampling it in production, not just at QA time.

## What goes wrong

PESQ (ITU-T P.862) was designed for narrowband telephony. Run it on a wideband Opus stream and it under-scores by 0.3 to 0.6 MOS, flagging perfectly fine audio as degraded. POLQA was developed during 2006-2011 and ships in P.863 third edition (2018) with full-band support for VoLTE, 5G, and OTT codecs - including Opus and EVS. Most AI voice teams still copy-paste PESQ scripts from a 2015 blog post and wonder why their wideband TTS scores 3.4.

The other failure mode is sampling once. A call has a perceptual quality envelope - it can start at MOS 4.5 and degrade to 2.8 mid-call when a Wi-Fi handoff blows the jitter buffer. Single-shot scoring at start-of-call hides the worst minutes.
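To see why single-shot or averaged scoring misleads, here is a minimal sketch (the per-window MOS values are invented for illustration) comparing the call-level mean against the worst ten-second window:

```python
# Per-window MOS for a hypothetical 60-second call: six 10-second
# windows, with a mid-call Wi-Fi handoff degrading windows 3 and 4.
windows = [4.5, 4.4, 2.8, 2.9, 4.3, 4.4]

mean_mos = sum(windows) / len(windows)
worst_mos = min(windows)

print(f"mean MOS:  {mean_mos:.2f}")   # 3.88 -- reads as "fair"
print(f"worst MOS: {worst_mos:.2f}")  # 2.80 -- an incident
```

The average lands in "fair, monitor" territory while two full windows sat below the 3.0 incident line, which is exactly what a start-of-call single shot hides.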

## How to detect

Score every Nth recorded call (we sample 5%) with POLQA against a reference TTS sample injected at the start of the conversation. Use ten-second windows so the time series shows degradation, not just an average. Flag any window below 3.5 MOS as a quality incident. Cross-reference flagged windows with RTCP packet loss and jitter from the same time slice - 80% of MOS drops correlate to a packet loss spike or codec renegotiation.

```mermaid
flowchart TD
    A[Call recorded - Twilio] --> B[Pull reference TTS sample]
    B --> C[Align degraded vs reference]
    C --> D[POLQA P.863 score per 10s window]
    D --> E{MOS below 3.5?}
    E -->|Yes| F[Flag window + RTCP correlate]
    E -->|No| G[Aggregate to call score]
    F --> H[Quality incident dashboard]
    G --> I[Tenant MOS time series]
```
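The flag-and-correlate step can be sketched as follows, assuming hypothetical `Window` records that already carry the POLQA score plus the RTCP stats for the same time slice (the 2% loss-spike threshold is an illustrative choice, not a standard):

```python
from dataclasses import dataclass

FLAG_THRESHOLD = 3.5  # MOS below this counts as a quality incident

@dataclass
class Window:
    start_s: int      # window offset into the call, seconds
    mos: float        # POLQA score for this window
    loss_pct: float   # RTCP packet loss in the same slice
    jitter_ms: float  # RTCP jitter in the same slice

def flag_incidents(windows, loss_spike_pct=2.0):
    """Flag low-MOS windows and note whether a packet-loss spike
    in the same slice plausibly explains the drop."""
    incidents = []
    for w in windows:
        if w.mos < FLAG_THRESHOLD:
            incidents.append({
                "start_s": w.start_s,
                "mos": w.mos,
                "likely_cause": "packet loss"
                    if w.loss_pct >= loss_spike_pct else "unknown",
            })
    return incidents

calls = [
    Window(0, 4.4, 0.1, 12.0),
    Window(10, 3.1, 2.7, 48.0),  # loss spike explains the drop
    Window(20, 3.3, 0.2, 15.0),  # low MOS, no obvious network cause
]
print(flag_incidents(calls))
```

Windows that drop without a correlated network cause are the interesting ones - they tend to point at codec renegotiation or a problem on the TTS side rather than the network.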

## CallSphere implementation

CallSphere runs Twilio Programmable Voice across all six verticals (Healthcare AI, Real Estate AI, Sales Calling AI, Salon AI, IT Helpdesk AI, After-Hours AI) with 37 specialized agents, 90+ integrated tools, and 115+ database tables. We sample POLQA on 5% of calls per tenant, store ten-second MOS windows in a TimescaleDB hypertable, and surface degradation incidents on the tenant admin dashboard. Healthcare tenants on /industries/healthcare get 100% sampling for compliance. Pricing is $149 Starter / $499 Growth / $1499 Scale; quality dashboards ship on Growth and Scale. Affiliates earn 22% on every plan, including quality-driven upsells.

## Build steps

1. Record both legs of the call via Twilio dual-channel recording; archive WAV at 16 kHz minimum (Opus wideband requires this).
2. Inject a known reference TTS sample for the first 1.5 seconds of the agent leg; this becomes your POLQA reference signal.
3. Run POLQA P.863 v3 (commercial license from OPTICOM or HEAD acoustics) on ten-second sliding windows.
4. Persist (call_id, window_start, mos, jitter_ms, loss_pct) to TimescaleDB with a 30-day retention.
5. Wire a Grafana panel showing P50/P95 MOS per tenant per hour; alert below 3.5 P95.
6. Backfill from Twilio Voice Insights for jitter and packet loss to correlate the cause.
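Steps 4 and 5 can be sketched like this; SQLite stands in for TimescaleDB so the example runs anywhere, and the columns mirror step 4 (in production you would add `create_hypertable` and the 30-day retention policy):

```python
import sqlite3

# SQLite as a stand-in for the TimescaleDB hypertable in step 4.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mos_windows (
        call_id      TEXT,
        window_start REAL,   -- seconds into the call
        mos          REAL,
        jitter_ms    REAL,
        loss_pct     REAL
    )
""")
rows = [
    ("CA123", 0.0, 4.4, 12.0, 0.1),
    ("CA123", 10.0, 3.1, 48.0, 2.7),  # degraded window
]
conn.executemany("INSERT INTO mos_windows VALUES (?,?,?,?,?)", rows)

# The Grafana alert in step 5 reduces to this query shape,
# grouped per tenant per hour in the real panel.
bad = conn.execute(
    "SELECT COUNT(*) FROM mos_windows WHERE mos < 3.5"
).fetchone()[0]
print(bad)  # 1
```

The call IDs and values are invented; the point is that the schema is small enough that the alert is a one-line aggregate, not a batch job.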

## FAQ

**Is POLQA really required, or can I keep PESQ?**
PESQ is fine only for narrowband 8 kHz audio. Any modern AI voice stack using Opus, G.722, or EVS needs POLQA. A PESQ-on-Opus score and a POLQA-on-Opus score are not comparable numbers, so vendor comparisons mixing the two are meaningless.

**Do I need a license?**
Yes. POLQA P.863 is licensed by OPTICOM and HEAD acoustics. Single-server commercial licenses are typically $5k to $15k/year. PESQ has open-source ITU-T reference C code.

**What MOS target should I set?**
4.0+ is "good." 3.5 to 4.0 is "fair, monitor." Below 3.5 is "user-noticeable, alert." Below 3.0 is "incident, page on-call."
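Those bands reduce to a small mapping, sketched here with hypothetical label strings that mirror the answer above:

```python
def mos_severity(mos: float) -> str:
    """Map a MOS score to the alerting bands described above.
    Thresholds and labels are illustrative, not a standard."""
    if mos >= 4.0:
        return "good"
    if mos >= 3.5:
        return "fair, monitor"
    if mos >= 3.0:
        return "user-noticeable, alert"
    return "incident, page on-call"

print(mos_severity(4.3))  # good
print(mos_severity(2.8))  # incident, page on-call
```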

**Can I score without a reference?**
Yes - non-intrusive metrics like ITU-T P.563 or NISQA exist. Accuracy is lower (correlation to subjective MOS around 0.7 vs 0.9 for POLQA), but they work on live traffic.

**How often should I sample?**
5% on standard tenants, 100% on healthcare and regulated verticals. Storing every call is cheap; running POLQA on every call is the cost driver.
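Back-of-envelope math, with hypothetical volumes, showing why the sampling rate rather than storage drives POLQA spend:

```python
# Hypothetical tenant volume -- adjust to your own traffic.
calls_per_day = 20_000
sample_rate = 0.05   # 5% standard sampling
avg_call_s = 180
window_s = 10

sampled_calls = int(calls_per_day * sample_rate)
polqa_windows = sampled_calls * (avg_call_s // window_s)

print(sampled_calls, polqa_windows)  # 1000 18000
```

Every recorded call is cheap to keep; 18,000 licensed POLQA invocations per day is the number that scales with your sampling decision, which is why regulated verticals at 100% sampling cost roughly twenty times more to score.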

## Sources

- [Twilio Voice Insights Advanced Features](https://www.twilio.com/docs/voice/voice-insights/advanced-features)
- [POLQA - ITU-T P.863](http://www.polqa.info/)
- [Wikipedia - Perceptual Objective Listening Quality Analysis](https://en.wikipedia.org/wiki/Perceptual_Objective_Listening_Quality_Analysis)
- [POLQA vs PESQ - Operata](https://operata.com/blog/polqa-vs-pesq)

Start a [14-day trial](/trial) with quality scoring on, browse [pricing](/pricing) for 5% vs 100% sampling, or [book a demo](/demo) of the MOS dashboard. Healthcare tenants get 100% sampling on /industries/healthcare; partners earn 22% via the [affiliate program](/affiliate).

## POLQA and PESQ for AI Voice Quality Monitoring in 2026: production view

POLQA and PESQ for AI Voice Quality Monitoring in 2026 forces a tension most teams underestimate: agent handoff state. A single LLM call is easy. A booking agent that hands a confirmed slot to a billing agent that hands a follow-up to an escalation agent — that's where context loss, hallucinated IDs, and double-bookings live. Solving it well means treating the conversation as a stateful workflow, not a chat.

## Serving stack tradeoffs

The big fork is managed (OpenAI Realtime, ElevenLabs Conversational AI) versus self-hosted on GPUs you operate. Managed wins on cold-start, model freshness, and zero-ops; self-hosted wins on unit economics past a certain conversation volume and on data residency for regulated verticals. CallSphere runs hybrid: Realtime for live calls, self-hosted Whisper + a hosted LLM for async, both routed through a Go gateway that enforces per-tenant rate limits.

Latency budgets are non-negotiable on voice. End-to-end target is sub-800ms ASR-to-first-token and sub-1.4s first-audio-out; anything beyond that and turn-taking feels stilted. GPU residency in the same region as your TURN servers matters more than choosing a slightly bigger model.
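A toy budget check, with invented per-stage numbers, against the two targets above:

```python
# Hypothetical per-stage latency budgets in ms; the 800/1400 ms
# targets come from the paragraph above, the splits are illustrative.
budget_ms = {"asr": 250, "llm_first_token": 450, "tts_first_audio": 600}

to_first_token = budget_ms["asr"] + budget_ms["llm_first_token"]
to_first_audio = to_first_token + budget_ms["tts_first_audio"]

print(to_first_token, to_first_audio)  # 700 1300
assert to_first_token < 800 and to_first_audio < 1400
```

Writing the budget down per stage makes regressions attributable: if first-audio creeps past 1.4 s, one of three line items moved, and each maps to a different team.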

Observability is the unglamorous backbone — every conversation produces logs, traces, sentiment scoring, and cost attribution piped to a per-tenant dashboard. **HIPAA + SOC 2 aligned** isolation keeps healthcare traffic separated from salon traffic at the storage layer, not just the API.

## FAQ

**What's the right way to scope the proof-of-concept?**
Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Days two through five are shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side by side. Go-live is the moment your eval pass-rate clears your internal bar.

**How do you handle compliance and data isolation?**
Real Estate runs as a 6-container pod (frontend, gateway, ai-worker, voice-server, NATS event bus, Redis) backed by Postgres `realestate_voice` with row-level security, so multi-tenant data never crosses tenants. For a topic like "POLQA and PESQ for AI Voice Quality Monitoring in 2026", that means you're not starting from scratch — you're configuring an agent template that's already been hardened across thousands of conversations.

**When does it make sense to switch from a managed model to a self-hosted one?**
When unit economics past your conversation volume, or data-residency requirements in regulated verticals, outweigh the managed benefits of cold-start speed and zero-ops. The honest caveat: either way, it scales only until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [salon.callsphere.tech](https://salon.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.

