AI Engineering

Realtime Topic Classification on Voice Calls With Embeddings and BERTopic in 2026

Stream embeddings of transcript chunks into a vector index, then assign each chunk to a topic cluster in under 150 ms. We compare BERTopic, OpenAI embeddings, and bge-large-en-v2 for live call topic routing.

TL;DR — Embed each transcript chunk with text-embedding-3-small or bge-large-en-v2, look up the nearest topic cluster centroid in pgvector, and route. Refresh clusters offline weekly with BERTopic. CallSphere ships this for inbound triage across 6 verticals.

Why this pipeline

Hardcoded keyword routing ("if 'cancel' then escalate") rots the moment customers speak naturally. Topic classification with embeddings adapts: you embed each chunk, find the nearest cluster centroid, and act. BERTopic combines transformer embeddings, HDBSCAN clustering, and c-TF-IDF labels to produce human-readable topics from raw transcripts — perfect for offline cluster training.

The realtime path is just the lookup. Clusters change slowly (weekly), so the heavy compute happens in batch and the hot path is one ANN query.

Hear it before you finish reading

Talk to a live CallSphere AI voice agent in your browser — 60 seconds, no signup.

Try Live Demo →

Architecture

flowchart LR
  Chunk[Transcript chunk] --> Emb[Embedding API<br/>text-embedding-3-small]
  Emb --> ANN[(pgvector / Qdrant<br/>topic_centroids index)]
  ANN -->|nearest topic| Route[Router]
  Route -->|book| Book[Booking agent]
  Route -->|complaint| Esc[Escalation agent]
  Route -->|info| FAQ[FAQ agent]
  Hist[(Historical transcripts<br/>S3 / ClickHouse)] -.weekly.-> BT[BERTopic batch job]
  BT -.refresh.-> ANN

Hot path is sub-150 ms (embedding + ANN). The weekly batch is a single Python script that re-fits BERTopic on the last 90 days of transcripts and upserts new centroids.
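The batch half of that diagram reduces to one helper: BERTopic's fit_transform assigns a topic id to every chunk (with -1 as HDBSCAN's outlier bucket), and each cluster is collapsed to a unit-normalized centroid for the cosine lookup. A minimal sketch, with the helper name ours and the centroid reduction done in plain NumPy:

```python
# After BERTopic's fit_transform returns one topic id per chunk
# (-1 = HDBSCAN outlier bucket), reduce each cluster to a single
# centroid that the hot path can match with cosine similarity.
import numpy as np

def topic_centroids(topics: list[int], embeddings: np.ndarray) -> dict[int, np.ndarray]:
    ids = np.asarray(topics)
    centroids = {}
    for topic_id in sorted(set(topics)):
        if topic_id == -1:  # skip outliers; they fall back to the generalist
            continue
        mean = embeddings[ids == topic_id].mean(axis=0)
        # Unit-normalize so cosine similarity against chunk embeddings is stable
        centroids[topic_id] = mean / np.linalg.norm(mean)
    return centroids
```

The upsert step then writes each centroid (keyed by a stable label, not the raw cluster id) into topic_centroids.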

CallSphere implementation

CallSphere runs 37 agents, 90+ tools, and 115+ DB tables across 6 verticals. Pricing is $149 / $499 / $1499 at /pricing, with a 14-day trial and a 22% affiliate program. The Healthcare vertical at /industries/healthcare uses 18 topic clusters (booking, refill, billing, complaint, ...); the router emits a topic plus a confidence score into the orchestrator, which picks the right specialist agent. Try it at /demo.

Build steps with code

  1. Pick an embedding model: text-embedding-3-small (1536-d, $0.02 per 1M tokens) is the 2026 default; on-prem, use bge-large-en-v2.
  2. Spin up pgvector or Qdrant for ANN with HNSW index.
  3. Train BERTopic on the last 90 days of transcripts; export the centroid embeddings + labels.
  4. Upsert centroids into pgvector with topic_id and human label.
  5. At runtime, embed the chunk, find the nearest centroid (cosine), and emit the topic to the orchestrator.
  6. Track confidence — fall back to a generalist agent when cosine_sim < 0.55.
  7. Re-train weekly on a cron; ship new centroids with versioned topic IDs so router state is stable.
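Steps 2 and 4 reduce to a small amount of DDL. A sketch, assuming pgvector and the table the runtime query reads (column names match the SELECT below; the m value is the one recommended under pitfalls):

```sql
-- Centroid table: one row per topic, refreshed by the weekly batch job
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE topic_centroids (
    topic_id  text PRIMARY KEY,   -- stable string label, survives re-training
    label     text NOT NULL,
    embedding vector(1536)        -- text-embedding-3-small dimensionality
);

-- HNSW index for cosine distance lookups
CREATE INDEX ON topic_centroids
    USING hnsw (embedding vector_cosine_ops) WITH (m = 16);
```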
from openai import OpenAI
from sqlalchemy import text

ai = OpenAI()

def classify(chunk_text: str, db) -> dict:
    emb = ai.embeddings.create(
        model="text-embedding-3-small",
        input=chunk_text,
    ).data[0].embedding
    # pgvector expects a vector literal, so serialize the Python list
    vec = "[" + ",".join(map(str, emb)) + "]"
    row = db.execute(text("""
        SELECT topic_id, label, 1 - (embedding <=> CAST(:v AS vector)) AS sim
        FROM topic_centroids
        ORDER BY embedding <=> CAST(:v AS vector)
        LIMIT 1
    """), {"v": vec}).first()
    return {"topic": row.label, "confidence": row.sim}
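Step 6 is then a thin shim over that classify() result: below the cosine threshold, the chunk goes to a generalist agent instead of a specialist. A minimal sketch, assuming the specialist lookup is a plain dict keyed by topic label (names are illustrative):

```python
# Route on topic confidence: below the threshold, or for a topic with no
# registered specialist, fall back to the generalist agent.
FALLBACK_THRESHOLD = 0.55

def route(classification: dict, specialists: dict, generalist):
    if classification["confidence"] < FALLBACK_THRESHOLD:
        return generalist
    return specialists.get(classification["topic"], generalist)
```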

Pitfalls

  • Re-fitting BERTopic on every transcript — far too slow; do it weekly and only if drift > 5%.
  • Cosine threshold too low — 0.45 lets nonsense match; 0.55+ is sane for short utterances.
  • Single embedding for full call — embed per chunk; a single call routinely mixes several topics.
  • No versioning of topic IDs — re-train shifts cluster IDs; always use stable string labels.
  • Skipping HNSW index — a flat scan dies past ~500k vectors (fine for a handful of centroids, but not if you also index raw chunk embeddings); build HNSW with m=16.

FAQ

Why not classify with GPT-4o-mini directly? It works, but at 50k calls/day the embedding-based approach is 10x cheaper and faster.

How many topics? Start with 12–20; more than 30 makes the router itself ambiguous.


Embeddings for 90-day transcripts — how big? ~5M chunks × 1536 floats × 4 bytes = 30 GB; pgvector handles it on a single 32-GB instance.
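That estimate is just chunks × dimensions × 4 bytes per float32, which is worth a one-liner when you size the instance:

```python
# Back-of-envelope storage for float32 embeddings, in GB
def embedding_storage_gb(chunks: int, dims: int = 1536, bytes_per_float: int = 4) -> float:
    return chunks * dims * bytes_per_float / 1e9

# 5M chunks at 1536-d float32 is about 30.7 GB
```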

Can we use this for compliance routing? Yes — flag a topic like hipaa.disclosure with a higher escalation threshold.

Drift detection? Weekly job computes silhouette score; alert if < 0.3.

Production view

Realtime topic classification on voice calls ultimately resolves into one engineering question: when do you use the OpenAI Realtime API versus an async pipeline? Realtime wins on latency for live calls. Async wins on cost, retries, and structured tool reliability for callbacks and SMS flows. Most teams need both, and the routing layer between them becomes the most load-bearing piece of the stack.

Shipping the agent to production

Production AI agents live or die on three loops: evals, retries, and handoff state. CallSphere runs 37 agents across 6 verticals, each with its own eval suite: synthetic call transcripts replayed nightly with assertion checks on extracted entities (date, time, party size, insurance, address). Without that loop, prompt regressions ship silently and you only find out when bookings drop.

Structured tools beat free-form text every time. Our 90+ function tools all enforce JSON schemas validated server-side; if the model hallucinates an integer where a string is required, we retry with a corrective system message before falling back to a deterministic path. For long-running flows, we treat agent handoffs as a state machine (booking → confirmation → SMS) so context survives turn boundaries. The Realtime-vs-async decision usually comes down to "is the user holding the phone right now?" If yes, Realtime; if not (callback queue, after-hours voicemail), async wins on cost per conversation, which we track per agent across 115+ database tables spanning all 6 verticals.

More FAQ

Is this realistic for a small business, or is it enterprise-only? It is not enterprise-only: 57+ languages are supported out of the box, and the platform is HIPAA and SOC 2 aligned, which removes most of the procurement friction in regulated verticals. You are not starting from scratch; you are configuring an agent template that has already been hardened across thousands of conversations.

Which integrations have to be in place before launch? Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Days two through five are shadow mode, where the agent transcribes and recommends but a human still answers, so you can compare side by side. Go-live is the moment your eval pass rate clears your internal bar.

Does this keep working as call volume grows? The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest (observability, retries, multi-region routing) without your team owning the GPU layer.

Talk to us

Want to see how this maps to your stack? Book a live walkthrough at calendly.com/sagar-callsphere/new-meeting, or try the vertical-specific demo at urackit.callsphere.tech. 14-day trial, no credit card, pilot live in 3-5 business days.

Try CallSphere AI Voice Agents

See how AI voice agents work for your industry. Live demo available; no signup required.