---
title: "BigQuery + Pub/Sub for AI Call Analytics: Continuous Queries and ADK Agents in 2026"
description: "BigQuery continuous queries + Pub/Sub direct subscriptions + Vertex AI ADK agents form a fully managed pipeline. We show how to triage calls in real time, with autonomous investigation when sentiment crashes."
canonical: https://callsphere.ai/blog/vw5c-bigquery-pubsub-ai-call-analytics-continuous-queries-2026
category: "AI Infrastructure"
tags: ["BigQuery", "Pub/Sub", "Vertex AI", "ADK", "Continuous Query"]
author: "CallSphere Team"
published: 2026-04-09T00:00:00.000Z
updated: 2026-05-08T17:26:02.700Z
---

# BigQuery + Pub/Sub for AI Call Analytics: Continuous Queries and ADK Agents in 2026

> BigQuery continuous queries + Pub/Sub direct subscriptions + Vertex AI ADK agents form a fully managed pipeline. We show how to triage calls in real time, with autonomous investigation when sentiment crashes.

> **TL;DR** — Pub/Sub BigQuery subscriptions write events directly to BigQuery (no Dataflow). BigQuery continuous queries run forever, exporting matching rows back to Pub/Sub. Wire an ADK agent on the topic and you have an autonomous incident responder for sentiment drops.

## Why this pipeline

GCP shops want managed, not DIY. The 2026 stack is:

- **Pub/Sub direct-to-BigQuery** subscriptions — no Dataflow worker.
- **BigQuery continuous queries** — an `EXPORT DATA` statement submitted in continuous mode keeps a SQL filter running forever.
- **ADK (Agent Development Kit) on Vertex AI** — drop a Pub/Sub topic in front of an agent and it triages.

Together, they're an event-driven AI pipeline with no servers.

## Architecture

```mermaid
flowchart LR
  Voice[Voice agent] -->|JSON| Pub[Pub/Sub topic<br/>call.events]
  Pub -->|BigQuery subscription| BQ[(BigQuery<br/>call_events)]
  BQ -->|continuous query<br/>sentiment below -0.6| ADK[Vertex AI ADK agent]
  ADK -->|investigate + page| Slack[Slack / On-call]
```

The continuous query acts as a perpetual filter; the ADK agent investigates each alert.

## CallSphere implementation

CallSphere — **37 agents · 90+ tools · 115+ DB tables · 6 verticals**. **$149 / $499 / $1499** at [/pricing](/pricing). [14-day trial](/trial), [22% affiliate](/affiliate). The Healthcare ([/industries/healthcare](/industries/healthcare)) sentiment alert pipeline runs on BigQuery continuous queries for customers on GCP. The ADK agent reads the alert topic and runs a 4-step playbook: pull last 10 calls, summarize, post to the founder's Slack, page if 3+ in 5 min. See [/demo](/demo).
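The "page if 3+ in 5 min" rule in that playbook is easy to get wrong under burst traffic. A minimal sliding-window sketch (class and parameter names are our own, not part of any CallSphere API):

```python
from collections import deque

class PageThrottle:
    """Escalate to paging only when `threshold` alerts land within `window_s` seconds."""

    def __init__(self, threshold: int = 3, window_s: float = 300.0):
        self.threshold = threshold
        self.window_s = window_s
        self._hits: deque = deque()  # timestamps of recent alerts

    def should_page(self, now: float) -> bool:
        # Evict alerts that fell out of the window, then record this one.
        while self._hits and now - self._hits[0] > self.window_s:
            self._hits.popleft()
        self._hits.append(now)
        return len(self._hits) >= self.threshold

throttle = PageThrottle()
decisions = [throttle.should_page(t) for t in (0, 60, 120)]  # → [False, False, True]
```

The deque keeps memory bounded to the window: old alerts are evicted before each decision, so a long quiet period followed by a burst is judged only on the burst.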

## Build steps with code

1. **Create a Pub/Sub topic** `call.events` and a BigQuery subscription writing to `call_events` table.
2. **Define the continuous query** with `EXPORT DATA` to `call.alerts` topic.
3. **Build an ADK agent** on Vertex AI with tools for BigQuery read, Slack send, PagerDuty.
4. **Subscribe the ADK agent** to `call.alerts` via Pub/Sub push.
5. **Test** by injecting low-sentiment events.
6. **Add a glossary term** in Dataplex so Conversational Analytics agents understand "sentiment drop."
7. **Monitor** Pub/Sub backlog and BigQuery slot usage.
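Step 5 can be smoke-tested before the full pipeline exists: publish a synthetic low-sentiment event whose fields mirror the `call_events` columns used by the continuous query. A sketch of the payload builder (field names assumed from the SQL below; the actual publish via `google-cloud-pubsub` is shown only in comments):

```python
import json
import uuid
from datetime import datetime, timezone

def make_test_event(sentiment: float = -0.9, vertical: str = "healthcare") -> bytes:
    """Build a synthetic call event that should trip the sentiment filter."""
    event = {
        "call_id": str(uuid.uuid4()),
        "vertical": vertical,
        "sentiment_score": sentiment,
        "transcript_summary": "synthetic low-sentiment test call",
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event).encode("utf-8")

payload = make_test_event()
# Publishing (requires google-cloud-pubsub and GCP credentials):
#   from google.cloud import pubsub_v1
#   publisher = pubsub_v1.PublisherClient()
#   topic = publisher.topic_path("cs-prod", "call.events")
#   publisher.publish(topic, payload).result()
```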

```sql
-- BigQuery continuous query. The SQL itself is a plain EXPORT DATA statement;
-- continuous mode is set when the job is submitted (e.g. `bq query --continuous`
-- against a reservation assigned for continuous queries), not in the WHERE clause.
EXPORT DATA OPTIONS (
  format = 'CLOUD_PUBSUB',
  uri = 'https://pubsub.googleapis.com/projects/cs-prod/topics/call.alerts'
)
AS
SELECT
  call_id, vertical, sentiment_score, transcript_summary, ts
FROM `cs-prod.voice.call_events`
WHERE sentiment_score < -0.6;
```

```python
# Vertex AI ADK agent (sketch). The three tool objects are placeholders for
# your own FunctionTool wrappers around BigQuery, Slack, and PagerDuty clients;
# ADK does not ship prebuilt Slack or PagerDuty tools under these names.
from google.adk.agents import LlmAgent

agent = LlmAgent(
    name="sentiment_triage",
    model="gemini-2.5-flash",
    tools=[bigquery_tool, slack_tool, pagerduty_tool],  # custom FunctionTools
    instruction=(
        "On a sentiment drop event, fetch the last 10 calls, summarize them, "
        "and post to Slack. Page on-call if 3+ events arrive within 5 minutes."
    ),
)
```
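Pub/Sub push (step 4) delivers each alert to the agent's HTTP endpoint as a JSON envelope with a base64-encoded `data` field, so the handler has to unwrap it before the agent sees the event. A minimal decoder, with a simulated request body (envelope shape per the Pub/Sub push contract; endpoint wiring is up to you):

```python
import base64
import json

def decode_push_envelope(body: bytes) -> dict:
    """Unwrap a Pub/Sub push envelope into the original alert event."""
    envelope = json.loads(body)
    data = envelope["message"]["data"]        # base64-encoded payload
    return json.loads(base64.b64decode(data))

# Simulated push request body for one sentiment alert:
alert = {"call_id": "c-123", "sentiment_score": -0.8}
body = json.dumps({
    "message": {
        "data": base64.b64encode(json.dumps(alert).encode()).decode(),
        "messageId": "1",
    },
    "subscription": "projects/cs-prod/subscriptions/call.alerts-push",
}).encode()

decoded = decode_push_envelope(body)  # → {"call_id": "c-123", "sentiment_score": -0.8}
```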

## Pitfalls

- **Pub/Sub subscription with Dataflow** — unnecessary for direct ingest; use BigQuery subscription.
- **Continuous query without retention** — `call_events` grows unbounded; partition by day and expire partitions after 90 days.
- **ADK agent without tool guardrails** — agent that can page on every event becomes alert spam; add throttling.
- **Schema drift between Pub/Sub message and BigQuery table** — version your CloudEvents schema.
- **Forgetting Slot Reservations** — continuous queries hold slots; reserve them.
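The schema-drift pitfall is cheapest to catch at publish time: refuse to emit events missing the fields the BigQuery table (and the continuous query) expect. A small guard, with the field set assumed from the examples above and a `schema_version` field of our own invention:

```python
EXPECTED_VERSION = "1.0"
REQUIRED_FIELDS = {"call_id", "vertical", "sentiment_score", "transcript_summary", "ts"}

def validate_event(event: dict) -> list:
    """Return a list of schema problems; an empty list means safe to publish."""
    problems = []
    if event.get("schema_version") != EXPECTED_VERSION:
        problems.append(f"schema_version != {EXPECTED_VERSION}")
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    return problems

ok = validate_event({
    "schema_version": "1.0", "call_id": "c-1", "vertical": "legal",
    "sentiment_score": 0.2, "transcript_summary": "ok", "ts": "2026-04-09T00:00:00Z",
})  # → []
```

Running this in the publisher (rather than discovering drift as BigQuery subscription dead letters) keeps bad events out of the table entirely.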

## FAQ

**Cost vs. ClickHouse?** BigQuery is more expensive per TB scanned but managed end-to-end. Often cheaper after staffing.

**Latency?** Pub/Sub → BigQuery is ~2–5s; continuous query → topic adds ~1–2s.

**Can we use Gemini instead of GPT-4o-mini?** Yes — `AI.GENERATE_TABLE` works inside BigQuery without a roundtrip.

**HIPAA?** GCP signs BAAs for Pub/Sub, BigQuery, and Vertex AI; redact PII first.

**ADK vs. raw LLM call?** ADK adds tool calling, memory, and observability; worth it for anything multi-step.

## Sources

- [Building Event-Driven Data Agents (Google Cloud Blog)](https://cloud.google.com/blog/topics/developers-practitioners/building-event-driven-data-agents-with-bigquery-pubsub-and-adk)
- [Pub/Sub BigQuery Subscriptions](https://oneuptime.com/blog/post/2026-02-02-pubsub-bigquery/view)
- [Pub/Sub Direct Path to BigQuery](https://cloud.google.com/blog/products/data-analytics/pub-sub-launches-direct-path-to-bigquery-for-streaming-analytics)
- [BigQuery 2026 Guide (Anomaly AI)](https://www.findanomaly.ai/google-bigquery-data-analytics-complete-guide-2026)
- [BigQuery Release Notes](https://docs.cloud.google.com/bigquery/docs/release-notes)

## Production view

This pipeline sits on top of a regional VPC and a cold-start problem you only see at 3am. If your voice stack lives in us-east-1 but your customer is calling from a Sydney mobile network, the round-trip time alone wrecks turn-taking. Multi-region routing, GPU residency, and warm pools become the difference between "natural" and "robotic" — and it's all infra, not the model.

## Serving stack tradeoffs

The big fork is managed (OpenAI Realtime, ElevenLabs Conversational AI) versus self-hosted on GPUs you operate. Managed wins on cold-start, model freshness, and zero-ops; self-hosted wins on unit economics past a certain conversation volume and on data residency for regulated verticals. CallSphere runs hybrid: Realtime for live calls, self-hosted Whisper + a hosted LLM for async, both routed through a Go gateway that enforces per-tenant rate limits.

Latency budgets are non-negotiable on voice. End-to-end target is sub-800ms ASR-to-first-token and sub-1.4s first-audio-out; anything beyond that and turn-taking feels stilted. GPU residency in the same region as your TURN servers matters more than choosing a slightly bigger model.

Observability is the unglamorous backbone — every conversation produces logs, traces, sentiment scoring, and cost attribution piped to a per-tenant dashboard. **HIPAA + SOC 2 aligned** isolation keeps healthcare traffic separated from salon traffic at the storage layer, not just the API.

## FAQ

**Is this realistic for a small business, or is it enterprise-only?**
The IT Helpdesk product is built on ChromaDB for RAG over runbooks, Supabase for auth and storage, and 40+ data models covering tickets, assets, MSP clients, and escalation chains. For a topic like "BigQuery + Pub/Sub for AI Call Analytics: Continuous Queries and ADK Agents in 2026", that means you're not starting from scratch — you're configuring an agent template that's already been hardened across thousands of conversations.

**Which integrations have to be in place before launch?**
Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Day two through five is shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side-by-side. Go-live is the moment your eval pass-rate clears your internal bar.

**Does this keep working as we scale up agents and tools?**
The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [sales.callsphere.tech](https://sales.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.

