
Expo + WebRTC for AI Voice Apps (2026): What Works, What Doesn't

Expo Go cannot run WebRTC. EAS Dev Client can. Here is the 2026 config-plugin recipe for shipping AI voice agents on Expo without abandoning the managed workflow.

Expo Go is the cleanest mobile dev loop on Earth, but it cannot run WebRTC. The trade-off in 2026 is to swap Expo Go for an EAS Dev Client and use the @config-plugins/react-native-webrtc plugin. You keep the managed workflow, you lose 30 seconds at the install step, and you gain real audio.

Background

react-native-webrtc requires custom native code (libwebrtc.so/.framework). Expo Go ships a fixed set of native modules and cannot dynamically load new ones — so any RN app that needs WebRTC must move to a Dev Client built with EAS. This has been true since Expo SDK 43, when the @config-plugins/react-native-webrtc plugin was first released. In 2026 the plugin works with Expo SDK 50+ and react-native-webrtc 124.x, with two caveats: it disables Bitcode on iOS (required) and bumps Android minSdkVersion to 24 (which can break older library deps).
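The plugin and permission requirements above are easy to verify before queuing a build. A small preflight sketch (the function name and config shape are illustrative, not part of the plugin's API) that checks a parsed `app.json` for the pieces this post wires in:

```typescript
// Minimal shape of the "expo" object in app.json that we care about here.
type ExpoConfig = {
  plugins?: (string | [string, unknown?])[];
  ios?: { infoPlist?: Record<string, string> };
  android?: { permissions?: string[] };
};

// Returns a list of human-readable problems; empty means the WebRTC
// essentials from this post are present.
function webrtcPreflight(cfg: ExpoConfig): string[] {
  const problems: string[] = [];
  const pluginNames = (cfg.plugins ?? []).map((p) =>
    typeof p === "string" ? p : p[0]
  );
  if (!pluginNames.includes("@config-plugins/react-native-webrtc")) {
    problems.push("missing @config-plugins/react-native-webrtc in plugins");
  }
  if (!cfg.ios?.infoPlist?.NSMicrophoneUsageDescription) {
    problems.push("missing NSMicrophoneUsageDescription in ios.infoPlist");
  }
  if (!cfg.android?.permissions?.includes("RECORD_AUDIO")) {
    problems.push("missing RECORD_AUDIO in android.permissions");
  }
  return problems;
}
```

Running this in CI against the project's `app.json` catches the most common "built a Dev Client, mic silently dead" failure before the 15-minute EAS build, not after.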

For AI voice agent apps, "Dev Client + config plugin" is the cleanest way to keep Expo's OTA update story (EAS Update), the managed AndroidManifest/Info.plist generation, and the React Native WebRTC stack in the same project.

Architecture

```mermaid
flowchart LR
  Dev[Developer] -- expo install --> Project[Expo Project]
  Project -- "@config-plugins/react-native-webrtc" --> EAS[EAS Build]
  EAS -- "Dev Client IPA/APK" --> Device[Physical Device]
  Device -- WebRTC --> Gateway[Pion Go gateway 1.23]
  Gateway -- NATS --> Pod[6-container agent pod]
```

CallSphere implementation

CallSphere uses Expo for prototyping vertical-specific clients before promoting them to bare RN or native:

  • Real Estate (OneRoof) — Internal field-rep prototypes were Expo-first; production app moved to bare RN to integrate native CallKit deeper. See /industries/real-estate.
  • /demo browser path — Web demos are pure Next.js; no Expo. See /demo.
  • Healthcare — Privacy review ruled out Expo Go for clinical-touching code paths; the production app is iOS-native + Kotlin. See /industries/healthcare.

37 agents · 90+ tools · 115+ DB tables · 6 verticals · HIPAA + SOC 2 · $149/$499/$1499 · 14-day /trial · 22% affiliate at /affiliate.

Build steps with code

```bash
npx create-expo-app@latest agent-app
cd agent-app
npx expo install expo-dev-client
npx expo install react-native-webrtc @config-plugins/react-native-webrtc
```

```json
// app.json
{
  "expo": {
    "ios": {
      "infoPlist": {
        "NSMicrophoneUsageDescription": "Voice agent needs mic"
      }
    },
    "android": {
      "permissions": ["RECORD_AUDIO", "MODIFY_AUDIO_SETTINGS"]
    },
    "plugins": [["@config-plugins/react-native-webrtc"]]
  }
}
```
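The build commands in this section also assume an `eas.json` with a development profile that enables the Dev Client. A minimal sketch (run `eas build:configure` to generate a fuller one; `distribution: "internal"` is one common choice, not the only one):

```json
{
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal"
    }
  }
}
```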

```bash
# Build a Dev Client and install on a physical device
eas build --profile development --platform ios
eas build --profile development --platform android
```

After installing the Dev Client, run `npx expo start --dev-client` and the JS hot-reload still works exactly like Expo Go — only the bundled native modules differ.
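From the JS side, react-native-webrtc exposes the same constraint shapes as the browser `getUserMedia` API, so the audio settings worth using for a voice agent can live in a portable helper. A sketch (the helper name and defaults are illustrative choices, not library requirements):

```typescript
// Constraints tuned for a voice agent call. The same object can be passed
// to mediaDevices.getUserMedia in react-native-webrtc or in a browser.
type AudioConstraints = {
  audio: {
    echoCancellation: boolean;
    noiseSuppression: boolean;
    autoGainControl: boolean;
    channelCount: number;
  };
  video: false;
};

function voiceAgentConstraints(): AudioConstraints {
  return {
    audio: {
      // Without echo cancellation the server-side VAD hears the agent's
      // own TTS coming back through the phone speaker and barge-in breaks.
      echoCancellation: true,
      noiseSuppression: true,
      autoGainControl: true,
      // Mono is enough for speech and halves the uplink bitrate.
      channelCount: 1,
    },
    video: false,
  };
}
```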

Still reading? Stop comparing — try CallSphere live.

CallSphere ships complete AI voice agents per industry — 14 tools for healthcare, 10 agents for real estate, 4 specialists for salons. See how it actually handles a call before you book a demo.

Pitfalls

  • Trying to test in Expo Go — instant red error: "react-native-webrtc requires native code". Build a Dev Client.
  • iOS Bitcode build failures — the plugin disables Bitcode for all iOS builds; required.
  • minSdkVersion bump breaking other deps — pin compatible versions of any other module that hardcodes 21.
  • EAS Build cache stuck — clear with `eas build --clear-cache` after the first plugin install.
  • Expo SDK 50 and event-target-shim — Expo SDK 50 used event-target-shim@5; react-native-webrtc needs @6. Upgrade to SDK 51+ or pin shim manually.
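The last pitfall in the list is cheap to gate in CI. A tiny illustrative check (the function name is assumed; feed it the major versions resolved from your lockfile):

```typescript
// Flags the Expo SDK 50 / event-target-shim@5 incompatibility described
// in the pitfalls list. Returns a message, or null when the combo is fine.
function shimConflict(expoSdkMajor: number, shimMajor: number): string | null {
  if (expoSdkMajor <= 50 && shimMajor < 6) {
    return (
      "react-native-webrtc needs event-target-shim@6; " +
      "upgrade to Expo SDK 51+ or pin the shim manually"
    );
  }
  return null;
}
```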

FAQ

Can I still ship EAS Update? Yes — JS bundle updates work fine; native plugin updates require a new Dev Client build.

Does Expo support CallKit/Telecom? Yes via react-native-callkeep with its own config plugin in app.json.
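For CallKit/Telecom the plugins array grows the same way. A sketch of the `app.json` fragment, assuming the community `@config-plugins/react-native-callkeep` package (check its README for the options your app actually needs):

```json
{
  "expo": {
    "plugins": [
      ["@config-plugins/react-native-webrtc"],
      ["@config-plugins/react-native-callkeep"]
    ]
  }
}
```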

Can I avoid Dev Client entirely? No — there is no path to WebRTC inside Expo Go.

How long is an EAS build? First iOS build is 12-25 minutes; subsequent builds with cache are 4-8 minutes.

Does it work with the New Architecture? Yes since react-native-webrtc 118 and Expo SDK 51.


Try CallSphere agents at /demo, see /pricing, or start a /trial.

How this plays out in production

One layer below what *Expo + WebRTC for AI Voice Apps (2026): What Works, What Doesn't* covers, the practical question every team hits is multi-turn handoffs between specialist agents without losing slot state, sentiment, or escalation context. Treat this as a voice-first system from the first prompt: the agent's persona, its tool surface, and its escalation rules all flow from that single decision. Teams that ship fast tend to instrument the loop end-to-end before they tune any single component, because the bottleneck is rarely where intuition puts it.

Voice agent architecture, end to end

A production-grade voice stack at CallSphere stitches Twilio Programmable Voice (PSTN ingress, TwiML, bidirectional Media Streams) to a realtime reasoning layer — typically OpenAI Realtime or ElevenLabs Conversational AI — with sub-second response as a hard SLO. Anything north of one second of perceived silence and callers either repeat themselves or hang up; that single number drives the whole architecture. Server-side VAD with proper barge-in support is non-negotiable, otherwise the agent talks over the caller and the conversation collapses. Streaming TTS with phoneme-aligned interruption keeps the cadence natural even when the user changes their mind mid-sentence.

Post-call, every transcript is run through a structured pipeline: sentiment, intent classification, lead score, escalation flag, and a normalized slot extraction (name, callback number, reason, urgency). For healthcare workloads, the BAA-covered storage path, audit logs, encryption-at-rest, and PHI-safe transcript redaction are wired in from day one, not bolted on at compliance review. The end state is a system where every call produces a row of structured data, not just a recording.
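That normalized slot extraction can be sketched as a plain record plus a normalizer. Field names and rules here are illustrative, not CallSphere's actual schema:

```typescript
// One structured row per call: the slots the post-call pipeline extracts.
type CallSlots = {
  name: string;
  callbackNumber: string; // digits only, ready for E.164 formatting
  reason: string;
  urgency: "low" | "medium" | "high";
};

// Normalizes raw string slots from the transcript-extraction step.
// Unknown urgency values degrade to "low" rather than failing the row.
function normalizeSlots(raw: Record<string, string>): CallSlots {
  const digits = (raw.callbackNumber ?? "").replace(/\D/g, "");
  const u = (raw.urgency ?? "").toLowerCase();
  return {
    name: (raw.name ?? "").trim(),
    callbackNumber: digits,
    reason: (raw.reason ?? "").trim(),
    urgency: u === "high" || u === "medium" ? (u as "high" | "medium") : "low",
  };
}
```

The point of degrading gracefully (empty strings, default urgency) is that a partially filled row still lands in the CRM instead of the whole call vanishing into an error queue.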
FAQ

What is the fastest path to a voice agent the way *Expo + WebRTC for AI Voice Apps (2026): What Works, What Doesn't* describes? Treat the architecture in this post as a starting point and instrument it before you tune it. The metrics that matter most early on are end-to-end latency (target < 1s for voice, < 3s for chat), barge-in correctness, tool-call success rate, and post-conversation lead score distribution. Optimize whatever the data flags as the bottleneck, not whatever feels slowest in your head.

What are the gotchas around voice agent deployments at scale? The two failure modes that bite hardest are silent context loss across multi-turn handoffs and tool calls that succeed in dev but get rate-limited in production. Both are solvable with a proper agent backplane that pins state to a session ID, retries with backoff, and writes every tool invocation to an audit log you can replay.

What does the CallSphere outbound sales calling product do that a regular dialer does not? It uses the ElevenLabs "Sarah" voice, runs up to 5 concurrent outbound calls per operator, and ships with a browser-based dialer that transfers warm calls back to a human in one click. Dispositions, transcripts, and lead scores write back to the CRM automatically.

See it live

Book a 30-minute working session at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting) and bring a real call flow — we will walk it through the live outbound sales dialer at [sales.callsphere.tech](https://sales.callsphere.tech) and show you exactly where the production wiring sits.