---
title: "Expo + WebRTC for AI Voice Apps (2026): What Works, What Doesn't"
description: "Expo Go cannot run WebRTC. EAS Dev Client can. Here is the 2026 config-plugin recipe for shipping AI voice agents on Expo without abandoning the managed workflow."
canonical: https://callsphere.ai/blog/vw4e-expo-webrtc-ai-voice-limits-2026
category: "AI Voice Agents"
tags: ["Expo", "WebRTC", "Voice AI", "Mobile", "EAS"]
author: "CallSphere Team"
published: 2026-03-27T00:00:00.000Z
updated: 2026-05-08T17:25:15.457Z
---

# Expo + WebRTC for AI Voice Apps (2026): What Works, What Doesn't

> Expo Go cannot run WebRTC. EAS Dev Client can. Here is the 2026 config-plugin recipe for shipping AI voice agents on Expo without abandoning the managed workflow.

> Expo Go is the cleanest mobile dev loop on Earth, but it cannot run WebRTC. The trade-off in 2026 is to swap Expo Go for an EAS Dev Client and use the @config-plugins/react-native-webrtc plugin. You keep the managed workflow, you lose 30 seconds at the install step, and you gain real audio.

## Background

react-native-webrtc requires custom native code (libwebrtc.so/.framework). Expo Go ships a fixed set of native modules and cannot dynamically load new ones — so any RN app that needs WebRTC must move to a Dev Client built with EAS. This has been true since Expo SDK 43, when the @config-plugins/react-native-webrtc plugin was first released. In 2026 the plugin works with Expo SDK 50+ and react-native-webrtc 124.x, with two caveats: it disables Bitcode on iOS (required) and bumps Android minSdkVersion to 24 (which can break older library deps).

For AI voice agent apps, "Dev Client + config plugin" is the cleanest way to keep Expo's OTA update story (EAS Update), the managed AndroidManifest/Info.plist generation, and the React Native WebRTC stack in the same project.

## Architecture

```mermaid
flowchart LR
  Dev[Developer] -- expo install --> Project[Expo Project]
  Project -- @config-plugins/react-native-webrtc --> EAS[EAS Build]
  EAS -- Dev Client IPA/APK --> Device[Physical Device]
  Device -- WebRTC --> Gateway[Pion Go gateway 1.23]
  Gateway -- NATS --> Pod[6-container agent pod]
```

## CallSphere implementation

CallSphere uses Expo for prototyping vertical-specific clients before promoting them to bare RN or native:

- **Real Estate (OneRoof)** — Internal field-rep prototypes were Expo-first; production app moved to bare RN to integrate native CallKit deeper. See [/industries/real-estate](/industries/real-estate).
- **/demo browser path** — Web demos are pure Next.js; no Expo. See [/demo](/demo).
- **Healthcare** — Privacy review ruled out Expo Go for clinical-touching code paths; the production app is iOS-native + Kotlin. See [/industries/healthcare](/industries/healthcare).

37 agents · 90+ tools · 115+ DB tables · 6 verticals · HIPAA + SOC 2 · $149/$499/$1499 · 14-day [/trial](/trial) · 22% affiliate at [/affiliate](/affiliate).

## Build steps with code

```bash
npx create-expo-app@latest agent-app
cd agent-app
npx expo install expo-dev-client
npx expo install react-native-webrtc @config-plugins/react-native-webrtc
```

```json
// app.json
{
  "expo": {
    "ios": { "infoPlist": { "NSMicrophoneUsageDescription": "Voice agent needs mic" } },
    "android": { "permissions": ["RECORD_AUDIO", "MODIFY_AUDIO_SETTINGS"] },
    "plugins": [
      ["@config-plugins/react-native-webrtc"]
    ]
  }
}
```
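If you would rather let the plugin own the permission strings instead of hand-writing `infoPlist`, it also accepts `cameraPermission` and `microphonePermission` props (per its README); a sketch of that variant, with the messages themselves illustrative:

```json
// app.json variant — the plugin writes the permission strings itself
{
  "expo": {
    "plugins": [
      [
        "@config-plugins/react-native-webrtc",
        {
          "cameraPermission": "Allow $(PRODUCT_NAME) to access your camera",
          "microphonePermission": "Voice agent needs mic"
        }
      ]
    ]
  }
}
```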

```bash
# Build a Dev Client and install on a physical device
eas build --profile development --platform ios
eas build --profile development --platform android
```
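Those builds assume an `eas.json` whose development profile turns on the Dev Client; a minimal sketch:

```json
// eas.json — development profile used by the builds above
{
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal"
    }
  }
}
```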

After installing the Dev Client, run `npx expo start --dev-client` and the JS hot-reload still works exactly like Expo Go — only the bundled native modules differ.
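From there the WebRTC surface is the standard react-native-webrtc API. Below is a minimal sketch of capturing the mic and opening a peer connection toward a voice-agent gateway; the `SIGNALING_URL` endpoint and its plain offer/answer exchange are placeholders for illustration, not a CallSphere API.

```typescript
// VoiceSession.ts — minimal mic capture + peer connection sketch (react-native-webrtc)
import { mediaDevices, RTCPeerConnection, RTCSessionDescription } from 'react-native-webrtc';

// Placeholder signaling endpoint; swap in your gateway's real offer/answer exchange.
const SIGNALING_URL = 'https://example.com/webrtc/offer';

export async function startVoiceSession(): Promise<RTCPeerConnection> {
  // Mic only; the permission strings come from app.json (NSMicrophoneUsageDescription / RECORD_AUDIO).
  const stream = await mediaDevices.getUserMedia({ audio: true, video: false });

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Send the local mic track; the remote agent's audio arrives on the peer connection's remote tracks.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Plain HTTP offer/answer for illustration; production gateways usually signal over a socket.
  const offer = await pc.createOffer({});
  await pc.setLocalDescription(offer);

  const res = await fetch(SIGNALING_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(pc.localDescription),
  });
  const answer = await res.json();
  await pc.setRemoteDescription(new RTCSessionDescription(answer));

  return pc;
}
```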

## Pitfalls

- **Trying to test in Expo Go** — instant red error: "react-native-webrtc requires native code". Build a Dev Client.
- **iOS Bitcode build failures** — the plugin disables Bitcode for all iOS builds; required.
- **minSdkVersion bump breaking other deps** — pin compatible versions of any other module that hardcodes 21.
- **EAS Build cache stuck** — clear with `eas build --clear-cache` after the first plugin install.
- **Expo SDK 50 and event-target-shim** — Expo SDK 50 used event-target-shim@5; react-native-webrtc needs @6. Upgrade to SDK 51+ or pin shim manually.
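For that last pitfall, one way to pin the shim without leaving SDK 50 is a package-manager override; a sketch assuming npm's `overrides` field (Yarn classic uses `resolutions`), with the version illustrative and worth checking against your React Native release before committing to it:

```json
// package.json excerpt — force one event-target-shim major so react-native-webrtc resolves v6
{
  "overrides": {
    "event-target-shim": "^6.0.2"
  }
}
```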

## FAQ

**Can I still ship EAS Update?** Yes — JS bundle updates work fine; native plugin updates require a new Dev Client build.

**Does Expo support CallKit/Telecom?** Yes via react-native-callkeep with its own config plugin in app.json.
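For context, a sketch of how that sits next to the WebRTC plugin, assuming the `@config-plugins/react-native-callkeep` package from the same config-plugins monorepo:

```json
// app.json excerpt — CallKit/Telecom alongside WebRTC
{
  "expo": {
    "plugins": [
      "@config-plugins/react-native-webrtc",
      "@config-plugins/react-native-callkeep"
    ]
  }
}
```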

**Can I avoid Dev Client entirely?** No — there is no path to WebRTC inside Expo Go.

**How long is an EAS build?** First iOS build is 12-25 minutes; subsequent builds with cache are 4-8 minutes.

**Does it work with the New Architecture?** Yes since react-native-webrtc 118 and Expo SDK 51.

## Sources

- [https://www.npmjs.com/package/@config-plugins/react-native-webrtc](https://www.npmjs.com/package/@config-plugins/react-native-webrtc)
- [https://github.com/expo/config-plugins/tree/main/packages/react-native-webrtc](https://github.com/expo/config-plugins/tree/main/packages/react-native-webrtc)
- [https://www.daily.co/blog/deploying-webrtc-on-an-expo-react-native-app-2/](https://www.daily.co/blog/deploying-webrtc-on-an-expo-react-native-app-2/)
- [https://github.com/livekit/client-sdk-react-native/wiki/Expo-Development-Build-Instructions](https://github.com/livekit/client-sdk-react-native/wiki/Expo-Development-Build-Instructions)
- [https://react-native-webrtc.github.io/handbook/guides/extra-steps/expo.html](https://react-native-webrtc.github.io/handbook/guides/extra-steps/expo.html)

Try CallSphere agents at [/demo](/demo), see [/pricing](/pricing), or start a [/trial](/trial).

## How this plays out in production

One layer below what *Expo + WebRTC for AI Voice Apps (2026): What Works, What Doesn't* covers, the practical problem every team hits is handing a caller between specialist agents across multiple turns without losing slot state, sentiment, or escalation context. Treat this as a voice-first system from the first prompt: the agent's persona, its tool surface, and its escalation rules all flow from that single decision. Teams that ship fast tend to instrument the loop end to end before they tune any single component, because the bottleneck is rarely where intuition puts it.

## Voice agent architecture, end to end

A production-grade voice stack at CallSphere stitches Twilio Programmable Voice (PSTN ingress, TwiML, bidirectional Media Streams) to a realtime reasoning layer — typically OpenAI Realtime or ElevenLabs Conversational AI — with sub-second response as a hard SLO. Anything north of one second of perceived silence and callers either repeat themselves or hang up; that single number drives the whole architecture.

Server-side VAD with proper barge-in support is non-negotiable, otherwise the agent talks over the caller and the conversation collapses. Streaming TTS with phoneme-aligned interruption keeps the cadence natural even when the user changes their mind mid-sentence.

Post-call, every transcript is run through a structured pipeline: sentiment, intent classification, lead score, escalation flag, and a normalized slot extraction (name, callback number, reason, urgency). For healthcare workloads, the BAA-covered storage path, audit logs, encryption-at-rest, and PHI-safe transcript redaction are wired in from day one, not bolted on at compliance review. The end state is a system where every call produces a row of structured data, not just a recording.
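As a concrete anchor for that post-call pipeline, here is a sketch of the normalized record each call could emit; the field names and enums are illustrative, not the actual CallSphere schema:

```typescript
// PostCallRecord.ts — illustrative shape for the structured post-call pipeline output
export interface PostCallRecord {
  callSid: string;                 // Twilio call identifier
  transcriptUrl: string;           // pointer to the redacted transcript, not the raw audio
  sentiment: 'positive' | 'neutral' | 'negative';
  intent: string;                  // e.g. "book_viewing", "billing_question"
  leadScore: number;               // 0-100, produced by the scoring step
  escalate: boolean;               // true when a human follow-up is required
  slots: {
    name?: string;
    callbackNumber?: string;
    reason?: string;
    urgency?: 'low' | 'medium' | 'high';
  };
}
```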

## Production FAQ

**What is the fastest path to the kind of voice agent *Expo + WebRTC for AI Voice Apps (2026): What Works, What Doesn't* describes?**

Treat the architecture in this post as a starting point and instrument it before you tune it. The metrics that matter most early on are end-to-end latency (target < 1s for voice, < 3s for chat), barge-in correctness, tool-call success rate, and post-conversation lead score distribution. Optimize whatever the data flags as the bottleneck, not whatever feels slowest in your head.

**What are the gotchas around voice agent deployments at scale?**

The two failure modes that bite hardest are silent context loss across multi-turn handoffs and tool calls that succeed in dev but get rate-limited in production. Both are solvable with a proper agent backplane that pins state to a session ID, retries with backoff, and writes every tool invocation to an audit log you can replay.
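A minimal sketch of the second half of that answer: a tool-call wrapper that retries with exponential backoff and writes every attempt to an audit sink you can replay. The `AuditSink` interface is a stand-in, not a CallSphere API.

```typescript
// toolCall.ts — retry-with-backoff wrapper that records every tool invocation
interface AuditSink {
  write(entry: { sessionId: string; tool: string; attempt: number; ok: boolean }): Promise<void>;
}

export async function callToolWithRetry<T>(
  sessionId: string,
  tool: string,
  invoke: () => Promise<T>,
  audit: AuditSink,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const result = await invoke();
      await audit.write({ sessionId, tool, attempt, ok: true });
      return result;
    } catch (err) {
      await audit.write({ sessionId, tool, attempt, ok: false });
      if (attempt === maxAttempts) throw err;
      // Exponential backoff: 500ms, 1s, 2s, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** (attempt - 1)));
    }
  }
  throw new Error('unreachable'); // loop always returns or throws; this satisfies the compiler
}
```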

**What does the CallSphere outbound sales calling product do that a regular dialer does not?**

It uses the ElevenLabs "Sarah" voice, runs up to 5 concurrent outbound calls per operator, and ships with a browser-based dialer that transfers warm calls back to a human in one click. Dispositions, transcripts, and lead scores write back to the CRM automatically.

## See it live

Book a 30-minute working session at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting) and bring a real call flow — we will walk it through the live outbound sales dialer at [sales.callsphere.tech](https://sales.callsphere.tech) and show you exactly where the production wiring sits.

