Android AudioRecord + AI Voice Streaming (2026): The Low-Latency Path
Android's AudioRecord API is the lowest-level mic access for AI voice apps. Here is how to configure it for 16 kHz PCM, hardware AEC, and sub-300 ms streaming.
If you bypass WebRTC and stream raw PCM directly to a Realtime API, AudioRecord is the API you reach for. With the right config (VOICE_COMMUNICATION source, 16 kHz mono, hardware AEC) you can hit sub-300 ms first-byte latency on a mid-range Android.
Background
For AI voice agents that target OpenAI Realtime, Gemini Live, or a similar streaming endpoint with its own protocol, WebRTC is overkill. WebRTC.ventures took this path in their February 2026 Voice AI Android prototype: AudioRecord + WebSockets straight to a Python backend that proxied Gemini 2.0. Audio is captured as mono PCM at 16 kHz, 16-bit, with hardware AEC enabled — the canonical input format for streaming voice models.
This pattern is materially simpler than full WebRTC: no SDP, no ICE, no SFU, no DTLS-SRTP. The trade-off is that you handle reconnection, jitter buffering, and codec negotiation yourself. For one-to-one user-to-AI voice, that is often a good trade.
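Since reconnection is on you, a minimal exponential-backoff schedule is the usual starting point. The constants below are illustrative assumptions, not values from any SDK:

```kotlin
import kotlin.math.min

// Illustrative reconnect schedule: double the delay each attempt,
// capped so a long outage doesn't push retries minutes apart.
// BASE_MS and CAP_MS are assumed values, tune for your backend.
const val BASE_MS = 500L
const val CAP_MS = 15_000L

fun backoffMs(attempt: Int): Long =
    min(BASE_MS shl attempt.coerceAtMost(10), CAP_MS)

// backoffMs(0) == 500, backoffMs(2) == 2000, backoffMs(8) == 15_000 (capped)
```

Add jitter (a random fraction of the delay) in production so a fleet of clients doesn't reconnect in lockstep.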
Architecture
```mermaid
flowchart LR
    Mic[Mic] --> AudioRecord["AudioRecord<br/>MediaRecorder.AudioSource.VOICE_COMMUNICATION"]
    AudioRecord --> Buffer["ByteBuffer<br/>16 kHz mono PCM"]
    Buffer --> WS[WebSocket]
    WS --> Backend["Realtime API Proxy"]
    Backend --> AudioTrack["AudioTrack playback"]
    AudioTrack --> Speaker[Speaker]
```
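The raw PCM leg of this pipeline is easy to budget. At 16 kHz, 16-bit, mono, the uncompressed stream is a fixed, modest rate — a quick sanity calculation:

```kotlin
// Raw PCM bandwidth for the 16 kHz / 16-bit / mono leg of the pipeline.
fun pcmBytesPerSecond(sampleRate: Int, bytesPerSample: Int, channels: Int): Int =
    sampleRate * bytesPerSample * channels

// pcmBytesPerSecond(16_000, 2, 1) == 32_000 bytes/s, i.e. 256 kbit/s,
// which is trivial for any mobile uplink and why uncompressed PCM over
// a WebSocket is viable without a codec.
```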
CallSphere implementation
CallSphere's mobile clients use AudioRecord in two specific places, while WebRTC handles the rest:
Hear it before you finish reading
Talk to a live CallSphere AI voice agent in your browser — 60 seconds, no signup.
- Healthcare — The HIPAA path uses OpenAI Realtime over a controlled WebSocket. AudioRecord is used in the in-clinic kiosk Android client because it gives us frame-accurate timestamps for compliance audit. See /industries/healthcare.
- Real Estate (OneRoof) — Production agent path is full WebRTC to the Pion Go gateway 1.23 + NATS + 6-container pod. AudioRecord-based sampling is used only in the dictation-only flow. See /industries/real-estate.
- /demo browser path — Same agents, plain Chrome. See /demo.
37 agents · 90+ tools · 115+ DB tables · 6 verticals · HIPAA + SOC 2 · $149/$499/$1499 · 14-day /trial · 22% affiliate at /affiliate.
Build steps with code
```kotlin
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
import okhttp3.WebSocket
import okio.ByteString.Companion.toByteString

class VoiceStreamer {
    private val sampleRate = 16_000
    private val channelConfig = AudioFormat.CHANNEL_IN_MONO
    private val audioFormat = AudioFormat.ENCODING_PCM_16BIT
    private val bufferSize =
        AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat) * 2

    private val recorder = AudioRecord(
        MediaRecorder.AudioSource.VOICE_COMMUNICATION, // hardware AEC path
        sampleRate,
        channelConfig,
        audioFormat,
        bufferSize
    )

    fun start(socket: WebSocket) {
        recorder.startRecording()
        Thread {
            val buf = ByteArray(bufferSize)
            while (!Thread.interrupted()) {
                val read = recorder.read(buf, 0, buf.size)
                if (read > 0) socket.send(buf.copyOf(read).toByteString())
            }
        }.start()
    }
}
```
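The read loop above ships whatever `read()` happens to return. Many realtime endpoints work better with fixed-size frames; 20 ms is a common choice, though the frame length here is an assumption, not a requirement of any specific API. A pure-Kotlin sketch of a chunker that the loop could feed:

```kotlin
// Split a PCM buffer into fixed-size frames. At 16 kHz / 16-bit mono,
// one 20 ms frame is 16_000 * 2 * 20 / 1000 = 640 bytes.
fun chunkPcm(pcm: ByteArray, frameBytes: Int = 640): Pair<List<ByteArray>, ByteArray> {
    val frames = mutableListOf<ByteArray>()
    var off = 0
    while (off + frameBytes <= pcm.size) {
        frames += pcm.copyOfRange(off, off + frameBytes)
        off += frameBytes
    }
    // The tail (< one frame) should be prepended to the next read, not dropped.
    return frames to pcm.copyOfRange(off, pcm.size)
}
```

Sending uniform frames keeps server-side jitter buffers simple and makes per-frame timestamps trivial to compute (frame index × 20 ms).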
The manifest needs mic permission and, for capture outside the foreground activity on Android 14+, a `microphone` foreground service type (service class name below is illustrative):

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE_MICROPHONE" />

<!-- Service name is illustrative; use your own capture service class. -->
<service
    android:name=".VoiceCaptureService"
    android:foregroundServiceType="microphone" />
```
Still reading? Stop comparing — try CallSphere live.
CallSphere ships complete AI voice agents per industry — 14 tools for healthcare, 10 agents for real estate, 4 specialists for salons. See how it actually handles a call before you book a demo.
Pitfalls
- Using AudioSource.MIC instead of VOICE_COMMUNICATION — `MIC` skips hardware AEC and you get echo from the speaker.
- Ignoring getMinBufferSize() return value — On some OEMs the minimum is large enough that doubling is wasteful; on others it is so small it underruns.
- Not enabling AcousticEchoCanceler / NoiseSuppressor when available — Some hardware paths still require explicit `.create(audioSessionId)` even with VOICE_COMMUNICATION.
- Holding the mic in Doze mode — Android kills background capture once your process leaves the foreground; run capture inside a foreground service with type `microphone`.
- Mismatching sample rate to the model — OpenAI Realtime expects 24 kHz; Gemini Live expects 16 kHz. Pick one per backend and stick with it.
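On the buffer-size pitfall: rather than blindly doubling `getMinBufferSize()`, size the buffer against the frame length you actually stream. `chooseBufferSize` below is a hypothetical helper, not an Android API, and the two-frame floor is an assumption:

```kotlin
const val BYTES_PER_SAMPLE = 2 // 16-bit PCM
const val FRAME_MS = 20        // one network frame (assumed)

fun frameBytes(sampleRate: Int): Int =
    sampleRate * BYTES_PER_SAMPLE * FRAME_MS / 1000

// Hold at least two frames so a scheduling hiccup doesn't underrun,
// but don't double an OEM minimum that is already generous.
fun chooseBufferSize(minBufferSize: Int, sampleRate: Int): Int =
    maxOf(minBufferSize, 2 * frameBytes(sampleRate))

// frameBytes(16_000) == 640
// chooseBufferSize(512, 16_000) == 1280   (tiny OEM minimum raised)
// chooseBufferSize(8192, 16_000) == 8192  (large OEM minimum kept, not doubled)
```

Pass the result as the `bufferSizeInBytes` argument of the `AudioRecord` constructor in place of the fixed `* 2`.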
FAQ
Is AudioRecord faster than WebRTC? First-byte latency: yes, by 50-100 ms. End-to-end: depends on backend.
Does it support echo cancellation on cheap phones? Hardware AEC quality varies; pair with a software AEC fallback.
Can I use it with WebRTC? No — WebRTC owns the mic via its own AudioDeviceModule; you can have only one.
Does it work in the background? Only inside a foreground service of type `microphone` on Android 14+.
What sample rate should I use? 16 kHz for Gemini Live, 24 kHz for OpenAI Realtime, 48 kHz for music or non-voice.
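When the hardware only opens cleanly at 48 kHz but the model wants 16 kHz, you can decimate by 3 on-device. The sketch below averages each group of three samples as a crude anti-alias filter; production code should use a proper low-pass filter or resampler instead:

```kotlin
// Crude 48 kHz -> 16 kHz downsampler: average each group of 3 samples.
// Averaging is a weak low-pass; it's a sketch, not production DSP.
fun downsample48to16(input: ShortArray): ShortArray {
    val out = ShortArray(input.size / 3)
    for (i in out.indices) {
        val base = i * 3
        val avg = (input[base].toInt() + input[base + 1] + input[base + 2]) / 3
        out[i] = avg.toShort()
    }
    return out
}
```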
Sources
- https://webrtc.ventures/2026/02/blog-voice-ai-android-app-gemini-prototype/
- https://developer.android.com/reference/android/media/AudioRecord
- https://openai.com/index/delivering-low-latency-voice-ai-at-scale/
- https://www.marktechpost.com/2026/03/26/google-releases-gemini-3-1-flash-live-a-real-time-multimodal-voice-model-for-low-latency-audio-video-and-tool-use-for-ai-agents/
- https://deepgram.com/learn/low-latency-voice-ai
Try CallSphere voice agents at /demo, see /pricing, or start a /trial.
Try CallSphere AI Voice Agents
See how AI voice agents work for your industry. Live demo available -- no signup required.