---
title: "ONNX Runtime + WebGPU for Browser Voice Agents (No Server, Sub-100ms)"
description: "Run Whisper, Kokoro, and LFM2.5-Audio entirely in the browser with ONNX Runtime Web + WebGPU. Flash Attention, qMoE, sub-100ms latency on a laptop. Privacy-first voice without a backend."
canonical: https://callsphere.ai/blog/vw6c-onnx-runtime-webgpu-browser-voice-2026
category: "AI Engineering"
tags: ["ONNX", "WebGPU", "Browser", "Voice", "Privacy"]
author: "CallSphere Team"
published: 2026-04-21T00:00:00.000Z
updated: 2026-05-08T17:26:02.267Z
---

# ONNX Runtime + WebGPU for Browser Voice Agents (No Server, Sub-100ms)

> Run Whisper, Kokoro, and LFM2.5-Audio entirely in the browser with ONNX Runtime Web + WebGPU. Flash Attention, qMoE, sub-100ms latency on a laptop. Privacy-first voice without a backend.

> **TL;DR** — ONNX Runtime Web + WebGPU has matured enough in 2026 to run Whisper-tiny + Kokoro + LFM2.5-Audio-1.5B entirely in the browser tab. Recent updates: Flash Attention, graph capture, Split-K MatMul, qMoE support. WebGPU ships in Chrome/Edge 113+, Safari 26+, and recent Firefox (stable on Windows since 141). Result: a voice agent that sends **zero audio** to your servers — perfect for HIPAA, GDPR, or pure latency.

## Why browser inference for voice

- **Zero network round-trip** for STT/TTS — only the LLM tokens (or none, if you run a small SLM locally).
- **Privacy by construction** — audio never leaves the device.
- **Cost** — your users pay for the GPU.
- **Offline** — works on a plane.

## Architecture

```mermaid
flowchart LR
  MIC[Mic Stream] --> ORT[ONNX Runtime Web]
  ORT -->|WebGPU| STT[Whisper-tiny INT8]
  STT -->|text| LLM{LLM Choice}
  LLM -->|local| LFM[LFM2.5-Audio 1.5B in-browser]
  LLM -->|remote| API[Cloudflare / Groq]
  LFM & API -->|tokens| TTS[Kokoro-82M WebGPU]
  TTS -->|PCM| OUT[Audio Output]
```

## CallSphere stack with ONNX Web

CallSphere ships a **Browser Voice Widget** that runs Whisper-tiny + Kokoro fully in-browser, then calls our remote LLM only for the language step. **37 agents · 90+ tools · 115+ DB tables · 6 verticals.** Plans: **$149 / $499 / $1,499**, 14-day [/trial](/trial), 22% affiliate via [/affiliate](/affiliate).

## Build steps

1. `npm install onnxruntime-web@1.21` (or newer; the recent builds carry the WebGPU EP improvements).
2. Convert Whisper-tiny to ONNX with `optimum-cli export onnx --model openai/whisper-tiny --task automatic-speech-recognition whisper-tiny-onnx/` (the output directory is a required argument).
3. Quantize to INT8 with `onnxruntime.quantization` for 4× smaller download.
4. In your page: `const session = await ort.InferenceSession.create(url, { executionProviders: ['webgpu'] });`, as in the session sketch after this list.
5. Use a Web Audio API `AudioWorklet` to capture 16 kHz mono frames and feed them into the Whisper session (audio sketch below).
6. Pipe the Whisper transcript to a remote LLM (Cloudflare Workers AI is typically the lowest-latency remote option since it runs at the edge), or to in-browser LFM2.5-Audio for fully offline operation.
7. Pipe LLM tokens into the Kokoro ONNX session and play the resulting PCM via an `AudioBufferSourceNode` (also in the audio sketch below).
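
A minimal sketch of step 4, with a WASM fallback for browsers without WebGPU. The model path is a placeholder, and depending on the onnxruntime-web build you may need to import the `onnxruntime-web/webgpu` bundle instead:

```ts
import * as ort from 'onnxruntime-web';

// Placeholder path: wherever you host the INT8 Whisper-tiny export.
const MODEL_URL = '/models/whisper-tiny-encoder-int8.onnx';

export async function createSttSession(): Promise<ort.InferenceSession> {
  try {
    // Prefer the WebGPU execution provider; create() rejects if unavailable.
    return await ort.InferenceSession.create(MODEL_URL, {
      executionProviders: ['webgpu'],
      graphOptimizationLevel: 'all',
    });
  } catch {
    // No WebGPU (or shader compilation failed): fall back to the WASM EP.
    return ort.InferenceSession.create(MODEL_URL, {
      executionProviders: ['wasm'],
    });
  }
}
```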
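
And a sketch of the audio plumbing for steps 5 and 7. The `pcm-capture` worklet name and `/pcm-capture.js` module are hypothetical; you would register an `AudioWorkletProcessor` there that posts mono `Float32Array` frames back to the main thread:

```ts
// Whisper expects 16 kHz mono input; Kokoro emits 24 kHz PCM.
const ctx = new AudioContext({ sampleRate: 16_000 });

export async function startCapture(onFrame: (pcm: Float32Array) => void) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  await ctx.audioWorklet.addModule('/pcm-capture.js'); // hypothetical module
  const source = ctx.createMediaStreamSource(stream);
  const worklet = new AudioWorkletNode(ctx, 'pcm-capture');
  worklet.port.onmessage = (e: MessageEvent<Float32Array>) => onFrame(e.data);
  source.connect(worklet);
}

// Step 7: play a Float32Array of PCM samples from the Kokoro session.
export function playPcm(pcm: Float32Array, sampleRate = 24_000) {
  const buffer = ctx.createBuffer(1, pcm.length, sampleRate);
  buffer.copyToChannel(pcm, 0);
  const node = ctx.createBufferSource();
  node.buffer = buffer;
  node.connect(ctx.destination);
  node.start(); // Web Audio resamples 24 kHz into the 16 kHz context
}
```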

## Pitfalls

- **WebGPU adoption** — Chrome/Edge 113+ ship WebGPU; Safari support arrived in Safari 26 (2025), and mobile Safari still has shader-compile bugs as of 2026. Test on real devices.
- **Model size** — Whisper-tiny is 39M params (≈80MB INT8). Kokoro 82M is ≈170MB. Lazy-load.
- **First-token latency** — Initial WebGPU pipeline compile is 2–5s; warm with a silent decode on page load (sketch after this list).
- **Memory pressure** on iOS — keep total VRAM use under 1GB or iOS will kill the tab.
- **Microphone permissions** — UX matters; explain why before asking.
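
A minimal warm-up sketch for the first-token pitfall, assuming the Optimum export where the encoder input is named `input_features` with shape `[batch, 80, 3000]` (check your own export's input names):

```ts
import * as ort from 'onnxruntime-web';

// Run one silent decode on page load so WebGPU compiles its pipelines early.
export async function warmUp(session: ort.InferenceSession): Promise<void> {
  const silence = new ort.Tensor(
    'float32',
    new Float32Array(80 * 3000), // all zeros: 30 s of "silence" as log-mel frames
    [1, 80, 3000],
  );
  await session.run({ input_features: silence });
}
```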

## FAQ

**Q: Can I run Llama in-browser?**
A: Small models, yes — 1–3B-class models such as Llama-3.2-1B, LFM2.5-Audio-1.5B (Liquid AI), and Phi-3.5-mini run on WebGPU at roughly 15–30 tok/s on an M2 MacBook.

**Q: Whisper-large?**
A: Too big (1.6GB). Stick with tiny/base in-browser; route long-form to a server.

**Q: HIPAA?**
A: In-browser inference means audio never leaves the device — ideal for [/industries/healthcare](/industries/healthcare).

**Q: Mobile?**
A: WebGPU ships by default in iOS 26 Safari (behind a feature flag on earlier iOS) and in Android Chrome 121+. Performance varies wildly by GPU.

**Q: Cost?**
A: Free at runtime — users pay. CallSphere widget is included on Growth plan and above ([/pricing](/pricing)).

## Sources

- [ONNX Runtime WebGPU docs](https://onnxruntime.ai/docs/tutorials/web/ep-webgpu.html)
- [ONNX Runtime Web NPM](https://www.npmjs.com/package/onnxruntime-web)
- [WebGPU + ORT inference guides (Modexa)](https://medium.com/@Modexa/8-webgpu-onnx-runtime-web-inference-guides-4220cff29ad8)
- [State of On-Device AI in Browser](https://blog.openreplay.com/on-device-ai-browser/)
- [Liquid LFM2.5 cookbook](https://deepwiki.com/Liquid4All/cookbook/6.4-webgpu-browser-demos)

## ONNX Runtime + WebGPU for Browser Voice Agents (No Server, Sub-100ms): production view

Browser-side voice ultimately resolves into one engineering question: when do you use the OpenAI Realtime API versus an async pipeline? Realtime wins on latency for live calls. Async wins on cost, retries, and structured tool reliability for callbacks and SMS flows. Most teams need both, and the routing layer between them becomes the most load-bearing piece of the stack.

## Shipping the agent to production

Production AI agents live or die on three loops: evals, retries, and handoff state. CallSphere runs **37 agents** across 6 verticals, each with its own eval suite — synthetic call transcripts replayed nightly with assertion checks on extracted entities (date, time, party size, insurance, address). Without that loop, prompt regressions ship silently and you only find out when bookings drop.
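
As an illustration (not CallSphere's internal code), a replayed eval case can be as small as one typed assertion per extracted entity:

```ts
// Hypothetical shape of one nightly eval case: replay a stored transcript
// through the extraction step, then assert on each entity separately so a
// failure pinpoints which field regressed.
interface BookingEntities {
  date: string;
  time: string;
  partySize: number;
}

export async function runEvalCase(
  extract: (transcript: string) => Promise<BookingEntities>,
  transcript: string,
  expected: BookingEntities,
): Promise<string[]> {
  const got = await extract(transcript);
  const failures: string[] = [];
  for (const key of Object.keys(expected) as (keyof BookingEntities)[]) {
    if (got[key] !== expected[key]) {
      failures.push(`${key}: expected ${expected[key]}, got ${got[key]}`);
    }
  }
  return failures; // empty array means the case passed
}
```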

Structured tools beat free-form text every time. Our **90+ function tools** all enforce JSON schemas validated server-side; if the model hallucinates an integer where a string is required, we retry with a corrective system message before falling back to a deterministic path. For long-running flows, we treat agent handoffs as a state machine — booking → confirmation → SMS — so context survives turn boundaries.
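
A sketch of that validate-then-retry loop, assuming a JSON Schema validator such as Ajv; the schema and the corrective message wording are illustrative, not our production definitions:

```ts
import Ajv from 'ajv';

const ajv = new Ajv();
// Example tool-argument schema; real schemas live alongside each tool.
const validateArgs = ajv.compile({
  type: 'object',
  properties: { name: { type: 'string' }, partySize: { type: 'integer' } },
  required: ['name', 'partySize'],
});

type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

export async function callToolWithRetry(
  llm: (messages: Msg[]) => Promise<string>, // returns raw JSON arguments
  messages: Msg[],
  maxRetries = 2,
): Promise<unknown | null> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    let args: unknown = null;
    try {
      args = JSON.parse(await llm(messages));
    } catch {
      // Not even valid JSON; fall through to the corrective retry below.
    }
    if (args !== null && validateArgs(args)) return args;
    // Retry with a corrective system message naming the schema violation.
    const reason = validateArgs.errors
      ? ajv.errorsText(validateArgs.errors)
      : 'malformed JSON';
    messages = [
      ...messages,
      { role: 'system', content: `Invalid tool arguments (${reason}). Re-emit JSON matching the schema.` },
    ];
  }
  return null; // exhausted retries: caller takes the deterministic fallback path
}
```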

The Realtime API vs. async decision usually comes down to "is the user holding the phone right now?" If yes, Realtime; if no (callback queue, after-hours voicemail), async wins on cost-per-conversation, which we track per agent in **115+ database tables** spanning all 6 verticals.

## FAQ

**Why does browser-side voice inference matter for revenue, not just engineering?**
57+ languages are supported out of the box, and the platform is HIPAA- and SOC 2-aligned, which removes most of the procurement friction in regulated verticals. For browser-side voice specifically, that means you're not starting from scratch — you're configuring an agent template that's already been hardened across thousands of conversations.

**What are the most common mistakes teams make on day one?**
Skipping the groundwork. Day one should be integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Days two through five are shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side by side. Go-live is the moment your eval pass rate clears your internal bar.

**How does CallSphere's stack handle this differently than a generic chatbot?**
The honest answer: the difference lasts only as long as your tool catalog stays fresh. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [urackit.callsphere.tech](https://urackit.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.

