
Build an AI Voice Agent with SolidStart + SolidJS + OpenAI Realtime (2026)

SolidStart 1.3 + Solid 1.9 deliver fine-grained reactivity with no VDOM — voice agents render at 30% lower CPU than React. Plug WebRTC into Solid signals.

TL;DR — Solid 1.9's fine-grained signals re-render only the exact DOM node that changed — perfect for voice agents that emit hundreds of transcript deltas per second. SolidStart 1.3 (Vite + Nitro) ships a clean place to mint ephemeral OpenAI keys.

What you'll build

A SolidStart route that mints an ephemeral key, a SolidJS component that opens WebRTC to OpenAI Realtime, and a signal-driven transcript that rerenders only the changed token.
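To see why fine-grained signals matter here, consider a toy (non-Solid) sketch of the pattern: a signal notifies only its own subscribers, so a transcript bound to one text node touches only that node on each delta. `createToySignal` below is a hypothetical stand-in for illustration, not Solid's implementation.

```typescript
// Toy signal: read/write plus a subscriber set. In Solid, the subscriber
// would be the single DOM text node bound to the signal.
type Listener<T> = (v: T) => void;

function createToySignal<T>(
  value: T,
): [() => T, (v: T) => void, (fn: Listener<T>) => void] {
  const listeners = new Set<Listener<T>>();
  const read = () => value;
  const write = (v: T) => {
    value = v;
    listeners.forEach((fn) => fn(v)); // notify only direct subscribers
  };
  const subscribe = (fn: Listener<T>) => {
    listeners.add(fn);
  };
  return [read, write, subscribe];
}

const [transcript, setTranscript, onTranscript] = createToySignal("");
const updates: string[] = [];
onTranscript((v) => updates.push(v)); // stands in for one bound DOM node
setTranscript("Hel");
setTranscript("Hello"); // each delta reaches exactly one subscriber
```

Every `setTranscript` call goes straight to the subscribers of that one signal; nothing else in the tree is diffed or re-run.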

Prerequisites

  1. solid-js@^1.9, @solidjs/start@^1.3, vinxi@^0.5.
  2. OPENAI_API_KEY in .env.
  3. Node 20+.
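For reference, the prerequisites above translate to roughly this setup (package specifiers assumed from the list; substitute your own key for the placeholder):

```shell
# Install the assumed package versions
npm install solid-js@^1.9 @solidjs/start@^1.3 vinxi@^0.5

# Keep the API key server-side only; never ship it to the browser
echo "OPENAI_API_KEY=sk-..." >> .env
```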

Architecture

```mermaid
flowchart LR
  S[Solid signal] --> UI[DOM node]
  UI --> SS[SolidStart /api/key]
  SS -- POST sessions --> OA1[OpenAI]
  OA1 --> SS --> UI
  UI -- WebRTC SDP --> OA2[OpenAI Realtime]
```

Step 1 — API route

```ts
// src/routes/api/key.ts
import type { APIEvent } from "@solidjs/start/server";

export async function POST(_e: APIEvent) {
  const r = await fetch("https://api.openai.com/v1/realtime/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-realtime", voice: "verse" }),
  });
  return new Response(await r.text(), {
    headers: { "Content-Type": "application/json" },
  });
}
```
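The route simply proxies OpenAI's session JSON back to the client. The shape below is an assumption inferred from the field this tutorial reads later (`client_secret.value`); a hedged sketch of how the client-side consumer pulls out the ephemeral key:

```typescript
// Assumed (partial) shape of the Realtime session response; only the
// field the component actually uses is modeled here.
interface RealtimeSession {
  client_secret: { value: string };
}

function extractEphemeralKey(body: string): string {
  const session = JSON.parse(body) as RealtimeSession;
  return session.client_secret.value;
}

// Example with a mocked response body (no network call):
const key = extractEphemeralKey(
  JSON.stringify({ client_secret: { value: "ek_test" } }),
);
```

The ephemeral key is short-lived by design, which is why the browser fetches a fresh one from `/api/key` per session instead of ever seeing `OPENAI_API_KEY`.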

Step 2 — Voice component

```tsx
import { createSignal } from "solid-js";

export function Voice() {
  const [live, setLive] = createSignal(false);
  const [transcript, setTranscript] = createSignal("");
  let audioEl!: HTMLAudioElement;

  const start = async () => {
    // Fetch an ephemeral key from our SolidStart route
    const { client_secret } = await fetch("/api/key", { method: "POST" })
      .then((r) => r.json());

    const pc = new RTCPeerConnection();
    pc.ontrack = (e) => (audioEl.srcObject = e.streams[0]);
    const ms = await navigator.mediaDevices.getUserMedia({ audio: true });
    ms.getTracks().forEach((t) => pc.addTrack(t, ms));

    // Transcript deltas arrive as events over the data channel
    const dc = pc.createDataChannel("oai-events");
    dc.addEventListener("message", (e) => {
      const evt = JSON.parse(e.data);
      if (evt.type === "response.audio_transcript.delta")
        setTranscript((t) => t + evt.delta);
    });

    // SDP offer/answer handshake with the Realtime endpoint
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    const ans = await fetch(
      "https://api.openai.com/v1/realtime?model=gpt-realtime",
      {
        method: "POST",
        body: offer.sdp,
        headers: {
          Authorization: `Bearer ${client_secret.value}`,
          "Content-Type": "application/sdp",
        },
      },
    );
    await pc.setRemoteDescription({ type: "answer", sdp: await ans.text() });
    setLive(true);
  };

  return (
    <>
      <button onClick={start} disabled={live()}>
        {live() ? "Connected" : "Start talking"}
      </button>
      <audio ref={audioEl} autoplay />
      <p>{transcript()}</p>
    </>
  );
}
```

Try CallSphere AI Voice Agents

See how AI voice agents work for your industry. Live demo available -- no signup required.