AI Engineering

Build a Durable AI Agent with Inngest Async Workflows in 2026

Inngest steps give every LLM call retries, sleeps, human-in-the-loop pauses, and replay-safe state. Build a research agent that survives 2-hour approvals.

TL;DR — Inngest's step.run makes every LLM call automatically retried, idempotent, and replay-safe. With step.waitForEvent you can pause an agent for hours waiting on a human approval — without keeping a process alive.

What you'll build

A research agent that (1) plans subqueries with an LLM, (2) fans out tool calls in parallel, (3) pauses for human approval on the synthesis step, and (4) emits a final report — all durable across redeploys.

Prerequisites

  1. inngest@^3.30, @inngest/agent-kit@^0.7, Node 20+.
  2. Inngest dev server (npx inngest-cli@latest dev) and a Vercel/Netlify/Node host.

Architecture

```mermaid
flowchart LR
  E[event: research.requested] --> P[step.run plan]
  P --> F[step.run fanout tools x N]
  F --> H[step.waitForEvent approval]
  H --> S[step.run synthesize]
  S --> O[event: research.completed]
```

Step 1 — Define the function

```ts
import { Inngest } from "inngest";

export const inngest = new Inngest({ id: "research-agent" });

export const research = inngest.createFunction(
  { id: "research" },
  { event: "research.requested" },
  async ({ event, step }) => {
    const plan = await step.run("plan", async () => llm.plan(event.data.q));

    const findings = await Promise.all(
      plan.subqueries.map((q, i) => step.run(`fetch-${i}`, () => searchTool(q))),
    );

    const approval = await step.waitForEvent("await-approval", {
      event: "research.approved",
      timeout: "2h",
      if: `async.data.runId == "${event.id}"`,
    });
    if (!approval) return { ok: false, reason: "timeout" };

    return await step.run("synthesize", () => llm.synthesize(findings));
  },
);
```
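The function above references `llm` and `searchTool` helpers it never defines. Here is one hypothetical shape for them, with the model and search calls stubbed out so the plan → fan-out → synthesize flow can be exercised locally; the names, prompts, and return types are assumptions, not part of Inngest.

```typescript
// Hypothetical stubs for the helpers the function above assumes.
// Real versions would call your model provider and a search API.
type Plan = { subqueries: string[] };

const llm = {
  async plan(q: string): Promise<Plan> {
    // Stub: a real implementation would prompt the planner model.
    return { subqueries: [`${q} overview`, `${q} key players`, `${q} pricing`] };
  },
  async synthesize(findings: string[]): Promise<{ ok: boolean; report: string }> {
    // Stub: a real implementation would prompt the writer model.
    return { ok: true, report: findings.join("\n") };
  },
};

async function searchTool(q: string): Promise<string> {
  // Stub: a real implementation would hit a search API.
  return `results for: ${q}`;
}

// Mirrors the plan -> fan-out -> synthesize flow without Inngest, for local testing.
async function dryRun(question: string) {
  const plan = await llm.plan(question);
  const findings = await Promise.all(plan.subqueries.map(searchTool));
  return llm.synthesize(findings);
}
```

Swapping these stubs for real implementations leaves the durable function body unchanged, which keeps the `step.run` boundaries stable.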


Step 2 — Wire AgentKit for the LLM logic

```ts
import { createAgent, createNetwork, openai } from "@inngest/agent-kit";

const planner = createAgent({
  name: "planner",
  model: openai({ model: "gpt-4o-mini" }),
  system: "Break the user question into 3-5 search subqueries.",
});

const writer = createAgent({
  name: "writer",
  model: openai({ model: "gpt-4o" }),
  system: "Write a 1-page synthesis with citations.",
});

export const network = createNetwork({ agents: [planner, writer] });
```
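The planner's system prompt asks for 3-5 subqueries, but the model returns free text, so something has to turn that into the array the fan-out step maps over. One hedged approach (this parser is an assumption, not AgentKit API) is to accept one subquery per line, numbered or bulleted:

```typescript
// Hypothetical post-processing for the planner's free-text output.
// Assumes one subquery per line, optionally prefixed with "1.", "2)", "-", or "*".
function parseSubqueries(raw: string): string[] {
  return raw
    .split("\n")
    .map((line) => line.replace(/^\s*(?:\d+[.)]|[-*])\s*/, "").trim())
    .filter((line) => line.length > 0);
}
```

Keeping the parse tolerant of both numbered and bulleted lists makes the pipeline robust to small prompt or model changes.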

Step 3 — Mount the Next.js handler

```ts
// app/api/inngest/route.ts
import { serve } from "inngest/next";
import { inngest, research } from "@/inngest";

export const { GET, POST, PUT } = serve({ client: inngest, functions: [research] });
```

Step 4 — Trigger + approve

```ts
await inngest.send({
  name: "research.requested",
  data: { q: "GLP-1 telehealth landscape" },
});

// later, after a human reviewer approves:
await inngest.send({
  name: "research.approved",
  data: { runId: "" }, // placeholder: the approval UI must supply the run's ID
});
```
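The empty `runId` above is a placeholder that the reviewer's UI must fill in; the `waitForEvent` expression only fires when the payload matches. A tiny helper (hypothetical, not part of the Inngest API) keeps that contract in one place:

```typescript
// Hypothetical helper: build the approval event so the payload always carries
// the runId that the waitForEvent `if` expression compares against.
function approvalEvent(runId: string) {
  return { name: "research.approved" as const, data: { runId } };
}
```

The approval route would then call something like `inngest.send(approvalEvent(runIdFromReviewUi))`, so a typo in the event name or field can't silently strand a paused run.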

Step 5 — Watch in Inngest UI

Inngest's local dev UI shows each step's input, output, retries, and timing. Failed LLM calls auto-retry with exponential backoff (default 4 tries).
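The shape of that exponential backoff can be sketched as follows; this is an illustration of the pattern, not Inngest's exact internal schedule, and the base and cap values are assumptions.

```typescript
// Illustrative exponential backoff with a cap -- not Inngest's internal timing.
function backoffMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
// attempt 0 -> 1000ms, 1 -> 2000ms, 2 -> 4000ms, 3 -> 8000ms, then capped at 60s
```

The cap matters for LLM workloads: without one, a transient provider outage can push retries far past the point where the upstream request is still useful.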

Step 6 — Production checklist

Add onFailure for dead-letter handling, set concurrency: { limit: 5 } to bound API spend, and enable Inngest's tracing export to Datadog or Honeycomb.
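The checklist items map onto the function's config object. A minimal sketch, assuming the option names (`retries`, `concurrency`, `onFailure`) match the Inngest v3 `createFunction` config you run; verify exact shapes against your installed version.

```typescript
// Sketch of production-hardening options from the checklist above.
const productionOpts = {
  id: "research",
  retries: 4, // the default; raise for flaky upstream APIs
  concurrency: { limit: 5 }, // bounds parallel runs, and therefore API spend
  onFailure: async ({ error }: { error: Error }) => {
    // Dead-letter path: log, page, or enqueue the run for manual replay.
    console.error("research run exhausted retries:", error.message);
  },
};
```

Passing an object like this as the first argument to `createFunction` keeps the durability policy next to the function it governs instead of scattered across infra config.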

Still reading? Stop comparing — try CallSphere live.

CallSphere ships complete AI voice agents per industry — 14 tools for healthcare, 10 agents for real estate, 4 specialists for salons. See how it actually handles a call before you book a demo.

Pitfalls

  • Non-deterministic code in step.run: the body of a step.run is replayed on retry, so keep it pure and route all side effects through tools.
  • Random step IDs: build IDs like `` step.run(`fetch-${i}`, ...) ``, never with `Math.random()`; Inngest uses the step ID as the replay cache key.
  • Long pauses: the default waitForEvent timeout is 7 days; raise it for week-long human reviews, but note that held state costs grow with the wait.
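The random-ID pitfall can be made concrete with a tiny helper (the names here are illustrative):

```typescript
// Step IDs are the replay cache key: derive them from stable inputs so a
// retried run reuses completed work instead of re-executing it.
function stepId(prefix: string, index: number): string {
  return `${prefix}-${index}`; // deterministic across replays
}
// Anti-pattern (illustrative): `${prefix}-${Math.random()}` yields a fresh ID
// on every replay, so Inngest can never find the cached step result.
```

Any stable input works as the suffix: an array index, a database ID, a URL. The only requirement is that the same logical step produces the same ID on every replay.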

How CallSphere does this in production

CallSphere runs durable async agents on Inngest for the OneRoof real-estate product (Next.js 16 + React 19): lead scoring, drip outreach, and CRM enrichment all flow through step.run with retries. The platform spans 37 agents, 90+ tools, 115+ DB tables, and 6 verticals at $149/$499/$1,499 tiers, with a 14-day no-card trial and a 22% affiliate commission.

FAQ

Inngest vs Temporal? Inngest is event-first and serverless-native; Temporal is workflow-first and needs workers. Inngest deploys to Vercel/Netlify in minutes.

Pricing? Free tier ~50K runs/month; paid starts at $20/mo. Self-host is open-source.

Can I use it with LangGraph? Yes — wrap your LangGraph in a single step.run for durability boundaries.

Does AgentKit replace LangChain? AgentKit is lighter and TypeScript-native; it covers ~80% of agent use-cases with no Python.
