---
title: "Build an AI Agent with Effect.ts: Typed Errors & Retries (2026)"
description: "Effect.ts v3 makes every LLM failure a typed effect channel. Wire OpenAI calls with Schedule retries, fallback layers, and Cause inspection — no try/catch."
canonical: https://callsphere.ai/blog/vw8h-build-ai-agent-effect-ts-typed-errors-2026
category: "AI Engineering"
tags: ["Effect.ts", "Functional", "Error Handling", "TypeScript", "Retries"]
author: "CallSphere Team"
published: 2026-03-27T00:00:00.000Z
updated: 2026-05-07T22:23:17.627Z
---

# Build an AI Agent with Effect.ts: Typed Errors & Retries (2026)

> Effect.ts v3 makes every LLM failure a typed effect channel. Wire OpenAI calls with Schedule retries, fallback layers, and Cause inspection — no try/catch.

> **TL;DR** — Effect.ts v3 (production at Vercel, Prisma) makes errors first-class in the type system. Combine `Effect.retry` with `Schedule.exponential` and a fallback Layer and your AI agent gracefully degrades from gpt-4o → gpt-4o-mini → cached answer.

## What you'll build

An "answer service" with three failure modes — `RateLimitError`, `TimeoutError`, `ContentFilterError` — each surfaced as a typed channel. The service retries 3x on rate-limits, falls back to a cheaper model on timeouts, and short-circuits on content filter.

## Prerequisites

1. `effect@^3.10`, `@effect/ai@^0.6`, `@effect/ai-openai@^0.6`, Node 20+ or Bun.

## Architecture

```mermaid
flowchart TD
  Q[Question] --> P[Effect program]
  P --> R{retry policy}
  R -->|429| RT[exp backoff x3]
  R -->|503/timeout| FB[fallback Layer mini]
  R -->|content filter| CR[Cause.fail]
  RT --> R
  FB --> O[reply]
```

## Step 1 — Define typed errors

```ts
import { Data } from "effect";

// Payload fields are declared on each error so handlers get typed data.
export class RateLimitError extends Data.TaggedError("RateLimit")<{ retryAfter: number }> {}
export class TimeoutError extends Data.TaggedError("Timeout")<{ ms: number }> {}
export class ContentFilter extends Data.TaggedError("ContentFilter") {}
```
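Under the hood, `Data.TaggedError` produces a discriminated union keyed on `_tag`. A dependency-free sketch of that shape (plain TypeScript, not the Effect API) shows why the compiler can narrow each failure mode:

```typescript
// Plain-TS analogue of what Data.TaggedError generates: a union
// discriminated on `_tag`, so `switch` narrows each branch's fields.
type AgentError =
  | { readonly _tag: "RateLimit"; readonly retryAfter: number }
  | { readonly _tag: "Timeout"; readonly ms: number }
  | { readonly _tag: "ContentFilter" };

function describeError(e: AgentError): string {
  switch (e._tag) {
    case "RateLimit":
      return `rate limited, retry in ${e.retryAfter}s`; // e narrowed to RateLimit
    case "Timeout":
      return `timed out after ${e.ms}ms`;
    case "ContentFilter":
      return "blocked by content filter";
  }
}
```

Because the union is closed and the return type is annotated, adding a fourth error variant later fails to compile until every `switch` handles it.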

## Step 2 — Wrap OpenAI as an Effect

```ts
import { Effect } from "effect";
import OpenAI from "openai";
import { RateLimitError, TimeoutError, ContentFilter } from "./errors"; // from Step 1
const oa = new OpenAI();

export const ask = (q: string, model: string) =>
  Effect.tryPromise({
    try: () => oa.chat.completions.create({
      model, messages: [{ role: "user", content: q }],
    }),
    catch: (e: any) => {
      if (e?.status === 429) return new RateLimitError({ retryAfter: 2 });
      if (e?.code === "ETIMEDOUT") return new TimeoutError({ ms: 30000 });
      if (e?.error?.code === "content_filter") return new ContentFilter();
      return e; // anything unrecognized stays as an untyped failure
    },
  }).pipe(Effect.map((r) => r.choices[0].message.content ?? ""));
```

## Step 3 — Compose with retry + fallback

```ts
import { Schedule, Duration, pipe } from "effect";

const retryOnRate = Schedule.exponential(Duration.seconds(1)).pipe(
  Schedule.intersect(Schedule.recurs(3)),   // AND: stop after 3 recurrences
  Schedule.whileInput((e: { _tag: string }) => e._tag === "RateLimit"),
);

export const answer = (q: string) => pipe(
  ask(q, "gpt-4o"),
  Effect.retry(retryOnRate),
  Effect.catchTag("Timeout", () => ask(q, "gpt-4o-mini")),
  Effect.catchTag("ContentFilter",
    () => Effect.succeed("I can't help with that.")),
);
```
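The same control flow can be written as a dependency-free sketch with the model call injected, which makes the branching explicit. Names here (`answerSketch`, `AskFn`) are hypothetical stand-ins, and the real pipeline also sleeps between retries, which this sketch omits:

```typescript
// Illustrative only: retry rate-limits up to 3x, fall back to the cheap
// model on timeout, short-circuit on content filter. No backoff delays.
type SketchError =
  | { _tag: "RateLimit" }
  | { _tag: "Timeout" }
  | { _tag: "ContentFilter" };

type AskFn = (q: string, model: string) => string; // throws SketchError

function answerSketch(q: string, ask: AskFn): string {
  for (let attempt = 0; ; attempt++) {
    try {
      return ask(q, "gpt-4o");
    } catch (e) {
      const err = e as SketchError;
      if (err._tag === "RateLimit" && attempt < 3) continue;       // retry
      if (err._tag === "Timeout") return ask(q, "gpt-4o-mini");    // fallback
      if (err._tag === "ContentFilter") return "I can't help with that.";
      throw e; // retries exhausted or unknown error
    }
  }
}
```

Everything the Effect version encodes declaratively (retry policy, fallback, short-circuit) appears here as imperative branches; the trade-off is that nothing in this sketch's signature tells the caller which failures remain unhandled.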

## Step 4 — Run + observe Cause

```ts
import { Effect, Exit, Cause } from "effect";

const result = await Effect.runPromiseExit(answer("hello"));
if (Exit.isFailure(result)) {
  console.error(Cause.pretty(result.cause));
}
```
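`runPromiseExit` never rejects; it resolves to an `Exit`, itself a discriminated union. A rough plain-TS model of the shape being inspected above (simplified: the real `Cause` also covers interruption and sequential/parallel composition):

```typescript
// Simplified model of Exit/Cause for illustration; the real types in
// `effect` are richer (Interrupt, Sequential, Parallel, ...).
type MiniCause<E> =
  | { _tag: "Fail"; error: E }        // expected, typed failure
  | { _tag: "Die"; defect: unknown }; // unexpected thrown value

type MiniExit<A, E> =
  | { _tag: "Success"; value: A }
  | { _tag: "Failure"; cause: MiniCause<E> };

function render<A, E>(exit: MiniExit<A, E>): string {
  if (exit._tag === "Success") return `ok: ${String(exit.value)}`;
  return exit.cause._tag === "Fail"
    ? `failed: ${JSON.stringify(exit.cause.error)}`
    : `defect: ${String(exit.cause.defect)}`;
}
```

The `Fail`/`Die` split is the payoff: expected errors you modeled in Step 1 land in `Fail`, while anything that escaped your typing (a bug, a bad cast) surfaces as a defect instead of vanishing.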

## Step 5 — Tracing

```ts
import { Effect } from "effect";
import { NodeSdk } from "@effect/opentelemetry";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const Tracing = NodeSdk.layer(() => ({
  resource: { serviceName: "ai-agent" },
  spanProcessor: new BatchSpanProcessor(new OTLPTraceExporter()),
}));

Effect.runPromise(answer("...").pipe(Effect.provide(Tracing)));
```

## Step 6 — Add a fallback Layer

```ts
import { Layer } from "effect";
import { OpenAiClient } from "@effect/ai-openai";

const PrimaryAI  = OpenAiClient.layer({ apiKey: process.env.OPENAI_API_KEY! });
const FallbackAI = OpenAiClient.layer({ apiKey: process.env.AZURE_KEY!,
                                        baseUrl: "https://eastus.openai.azure.com" });
const AI = Layer.orElse(PrimaryAI, () => FallbackAI);
```

## Pitfalls

- **Async/await mental model**: `Effect.gen` reads like async/await, but each `yield*` produces an Effect, not a resolved value; new contributors trip over this quickly.
- **Bundle size**: `effect` core is ~50kb min+gzip. Tree-shaking is good but watch lambda cold start.
- **Schedule precedence**: `Schedule.intersect` ANDs, `union` ORs — easy to mix up.
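One way to keep that precedence straight is to model schedules as finite delay lists. This is a simplification (real Schedules are stateful and possibly infinite), but the AND/OR behavior matches: `intersect` continues only while *both* continue and waits the *longer* delay; `union` continues while *either* continues and waits the *shorter*:

```typescript
// Simplified model: a schedule is a list of delays (ms) and stops when
// the list ends. Real Effect Schedules are richer, but the precedence
// shown here matches intersect (AND/max) vs union (OR/min).
const intersectDelays = (a: number[], b: number[]): number[] =>
  a.slice(0, Math.min(a.length, b.length)).map((d, i) => Math.max(d, b[i]));

const unionDelays = (a: number[], b: number[]): number[] =>
  Array.from({ length: Math.max(a.length, b.length) }, (_, i) =>
    i < a.length && i < b.length ? Math.min(a[i], b[i]) : a[i] ?? b[i]);

// exponential(1s) modeled as its first few delays; recurs(3) as three
// zero-delay recurrences.
const exponential = [1000, 2000, 4000, 8000];
const recurs3 = [0, 0, 0];
```

Here `intersectDelays(exponential, recurs3)` yields `[1000, 2000, 4000]` (stop after three tries, keep the backoff), while `unionDelays` yields `[0, 0, 0, 8000]` (keep going, take the shorter waits), which is why the retry policy in Step 3 uses `intersect`.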

## How CallSphere does this in production

CallSphere runs Effect-style layered fallbacks across **37 agents** in **6 verticals**, backed by **90+ tools** and **115+ DB tables**: primary OpenAI → Azure OpenAI → cached answer. Stacks vary by vertical: Healthcare (FastAPI), OneRoof (Next.js 16 + React 19), Salon (NestJS 10 + Prisma), Sales (Node.js 20 + React 18 + Vite). Plans run **$149/$499/$1,499** with a **14-day trial** and a **22% affiliate** program.

## FAQ

**Why not try/catch?** Try/catch loses the error type at the function boundary — Effect keeps it.
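The loss is easy to demonstrate with a hypothetical, dependency-free pair: a throwing function's failure type is invisible to the caller, while a Result-style return (which Effect's error channel generalizes) keeps it in the signature:

```typescript
// In `catch (e)`, `e` is typed `unknown`: the compiler has forgotten
// that only one kind of failure is possible here.
function askUnsafe(q: string): string {
  if (q.trim() === "") throw new Error("empty question");
  return "answer";
}

// The failure is part of the return type, so callers must handle it.
type Result<A, E> = { ok: true; value: A } | { ok: false; error: E };

function askTyped(q: string): Result<string, { _tag: "EmptyQuestion" }> {
  return q.trim() === ""
    ? { ok: false, error: { _tag: "EmptyQuestion" } }
    : { ok: true, value: "answer" };
}
```

Effect goes one step further than `Result`: the error type composes automatically through `pipe`, `retry`, and `catchTag` without threading it by hand.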

**Can I incrementally adopt?** Yes. Convert one service to Effect, return Promises at the edge with `Effect.runPromise`.

**Performance overhead?** ~5-15µs per Effect step, dwarfed by network latency.

**Effect.ts vs fp-ts?** For new projects in 2026, Effect is the de facto successor: the fp-ts author joined the Effect team, and fp-ts is in maintenance mode.

## Sources

- Effect-TS docs - [https://effect.website/](https://effect.website/)
- Effect-TS AI - [https://deepwiki.com/Effect-TS/effect/10-ai-and-external-services](https://deepwiki.com/Effect-TS/effect/10-ai-and-external-services)
- Noqta - Effect-TS 2026 tutorial - [https://noqta.tn/en/tutorials/effect-ts-typescript-error-handling-pipelines-2026](https://noqta.tn/en/tutorials/effect-ts-typescript-error-handling-pipelines-2026)
- Dev.to - Effect-TS in 2026 - [https://dev.to/ottoaria/effect-ts-in-2026-functional-programming-for-typescript-that-actually-makes-sense-1go](https://dev.to/ottoaria/effect-ts-in-2026-functional-programming-for-typescript-that-actually-makes-sense-1go)

