---
title: "Pulumi for AI Voice Infra: TypeScript + ESC + Pulumi AI (2026)"
description: "Provision an AI voice stack with Pulumi 3.230+ in TypeScript: ESC for secrets, AI-generated modules, deployments-as-a-service, and uv-backed Python providers."
canonical: https://callsphere.ai/blog/vw6h-pulumi-ai-voice-infra-typescript-esc-2026
category: "AI Infrastructure"
tags: ["Pulumi", "TypeScript", "ESC", "AI Infrastructure", "Tutorial"]
author: "CallSphere Team"
published: 2026-03-25T00:00:00.000Z
updated: 2026-05-07T16:46:15.101Z
---

# Pulumi for AI Voice Infra: TypeScript + ESC + Pulumi AI (2026)

> Provision an AI voice stack with Pulumi 3.230+ in TypeScript: ESC for secrets, AI-generated modules, deployments-as-a-service, and uv-backed Python providers.

> **TL;DR** — Pulumi ESC (now GA) gives you hierarchical, dynamic-credential environments. Combined with Pulumi Deployments and the AI agent CLI helpers, your AI voice stack provisions in a single `pulumi up` with no static secrets anywhere.

## What you'll set up

A TypeScript Pulumi program that stands up a k3s edge cluster on Hetzner, deploys a LiveKit + voice-agent + Postgres trio, and pulls all secrets at runtime from Pulumi ESC backed by AWS Secrets Manager.

## Architecture

```mermaid
flowchart LR
  PULUMI[pulumi up] --> ESC[Pulumi ESC]
  ESC --> ASM[(AWS Secrets Manager)]
  PULUMI --> HZ[Hetzner k3s nodes]
  HZ --> LK[LiveKit Deployment]
  HZ --> AG[Voice Agent Deployment]
  HZ --> PG[Postgres StatefulSet]
  ESC -->|runtime env| AG
```

## Step 1 — Initialize the project and ESC environment

```bash
pulumi new typescript -y
pulumi env init voice-prod
pulumi env set voice-prod aws.login \
  '{"fn::open::aws-login":{"oidc":{"roleArn":"arn:aws:iam::123:role/pulumi-esc","sessionName":"pulumi-deploy"}}}'
pulumi env set voice-prod aws.secrets \
  '{"fn::open::aws-secrets":{"region":"us-east-1","login":"${aws.login}","get":{"openaiApiKey":{"secretId":"openai/realtime"}}}}'
pulumi env set voice-prod 'pulumiConfig["openai:apiKey"]' '${aws.secrets.openaiApiKey}'
```

The OpenAI key is now *defined* in ESC but only *fetched* when the environment is opened at `pulumi up` time — the plaintext never lives on disk.
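Conceptually, an ESC environment is a value tree where `fn::` nodes are resolved by named providers at open time. A minimal sketch of that resolution model (illustrative toy resolver and a fake `aws-secrets` provider — not the real ESC engine):

```typescript
// Toy model of ESC-style lazy resolution (illustrative only — not the real
// ESC engine). An environment is a tree; an object with a single "fn::<name>"
// key is a provider invocation, resolved only when the environment is opened,
// so plaintext secrets never live in the stored document.
type EnvNode = string | number | { [key: string]: EnvNode };
type Provider = (arg: EnvNode) => EnvNode;

function openEnv(node: EnvNode, providers: Record<string, Provider>): EnvNode {
  if (typeof node !== "object") return node;
  const keys = Object.keys(node);
  if (keys.length === 1 && keys[0].startsWith("fn::")) {
    const name = keys[0].slice("fn::".length);
    const provider = providers[name];
    if (!provider) throw new Error(`unknown provider: ${name}`);
    return provider(openEnv(node[keys[0]], providers));
  }
  const out: { [key: string]: EnvNode } = {};
  for (const k of keys) out[k] = openEnv(node[k], providers);
  return out;
}

// Usage: a stand-in "aws-secrets" provider instead of real Secrets Manager.
const resolved = openEnv(
  { openai: { apiKey: { "fn::open::aws-secrets": "openai/realtime" } } },
  { "open::aws-secrets": (id) => `secret-for-${id}` },
);
// resolved.openai.apiKey === "secret-for-openai/realtime"
```

The real engine adds caching, secret tainting, and environment composition on top, but the lazy-open shape is the core idea.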

## Step 2 — Wire the program to ESC

```yaml
# Pulumi.voice-prod.yaml
environment:
  - voice-prod
```

Now every stack config read, e.g. `new pulumi.Config("openai").requireSecret("apiKey")`, resolves through ESC.

## Step 3 — Provision Hetzner k3s nodes

```typescript
import * as hcloud from "@pulumi/hcloud";
import * as command from "@pulumi/command";
import * as pulumi from "@pulumi/pulumi";

const sshKey = new hcloud.SshKey("ops", { publicKey: process.env.SSH_PUB! });
const server = new hcloud.Server("voice-edge-1", {
  serverType: "ccx33",   // 8 vCPU AMD, dedicated
  image: "ubuntu-24.04",
  location: "ash",       // Ashburn, VA — closest to OpenAI us-east
  sshKeys: [sshKey.id],
});

const k3sInstall = new command.remote.Command("k3s-install", {
  connection: { host: server.ipv4Address, user: "root", privateKey: process.env.SSH_KEY! },
  create: `curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.31.0+k3s1 sh -s - --disable traefik`,
});
```

`ccx33` (dedicated AMD, 8 vCPU / 32 GB) is the sweet spot for ~50 concurrent voice sessions on OpenAI Realtime in our benchmarks.
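The sizing behind that claim can be made explicit. A rough capacity model — the per-session figures below are our assumptions for an agent that offloads inference to a hosted API, not measurements you should reuse blindly; our ~50 number comes from benchmarks, not this formula:

```typescript
// Rough capacity model for sizing edge nodes (assumed figures — benchmark
// your own agent). vCPU is usually the binding constraint for voice agents
// that offload inference to a hosted API like OpenAI Realtime.
interface NodeSpec { vcpus: number; memGb: number; }

function maxSessions(
  node: NodeSpec,
  perSessionVcpu = 0.15,  // audio encode/decode + agent loop (assumption)
  perSessionMemGb = 0.4,  // buffers + session state (assumption)
  headroom = 0.2,         // reserve 20% for k3s, LiveKit SFU, spikes
): number {
  const usableCpu = node.vcpus * (1 - headroom);
  const usableMem = node.memGb * (1 - headroom);
  return Math.floor(Math.min(usableCpu / perSessionVcpu, usableMem / perSessionMemGb));
}

// ccx33: 8 vCPU / 32 GB — CPU-bound under these assumptions.
const ccx33Sessions = maxSessions({ vcpus: 8, memGb: 32 });
```

Note the model is CPU-bound at these numbers; memory only starts to matter if your agent buffers long transcripts or large audio windows per session.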

## Step 4 — Deploy the chart with the Kubernetes provider

```typescript
import * as k8s from "@pulumi/kubernetes";

const kubeconfigCmd = new command.remote.Command("read-kubeconfig", {
  connection: { host: server.ipv4Address, user: "root", privateKey: process.env.SSH_KEY! },
  create: "cat /etc/rancher/k3s/k3s.yaml",
}, { dependsOn: [k3sInstall] });

// Rewrite the loopback address so the kubeconfig works from outside the node.
const kubeconfig = pulumi
  .all([kubeconfigCmd.stdout, server.ipv4Address])
  .apply(([raw, ip]) => raw.replace("127.0.0.1", ip));

const k = new k8s.Provider("k3s", { kubeconfig });

const ns = new k8s.core.v1.Namespace("voice", { metadata: { name: "voice" }}, { provider: k });

const cfg = new pulumi.Config("openai");
const apiKey = cfg.requireSecret("apiKey");

new k8s.core.v1.Secret("voice-secrets", {
  metadata: { namespace: ns.metadata.name, name: "voice-secrets" },
  stringData: { OPENAI_API_KEY: apiKey },
}, { provider: k });
```

`apiKey` is a `pulumi.Output` that's only resolved by the engine — your TypeScript code never sees the plaintext.
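The kubeconfig rewrite in Step 4 is just a string substitution; pulled out as a plain function (name is ours) it can be unit-tested without a Pulumi engine:

```typescript
// k3s writes its kubeconfig with server: https://127.0.0.1:6443. To use it
// from outside the node, point it at the server's public address instead.
// (Plain helper so it can be tested without the Pulumi engine.)
function externalizeKubeconfig(raw: string, publicIp: string): string {
  // split/join replaces every occurrence, not just the first
  return raw.split("127.0.0.1").join(publicIp);
}

const sample = "server: https://127.0.0.1:6443";
const fixed = externalizeKubeconfig(sample, "203.0.113.7");
// → "server: https://203.0.113.7:6443"
```

One caveat: the k3s API server certificate is issued for the node's local addresses, so pass `--tls-san <public-ip>` to the k3s installer or TLS verification will fail against the rewritten address.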

## Step 5 — Helm release for the agent

```typescript
new k8s.helm.v4.Chart("voice-agent", {
  chart: "oci://ghcr.io/acme/charts/voice-agent",
  version: "1.0.0",
  namespace: ns.metadata.name,
  values: {
    image: { repository: "ghcr.io/acme/voice-agent", tag: process.env.GIT_SHA },
    livekit: { url: "ws://voice-livekit:7880" },
    openai: { secretRef: "voice-secrets" },
  },
}, { provider: k });
```

Pulumi's Helm v4 `Chart` resource uses server-side apply and reports kstatus-aware readiness, so downstream resources wait for the release to actually become healthy.
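When migrating a chart between provider versions, it helps to diff `values` as flat dot-paths (Helm's `--set` notation). A small helper sketch (ours, not part of the Pulumi SDK):

```typescript
// Flatten a nested Helm values object into dot-path ("--set") notation,
// handy for diffing values between provider versions or chart upgrades.
// (Illustrative helper — not part of the Pulumi SDK.)
type Values = { [key: string]: Values | string | number | boolean };

function flattenValues(values: Values, prefix = ""): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [k, v] of Object.entries(values)) {
    const path = prefix ? `${prefix}.${k}` : k;
    if (typeof v === "object" && v !== null) {
      Object.assign(out, flattenValues(v, path));  // recurse into sub-objects
    } else {
      out[path] = String(v);
    }
  }
  return out;
}

const flat = flattenValues({
  image: { repository: "ghcr.io/acme/voice-agent", tag: "abc123" },
  livekit: { url: "ws://voice-livekit:7880" },
});
// flat["image.tag"] === "abc123"
```

Flatten the old and new values objects and diff the resulting key sets; renamed paths (like `fetchOpts` → `repositoryOpts`) show up immediately.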

## Step 6 — Pulumi Deployments for review stacks

```yaml
# .pulumi/Pulumi.yaml -> deployment block
template:
  config:
    aws:region: { value: us-east-1 }
deploymentSettings:
  sourceContext: { git: { repoURL: "https://github.com/acme/voice", branch: main } }
  operationContext: { preRunCommands: ["uv sync --frozen"] }
```

Now every PR gets a `pulumi preview` posted as a comment, and merging `main` triggers `pulumi up` — Pulumi-managed runners, no GH Actions runner needed.
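Review stacks need names derived from branch names, and Pulumi stack names only allow alphanumerics plus `.`, `_`, and `-`. A sketch of the mapping we use (the convention is ours, not a Pulumi API):

```typescript
// Map a Git branch to a valid Pulumi stack name for review stacks.
// Stack names allow [A-Za-z0-9._-]; everything else collapses to a hyphen.
// (Naming convention is ours — Pulumi only enforces the character set.)
function reviewStackName(branch: string, prefix = "pr"): string {
  const slug = branch
    .toLowerCase()
    .replace(/[^a-z0-9._-]+/g, "-")  // "/" and spaces become hyphens
    .replace(/^-+|-+$/g, "")         // trim stray hyphens at the edges
    .slice(0, 40);                   // keep names short and readable
  return `${prefix}-${slug}`;
}

const name = reviewStackName("feature/Voice Agent v2");
// → "pr-feature-voice-agent-v2"
```

Destroy these stacks on PR close, or idle review clusters will quietly accumulate Hetzner charges.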

## Step 7 — Pulumi AI for one-off resources

```bash
pulumi ai prompt "add a Cloudflare DNS record voice.example.com pointing to the Hetzner server"
```

In 2026 the AI agent will read your existing program, generate a coherent diff, and offer it as a PR. Useful for boilerplate; review every line for IAM and networking.

## Pitfalls

- **`Output` leaks via `.toString()`** — interpolating an Output into a string bakes the `Calling [toString] on an [Output<T>] is not supported` warning into the string instead of the value. Use `.apply` or `pulumi.interpolate`.
- **ESC environment imports across stacks** can cause circular references; keep your environments shallow.
- **Hetzner snapshot rebuild** wipes k3s state; back up etcd or use S3-snapshotter.
- **uv in Pulumi Deployments** — set `PIP_DISABLE_PIP_VERSION_CHECK=1` and `UV_NO_CACHE=1` to avoid stale wheel issues.
- **Helm v4 vs v3 provider** — values are not 1:1; `fetchOpts` becomes `repositoryOpts`. Migration takes ~10 min per chart.
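The first pitfall is easy to reproduce with a stand-in for `Output` (a simplified mock of ours, not the SDK class — real code should use `.apply()` or `pulumi.interpolate`):

```typescript
// Why `${output}` misbehaves: an Output wraps a future value, so toString()
// cannot return the plaintext — it returns a warning instead. Simplified
// stand-in for the SDK class, just to show the failure mode.
class MockOutput<T> {
  constructor(private readonly promise: Promise<T>) {}
  toString(): string {
    return "Calling [toString] on an [Output<T>] is not supported.";
  }
  apply<U>(fn: (value: T) => U): Promise<U> {
    return this.promise.then(fn);
  }
}

const ip = new MockOutput(Promise.resolve("203.0.113.7"));
const wrong = `ip: ${ip}`;                  // bakes the warning text into the string
const right = ip.apply((v) => `ip: ${v}`);  // resolves to "ip: 203.0.113.7"
```

The same shape explains why the real SDK's `apply` returns another `Output` rather than a raw value: the result is still a future until the engine resolves it.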

## How CallSphere does this in production

CallSphere's primary infra is k3s on a dedicated host (not Hetzner — our Postgres lives at 72.62.162.83 behind a Cloudflare Tunnel), but the Pulumi pattern above runs CallSphere fork environments for partners on Hetzner ccx33 nodes, with ESC pulling model keys from a central vault per partner. 37 agents, 90+ tools, 115+ DB tables, plans at $149/$499/$1,499, 14-day [trial](/trial), 22% [affiliate](/affiliate).

## FAQ

**Q: Pulumi vs Terraform for AI infra?**
Pulumi if your team writes TypeScript/Python already; the typed SDKs catch IAM bugs at compile time. Terraform if you want HCL's declarative purity.

**Q: ESC vs External Secrets Operator?**
ESC for *infra-time* secrets (during `pulumi up`); ESO for *runtime* secrets (Kubernetes pods). Use both.

**Q: Pulumi Deployments cost?**
Free for individuals; team plans start ~$50/user/month and replace a self-hosted runner.

**Q: Can I import existing infra?**
Yes — `pulumi import aws:s3/bucket:Bucket my-bucket existing-name` adopts the resource into state and generates matching program code.

## Sources

- [Pulumi Release Notes — Pulumi Copilot, ESC, Docker Provider](https://www.pulumi.com/blog/pulumi-release-notes-106/)
- [Pulumi 2.0, Now with Superpowers](https://www.pulumi.com/blog/pulumi-2-0/)
- [The Past 6 Months of Pulumi Releases](https://www.pulumi.com/blog/pulumi-release-notes-114/)
- [Pulumi Tutorial: IaC with TypeScript, Python & Go 2026 — env0](https://www.env0.com/blog/what-is-pulumi-and-how-to-use-it-with-env0)
- [Pulumi TypeScript and Node.js docs](https://www.pulumi.com/docs/iac/languages-sdks/javascript/)

---

Source: https://callsphere.ai/blog/vw6h-pulumi-ai-voice-infra-typescript-esc-2026
