---
title: "Postgres Backup with pg_dump + pgBackRest: PITR for AI Data in 2026"
description: "pgBackRest was archived in April 2026 — but its successors (pgmoneta, WAL-G, Barman) still rely on the same ideas. A working PITR pipeline, restore drill, and the migration playbook off pgBackRest."
canonical: https://callsphere.ai/blog/vw7h-postgres-backup-pg-dump-pgbackrest-2026
category: "AI Infrastructure"
tags: ["Postgres", "Backup", "pgBackRest", "WAL-G", "Disaster Recovery"]
author: "CallSphere Team"
published: 2026-04-06T00:00:00.000Z
updated: 2026-05-08T17:26:02.853Z
---

# Postgres Backup with pg_dump + pgBackRest: PITR for AI Data in 2026

> pgBackRest was archived in April 2026 — but its successors (pgmoneta, WAL-G, Barman) still rely on the same ideas. A working PITR pipeline, restore drill, and the migration playbook off pgBackRest.

> **TL;DR** — pg_dump is for logical exports, not disaster recovery. For PITR you need WAL-G, pgmoneta, or (if you started before April 2026) the now-archived pgBackRest. This guide shows the full pipeline plus the restore drill that proves it works.

## What you'll build

A nightly full + 6-hourly incremental backup pipeline to S3 with 30-day PITR, plus a quarterly restore drill that verifies RPO ≤ 5 min and RTO ≤ 30 min.

## Schema considerations

Your AI data (vectors, JSONB metadata, audit logs) is normal Postgres data, and backup tooling treats it identically. The one special case: for a logical restore, the `vector` extension must exist on the target before `pg_restore` recreates the columns (`CREATE EXTENSION vector` first). For a physical restore, the pgvector package just needs to be installed on the target host; the catalog entries come with the data.
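
A minimal pre-restore check, assuming the target database is named `app` and the standard client tools are on PATH:

```bash
# Create required extensions on the target before a logical restore.
# "vector" is pgvector; add any other extensions your schema depends on.
psql --dbname=app -c "CREATE EXTENSION IF NOT EXISTS vector;"
psql --dbname=app -c "\dx"   # confirm the extension list before restoring
```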

## Architecture

```mermaid
flowchart LR
  PG[(Postgres primary)] --> WAL[WAL stream]
  WAL --> WALG[WAL-G ship]
  PG --> FULL[Nightly full]
  FULL --> S3[(S3)]
  WALG --> S3
  S3 --> RESTORE[Restore drill]
  RESTORE --> VERIFY[Verify table counts]
```

## Step 1 — Install WAL-G

```bash
sudo wget -O /usr/local/bin/wal-g \
  https://github.com/wal-g/wal-g/releases/download/v3.0.5/wal-g-pg-ubuntu-22.04-amd64
sudo chmod +x /usr/local/bin/wal-g
```
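
Sanity-check the binary before wiring it into anything (the exact version output varies by build):

```bash
wal-g --version
```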

## Step 2 — Configure WAL archiving

```ini
# postgresql.conf
archive_mode = on
archive_command = '/usr/local/bin/wal-g --config /etc/wal-g.json wal-push %p'
wal_level = replica
```

WAL-G only reads its config automatically from `~/.walg.json`, so every command in this guide passes `--config /etc/wal-g.json` explicitly:

```json
{
  "WALG_S3_PREFIX": "s3://callsphere-pg-backups/prod",
  "AWS_REGION": "us-east-1",
  "WALG_COMPRESSION_METHOD": "zstd",
  "WALG_DELTA_MAX_STEPS": "6"
}
```
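
To prove WAL is actually flowing before you trust the pipeline, force a segment switch and check the archiver stats. A sketch, run as a superuser on the primary:

```bash
# Close out the current WAL segment so the archiver has work to do...
sudo -u postgres psql -c "SELECT pg_switch_wal();"
# ...then confirm it archived cleanly.
sudo -u postgres psql -c \
  "SELECT archived_count, failed_count, last_archived_wal FROM pg_stat_archiver;"
```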

## Step 3 — Nightly base backup

```bash
# /etc/cron.d/wal-g-backup
0 2 * * * postgres /usr/local/bin/wal-g --config /etc/wal-g.json backup-push /var/lib/postgresql/17/main
```

```bash
wal-g --config /etc/wal-g.json backup-list
# name                   modified             wal_file_name     storage_name
# base_000000010000...   2026-05-06T02:00:01  000000010000...   default
```
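
The 30-day window does not enforce itself. A retention sketch using WAL-G's `delete` command; without `--confirm` it is a dry run, so test it that way first:

```bash
# Keep the newest 30 full backups (one per night = 30 days of PITR)
# plus the WAL needed to restore them; delete everything older.
wal-g --config /etc/wal-g.json delete retain FULL 30 --confirm
```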

## Step 4 — pg_dump as a logical companion

```bash
# --jobs requires directory format; custom-format dumps are single-threaded
pg_dump --format=directory --jobs=8 --compress=9 \
  --file=/tmp/app_logical \
  --dbname="$DATABASE_URL"
aws s3 cp --recursive /tmp/app_logical s3://callsphere-pg-backups/logical/app_logical/
```

Logical dumps are what you reach for to migrate across major versions without `pg_upgrade`, or to extract a single tenant or table; neither is possible from a physical backup alone.
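
Restoring that dump into a new cluster is where the extension pitfall bites. A sketch, assuming the target connection string sits in `$NEW_DATABASE_URL` and the dump directory has been pulled back from S3:

```bash
# Extensions first, or pg_restore errors out on the vector columns.
psql --dbname="$NEW_DATABASE_URL" -c "CREATE EXTENSION IF NOT EXISTS vector;"
# Directory-format dumps restore in parallel too.
pg_restore --jobs=8 --dbname="$NEW_DATABASE_URL" /tmp/app_logical
```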

## Step 5 — Restore drill

```bash
# On a fresh host
sudo systemctl stop postgresql
sudo -u postgres rm -rf /var/lib/postgresql/17/main/*
sudo -u postgres /usr/local/bin/wal-g --config /etc/wal-g.json backup-fetch \
  /var/lib/postgresql/17/main LATEST

# Request recovery mode, tell Postgres how to fetch archived WAL,
# and set the point-in-time target.
sudo -u postgres touch /var/lib/postgresql/17/main/recovery.signal
sudo -u postgres tee -a /var/lib/postgresql/17/main/postgresql.auto.conf <<'EOF'
restore_command = '/usr/local/bin/wal-g --config /etc/wal-g.json wal-fetch %f %p'
recovery_target_time = '2026-05-06 14:32:00 UTC'
EOF

sudo systemctl start postgresql
```
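
One step drills often miss: with the default `recovery_target_action = 'pause'`, Postgres stops replaying at the target and sits read-only. After you have verified the data, end recovery explicitly:

```bash
# When paused at a recovery target, this ends recovery and
# promotes the server to read-write.
sudo -u postgres psql -c "SELECT pg_wal_replay_resume();"
```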

## Step 6 — Verify

```sql
SELECT relname, n_live_tup
FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 20;

SELECT pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();
```

Cross-check counts against your monitoring dashboard. If they match, the drill passes.

## Pitfalls

- **Forgetting CREATE EXTENSION before restore** — restoring a DB with vector columns to a fresh cluster fails until extensions exist.
- **WAL archive gap** — `archive_command` must succeed every time. Monitor `pg_stat_archiver` (see the sketch after this list).
- **No restore drill** — backups you haven't tested don't exist. Quarterly minimum.
- **Single-region S3** — replicate to a second region for true DR.
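
A minimal archiver-health check for that second pitfall, cron-friendly; the logic is illustrative and `alert` stands in for your paging command:

```bash
#!/usr/bin/env bash
# Page if the archiver's most recent failure is newer than its
# most recent success (i.e. archiving is currently broken).
failing=$(sudo -u postgres psql -At -c \
  "SELECT count(*) FROM pg_stat_archiver
   WHERE last_failed_time > coalesce(last_archived_time, '-infinity');")
if [ "$failing" -gt 0 ]; then
  echo "WAL archiving is failing; check archive_command" | alert  # hypothetical pager hook
fi
```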

## CallSphere production note

CallSphere runs WAL-G + nightly logical dumps across **115+ DB tables**, RPO ≤ 5 min, RTO ≤ 30 min. Healthcare's HIPAA cluster (Prisma `healthcare_voice`) backs up to a separate BAA-covered S3 bucket; OneRoof's RLS data and UrackIT's Supabase + ChromaDB each carry independent retention policies. **37 agents · 90+ tools · 6 verticals**. Quarterly restore drills are a hard SLO. Plans: $149/$499/$1,499 — 14-day trial, 22% affiliate.

## FAQ

**Q: Why was pgBackRest archived?**
pgBackRest was archived in April 2026 after the Crunchy Data sale ended its sponsorship, with no replacement funding. Existing installs keep working.

**Q: WAL-G or pgmoneta?**
WAL-G for S3-first cloud workloads; pgmoneta for self-hosted with multiple repos.

**Q: pg_basebackup vs WAL-G?**
pg_basebackup for one-off snapshots, WAL-G for continuous PITR.

**Q: How big can a base backup get?**
With `WALG_DELTA_MAX_STEPS` set, most `backup-push` runs produce delta backups that store only pages changed since the previous backup, typically 10–20% of cluster size. A true full is taken whenever the delta chain hits the step limit.

**Q: Can I restore a single table?**
Not directly from WAL — restore the cluster to a temp host and `pg_dump` the table out.
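
A sketch of that last step, with `public.conversations` standing in for whatever table you need back:

```bash
# On the temp host, after the cluster restore completes
pg_dump --dbname=app --table=public.conversations \
  --format=custom --file=/tmp/conversations.dump
```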

## Sources

- [pgBackRest user guide](https://pgbackrest.org/user-guide.html)
- [Christophe Pettus — After pgBackRest](https://thebuild.com/blog/2026/04/30/after-pgbackrest/)
- [Crunchy Data — Introduction to Postgres backups](https://www.crunchydata.com/blog/introduction-to-postgres-backups)
- [Bytebase — Open-source Postgres backup solutions 2026](https://www.bytebase.com/blog/top-open-source-postgres-backup-solution/)

## Production view

In production this is not a single decision: it splits into eval design, prompt cost, and observability. The deeper you push toward live traffic, the more those three pull against each other. Better evals catch silent failures, prompt cost limits how often you can re-run them, and weak observability hides which retries are actually saving conversations versus burning latency budget.

## Serving stack tradeoffs

The big fork is managed (OpenAI Realtime, ElevenLabs Conversational AI) versus self-hosted on GPUs you operate. Managed wins on cold-start, model freshness, and zero-ops; self-hosted wins on unit economics past a certain conversation volume and on data residency for regulated verticals. CallSphere runs hybrid: Realtime for live calls, self-hosted Whisper + a hosted LLM for async, both routed through a Go gateway that enforces per-tenant rate limits.

Latency budgets are non-negotiable on voice. End-to-end target is sub-800ms ASR-to-first-token and sub-1.4s first-audio-out; anything beyond that and turn-taking feels stilted. GPU residency in the same region as your TURN servers matters more than choosing a slightly bigger model.

Observability is the unglamorous backbone — every conversation produces logs, traces, sentiment scoring, and cost attribution piped to a per-tenant dashboard. **HIPAA + SOC 2 aligned** isolation keeps healthcare traffic separated from salon traffic at the storage layer, not just the API.

## Pilot FAQ

**How does this apply to a CallSphere pilot specifically?**
CallSphere runs 37 production agents and 90+ function tools across 115+ database tables in 6 verticals, so most workflows you'd want already have a template. For a topic like "Postgres Backup with pg_dump + pgBackRest: PITR for AI Data in 2026", that means you're not starting from scratch — you're configuring an agent template that's already been hardened across thousands of conversations.

**What does the typical first-week implementation look like?**
Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Day two through five is shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side-by-side. Go-live is the moment your eval pass-rate clears your internal bar.

**Where does this break down at scale?**
The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [healthcare.callsphere.tech](https://healthcare.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.

