---
title: "Postgres + DuckDB for AI Analytics: pg_duckdb Speeds Up OLAP 100x (2026)"
description: "pg_duckdb embeds DuckDB inside Postgres so transactional and analytic queries share the same database. AI dashboards that took 90 sec on Postgres run in <1 sec via DuckDB — without leaving Postgres."
canonical: https://callsphere.ai/blog/vw7h-postgres-duckdb-ai-analytics-2026
category: "AI Engineering"
tags: ["pg_duckdb", "DuckDB", "Postgres", "Analytics", "AI"]
author: "CallSphere Team"
published: 2026-04-15T00:00:00.000Z
updated: 2026-05-08T17:26:02.427Z
---

# Postgres + DuckDB for AI Analytics: pg_duckdb Speeds Up OLAP 100x (2026)

> pg_duckdb embeds DuckDB inside Postgres so transactional and analytic queries share the same database. AI dashboards that took 90 sec on Postgres run in <1 sec via DuckDB — without leaving Postgres.

> **TL;DR** — pg_duckdb 1.0+ ships DuckDB's columnar engine inside Postgres. Set `duckdb.force_execution=true` and your existing analytic SQL gets vectorized — TPC-DS queries see up to 1500x speedups, AI dashboards land under 1 sec.

## What you'll build

A pg_duckdb-enabled Postgres where transcripts and metrics live in row-store tables, while monthly aggregate dashboards execute through DuckDB's vectorized engine — same SQL, 100x faster.

## Schema

```sql
CREATE EXTENSION pg_duckdb;

CREATE TABLE call_events (
  ts TIMESTAMPTZ NOT NULL,
  tenant_id UUID,
  agent_id UUID,
  duration_sec INT,
  cost_cents INT,
  topic TEXT
);

-- Partial-index predicates must be immutable, so now() can't appear in one;
-- a plain btree on ts covers the recent-window dashboard queries.
CREATE INDEX ON call_events (ts);

-- Cold history lives as Iceberg / Parquet on S3, queried via DuckDB (Step 3)
```

## Architecture

```mermaid
flowchart LR
  APP[App] --> PG[(Postgres OLTP)]
  PG --> ROW[Row-store tables]
  ANALYST[Analyst dashboard] --> EXEC{duckdb.force_execution?}
  EXEC -->|Yes| DDB[DuckDB vectorized]
  EXEC -->|No| PG_ENG[Postgres planner]
  DDB --> ROW
  DDB --> ICE[Iceberg / Parquet S3]
```

## Step 1 — Install pg_duckdb

```bash
sudo apt install postgresql-17-pg-duckdb
psql -c "CREATE EXTENSION pg_duckdb;"
```
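
If a packaged build isn't available for your distro, the project also publishes a Docker image with the extension preinstalled. A minimal sketch; the image tag is an assumption, so check the pg_duckdb README for current tags:

```bash
# Run Postgres 17 with pg_duckdb baked in (tag assumed; see the README)
docker run -d --name pg_duckdb \
  -e POSTGRES_PASSWORD=duckdb -p 5432:5432 \
  pgduckdb/pgduckdb:17-main

# MotherDuck support (Step 5) needs the library preloaded:
# postgresql.conf: shared_preload_libraries = 'pg_duckdb'
```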

## Step 2 — Toggle execution

```sql
SET duckdb.force_execution = true;

EXPLAIN ANALYZE
SELECT date_trunc('day', ts), tenant_id,
       count(*), avg(duration_sec), sum(cost_cents)
FROM call_events
WHERE ts >= now() - interval '90 days'
GROUP BY 1, 2;
```

EXPLAIN now shows a DuckDB custom scan node (`Custom Scan (DuckDBScan)` in current builds) in place of the usual `Aggregate` plan, confirming the query ran through the vectorized engine.
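
A quick way to see the delta is to run the same aggregate under both executors in a single psql session; timings will obviously vary with your data volume:

```sql
\timing on

SET duckdb.force_execution = false;   -- baseline: Postgres executor
SELECT count(*), avg(duration_sec) FROM call_events
WHERE ts >= now() - interval '90 days';

SET duckdb.force_execution = true;    -- same query, DuckDB vectorized
SELECT count(*), avg(duration_sec) FROM call_events
WHERE ts >= now() - interval '90 days';

RESET duckdb.force_execution;         -- restore the session default
```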

## Step 3 — Query S3 Parquet directly

```sql
SELECT count(*) FROM read_parquet(
  's3://callsphere-lake/calls/year=2026/month=04/*.parquet'
);
```

No copy, no foreign data wrapper config — just point at the bucket.
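One thing the snippet glosses over: DuckDB still needs S3 credentials. A minimal sketch using the `duckdb.secrets` table from earlier pg_duckdb releases (newer versions offer a `duckdb.create_simple_secret()` helper instead, so verify against your installed version):

```sql
-- Registered once, these credentials apply to every DuckDB S3 scan.
-- Key values are placeholders.
INSERT INTO duckdb.secrets (type, key_id, secret, region)
VALUES ('S3', 'AKIA...', '<secret-access-key>', 'us-east-1');
```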

## Step 4 — Mix Postgres + Parquet

```sql
SELECT pg.tenant_id, pg.body, ce.duration_sec
FROM conversations pg
JOIN read_parquet('s3://lake/events/*.parquet') ce
  ON pg.id = ce.conversation_id
WHERE pg.created_at >= now() - interval '7 days';
```

DuckDB plans the join across both data sources.
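
If a dashboard hits this join repeatedly, one option is to snapshot the result back into an ordinary Postgres table so the app reads plain row-store data. A sketch, assuming your pg_duckdb version supports `CREATE TABLE ... AS` over DuckDB-executed queries (the materialized views in Step 6 suggest it does); the table name is illustrative:

```sql
-- Persist the cross-source join as a regular heap table.
CREATE TABLE recent_conversation_events AS
SELECT pg.tenant_id, pg.body, ce.duration_sec
FROM conversations pg
JOIN read_parquet('s3://lake/events/*.parquet') ce
  ON pg.id = ce.conversation_id
WHERE pg.created_at >= now() - interval '7 days';
```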

## Step 5 — MotherDuck (managed) integration

```sql
-- Call name per the pg_duckdb docs; verify against your installed version.
CALL duckdb.enable_motherduck('my_token');

-- Non-default MotherDuck databases surface as ddb$-prefixed schemas:
SELECT count(*) FROM "ddb$warehouse".calls;
```

Use MotherDuck for cross-org analytics with shared compute.

## Step 6 — Schedule DuckDB-backed materialized views

```sql
CREATE MATERIALIZED VIEW daily_call_summary AS
SELECT date_trunc('day', ts) AS d, tenant_id,
       count(*) AS calls, avg(duration_sec) AS avg_duration_sec
FROM call_events
WHERE ts >= now() - interval '180 days'
GROUP BY 1, 2;

-- REFRESH ... CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX ON daily_call_summary (d, tenant_id);

REFRESH MATERIALIZED VIEW CONCURRENTLY daily_call_summary;
```

Set `duckdb.force_execution = true` in the refresh job — 90-second refresh becomes 5 seconds.
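
To automate that refresh, here is a minimal scheduling sketch assuming pg_cron is installed (pg_cron is a separate extension, not part of pg_duckdb; the job name and schedule are illustrative):

```sql
-- Nightly at 00:05: refresh the summary through DuckDB's engine.
SELECT cron.schedule(
  'refresh-daily-call-summary',
  '5 0 * * *',
  $$
    SET duckdb.force_execution = true;
    REFRESH MATERIALIZED VIEW CONCURRENTLY daily_call_summary;
  $$
);
```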

## Pitfalls

- **`duckdb.force_execution` on transactional queries** — it slows them down. Scope it per session or per role, never globally (see the sketch after this list).
- **Mutable rows + DuckDB scan** — DuckDB sees a snapshot; writes from the current transaction may not be visible to the scan.
- **S3 credentials** — register them through pg_duckdb's secrets mechanism (Step 3) rather than hard-coding keys in `postgresql.conf`.
- **Incompatible SQL** — most analytic SQL works, but uncommon Postgres operators and types may need rewrites before DuckDB will execute them.
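
For the first pitfall, one way to keep OLTP traffic on the Postgres executor is to scope the setting to the analytics role instead of the whole cluster (role name illustrative):

```sql
-- Dashboard sessions get DuckDB execution by default;
-- every other role keeps the regular Postgres executor.
ALTER ROLE dashboard_reader SET duckdb.force_execution = true;
```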

## CallSphere production note

CallSphere uses pg_duckdb to power admin and finance dashboards across **115+ DB tables**. The `call_events` and `agent_runs` row-stores live on the OLTP primary; nightly Parquet exports go to S3 and are queried hot via DuckDB. Healthcare keeps PHI-isolated dashboards on the `healthcare_voice` Prisma cluster; OneRoof's RLS analytics use DuckDB respecting per-tenant filters; UrackIT's Supabase + ChromaDB stack syncs hourly to a DuckDB-backed reporting view. **37 agents · 90+ tools · 6 verticals**. Plans: $149/$499/$1,499 — 14-day trial, 22% affiliate.

## FAQ

**Q: pg_duckdb vs pg_analytics (ParadeDB)?**
Both embed DuckDB. pg_duckdb tracks upstream DuckDB closely and also accelerates queries over regular Postgres tables; pg_analytics focuses on querying data-lake formats (Parquet, Iceberg, Delta) on object storage.

**Q: Can pg_duckdb write?**
Yes — `COPY ... TO 's3://...' (FORMAT parquet)` works.
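
For example, a nightly export like the one in the production note below might look like this (bucket path illustrative):

```sql
COPY (
  SELECT date_trunc('day', ts) AS d, tenant_id,
         count(*) AS calls, sum(cost_cents) AS cost_cents
  FROM call_events
  GROUP BY 1, 2
) TO 's3://callsphere-lake/exports/daily_calls.parquet' (FORMAT parquet);
```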

**Q: Memory limits?**
DuckDB respects a configurable memory cap inside Postgres and spills large aggregations to disk when it is exceeded.
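
A sketch of capping it per session; the GUC name follows the pg_duckdb docs at the time of writing, so verify against your version:

```sql
-- Keep DuckDB's working set well under the instance's RAM.
SET duckdb.max_memory = '4GB';
```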

**Q: Does it bypass RLS?**
DuckDB executes as the calling user — RLS still applies on Postgres tables.

**Q: When NOT to use pg_duckdb?**
Pure OLTP point queries. Postgres planner is already optimal there.

## Sources

- [pg_duckdb GitHub](https://github.com/duckdb/pg_duckdb)
- [MotherDuck — pg_duckdb 1.0 release](https://motherduck.com/blog/pg-duckdb-release/)
- [The New Stack — Postgres analytics with DuckDB](https://thenewstack.io/unleashing-postgres-for-analytics-with-duckdb-integration/)
- [MotherDuck — Postgres + DuckDB integration methods](https://motherduck.com/blog/postgres-duckdb-options/)

## pg_duckdb in production: the CallSphere view

This pattern usually starts as an architecture diagram, then collides with reality in the first week of a pilot. You discover that the vector store choice (ChromaDB vs. Postgres pgvector vs. managed) is not really a vector store choice; it's a latency, freshness, and ops choice. Picking wrong forces a re-platform six months in, exactly when you have customers depending on it.

## Shipping the agent to production

Production AI agents live or die on three loops: evals, retries, and handoff state. CallSphere runs **37 agents** across 6 verticals, each with its own eval suite — synthetic call transcripts replayed nightly with assertion checks on extracted entities (date, time, party size, insurance, address). Without that loop, prompt regressions ship silently and you only find out when bookings drop.

Structured tools beat free-form text every time. Our **90+ function tools** all enforce JSON schemas validated server-side; if the model hallucinates an integer where a string is required, we retry with a corrective system message before falling back to a deterministic path. For long-running flows, we treat agent handoffs as a state machine — booking → confirmation → SMS — so context survives turn boundaries.

The Realtime API vs. async decision usually comes down to "is the user holding the phone right now?" If yes, Realtime; if no (callback queue, after-hours voicemail), async wins on cost-per-conversation, which we track per agent in **115+ database tables** spanning all 6 verticals.

## FAQ

**Is this realistic for a small business, or is it enterprise-only?**
The healthcare stack is a concrete example: FastAPI + OpenAI Realtime API + NestJS + Prisma + Postgres `healthcare_voice` schema + Twilio voice + AWS SES + JWT auth, all SOC 2 / HIPAA aligned. For a stack like the one in this post, that means you're not starting from scratch; you're configuring an agent template that's already been hardened across thousands of conversations.

**Which integrations have to be in place before launch?**
Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Day two through five is shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side-by-side. Go-live is the moment your eval pass-rate clears your internal bar.

**Does this keep working as the business scales?**
The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [realestate.callsphere.tech](https://realestate.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.

