---
title: "AgentKit 1.0 Evals Harness: Building Regression-Safe Agent CI"
description: "A practical guide to AgentKit 1.0's evals harness — golden traces, LLM-as-judge, regression gates, and how to ship agent updates safely in 2026."
canonical: https://callsphere.ai/blog/td30-oai-b-013
category: "AI Engineering"
tags: ["AgentKit", "Evals", "Testing", "CI/CD", "AI Engineering"]
author: "CallSphere Team"
published: 2026-04-18T00:00:00.000Z
updated: 2026-05-08T17:26:01.989Z
---

# AgentKit 1.0 Evals Harness: Building Regression-Safe Agent CI

> A practical guide to AgentKit 1.0's evals harness — golden traces, LLM-as-judge, regression gates, and how to ship agent updates safely in 2026.

Most agent teams ship without proper evals. Then they push a prompt change, break a critical flow in production, and learn the hard way. AgentKit 1.0's evals harness is the antidote.

## What the Evals Harness Actually Does

The harness is a YAML-based configuration that defines test cases for an agent. Each test case has inputs, expected outputs (or expected behaviors), and scoring rules. The harness runs these against any version of the agent and produces a pass/fail report with detailed traces.

Key primitives:

- **Golden traces**: known-good runs that are reused as fixtures
- **LLM-as-judge scoring**: a separate model evaluates the output for qualitative criteria
- **Schema scoring**: structured outputs are validated against JSON Schema
- **Regression gates**: minimum pass rates that block deployment
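
To make the config shape concrete, here is one test case modeled as TypeScript. The field names (`input`, `expect`, `judge`, `critical`) are illustrative assumptions, not the harness's documented schema; check the AgentKit docs for the real YAML format.

```typescript
// Hypothetical eval test-case shape -- illustrative, not AgentKit's schema.
interface EvalCase {
  name: string;
  input: string;                                // user message or scenario seed
  expect?: { contains?: string[] };             // deterministic string checks
  judge?: { rubric: string; minScore: number }; // LLM-as-judge criteria
  critical?: boolean;                           // counts toward the zero-regression gate
}

const refundPolicy: EvalCase = {
  name: "refund-policy-happy-path",
  input: "How do I return a damaged item?",
  expect: { contains: ["30 days", "return label"] },
  judge: {
    rubric: "Answer must state the return window and the next step.",
    minScore: 0.8,
  },
  critical: true,
};
```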

## Pricing

Evals cost $0.02 per evaluated trace. For a suite of 200 traces run on every PR, that is $4 per CI run. A team that ships 10 PRs/day spends ~$1,200/month on evals. That is trivially cheap compared to the cost of a single production regression.
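
A quick sanity check on that arithmetic (the per-trace price is the figure quoted above; verify it against current pricing):

```typescript
// Back-of-envelope eval spend using the per-trace price quoted above.
const PRICE_PER_TRACE = 0.02; // USD
const TRACES_PER_SUITE = 200;
const PRS_PER_DAY = 10;
const DAYS_PER_MONTH = 30;

const costPerRun = PRICE_PER_TRACE * TRACES_PER_SUITE;         // $4
const monthlyCost = costPerRun * PRS_PER_DAY * DAYS_PER_MONTH; // $1,200

console.log(`$${costPerRun} per CI run, ~$${monthlyCost}/month`);
```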

## A Realistic Eval Suite

For a customer service agent, a representative suite includes:

- 50 happy-path tests (common questions with expected answers)
- 30 edge-case tests (typos, multi-language, unclear intent)
- 20 adversarial tests (prompt injection attempts, jailbreaks)
- 20 guardrail tests (PII handling, escalation logic)
- 30 tool-use tests (correct API calls with correct arguments)
- 50 end-to-end scenario tests (multi-turn conversations)

Total: 200 traces, $4 per run. Run on every PR plus nightly.

## CI Integration

```mermaid
graph LR
  A[PR opened] --> B[Build agent]
  B --> C[Run eval suite]
  C -->|Pass rate >= gate| D[Allow merge]
  C -->|Pass rate < gate| F[Surface failing traces]
```

The pattern that works: gate on overall pass rate but require zero regressions on a labeled "critical" subset. This catches both "we broke a lot of tests" and "we broke the one test that really matters."
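
A minimal sketch of that two-level gate, assuming results arrive as `{ name, passed, critical }` records (the record shape is an assumption, not the harness's export format):

```typescript
interface EvalResult {
  name: string;
  passed: boolean;
  critical: boolean; // labeled subset that must never regress
}

// Two-level gate: every critical test must pass, AND the overall
// pass rate must clear the threshold.
function shouldAllowMerge(results: EvalResult[], gate = 0.95): boolean {
  const criticalFailures = results.filter((r) => r.critical && !r.passed);
  if (criticalFailures.length > 0) {
    console.error("Critical regressions:", criticalFailures.map((r) => r.name));
    return false;
  }
  const passRate = results.filter((r) => r.passed).length / results.length;
  return passRate >= gate;
}
```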

## LLM-as-Judge Pitfalls

LLM-as-judge is powerful but treacherous. Common mistakes:

- Using the same model to generate and judge (correlated errors)
- Vague judging prompts that produce inconsistent scores
- Not periodically validating judge accuracy against human labels

The fix: use a different model family for judging, write specific judging rubrics, and audit a sample of judge decisions monthly.
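
The monthly audit can be as simple as measuring agreement between judge verdicts and human labels on a sampled subset. A sketch, with an assumed data shape:

```typescript
interface JudgedSample {
  judgeVerdict: "pass" | "fail";
  humanLabel: "pass" | "fail";
}

// Fraction of sampled traces where the LLM judge agrees with a human
// reviewer. If this drifts below your tolerance (say, 0.9), revisit the
// rubric or swap the judge model.
function judgeAgreement(samples: JudgedSample[]): number {
  const agreed = samples.filter((s) => s.judgeVerdict === s.humanLabel).length;
  return agreed / samples.length;
}
```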

## Versioning and Snapshots

The harness supports snapshot testing. The first run creates a snapshot; subsequent runs compare against it. Changes to outputs require explicit snapshot acceptance, similar to Jest snapshots. This is a great pattern for catching unintended drift when you swap models or update prompts.
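
Conceptually this is the same loop Jest runs. A stripped-down illustration of the mechanism (not the harness's actual API):

```typescript
import { createHash } from "node:crypto";
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";

// First run records a digest of the output; later runs diff against it.
// Accepting a change means deliberately rewriting the snapshot file.
function checkSnapshot(testName: string, output: string): "created" | "match" | "drift" {
  mkdirSync("snapshots", { recursive: true });
  const path = `snapshots/${testName}.snap`;
  const digest = createHash("sha256").update(output).digest("hex");

  if (!existsSync(path)) {
    writeFileSync(path, digest);
    return "created";
  }
  return readFileSync(path, "utf8") === digest ? "match" : "drift";
}
```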

## When Evals Become a Liability

Over-fitting to the eval suite is a real risk. We have seen teams iterate on their suite until pass rate hits 100%, ship to production, and discover the agent is brittle on real traffic. Mitigations:

- Rotate in new test cases sourced from production traffic monthly
- Track production quality metrics independently of eval pass rate
- Treat the suite as a regression net, not a quality ceiling

## Frequently Asked Questions

**Do evals work for non-AgentKit agents?** Limited — the harness is tightly coupled to the AgentKit runtime.

**Can I export traces for offline analysis?** Yes, traces are available via the API in JSON format.

**Is there a free tier?** Yes, the first 1,000 evaluated traces per month are free.

**What about multimodal evals (vision, audio)?** Vision is supported. Audio evals are in private preview.


## The production view

Regression-safe agent CI ultimately resolves into one engineering question: when do you use the OpenAI Realtime API versus an async pipeline? Realtime wins on latency for live calls. Async wins on cost, retries, and structured tool reliability for callbacks and SMS flows. Most teams need both, and the routing layer between them becomes the most load-bearing piece of the stack.

## Shipping the agent to production

Production AI agents live or die on three loops: evals, retries, and handoff state. CallSphere runs **37 agents** across 6 verticals, each with its own eval suite — synthetic call transcripts replayed nightly with assertion checks on extracted entities (date, time, party size, insurance, address). Without that loop, prompt regressions ship silently and you only find out when bookings drop.
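
That nightly loop reduces to asserting extracted entities against the golden values recorded with each transcript. A simplified sketch (field names are illustrative, not CallSphere's internal schema):

```typescript
// Golden values captured alongside a known-good call transcript.
interface ExpectedEntities {
  date?: string;
  time?: string;
  partySize?: number;
}

// Returns a human-readable mismatch report; an empty array means the
// replayed agent extracted every entity correctly.
function assertEntities(
  extracted: Record<string, unknown>,
  expected: ExpectedEntities,
): string[] {
  return Object.entries(expected)
    .filter(([key, want]) => extracted[key] !== want)
    .map(([key, want]) =>
      `${key}: expected ${JSON.stringify(want)}, got ${JSON.stringify(extracted[key])}`,
    );
}
```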

Structured tools beat free-form text every time. Our **90+ function tools** all enforce JSON schemas validated server-side; if the model hallucinates an integer where a string is required, we retry with a corrective system message before falling back to a deterministic path. For long-running flows, we treat agent handoffs as a state machine — booking → confirmation → SMS — so context survives turn boundaries.
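
The validate-then-retry loop looks roughly like this. `callModel` and `validate` are stand-ins for your model client and a JSON Schema validator (e.g. Ajv), not real library calls:

```typescript
// Hypothetical sketch: retry invalid tool arguments with a corrective
// system message, then fall back to a deterministic path.
async function callToolWithRetry(
  callModel: (correction?: string) => Promise<unknown>, // stand-in model client
  validate: (args: unknown) => string[],                // returns schema errors
  deterministicFallback: () => unknown,
  maxRetries = 2,
): Promise<unknown> {
  let correction: string | undefined;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const args = await callModel(correction);
    const errors = validate(args);
    if (errors.length === 0) return args; // schema-valid: proceed
    correction = `Your previous tool call was invalid: ${errors.join("; ")}. ` +
      "Re-emit arguments that match the JSON schema exactly.";
  }
  return deterministicFallback(); // stop trusting the model for this turn
}
```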

The Realtime API vs. async decision usually comes down to "is the user holding the phone right now?" If yes, Realtime; if no (callback queue, after-hours voicemail), async wins on cost-per-conversation, which we track per agent in **115+ database tables** spanning all 6 verticals.

## FAQ

**Why does regression-safe agent CI matter for revenue, not just engineering?**
57+ languages are supported out of the box, and the platform is HIPAA and SOC 2 aligned, which removes most of the procurement friction in regulated verticals. For regression-safe agent CI, that means you're not starting from scratch — you're configuring an agent template that's already been hardened across thousands of conversations.

**What should teams focus on in the first week?**
Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Day two through five is shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side-by-side. Go-live is the moment your eval pass-rate clears your internal bar.

**How does CallSphere's stack handle this differently than a generic chatbot?**
The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [urackit.callsphere.tech](https://urackit.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.

