
Agent Context and State Management with RunContextWrapper

Learn how to use RunContextWrapper to pass shared state between agents and tools in the OpenAI Agents SDK. Covers typed context, dependency injection, and practical patterns.

The Problem: Sharing State Across the Agent Loop

In real applications, agents and tools need access to shared state that goes beyond the conversation messages. A customer support agent needs the current user's account details. A database query tool needs a connection pool. An analytics agent needs the current tenant ID for data isolation.

The OpenAI Agents SDK solves this with the context system — a typed, dependency-injection-like mechanism that lets you pass arbitrary state through the entire agent loop, accessible by agents, tools, and handoff callbacks.

RunContextWrapper Basics

The RunContextWrapper is a generic wrapper around your custom context object. You define a context type, create an instance, and pass it to the runner:

from dataclasses import dataclass
from agents import Agent, Runner, RunContextWrapper

@dataclass
class UserContext:
    user_id: str
    user_name: str
    account_tier: str
    language: str

agent = Agent[UserContext](
    name="Support Agent",
    instructions="Help the user with their account.",
)

context = UserContext(
    user_id="usr_12345",
    user_name="Alice",
    account_tier="premium",
    language="en",
)

result = Runner.run_sync(
    agent,
    "What features do I have access to?",
    context=context,
)

The Agent[UserContext] type annotation is optional but recommended — it enables IDE type checking and autocomplete when you access the context in tools and dynamic instructions.
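To see why the generic parameter helps, here is a minimal, illustrative sketch (not the SDK's actual implementation) of a generic wrapper in the spirit of RunContextWrapper. The type parameter flows through to the `.context` attribute, so a type checker can verify every field access:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class ContextWrapper(Generic[T]):
    """Illustrative stand-in for RunContextWrapper[T]."""
    context: T

@dataclass
class UserContext:
    user_id: str
    account_tier: str

wrapper = ContextWrapper(UserContext(user_id="usr_12345", account_tier="premium"))

# A type checker infers wrapper.context as UserContext,
# so attribute access is checked statically:
print(wrapper.context.account_tier)  # premium
```

A typo like `wrapper.context.acount_tier` would be flagged by mypy or your IDE before the agent ever runs.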

Accessing Context in Dynamic Instructions

Dynamic instruction functions receive the context wrapper, letting you personalize the system prompt per user:

from dataclasses import dataclass

from agents import Agent, RunContextWrapper

@dataclass
class TenantContext:
    tenant_id: str
    tenant_name: str
    plan: str
    feature_flags: dict[str, bool]

def build_instructions(
    context: RunContextWrapper[TenantContext],
    agent: Agent[TenantContext],
) -> str:
    tenant = context.context
    features = tenant.feature_flags

    base = f"""You are a support agent for {tenant.tenant_name}.
Their plan: {tenant.plan}.

Available features:"""

    if features.get("advanced_analytics"):
        base += "\n- Advanced Analytics: Yes"
    else:
        base += "\n- Advanced Analytics: No (suggest upgrade)"

    if features.get("api_access"):
        base += "\n- API Access: Yes"
    else:
        base += "\n- API Access: No (available on Business plan)"

    return base

agent = Agent[TenantContext](
    name="Tenant Support",
    instructions=build_instructions,
)

This is a powerful pattern for multi-tenant SaaS applications where each customer gets a customized agent experience.

Accessing Context in Tools

Tools can access the context by adding a RunContextWrapper parameter. The SDK automatically detects this parameter, injects the context at runtime, and excludes it from the tool's JSON schema:

from dataclasses import dataclass
from typing import Any

import httpx

from agents import function_tool, RunContextWrapper

@dataclass
class AppContext:
    user_id: str
    db_pool: Any  # Database connection pool (e.g. an asyncpg pool)
    api_key: str  # External service API key

@function_tool
async def get_user_orders(
    context: RunContextWrapper[AppContext],
    status: str = "all",
    limit: int = 10,
) -> str:
    """Get orders for the current user.

    Args:
        status: Filter by order status (all, pending, shipped, delivered).
        limit: Maximum number of orders to return.
    """
    app = context.context

    # Use the database pool from context
    async with app.db_pool.acquire() as conn:
        if status == "all":
            rows = await conn.fetch(
                "SELECT * FROM orders WHERE user_id = $1 ORDER BY created_at DESC LIMIT $2",
                app.user_id, limit
            )
        else:
            rows = await conn.fetch(
                "SELECT * FROM orders WHERE user_id = $1 AND status = $2 ORDER BY created_at DESC LIMIT $3",
                app.user_id, status, limit
            )

    return format_orders(rows)  # format_orders: your own row-formatting helper

@function_tool
async def track_shipment(
    context: RunContextWrapper[AppContext],
    order_id: str,
) -> str:
    """Track the shipment status of an order.

    Args:
        order_id: The order ID to track.
    """
    app = context.context

    # Use the API key from context
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://shipping-api.example.com/track/{order_id}",
            headers={"Authorization": f"Bearer {app.api_key}"},
        )
        return response.text

The LLM only sees the status, limit, and order_id parameters — the context is invisible to the model but available to your code.
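The core idea behind this exclusion can be sketched with the standard library alone. This is a rough illustration of the concept, not the SDK's real detection logic: parameters annotated with the context wrapper type are skipped when building the schema the model sees.

```python
import inspect
import typing

class RunContextWrapper:
    """Stand-in for the SDK class, for illustration only."""

async def get_user_orders(
    context: RunContextWrapper,
    status: str = "all",
    limit: int = 10,
) -> str:
    ...

def visible_params(fn) -> list[str]:
    """Return the parameter names the model would see: every
    parameter whose annotation is not the context wrapper."""
    hints = typing.get_type_hints(fn)
    names = []
    for name in inspect.signature(fn).parameters:
        ann = hints.get(name)
        origin = typing.get_origin(ann) or ann
        if origin is RunContextWrapper:
            continue  # injected by the runner, hidden from the schema
        names.append(name)
    return names

print(visible_params(get_user_orders))  # ['status', 'limit']
```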

Mutable Context for State Accumulation

The context object can be mutable, allowing tools to accumulate state across the agent loop:

@dataclass
class AuditContext:
    user_id: str
    actions_taken: list[str]  # Mutable list
    total_cost: float = 0.0   # Running total

@function_tool
async def process_refund(
    context: RunContextWrapper[AuditContext],
    order_id: str,
    amount: float,
) -> str:
    """Process a refund for an order.

    Args:
        order_id: The order to refund.
        amount: The refund amount in USD.
    """
    audit = context.context

    # Record the action
    audit.actions_taken.append(f"Refund ${amount} for order {order_id}")
    audit.total_cost += amount

    return f"Refund of ${amount} processed for order {order_id}."

# After the agent run, inspect accumulated state
context = AuditContext(user_id="usr_123", actions_taken=[])
result = await Runner.run(agent, "Process refunds for orders ORD-1 ($50) and ORD-2 ($30)", context=context)

print(f"Actions taken: {context.actions_taken}")
print(f"Total refund cost: ${context.total_cost}")
# Actions taken: ['Refund $50.0 for order ORD-1', 'Refund $30.0 for order ORD-2']
# Total refund cost: $80.0

This is invaluable for audit logging, cost tracking, and post-run analysis.

Context in Multi-Agent Handoffs

When an agent hands off to another agent, the context carries over automatically. All agents in the workflow share the same context instance:

@dataclass
class SessionContext:
    user_id: str
    conversation_id: str
    escalation_count: int = 0

billing_agent = Agent[SessionContext](
    name="Billing Agent",
    instructions="Handle billing inquiries.",
)

support_agent = Agent[SessionContext](
    name="Support Agent",
    instructions="Handle general support. Hand off billing questions to the Billing Agent.",
    handoffs=[billing_agent],
)

# Both agents see the same SessionContext
context = SessionContext(user_id="usr_456", conversation_id="conv_789")
result = await Runner.run(support_agent, "I need a refund", context=context)

Typed Context Best Practices

Use Dataclasses or Pydantic Models

Dataclasses are the simplest option:

from dataclasses import dataclass, field

@dataclass
class AppContext:
    user_id: str
    tenant_id: str
    permissions: list[str] = field(default_factory=list)
    request_id: str = ""

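One reason `field(default_factory=list)` appears above: dataclasses deliberately reject a bare mutable default like `permissions: list[str] = []`, because a single list would be shared by every instance. A quick check with plain Python:

```python
from dataclasses import dataclass, field

@dataclass
class AppContext:
    user_id: str
    permissions: list[str] = field(default_factory=list)

a = AppContext(user_id="u1")
b = AppContext(user_id="u2")
a.permissions.append("read")

# Each instance gets its own fresh list:
print(b.permissions)  # []

# A bare mutable default is rejected at class-definition time:
try:
    @dataclass
    class Bad:
        items: list = []
except ValueError as e:
    print("ValueError:", e)
```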
Pydantic models work too, with the added benefit of validation:

from pydantic import BaseModel

class AppContext(BaseModel):
    user_id: str
    tenant_id: str
    permissions: list[str] = []
    request_id: str = ""

Separate Read-Only and Mutable State

Use frozen dataclasses for state that should not change:

from dataclasses import dataclass, field

@dataclass(frozen=True)
class AuthContext:
    user_id: str
    permissions: tuple[str, ...]  # Immutable

@dataclass
class MutableState:
    actions_log: list[str] = field(default_factory=list)
    api_calls_made: int = 0

@dataclass
class AppContext:
    auth: AuthContext      # Cannot be modified
    state: MutableState    # Can accumulate state
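
A quick check of the frozen/mutable split: rebinding a field on the frozen part raises `FrozenInstanceError`, while the mutable part accumulates state normally.

```python
from dataclasses import dataclass, field, FrozenInstanceError

@dataclass(frozen=True)
class AuthContext:
    user_id: str
    permissions: tuple[str, ...]

@dataclass
class MutableState:
    actions_log: list[str] = field(default_factory=list)
    api_calls_made: int = 0

auth = AuthContext(user_id="usr_1", permissions=("read",))
state = MutableState()

state.actions_log.append("listed orders")
state.api_calls_made += 1

try:
    auth.user_id = "usr_2"  # rebinding a frozen field fails
except FrozenInstanceError:
    print("auth is read-only")

print(state.actions_log, state.api_calls_made)
```

Note that `frozen=True` only prevents rebinding attributes; it does not deep-freeze nested objects, which is why the example uses an immutable `tuple` for permissions.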

Practical Example: Multi-Tenant SaaS Agent

Here is a complete example showing how context enables a multi-tenant customer support agent:

import asyncio
from dataclasses import dataclass, field
from agents import Agent, Runner, RunContextWrapper, function_tool

@dataclass
class TenantContext:
    tenant_id: str
    tenant_name: str
    user_id: str
    user_email: str
    plan: str  # "free", "pro", "enterprise"
    actions: list[str] = field(default_factory=list)

def build_instructions(ctx: RunContextWrapper[TenantContext], agent: Agent) -> str:
    t = ctx.context
    return f"""You are a support agent for {t.tenant_name}.
Current user: {t.user_email} (Plan: {t.plan})

Guidelines:
- Only access data for tenant {t.tenant_id}
- If user is on free plan, mention relevant upgrade benefits naturally
- Log all data access for compliance"""

@function_tool
async def get_usage_stats(
    ctx: RunContextWrapper[TenantContext],
) -> str:
    """Get the current user's usage statistics."""
    t = ctx.context
    t.actions.append(f"Accessed usage stats for {t.user_id}")
    return f"API calls this month: 1,247 / {'10,000' if t.plan == 'pro' else '1,000'}"

@function_tool
async def submit_ticket(
    ctx: RunContextWrapper[TenantContext],
    subject: str,
    description: str,
    priority: str = "normal",
) -> str:
    """Submit a support ticket.

    Args:
        subject: Ticket subject.
        description: Detailed description of the issue.
        priority: Ticket priority (low, normal, high, urgent).
    """
    t = ctx.context
    t.actions.append(f"Created ticket: {subject} (priority: {priority})")
    return f"Ticket created: #{t.tenant_id[:4]}-{len(t.actions):04d} — {subject}"

agent = Agent[TenantContext](
    name="Support Agent",
    instructions=build_instructions,
    tools=[get_usage_stats, submit_ticket],
)

async def main():
    context = TenantContext(
        tenant_id="tn_acme_corp",
        tenant_name="Acme Corporation",
        user_id="usr_alice",
        user_email="[email protected]",
        plan="pro",
    )

    result = await Runner.run(
        agent,
        "I think I am hitting my API rate limit. Can you check and open a ticket?",
        context=context,
    )

    print(result.final_output)
    print(f"\nAudit log: {context.actions}")

asyncio.run(main())

This pattern provides:

  • Tenant isolation: The agent and tools only access data for the current tenant
  • Personalization: Instructions adapt to the user's plan
  • Audit trail: All actions are logged in the mutable context
  • Type safety: The IDE knows exactly what fields are available on the context

Source: OpenAI Agents SDK — Context
