---
title: "Migrating from LangChain to OpenAI Agents SDK: A Practical Guide"
description: "A hands-on guide to migrating AI agent code from LangChain to the OpenAI Agents SDK. Covers concept mapping, code translation, testing strategies, and gradual migration paths."
canonical: https://callsphere.ai/blog/migrating-langchain-to-openai-agents-sdk-practical-guide
category: "Learn Agentic AI"
tags: ["LangChain", "OpenAI Agents SDK", "Migration", "Python", "Framework Migration"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T01:02:44.664Z
---

# Migrating from LangChain to OpenAI Agents SDK: A Practical Guide

> A hands-on guide to migrating AI agent code from LangChain to the OpenAI Agents SDK. Covers concept mapping, code translation, testing strategies, and gradual migration paths.

## Why Teams Migrate from LangChain

LangChain was the first widely adopted framework for building LLM applications, and it earned that position by moving fast. But as production requirements matured, teams encountered pain points: deep abstraction layers that obscured what prompts actually reached the model, rapidly changing APIs with frequent breaking changes, and heavyweight dependency trees.

The OpenAI Agents SDK takes a different approach: minimal abstractions, explicit control flow, and built-in primitives for the patterns that matter most in production — tool calling, agent handoffs, guardrails, and tracing.

## Concept Mapping: LangChain to Agents SDK

Understanding the conceptual mapping is the first step. Here is how the core primitives translate:

```mermaid
flowchart LR
    INPUT(["User input"])
    AGENT["Agent<br/>name plus instructions"]
    HAND{"Handoff to<br/>another agent?"}
    SUB["Sub-agent<br/>specialist"]
    GUARD{"Guardrail<br/>passed?"}
    TOOL["Tool call"]
    SDK[("Tracing<br/>OpenAI dashboard")]
    OUT(["Final output"])
    INPUT --> AGENT --> HAND
    HAND -->|Yes| SUB --> GUARD
    HAND -->|No| GUARD
    GUARD -->|Yes| TOOL --> AGENT
    GUARD -->|Block| OUT
    AGENT --> OUT
    AGENT --> SDK
    style AGENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style SDK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
```

| LangChain | OpenAI Agents SDK | Notes |
| --- | --- | --- |
| `ChatOpenAI` | `Agent(model="gpt-4o")` | Model config lives on the Agent |
| `Tool` / `@tool` | `@function_tool` | Decorator-based, type-safe |
| `AgentExecutor` | `Runner.run()` | Manages the agent loop |
| `ConversationBufferMemory` | Conversation history in `input` | Explicit message list |
| `Chain` | Agent handoffs | Compose via `handoffs=[]` |
| `OutputParser` | `output_type=MyModel` | Pydantic model on Agent |

## Translating a LangChain Agent to Agents SDK

Here is a typical LangChain agent that looks up product information:

```python
# ── LangChain version ──
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate

@tool
def lookup_product(product_id: str) -> str:
    """Look up product details by ID."""
    # database call here
    return f"Product {product_id}: Widget Pro, $49.99, in stock"

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a product assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_openai_tools_agent(llm, [lookup_product], prompt)
executor = AgentExecutor(agent=agent, tools=[lookup_product])
result = executor.invoke({"input": "Tell me about product P-1234"})
```

And here is the equivalent in the OpenAI Agents SDK:

```python
# ── OpenAI Agents SDK version ──
from agents import Agent, Runner, function_tool

@function_tool
def lookup_product(product_id: str) -> str:
    """Look up product details by ID."""
    return f"Product {product_id}: Widget Pro, $49.99, in stock"

agent = Agent(
    name="Product Assistant",
    instructions="You are a product assistant.",
    model="gpt-4o",
    tools=[lookup_product],
)

result = Runner.run_sync(agent, "Tell me about product P-1234")
print(result.final_output)
```

The SDK version is roughly half the code. The agent loop, tool execution, and response parsing are handled internally by `Runner`.

## Migrating Chains to Handoffs

LangChain uses chains to compose multiple steps. The Agents SDK uses handoffs to delegate between specialized agents.

```python
from agents import Agent, Runner

billing_agent = Agent(
    name="Billing Agent",
    instructions="Handle billing questions. Access account data.",
    model="gpt-4o",
)

shipping_agent = Agent(
    name="Shipping Agent",
    instructions="Handle shipping and delivery questions.",
    model="gpt-4o",
)

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route the user to the right specialist agent.",
    model="gpt-4o",
    handoffs=[billing_agent, shipping_agent],
)

result = Runner.run_sync(triage_agent, "Where is my order?")
print(result.final_output)
```

## Gradual Migration Strategy

Do not rewrite everything at once. Migrate one agent or chain at a time, run both implementations side by side, and cut over once their outputs consistently agree.

```python
# Compatibility wrapper: run both implementations and compare outputs
async def migrate_with_comparison(user_input: str):
    langchain_result = executor.invoke({"input": user_input})
    sdk_result = await Runner.run(agent, user_input)

    match = langchain_result["output"] == sdk_result.final_output
    log_comparison(user_input, langchain_result, sdk_result, match)  # your logging helper

    # Return the SDK result once comparison confidence is high
    return sdk_result.final_output
```

## FAQ

### Can the Agents SDK work with non-OpenAI models like LangChain does?

Yes. The Agents SDK supports any model via the LiteLLM integration. Install `openai-agents[litellm]` and use model strings like `litellm/anthropic/claude-sonnet-4-20250514`. The tool calling and handoff mechanics work the same regardless of the model provider.

### How do I migrate LangChain memory to the Agents SDK?

The Agents SDK does not have a built-in memory abstraction. Instead, you pass conversation history explicitly as a list of messages in the `input` parameter. Extract your existing conversation history from LangChain memory stores and format it as standard message dicts.
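
One possible conversion sketch, assuming LangChain message objects with their standard `.type` and `.content` attributes; `to_sdk_messages` is an illustrative helper, not part of either library:

```python
# Sketch: converting LangChain chat history into the explicit message list
# the Agents SDK expects.

def to_sdk_messages(history):
    """Map LangChain message objects (.type, .content) to role/content dicts."""
    role_map = {"human": "user", "ai": "assistant", "system": "system"}
    return [
        {"role": role_map.get(m.type, "user"), "content": m.content}
        for m in history
    ]

# Typical use during migration:
#   messages = to_sdk_messages(memory.chat_memory.messages)
#   messages.append({"role": "user", "content": "And what about returns?"})
#   result = Runner.run_sync(agent, messages)
```

After each run, `result.to_input_list()` returns the accumulated history in this same format, ready to extend with the next user turn.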

### What about LangChain's document loaders and vector store integrations?

Those are data pipeline tools, not agent framework features. You can keep using LangChain's document loaders and vector stores alongside the Agents SDK. Wrap the retrieval logic in a `@function_tool` and the agent calls it like any other tool.

---

#LangChain #OpenAIAgentsSDK #Migration #Python #FrameworkMigration #AgenticAI #LearnAI #AIEngineering

---

Source: https://callsphere.ai/blog/migrating-langchain-to-openai-agents-sdk-practical-guide
