
Migrating from LangChain to OpenAI Agents SDK: A Practical Guide

A hands-on guide to migrating AI agent code from LangChain to the OpenAI Agents SDK. Covers concept mapping, code translation, testing strategies, and gradual migration paths.

Why Teams Migrate from LangChain

LangChain was the first widely adopted framework for building LLM applications, and it earned that position by moving fast. But as production requirements matured, teams encountered pain points: deep abstraction layers that obscured what prompts actually reached the model, rapidly changing APIs with frequent breaking changes, and heavyweight dependency trees.

The OpenAI Agents SDK takes a different approach: minimal abstractions, explicit control flow, and built-in primitives for the patterns that matter most in production — tool calling, agent handoffs, guardrails, and tracing.

Concept Mapping: LangChain to Agents SDK

Understanding the conceptual mapping is the first step. Here is how the core primitives translate:

| LangChain | OpenAI Agents SDK | Notes |
| --- | --- | --- |
| `ChatOpenAI` | `Agent(model="gpt-4o")` | Model config lives on the Agent |
| `Tool` / `@tool` | `@function_tool` | Decorator-based, type-safe |
| `AgentExecutor` | `Runner.run()` | Manages the agent loop |
| `ConversationBufferMemory` | Conversation history in input | Explicit message list |
| `Chain` | Agent handoffs | Compose via `handoffs=[]` |
| `OutputParser` | `output_type=MyModel` | Pydantic model on Agent |
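The `OutputParser` row deserves a quick sketch. Instead of parsing text after the fact, the Agents SDK takes a typed schema on the Agent and returns a parsed object as `final_output`. This is a hedged sketch: the `ProductInfo` schema and the field names are invented for illustration, and the SDK accepts Pydantic models as well as plain dataclasses like the one here.

```python
from dataclasses import dataclass

# Hypothetical schema for structured output; any Pydantic model or
# dataclass works as an output_type on the Agent.
@dataclass
class ProductInfo:
    name: str
    price: float
    in_stock: bool

# With the SDK installed, you would attach the schema to the agent:
#
# agent = Agent(
#     name="Product Assistant",
#     instructions="Answer with structured product info.",
#     model="gpt-4o",
#     output_type=ProductInfo,
# )
# result = Runner.run_sync(agent, "Tell me about product P-1234")
# result.final_output  # a ProductInfo instance, not raw text

info = ProductInfo(name="Widget Pro", price=49.99, in_stock=True)
print(info)
```

The downstream code then works with typed fields (`info.price`, `info.in_stock`) rather than re-parsing model text.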

Translating a LangChain Agent to Agents SDK

Here is a typical LangChain agent that looks up product information:

```python
# ── LangChain version ──
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate

@tool
def lookup_product(product_id: str) -> str:
    """Look up product details by ID."""
    # database call here
    return f"Product {product_id}: Widget Pro, $49.99, in stock"

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a product assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_openai_tools_agent(llm, [lookup_product], prompt)
executor = AgentExecutor(agent=agent, tools=[lookup_product])
result = executor.invoke({"input": "Tell me about product P-1234"})
```

And here is the equivalent in the OpenAI Agents SDK:

```python
# ── OpenAI Agents SDK version ──
from agents import Agent, Runner, function_tool

@function_tool
def lookup_product(product_id: str) -> str:
    """Look up product details by ID."""
    return f"Product {product_id}: Widget Pro, $49.99, in stock"

agent = Agent(
    name="Product Assistant",
    instructions="You are a product assistant.",
    model="gpt-4o",
    tools=[lookup_product],
)

result = Runner.run_sync(agent, "Tell me about product P-1234")
print(result.final_output)
```

The SDK version is roughly half the code. The agent loop, tool execution, and response parsing are handled internally by Runner.

Migrating Chains to Handoffs

LangChain uses chains to compose multiple steps. The Agents SDK uses handoffs to delegate between specialized agents.

```python
from agents import Agent, Runner

billing_agent = Agent(
    name="Billing Agent",
    instructions="Handle billing questions. Access account data.",
    model="gpt-4o",
)

shipping_agent = Agent(
    name="Shipping Agent",
    instructions="Handle shipping and delivery questions.",
    model="gpt-4o",
)

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route the user to the right specialist agent.",
    model="gpt-4o",
    handoffs=[billing_agent, shipping_agent],
)

result = Runner.run_sync(triage_agent, "Where is my order?")
print(result.final_output)
```

Gradual Migration Strategy

Do not rewrite everything at once. Migrate one agent or chain at a time.

```python
# Compatibility wrapper: run both implementations and compare.
# Note: Runner.run_sync cannot be called from inside a running event
# loop, so this wrapper is plain synchronous code.
def migrate_with_comparison(user_input: str):
    langchain_result = executor.invoke({"input": user_input})
    sdk_result = Runner.run_sync(agent, user_input)

    # Exact string equality is a low bar for LLM output; a semantic or
    # rubric-based comparison is usually more informative.
    match = langchain_result["output"] == sdk_result.final_output
    log_comparison(user_input, langchain_result, sdk_result, match)  # your logging helper

    # Return the SDK result once confidence is high
    return sdk_result.final_output
```
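Once the comparison logs look consistently good, a deterministic rollout flag lets you shift traffic gradually instead of flipping a global switch. This is a hedged sketch: `use_sdk_agent` and `SDK_ROLLOUT_PERCENT` are hypothetical helpers, not part of either framework.

```python
import hashlib

# Start small and raise this as the comparison logs build confidence.
SDK_ROLLOUT_PERCENT = 10

def use_sdk_agent(user_id: str, percent: int = SDK_ROLLOUT_PERCENT) -> bool:
    """Bucket users 0-99 by a stable hash of their ID, so the same user
    always hits the same implementation across requests."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

print(use_sdk_agent("user-42"))
```

Hashing the user ID (rather than random sampling per request) keeps each user's experience consistent while the two implementations coexist.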

FAQ

Can the Agents SDK work with non-OpenAI models like LangChain does?

Yes. The Agents SDK supports any model via the LiteLLM integration. Install openai-agents[litellm] and use model strings like litellm/anthropic/claude-sonnet-4-20250514. The tool calling and handoff mechanics work the same regardless of the model provider.

How do I migrate LangChain memory to the Agents SDK?

The Agents SDK does not have a built-in memory abstraction. Instead, you pass conversation history explicitly as a list of messages in the input parameter. Extract your existing conversation history from LangChain memory stores and format it as standard message dicts.
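As a hedged sketch of that migration, the transform is small. The `history` list and its `(role, content)` shape are assumptions about what you export from your LangChain memory store; the SDK side just wants standard message dicts.

```python
# `history` stands in for turns exported from a LangChain memory store,
# represented here as (role, content) pairs.

def to_input_list(history):
    """Format prior turns as the message dicts the Agents SDK takes as input."""
    return [{"role": role, "content": content} for role, content in history]

history = [
    ("user", "Tell me about product P-1234"),
    ("assistant", "Product P-1234 is the Widget Pro, $49.99, in stock."),
]

messages = to_input_list(history) + [
    {"role": "user", "content": "Is it still in stock?"}
]

# With the SDK installed, pass the full list instead of a bare string:
# result = Runner.run_sync(agent, messages)
print(len(messages))
```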

What about LangChain's document loaders and vector store integrations?

Those are data pipeline tools, not agent framework features. You can keep using LangChain's document loaders and vector stores alongside the Agents SDK. Wrap the retrieval logic in a @function_tool and the agent calls it like any other tool.
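A minimal sketch of that wrapping, with one big assumption made explicit: `FakeStore` stands in for your real LangChain vector store so the example runs on its own. In real code you would keep your existing store, and decorate `search_docs` with the SDK's `@function_tool` before passing it in `tools=[search_docs]` on an Agent.

```python
# Stand-in for an existing LangChain vector store (assumption for the sketch).
class FakeStore:
    def similarity_search(self, query: str, k: int = 3):
        return [f"passage {i} about {query}" for i in range(k)]

vector_store = FakeStore()

def search_docs(query: str) -> str:
    """Retrieve relevant passages and join them into one context string.
    Decorate with @function_tool when registering on an Agent."""
    docs = vector_store.similarity_search(query, k=3)
    return "\n\n".join(docs)

print(search_docs("return policy"))
```

The agent then calls `search_docs` like any other tool; the retrieval stack underneath it never has to change.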


#LangChain #OpenAIAgentsSDK #Migration #Python #FrameworkMigration #AgenticAI #LearnAI #AIEngineering


Written by

CallSphere Team

