Getting Started with OpenAI Agents SDK: Installation and Your First Agent
Learn how to install the OpenAI Agents SDK, configure your API key, create your first intelligent agent, and run it with Runner.run_sync(). A complete hands-on tutorial.
Why the OpenAI Agents SDK Matters
Building AI agents that can reason, use tools, and collaborate with other agents has traditionally required stitching together multiple libraries, prompt management systems, and orchestration layers. The OpenAI Agents SDK changes this by providing a lightweight, production-ready framework that handles the agent loop, tool execution, handoffs, and structured outputs — all in a single cohesive package.
Released as an open-source Python library, the Agents SDK is the successor to OpenAI's earlier Swarm experimental project. It is designed for production use with features like type safety, streaming support, built-in tracing, and a model-agnostic architecture that works with any LLM provider — not just OpenAI models.
In this tutorial, you will go from zero to a working agent in under 10 minutes.
Prerequisites
Before you begin, make sure you have:
- Python 3.9 or later installed on your system
- An OpenAI API key (get one at platform.openai.com)
- A basic understanding of Python async/await patterns (helpful but not required)
Step 1: Install the SDK
The SDK is distributed as a standard Python package. Install it with pip:
pip install openai-agents
This installs the core agents package along with its dependencies. The package is lightweight — it pulls in openai, pydantic, and a few other essentials.
If you plan to use voice features or LiteLLM for multi-provider support, install the optional extras:
# Voice support
pip install 'openai-agents[voice]'
# LiteLLM integration for non-OpenAI models
pip install 'openai-agents[litellm]'
Verify the installation:
python -c "import agents; print('Agents SDK installed successfully')"
Step 2: Configure Your API Key
The SDK needs an OpenAI API key to communicate with language models. The simplest approach is to set an environment variable:
export OPENAI_API_KEY="sk-proj-your-key-here"
For a more permanent setup, add this to your shell profile (~/.bashrc or ~/.zshrc). Alternatively, you can set the key programmatically in your code:
from agents import set_default_openai_key
set_default_openai_key("sk-proj-your-key-here")
Security note: Never hardcode API keys in source files that get committed to version control. Use environment variables, .env files (with python-dotenv), or a secrets manager for production deployments.
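As a minimal sketch of the environment-variable approach, the hypothetical helper below (not part of the SDK) reads the key and fails fast with a clear message instead of a confusing authentication error deep inside a run:

```python
import os

def load_api_key() -> str:
    # Hypothetical helper: read the key from the environment and
    # raise early if it is missing.
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it or load a .env file first"
        )
    return key
```

If you use python-dotenv, calling load_dotenv() at startup populates os.environ from a .env file, so the same helper works unchanged.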
Step 3: Create Your First Agent
An agent in the OpenAI Agents SDK is defined using the Agent class. At minimum, you provide a name and instructions:
from agents import Agent
agent = Agent(
    name="Helpful Assistant",
    instructions="You are a helpful assistant that answers questions clearly and concisely. Always provide accurate information and admit when you are unsure about something.",
)
That is it. You have defined an agent. The instructions parameter is the system prompt that guides the agent's behavior. The name parameter identifies the agent in logs, traces, and multi-agent handoffs.
Step 4: Run the Agent
The SDK provides three ways to run an agent. For getting started, Runner.run_sync() is the simplest — it blocks until the agent produces a final response:
from agents import Agent, Runner
agent = Agent(
    name="Helpful Assistant",
    instructions="You are a helpful assistant. Answer questions clearly and concisely.",
)
result = Runner.run_sync(agent, "What are the three laws of thermodynamics?")
print(result.final_output)
Save this as first_agent.py and run it:
python first_agent.py
You should see the agent's response explaining the three laws of thermodynamics. The result object is a RunResult that contains:
- result.final_output — the agent's text response (or structured output)
- result.input — the original input you provided
- result.new_items — all items generated during the run (messages, tool calls, etc.)
- result.last_agent — the agent that produced the final output (important for multi-agent workflows)
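To make those fields concrete, here is an illustrative stand-in that mirrors the attribute names above (the real RunResult comes back from the runner and is not constructed by hand; DemoResult and summarize are hypothetical names for this sketch):

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class DemoResult:
    # Stand-in mirroring the RunResult attribute names listed above.
    final_output: str
    input: str
    new_items: List[Any] = field(default_factory=list)
    last_agent: Any = None

def summarize(result) -> str:
    # Works on anything exposing those attribute names.
    return f"{result.input!r} -> {result.final_output!r} ({len(result.new_items)} items)"
```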
Step 5: Use the Async Runner
For production applications, you will typically use the async Runner.run() method. This is essential when building web servers, processing multiple requests, or integrating with async frameworks like FastAPI:
import asyncio
from agents import Agent, Runner
agent = Agent(
    name="Helpful Assistant",
    instructions="You are a helpful assistant. Answer questions clearly and concisely.",
)

async def main():
    result = await Runner.run(agent, "Explain quantum entanglement in simple terms.")
    print(result.final_output)

asyncio.run(main())
The async version is functionally identical to run_sync() but does not block the event loop, making it suitable for concurrent workloads.
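The concurrency benefit can be sketched with plain asyncio. The stub coroutine below stands in for the SDK call; with the real SDK you would await Runner.run(agent, question) inside ask instead of sleeping:

```python
import asyncio

async def ask(question: str) -> str:
    # Stand-in for `await Runner.run(agent, question)`; the sleep
    # simulates network latency so the concurrency is visible.
    await asyncio.sleep(0.1)
    return f"answer to: {question}"

async def main() -> list:
    questions = ["Why is the sky blue?", "What causes tides?"]
    # All requests are in flight at once instead of back to back.
    return await asyncio.gather(*(ask(q) for q in questions))

answers = asyncio.run(main())
```

With asyncio.gather, ten questions take roughly as long as the slowest single request rather than the sum of all of them.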
Complete Working Example
Here is a more complete example that demonstrates several features together:
import asyncio
from agents import Agent, Runner, ModelSettings
# Create an agent with custom model settings
agent = Agent(
    name="Science Tutor",
    instructions="""You are a science tutor for high school students.

    Rules:
    - Explain concepts using simple analogies
    - Break down complex ideas into numbered steps
    - End each explanation with a thought-provoking question
    - Keep responses under 200 words""",
    model="gpt-4o",
    model_settings=ModelSettings(
        temperature=0.7,
        top_p=0.9,
    ),
)

async def main():
    questions = [
        "Why is the sky blue?",
        "How do vaccines work?",
        "What causes tides?",
    ]
    for question in questions:
        print(f"\nQ: {question}")
        print("-" * 50)
        result = await Runner.run(agent, question)
        print(result.final_output)

asyncio.run(main())
This example creates a science tutor agent with a specific persona, configures model parameters, and processes multiple questions sequentially.
Understanding What Happens Under the Hood
When you call Runner.run(), the SDK executes an agent loop:
1. The agent's instructions and your input are sent to the language model
2. The model generates a response
3. If the response is a final text output, the loop ends and returns the result
4. If the response contains tool calls, the SDK executes the tools and feeds results back to the model
5. If the response contains a handoff to another agent, the SDK switches to that agent
6. Steps 2-5 repeat until a final output is produced or max_turns is reached
For this basic example without tools, the loop completes in a single turn — the model simply generates a text response.
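The loop above can be sketched in a few lines. This is a deliberately simplified model of the idea, not the SDK's implementation; the stub model callable and dict-based message format are illustrative only:

```python
def run_agent_loop(model, user_input, max_turns=10):
    # Simplified agent loop: call the model, execute any tool call,
    # feed the result back, and stop on a final text output.
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        response = model(messages)
        if "tool_call" in response:
            # Execute the tool and append its result for the next turn.
            tool_result = response["tool_call"]()
            messages.append({"role": "tool", "content": tool_result})
        else:
            return response["text"]  # final output ends the loop
    raise RuntimeError("max_turns reached without a final output")
```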
Common Pitfalls and Troubleshooting
API Key Not Found: If you get an authentication error, verify your environment variable is set correctly. Run echo $OPENAI_API_KEY to check.
Model Not Available: The default model is gpt-4o. If your API key does not have access to this model, specify a different one with the model parameter.
Rate Limiting: If you are processing many requests, you may hit rate limits. The SDK does not automatically retry on rate limits — you will need to handle this in your application code or use the retry configuration covered in a later post.
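One generic way to handle rate limits in application code is a small retry wrapper with exponential backoff. This is a sketch, not SDK functionality; in practice you would narrow retry_on to your provider's rate-limit exception type rather than catching Exception:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0, retry_on=(Exception,)):
    # Retry fn with exponential backoff: base_delay, 2x, 4x, ...
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))
```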
Import Errors: Make sure you are importing from agents, not openai_agents or openai.agents. The package name is openai-agents but the Python module is agents.
Next Steps
You now have a working agent. In the next posts in this series, we will explore:
- Configuring agents with advanced parameters and dynamic instructions
- Adding tools so agents can interact with external systems
- Structured outputs for type-safe responses
- Multi-agent workflows with handoffs
The OpenAI Agents SDK is designed to scale from simple single-agent scripts to complex multi-agent production systems. Start simple, then layer in capabilities as your use case demands.
Source: OpenAI Agents SDK Documentation
Written by
CallSphere Team
Expert insights on AI voice agents and customer communication automation.