---
title: "Getting Started with OpenAI Agents SDK: Installation and Your First Agent"
description: "Learn how to install the OpenAI Agents SDK, configure your API key, create your first intelligent agent, and run it with Runner.run_sync(). A complete hands-on tutorial."
canonical: https://callsphere.ai/blog/getting-started-openai-agents-sdk-installation-first-agent
category: "Learn Agentic AI"
tags: ["OpenAI", "Agents SDK", "Python", "Getting Started", "Tutorial"]
author: "CallSphere Team"
published: 2026-03-14T00:00:00.000Z
updated: 2026-05-07T16:10:39.805Z
---

# Getting Started with OpenAI Agents SDK: Installation and Your First Agent

> Learn how to install the OpenAI Agents SDK, configure your API key, create your first intelligent agent, and run it with Runner.run_sync(). A complete hands-on tutorial.

## Why the OpenAI Agents SDK Matters

Building AI agents that can reason, use tools, and collaborate with other agents has traditionally required stitching together multiple libraries, prompt management systems, and orchestration layers. The OpenAI Agents SDK changes this by providing a lightweight, production-ready framework that handles the agent loop, tool execution, handoffs, and structured outputs — all in a single cohesive package.

Released as an open-source Python library, the Agents SDK is the successor to OpenAI's experimental Swarm project. It is designed for production use with features like type safety, streaming support, built-in tracing, and a model-agnostic architecture that works with other LLM providers — not just OpenAI models.

In this tutorial, you will go from zero to a working agent in under 10 minutes.

## Prerequisites

Before you begin, make sure you have:

- **Python 3.9 or later** installed on your system
- An **OpenAI API key** (get one at [platform.openai.com](https://platform.openai.com))
- A basic understanding of Python async/await patterns (helpful but not required)

The diagram below previews the full run loop you will build toward over this series: input enters an agent, which may hand off to a specialist sub-agent, pass guardrails, call tools, and emit traces before producing a final output.

```mermaid
flowchart LR
    INPUT(["User input"])
    AGENT["Agent
name plus instructions"]
    HAND{"Handoff to
another agent?"}
    SUB["Sub-agent
specialist"]
    GUARD{"Guardrail
passed?"}
    TOOL["Tool call"]
    SDK[("Tracing
OpenAI dashboard")]
    OUT(["Final output"])
    INPUT --> AGENT --> HAND
    HAND -->|Yes| SUB --> GUARD
    HAND -->|No| GUARD
    GUARD -->|Yes| TOOL --> AGENT
    GUARD -->|Block| OUT
    AGENT --> OUT
    AGENT --> SDK
    style AGENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style SDK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
```

## Step 1: Install the SDK

The SDK is distributed as a standard Python package. Install it with pip:

```bash
pip install openai-agents
```

This installs the core `agents` package along with its dependencies. The package is lightweight — it pulls in `openai`, `pydantic`, and a few other essentials.

If you plan to use voice features or LiteLLM for multi-provider support, install the optional extras:

```bash
# Voice support
pip install 'openai-agents[voice]'

# LiteLLM integration for non-OpenAI models
pip install 'openai-agents[litellm]'
```

Verify the installation:

```bash
python -c "import agents; print('Agents SDK installed successfully')"
```

## Step 2: Configure Your API Key

The SDK needs an OpenAI API key to communicate with language models. The simplest approach is to set an environment variable:

```bash
export OPENAI_API_KEY="sk-proj-your-key-here"
```

For a more permanent setup, add this to your shell profile (`~/.bashrc` or `~/.zshrc`). Alternatively, you can set the key programmatically in your code:

```python
from agents import set_default_openai_key

set_default_openai_key("sk-proj-your-key-here")
```

**Security note:** Never hardcode API keys in source files that get committed to version control. Use environment variables, `.env` files (with `python-dotenv`), or a secrets manager for production deployments.
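If you go the `.env` route, `python-dotenv` is the usual choice. To illustrate what such a loader does, here is a minimal stdlib-only sketch — an illustration, not a replacement for the real library, which handles quoting and edge cases properly:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=value lines, '#' comments, existing vars win.
    Use python-dotenv in real projects; this only shows the idea."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                # setdefault: a key already set in the environment wins
                os.environ.setdefault(key.strip(), value.strip().strip('"'))
    except FileNotFoundError:
        pass  # no .env file is fine; rely on real environment variables

load_env_file()
if not os.environ.get("OPENAI_API_KEY"):
    print("Warning: OPENAI_API_KEY is not set")
```

Remember to add `.env` to your `.gitignore` so the key never reaches version control.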

## Step 3: Create Your First Agent

An agent in the OpenAI Agents SDK is defined using the `Agent` class. At minimum, you provide a name and instructions:

```python
from agents import Agent

agent = Agent(
    name="Helpful Assistant",
    instructions="You are a helpful assistant that answers questions clearly and concisely. Always provide accurate information and admit when you are unsure about something.",
)
```

That is it. You have defined an agent. The `instructions` parameter is the system prompt that guides the agent's behavior. The `name` parameter identifies the agent in logs, traces, and multi-agent handoffs.

## Step 4: Run the Agent

The SDK provides three ways to run an agent: `Runner.run()`, `Runner.run_sync()`, and `Runner.run_streamed()`. For getting started, `Runner.run_sync()` is the simplest — it blocks until the agent produces a final response:

```python
from agents import Agent, Runner

agent = Agent(
    name="Helpful Assistant",
    instructions="You are a helpful assistant. Answer questions clearly and concisely.",
)

result = Runner.run_sync(agent, "What are the three laws of thermodynamics?")

print(result.final_output)
```

Save this as `first_agent.py` and run it:

```bash
python first_agent.py
```

You should see the agent's response explaining the three laws of thermodynamics. The `result` object is a `RunResult` that contains:

- `result.final_output` — the agent's text response (or structured output)
- `result.input` — the original input you provided
- `result.new_items` — all items generated during the run (messages, tool calls, etc.)
- `result.last_agent` — the agent that produced the final output (important for multi-agent workflows)

## Step 5: Use the Async Runner

For production applications, you will typically use the async `Runner.run()` method. This is essential when building web servers, processing multiple requests, or integrating with async frameworks like FastAPI:

```python
import asyncio
from agents import Agent, Runner

agent = Agent(
    name="Helpful Assistant",
    instructions="You are a helpful assistant. Answer questions clearly and concisely.",
)

async def main():
    result = await Runner.run(agent, "Explain quantum entanglement in simple terms.")
    print(result.final_output)

asyncio.run(main())
```

The async version is functionally identical to `run_sync()` but does not block the event loop, making it suitable for concurrent workloads.
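The payoff of the async runner is concurrency. The sketch below uses a stand-in coroutine in place of `Runner.run()` so it runs without an API key; in your application, swap in the real call to fan out multiple agent runs with `asyncio.gather`:

```python
import asyncio

# Stand-in for `await Runner.run(agent, question)` so this runs offline;
# replace it with the real SDK call in your application.
async def ask(question: str) -> str:
    await asyncio.sleep(0.1)  # simulates model/network latency
    return f"answer to: {question}"

async def main() -> list[str]:
    questions = [
        "Why is the sky blue?",
        "How do vaccines work?",
        "What causes tides?",
    ]
    # All three calls are in flight at once, so total wall time is
    # roughly one call's latency, not three.
    return await asyncio.gather(*(ask(q) for q in questions))

answers = asyncio.run(main())
for answer in answers:
    print(answer)
```

`asyncio.gather` preserves input order, so `answers[0]` corresponds to the first question even if another call finishes earlier.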

## Complete Working Example

Here is a more complete example that demonstrates several features together:

```python
import asyncio
from agents import Agent, Runner, ModelSettings

# Create an agent with custom model settings
agent = Agent(
    name="Science Tutor",
    instructions="""You are a science tutor for high school students.

Rules:
- Explain concepts using simple analogies
- Break down complex ideas into numbered steps
- End each explanation with a thought-provoking question
- Keep responses under 200 words""",
    model="gpt-4o",
    model_settings=ModelSettings(
        temperature=0.7,
        top_p=0.9,
    ),
)

async def main():
    questions = [
        "Why is the sky blue?",
        "How do vaccines work?",
        "What causes tides?",
    ]

    for question in questions:
        print(f"\nQ: {question}")
        print("-" * 50)
        result = await Runner.run(agent, question)
        print(result.final_output)

asyncio.run(main())
```

This example creates a science tutor agent with a specific persona, configures model parameters, and processes multiple questions sequentially.

## Understanding What Happens Under the Hood

When you call `Runner.run()`, the SDK executes an **agent loop**:

1. The agent's instructions and your input are sent to the language model
2. The model generates a response
3. If the response is a final text output, the loop ends and returns the result
4. If the response contains tool calls, the SDK executes the tools and feeds results back to the model
5. If the response contains a handoff to another agent, the SDK switches to that agent
6. Steps 2-5 repeat until a final output is produced or `max_turns` is reached

For this basic example without tools, the loop completes in a single turn — the model simply generates a text response.
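The numbered steps above can be sketched as a plain Python loop. This is an illustrative toy with a fake model — not the SDK's actual implementation — but it shows the control flow the runner manages for you:

```python
# Toy version of the agent loop: call the model, run any requested tool,
# feed the result back, and stop on a final text answer or max_turns.
def agent_loop(model, tools, user_input, max_turns=10):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        response = model(messages)             # steps 1-2: ask the model
        if "tool_call" not in response:        # step 3: final text output
            return response["content"]
        name, args = response["tool_call"]     # step 4: execute the tool
        result = tools[name](**args)
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("max_turns reached without a final output")

# Fake model: requests a tool once, then answers using the tool result.
def fake_model(messages):
    if messages[-1]["role"] == "tool":
        return {"content": f"It is {messages[-1]['content']} degrees."}
    return {"tool_call": ("get_temperature", {"city": "Oslo"})}

tools = {"get_temperature": lambda city: 7}
print(agent_loop(fake_model, tools, "Weather in Oslo?"))
# prints: It is 7 degrees.
```

In the real SDK this loop also handles handoffs, guardrails, streaming, and tracing, but the shape — model call, tool execution, repeat — is the same.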

## Common Pitfalls and Troubleshooting

**API Key Not Found**: If you get an authentication error, verify your environment variable is set correctly. Run `echo $OPENAI_API_KEY` to check.

**Model Not Available**: The default model is `gpt-4o`. If your API key does not have access to this model, specify a different one with the `model` parameter.

**Rate Limiting**: If you are processing many requests, you may hit rate limits. The SDK does not automatically retry on rate limits — you will need to handle this in your application code or use the retry configuration covered in a later post.
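Until then, a generic exponential-backoff wrapper is enough for simple scripts. The exception class below is a placeholder — catch whatever rate-limit exception your client actually raises:

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder; substitute the real rate-limit exception from your client."""

def with_retries(fn, max_attempts=4, base_delay=1.0):
    """Call fn(), retrying on rate limits with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # delays of ~1s, 2s, 4s, ... plus random jitter to avoid
            # many clients retrying in lockstep
            time.sleep(base_delay * (2 ** attempt + random.random()))

# Usage sketch:
# result = with_retries(lambda: Runner.run_sync(agent, "Hello"))
```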

**Import Errors**: Make sure you are importing from `agents`, not `openai_agents` or `openai.agents`. The package name is `openai-agents` but the Python module is `agents`.

## Next Steps

You now have a working agent. In the next posts in this series, we will explore:

- Configuring agents with advanced parameters and dynamic instructions
- Adding tools so agents can interact with external systems
- Structured outputs for type-safe responses
- Multi-agent workflows with handoffs

The OpenAI Agents SDK is designed to scale from simple single-agent scripts to complex multi-agent production systems. Start simple, then layer in capabilities as your use case demands.

---

**Source:** [OpenAI Agents SDK Documentation](https://openai.github.io/openai-agents-python/)

