Learn Agentic AI

Hosted Tools in OpenAI Agents SDK: Web Search, Code Interpreter, and File Search

Learn how to use OpenAI's hosted tools — WebSearchTool, CodeInterpreterTool, FileSearchTool, and ImageGenerationTool — to give your agents powerful built-in capabilities without writing custom logic.

What Are Hosted Tools?

The OpenAI Agents SDK ships with a set of hosted tools — capabilities that run on OpenAI's infrastructure rather than in your local Python process. This means your agent can search the web, execute code, parse files, and generate images without you writing any implementation logic. You simply attach the tool and the SDK handles the rest. Note that hosted tools are available when your agent uses an OpenAI model served through the Responses API.

There are four hosted tools available:

  • WebSearchTool — real-time internet search
  • CodeInterpreterTool — sandboxed Python code execution
  • FileSearchTool — semantic search across uploaded documents
  • ImageGenerationTool — generate images from text descriptions

Let's walk through each one with working code examples.

WebSearchTool: Real-Time Internet Access

The WebSearchTool gives your agent the ability to search the internet and retrieve current information. This is essential for agents that need up-to-date data — stock prices, weather, recent news, or documentation that changes frequently.

from agents import Agent, Runner, WebSearchTool

agent = Agent(
    name="Research Assistant",
    instructions="You are a research assistant. Use web search to find current, accurate information. Always cite your sources.",
    tools=[WebSearchTool()],
)

result = Runner.run_sync(agent, "What were the top AI announcements this week?")
print(result.final_output)

You can customize the search behavior with parameters:

web_tool = WebSearchTool(
    search_context_size="high",  # "low", "medium", or "high"
    user_location={
        "type": "approximate",
        "city": "San Francisco",
        "region": "California",
        "country": "US",
    },
)

agent = Agent(
    name="Local Guide",
    instructions="You help users find local events and restaurants.",
    tools=[web_tool],
)

The search_context_size parameter controls how much context the model receives from search results. Use "high" when the agent needs detailed information and "low" when you want faster, more concise responses.

CodeInterpreterTool: Sandboxed Code Execution

The CodeInterpreterTool lets your agent write and execute Python code in a secure sandbox on OpenAI's servers. This is powerful for data analysis, math, chart generation, and any task that benefits from computation.


from agents import Agent, Runner, CodeInterpreterTool

agent = Agent(
    name="Data Analyst",
    instructions="You are a data analyst. Use the code interpreter to perform calculations, analyze data, and generate charts when helpful.",
    tools=[
        CodeInterpreterTool(
            # tool_config selects the sandbox container; "auto" lets OpenAI manage it
            tool_config={"type": "code_interpreter", "container": {"type": "auto"}}
        )
    ],
)

result = Runner.run_sync(
    agent,
    "Calculate the compound interest on $10,000 at 7% annually over 20 years. Show the growth year by year.",
)
print(result.final_output)

The sandbox comes with common Python libraries pre-installed (numpy, pandas, matplotlib, etc.), so your agent can do serious data work without any setup on your end.
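To make the compound-interest prompt concrete, here is the kind of plain-Python loop the interpreter might write and run in the sandbox — a sketch of the agent's likely approach, not its actual output:

```python
# Compound interest: $10,000 at 7% annually, compounded once per year.
principal = 10_000.00
rate = 0.07

balance = principal
for year in range(1, 21):
    balance *= 1 + rate
    print(f"Year {year:2d}: ${balance:,.2f}")

# After 20 years the balance is roughly $38,696.84.
```

The agent writes code like this on the fly, executes it remotely, and folds the printed results back into its answer.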

FileSearchTool: Semantic Document Search

The FileSearchTool enables your agent to search through uploaded documents using semantic similarity. It requires a vector store — a collection of files that OpenAI indexes for retrieval.

from agents import Agent, Runner, FileSearchTool

agent = Agent(
    name="Document Assistant",
    instructions="You answer questions based on the uploaded documents. Always reference the specific document and section where you found the information.",
    tools=[
        FileSearchTool(
            vector_store_ids=["vs_abc123"],  # your vector store ID
            max_num_results=5,
        )
    ],
)

result = Runner.run_sync(agent, "What does the Q4 report say about revenue growth?")
print(result.final_output)

You create vector stores through the OpenAI API:

from openai import OpenAI

client = OpenAI()

# Create a vector store
vector_store = client.vector_stores.create(name="Company Documents")

# Upload a file, then attach it to the vector store
file = client.files.create(
    file=open("q4_report.pdf", "rb"),  # any supported document format
    purpose="assistants",
)
client.vector_stores.files.create(
    vector_store_id=vector_store.id,
    file_id=file.id,
)

The max_num_results parameter controls how many document chunks the agent receives. More results give better coverage but increase token usage.
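A rough back-of-envelope for that trade-off — assuming OpenAI's default chunking strategy, which splits files into chunks of at most roughly 800 tokens (treat these numbers as estimates, not guarantees):

```python
# Upper bound on retrieved context: each result is one document chunk,
# and default chunking caps chunks at ~800 tokens.
max_num_results = 5
max_tokens_per_chunk = 800

max_context_tokens = max_num_results * max_tokens_per_chunk
print(max_context_tokens)  # 4000 tokens of retrieved context, worst case
```

Bumping max_num_results from 5 to 20 can quadruple the retrieval portion of your prompt, so tune it against your latency and cost budget rather than defaulting to a large value.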

ImageGenerationTool: Text-to-Image

The ImageGenerationTool lets your agent generate images from text descriptions using OpenAI's image generation models.

from agents import Agent, Runner, ImageGenerationTool

agent = Agent(
    name="Creative Assistant",
    instructions="You help users create visual content. Generate images when the user describes what they want to see.",
    tools=[ImageGenerationTool(tool_config={"type": "image_generation"})],
)

result = Runner.run_sync(
    agent,
    "Create an illustration of a robot reading a book in a cozy library.",
)
print(result.final_output)

Combining Multiple Hosted Tools

The real power comes from combining tools. An agent can search the web, analyze data with code, and reference documents — all in a single conversation:

agent = Agent(
    name="Full-Stack Researcher",
    instructions="You are an advanced research assistant with access to web search, code execution, and document analysis. Use the right tool for each subtask.",
    tools=[
        WebSearchTool(),
        CodeInterpreterTool(
            tool_config={"type": "code_interpreter", "container": {"type": "auto"}}
        ),
        FileSearchTool(vector_store_ids=["vs_abc123"]),
    ],
)

The agent's model decides which tool to call based on the user's request and the instructions you provide. You do not need to write routing logic — the LLM handles tool selection automatically.

When to Use Hosted vs. Custom Tools

Use hosted tools when the built-in capability matches your needs. They are maintained by OpenAI, require zero implementation, and run on scalable infrastructure. Use custom function tools (covered in the next post) when you need to call your own APIs, query your database, or implement domain-specific logic that hosted tools cannot cover.

Written by

CallSphere Team
