Function Tools: Turn Any Python Function into an Agent Tool
Learn how to use the @function_tool decorator to give OpenAI agents the ability to call Python functions. Covers type hints, docstrings, timeouts, and Pydantic validation.
Tools Are What Make Agents Useful
A language model without tools can only generate text. Tools give agents the ability to interact with the real world — query databases, call APIs, process files, execute calculations, and take actions. The OpenAI Agents SDK makes it trivial to turn any Python function into a tool that agents can call.
The primary mechanism is the @function_tool decorator, which automatically generates the JSON schema that the LLM needs to understand how to call your function.
The @function_tool Decorator
At its simplest, you decorate a function and add it to an agent's tool list:
from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Get the current weather for a given city.

    Args:
        city: The name of the city to check weather for.
    """
    # In production, this would call a real weather API
    return f"The weather in {city} is 72°F and sunny."

agent = Agent(
    name="Weather Bot",
    instructions="Help users check the weather. Use the get_weather tool.",
    tools=[get_weather],
)

result = Runner.run_sync(agent, "What is the weather in Tokyo?")
print(result.final_output)
When the agent receives "What is the weather in Tokyo?", the LLM recognizes it should call the get_weather tool with city="Tokyo", receives the result, and formulates a natural language response.
How Schema Generation Works
The @function_tool decorator inspects your function to automatically generate a JSON schema:
- Function name becomes the tool name
- Type hints become the parameter types in the schema
- Docstring becomes the tool description
- Parameter descriptions are extracted from the docstring
- Default values mark parameters as optional
@function_tool
def search_products(
    query: str,
    category: str = "all",
    max_results: int = 10,
    in_stock_only: bool = True,
) -> str:
    """Search the product catalog.

    Args:
        query: Search terms to find products.
        category: Product category to filter by. Defaults to "all".
        max_results: Maximum number of results to return.
        in_stock_only: Whether to only show in-stock items.
    """
    return f"Found products matching '{query}' in {category}"
This generates a schema where query is required (no default value) and category, max_results, and in_stock_only are optional with their defaults.
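Under the hood, required-versus-optional comes down to signature inspection: a parameter with no default is required. A minimal sketch of that inspection using only the standard library (not the SDK's actual implementation):

```python
import inspect

def search_products(query: str, category: str = "all", max_results: int = 10) -> str:
    """Search the product catalog."""
    return f"Found products matching '{query}' in {category}"

signature = inspect.signature(search_products)

# Parameters without a default value become required fields in the schema.
required = [
    name for name, param in signature.parameters.items()
    if param.default is inspect.Parameter.empty
]
optional = [
    name for name, param in signature.parameters.items()
    if param.default is not inspect.Parameter.empty
]
print(required)  # ['query']
print(optional)  # ['category', 'max_results']
```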
Supported Type Hints
The SDK supports all standard Python types for tool parameters:
from typing import Optional

@function_tool
def example_tool(
    name: str,                            # String parameter
    count: int,                           # Integer parameter
    ratio: float,                         # Float parameter
    enabled: bool,                        # Boolean parameter
    tags: list[str],                      # List of strings
    metadata: dict[str, str],             # Dictionary
    optional_note: Optional[str] = None,  # Optional parameter
) -> str:
    """An example showing all supported types."""
    return "OK"
For complex parameter structures, use Pydantic models:
from pydantic import BaseModel, Field
from agents import function_tool

class SearchFilters(BaseModel):
    min_price: float = Field(description="Minimum price in USD")
    max_price: float = Field(description="Maximum price in USD")
    brands: list[str] = Field(description="List of brand names to include")

@function_tool
def advanced_search(query: str, filters: SearchFilters) -> str:
    """Search products with advanced filters.

    Args:
        query: Search terms.
        filters: Advanced filtering options.
    """
    return f"Searching for '{query}' with price range ${filters.min_price}-${filters.max_price}"
Docstring Parsing Styles
The SDK extracts parameter descriptions from docstrings. It supports three common formats:
Google Style (Recommended)
@function_tool
def create_task(title: str, priority: int) -> str:
    """Create a new task in the project.

    Args:
        title: The title of the task.
        priority: Priority level from 1 (low) to 5 (critical).
    """
    return f"Created task: {title} (P{priority})"
Sphinx Style
@function_tool
def create_task(title: str, priority: int) -> str:
    """Create a new task in the project.

    :param title: The title of the task.
    :param priority: Priority level from 1 (low) to 5 (critical).
    """
    return f"Created task: {title} (P{priority})"
NumPy Style
@function_tool
def create_task(title: str, priority: int) -> str:
    """Create a new task in the project.

    Parameters
    ----------
    title : str
        The title of the task.
    priority : int
        Priority level from 1 (low) to 5 (critical).
    """
    return f"Created task: {title} (P{priority})"
All three produce equivalent tool schemas. Use whichever style matches your project's conventions.
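To get a feel for what such parsing involves, here is a toy parser for the Google-style Args: section. This is only an illustration of the idea; the SDK uses a full docstring parser, not this code:

```python
import re

def parse_google_args(docstring: str) -> dict[str, str]:
    """Extract parameter descriptions from a Google-style Args: section."""
    descriptions = {}
    in_args = False
    for line in docstring.splitlines():
        stripped = line.strip()
        if stripped == "Args:":
            in_args = True
            continue
        if in_args:
            match = re.match(r"(\w+):\s*(.+)", stripped)
            if match:
                descriptions[match.group(1)] = match.group(2)
            elif not stripped:
                in_args = False  # a blank line ends the section
    return descriptions

doc = """Create a new task in the project.

Args:
    title: The title of the task.
    priority: Priority level from 1 (low) to 5 (critical).
"""
print(parse_google_args(doc))
# {'title': 'The title of the task.', 'priority': 'Priority level from 1 (low) to 5 (critical).'}
```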
Pydantic Field Constraints
For more precise parameter validation, add pydantic.Field constraints to the model that defines your tool's input:
from pydantic import BaseModel, Field
from agents import function_tool

class BookingRequest(BaseModel):
    guest_name: str = Field(
        description="Full name of the guest",
        min_length=2,
        max_length=100,
    )
    room_type: str = Field(
        description="Type of room to book",
        pattern="^(single|double|suite)$",
    )
    nights: int = Field(
        description="Number of nights to stay",
        ge=1,
        le=30,
    )
    special_requests: str = Field(
        default="",
        description="Any special requests or accommodations",
        max_length=500,
    )

@function_tool
def book_room(request: BookingRequest) -> str:
    """Book a hotel room for a guest.

    Args:
        request: The booking details.
    """
    return f"Booked {request.room_type} room for {request.guest_name} for {request.nights} nights."
The field constraints are included in the JSON schema sent to the LLM, helping the model generate valid arguments.
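You can preview exactly how those constraints surface as standard JSON Schema keywords by calling Pydantic's model_json_schema() on the model (a trimmed-down BookingRequest is redefined here so the snippet is self-contained):

```python
from pydantic import BaseModel, Field

class BookingRequest(BaseModel):
    guest_name: str = Field(description="Full name of the guest", min_length=2, max_length=100)
    room_type: str = Field(description="Type of room to book", pattern="^(single|double|suite)$")
    nights: int = Field(description="Number of nights to stay", ge=1, le=30)

schema = BookingRequest.model_json_schema()

# ge/le become minimum/maximum, min_length/max_length become minLength/maxLength.
print(schema["properties"]["nights"]["minimum"], schema["properties"]["nights"]["maximum"])  # 1 30
print(schema["properties"]["room_type"]["pattern"])  # ^(single|double|suite)$
```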
Async Tools
Tools can be async functions, which is essential when they perform I/O operations:
import httpx
from agents import function_tool

@function_tool
async def fetch_url(url: str) -> str:
    """Fetch the content of a web page.

    Args:
        url: The URL to fetch.
    """
    async with httpx.AsyncClient() as client:
        response = await client.get(url, timeout=10)
        response.raise_for_status()
        return response.text[:2000]  # Truncate to avoid token limits

@function_tool
async def query_database(sql: str) -> str:
    """Execute a read-only SQL query.

    Args:
        sql: The SQL query to execute.
    """
    # get_db_connection() is a placeholder for your async database driver
    async with get_db_connection() as conn:
        rows = await conn.fetch(sql)
        return str(rows)
Async tools are executed concurrently when the model issues parallel tool calls.
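The concurrency benefit is easy to demonstrate with asyncio.gather, which is essentially what happens when the model emits several tool calls in one turn. The two coroutines below use simulated delays in place of real API calls:

```python
import asyncio
import time

async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.2)  # simulated network latency
    return f"Weather in {city}: sunny"

async def fetch_news(topic: str) -> str:
    await asyncio.sleep(0.2)  # simulated network latency
    return f"Top headline about {topic}"

async def run_parallel_calls() -> float:
    start = time.perf_counter()
    # Both coroutines run concurrently, so the pair takes ~0.2s, not ~0.4s.
    results = await asyncio.gather(fetch_weather("Tokyo"), fetch_news("AI"))
    print(results)
    return time.perf_counter() - start

elapsed = asyncio.run(run_parallel_calls())
print(f"{elapsed:.2f}s")
```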
Tool Timeouts
Long-running tools should enforce timeouts so a hung external service cannot stall the agent loop. The @function_tool decorator does not take a timeout option itself, so apply the limit inside the tool — for example with asyncio.wait_for:

import asyncio
import httpx
from agents import function_tool

@function_tool
async def slow_api_call(query: str) -> str:
    """Call a potentially slow external API.

    Args:
        query: The query to send to the API.
    """
    try:
        async with httpx.AsyncClient() as client:
            response = await asyncio.wait_for(
                client.get("https://slow-api.example.com/search", params={"q": query}),
                timeout=10,
            )
        return response.text
    except asyncio.TimeoutError:
        return "Error: the external API did not respond within 10 seconds."

If the call exceeds the timeout, the tool returns an error message instead of hanging. That message is sent back to the LLM as the tool result, and the agent can decide to retry or handle the failure gracefully.
Custom Tool Names
By default, the tool name is the function name. Override it with the name_override parameter:

@function_tool(name_override="search_knowledge_base")
def kb_search(query: str) -> str:
    """Search the internal knowledge base.

    Args:
        query: Search query.
    """
    return "Results from knowledge base..."
This is useful when the function name is not descriptive enough for the LLM, or when you want to avoid exposing internal naming conventions.
Accessing Agent Context in Tools
Tools can access the run context by accepting a RunContextWrapper as their first parameter:
from agents import function_tool, RunContextWrapper
from dataclasses import dataclass

@dataclass
class UserSession:
    user_id: str
    tenant_id: str
    permissions: list[str]

@function_tool
async def get_user_orders(
    context: RunContextWrapper[UserSession],
    limit: int = 10,
) -> str:
    """Get recent orders for the current user.

    Args:
        limit: Maximum number of orders to return.
    """
    session = context.context
    # Use session.user_id to query the correct user's orders
    return f"Orders for user {session.user_id}: [...]"
The RunContextWrapper parameter is automatically detected and excluded from the tool's JSON schema — the LLM never sees it.
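This exclusion is possible because the decorator can recognize the context parameter from its type annotation and skip it when building the schema. A simplified illustration of the idea, with a stand-in RunContextWrapper class (not the SDK's actual code):

```python
import inspect
import typing
from typing import Generic, TypeVar

T = TypeVar("T")

class RunContextWrapper(Generic[T]):  # stand-in for the SDK class
    pass

class UserSession:
    pass

def schema_parameters(func) -> list[str]:
    """Return parameter names that belong in the tool's JSON schema,
    skipping any parameter annotated with RunContextWrapper[...]."""
    hints = typing.get_type_hints(func)
    names = []
    for name in inspect.signature(func).parameters:
        annotation = hints.get(name)
        # Unwrap RunContextWrapper[UserSession] to the bare RunContextWrapper class
        origin = typing.get_origin(annotation) or annotation
        if origin is RunContextWrapper:
            continue  # hidden from the LLM
        names.append(name)
    return names

async def get_user_orders(context: RunContextWrapper[UserSession], limit: int = 10) -> str:
    return "..."

print(schema_parameters(get_user_orders))  # ['limit']
```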
A Complete Multi-Tool Agent
Here is a practical example combining multiple tools:
import asyncio
from agents import Agent, Runner, function_tool

@function_tool
def add_task(title: str, assignee: str, priority: str = "medium") -> str:
    """Add a new task to the project board.

    Args:
        title: Task title.
        assignee: Team member to assign the task to.
        priority: Priority level (low, medium, high, critical).
    """
    return f"Created task '{title}' assigned to {assignee} with {priority} priority."

@function_tool
def list_team_members() -> str:
    """Get a list of all team members and their roles."""
    return "Alice (Backend), Bob (Frontend), Carol (DevOps), Dave (QA)"

@function_tool
def get_sprint_status() -> str:
    """Get the current sprint's progress and remaining capacity."""
    return "Sprint 23: 15/20 story points completed. 5 points remaining. 3 days left."

project_manager = Agent(
    name="PM Assistant",
    instructions="""You are a project management assistant. Help users manage tasks,
    check sprint status, and coordinate with team members.

    When creating tasks:
    - Always check the team roster first to validate assignees
    - Check sprint capacity before adding new tasks
    - Suggest appropriate priority levels based on context""",
    tools=[add_task, list_team_members, get_sprint_status],
)

async def main():
    result = await Runner.run(
        project_manager,
        "We need to fix the login bug urgently. Who on the team could handle it?",
    )
    print(result.final_output)

asyncio.run(main())
In this example, the agent will likely:
- Call list_team_members() to see who is available
- Call get_sprint_status() to check capacity
- Reason about who should handle a login bug (Backend or QA)
- Possibly call add_task() to create the task
- Provide a recommendation to the user
Best Practices
Write clear docstrings. The LLM uses the tool description to decide when and how to call it. Vague descriptions lead to misuse.
Use precise type hints. str is less helpful than a Pydantic model with field constraints. The more precise the schema, the more accurate the tool calls.
Return strings, not objects. Tool return values are converted to strings and injected into the conversation. Return human-readable text that the LLM can reason about.
Set timeouts on I/O tools. Any tool that calls an external service should have a timeout.
Validate inputs inside tools. Even though the LLM sees the schema, it can still produce invalid arguments. Validate and return clear error messages.
Keep tools stateless when possible. Stateless tools are easier to test, retry, and parallelize.
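As an example of the validation point, a tool can check its own arguments and report failures as readable strings the LLM can act on. The transfer_funds tool and its limits below are purely illustrative:

```python
def transfer_funds(amount: float, account_id: str) -> str:
    """Transfer funds to an account, validating inputs defensively."""
    # Validate even though the schema constrains these fields:
    # the model can still produce out-of-range or malformed values.
    if amount <= 0:
        return "Error: amount must be positive."
    if amount > 10_000:
        return "Error: transfers above $10,000 require manual approval."
    if not account_id.startswith("acct_"):
        return "Error: account_id must look like 'acct_...'."
    return f"Transferred ${amount:.2f} to {account_id}."

print(transfer_funds(-5, "acct_123"))   # Error: amount must be positive.
print(transfer_funds(250, "acct_123"))  # Transferred $250.00 to acct_123.
```

Returning an error string rather than raising keeps the failure inside the conversation, where the model can correct its arguments and try again.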
Source: OpenAI Agents SDK — Tools
Written by
CallSphere Team