Hosted MCP Tools: Server-Side Tool Execution with OpenAI
Use HostedMCPTool to run MCP tools server-side on OpenAI's infrastructure with zero callback overhead. This covers the tool_config structure, approval options, and GitMCP for repository access.
What Are Hosted MCP Tools?
With MCPServerStdio and MCPServerStreamableHTTP, tool execution happens on your infrastructure — your machine runs the subprocess or your server handles the HTTP request. Hosted MCP tools flip this: OpenAI runs the MCP server and executes tools on their side.
This eliminates the callback overhead entirely. When the LLM decides to use a tool, the tool is executed within OpenAI's infrastructure in the same request cycle. No round trip back to your client, no webhook to handle, no server to maintain.
Hosted MCP is ideal when:
- You want to use third-party MCP servers without running them yourself
- Latency matters and you want to eliminate client-server round trips
- You are accessing public data sources like open source repositories
- You want the simplest possible agent setup
HostedMCPTool Configuration
The HostedMCPTool class configures a tool that OpenAI executes server-side:
from agents import Agent, Runner
from agents.tool import HostedMCPTool

# Basic hosted MCP tool
tool = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "deepwiki",
        "server_url": "https://mcp.deepwiki.com/mcp",
        "require_approval": "never",
    }
)

agent = Agent(
    name="Research Agent",
    instructions="""You are a research assistant that can look up
    documentation and technical information using DeepWiki.
    Use the available tools to find accurate, up-to-date information.""",
    tools=[tool],
)
The tool_config Structure
Every HostedMCPTool requires a tool_config dictionary with these fields:
tool_config = {
    # Required: always "mcp" for MCP tools
    "type": "mcp",

    # Required: a unique label identifying this server
    # Used in tool filtering and approval policies
    "server_label": "my-server",

    # Required: the URL of the MCP server
    # Must be a publicly accessible HTTPS endpoint
    "server_url": "https://mcp.example.com/mcp",

    # Required: approval policy for tool execution
    # Options: "never", "always", or a per-tool map
    "require_approval": "never",

    # Optional: specific headers for authentication
    "headers": {
        "Authorization": "Bearer token123",
    },
}
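Because a typo in this dictionary only surfaces at request time, it can help to sanity-check the required fields up front. The helper below is a hypothetical sketch (validate_tool_config is not part of the Agents SDK):

```python
# Hypothetical helper, not part of the Agents SDK.
REQUIRED_FIELDS = {"type", "server_label", "server_url", "require_approval"}

def validate_tool_config(config: dict) -> list[str]:
    """Return a list of problems found in a hosted MCP tool_config."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - config.keys())]
    if config.get("type") not in (None, "mcp"):
        problems.append("type must be 'mcp'")
    if "server_url" in config and not config["server_url"].startswith("https://"):
        problems.append("server_url must be an HTTPS endpoint")
    return problems

print(validate_tool_config({"type": "mcp", "server_label": "x"}))
# Flags the missing server_url and require_approval fields
```

An empty list means the config has all required fields in plausible shape; it is a structural check only and says nothing about whether the server is reachable.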
server_label
The server_label is a human-readable identifier for the MCP server. It appears in traces, logs, and tool filtering contexts. Choose descriptive labels:
# Good labels
"server_label": "github-repos"
"server_label": "company-wiki"
"server_label": "weather-api"
# Bad labels
"server_label": "server1"
"server_label": "tools"
server_url
The server_url must point to a publicly accessible MCP endpoint. OpenAI's servers will connect to this URL to execute tools, so it cannot be a localhost or private network address.
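A pre-flight check can catch addresses OpenAI's servers could never reach. This is a hypothetical guard (is_usable_server_url is not an SDK function), using only the standard library:

```python
# Hypothetical pre-flight check, not part of the SDK.
from urllib.parse import urlparse
import ipaddress

def is_usable_server_url(url: str) -> bool:
    """Reject URLs OpenAI's servers cannot reach: non-HTTPS,
    localhost, and private-network IP addresses."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    if parsed.hostname == "localhost":
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
        return not (addr.is_private or addr.is_loopback)
    except ValueError:
        return True  # a regular DNS name: assume it may be public

print(is_usable_server_url("https://mcp.example.com/mcp"))  # True
print(is_usable_server_url("http://localhost:8000/mcp"))    # False
print(is_usable_server_url("https://192.168.1.10/mcp"))     # False
```

Passing the check does not guarantee reachability (a DNS name can still resolve privately); it only filters out the obviously unusable cases before you ship the config.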
require_approval Options
The require_approval field controls whether tool executions need human approval:
# Never require approval — tools execute automatically
"require_approval": "never"

# Always require approval — every tool call needs human confirmation
"require_approval": "always"

# Per-tool approval map
"require_approval": {
    "never": {
        "tool_names": ["search", "read_file", "list_directory"]
    },
    "always": {
        "tool_names": ["write_file", "delete_file", "execute_command"]
    }
}
The per-tool map is the most practical option for production. Read operations run automatically while write operations require approval.
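One way to keep that map maintainable is to generate it from two lists of tool names. The sketch below uses a hypothetical helper (build_approval_policy is not an SDK function):

```python
# Hypothetical helper, not part of the Agents SDK.
def build_approval_policy(read_tools: list[str], write_tools: list[str]) -> dict:
    """Build a require_approval map: reads run freely, writes need sign-off."""
    return {
        "never": {"tool_names": sorted(read_tools)},
        "always": {"tool_names": sorted(write_tools)},
    }

policy = build_approval_policy(
    read_tools=["search", "read_file", "list_directory"],
    write_tools=["write_file", "delete_file", "execute_command"],
)
print(policy["always"]["tool_names"])
# ['delete_file', 'execute_command', 'write_file']
```

The resulting dict can be dropped straight into tool_config as the value of "require_approval".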
No Callback Overhead
The key advantage of hosted MCP is the execution flow. Compare the two approaches:
Client-side MCP (Stdio/HTTP):
1. LLM generates tool call
2. Response sent to your client
3. Your client calls the MCP server
4. MCP server executes the tool
5. Your client sends the result back to OpenAI
6. LLM continues with the result
Hosted MCP:
1. LLM generates tool call
2. OpenAI calls the MCP server directly
3. LLM continues with the result
Hosted MCP eliminates steps 2, 3, and 5. For agents that make multiple tool calls in sequence, this can reduce total latency significantly.
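The saving compounds across sequential calls. With purely illustrative numbers (the 150 ms per network leg below is an assumption, not a measurement), the removed overhead looks like:

```python
# Back-of-envelope sketch with assumed, not measured, numbers.
ROUND_TRIP_MS = 150  # assumed network cost per client-bound leg
LEGS_REMOVED = 2     # hosted MCP removes two client-bound legs per tool call

def overhead_removed_ms(tool_calls: int) -> int:
    """Client round-trip time eliminated across a run's tool calls."""
    return tool_calls * LEGS_REMOVED * ROUND_TRIP_MS

for n in (1, 5, 10):
    print(f"{n} tool calls: ~{overhead_removed_ms(n)} ms of round trips removed")
```

Real numbers depend on your network and the MCP server itself; the point is only that the saving scales linearly with the number of sequential tool calls.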
Using GitMCP for Repository Access
GitMCP is a hosted MCP service that provides access to any public GitHub repository. It exposes tools for reading files, searching code, and browsing repository structure:
from agents import Agent, Runner
from agents.tool import HostedMCPTool
import asyncio

async def main():
    # Access any public GitHub repo via GitMCP
    gitmcp_tool = HostedMCPTool(
        tool_config={
            "type": "mcp",
            "server_label": "gitmcp",
            "server_url": "https://gitmcp.io/openai/openai-agents-python",
            "require_approval": "never",
        }
    )

    agent = Agent(
        name="Code Research Agent",
        instructions="""You are a code research assistant.
        Use the GitMCP tools to explore repository contents,
        read source files, and understand codebases.
        When asked about a project, start by reading the README
        and then explore relevant source files.""",
        tools=[gitmcp_tool],
    )

    result = await Runner.run(
        agent,
        input="What are the main classes in the OpenAI Agents SDK? Show me the Agent class definition.",
    )
    print(result.final_output)

asyncio.run(main())
The server URL pattern for GitMCP is https://gitmcp.io/{owner}/{repo}. This works with any public repository.
Combining Hosted and Local MCP
You can mix hosted MCP tools with local MCP servers and native function tools:
import asyncio

from agents import Agent, Runner, function_tool
from agents.tool import HostedMCPTool
from agents.mcp import MCPServerStdio

@function_tool
def calculate_cost(hours: float, rate: float) -> str:
    """Calculate project cost from hours and hourly rate."""
    return f"Estimated cost: ${hours * rate:,.2f}"

async def main():
    # Hosted tool — runs on OpenAI's side
    wiki_tool = HostedMCPTool(
        tool_config={
            "type": "mcp",
            "server_label": "deepwiki",
            "server_url": "https://mcp.deepwiki.com/mcp",
            "require_approval": "never",
        }
    )

    # Local tool — runs as subprocess on your machine
    fs_server = MCPServerStdio(
        name="Filesystem",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
        },
    )

    async with fs_server:
        agent = Agent(
            name="Hybrid Agent",
            instructions="""You have three types of tools:
            1. DeepWiki for researching technical documentation
            2. Filesystem for reading and writing local files
            3. A cost calculator for project estimation
            Use the appropriate tool for each task.""",
            tools=[wiki_tool, calculate_cost],
            mcp_servers=[fs_server],
        )

        result = await Runner.run(
            agent,
            input="Research FastAPI best practices from the docs, save a summary to /workspace/fastapi-notes.md, and estimate the cost for 40 hours at $150/hr",
        )
        print(result.final_output)

asyncio.run(main())
Multiple Hosted MCP Servers
An agent can use multiple hosted MCP servers simultaneously. Each gets its own server_label for identification:
deepwiki = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "deepwiki",
        "server_url": "https://mcp.deepwiki.com/mcp",
        "require_approval": "never",
    }
)

gitmcp_agents = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "agents-sdk-repo",
        "server_url": "https://gitmcp.io/openai/openai-agents-python",
        "require_approval": "never",
    }
)

gitmcp_cookbook = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "openai-cookbook",
        "server_url": "https://gitmcp.io/openai/openai-cookbook",
        "require_approval": "never",
    }
)

research_agent = Agent(
    name="Multi-Source Researcher",
    instructions="""You can search documentation via DeepWiki and
    explore two GitHub repositories: the Agents SDK and the OpenAI Cookbook.
    Cross-reference information across sources for comprehensive answers.""",
    tools=[deepwiki, gitmcp_agents, gitmcp_cookbook],
)
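When the server list grows, the repetition above can be generated from a label-to-URL mapping. This sketch builds plain tool_config dicts (each one would then be wrapped in HostedMCPTool as shown above):

```python
# Sketch: generate hosted MCP tool configs from a label -> URL mapping.
SERVERS = {
    "deepwiki": "https://mcp.deepwiki.com/mcp",
    "agents-sdk-repo": "https://gitmcp.io/openai/openai-agents-python",
    "openai-cookbook": "https://gitmcp.io/openai/openai-cookbook",
}

def hosted_tool_configs(servers: dict[str, str]) -> list[dict]:
    """Build one tool_config dict per server, all with automatic execution."""
    return [
        {
            "type": "mcp",
            "server_label": label,
            "server_url": url,
            "require_approval": "never",
        }
        for label, url in servers.items()
    ]

configs = hosted_tool_configs(SERVERS)
print([c["server_label"] for c in configs])
# Each config is then passed as HostedMCPTool(tool_config=config)
```

Keeping the mapping in one place also makes it easy to swap "require_approval" policies per environment without touching agent code.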
Approval Handling
When using require_approval: "always" or per-tool approval, you need to handle approval requests in your application:
from agents import Runner

# Note: the exact approval attributes and resume mechanism vary by SDK
# version; this sketches the control flow rather than a fixed API.
async def run_with_approval(agent, user_input):
    result = await Runner.run(agent, input=user_input)

    # Check if the agent is waiting for tool approval
    while hasattr(result, "pending_approvals") and result.pending_approvals:
        for approval in result.pending_approvals:
            print(f"Tool '{approval.tool_name}' wants to execute:")
            print(f"  Args: {approval.arguments}")

            user_decision = input("Approve? (y/n): ").strip().lower()
            if user_decision == "y":
                approval.approve()
            else:
                approval.deny(reason="User declined")

        # Continue the run after approval decisions
        result = await Runner.run(
            agent,
            input=result,
        )

    return result.final_output
When to Choose Hosted MCP
| Factor | Hosted MCP | Client-side MCP |
|---|---|---|
| Latency | Lower (no round trips) | Higher (client-server round trips) |
| Infrastructure | None (OpenAI manages) | You run the server |
| Private data | Not suitable | Full control |
| Public APIs | Ideal | Unnecessary overhead |
| Cost | Included in API usage | Your server costs |
| Customization | Limited to server capabilities | Full control |
Use hosted MCP for public data sources and third-party integrations. Use client-side MCP for private data, custom business logic, and scenarios where you need full control over tool execution.
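That guidance can be condensed into a rule of thumb. The helper below mirrors the table, and is a sketch for illustration, not SDK code:

```python
# Rule-of-thumb helper mirroring the comparison table; a sketch, not SDK code.
def recommend_mcp_mode(private_data: bool, custom_logic: bool) -> str:
    """Suggest hosted vs client-side MCP for a given workload."""
    if private_data or custom_logic:
        return "client-side"  # you need full control over execution
    return "hosted"           # public data, no infrastructure to run

print(recommend_mcp_mode(private_data=False, custom_logic=False))  # hosted
print(recommend_mcp_mode(private_data=True, custom_logic=False))   # client-side
```

Edge cases (latency-sensitive workloads over private data, for example) still come down to the trade-offs in the table rather than a single rule.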
Written by
CallSphere Team