
Model Context Protocol (MCP): Connecting Agents to External Tools

Understand MCP, the open protocol for connecting AI agents to external tools and data sources, including its architecture, five transport types, and how to build your first MCP-connected agent.

What Is MCP and Why Does It Matter?

The Model Context Protocol (MCP) is an open standard that defines how AI agents discover and invoke external tools. Think of it as a USB-C port for AI — instead of every agent building custom integrations for every tool, MCP provides a universal interface.

Before MCP, connecting an agent to a database required writing a custom tool function. Connecting to a file system required another. To Slack, another. Each integration was hand-coded, tightly coupled, and impossible to share across frameworks. MCP solves this by defining a standard protocol between MCP clients (your agent) and MCP servers (the tool providers).

The result: a growing ecosystem of MCP servers that any MCP-compatible agent can use out of the box. GitHub, filesystem access, databases, web browsers, Slack, and hundreds more tools are available as MCP servers.

MCP Architecture

MCP follows a client-server model:


MCP Server — A process or service that exposes tools, prompts, and resources. An MCP server declares what tools it has, what parameters they accept, and handles execution. For example, a filesystem MCP server exposes tools like read_file, write_file, and list_directory.

MCP Client — The agent framework that discovers available tools from MCP servers and calls them as needed. OpenAI's Agents SDK includes a built-in MCP client.

Transport Layer — The communication channel between client and server. MCP supports five transport types, each suited to different deployment scenarios.

The flow works like this:

  1. The agent starts and connects to one or more MCP servers
  2. Each server reports its available tools (name, description, parameters)
  3. These tools are registered with the agent alongside any native tools
  4. When the LLM decides to use a tool, the Agents SDK routes the call to the appropriate MCP server
  5. The server executes the tool and returns results
  6. The agent incorporates the results into its response
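Under the hood, steps 2 and 4 of this flow are plain JSON-RPC 2.0 messages: the client sends a `tools/list` request to discover tools and a `tools/call` request to invoke one. Here is a sketch of the two request shapes (the tool name and arguments are illustrative, not from a specific server):

```python
import json

# Step 2 of the flow: the client asks a server which tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 4: the client invokes a discovered tool by name.
# The tool name and arguments below are illustrative.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "/tmp/workspace/notes.txt"},
    },
}

print(json.dumps(list_request, indent=2))
print(json.dumps(call_request, indent=2))
```

The Agents SDK constructs and routes these messages for you; you never write them by hand, but knowing the shape helps when debugging server logs.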

The Five Transport Types

MCP defines five transport mechanisms. Choosing the right one depends on where your MCP server runs and how your agent communicates with it.

1. Stdio (Standard I/O)

The agent spawns the MCP server as a subprocess and communicates via stdin/stdout. The simplest transport — no network involved.

Best for: Local tools, development, CLI-based servers, filesystem access.
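To make the mechanics concrete, here is a minimal sketch of stdio framing: each JSON-RPC message travels as one line on the subprocess's stdin/stdout. The inline "server" below is a stand-in that answers a single `tools/list` request — it is not a real MCP server, just an illustration of the transport:

```python
import json
import subprocess
import sys

# A stand-in "server" that answers one JSON-RPC request over stdio.
# It only illustrates the framing the stdio transport uses
# (one JSON-RPC message per line); it is not a real MCP server.
SERVER = r"""
import json, sys
req = json.loads(sys.stdin.readline())
resp = {"jsonrpc": "2.0", "id": req["id"],
        "result": {"tools": [{"name": "read_file"}]}}
sys.stdout.write(json.dumps(resp) + "\n")
sys.stdout.flush()
"""

proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
print(reply["result"]["tools"][0]["name"])  # prints: read_file
proc.wait()
```

A real MCP session begins with an `initialize` handshake before any `tools/list` call; the sketch skips that to keep the framing visible.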


2. Streamable HTTP

The agent connects to a remote MCP server over HTTP. Supports streaming responses and server-sent events.

Best for: Remote tool servers, cloud-hosted services, production deployments.
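With the OpenAI Agents SDK, switching to this transport is mostly a configuration change. A connection sketch, assuming the SDK's `MCPServerStreamableHttp` class (check `agents.mcp` in your SDK version) and a placeholder URL:

```python
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp

async def main():
    # The URL is a placeholder; point it at your deployed MCP server.
    remote_server = MCPServerStreamableHttp(
        name="Remote Tools",
        params={"url": "https://example.com/mcp"},
    )
    async with remote_server:
        agent = Agent(
            name="Remote Assistant",
            instructions="Use the remote tools to answer questions.",
            mcp_servers=[remote_server],
        )
        result = await Runner.run(agent, input="What tools do you have?")
        print(result.final_output)

asyncio.run(main())
```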

3. SSE (Server-Sent Events)

The legacy HTTP transport: the server pushes messages to the client over SSE, while client-to-server messages travel over HTTP POST. It has been superseded by Streamable HTTP in newer revisions of the MCP specification.

Best for: Backward compatibility with older MCP servers.

4. Hosted MCP

With hosted MCP, OpenAI's infrastructure connects to the remote MCP server and runs the tool-call round trips server-side. No client-side MCP infrastructure is needed; you just point the agent at the server's URL.

Best for: Third-party integrations where you do not run the server yourself, such as GitMCP and DeepWiki.
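In the Agents SDK, a hosted MCP server is declared as a tool rather than an `mcp_servers` entry. A sketch, assuming the SDK's `HostedMCPTool`; the DeepWiki URL is illustrative, so verify it against the server's own documentation:

```python
from agents import Agent, HostedMCPTool

# Hosted MCP: OpenAI's infrastructure talks to the remote server directly,
# so the connection is declared as a tool, not as an mcp_servers entry.
agent = Agent(
    name="Docs Assistant",
    instructions="Answer questions about open source repositories.",
    tools=[
        HostedMCPTool(
            tool_config={
                "type": "mcp",
                "server_label": "deepwiki",
                "server_url": "https://mcp.deepwiki.com/mcp",  # illustrative endpoint
                "require_approval": "never",
            }
        )
    ],
)
```

Setting `require_approval` to something stricter than `"never"` pauses the run for human sign-off before each tool call, which ties into the approval policies discussed under Security Considerations.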

5. Custom Transport

Build your own transport for specialized environments. Useful when your tools are behind a VPN, use gRPC, or have unique authentication requirements.

Best for: Enterprise environments with custom networking requirements.

When to Use Each Transport

Scenario                         Recommended Transport
Local file system access         Stdio
Database queries on localhost    Stdio
Cloud API integration            Streamable HTTP
Third-party SaaS tools           Hosted MCP
Development and testing          Stdio
Multi-tenant production          Streamable HTTP
Open source repo access          Hosted MCP (GitMCP)

Building Your First MCP-Connected Agent

Let us build an agent that uses the filesystem MCP server to read and manage files:

import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    # Create an MCP server that provides filesystem tools
    fs_server = MCPServerStdio(
        name="Filesystem",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp/workspace"],
        },
    )

    # Start the server and discover its tools
    async with fs_server:
        # Create an agent that uses the MCP server's tools
        agent = Agent(
            name="File Manager",
            instructions="""You are a file management assistant.
            Use the filesystem tools to help users organize,
            read, and manage their files in /tmp/workspace.
            Always confirm before deleting files.""",
            mcp_servers=[fs_server],
        )

        # Run the agent
        result = await Runner.run(
            agent,
            input="List all files in the workspace and create a summary.txt with a list of them",
        )
        print(result.final_output)

asyncio.run(main())

When this runs, several things happen behind the scenes:

  1. MCPServerStdio spawns the filesystem server as a subprocess
  2. The async with block initializes the server and fetches its tool list
  3. The agent receives these tools (read_file, write_file, list_directory, etc.) alongside any native tools
  4. The LLM decides which tools to call based on the user's request
  5. Tool calls are routed through the MCP client to the server subprocess
  6. Results flow back and the agent formulates its response

Combining Multiple MCP Servers

Agents can connect to multiple MCP servers simultaneously:

async def multi_server_agent():
    fs_server = MCPServerStdio(
        name="Filesystem",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp/workspace"],
        },
    )

    git_server = MCPServerStdio(
        name="Git",
        params={
            # The reference Git MCP server is a Python package (mcp-server-git),
            # typically run via uvx rather than npx
            "command": "uvx",
            "args": ["mcp-server-git"],
        },
    )

    async with fs_server, git_server:
        agent = Agent(
            name="Dev Assistant",
            instructions="""You help developers manage their codebase.
            You can read/write files and perform git operations.""",
            mcp_servers=[fs_server, git_server],
        )

        result = await Runner.run(
            agent,
            input="Show me the git log and read the README.md file",
        )
        print(result.final_output)

MCP Server Discovery

The MCP ecosystem is growing rapidly. Here are some popular MCP servers:

  • @modelcontextprotocol/server-filesystem — File read/write/search
  • mcp-server-git — Git operations (Python reference server, run via uvx)
  • @modelcontextprotocol/server-github — GitHub API (issues, PRs, repos)
  • @modelcontextprotocol/server-postgres — PostgreSQL queries
  • @modelcontextprotocol/server-sqlite — SQLite database access
  • @modelcontextprotocol/server-brave-search — Web search
  • @modelcontextprotocol/server-puppeteer — Browser automation

Each of these can be connected to your agent with just a few lines of configuration.

Security Considerations

MCP servers have real capabilities — they can read files, execute queries, and make API calls. Security must be a first-class concern:

  1. Principle of least privilege — Only give servers access to the directories and resources they need
  2. Tool filtering — Use allow/block lists to restrict which tools an agent can call (covered in detail in Post 80)
  3. Approval policies — Require human approval for destructive operations
  4. Sandboxing — Run MCP servers in containers or restricted environments
  5. Audit logging — Log all tool invocations for compliance and debugging
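The first point can be enforced mechanically before any filesystem tool call is executed. A minimal sketch, assuming a sandbox root of /tmp/workspace (the `is_allowed` helper is illustrative, not part of MCP or the Agents SDK):

```python
from pathlib import Path

WORKSPACE = "/tmp/workspace"  # illustrative sandbox root

def is_allowed(requested: str, root: str = WORKSPACE) -> bool:
    """True only if the requested path resolves inside the sandbox root."""
    resolved = Path(root, requested).resolve()
    return resolved.is_relative_to(Path(root).resolve())

print(is_allowed("notes/summary.txt"))  # inside the sandbox
print(is_allowed("../../etc/passwd"))   # ../ escape, rejected
```

Resolving the path before checking containment is what defeats `../` traversal; a naive string-prefix check on the raw input would not. (`Path.is_relative_to` requires Python 3.9+.)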

MCP transforms how we build agent integrations. Instead of custom code for every tool, you have a standard protocol with a rich ecosystem of pre-built servers. The next four posts dive deep into each transport type and advanced configuration.

Written by

CallSphere Team

Expert insights on AI voice agents and customer communication automation.
