MCP Prompts: Dynamic Agent Instructions from External Sources
Use MCP prompt resources to dynamically load and parameterize agent instructions from external servers, enabling centralized prompt management with list_prompts and get_prompt.
The Problem with Hardcoded Instructions
Most agent tutorials define instructions as static strings inside Python code:
agent = Agent(
    name="Support Agent",
    instructions="You are a customer support agent for Acme Corp...",
)
This works for demos, but it breaks down in production for several reasons:
- Different teams own different prompts. The product team writes the tone and policy guidelines. The engineering team deploys the agent. Hardcoding instructions forces both teams to coordinate on every prompt change.
- Prompts change faster than code. A/B tests, seasonal promotions, compliance updates — instructions need to change without redeploying the agent service.
- Multi-tenant agents need different instructions per client. A white-label SaaS product might serve dozens of customers, each with different policies and brand voices.
MCP Prompts solve this by making instructions a first-class resource that agents can fetch from external servers at runtime.
What Are MCP Prompts?
The MCP protocol defines three core primitives a server can expose: tools, resources, and prompts. Tools let agents take actions and resources provide data; prompts provide parameterized instruction templates that agents can retrieve on demand.
An MCP server can expose a list of named prompts, each with optional parameters. The agent calls list_prompts() to discover available prompts and get_prompt() to fetch a specific one with parameter values filled in.
This is different from just making an HTTP request to fetch a string. MCP prompts have a schema, support parameters with descriptions and required flags, and return structured message arrays — not just raw text.
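To make that concrete, here is a sketch of the shape a prompts/get result takes on the wire, based on the MCP specification (the literal values are illustrative):

```python
# Illustrative shape of a prompts/get result per the MCP spec:
# a described, structured message array rather than a bare string.
get_prompt_result = {
    "description": "Instructions for a customer support agent",
    "messages": [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": "You are a customer support agent for Acme Corp...",
            },
        }
    ],
}

# Every message carries a role and typed content, so a client can
# distinguish text from other content types without guessing.
first = get_prompt_result["messages"][0]
print(first["role"], "->", first["content"]["type"])
```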
Defining Prompts on the Server
Here is how to create an MCP server that exposes prompts:
# prompt_server.py
from mcp.server import Server
from mcp.types import (
    GetPromptResult,
    Prompt,
    PromptArgument,
    PromptMessage,
    TextContent,
)

server = Server("prompt-server")

PROMPTS = {
    "customer-support": {
        "description": "Instructions for a customer support agent",
        "arguments": [
            PromptArgument(
                name="company_name",
                description="The company the agent represents",
                required=True,
            ),
            PromptArgument(
                name="tone",
                description="Communication tone: friendly, professional, or casual",
                required=False,
            ),
            PromptArgument(
                name="language",
                description="Response language (default: English)",
                required=False,
            ),
        ],
        "template": (
            "You are a customer support agent for {company_name}. "
            "Your tone should be {tone}. Respond in {language}. "
            "Always verify the customer identity before discussing account details. "
            "Never share internal pricing or discount structures. "
            "If you cannot resolve an issue, escalate to a human agent."
        ),
        "defaults": {
            "tone": "professional",
            "language": "English",
        },
    },
    "sales-outreach": {
        "description": "Instructions for a sales outreach agent",
        "arguments": [
            PromptArgument(
                name="product_name",
                description="The product being sold",
                required=True,
            ),
            PromptArgument(
                name="target_industry",
                description="Industry vertical to target",
                required=True,
            ),
        ],
        "template": (
            "You are a sales development representative for {product_name}. "
            "You are targeting companies in the {target_industry} industry. "
            "Lead with value propositions relevant to their industry pain points. "
            "Ask qualifying questions before pitching features. "
            "Always aim to book a discovery call as the next step."
        ),
        "defaults": {},
    },
}

@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    return [
        Prompt(
            name=name,
            description=data["description"],
            arguments=data["arguments"],
        )
        for name, data in PROMPTS.items()
    ]

@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None = None) -> GetPromptResult:
    if name not in PROMPTS:
        raise ValueError(f"Unknown prompt: {name}")

    prompt_def = PROMPTS[name]
    args = {**prompt_def["defaults"], **(arguments or {})}

    # Validate required arguments before formatting
    for arg_def in prompt_def["arguments"]:
        if arg_def.required and arg_def.name not in args:
            raise ValueError(f"Missing required argument: {arg_def.name}")

    text = prompt_def["template"].format(**args)
    # The low-level SDK expects a GetPromptResult, not a bare message list
    return GetPromptResult(
        description=prompt_def["description"],
        messages=[
            PromptMessage(
                role="user",
                content=TextContent(type="text", text=text),
            )
        ],
    )
Fetching Prompts from the Agent Side
On the agent side, you connect to the prompt server and fetch instructions dynamically. The MCP client session exposes list_prompts() and get_prompt():
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def load_instructions(
    company: str, tone: str = "professional"
) -> str:
    params = StdioServerParameters(command="python", args=["prompt_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover available prompts (list_prompts returns a result
            # object whose .prompts attribute holds the list)
            prompts = await session.list_prompts()
            for p in prompts.prompts:
                print(f"Available: {p.name} - {p.description}")

            # Fetch a specific prompt with parameters
            result = await session.get_prompt(
                "customer-support",
                arguments={
                    "company_name": company,
                    "tone": tone,
                },
            )
            return result.messages[0].content.text
Using Dynamic Prompts with the Agents SDK
The real power comes from wiring MCP prompts into the OpenAI Agents SDK. You can use the instructions parameter as a callable that fetches prompts at runtime:
from agents import Agent, Runner
from agents.mcp import MCPServerStdio

prompt_server = MCPServerStdio(
    name="Prompts",
    params={"command": "python", "args": ["prompt_server.py"]},
)

tools_server = MCPServerStdio(
    name="Tools",
    params={"command": "python", "args": ["tools_server.py"]},
    cache_tools_list=True,
)

async def dynamic_instructions(run_context, agent):
    """Fetch instructions from the prompt server at runtime."""
    # The caller supplies a dict via Runner.run(..., context={...});
    # the SDK wraps it, so we read it back through run_context.context
    ctx = run_context.context or {}
    client = ctx.get("prompt_client")
    if client:
        result = await client.get_prompt(
            "customer-support",
            arguments={
                "company_name": ctx.get("company", "Acme"),
                "tone": ctx.get("tone", "professional"),
            },
        )
        return result.messages[0].content.text
    return "You are a helpful assistant."

agent = Agent(
    name="Support Agent",
    instructions=dynamic_instructions,
    mcp_servers=[tools_server],
)
Storing Prompts in a Database
For production use, you will want prompts stored in a database rather than hardcoded in the server. This enables version control, A/B testing, and non-engineer editing:
import json

import asyncpg
from mcp.server import Server
from mcp.types import GetPromptResult, Prompt, PromptArgument, PromptMessage, TextContent

server = Server("db-prompt-server")
db_pool = None

async def get_pool():
    global db_pool
    if db_pool is None:
        db_pool = await asyncpg.create_pool(
            "postgresql://localhost/prompts_db"
        )
    return db_pool

@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    pool = await get_pool()
    rows = await pool.fetch(
        "SELECT name, description, arguments FROM prompts WHERE active = true"
    )
    return [
        Prompt(
            name=row["name"],
            description=row["description"],
            # asyncpg returns jsonb columns as strings unless a
            # type codec is registered, so decode explicitly
            arguments=[
                PromptArgument(**arg) for arg in json.loads(row["arguments"])
            ],
        )
        for row in rows
    ]

@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None = None) -> GetPromptResult:
    pool = await get_pool()
    row = await pool.fetchrow(
        "SELECT description, template, defaults FROM prompts WHERE name = $1 AND active = true",
        name,
    )
    if not row:
        raise ValueError(f"Prompt not found: {name}")

    args = {**json.loads(row["defaults"]), **(arguments or {})}
    text = row["template"].format(**args)

    # Log prompt usage for analytics (encode the dict for the jsonb column)
    await pool.execute(
        "INSERT INTO prompt_usage_log (prompt_name, arguments) VALUES ($1, $2)",
        name,
        json.dumps(arguments or {}),
    )

    return GetPromptResult(
        description=row["description"],
        messages=[
            PromptMessage(
                role="user",
                content=TextContent(type="text", text=text),
            )
        ],
    )
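The handlers above assume a particular schema. A minimal sketch of the two tables (the column names are this article's assumptions, not a standard):

```python
# Assumed schema for the prompt tables used above. Apply with any
# Postgres client; jsonb columns hold argument specs and defaults.
PROMPTS_DDL = """
CREATE TABLE IF NOT EXISTS prompts (
    name        text PRIMARY KEY,
    description text NOT NULL,
    arguments   jsonb NOT NULL DEFAULT '[]',
    template    text NOT NULL,
    defaults    jsonb NOT NULL DEFAULT '{}',
    active      boolean NOT NULL DEFAULT true
);
"""

USAGE_LOG_DDL = """
CREATE TABLE IF NOT EXISTS prompt_usage_log (
    id          bigserial PRIMARY KEY,
    prompt_name text NOT NULL,
    arguments   jsonb,
    used_at     timestamptz NOT NULL DEFAULT now()
);
"""

async def apply_schema(pool):
    # pool is an asyncpg connection pool; the DDL is idempotent,
    # so this is safe to run at server startup
    for ddl in (PROMPTS_DDL, USAGE_LOG_DDL):
        await pool.execute(ddl)
```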
Parameterized Template Patterns
Beyond simple string substitution, you can build sophisticated template patterns:
Conditional sections — Include blocks based on parameter presence:
def build_prompt(template_parts: list[dict], args: dict) -> str:
    sections = []
    for part in template_parts:
        condition = part.get("condition")
        if condition and condition not in args:
            continue
        text = part["text"].format(**args)
        sections.append(text)
    return " ".join(sections)
Versioned prompts — Serve different prompt versions for A/B testing:
@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None = None):
    version = (arguments or {}).pop("version", "latest")
    pool = await get_pool()
    row = await pool.fetchrow(
        "SELECT template FROM prompts WHERE name = $1 AND version = $2",
        name,
        version,
    )
    # ... format and return
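To keep each tenant in a stable experiment bucket, one common approach (a sketch, not part of the server above) is to hash a stable key into a version choice rather than storing assignments:

```python
import hashlib

def pick_version(tenant_id: str, versions: list[str]) -> str:
    # Hash the tenant id so the same tenant always lands in the
    # same bucket, with no assignment state to store or sync.
    digest = hashlib.sha256(tenant_id.encode()).hexdigest()
    return versions[int(digest, 16) % len(versions)]

v = pick_version("tenant-42", ["v1", "v2"])
print(v)  # stable across calls for the same tenant
```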
When to Use MCP Prompts vs Static Instructions
Use static instructions when:
- The agent has a single, stable purpose
- Only engineers modify the instructions
- The application is single-tenant
Use MCP Prompts when:
- Non-engineers need to update agent behavior
- You serve multiple tenants with different requirements
- Instructions change frequently without code deploys
- You want centralized prompt management across multiple agents
- You need audit logging of prompt versions and usage
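A hybrid is also possible: try the prompt server first and fall back to a baked-in default if it is unreachable. A minimal sketch, where fetch_fn stands in for whatever get_prompt call you use:

```python
import asyncio

async def instructions_with_fallback(fetch_fn, fallback: str) -> str:
    # fetch_fn is any zero-argument coroutine function that returns
    # the dynamic instructions; on any failure, degrade to the
    # static default instead of crashing the agent.
    try:
        return await fetch_fn()
    except Exception:
        return fallback

async def broken_fetch():
    raise ConnectionError("prompt server unreachable")

text = asyncio.run(
    instructions_with_fallback(broken_fetch, "You are a helpful assistant.")
)
print(text)  # → You are a helpful assistant.
```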
MCP Prompts turn agent instructions from a code artifact into a managed resource. They give product teams direct control over agent behavior while keeping the engineering team focused on capabilities and infrastructure.
Written by
CallSphere Team
Expert insights on AI voice agents and customer communication automation.