Learn Agentic AI

Building an Agent with Mastra Framework: TypeScript-First Agent Development

Learn how to build AI agents using the Mastra framework. This guide covers project setup, agent definition with typed tools, persistent memory, workflow orchestration, and deployment strategies for TypeScript-first agent applications.

What Is Mastra

Mastra is an open-source TypeScript framework designed specifically for building AI agents, workflows, and RAG pipelines. Unlike general-purpose libraries that bolt agent capabilities onto existing chat abstractions, Mastra treats agents as first-class primitives with built-in support for tools, memory, structured outputs, and multi-step workflows.

The framework follows a "TypeScript-first" philosophy — every component is fully typed, schemas are defined with Zod, and the developer experience prioritizes IDE autocompletion and compile-time safety.

Project Setup

Scaffold a new Mastra project using the CLI:

npx create-mastra@latest my-agent-app
cd my-agent-app

The CLI prompts you for your preferred LLM provider and generates a project structure:

my-agent-app/
  src/
    mastra/
      agents/
        index.ts       # Agent definitions
      tools/
        index.ts       # Tool definitions
      index.ts         # Mastra instance
  .env
  package.json

Install dependencies and set your API key:

npm install
echo "OPENAI_API_KEY=sk-proj-your-key" > .env

Defining Tools

Tools give your agent capabilities beyond text generation. Each tool has a typed input schema, a description for the LLM, and an execute function:

// src/mastra/tools/index.ts
import { createTool } from "@mastra/core";
import { z } from "zod";

export const searchDocsTool = createTool({
  id: "search_docs",
  description: "Search the documentation for relevant articles",
  inputSchema: z.object({
    query: z.string().describe("The search query"),
    limit: z.number().default(5).describe("Max results to return"),
  }),
  outputSchema: z.object({
    results: z.array(
      z.object({
        title: z.string(),
        snippet: z.string(),
        url: z.string(),
      })
    ),
  }),
  execute: async ({ context }) => {
    const { query, limit } = context;
    // searchKnowledgeBase is your own retrieval helper (not part of Mastra).
    const results = await searchKnowledgeBase(query, limit);
    return { results };
  },
});

export const createTicketTool = createTool({
  id: "create_support_ticket",
  description: "Create a support ticket for unresolved issues",
  inputSchema: z.object({
    title: z.string(),
    description: z.string(),
    priority: z.enum(["low", "medium", "high"]),
  }),
  execute: async ({ context }) => {
    // ticketService is your own backend client (not part of Mastra).
    const ticket = await ticketService.create(context);
    return { ticketId: ticket.id, status: "created" };
  },
});

The inputSchema serves a dual purpose: it generates the JSON Schema sent to the LLM for function calling, and it validates the arguments at runtime before execute runs.
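Conceptually, the runtime half of that contract looks like the sketch below — a hand-rolled validator (no Zod, no Mastra) that applies the `limit` default and rejects malformed arguments before execution. `validateSearchInput` is an illustrative name, not a framework API:

```typescript
// Stand-in for what a Zod inputSchema does at runtime:
// apply defaults, check types, and fail fast before execute() runs.
interface SearchInput {
  query: string;
  limit: number;
}

function validateSearchInput(raw: unknown): SearchInput {
  const obj = raw as Record<string, unknown> | null;
  if (typeof obj?.query !== "string") {
    throw new Error("query must be a string");
  }
  // Mirrors z.number().default(5): missing limit falls back to 5.
  const limit = obj.limit === undefined ? 5 : obj.limit;
  if (typeof limit !== "number") {
    throw new Error("limit must be a number");
  }
  return { query: obj.query, limit };
}
```

Calling `validateSearchInput({ query: "reset password" })` yields `{ query: "reset password", limit: 5 }`, while a non-string query throws before any tool logic runs — exactly the guarantee the schema gives your execute function.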

Defining an Agent

Agents combine a model, system instructions, and tools into a coherent unit:


// src/mastra/agents/index.ts
import { Agent } from "@mastra/core";
import { searchDocsTool, createTicketTool } from "../tools";

export const supportAgent = new Agent({
  name: "Support Agent",
  instructions: `You are a customer support agent for a SaaS platform.
Your primary task is to answer user questions by searching documentation.
If you cannot resolve an issue after searching, create a support ticket.
Always be concise and reference specific documentation links.`,
  model: {
    provider: "OPEN_AI",
    name: "gpt-4o",
    toolChoice: "auto",
  },
  tools: {
    search_docs: searchDocsTool,
    create_support_ticket: createTicketTool,
  },
});
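Under the hood, a framework like Mastra translates each entry in the tools map into a function-calling spec for the provider's API, using the tool's id as the function name. A simplified, framework-free sketch of that translation (the `ToolDef` shape and `toFunctionSpecs` helper are illustrative, not Mastra internals):

```typescript
// Simplified tool shape: id, description, and a JSON Schema for inputs
// (in Mastra, the schema is generated from the Zod inputSchema).
interface ToolDef {
  id: string;
  description: string;
  parameters: object;
}

// Build an OpenAI-style `tools` array to send with each chat completion.
function toFunctionSpecs(tools: Record<string, ToolDef>) {
  return Object.values(tools).map((t) => ({
    type: "function" as const,
    function: {
      name: t.id,
      description: t.description,
      parameters: t.parameters,
    },
  }));
}
```

This is why the tool's description matters so much: it is the only guidance the model gets about when to call `search_docs` versus `create_support_ticket`.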

Registering with the Mastra Instance

The Mastra instance is the central registry for all agents, tools, and workflows:

// src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { supportAgent } from "./agents";

export const mastra = new Mastra({
  agents: { supportAgent },
});
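The payoff of a typed registry is that lookups are checked at compile time: `getAgent("supportAgent")` type-checks, while a typo in the agent name is a compiler error rather than a runtime failure. A minimal sketch of the pattern (not Mastra's actual implementation):

```typescript
// A minimal typed registry: the agent map's keys become the only
// valid arguments to getAgent, enforced by the compiler.
class Registry<A extends Record<string, unknown>> {
  constructor(private agents: A) {}

  getAgent<K extends keyof A>(name: K): A[K] {
    return this.agents[name];
  }
}

const registry = new Registry({ supportAgent: { name: "Support Agent" } });

// Typed as { name: string }; registry.getAgent("typo") would not compile.
const agent = registry.getAgent("supportAgent");
```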

Running the Agent

Execute the agent programmatically or through the built-in dev server:

import { mastra } from "./mastra";

async function main() {
  const agent = mastra.getAgent("supportAgent");

  const response = await agent.generate(
    "How do I reset my password? I've tried the forgot password link but it's not sending emails."
  );

  console.log(response.text);
}

main();

For development, Mastra provides a playground:

npx mastra dev

This launches a local web interface where you can interact with your agents, inspect tool calls, and debug conversation flows.

Adding Memory

Mastra supports persistent memory so agents remember context across conversations:

import { Agent } from "@mastra/core";
import { PostgresMemory } from "@mastra/memory";

const memory = new PostgresMemory({
  connectionString: process.env.DATABASE_URL!,
});

export const supportAgent = new Agent({
  name: "Support Agent",
  instructions: "...",
  model: { provider: "OPEN_AI", name: "gpt-4o" },
  tools: { /* ... */ },
  memory,
});

With memory enabled, calling agent.generate() with a threadId parameter automatically loads and saves conversation history.
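A simplified illustration of what thread-scoped memory does — here an in-memory store stands in for Postgres, and `ThreadStore` is an illustrative name, not a Mastra API. Each turn loads the prior messages for the thread, appends the new exchange, and saves the result, so a follow-up call with the same threadId sees the full history:

```typescript
interface Message {
  role: "user" | "assistant";
  content: string;
}

// In-memory stand-in for the Postgres-backed store: history keyed by threadId.
class ThreadStore {
  private threads = new Map<string, Message[]>();

  load(threadId: string): Message[] {
    return this.threads.get(threadId) ?? [];
  }

  append(threadId: string, ...msgs: Message[]): void {
    this.threads.set(threadId, [...this.load(threadId), ...msgs]);
  }
}

const store = new ThreadStore();
store.append("thread-1", { role: "user", content: "How do I reset my password?" });
store.append("thread-1", { role: "assistant", content: "Use the forgot-password link." });

// A later generate() call on "thread-1" would be primed with both messages.
const history = store.load("thread-1");
```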

Workflows for Multi-Step Processes

For complex operations that go beyond a single agent loop, Mastra provides typed workflows:

import { Workflow, Step } from "@mastra/core";
import { z } from "zod";

const onboardingWorkflow = new Workflow({
  name: "user-onboarding",
  triggerSchema: z.object({
    userId: z.string(),
    plan: z.enum(["free", "pro", "enterprise"]),
  }),
});

onboardingWorkflow
  .step(new Step({
    id: "create-workspace",
    execute: async ({ context }) => {
      // createWorkspace is your own provisioning helper (not part of Mastra).
      return { workspaceId: await createWorkspace(context.userId) };
    },
  }))
  .then(new Step({
    id: "send-welcome",
    execute: async ({ context }) => {
      // sendWelcomeEmail is your own helper; workspaceId comes from the prior step.
      await sendWelcomeEmail(context.userId, context.workspaceId);
      return { emailSent: true };
    },
  }));
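Conceptually, `.step(...).then(...)` builds an ordered list of steps whose outputs are merged into a shared context, which is how send-welcome can read the workspaceId produced by create-workspace. A framework-free sketch of that execution model (names are illustrative, not Mastra internals):

```typescript
type Ctx = Record<string, unknown>;

interface StepDef {
  id: string;
  execute: (args: { context: Ctx }) => Promise<Ctx>;
}

// Run steps in order, merging each step's output into the shared context
// so later steps can read earlier results.
async function runWorkflow(trigger: Ctx, steps: StepDef[]): Promise<Ctx> {
  let context: Ctx = { ...trigger };
  for (const step of steps) {
    const output = await step.execute({ context });
    context = { ...context, ...output };
  }
  return context;
}
```

Running it with a trigger of `{ userId: "u1" }` and two steps that return `{ workspaceId }` and `{ emailSent: true }` leaves all three keys in the final context, mirroring how the onboarding workflow threads data between steps.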

FAQ

How does Mastra compare to LangChain.js?

Mastra is more opinionated and TypeScript-native. LangChain.js offers broader integrations and a larger community, but Mastra provides tighter type safety, a built-in dev playground, and a cleaner API surface. Mastra is a good choice if you want a batteries-included framework specifically for agent applications rather than a general-purpose LLM toolkit.

Can I use Mastra with providers other than OpenAI?

Yes. Mastra supports Anthropic, Google Gemini, and Groq out of the box. Specify the provider in the agent's model configuration. The tool calling interface remains identical regardless of the underlying model provider.

Is Mastra suitable for production deployments?

Mastra is designed for production use. It supports deployment to Vercel, Cloudflare Workers, and any Node.js server. The framework includes built-in observability hooks, error handling, and structured logging for production monitoring.


#Mastra #TypeScript #AIAgents #Framework #ToolCalling #AgentMemory #AgenticAI #LearnAI #AIEngineering

Written by

CallSphere Team

Expert insights on AI voice agents and customer communication automation.
