Learn Agentic AI

CrewAI Tools: Built-In and Custom Tools for Agent Capabilities

Extend CrewAI agents with built-in tools like SerperDevTool and ScrapeWebsiteTool, create custom tools using the @tool decorator, and configure tool sharing across multiple agents.

Why Tools Matter for Agents

An agent without tools is limited to what its LLM already knows. It cannot search the web, read files, query databases, or interact with APIs. Tools give agents the ability to take real actions in the world. In CrewAI, tools are Python functions or classes that agents can invoke during their reasoning loop. The agent decides when and how to use them based on the task at hand.

CrewAI provides a rich set of built-in tools through the crewai-tools package and makes it straightforward to build custom ones.

Built-In Tools

Install the tools package if you have not already:

pip install crewai-tools

SerperDevTool — Web Search

The SerperDevTool enables agents to search the web using the Serper API (a Google Search wrapper):

from crewai import Agent
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

researcher = Agent(
    role="Research Analyst",
    goal="Find up-to-date information from the web",
    backstory="Expert at online research and source verification.",
    tools=[search_tool],
)

Set your Serper API key in the environment:

export SERPER_API_KEY="your-serper-key"

The agent will automatically invoke the search tool when it needs current information that is not in its training data.

ScrapeWebsiteTool — Web Scraping

For reading specific web pages, use ScrapeWebsiteTool:

from crewai_tools import ScrapeWebsiteTool

# General scraper — agent provides the URL
scraper = ScrapeWebsiteTool()

# URL-specific scraper — locked to a single page
doc_scraper = ScrapeWebsiteTool(
    website_url="https://docs.crewai.com/introduction"
)

The general version lets the agent scrape any URL it discovers. The URL-specific version restricts it to a single page, which is useful for focused research tasks.

FileReadTool and DirectoryReadTool

For local file access:


from crewai_tools import FileReadTool, DirectoryReadTool

file_reader = FileReadTool(file_path="./data/report.csv")
dir_reader = DirectoryReadTool(directory="./data/")

data_analyst = Agent(
    role="Data Analyst",
    goal="Analyze local data files",
    backstory="Expert at reading and interpreting structured data.",
    tools=[file_reader, dir_reader],
)

Creating Custom Tools

CrewAI provides two approaches for building custom tools: the @tool decorator for simple functions and the BaseTool class for complex tools.


The @tool Decorator

The simplest way to create a custom tool:

from crewai.tools import tool

@tool("Calculate Compound Interest")
def compound_interest(principal: float, rate: float, years: int) -> str:
    """Calculate compound interest for a given principal, annual rate, and time period.
    Args:
        principal: The initial investment amount
        rate: Annual interest rate as a decimal (e.g., 0.05 for 5%)
        years: Number of years
    """
    amount = principal * (1 + rate) ** years
    interest = amount - principal
    return f"Principal: ${principal:,.2f}, Rate: {rate*100}%, Years: {years}, Final: ${amount:,.2f}, Interest: ${interest:,.2f}"

The docstring is critical. CrewAI uses it to tell the agent what the tool does and what parameters it accepts. A well-written docstring means the agent will use the tool correctly.
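The tool body itself is plain Python, so the formula can be sanity-checked without any agent in the loop. This sketch reimplements the calculation to verify the numbers the tool would report (the function name here is illustrative, not part of crewai):

```python
def compound_amount(principal: float, rate: float, years: int) -> float:
    """Compound interest: A = P * (1 + r)^n — the same formula the tool uses."""
    return principal * (1 + rate) ** years

# $1,000 at 5% for 10 years
amount = compound_amount(1000, 0.05, 10)
interest = amount - 1000
print(f"Final: ${amount:,.2f}, Interest: ${interest:,.2f}")
# → Final: $1,628.89, Interest: $628.89
```

Unit-testing the underlying function like this is cheap insurance before handing the tool to an agent.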

The BaseTool Class

For tools that need initialization, state, or complex logic:

from crewai.tools import BaseTool
from pydantic import BaseModel, Field
import httpx

class StockPriceInput(BaseModel):
    ticker: str = Field(description="Stock ticker symbol, e.g. AAPL")

class StockPriceTool(BaseTool):
    name: str = "Get Stock Price"
    description: str = "Fetches the current stock price for a given ticker symbol."
    args_schema: type[BaseModel] = StockPriceInput
    api_key: str  # supplied at construction: StockPriceTool(api_key="...")

    def _run(self, ticker: str) -> str:
        response = httpx.get(
            f"https://api.example.com/stock/{ticker}/price",
            headers={"Authorization": f"Bearer {self.api_key}"},
        )
        data = response.json()
        return f"{ticker}: ${data['price']:.2f} ({data['change']:+.2f}%)"

The BaseTool approach gives you a Pydantic schema for input validation, which produces better tool descriptions for the LLM and catches parameter errors before execution.
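To see why pre-execution validation matters, here is a dependency-free sketch of the idea: check declared parameter types against incoming arguments before the tool body ever runs. This mimics what the Pydantic args_schema does for you; it is not CrewAI internals:

```python
def validate_args(schema: dict, kwargs: dict) -> None:
    """Reject missing or wrongly-typed arguments before the tool body runs."""
    for name, expected_type in schema.items():
        if name not in kwargs:
            raise ValueError(f"Missing required argument: {name}")
        if not isinstance(kwargs[name], expected_type):
            raise TypeError(f"{name} must be {expected_type.__name__}")

schema = {"ticker": str}                     # mirrors StockPriceInput
validate_args(schema, {"ticker": "AAPL"})    # passes silently
try:
    validate_args(schema, {"ticker": 42})    # caught before any HTTP call
except TypeError as e:
    print(e)
```

The point is that a bad parameter fails fast with a clear message, instead of surfacing as a confusing API error mid-run.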

Tool Sharing Across Agents

By default, tools assigned to an agent are private. To share tools across the entire crew, pass them at the crew level:

from crewai import Crew

shared_search = SerperDevTool()

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    tools=[shared_search],
)

When tools are provided at the crew level, every agent in the crew can access them. Agent-level tools take priority if there is a naming conflict.
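The precedence rule can be pictured as a simple merge keyed by tool name, with agent-level entries winning. This is a conceptual sketch, not CrewAI's actual resolution code:

```python
def resolve_tools(crew_tools: list, agent_tools: list) -> list:
    """Merge crew-level and agent-level tools; agent tools win on name clashes."""
    merged = {t["name"]: t for t in crew_tools}
    merged.update({t["name"]: t for t in agent_tools})  # agent overrides crew
    return list(merged.values())

crew_level = [{"name": "search", "source": "crew"}]
agent_level = [{"name": "search", "source": "agent"},
               {"name": "scrape", "source": "agent"}]
print(resolve_tools(crew_level, agent_level))
# the "search" entry comes from the agent, not the crew
```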

Tool Error Handling

Wrap your custom tools with error handling to prevent agent crashes:

import httpx

@tool("Fetch API Data")
def fetch_api_data(endpoint: str) -> str:
    """Fetch data from the internal API.

    Args:
        endpoint: The API path to query
    """
    try:
        response = httpx.get(f"https://api.internal.com/{endpoint}", timeout=10)
        response.raise_for_status()
        return response.text
    except httpx.TimeoutException:
        return "Error: API request timed out after 10 seconds."
    except httpx.HTTPStatusError as e:
        return f"Error: API returned status {e.response.status_code}."

Returning error messages as strings (instead of raising exceptions) allows the agent to reason about the failure and try alternative approaches.
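This pattern generalizes into a small decorator you can wrap around any tool body, so every failure comes back as a string the agent can reason about. A sketch — `safe_tool` is not part of crewai:

```python
import functools

def safe_tool(fn):
    """Convert any exception raised by a tool body into an error string."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            return f"Error: {type(e).__name__}: {e}"
    return wrapper

@safe_tool
def divide(a: float, b: float) -> str:
    return f"Result: {a / b}"

print(divide(10, 2))   # → Result: 5.0
print(divide(10, 0))   # → Error: ZeroDivisionError: division by zero
```

Apply it beneath the @tool decorator so the registered tool already carries the safety net.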

FAQ

How many tools should an agent have?

Keep it under 8 to 10 tools per agent. Each tool's description is injected into the agent's context, consuming tokens and potentially confusing the LLM. If an agent needs many capabilities, consider splitting it into multiple specialized agents.

Can tools call other tools?

Not directly through CrewAI's tool framework. If you need composed behavior, build it into a single tool function that internally calls multiple APIs or functions. The agent sees it as one tool, keeping the interface clean.
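A composed tool looks like this: one function the agent sees, several internal calls behind it. The helper names and return values below are hypothetical stand-ins for real API calls:

```python
def fetch_user_profile(user_id: str) -> dict:
    """Stand-in for a real API call."""
    return {"id": user_id, "name": "Ada"}

def fetch_user_orders(user_id: str) -> list:
    """Stand-in for a second API call."""
    return [{"order": "A-1"}, {"order": "A-2"}]

def user_summary(user_id: str) -> str:
    """The single tool the agent sees; composes two internal calls."""
    profile = fetch_user_profile(user_id)
    orders = fetch_user_orders(user_id)
    return f"{profile['name']} has {len(orders)} orders."

print(user_summary("u-42"))  # → Ada has 2 orders.
```

Decorating only `user_summary` with @tool keeps the agent's tool list to one entry while the composition happens in ordinary Python.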

Do tools work with all LLM providers?

Yes. Tools are provider-agnostic because CrewAI translates them into the standard function-calling format. However, smaller or older models may struggle with complex tool schemas. If you see tool-use errors, simplify your parameter types and improve your docstrings.
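To make "standard function-calling format" concrete, here is roughly what a tool's name, description, and parameters become when serialized for the model. The shape shown is the common OpenAI-style schema; CrewAI handles the actual serialization for you:

```python
import json

# Roughly what the compound_interest tool looks like on the wire
tool_schema = {
    "type": "function",
    "function": {
        "name": "compound_interest",
        "description": "Calculate compound interest for a given principal, annual rate, and time period.",
        "parameters": {
            "type": "object",
            "properties": {
                "principal": {"type": "number", "description": "The initial investment amount"},
                "rate": {"type": "number", "description": "Annual interest rate as a decimal"},
                "years": {"type": "integer", "description": "Number of years"},
            },
            "required": ["principal", "rate", "years"],
        },
    },
}
print(json.dumps(tool_schema, indent=2))
```

Every field here is derived from your docstring and type hints, which is why clear docstrings and simple parameter types translate directly into reliable tool use.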


#CrewAI #Tools #CustomTools #WebScraping #Python #AgenticAI #LearnAI #AIEngineering
