---
title: "CrewAI Tools: Built-In and Custom Tools for Agent Capabilities"
description: "Extend CrewAI agents with built-in tools like SerperDevTool and ScrapeWebsiteTool, create custom tools using the @tool decorator, and configure tool sharing across multiple agents."
canonical: https://callsphere.ai/blog/crewai-tools-built-in-custom-agent-capabilities
category: "Learn Agentic AI"
tags: ["CrewAI", "Tools", "Custom Tools", "Web Scraping", "Python"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T01:02:44.054Z
---

# CrewAI Tools: Built-In and Custom Tools for Agent Capabilities

> Extend CrewAI agents with built-in tools like SerperDevTool and ScrapeWebsiteTool, create custom tools using the @tool decorator, and configure tool sharing across multiple agents.

## Why Tools Matter for Agents

An agent without tools is limited to what its LLM already knows. It cannot search the web, read files, query databases, or interact with APIs. Tools give agents the ability to take real actions in the world. In CrewAI, tools are Python functions or classes that agents can invoke during their reasoning loop. The agent decides when and how to use them based on the task at hand.

CrewAI provides a rich set of built-in tools through the `crewai-tools` package and makes it straightforward to build custom ones.

## Built-In Tools

How tools fit into a crew:

```mermaid
flowchart TD
    GOAL(["Crew goal"])
    MGR["Manager agent
hierarchical process"]
    R1["Researcher agent
role plus backstory"]
    R2["Analyst agent"]
    W1["Writer agent"]
    T1["Task A
research"]
    T2["Task B
analyze"]
    T3["Task C
draft"]
    TOOLS[("Tools
web search, files")]
    OUT(["Crew output"])
    GOAL --> MGR
    MGR --> T1 --> R1 --> TOOLS
    R1 --> T2 --> R2
    R2 --> T3 --> W1 --> OUT
    style MGR fill:#4f46e5,stroke:#4338ca,color:#fff
    style TOOLS fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
```

Install the tools package if you have not already:

```bash
pip install crewai-tools
```

### SerperDevTool — Web Search

The `SerperDevTool` enables agents to search the web using the Serper API (a Google Search wrapper):

```python
from crewai import Agent
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

researcher = Agent(
    role="Research Analyst",
    goal="Find up-to-date information from the web",
    backstory="Expert at online research and source verification.",
    tools=[search_tool],
)
```

Set your Serper API key in the environment:

```bash
export SERPER_API_KEY="your-serper-key"
```

The agent will automatically invoke the search tool when it needs current information that is not in its training data.

### ScrapeWebsiteTool — Web Scraping

For reading specific web pages, use `ScrapeWebsiteTool`:

```python
from crewai_tools import ScrapeWebsiteTool

# General scraper — agent provides the URL
scraper = ScrapeWebsiteTool()

# URL-specific scraper — locked to a single page
doc_scraper = ScrapeWebsiteTool(
    website_url="https://docs.crewai.com/introduction"
)
```

The general version lets the agent scrape any URL it discovers. The URL-specific version restricts it to a single page, which is useful for focused research tasks.

### FileReadTool and DirectoryReadTool

For local file access:

```python
from crewai_tools import FileReadTool, DirectoryReadTool

file_reader = FileReadTool(file_path="./data/report.csv")
dir_reader = DirectoryReadTool(directory="./data/")

data_analyst = Agent(
    role="Data Analyst",
    goal="Analyze local data files",
    backstory="Expert at reading and interpreting structured data.",
    tools=[file_reader, dir_reader],
)
```

## Creating Custom Tools

CrewAI provides two approaches for building custom tools: the `@tool` decorator for simple functions and the `BaseTool` class for complex tools.

### The @tool Decorator

The simplest way to create a custom tool:

```python
from crewai.tools import tool

@tool("Calculate Compound Interest")
def compound_interest(principal: float, rate: float, years: int) -> str:
    """Calculate compound interest for a given principal, annual rate, and time period.
    Args:
        principal: The initial investment amount
        rate: Annual interest rate as a decimal (e.g., 0.05 for 5%)
        years: Number of years
    """
    amount = principal * (1 + rate) ** years
    interest = amount - principal
    return f"Principal: ${principal:,.2f}, Rate: {rate*100}%, Years: {years}, Final: ${amount:,.2f}, Interest: ${interest:,.2f}"
```

The docstring is critical. CrewAI uses it to tell the agent what the tool does and what parameters it accepts. A well-written docstring means the agent will use the tool correctly.
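The arithmetic inside the tool is ordinary compound-interest math, so it is easy to sanity-check in isolation. This is a plain-Python sketch of the same formula (no CrewAI required); `grow` is a hypothetical helper, not part of the library:

```python
# Standalone check of the compound-interest formula used by the tool above.
def grow(principal: float, rate: float, years: int) -> float:
    """Return the final amount after annual compounding."""
    return principal * (1 + rate) ** years

final = grow(1000.0, 0.05, 10)
interest = final - 1000.0
print(f"Final: ${final:,.2f}, Interest: ${interest:,.2f}")
# → Final: $1,628.89, Interest: $628.89
```

Testing the math separately from the tool wrapper makes it easier to trust the string the agent eventually sees.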

### The BaseTool Class

For tools that need initialization, state, or complex logic:

```python
import os

import httpx
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class StockPriceInput(BaseModel):
    ticker: str = Field(description="Stock ticker symbol, e.g. AAPL")

class StockPriceTool(BaseTool):
    name: str = "Get Stock Price"
    description: str = "Fetches the current stock price for a given ticker symbol."
    args_schema: type[BaseModel] = StockPriceInput
    # State the tool needs at runtime, declared as a field
    api_key: str = os.environ.get("STOCK_API_KEY", "")

    def _run(self, ticker: str) -> str:
        response = httpx.get(
            f"https://api.example.com/stock/{ticker}/price",
            headers={"Authorization": f"Bearer {self.api_key}"},
        )
        response.raise_for_status()
        data = response.json()
        return f"{ticker}: ${data['price']:.2f} ({data['change']:+.2f}%)"
```

The `BaseTool` approach gives you a Pydantic schema for input validation, which produces better tool descriptions for the LLM and catches parameter errors before execution.
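You can see the value of that pre-execution check in miniature with a hand-rolled validator (stdlib only; Pydantic does this plus JSON-schema generation for the LLM). `validate_ticker` here is a hypothetical stand-in, not a CrewAI API:

```python
# Minimal sketch of input validation, mimicking what an args_schema
# gives you for free: reject bad parameters before the tool body runs.
def validate_ticker(args: dict) -> str:
    """Check the ticker argument and normalize it, or raise before execution."""
    ticker = args.get("ticker")
    if not isinstance(ticker, str) or not ticker:
        raise ValueError("ticker must be a non-empty string")
    return ticker.upper()

print(validate_ticker({"ticker": "aapl"}))  # → AAPL
```

With a real `args_schema`, the same rejection happens inside CrewAI, and the error is surfaced to the agent instead of crashing the run.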

## Tool Sharing Across Agents

By default, tools assigned to an agent are private. To share tools across the entire crew, pass them at the crew level:

```python
from crewai import Crew

shared_search = SerperDevTool()

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    tools=[shared_search],
)
```

When tools are provided at the crew level, every agent in the crew can access them. Agent-level tools take priority if there is a naming conflict.

## Tool Error Handling

Wrap your custom tools with error handling to prevent agent crashes:

```python
import httpx
from crewai.tools import tool

@tool("Fetch API Data")
def fetch_api_data(endpoint: str) -> str:
    """Fetch data from the internal API. Args: endpoint: The API path to query."""
    try:
        response = httpx.get(f"https://api.internal.com/{endpoint}", timeout=10)
        response.raise_for_status()
        return response.text
    except httpx.TimeoutException:
        return "Error: API request timed out after 10 seconds."
    except httpx.HTTPStatusError as e:
        return f"Error: API returned status {e.response.status_code}."
```

Returning error messages as strings (instead of raising exceptions) allows the agent to reason about the failure and try alternative approaches.
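The pattern itself is independent of httpx or CrewAI. Here is a minimal sketch with a stubbed fetch callable (`fetch_with_fallback` and `flaky` are hypothetical names, not library APIs):

```python
# Sketch of the "errors as strings" pattern with a stubbed fetch.
# The agent always receives a string it can reason about, never an exception.
def fetch_with_fallback(fetch) -> str:
    """Run a fetch callable, converting known failures into readable strings."""
    try:
        return fetch()
    except TimeoutError:
        return "Error: request timed out."

def flaky():
    # Stand-in for a network call that times out
    raise TimeoutError

print(fetch_with_fallback(flaky))         # → Error: request timed out.
print(fetch_with_fallback(lambda: "ok"))  # → ok
```

Because the failure arrives as ordinary tool output, the agent can decide to retry, try a different endpoint, or report the problem, rather than having the whole run abort.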

## FAQ

### How many tools should an agent have?

Keep it under 8 to 10 tools per agent. Each tool's description is injected into the agent's context, consuming tokens and potentially confusing the LLM. If an agent needs many capabilities, consider splitting it into multiple specialized agents.

### Can tools call other tools?

Not directly through CrewAI's tool framework. If you need composed behavior, build it into a single tool function that internally calls multiple APIs or functions. The agent sees it as one tool, keeping the interface clean.
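A composed tool can look like this sketch: one entry point the agent sees as a single tool, internally chaining two steps. All names (`lookup_ticker`, `company_quote`) and the hard-coded price are hypothetical stand-ins for real API calls:

```python
# Hypothetical composed tool: the agent sees one function, which
# internally resolves a ticker symbol and then formats a quote.
def lookup_ticker(company: str) -> str:
    # Stand-in for a real symbol-lookup API
    return {"apple": "AAPL", "microsoft": "MSFT"}.get(company.lower(), "UNKNOWN")

def format_quote(ticker: str, price: float) -> str:
    return f"{ticker}: ${price:.2f}"

def company_quote(company: str) -> str:
    """Single tool entry point: resolve the ticker, then format a quote."""
    ticker = lookup_ticker(company)
    price = 123.45  # stand-in for a real price API call
    return format_quote(ticker, price)

print(company_quote("Apple"))  # → AAPL: $123.45
```

Wrapping `company_quote` with `@tool` would expose the whole chain as one capability, so the LLM never has to orchestrate the intermediate steps itself.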

### Do tools work with all LLM providers?

Yes. Tools are provider-agnostic because CrewAI translates them into the standard function-calling format. However, smaller or older models may struggle with complex tool schemas. If you see tool-use errors, simplify your parameter types and improve your docstrings.

---

#CrewAI #Tools #CustomTools #WebScraping #Python #AgenticAI #LearnAI #AIEngineering

---

Source: https://callsphere.ai/blog/crewai-tools-built-in-custom-agent-capabilities
