---
title: "Building LangChain Agents: Tools, AgentExecutor, and the ReAct Loop"
description: "Learn how to build LangChain agents that use tools to solve problems, understand the ReAct reasoning loop, and configure AgentExecutor for reliable agent behavior."
canonical: https://callsphere.ai/blog/building-langchain-agents-tools-agentexecutor-react-loop
category: "Learn Agentic AI"
tags: ["LangChain", "AI Agents", "ReAct", "Tool Use", "Python"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T04:13:25.142Z
---

# Building LangChain Agents: Tools, AgentExecutor, and the ReAct Loop

> Learn how to build LangChain agents that use tools to solve problems, understand the ReAct reasoning loop, and configure AgentExecutor for reliable agent behavior.

## From Chains to Agents

A chain follows a fixed path: prompt goes in, response comes out. An agent, by contrast, decides at each step what action to take. It can call tools, inspect results, reason about what to do next, and repeat until it has enough information to answer the original question.

LangChain agents implement the **ReAct** (Reasoning + Acting) pattern. The model alternates between reasoning about the problem and taking actions (tool calls). This loop continues until the model decides it can produce a final answer.

## Defining Tools

Tools are functions that an agent can invoke. Each tool has a name, a description (used by the LLM to decide when to call it), and an implementation.

```mermaid
flowchart TD
    USER(["User message"])
    LLM["LLM call
with tools schema"]
    DECIDE{"Model wants
to call a tool?"}
    EXEC["Execute tool
(sandboxed runtime)"]
    RESULT["Append tool_result
to messages"]
    GUARD{"Output passes
guardrails?"}
    DONE(["Final reply"])
    BLOCK(["Refuse and log"])
    USER --> LLM --> DECIDE
    DECIDE -->|Yes| EXEC --> RESULT --> LLM
    DECIDE -->|No| GUARD
    GUARD -->|Yes| DONE
    GUARD -->|No| BLOCK
    style LLM fill:#4f46e5,stroke:#4338ca,color:#fff
    style EXEC fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style DONE fill:#059669,stroke:#047857,color:#fff
    style BLOCK fill:#dc2626,stroke:#b91c1c,color:#fff
```

```python
from langchain_core.tools import tool

@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers together."""
    return a * b

@tool
def web_search(query: str) -> str:
    """Search the web for current information about a topic."""
    # In production, this would call a search API
    return f"Search results for: {query}"

print(multiply.name)        # "multiply"
print(multiply.description) # "Multiply two numbers together."
```

The `@tool` decorator automatically extracts the function name, docstring, and parameter types to build the tool schema. The LLM sees the name, description, and parameter schema when deciding which tool to call.
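To get a feel for what the decorator extracts, here is a rough stand-alone sketch (not LangChain's actual implementation) that builds a tool schema from a plain function using `inspect`; `build_tool_schema` and the type mapping are illustrative:

```python
import inspect

def build_tool_schema(fn):
    """Build a minimal tool schema from a function's signature and
    docstring, roughly mirroring what @tool extracts."""
    # Map Python annotations to JSON-schema-style type names
    type_names = {float: "number", int: "integer", str: "string", bool: "boolean"}
    sig = inspect.signature(fn)
    params = {
        name: {"type": type_names.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }

def multiply(a: float, b: float) -> float:
    """Multiply two numbers together."""
    return a * b

schema = build_tool_schema(multiply)
print(schema["name"])        # "multiply"
print(schema["parameters"])  # {'a': {'type': 'number'}, 'b': {'type': 'number'}}
```

The real decorator produces a richer Pydantic-backed schema, but the inputs are the same: name, docstring, and annotations. This is why a clear docstring matters so much; it is the only signal the model gets about when to call the tool.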

## Creating an Agent with Tool Binding

Modern LangChain agents use the `create_tool_calling_agent` function, which leverages native tool-calling capabilities of chat models rather than parsing text output.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import create_tool_calling_agent, AgentExecutor

# Define the prompt with required placeholders
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful math assistant."),
    MessagesPlaceholder("chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

# Create model and bind tools
llm = ChatOpenAI(model="gpt-4o", temperature=0)
tools = [multiply, web_search]

# Build the agent
agent = create_tool_calling_agent(llm, tools, prompt)

# Wrap in AgentExecutor
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```

The `agent_scratchpad` placeholder is where intermediate tool calls and results are stored during the reasoning loop.
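Conceptually, the scratchpad is a list of messages rebuilt from the intermediate (action, observation) pairs on every loop iteration. A simplified stand-alone sketch (the dict message shapes and `format_scratchpad` name are illustrative, not LangChain internals):

```python
def format_scratchpad(intermediate_steps):
    """Flatten (tool_call, result) pairs into messages that are appended
    to the prompt each time the model is called again."""
    messages = []
    for action, observation in intermediate_steps:
        # The assistant's tool request, then the tool's result
        messages.append({"role": "assistant", "tool_call": action})
        messages.append({"role": "tool", "content": str(observation)})
    return messages

steps = [({"tool": "multiply", "args": {"a": 47.5, "b": 23.8}}, 1130.5)]
for msg in format_scratchpad(steps):
    print(msg)
```

Because the scratchpad grows with each tool call, the model always sees the full history of what it has tried and what came back.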

## Running the Agent

```python
result = executor.invoke({
    "input": "What is 47.5 multiplied by 23.8?"
})
print(result["output"])
```

With `verbose=True`, you will see the full reasoning trace:

1. The agent receives the question
2. It decides to call the `multiply` tool with arguments `a=47.5, b=23.8`
3. The tool returns `1130.5`
4. The agent produces the final answer: "47.5 multiplied by 23.8 is 1,130.5"

## The ReAct Loop Internals

Each iteration of the agent loop follows this pattern:

1. **Observe** — the current state (user question plus any previous tool results) is formatted into the prompt
2. **Reason** — the LLM generates a response that may include tool calls
3. **Act** — if tool calls are present, the executor runs each tool and appends results to the scratchpad
4. **Repeat** — the loop continues until the LLM responds without tool calls (the final answer)
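The four steps above can be sketched as a plain Python loop. Everything here is a stand-in (the stub model, tool registry, and message format are illustrative), but the control flow mirrors what `AgentExecutor` does, including an iteration cap:

```python
def run_agent(model, tools, user_input, max_iterations=10):
    """Minimal agent loop: call the model, execute any requested tool,
    feed the result back, and stop when the model answers directly."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_iterations):
        reply = model(messages)                # Reason
        if "tool_call" not in reply:           # No tool call => final answer
            return reply["content"]
        call = reply["tool_call"]              # Act
        result = tools[call["name"]](**call["args"])
        messages.append(reply)                 # Observe: record call + result
        messages.append({"role": "tool", "content": str(result)})
    return "Stopped: iteration limit reached"

# Stub model: requests multiply once, then answers from the observation
def stub_model(messages):
    if messages[-1]["role"] == "tool":
        return {"content": f"The answer is {messages[-1]['content']}"}
    return {"tool_call": {"name": "multiply", "args": {"a": 47.5, "b": 23.8}}}

tools = {"multiply": lambda a, b: a * b}
print(run_agent(stub_model, tools, "What is 47.5 * 23.8?"))
# "The answer is 1130.5"
```

The `max_iterations` cap is the same safeguard `AgentExecutor` exposes: without it, a model that keeps requesting tools would loop forever.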

The `AgentExecutor` manages this loop and provides safeguards:

```python
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10,         # Prevent infinite loops
    handle_parsing_errors=True, # Recover from malformed output
    return_intermediate_steps=True, # Include tool call history
)

result = executor.invoke({"input": "Search for LangChain news"})

# Access intermediate steps
for step in result["intermediate_steps"]:
    action, observation = step
    print(f"Tool: {action.tool}, Input: {action.tool_input}")
    print(f"Result: {observation}")
```

## Multi-Tool Agent Example

Here is a more complete agent that combines calculation and search capabilities.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import create_tool_calling_agent, AgentExecutor

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression. Use Python syntax."""
    try:
        # NOTE: eval on model-generated input is unsafe; restrict the
        # namespace here and prefer a proper expression parser in production
        result = eval(expression, {"__builtins__": {}}, {})
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def get_current_date() -> str:
    """Get today's date."""
    from datetime import date
    return date.today().isoformat()

tools = [calculate, get_current_date]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools. "
               "Always use tools when a calculation is needed."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

agent = create_tool_calling_agent(
    ChatOpenAI(model="gpt-4o-mini", temperature=0),
    tools,
    prompt,
)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

response = executor.invoke({
    "input": "What day is it and what is 2 raised to the 20th power?"
})
print(response["output"])
```

The agent will call both tools — `get_current_date` and `calculate("2**20")` — then combine the results into a coherent answer.

## FAQ

### What is the difference between create_tool_calling_agent and the older create_react_agent?

`create_tool_calling_agent` uses the native tool-calling API supported by modern LLMs, which returns structured tool calls in the response. `create_react_agent` relies on text-based parsing of the ReAct format. The tool-calling approach is more reliable and is the recommended default.
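The difference is easy to see in miniature. A text-parsing agent must regex-match the model's output, which breaks the moment the formatting drifts; a tool-calling agent reads a structured field. Both outputs below are mocked for illustration:

```python
import json
import re

# Text-based ReAct output: must be parsed, fragile to formatting drift
react_text = (
    "Thought: I need to multiply.\n"
    "Action: multiply\n"
    'Action Input: {"a": 47.5, "b": 23.8}'
)
match = re.search(r"Action: (\w+)\nAction Input: (.*)", react_text)
parsed = {"name": match.group(1), "args": json.loads(match.group(2))}

# Native tool-calling output: already structured, no parsing needed
structured = {"tool_calls": [{"name": "multiply", "args": {"a": 47.5, "b": 23.8}}]}

print(parsed == structured["tool_calls"][0])  # True
```

Both paths recover the same call here, but only the regex path fails if the model writes "Action : multiply" or wraps the JSON in backticks, which is exactly the fragility the native API removes.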

### How do I prevent an agent from running forever?

Set `max_iterations` on the `AgentExecutor` (the default is 15). When the limit is hit, the executor stops the loop and returns a message saying the agent stopped early instead of a final answer; the exact stopping behavior is controlled by `early_stopping_method`. You can also set `max_execution_time` (in seconds) as a wall-clock timeout.

### Can an agent call the same tool multiple times?

Yes. The agent can call any tool any number of times across iterations. It can also call multiple tools in a single step if the model supports parallel tool calling.
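With parallel tool calling, a single model response carries a list of tool calls, and the executor runs each one and records each result before looping. A stand-alone sketch (message shape and `execute_tool_calls` are illustrative):

```python
def execute_tool_calls(reply, tools):
    """Run every tool call from one model step and collect the results."""
    return [
        {"tool": call["name"], "result": tools[call["name"]](**call["args"])}
        for call in reply.get("tool_calls", [])
    ]

tools = {
    "multiply": lambda a, b: a * b,
    "get_current_date": lambda: "2026-03-17",
}
# One model step requesting two tools at once
reply = {"tool_calls": [
    {"name": "get_current_date", "args": {}},
    {"name": "multiply", "args": {"a": 2, "b": 3}},
]}
print(execute_tool_calls(reply, tools))
# [{'tool': 'get_current_date', 'result': '2026-03-17'},
#  {'tool': 'multiply', 'result': 6}]
```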

---

#LangChain #AIAgents #ReAct #ToolUse #Python #AgenticAI #LearnAI #AIEngineering

---

Source: https://callsphere.ai/blog/building-langchain-agents-tools-agentexecutor-react-loop
