---
title: "Human-in-the-Loop with LangGraph: Approval Gates and Manual Intervention Points"
description: "Implement human approval gates in LangGraph using interrupt_before, interrupt_after, and resume patterns to build agent workflows that pause for human review before executing sensitive actions."
canonical: https://callsphere.ai/blog/langgraph-human-in-the-loop-approval-gates-manual-intervention
category: "Learn Agentic AI"
tags: ["LangGraph", "Human-in-the-Loop", "Approval Gates", "Agent Safety", "Python"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-08T09:13:29.434Z
---

# Human-in-the-Loop with LangGraph: Approval Gates and Manual Intervention Points

> Implement human approval gates in LangGraph using interrupt_before, interrupt_after, and resume patterns to build agent workflows that pause for human review before executing sensitive actions.

## Why Agents Need Human Oversight

Fully autonomous agents are powerful but dangerous in production. An agent that can send emails, modify databases, or make API calls to external services should not do so without guardrails. Human-in-the-loop patterns let you build agents that pause at critical decision points, present their intended actions to a human reviewer, and only proceed after explicit approval.

LangGraph implements this through interrupts — points in the graph where execution pauses and waits for external input before continuing.

## Setting Up Interrupts

Interrupts require a checkpointer because the graph state must be persisted while waiting for human input:

```mermaid
flowchart TD
    USER(["User input"])
    AGENT["agent node
calls the LLM"]
    TOOL{"Tool call
needed?"}
    INT["interrupt_before
pauses execution"]
    HUMAN(["Human reviewer"])
    EXEC["tools node
ToolNode"]
    CHK[("Checkpointer
persists state")]
    OUT(["Final response"])
    USER --> AGENT
    AGENT --> TOOL
    TOOL -->|Yes| INT
    INT --> HUMAN
    HUMAN -->|Approve| EXEC
    EXEC --> AGENT
    TOOL -->|No| OUT
    AGENT -.-> CHK
    style AGENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style CHK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
    style HUMAN fill:#f59e0b,stroke:#d97706,color:#1f2937
```

```python
from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified recipient."""
    # Real implementation here
    return f"Email sent to {to}"

tools = [send_email]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
tool_node = ToolNode(tools)

class State(TypedDict):
    messages: Annotated[list, add_messages]

checkpointer = MemorySaver()
```

## Using interrupt_before

The `interrupt_before` parameter on `compile()` pauses execution before a specified node runs:

```python
def call_agent(state: State) -> dict:
    return {"messages": [llm.invoke(state["messages"])]}

def route(state: State) -> Literal["tools", "end"]:
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "end"

builder = StateGraph(State)
builder.add_node("agent", call_agent)
builder.add_node("tools", tool_node)
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", route, {
    "tools": "tools",
    "end": END,
})
builder.add_edge("tools", "agent")

graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_before=["tools"],
)
```

Now every time the agent wants to execute a tool, the graph pauses before the `tools` node runs. The caller can inspect the pending tool calls and decide whether to approve.

## The Approval Loop

Here is the complete pattern for running the graph with human approval:

```python
from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "approval-demo"}}

# Initial invocation — will pause before tools
result = graph.invoke(
    {"messages": [HumanMessage(content="Send an email to bob@example.com saying hello")]},
    config=config,
)

# Inspect what the agent wants to do
state = graph.get_state(config)
pending_calls = state.values["messages"][-1].tool_calls
print("Agent wants to execute:")
for call in pending_calls:
    print(f"  {call['name']}({call['args']})")

# Human approves — resume execution with None input
approved = input("Approve? (y/n): ")
if approved.lower() == "y":
    result = graph.invoke(None, config=config)
    print("Execution completed:", result["messages"][-1].content)
else:
    print("Execution rejected by human reviewer.")
    # The thread stays paused at the checkpoint; you can update_state or discard it.
```

Passing `None` to `invoke()` tells LangGraph to resume from the checkpoint without adding new input. Execution continues from exactly where it paused.

## Using interrupt_after

Sometimes you want to pause after a node runs rather than before. This is useful for review-then-continue patterns:

```python
graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_after=["agent"],
)
```

With `interrupt_after`, the agent node completes and its output is saved to state, then execution pauses. The human can review the agent's reasoning or proposed tool calls, then resume or modify the state before continuing.

## Modifying State Before Resuming

You can edit the graph state before resuming, which lets humans correct agent mistakes:

```python
# After the interrupt, modify the state
graph.update_state(
    config,
    {"messages": [HumanMessage(content="Actually, send it to alice@example.com instead")]},
)

# Resume with the modified state
result = graph.invoke(None, config=config)
```

This pattern is powerful for correction workflows where the human wants to adjust the agent's plan without starting over from scratch.

## Selective Interrupts

Not every tool call needs approval. You can implement selective interruption by checking tool names in a custom node:

```python
SENSITIVE_TOOLS = {"send_email", "delete_record", "make_payment"}

def check_approval(state: State) -> Literal["needs_approval", "safe"]:
    tool_calls = state["messages"][-1].tool_calls
    for call in tool_calls:
        if call["name"] in SENSITIVE_TOOLS:
            return "needs_approval"
    return "safe"
```

Route sensitive tool calls through an approval gate while letting safe tools execute automatically.
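Because the predicate is plain Python, you can sanity-check the classification logic without running a graph at all. A minimal sketch, where `needs_approval` is an illustrative standalone helper (not the graph node above) and the dicts mimic the `{"name": ..., "args": ...}` shape LangChain produces for tool calls:

```python
SENSITIVE_TOOLS = {"send_email", "delete_record", "make_payment"}

def needs_approval(tool_calls: list[dict]) -> bool:
    """True if any pending call names a sensitive tool."""
    return any(call["name"] in SENSITIVE_TOOLS for call in tool_calls)

print(needs_approval([{"name": "web_search", "args": {"q": "weather"}}]))   # False
print(needs_approval([{"name": "make_payment", "args": {"amount": 100}}]))  # True
```

Keeping the sensitivity check in a pure function like this makes the approval policy easy to unit-test and to evolve independently of the graph wiring.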

## FAQ

### Can I set a timeout for human approval?

LangGraph itself does not have a built-in timeout mechanism for interrupts. You implement timeouts in your application layer — for example, a web server that cancels the workflow if no approval arrives within a time window. The checkpointed state persists indefinitely until resumed or discarded.

### What happens if I never resume an interrupted graph?

The state remains checkpointed and can be resumed at any time, even days later. The graph does not consume resources while paused. This makes interrupts suitable for asynchronous approval workflows where a human might review actions hours after the agent proposes them.

### Can I combine interrupt_before and interrupt_after?

Yes. You can pass different node lists to each parameter. For example, interrupt before tool execution for approval and interrupt after the final response for quality review. Both can be active on the same compiled graph.

---

#LangGraph #HumanintheLoop #ApprovalGates #AgentSafety #Python #AgenticAI #LearnAI #AIEngineering

---

Source: https://callsphere.ai/blog/langgraph-human-in-the-loop-approval-gates-manual-intervention
