
LangGraph State Management: TypedDict, Reducers, and State Channels

Master LangGraph state management with TypedDict schemas, annotation reducers for message lists, custom state channels, and strategies for complex multi-step agent workflows.

State Is the Foundation of LangGraph

Every node in a LangGraph workflow reads from and writes to a shared state object. Understanding how state is defined, updated, and merged is the single most important concept for building reliable agent graphs. Get state management wrong and your agents will overwrite data, lose context, or produce unpredictable results.

Defining State with TypedDict

State schemas are defined as Python TypedDict classes:

from typing import TypedDict

class ResearchState(TypedDict):
    query: str
    sources: list[str]
    summary: str
    iteration_count: int

Each field represents a channel of data flowing through the graph. When a node returns a dictionary, LangGraph merges those values into the current state. By default, returned values overwrite existing values for each key.

The Problem with Default Overwrite

Consider a node that adds a source URL:

def search_node(state: ResearchState) -> dict:
    new_source = "https://example.com/article"
    return {"sources": [new_source]}

Without a reducer, this overwrites the entire sources list on every call. If you ran two search nodes sequentially, the second would erase results from the first. This is where reducers become essential.
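The failure mode is easy to reproduce without LangGraph at all. The sketch below simulates the default merge with a plain `dict.update`; the node functions and URLs are illustrative, not part of any real graph:

```python
# Illustrative only: simulate LangGraph's default overwrite merge with a
# plain dict.update. Two sequential "search" nodes each return a one-item
# sources list; the second update erases the first node's result.

def search_node_a(state: dict) -> dict:
    return {"sources": ["https://example.com/a"]}

def search_node_b(state: dict) -> dict:
    return {"sources": ["https://example.com/b"]}

state = {"query": "agentic AI", "sources": []}
state.update(search_node_a(state))  # sources == ["https://example.com/a"]
state.update(search_node_b(state))  # overwrite: node A's result is gone

print(state["sources"])  # ['https://example.com/b']
```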

Annotation Reducers

Reducers define how state updates merge with existing values. You declare them using Annotated types:

from typing import Annotated, TypedDict
from operator import add

class ResearchState(TypedDict):
    query: str
    sources: Annotated[list[str], add]
    summary: str
    iteration_count: int

Now sources uses the add operator as its reducer. When a node returns {"sources": ["new_url"]}, LangGraph calls existing_sources + ["new_url"] instead of replacing the list.
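To see why this works, it helps to know that the reducer lives in the `Annotated` metadata of the schema itself. The following is a simplified model of reducer application, not LangGraph's actual implementation: `apply_update` reads each field's annotation and, if a reducer is attached, combines old and new values instead of overwriting:

```python
from operator import add
from typing import Annotated, TypedDict, get_type_hints

class ResearchState(TypedDict):
    query: str
    sources: Annotated[list[str], add]
    summary: str
    iteration_count: int

def apply_update(schema, state: dict, update: dict) -> dict:
    # Annotated metadata on each field carries the reducer, if one was declared.
    hints = get_type_hints(schema, include_extras=True)
    merged = dict(state)
    for key, new_value in update.items():
        metadata = getattr(hints[key], "__metadata__", ())
        if metadata:  # reducer declared: combine existing and new values
            merged[key] = metadata[0](state[key], new_value)
        else:         # no reducer: default overwrite
            merged[key] = new_value
    return merged

state = {"query": "agentic AI", "sources": ["https://example.com/a"],
         "summary": "", "iteration_count": 0}
state = apply_update(ResearchState, state, {"sources": ["https://example.com/b"]})
print(state["sources"])  # both URLs survive the merge
```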

The add_messages Reducer

For chat-based agents, LangGraph provides a specialized add_messages reducer that handles message deduplication by ID:


from typing import Annotated, TypedDict
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage, AIMessage

class ChatState(TypedDict):
    messages: Annotated[list, add_messages]
    context: str

The add_messages reducer appends new messages to the list. If a message with the same ID already exists, it updates that message in place rather than duplicating it. This is critical for tool-calling loops where the LLM might regenerate responses.
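The update-in-place behavior can be sketched with a simplified stand-in that uses plain dicts carrying an `"id"` key instead of LangChain message objects. This is not the real reducer (which also assigns IDs, coerces message types, and handles removals), just an illustration of its merge semantics:

```python
# Simplified stand-in for add_messages merge semantics: append messages with
# new IDs, replace messages whose ID already exists in the list.

def add_messages_sketch(existing: list[dict], new: list[dict]) -> list[dict]:
    merged = list(existing)
    index_by_id = {m["id"]: i for i, m in enumerate(merged)}
    for msg in new:
        if msg["id"] in index_by_id:
            merged[index_by_id[msg["id"]]] = msg  # same ID: update in place
        else:
            merged.append(msg)                    # new ID: append
    return merged

history = [{"id": "1", "content": "What is LangGraph?"}]
history = add_messages_sketch(history, [{"id": "2", "content": "A graph framework."}])
history = add_messages_sketch(history, [{"id": "2", "content": "A graph framework for agents."}])
print(len(history))  # 2 -- the regenerated message replaced the original
```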

Custom Reducers

You can write any function as a reducer. It takes the existing value and the new value, then returns the merged result:

def max_reducer(existing: int, new: int) -> int:
    return max(existing, new)

def unique_list_reducer(existing: list, new: list) -> list:
    seen = set(existing)
    result = list(existing)
    for item in new:
        if item not in seen:
            result.append(item)
            seen.add(item)
    return result

class AnalysisState(TypedDict):
    messages: Annotated[list, add_messages]
    max_score: Annotated[int, max_reducer]
    unique_tags: Annotated[list, unique_list_reducer]

Custom reducers give you precise control over how concurrent or sequential node outputs combine.
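Because reducers are ordinary functions of `(existing, new)`, you can unit-test them in isolation before wiring them into a schema. Repeating the definitions above so the snippet stands alone:

```python
# Reducers are plain functions, so they can be exercised directly in tests.

def max_reducer(existing: int, new: int) -> int:
    return max(existing, new)

def unique_list_reducer(existing: list, new: list) -> list:
    seen = set(existing)
    result = list(existing)
    for item in new:
        if item not in seen:
            result.append(item)
            seen.add(item)
    return result

print(max_reducer(7, 3))                            # 7
print(unique_list_reducer(["a", "b"], ["b", "c"]))  # ['a', 'b', 'c']
```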

State Channels and Defaults

You can provide default values either by defining the schema as a dataclass with field defaults or by passing an initial state on invocation. The recommended pattern is to always pass a complete initial state:

initial_state = {
    "query": "agentic AI frameworks",
    "sources": [],
    "summary": "",
    "iteration_count": 0,
}

result = graph.invoke(initial_state)

This makes the starting condition explicit and avoids KeyError exceptions when nodes access state fields that were never initialized.
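One way to enforce this is a small factory function. `blank_research_state` below is our own name, not a LangGraph API; the commented `graph.invoke` call assumes a compiled graph like the one discussed above:

```python
# Hypothetical helper: build a complete initial state so every channel is
# initialized before the first node runs.

def blank_research_state(query: str) -> dict:
    return {
        "query": query,
        "sources": [],
        "summary": "",
        "iteration_count": 0,
    }

initial_state = blank_research_state("agentic AI frameworks")
# result = graph.invoke(initial_state)  # assumes a compiled graph as above
print(sorted(initial_state))  # all four channels present, no KeyError later
```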

Nested and Complex State

State fields can hold any serializable Python type including dictionaries, Pydantic models, and dataclasses:

from operator import add
from typing import Annotated, TypedDict
from langgraph.graph.message import add_messages
from pydantic import BaseModel

class DocumentRef(BaseModel):
    url: str
    relevance: float
    snippet: str

class DeepResearchState(TypedDict):
    messages: Annotated[list, add_messages]
    documents: Annotated[list[DocumentRef], add]
    metadata: dict

Using Pydantic models inside state gives you validation and type safety for complex nested data structures.
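The same structuring pattern works with stdlib dataclasses when you want typed nested records without the Pydantic dependency (you trade away runtime validation). This sketch drops the `messages` channel so it stays dependency-free; the field names mirror the `DocumentRef` model above:

```python
from dataclasses import dataclass, asdict
from operator import add
from typing import Annotated, TypedDict

@dataclass
class DocumentRef:
    url: str
    relevance: float
    snippet: str = ""

class DeepResearchState(TypedDict):
    documents: Annotated[list[DocumentRef], add]
    metadata: dict

doc = DocumentRef(url="https://example.com/paper", relevance=0.92)
print(asdict(doc)["relevance"])  # asdict is handy when serializing state
```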

FAQ

What happens if two nodes write to the same state key without a reducer?

The last write wins. If node A sets summary = "X" and node B sets summary = "Y", and B runs after A, the final value is "Y". Use a reducer if you need to combine values rather than overwrite.

Can I remove items from a list state channel?

Yes. Write a custom reducer that supports removal signals. For example, you could return a special wrapper object that tells the reducer to filter out certain items, or you can replace the entire list by not using a reducer on that field.
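One possible removal convention (our own, not a LangGraph API) is a frozen wrapper class that marks items to delete, with a reducer that filters them out before appending the rest:

```python
# Removal-signal pattern: wrap values to delete in a Remove marker; the
# reducer strips marked values from the existing list and appends the rest.
from dataclasses import dataclass

@dataclass(frozen=True)
class Remove:
    value: str

def removable_list_reducer(existing: list, new: list) -> list:
    to_remove = {item.value for item in new if isinstance(item, Remove)}
    additions = [item for item in new if not isinstance(item, Remove)]
    return [x for x in existing if x not in to_remove] + additions

tags = ["draft", "urgent"]
tags = removable_list_reducer(tags, [Remove("draft"), "reviewed"])
print(tags)  # ['urgent', 'reviewed']
```

This mirrors how LangGraph's own message channel handles deletions via special message values, but the wrapper and reducer here are a from-scratch sketch.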

Is there a size limit on LangGraph state?

There is no hard limit imposed by LangGraph itself, but state is serialized for checkpointing. Extremely large state objects — such as those containing full document texts — will slow down serialization and increase memory usage. Keep state lean and store large data externally with references.


#LangGraph #StateManagement #TypedDict #Reducers #Python #AgenticAI #LearnAI #AIEngineering

Written by

CallSphere Team
