
LangChain Expression Language (LCEL): Composing AI Pipelines Declaratively

Deep dive into LCEL's pipe operator, RunnablePassthrough, RunnableParallel, branching, and fallback patterns for building flexible, declarative AI pipelines in LangChain.

Why LCEL Exists

LangChain Expression Language (LCEL) is the declarative composition layer that replaced legacy chain classes like LLMChain and SequentialChain. Instead of instantiating chain objects with keyword arguments, you compose pipelines using the | pipe operator. Every LCEL chain automatically gets streaming, batching, async support, and integration with LangSmith tracing — for free.

The design philosophy is simple: every component is a Runnable, and runnables compose via pipes. If you understand this one concept, you understand LCEL.

The Pipe Operator

The | operator connects components left to right. The output of the left component becomes the input of the right component.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

chain = (
    ChatPromptTemplate.from_template("Tell me a fact about {topic}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

# All of these work automatically
result = chain.invoke({"topic": "octopuses"})         # sync
result = await chain.ainvoke({"topic": "octopuses"})  # async (inside an async function)
results = chain.batch([{"topic": "octopuses"}, {"topic": "stars"}])

Under the hood, the pipe operator creates a RunnableSequence. Each step's output is validated and passed forward.
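The mechanics are easy to picture in plain Python. The sketch below is illustrative only, not LangChain's actual implementation: a minimal `Runnable` stand-in whose `__or__` builds a composed step, the same idea behind RunnableSequence.

```python
class Runnable:
    """Minimal stand-in for the Runnable protocol (illustrative sketch)."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # The pipe returns a new runnable that feeds the left step's
        # output into the right step -- the essence of RunnableSequence.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

prompt = Runnable(lambda d: f"Tell me a fact about {d['topic']}")
shout = Runnable(str.upper)

chain = prompt | shout
print(chain.invoke({"topic": "octopuses"}))
# TELL ME A FACT ABOUT OCTOPUSES
```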

RunnablePassthrough: Forwarding Input

RunnablePassthrough passes input through unchanged. This is critical when you need the original input at a later stage in the pipeline.

from langchain_core.runnables import RunnablePassthrough

chain = (
    {
        "context": retriever,                    # fetches documents
        "question": RunnablePassthrough(),       # forwards the raw input
    }
    | prompt
    | model
    | StrOutputParser()
)

result = chain.invoke("What is LCEL?")

Here the input string flows to both the retriever (which uses it as a query) and directly into the prompt template as the question variable. Without RunnablePassthrough, the original input would be lost after the retriever step.

You can also use RunnablePassthrough.assign() to add new keys to a dict while keeping existing ones:

from langchain_core.runnables import RunnablePassthrough

chain = RunnablePassthrough.assign(
    word_count=lambda x: len(x["text"].split())
)

result = chain.invoke({"text": "Hello world from LCEL"})
# {"text": "Hello world from LCEL", "word_count": 4}
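The shape of that result is easy to reproduce by hand. The `assign_like` helper below is a hypothetical name, not a LangChain API; it just shows the merge semantics `assign()` gives you: existing keys are kept, computed keys are added.

```python
def assign_like(base: dict, **computed) -> dict:
    """Merge computed keys into the input dict while keeping existing
    keys -- the same result shape RunnablePassthrough.assign produces."""
    return {**base, **{k: fn(base) for k, fn in computed.items()}}

out = assign_like(
    {"text": "Hello world from LCEL"},
    word_count=lambda x: len(x["text"].split()),
)
print(out)
# {'text': 'Hello world from LCEL', 'word_count': 4}
```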

RunnableParallel: Branching Execution

RunnableParallel runs multiple runnables concurrently and collects their outputs into a dictionary.


from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

summary_chain = (
    ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
    | model | parser
)

keyword_chain = (
    ChatPromptTemplate.from_template("Extract 5 keywords from: {text}")
    | model | parser
)

parallel_chain = RunnableParallel(
    summary=summary_chain,
    keywords=keyword_chain,
)

result = parallel_chain.invoke({
    "text": "LangChain is a framework for building LLM applications..."
})
print(result["summary"])
print(result["keywords"])

Both chains run concurrently. This is particularly useful for tasks like RAG where you want to fetch context and format the question simultaneously.
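Conceptually, the concurrency looks like fanning each named step out to a thread pool and gathering the results under the same names. The `run_parallel` helper below is an illustrative sketch, not how RunnableParallel is actually implemented:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(steps: dict, value):
    """Run each named step concurrently on the same input and collect
    the results into a dict keyed by step name (illustrative sketch)."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in steps.items()}
        return {name: f.result() for name, f in futures.items()}

result = run_parallel(
    {"summary": lambda t: t[:10], "length": len},
    "LangChain is a framework for building LLM applications",
)
print(result["summary"])  # first 10 characters
print(result["length"])
```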

Conditional Branching with RunnableBranch

RunnableBranch lets you route input to different chains based on conditions.

from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: x["language"] == "python", python_chain),
    (lambda x: x["language"] == "javascript", js_chain),
    default_chain,  # fallback
)

result = branch.invoke({"language": "python", "question": "How do I sort a list?"})

Each tuple contains a condition function and the runnable to execute if the condition returns True. The first matching condition wins. If no condition matches, the default runnable is used.
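That first-match routing is simple enough to sketch in plain Python. The `branch` function below is a hypothetical helper mirroring RunnableBranch's semantics, not LangChain code:

```python
def branch(pairs, default):
    """Return a router that dispatches to the first handler whose
    condition returns True, falling back to default (illustrative)."""
    def route(x):
        for cond, handler in pairs:
            if cond(x):
                return handler(x)
        return default(x)
    return route

route = branch(
    [
        (lambda x: x["language"] == "python", lambda x: "py: " + x["question"]),
        (lambda x: x["language"] == "javascript", lambda x: "js: " + x["question"]),
    ],
    lambda x: "generic: " + x["question"],  # default
)

print(route({"language": "python", "question": "How do I sort a list?"}))
# py: How do I sort a list?
```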

Fallbacks for Resilience

LCEL chains support fallbacks — if the primary runnable fails, a backup takes over.

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

primary = ChatOpenAI(model="gpt-4o")
backup = ChatAnthropic(model="claude-sonnet-4-20250514")

model_with_fallback = primary.with_fallbacks([backup])

# If OpenAI fails, Anthropic handles the request
chain = prompt | model_with_fallback | StrOutputParser()

This pattern is invaluable in production where provider outages happen. You can chain multiple fallbacks and they are tried in order.
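The try-in-order behavior can be sketched without LangChain at all. The `with_fallbacks` function below is an illustrative stand-in for what `.with_fallbacks()` does, not the library's implementation:

```python
def with_fallbacks(primary, backups):
    """Call primary; on any exception, try each backup in order --
    a sketch of the behavior .with_fallbacks() adds to a runnable."""
    def call(x):
        for fn in (primary, *backups):
            try:
                return fn(x)
            except Exception:
                continue
        raise RuntimeError("all runnables failed")
    return call

def flaky(x):
    raise TimeoutError("provider down")  # simulated outage

def backup(x):
    return f"backup handled: {x}"

safe = with_fallbacks(flaky, [backup])
print(safe("hello"))
# backup handled: hello
```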

RunnableLambda: Custom Functions

Wrap any Python function as a runnable using RunnableLambda.

from langchain_core.runnables import RunnableLambda

def clean_text(input_dict: dict) -> dict:
    input_dict["text"] = input_dict["text"].strip().lower()
    return input_dict

chain = RunnableLambda(clean_text) | prompt | model | parser

For async functions, RunnableLambda automatically detects and uses the async version when ainvoke is called.

FAQ

Can I nest LCEL chains inside each other?

Yes. Since every LCEL chain is itself a Runnable, you can use one chain as a step inside another chain. This is how you build complex multi-stage pipelines — compose small, focused chains and then wire them together.

How does LCEL handle errors in the middle of a chain?

By default, exceptions propagate up and the entire chain fails. Use .with_fallbacks() on any step to provide alternatives, or wrap individual steps with try/except logic inside a RunnableLambda. You can also use .with_retry() to automatically retry transient failures.
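The retry behavior can be pictured as a loop with exponential backoff. The sketch below is similar in spirit to `.with_retry()` but is a hand-rolled illustration, not the library's API:

```python
import time

def with_retry(fn, attempts=3, base_delay=0.01):
    """Retry fn with exponential backoff; re-raise after the last attempt."""
    def call(x):
        for i in range(attempts):
            try:
                return fn(x)
            except Exception:
                if i == attempts - 1:
                    raise
                time.sleep(base_delay * 2 ** i)
    return call

calls = {"n": 0}
def transient(x):
    # Fails twice, then succeeds -- a simulated transient provider error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return f"ok after {calls['n']} tries"

print(with_retry(transient)("ping"))
# ok after 3 tries
```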

Is LCEL required to use LangChain?

Technically no — you can use individual components without LCEL composition. But LCEL is the primary API for building chains in modern LangChain. It provides streaming, batching, and tracing automatically, which you would have to implement manually otherwise.


#LangChain #LCEL #AIPipelines #Python #Composability #AgenticAI #LearnAI #AIEngineering

Written by

CallSphere Team
