---
title: "LangChain Expression Language (LCEL): Composing AI Pipelines Declaratively"
description: "Deep dive into LCEL's pipe operator, RunnablePassthrough, RunnableParallel, branching, and fallback patterns for building flexible, declarative AI pipelines in LangChain."
canonical: https://callsphere.ai/blog/langchain-expression-language-lcel-composing-ai-pipelines
category: "Learn Agentic AI"
tags: ["LangChain", "LCEL", "AI Pipelines", "Python", "Composability"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T22:07:30.444Z
---

# LangChain Expression Language (LCEL): Composing AI Pipelines Declaratively

> Deep dive into LCEL's pipe operator, RunnablePassthrough, RunnableParallel, branching, and fallback patterns for building flexible, declarative AI pipelines in LangChain.

## Why LCEL Exists

LangChain Expression Language (LCEL) is the declarative composition layer that replaced legacy chain classes like `LLMChain` and `SequentialChain`. Instead of instantiating chain objects with keyword arguments, you compose pipelines using the `|` pipe operator. Every LCEL chain automatically gets streaming, batching, async support, and integration with LangSmith tracing — for free.

The design philosophy is simple: every component is a `Runnable`, and runnables compose via pipes. If you understand this one concept, you understand LCEL.

## The Pipe Operator

The `|` operator connects components left to right. The output of the left component becomes the input of the right component.

```mermaid
flowchart LR
    INPUT(["Input dict
topic"])
    PROMPT["ChatPromptTemplate"]
    MODEL["ChatOpenAI"]
    PARSER["StrOutputParser"]
    OUT(["Output string"])
    INPUT --> PROMPT --> MODEL --> PARSER --> OUT
    style MODEL fill:#4f46e5,stroke:#4338ca,color:#fff
    style OUT fill:#059669,stroke:#047857,color:#fff
```

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

chain = (
    ChatPromptTemplate.from_template("Tell me a fact about {topic}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

# All of these work automatically
result = chain.invoke({"topic": "octopuses"})       # sync
result = await chain.ainvoke({"topic": "octopuses"}) # async
results = chain.batch([{"topic": "octopuses"}, {"topic": "stars"}])
```

Under the hood, the pipe operator creates a `RunnableSequence`. Each step's output is validated and passed forward.
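To build intuition for what the pipe operator does, here is a minimal sketch of how `__or__` can chain steps into a sequence. This is an illustrative toy model, not LangChain's actual implementation — the class names mirror LangChain's but the code is standalone:

```python
# Toy model of pipe composition -- illustrative, not LangChain's real classes.
class Runnable:
    def __or__(self, other):
        # `a | b` builds a sequence that runs a, then feeds its output to b
        return RunnableSequence(self, other)

    def invoke(self, value):
        raise NotImplementedError

class RunnableSequence(Runnable):
    def __init__(self, *steps):
        self.steps = steps

    def invoke(self, value):
        # Thread the value through each step, left to right
        for step in self.steps:
            value = step.invoke(value)
        return value

class RunnableLambda(Runnable):
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

chain = RunnableLambda(str.strip) | RunnableLambda(str.upper)
print(chain.invoke("  hello  "))  # HELLO
```

The real `RunnableSequence` layers input validation, streaming, batching, and tracing on top of this same left-to-right threading.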

## RunnablePassthrough: Forwarding Input

`RunnablePassthrough` passes input through unchanged. This is critical when you need the original input at a later stage in the pipeline.

```python
from langchain_core.runnables import RunnablePassthrough

chain = (
    {
        "context": retriever,                    # fetches documents
        "question": RunnablePassthrough(),       # forwards the raw input
    }
    | prompt
    | model
    | StrOutputParser()
)

result = chain.invoke("What is LCEL?")
```

Here the input string flows both to the retriever (which uses it as a search query) and, via `RunnablePassthrough`, into the prompt template as the `question` variable. The dict literal is shorthand for `RunnableParallel`, so both branches receive the same input. Without `RunnablePassthrough`, the original question would not survive past the retrieval step.

You can also use `RunnablePassthrough.assign()` to add new keys to a dict while keeping existing ones:

```python
from langchain_core.runnables import RunnablePassthrough

chain = RunnablePassthrough.assign(
    word_count=lambda x: len(x["text"].split())
)

result = chain.invoke({"text": "Hello world from LCEL"})
# {"text": "Hello world from LCEL", "word_count": 4}
```

## RunnableParallel: Branching Execution

`RunnableParallel` runs multiple runnables concurrently and collects their outputs into a dictionary.

```python
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

summary_chain = (
    ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
    | model | parser
)

keyword_chain = (
    ChatPromptTemplate.from_template("Extract 5 keywords from: {text}")
    | model | parser
)

parallel_chain = RunnableParallel(
    summary=summary_chain,
    keywords=keyword_chain,
)

result = parallel_chain.invoke({
    "text": "LangChain is a framework for building LLM applications..."
})
print(result["summary"])
print(result["keywords"])
```

Both chains run concurrently, so total latency is roughly that of the slower branch rather than the sum of both. This is particularly useful in RAG pipelines, where you want to fetch context and forward the question at the same time.
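The fan-out behavior can be modeled in a few lines with a thread pool — a toy sketch of the semantics, not LangChain's internals:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy fan-out: run each named branch concurrently, gather results into a dict.
def run_parallel(branches: dict, value):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in branches.items()}
        return {name: fut.result() for name, fut in futures.items()}

result = run_parallel(
    {"upper": str.upper, "length": len},
    "langchain",
)
print(result)  # {'upper': 'LANGCHAIN', 'length': 9}
```

Each branch gets the same input value, and the output dict keys match the branch names — exactly the shape `RunnableParallel` produces.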

## Conditional Branching with RunnableBranch

`RunnableBranch` lets you route input to different chains based on conditions.

```python
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: x["language"] == "python", python_chain),
    (lambda x: x["language"] == "javascript", js_chain),
    default_chain,  # fallback
)

result = branch.invoke({"language": "python", "question": "How do I sort a list?"})
```

Each tuple contains a condition function and the runnable to execute if the condition returns True. The first matching condition wins. If no condition matches, the default runnable is used.
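The first-match-wins routing can be sketched as a plain function — an illustrative toy model with made-up handler lambdas, not LangChain's implementation:

```python
# Toy model of first-match-wins routing -- illustrative only.
def branch(pairs, default):
    def route(value):
        for condition, runnable in pairs:
            if condition(value):
                return runnable(value)  # first True condition wins
        return default(value)           # nothing matched: fall back
    return route

route = branch(
    [
        (lambda x: x["language"] == "python", lambda x: "python handler"),
        (lambda x: x["language"] == "javascript", lambda x: "js handler"),
    ],
    lambda x: "default handler",
)
print(route({"language": "python"}))  # python handler
print(route({"language": "ruby"}))    # default handler
```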

## Fallbacks for Resilience

LCEL chains support fallbacks — if the primary runnable fails, a backup takes over.

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

primary = ChatOpenAI(model="gpt-4o")
backup = ChatAnthropic(model="claude-sonnet-4-20250514")

model_with_fallback = primary.with_fallbacks([backup])

# If OpenAI fails, Anthropic handles the request
chain = prompt | model_with_fallback | StrOutputParser()
```

This pattern is invaluable in production where provider outages happen. You can chain multiple fallbacks and they are tried in order.
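The try-in-order semantics look roughly like this — a minimal sketch of the pattern, not LangChain's actual `with_fallbacks` code:

```python
# Toy model of ordered fallbacks -- illustrative only.
def with_fallbacks(primary, fallbacks):
    def invoke(value):
        last_error = None
        for runnable in [primary, *fallbacks]:
            try:
                return runnable(value)  # first success short-circuits
            except Exception as err:
                last_error = err        # remember the failure, try the next
        raise last_error                # every option failed
    return invoke

def flaky(_):
    raise RuntimeError("provider outage")

resilient = with_fallbacks(flaky, [flaky, lambda x: f"backup handled: {x}"])
print(resilient("hello"))  # backup handled: hello
```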

## RunnableLambda: Custom Functions

Wrap any Python function as a runnable using `RunnableLambda`.

```python
from langchain_core.runnables import RunnableLambda

def clean_text(input_dict: dict) -> dict:
    input_dict["text"] = input_dict["text"].strip().lower()
    return input_dict

chain = RunnableLambda(clean_text) | prompt | model | parser
```

For async functions, `RunnableLambda` automatically detects and uses the async version when `ainvoke` is called.
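Conceptually, that dispatch looks something like the sketch below — a toy model of sync/async detection using `inspect.iscoroutinefunction`, not LangChain's actual `RunnableLambda` internals:

```python
import asyncio
import inspect

# Toy model of sync/async dispatch -- illustrative only.
class Lambda:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        if inspect.iscoroutinefunction(self.fn):
            raise TypeError("wrapped an async function; call ainvoke instead")
        return self.fn(value)

    async def ainvoke(self, value):
        if inspect.iscoroutinefunction(self.fn):
            return await self.fn(value)  # await coroutine functions
        return self.fn(value)            # plain functions still work here

async def shout(text: str) -> str:
    return text.upper()

print(asyncio.run(Lambda(shout).ainvoke("hi")))  # HI
print(Lambda(str.lower).invoke("HI"))            # hi
```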

## FAQ

### Can I nest LCEL chains inside each other?

Yes. Since every LCEL chain is itself a Runnable, you can use one chain as a step inside another chain. This is how you build complex multi-stage pipelines — compose small, focused chains and then wire them together.

### How does LCEL handle errors in the middle of a chain?

By default, exceptions propagate up and the entire chain fails. Use `.with_fallbacks()` on any step to provide alternatives, or wrap individual steps with try/except logic inside a `RunnableLambda`. You can also use `.with_retry()` to automatically retry transient failures.

### Is LCEL required to use LangChain?

Technically no — you can use individual components without LCEL composition. But LCEL is the primary API for building chains in modern LangChain. It provides streaming, batching, and tracing automatically, which you would have to implement manually otherwise.

---

#LangChain #LCEL #AIPipelines #Python #Composability #AgenticAI #LearnAI #AIEngineering

---

Source: https://callsphere.ai/blog/langchain-expression-language-lcel-composing-ai-pipelines
