---
title: "LangChain Fundamentals: Chains, Prompts, and Language Models Explained"
description: "Master the core building blocks of LangChain including chains, prompt templates, language model wrappers, and the LangChain Expression Language for composing AI applications."
canonical: https://callsphere.ai/blog/langchain-fundamentals-chains-prompts-language-models-explained
category: "Learn Agentic AI"
tags: ["LangChain", "LLM", "Prompt Engineering", "Python", "AI Framework"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T11:04:40.602Z
---

# LangChain Fundamentals: Chains, Prompts, and Language Models Explained

> Master the core building blocks of LangChain including chains, prompt templates, language model wrappers, and the LangChain Expression Language for composing AI applications.

## What Is LangChain and Why Does It Matter

LangChain is an open-source framework for building applications powered by large language models. Rather than writing raw API calls and managing prompt formatting, response parsing, and chaining logic yourself, LangChain provides composable abstractions that let you assemble complex LLM workflows from reusable components.

The framework has evolved significantly since its inception. Modern LangChain centers on three ideas: **prompt templates** for parameterized inputs, **language model wrappers** that normalize different providers behind a common interface, and **chains** that compose these pieces into pipelines. Understanding these fundamentals is essential before moving on to agents, RAG, or multi-step workflows.

## Prompt Templates

A prompt template is a string with placeholders that get filled in at runtime. Instead of concatenating strings manually, you define a template once and invoke it with different variables. The diagram below sketches the broader lifecycle a production prompt typically moves through, from initial spec to an evaluated, version-pinned release.

```mermaid
flowchart TD
    SPEC(["Task spec"])
    SYSTEM["System prompt
role plus rules"]
    SHOTS["Few shot examples
3 to 5"]
    VARS["Variable injection
Jinja or f-string"]
    COT["Chain of thought
or scratchpad"]
    CONSTR["Output constraint
JSON schema"]
    LLM["LLM call"]
    EVAL["Offline eval
LLM as judge plus regex"]
    GATE{"Score over
threshold?"}
    COMMIT(["Promote to prod
version pinned"])
    REVISE(["Revise prompt"])
    SPEC --> SYSTEM --> SHOTS --> VARS --> COT --> CONSTR --> LLM --> EVAL --> GATE
    GATE -->|Yes| COMMIT
    GATE -->|No| REVISE --> SYSTEM
    style LLM fill:#4f46e5,stroke:#4338ca,color:#fff
    style EVAL fill:#f59e0b,stroke:#d97706,color:#1f2937
    style COMMIT fill:#059669,stroke:#047857,color:#fff
```

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that speaks {language}."),
    ("human", "{question}"),
])

# Invoke the template with variables
formatted = prompt.invoke({
    "language": "Spanish",
    "question": "What is machine learning?"
})
print(formatted.messages)
```

LangChain provides several template types. `ChatPromptTemplate` works with chat models that expect message lists. `PromptTemplate` handles plain string completion models. `FewShotPromptTemplate` lets you inject dynamic examples. All templates are `Runnable` objects, which means they can be composed using the pipe operator.
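
To make the few-shot variant concrete, here is a minimal sketch (the example questions and answers are illustrative, not from the article):

```python
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# How each individual example is rendered into the prompt.
example_prompt = PromptTemplate.from_template("Q: {question}\nA: {answer}")

examples = [
    {"question": "What is 2 + 2?", "answer": "4"},
    {"question": "What is 3 * 3?", "answer": "9"},
]

few_shot = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Answer the question in the same style as the examples.",
    suffix="Q: {input}\nA:",
    input_variables=["input"],
)

# The examples are injected between the prefix and suffix at invoke time.
print(few_shot.invoke({"input": "What is 5 - 1?"}).to_string())
```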

## Language Model Wrappers

LangChain wraps model providers behind two interfaces: `BaseChatModel` for chat models and `BaseLLM` for completion models. In practice, nearly all modern usage goes through chat models.

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# OpenAI
gpt = ChatOpenAI(model="gpt-4o", temperature=0)

# Anthropic
claude = ChatAnthropic(model="claude-sonnet-4-20250514", temperature=0)

# Both share the same interface
response = gpt.invoke("Explain gradient descent in one sentence.")
print(response.content)
```

The wrapper handles authentication, retry logic, token counting, and response normalization. You can swap providers without changing downstream code because the interface is consistent.
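
Because both wrappers subclass the same base, downstream code can stay model-agnostic. A minimal sketch (the `summarize` helper is our own illustration, not LangChain API):

```python
from langchain_core.language_models import BaseChatModel

def summarize(model: BaseChatModel, text: str) -> str:
    # Any chat model works here: the invoke/response contract is identical.
    response = model.invoke(f"Summarize in one sentence: {text}")
    return response.content

# summarize(gpt, article_text) and summarize(claude, article_text)
# return the same shape of result from different providers.
```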

## Chains and the Pipe Operator

A chain connects a prompt template to a model and optionally to an output parser. With LangChain Expression Language (LCEL), you compose chains using the `|` pipe operator.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "Explain {concept} in simple terms for a beginner."
)
model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Compose the chain
chain = prompt | model | parser

# Run it
result = chain.invoke({"concept": "neural networks"})
print(result)  # Plain string output
```

The pipe operator connects components left to right. The output of `prompt` feeds into `model`, and the output of `model` feeds into `parser`. Each component is a `Runnable` — an object that implements `invoke`, `batch`, `stream`, and their async counterparts.

## Runnables: The Universal Interface

Every component in LCEL implements the `Runnable` interface. This means any component supports:

- **`invoke(input)`** — process a single input synchronously
- **`ainvoke(input)`** — async version
- **`batch(inputs)`** — process multiple inputs with concurrency
- **`stream(input)`** — yield output chunks as they arrive

```python
# Streaming example
for chunk in chain.stream({"concept": "transformers"}):
    print(chunk, end="", flush=True)
```
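
The async variants mirror their sync counterparts. A quick sketch reusing the chain from above:

```python
import asyncio

async def main():
    # ainvoke runs the same pipeline without blocking the event loop.
    result = await chain.ainvoke({"concept": "attention"})
    print(result)

asyncio.run(main())
```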

This uniformity means that whether your component is a prompt, a model, a retriever, or a custom function, it plugs into the same composition framework.
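
For instance, a plain Python function can be lifted into a pipeline with `RunnableLambda` (the uppercasing step is just an illustration):

```python
from langchain_core.runnables import RunnableLambda

# Wrap an ordinary function so it composes like any other Runnable.
shout = RunnableLambda(lambda text: text.upper())

loud_chain = chain | shout
print(loud_chain.invoke({"concept": "embeddings"}))
```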

## Putting It All Together

Here is a practical example that builds a chain accepting a topic and difficulty level, then returns a structured explanation.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a computer science tutor. Adjust your "
               "explanation to the {level} level."),
    ("human", "Explain {topic}."),
])

chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# Batch processing
results = chain.batch([
    {"topic": "recursion", "level": "beginner"},
    {"topic": "recursion", "level": "advanced"},
])

for r in results:
    print(r[:100], "...")
    print("---")
```

The `batch` method sends both requests concurrently rather than one after another, which shortens wall-clock time when the provider permits parallel calls.
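
If you are brushing against provider rate limits, `batch` accepts a config that caps parallelism. A minimal sketch:

```python
# max_concurrency is a standard RunnableConfig option.
results = chain.batch(
    [
        {"topic": "recursion", "level": "beginner"},
        {"topic": "recursion", "level": "advanced"},
    ],
    config={"max_concurrency": 2},
)
```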

## FAQ

### What is the difference between LangChain and calling the OpenAI API directly?

LangChain adds composability, provider abstraction, and a unified interface on top of raw API calls. You can swap models, chain components, add memory, and integrate tools without rewriting your application logic. For simple single-call use cases, the raw API is fine. For multi-step workflows, LangChain reduces boilerplate significantly.

### Do I need to use LCEL or can I use the legacy chain classes?

Modern LangChain strongly recommends LCEL (the pipe operator approach). Legacy classes like `LLMChain` and `SequentialChain` still work but are no longer the primary API. LCEL provides streaming, batching, and async support automatically for every chain you build.
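
To make the migration concrete, here is a rough before/after sketch (the legacy import path is shown as it historically existed):

```python
# Legacy style (deprecated but still functional):
# from langchain.chains import LLMChain
# chain = LLMChain(llm=model, prompt=prompt)

# LCEL equivalent, with streaming, batching, and async included for free:
chain = prompt | model | StrOutputParser()
```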

### Does LangChain only work with OpenAI models?

No. LangChain supports dozens of providers through integration packages including Anthropic, Google, Mistral, Ollama for local models, and many more. You install the relevant package (e.g., `langchain-anthropic`) and swap the model wrapper.

