Learn Agentic AI

LangChain Fundamentals: Chains, Prompts, and Language Models Explained

Master the core building blocks of LangChain including chains, prompt templates, language model wrappers, and the LangChain Expression Language for composing AI applications.

What Is LangChain and Why Does It Matter

LangChain is an open-source framework for building applications powered by large language models. Rather than writing raw API calls and managing prompt formatting, response parsing, and chaining logic yourself, LangChain provides composable abstractions that let you assemble complex LLM workflows from reusable components.

The framework has evolved significantly since its inception. Modern LangChain centers on three ideas: prompt templates for parameterized inputs, language model wrappers that normalize different providers behind a common interface, and chains that compose these pieces into pipelines. Understanding these fundamentals is essential before moving on to agents, RAG, or multi-step workflows.

Prompt Templates

A prompt template is a string with placeholders that get filled in at runtime. Instead of concatenating strings manually, you define a template once and invoke it with different variables.

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that speaks {language}."),
    ("human", "{question}"),
])

# Invoke the template with variables
formatted = prompt.invoke({
    "language": "Spanish",
    "question": "What is machine learning?"
})
print(formatted.messages)

LangChain provides several template types. ChatPromptTemplate works with chat models that expect message lists. PromptTemplate handles plain string completion models. FewShotPromptTemplate lets you inject dynamic examples. All templates are Runnable objects, which means they can be composed using the pipe operator.

Language Model Wrappers

LangChain wraps model providers behind two interfaces: BaseChatModel for chat models and BaseLLM for completion models. In practice, nearly all modern usage goes through chat models.

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# OpenAI
gpt = ChatOpenAI(model="gpt-4o", temperature=0)

# Anthropic
claude = ChatAnthropic(model="claude-sonnet-4-20250514", temperature=0)

# Both share the same interface
response = gpt.invoke("Explain gradient descent in one sentence.")
print(response.content)

The wrapper handles authentication, retry logic, token counting, and response normalization. You can swap providers without changing downstream code because the interface is consistent.

Chains and the Pipe Operator

A chain connects a prompt template to a model and optionally to an output parser. With LangChain Expression Language (LCEL), you compose chains using the | pipe operator.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "Explain {concept} in simple terms for a beginner."
)
model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Compose the chain
chain = prompt | model | parser

# Run it
result = chain.invoke({"concept": "neural networks"})
print(result)  # Plain string output

The pipe operator connects components left to right. The output of prompt feeds into model, and the output of model feeds into parser. Each component is a Runnable — an object that implements invoke, batch, stream, and their async counterparts.

Runnables: The Universal Interface

Every component in LCEL implements the Runnable interface. This means any component supports:

  • invoke(input) — process a single input synchronously
  • ainvoke(input) — async version
  • batch(inputs) — process multiple inputs with concurrency
  • stream(input) — yield output chunks as they arrive
# Streaming example
for chunk in chain.stream({"concept": "transformers"}):
    print(chunk, end="", flush=True)

This uniformity means that whether your component is a prompt, a model, a retriever, or a custom function, it plugs into the same composition framework.

Putting It All Together

Here is a practical example that builds a chain accepting a topic and difficulty level, then returns a structured explanation.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a computer science tutor. Adjust your "
               "explanation to the {level} level."),
    ("human", "Explain {topic}."),
])

chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# Batch processing
results = chain.batch([
    {"topic": "recursion", "level": "beginner"},
    {"topic": "recursion", "level": "advanced"},
])

for r in results:
    print(r[:100], "...")
    print("---")

The batch method runs both inputs concurrently, raising throughput while staying within your provider's API rate limits.

FAQ

What is the difference between LangChain and calling the OpenAI API directly?

LangChain adds composability, provider abstraction, and a unified interface on top of raw API calls. You can swap models, chain components, add memory, and integrate tools without rewriting your application logic. For simple single-call use cases, the raw API is fine. For multi-step workflows, LangChain reduces boilerplate significantly.

Do I need to use LCEL or can I use the legacy chain classes?

Modern LangChain strongly recommends LCEL (the pipe operator approach). Legacy classes like LLMChain and SequentialChain still work but are no longer the primary API. LCEL provides streaming, batching, and async support automatically for every chain you build.

Does LangChain only work with OpenAI models?

No. LangChain supports dozens of providers through integration packages including Anthropic, Google, Mistral, Ollama for local models, and many more. You install the relevant package (e.g., langchain-anthropic) and swap the model wrapper.


#LangChain #LLM #PromptEngineering #Python #AIFramework #AgenticAI #LearnAI #AIEngineering


Written by

CallSphere Team

Expert insights on AI voice agents and customer communication automation.

