---
title: "Anthropic Agent SDK Getting Started: Building Your First Claude-Powered Agent"
description: "Learn how to install the Anthropic Python SDK, define tools, create your first Claude-powered agent, and execute multi-step workflows with structured tool calling."
canonical: https://callsphere.ai/blog/anthropic-agent-sdk-getting-started-first-claude-agent
category: "Learn Agentic AI"
tags: ["Anthropic", "Claude", "Agent SDK", "Python", "Getting Started"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T01:02:42.246Z
---

# Anthropic Agent SDK Getting Started: Building Your First Claude-Powered Agent

> Learn how to install the Anthropic Python SDK, define tools, create your first Claude-powered agent, and execute multi-step workflows with structured tool calling.

## Why Anthropic Claude for Agents

Anthropic's Claude models are purpose-built for agentic workflows. With native tool use, extended thinking, and a 200K-token context window, Claude gives agents the reasoning depth needed to handle multi-step tasks without losing coherence. The Anthropic Python SDK provides a clean, typed interface for building agents that call tools, process results, and iterate until a task is complete.

Unlike wrapper frameworks that add abstraction layers, building directly on the Anthropic SDK means you control every aspect of the agent loop — tool definitions, retry logic, context management, and output parsing. This tutorial walks you through the entire process from installation to a working agent.

## Installation and Setup

The full loop we will build looks like this:

```mermaid
flowchart LR
    INPUT(["User intent"])
    PARSE["Parse and
classify"]
    PLAN["Plan and tool
selection"]
    AGENT["Agent loop
LLM and tools"]
    GUARD{"Guardrails
and policy"}
    EXEC["Execute and
verify result"]
    OBS[("Trace and metrics")]
    OUT(["Outcome and
next action"])
    INPUT --> PARSE --> PLAN --> AGENT --> GUARD
    GUARD -->|Pass| EXEC --> OUT
    GUARD -->|Fail| AGENT
    AGENT --> OBS
    style AGENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style OBS fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
```

Install the Anthropic Python SDK using pip:

```bash
pip install anthropic
```

Set your API key as an environment variable:

```bash
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
```

Verify the installation:

```python
import anthropic

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hello"}]
)
print(message.content[0].text)
```

If you see a greeting, your SDK and API key are configured correctly.

## Defining Tools for Your Agent

Claude agents gain capabilities through tools. Each tool is defined by a JSON Schema that tells Claude what the tool does, what parameters it accepts, and what it returns. Here is a simple calculator tool:

```python
tools = [
    {
        "name": "calculator",
        "description": "Performs arithmetic calculations. Use this for any math operation.",
        "input_schema": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "A mathematical expression like '2 + 3 * 4'"
                }
            },
            "required": ["expression"]
        }
    }
]
```

Tool descriptions matter. Claude uses the description to decide when to call each tool, so be specific about what the tool does and when it should be used.
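For contrast, here is a sketch of a richer schema. The tool name and fields are illustrative (not part of this tutorial's agent), but they show two useful JSON Schema features: `enum` constrains the values Claude may pass, and `required` can list multiple fields.

```python
# A hypothetical currency-lookup tool showing richer schema features.
exchange_rate_tool = {
    "name": "get_exchange_rate",
    "description": (
        "Looks up the current exchange rate between two currencies. "
        "Use this whenever the user asks to convert money."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "base": {
                "type": "string",
                "enum": ["USD", "EUR", "GBP", "JPY"],  # constrain allowed values
                "description": "Currency to convert from",
            },
            "quote": {
                "type": "string",
                "enum": ["USD", "EUR", "GBP", "JPY"],
                "description": "Currency to convert to",
            },
        },
        "required": ["base", "quote"],  # Claude must supply both fields
    },
}
```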

## Building the Agent Loop

An agentic workflow requires a loop: send a message to Claude, check if it wants to call tools, execute those tools, feed results back, and repeat until Claude produces a final text response.

```python
import anthropic
import json

client = anthropic.Anthropic()

def run_agent(user_message: str, tools: list, system: str = "") -> str:
    messages = [{"role": "user", "content": user_message}]

    while True:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=4096,
            system=system,
            tools=tools,
            messages=messages,
        )

        # If Claude is not requesting a tool (e.g. end_turn or max_tokens),
        # return whatever text it produced
        if response.stop_reason != "tool_use":
            text_blocks = [b.text for b in response.content if b.type == "text"]
            return "\n".join(text_blocks)

        # Echo the assistant turn, then execute each requested tool
        messages.append({"role": "assistant", "content": response.content})

        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                result = execute_tool(block.name, block.input)
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": json.dumps(result),
                })

        messages.append({"role": "user", "content": tool_results})
```

The key insight is that Claude signals tool use through `stop_reason == "tool_use"` and embeds `tool_use` blocks in its response content. Your agent loop processes these blocks, executes the corresponding functions, and sends results back as `tool_result` messages.
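Conceptually, one round trip looks like the following plain-dict sketch. The field values (the text, the `toolu_01A` id) are made up for illustration; the shapes match what the loop above sends and receives.

```python
# Illustrative shapes of one tool-use round trip (values are made up).
# Claude's turn contains optional text plus one or more tool_use blocks.
assistant_turn = {
    "role": "assistant",
    "content": [
        {"type": "text", "text": "I'll compute the tip."},
        {
            "type": "tool_use",
            "id": "toolu_01A",  # unique per call
            "name": "calculator",
            "input": {"expression": "127.50 * 0.15"},
        },
    ],
}

# Your reply pairs each result to its originating call via tool_use_id.
tool_reply = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": "toolu_01A",  # must match the id above
            "content": '{"result": 19.125}',
        }
    ],
}
```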

## Executing Tools Locally

The `execute_tool` function maps tool names to actual Python functions:

```python
def execute_tool(name: str, inputs: dict):
    if name == "calculator":
        try:
            result = eval(inputs["expression"])  # Use a safe parser in production
            return {"result": result}
        except Exception as e:
            return {"error": str(e)}
    return {"error": f"Unknown tool: {name}"}
```

In production, replace `eval` with a safe math parser like `numexpr` or `asteval`. Never execute arbitrary code from LLM outputs without sandboxing.
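If you would rather avoid a dependency, a minimal safe evaluator can be built on Python's standard `ast` module. This sketch walks the parsed expression and permits only numeric literals and basic arithmetic operators, so names, calls, and attribute access are rejected outright:

```python
import ast
import operator

# Allowed AST operator nodes mapped to their Python implementations.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate a pure-arithmetic expression; raise ValueError on anything else."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Disallowed expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))
```

Dropping this into `execute_tool` in place of `eval` keeps the calculator tool useful while closing the arbitrary-code path.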

## Running Your First Agent

Put it all together:

```python
system_prompt = """You are a helpful math assistant. Use the calculator tool
for any arithmetic. Show your reasoning before calculating."""

answer = run_agent(
    "What is 15% tip on a $127.50 dinner bill, and what is the total?",
    tools=tools,
    system=system_prompt,
)
print(answer)
```

Claude will reason through the problem, call the calculator tool for `127.50 * 0.15`, then call it again for `127.50 + 19.125`, and return a formatted answer.

## FAQ

### How is the Anthropic SDK different from LangChain or other agent frameworks?

The Anthropic SDK is a thin client library that communicates directly with the Claude API. It does not include prompt templates, vector stores, or chain abstractions. This gives you full control over the agent loop, tool execution, and error handling without framework-imposed patterns. For many production use cases, this directness reduces bugs and makes debugging straightforward.

### Can I use async calls for better performance?

Yes. The SDK provides `anthropic.AsyncAnthropic()` for async usage. Replace `client.messages.create()` with `await client.messages.create()` inside an async function. This is essential for web servers where blocking calls would stall request handling.

### How many tools can I define for a single agent?

Claude supports up to 128 tools in a single request. However, performance is best with fewer, well-described tools. If you have more than 20 tools, consider splitting them across specialized sub-agents or using tool categories with dynamic tool selection based on the user's query.
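One lightweight way to do dynamic tool selection is to pre-filter the tool list before each request. The categories and keywords below are made up for illustration; in practice you might use an embedding similarity or a cheap classifier instead of substring matching.

```python
# Illustrative keyword-based pre-filter: send Claude only a relevant
# subset of tools for each query. Categories and keywords are made up.
TOOL_KEYWORDS = {
    "math": ["calculate", "tip", "total", "sum", "percent"],
    "calendar": ["schedule", "meeting", "remind"],
}

def select_tools(query: str, tools_by_category: dict) -> list:
    """Return the tool schemas whose category keywords appear in the query."""
    q = query.lower()
    selected = []
    for category, tool_list in tools_by_category.items():
        if any(keyword in q for keyword in TOOL_KEYWORDS.get(category, [])):
            selected.extend(tool_list)
    return selected
```

The filtered list is then passed as the `tools` argument to `run_agent`, keeping each request small and the descriptions easy for Claude to weigh.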

