---
title: "OpenAI Function Calling: Letting LLMs Interact with Your Code"
description: "Master OpenAI's function calling feature to let language models invoke your Python functions, parse structured arguments, and build tool-augmented AI applications."
canonical: https://callsphere.ai/blog/openai-function-calling-letting-llms-interact-with-your-code
category: "Learn Agentic AI"
tags: ["OpenAI", "Function Calling", "Tools", "Python", "AI Agents"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T01:02:42.571Z
---

# OpenAI Function Calling: Letting LLMs Interact with Your Code

> Master OpenAI's function calling feature to let language models invoke your Python functions, parse structured arguments, and build tool-augmented AI applications.

## What Is Function Calling?

Function calling (also called tool use) lets an LLM decide when to invoke a function you define, generate the correct arguments as structured JSON, and then incorporate the function's result into its response. This bridges the gap between the model's language capabilities and your application's data and actions.

Use cases include fetching real-time data, querying databases, sending emails, creating records, calling external APIs — anything your code can do.

## Defining Tools

You describe your functions using JSON Schema in the `tools` parameter. Before the schema itself, here is the overall flow: the model receives your tool definitions, decides whether to call one, and your code executes the call and feeds the result back.

```mermaid
flowchart TD
    USER(["User message"])
    LLM["LLM call
with tools schema"]
    DECIDE{"Model wants
to call a tool?"}
    EXEC["Execute tool
sandboxed runtime"]
    RESULT["Append tool_result
to messages"]
    GUARD{"Output passes
guardrails?"}
    DONE(["Final reply"])
    BLOCK(["Refuse and log"])
    USER --> LLM --> DECIDE
    DECIDE -->|Yes| EXEC --> RESULT --> LLM
    DECIDE -->|No| GUARD
    GUARD -->|Yes| DONE
    GUARD -->|No| BLOCK
    style LLM fill:#4f46e5,stroke:#4338ca,color:#fff
    style EXEC fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style DONE fill:#059669,stroke:#047857,color:#fff
    style BLOCK fill:#dc2626,stroke:#b91c1c,color:#fff
```

```python
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city name, e.g., 'San Francisco'",
                    },
                    "units": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit",
                    },
                },
                "required": ["city"],
            },
        },
    },
]
```

The `description` fields are critical — the model reads them to decide when and how to call the function.

## Making a Tool-Augmented Request

Pass the tools array along with your messages:

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful weather assistant."},
        {"role": "user", "content": "What is the weather like in Tokyo?"},
    ],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call a tool
)

message = response.choices[0].message

if message.tool_calls:
    for tool_call in message.tool_calls:
        print(f"Function: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")
        print(f"Call ID: {tool_call.id}")
```

When the model decides a tool is needed, `finish_reason` is `tool_calls` and the `message.tool_calls` array contains one or more function calls with JSON string arguments.
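Note that `function.arguments` is a raw JSON string, not a parsed object, and a model can occasionally emit malformed JSON. A minimal parsing sketch (the literal string here is a stand-in for a real `tool_call.function.arguments` value):

```python
import json

# Stand-in for tool_call.function.arguments, which always arrives as a string
raw_arguments = '{"city": "Tokyo", "units": "celsius"}'

try:
    args = json.loads(raw_arguments)
except json.JSONDecodeError:
    # Malformed arguments: send an error back as the tool result instead of crashing
    args = None

print(args)
```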

## The Complete Tool Call Loop

Function calling requires a multi-turn conversation. You send the request, execute the function, then send the result back:

```python
import json

def get_weather(city: str, units: str = "celsius") -> dict:
    # In production, call a real weather API
    return {"city": city, "temperature": 22, "units": units, "condition": "partly cloudy"}

# Step 1: Send the user message with tools
messages = [
    {"role": "system", "content": "You are a helpful weather assistant."},
    {"role": "user", "content": "What is the weather in Tokyo and London?"},
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

assistant_message = response.choices[0].message

# Step 2: Execute each tool call
if assistant_message.tool_calls:
    messages.append(assistant_message)  # add the assistant's tool call message

    for tool_call in assistant_message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)

        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result),
        })

    # Step 3: Send results back to the model
    final_response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools,
    )

    print(final_response.choices[0].message.content)
```

The model sees the tool results and produces a natural language summary for the user.
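The example above handles a single round of tool calls, but the model may request tools again after seeing the first results. One way to generalize it is a loop that keeps executing tools until the model replies in plain text; `run_tool_loop` and `tool_registry` are our own names, not part of the SDK, and the sketch assumes the same client interface as above:

```python
import json

def run_tool_loop(client, model, messages, tools, tool_registry, max_rounds=5):
    """Keep calling the model, executing any requested tools, until it answers in plain text."""
    for _ in range(max_rounds):
        response = client.chat.completions.create(model=model, messages=messages, tools=tools)
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # final natural-language reply

        messages.append(message)  # record the assistant's tool-call turn
        for tool_call in message.tool_calls:
            fn = tool_registry[tool_call.function.name]
            args = json.loads(tool_call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(fn(**args)),
            })
    raise RuntimeError("Model kept requesting tools after max_rounds iterations")
```

The `max_rounds` cap is a safety net so a confused model cannot trap you in an infinite tool loop.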

## Controlling Tool Choice

The `tool_choice` parameter controls when tools are used:

```python
# Let the model decide (default)
tool_choice = "auto"

# Force a specific function
tool_choice = {"type": "function", "function": {"name": "get_weather"}}

# Prevent tool use entirely
tool_choice = "none"

# Require the model to call at least one tool
tool_choice = "required"
```

## Multiple Tools in One Application

Real applications expose several tools. The model picks the right one based on context:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search the product catalog by keyword.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "max_results": {"type": "integer", "default": 5},
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Check the status of an order by order ID.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"},
                },
                "required": ["order_id"],
            },
        },
    },
]
```

When the user says "Where is my order #12345?", the model calls `get_order_status`. When they say "Show me wireless headphones", it calls `search_products`.
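With several tools registered, your code needs to route each `tool_call` to the right Python function by name. A common pattern is a dispatch table; the stub implementations below are hypothetical placeholders for real backend calls:

```python
import json

# Hypothetical stubs standing in for real catalog and order-system lookups
def search_products(query, max_results=5):
    return {"results": [f"{query} item {i}" for i in range(1, max_results + 1)]}

def get_order_status(order_id):
    return {"order_id": order_id, "status": "shipped"}

# Map tool names exactly as they appear in the schema to Python callables
TOOL_REGISTRY = {
    "search_products": search_products,
    "get_order_status": get_order_status,
}

def execute_tool_call(name, arguments_json):
    """Look up the tool by name and run it; report unknown names back to the model."""
    fn = TOOL_REGISTRY.get(name)
    if fn is None:
        return {"error": f"Unknown tool: {name}"}
    return fn(**json.loads(arguments_json))

print(execute_tool_call("get_order_status", '{"order_id": "12345"}'))
```

Returning an error dict for unknown names (rather than raising) keeps the loop alive if the model ever hallucinates a tool that is not in your registry.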

## FAQ

### Can the model call multiple functions in parallel?

Yes. The model can return multiple entries in the `tool_calls` array within a single response. You should execute them all and send back all results before making the next API call.

### What happens if the function returns an error?

Return the error as the tool result content. The model will see the error and can communicate it to the user or try a different approach. For example: `{"error": "Order not found"}`.
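One way to make this systematic is a small wrapper (our own helper, not an SDK feature) that catches any exception from the tool and serializes it into the JSON string the model will read:

```python
import json

def safe_tool_result(fn, arguments_json):
    """Run a tool and always return a JSON string, even when the tool fails."""
    try:
        args = json.loads(arguments_json)
        return json.dumps(fn(**args))
    except Exception as exc:
        # Surface the failure to the model as content it can reason about
        return json.dumps({"error": str(exc)})

def get_order_status(order_id):
    raise ValueError("Order not found")

print(safe_tool_result(get_order_status, '{"order_id": "99"}'))
```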

### How do I prevent the model from hallucinating function arguments?

Write detailed descriptions for each parameter, use `enum` for constrained values, and mark fields as `required` when they must be provided. The more specific your schema, the more reliable the arguments.
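Even with a tight schema, it is cheap to double-check the arguments on your side before executing anything. A minimal sketch (the `REQUIRED` map mirrors each tool's `required` list from the schema; the helper name is ours):

```python
import json

# Mirrors the "required" arrays from the tool schemas defined earlier
REQUIRED = {"get_weather": ["city"]}

def validate_arguments(name, arguments_json):
    """Return (args, None) when valid, or (None, error_dict) when required keys are missing."""
    args = json.loads(arguments_json)
    missing = [key for key in REQUIRED.get(name, []) if key not in args]
    if missing:
        return None, {"error": f"Missing required arguments: {missing}"}
    return args, None
```

When validation fails, send the error dict back as the tool result so the model can retry with corrected arguments.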


---

Source: https://callsphere.ai/blog/openai-function-calling-letting-llms-interact-with-your-code
