
Building a Gemini Code Agent: Code Generation, Execution, and Debugging

Build a code agent with Gemini that generates Python code, executes it in a sandboxed environment, analyzes results, and iteratively debugs failures. Includes the code execution API and test generation patterns.

Gemini's Built-In Code Execution

One of Gemini's most powerful features for agent development is native code execution. Instead of generating code and hoping it works, Gemini can write Python code, run it in a sandboxed environment, observe the output, and iterate if something goes wrong — all within a single API call.

This creates a true code agent loop: generate, execute, analyze, fix. The model does not just suggest code — it verifies that the code actually works.

Enabling Code Execution

Code execution is enabled as a tool, similar to function calling:

import google.generativeai as genai
import os

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel(
    "gemini-2.0-flash",
    tools="code_execution",
)

response = model.generate_content(
    "Calculate the first 20 Fibonacci numbers and show them as a formatted table."
)

print(response.text)

When code execution is enabled, Gemini writes Python code, executes it server-side in a sandboxed environment, and incorporates the actual output into its response. You see both the code and its real results.

Inspecting Execution Results

The response contains structured parts that separate code from output:

response = model.generate_content(
    "Generate a random dataset of 100 points and calculate basic statistics."
)

for part in response.candidates[0].content.parts:
    if part.text:
        print(f"TEXT: {part.text[:200]}")
    if part.executable_code:
        print(f"CODE:\n{part.executable_code.code}")
    if part.code_execution_result:
        print(f"OUTPUT:\n{part.code_execution_result.output}")
        print(f"OUTCOME: {part.code_execution_result.outcome}")

The outcome field tells you whether execution succeeded or failed. On failure, Gemini automatically attempts to fix the code and re-execute — you get the full debugging cycle in the response.
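Because those retries happen inside a single response, one response can contain several code/output pairs. A small helper can summarize how many attempts failed before the final one succeeded. This is a sketch that works on a plain list of outcome names collected in response order (the strings mirror the SDK's Outcome enum names):

```python
def count_debug_cycles(outcomes: list[str]) -> dict:
    """Summarize execution attempts from outcome names collected
    in response order, e.g. ["OUTCOME_FAILED", "OUTCOME_OK"]."""
    failures = sum(1 for o in outcomes if o != "OUTCOME_OK")
    return {
        "attempts": len(outcomes),
        "failures": failures,
        # Success means the final attempt ran cleanly
        "succeeded": bool(outcomes) and outcomes[-1] == "OUTCOME_OK",
    }

# One failed attempt followed by a successful fix
print(count_debug_cycles(["OUTCOME_FAILED", "OUTCOME_OK"]))
# → {'attempts': 2, 'failures': 1, 'succeeded': True}
```

Feeding it the `outcome.name` of each `code_execution_result` part makes the model's self-debugging visible in your logs.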


Building an Iterative Code Agent

Here is a code agent that handles complex multi-step programming tasks:

import google.generativeai as genai
import os

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

class CodeAgent:
    def __init__(self):
        self.model = genai.GenerativeModel(
            "gemini-2.0-flash",
            tools="code_execution",
            system_instruction="""You are a Python programming agent.
            When given a task:
            1. Break it into steps
            2. Write code for each step
            3. Execute and verify each step works
            4. If any step fails, debug and fix before continuing
            5. Always validate your final output""",
        )
        self.chat = self.model.start_chat()

    def solve(self, task: str) -> dict:
        response = self.chat.send_message(task)

        code_blocks = []
        outputs = []
        explanation = []

        for part in response.candidates[0].content.parts:
            if part.executable_code:
                code_blocks.append(part.executable_code.code)
            if part.code_execution_result:
                outputs.append({
                    "output": part.code_execution_result.output,
                    "success": part.code_execution_result.outcome.name == "OUTCOME_OK",
                })
            if part.text:
                explanation.append(part.text)

        return {
            "explanation": "\n".join(explanation),
            "code_blocks": code_blocks,
            "outputs": outputs,
            # An empty outputs list means no code ran, which is not success
            "all_succeeded": bool(outputs) and all(o["success"] for o in outputs),
        }

agent = CodeAgent()
result = agent.solve(
    "Read a CSV string with columns name, age, salary. "
    "Find the average salary by age group (20-29, 30-39, 40+). "
    "Format the results as a markdown table."
)

print(result["explanation"])
print(f"All code executed successfully: {result['all_succeeded']}")
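If a task still has failing blocks after the model's internal retries, you can loop at the agent level too. This is a sketch that builds on the CodeAgent above; the follow-up prompt wording is illustrative, and it works with any object whose solve() returns a dict with an all_succeeded flag:

```python
def solve_with_retry(agent, task: str, max_attempts: int = 3) -> dict:
    """Re-prompt the agent until every code block succeeds or the
    attempt budget runs out. Relies only on agent.solve() returning
    a dict containing an 'all_succeeded' boolean."""
    result = agent.solve(task)
    for _ in range(max_attempts - 1):
        if result["all_succeeded"]:
            break
        # The chat session keeps earlier code and errors in context,
        # so the follow-up prompt can reference them
        result = agent.solve(
            "Some of the code above failed. Diagnose the error, "
            "fix the code, and re-run it until it succeeds."
        )
    return result
```

Because CodeAgent holds a chat session, each retry sees the previous failures rather than starting from scratch.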

Combining Code Execution with Function Calling

The code execution tool works alongside custom function calling. This lets the agent both run ad-hoc code and access external systems:

def query_database(sql: str) -> list:
    """Execute a SQL query against the analytics database.

    Args:
        sql: The SQL query to execute.
    """
    # In production, connect to your real database
    return [
        {"month": "Jan", "revenue": 150000},
        {"month": "Feb", "revenue": 175000},
        {"month": "Mar", "revenue": 162000},
    ]

model = genai.GenerativeModel(
    "gemini-2.0-flash",
    tools=["code_execution", query_database],
)

chat = model.start_chat(enable_automatic_function_calling=True)

response = chat.send_message(
    "Query the monthly revenue data, then calculate the trend line "
    "and predict next month's revenue using linear regression."
)

In this pattern, the agent calls your database function to get data, then uses code execution to run the statistical analysis. Each tool handles what it does best.

Test Generation Pattern

A practical application is generating and running tests for existing code:

agent = CodeAgent()

source_code = """
def parse_duration(text: str) -> int:
    parts = text.strip().split()
    total_seconds = 0
    i = 0
    while i < len(parts) - 1:
        value = int(parts[i])
        unit = parts[i + 1].lower().rstrip('s')
        if unit == 'hour':
            total_seconds += value * 3600
        elif unit == 'minute':
            total_seconds += value * 60
        elif unit == 'second':
            total_seconds += value
        i += 2
    return total_seconds
"""

result = agent.solve(
    f"Here is a Python function:\n\n{source_code}\n\n"
    "Write comprehensive unit tests for this function including edge cases. "
    "Execute all tests and report which pass and which fail."
)

FAQ

What Python libraries are available in the code execution sandbox?

The sandbox includes NumPy, Pandas, Matplotlib, and the Python standard library. It does not have network access or the ability to install additional packages. For tasks requiring other libraries, generate the code for local execution instead.

Is there a time limit for code execution?

Yes. Code execution has a timeout of approximately 30 seconds. Long-running computations will be terminated. Design your code agent prompts to break large tasks into smaller, faster steps.

Can the code execution sandbox access files I upload?

No. The code execution environment is separate from the Files API. If you need to process uploaded files with code, extract the content as text and pass it as part of the prompt, or use function calling to bridge the gap.


Written by

CallSphere Team
