---
title: "Building a GitHub Event Agent: Auto-Responding to Issues, PRs, and Deployments"
description: "Build a GitHub webhook-powered AI agent that automatically triages issues, reviews pull requests, and monitors deployment status using FastAPI and the GitHub API."
canonical: https://callsphere.ai/blog/building-github-event-agent-auto-responding-issues-prs-deployments
category: "Learn Agentic AI"
tags: ["GitHub", "Webhooks", "AI Agents", "DevOps Automation", "FastAPI"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-07T18:08:11.227Z
---

# Building a GitHub Event Agent: Auto-Responding to Issues, PRs, and Deployments

> Build a GitHub webhook-powered AI agent that automatically triages issues, reviews pull requests, and monitors deployment status using FastAPI and the GitHub API.

## Why GitHub Needs an AI Agent

Large repositories generate a constant stream of events — new issues, pull requests, comments, deployments, and security alerts. Manually triaging every issue, reviewing every PR, and monitoring every deployment does not scale. A GitHub event agent can handle the repetitive work: labeling and prioritizing issues, providing initial code review feedback, and alerting the team when deployments fail.

This is not about replacing human reviewers. It is about giving them a head start. When a developer opens a PR, the agent can summarize the changes, flag potential issues, and check for common anti-patterns before a human reviewer even looks at it.

## Setting Up the Webhook Receiver

First, configure your GitHub repository to send webhooks to your FastAPI server. In your repository settings, add the webhook URL, set the content type to `application/json`, configure a shared secret (used below to verify signatures), and select the events you want to receive.
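Webhooks can also be created programmatically via the REST API's `POST /repos/{owner}/{repo}/hooks` endpoint. A minimal sketch of the request body that endpoint expects (the agent URL and secret below are placeholders, not real values):

```python
import json

# Hypothetical values -- substitute your own endpoint and secret.
AGENT_URL = "https://agent.example.com/github/webhook"

def build_webhook_payload(url: str, secret: str) -> dict:
    """Request body for POST /repos/{owner}/{repo}/hooks."""
    return {
        "config": {
            "url": url,
            "content_type": "json",  # deliver payloads as JSON, not form-encoded
            "secret": secret,        # backs the X-Hub-Signature-256 check below
        },
        # Subscribe only to the events the agent actually handles.
        "events": ["issues", "pull_request", "deployment_status"],
        "active": True,
    }

payload = build_webhook_payload(AGENT_URL, "replace-me")
print(json.dumps(payload, indent=2))
```

POST this body to `https://api.github.com/repos/{owner}/{repo}/hooks` with an authenticated client, for example the same `httpx` setup used throughout this post.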

```mermaid
flowchart LR
    GH(["GitHub webhook
issues, PRs, deployments"])
    EP["FastAPI endpoint
HMAC signature check"]
    BG["Background task
event dispatcher"]
    ISSUE["Issue triage handler"]
    PR["PR review handler"]
    DEP["Deployment handler"]
    LLM[(LLM API)]
    GHAPI[(GitHub REST API)]
    GH --> EP --> BG
    BG --> ISSUE
    BG --> PR
    BG --> DEP
    ISSUE --> LLM
    PR --> LLM
    DEP --> LLM
    ISSUE --> GHAPI
    PR --> GHAPI
    style EP fill:#4f46e5,stroke:#4338ca,color:#fff
    style BG fill:#f59e0b,stroke:#d97706,color:#1f2937
    style GHAPI fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
```

```python
import os
import hmac
import hashlib
import httpx
from fastapi import FastAPI, Request, HTTPException, BackgroundTasks

app = FastAPI()

GITHUB_WEBHOOK_SECRET = os.environ["GITHUB_WEBHOOK_SECRET"]
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]

def verify_github_signature(payload: bytes, signature: str) -> bool:
    expected = hmac.new(
        GITHUB_WEBHOOK_SECRET.encode(), payload, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)

@app.post("/github/webhook")
async def github_webhook(request: Request, background_tasks: BackgroundTasks):
    body = await request.body()
    signature = request.headers.get("X-Hub-Signature-256", "")

    if not verify_github_signature(body, signature):
        raise HTTPException(status_code=401, detail="Invalid signature")

    event_type = request.headers.get("X-GitHub-Event", "")
    payload = await request.json()

    background_tasks.add_task(route_github_event, event_type, payload)
    return {"status": "accepted"}
```

GitHub sends the event type in the `X-GitHub-Event` header, which tells you whether the payload is an issue, pull request, deployment, or something else.

## Routing Events to Handlers

Build a dispatcher that routes each event type to its specialized handler.

```python
from openai import AsyncOpenAI

llm = AsyncOpenAI()

async def route_github_event(event_type: str, payload: dict):
    handlers = {
        "issues": handle_issue_event,
        "pull_request": handle_pr_event,
        "deployment_status": handle_deployment_event,
    }
    handler = handlers.get(event_type)
    if handler:
        await handler(payload)

async def handle_issue_event(payload: dict):
    if payload["action"] != "opened":
        return

    issue = payload["issue"]
    title = issue["title"]
    body = issue["body"] or ""
    repo = payload["repository"]["full_name"]

    prompt = f"""Triage this GitHub issue. Respond with:
1. A severity label (bug, feature-request, question, documentation)
2. A priority (P0-critical, P1-high, P2-medium, P3-low)
3. A brief helpful response to the issue author.

Title: {title}
Body: {body}"""

    response = await llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    analysis = response.choices[0].message.content

    await add_issue_comment(repo, issue["number"], analysis)
    await add_issue_labels(repo, issue["number"], extract_labels(analysis))
```
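`extract_labels` above is left undefined; one simple approach is to scan the model's response for the label vocabulary the triage prompt asked for. A minimal sketch (the label sets mirror the prompt, but tune them to your repository's actual labels):

```python
TRIAGE_LABELS = ["bug", "feature-request", "question", "documentation"]
PRIORITY_LABELS = ["P0-critical", "P1-high", "P2-medium", "P3-low"]

def extract_labels(analysis: str) -> list[str]:
    """Pick out any known label names mentioned in the LLM's triage text."""
    text = analysis.lower()
    labels = [label for label in TRIAGE_LABELS if label in text]
    # Priorities are matched case-insensitively but applied verbatim.
    labels += [p for p in PRIORITY_LABELS if p.lower() in text]
    return labels
```

Requesting structured output from the model (e.g. JSON mode) is more robust than keyword scanning, but this keeps the example self-contained.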

## Handling Pull Request Events

PR review is where the agent provides the most value. It can summarize changes, check for common issues, and leave inline comments.

```python
async def handle_pr_event(payload: dict):
    if payload["action"] != "opened":
        return

    pr = payload["pull_request"]
    repo = payload["repository"]["full_name"]

    diff = await fetch_pr_diff(repo, pr["number"])

    prompt = f"""Review this pull request diff. Provide:
1. A summary of what this PR does (2-3 sentences)
2. Any potential bugs, security issues, or performance concerns
3. Suggestions for improvement

PR Title: {pr['title']}
PR Description: {pr['body'] or 'No description provided'}

Diff:
{diff[:8000]}"""

    response = await llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    review = response.choices[0].message.content
    await add_pr_comment(repo, pr["number"], f"## AI Review Summary\n\n{review}")

async def fetch_pr_diff(repo: str, pr_number: int) -> str:
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"https://api.github.com/repos/{repo}/pulls/{pr_number}",
            headers={
                "Authorization": f"Bearer {GITHUB_TOKEN}",
                # The diff media type returns the raw unified diff instead of JSON.
                "Accept": "application/vnd.github.diff",
            },
        )
        resp.raise_for_status()
        return resp.text
```
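The `diff[:8000]` truncation above can cut a file mid-hunk. A hedged alternative is to split the unified diff on file boundaries and keep whole files up to a character budget (a sketch; production use may want token-based counting instead):

```python
def split_diff_by_file(diff: str) -> list[str]:
    """Split a unified diff into one chunk per file, on 'diff --git' boundaries."""
    chunks, current = [], []
    for line in diff.splitlines(keepends=True):
        if line.startswith("diff --git") and current:
            chunks.append("".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("".join(current))
    return chunks

def truncate_diff(diff: str, budget: int = 8000) -> str:
    """Keep whole per-file chunks until the character budget is exhausted."""
    kept, used = [], 0
    for chunk in split_diff_by_file(diff):
        if used + len(chunk) > budget:
            break  # drop whole files rather than cutting one mid-hunk
        kept.append(chunk)
        used += len(chunk)
    return "".join(kept)
```

In `handle_pr_event`, `{diff[:8000]}` would then become `{truncate_diff(diff)}`.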

## Deployment Status Monitoring

When a deployment fails, the agent can analyze logs and notify the team with context.

```python
async def handle_deployment_event(payload: dict):
    status = payload["deployment_status"]
    if status["state"] != "failure":
        return

    repo = payload["repository"]["full_name"]
    description = status.get("description", "No description")
    environment = status.get("environment", "unknown")

    prompt = f"""A deployment to {environment} failed in {repo}.
Status description: {description}
Suggest possible causes and immediate remediation steps."""

    response = await llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    analysis = response.choices[0].message.content
    await notify_team(repo, environment, analysis)
```

## GitHub API Helper Functions

These utility functions interact with the GitHub API to post comments and labels.

```python
async def add_issue_comment(repo: str, issue_number: int, body: str):
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"https://api.github.com/repos/{repo}/issues/{issue_number}/comments",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
            json={"body": body},
        )
        resp.raise_for_status()

# Pull requests are issues in the REST API, so PR-level comments
# use the same endpoint.
async def add_pr_comment(repo: str, pr_number: int, body: str):
    await add_issue_comment(repo, pr_number, body)

async def add_issue_labels(repo: str, issue_number: int, labels: list[str]):
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"https://api.github.com/repos/{repo}/issues/{issue_number}/labels",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
            json={"labels": labels},
        )
        resp.raise_for_status()
```

## FAQ

### How do I prevent the agent from being too noisy on every PR?

Add filters based on PR size, author, or file paths. For example, skip PRs that only change markdown files or that come from dependabot. You can also set a minimum diff size threshold before the agent activates.
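As a sketch, such a filter can run at the top of `handle_pr_event` before any LLM call (the bot list and threshold below are illustrative; `additions`, `deletions`, and `draft` are all part of the `pull_request` webhook payload):

```python
SKIP_AUTHORS = {"dependabot[bot]", "renovate[bot]"}  # illustrative bot list
MIN_CHANGED_LINES = 10  # illustrative threshold

def should_review_pr(payload: dict) -> bool:
    """Return False for PRs the agent should stay quiet on."""
    pr = payload["pull_request"]
    if pr["user"]["login"] in SKIP_AUTHORS:
        return False
    if pr.get("draft"):
        return False
    # Skip trivial diffs below the minimum size threshold.
    if pr["additions"] + pr["deletions"] < MIN_CHANGED_LINES:
        return False
    return True
```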

### Can the agent leave inline comments on specific lines?

Yes. Use the GitHub Pull Request Review API to submit line-level comments. You need to map the LLM output to specific file paths and line numbers from the diff, which requires parsing the unified diff format.
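A sketch of the request body for `POST /repos/{owner}/{repo}/pulls/{pull_number}/reviews` (the file path and line number here are placeholders; in practice they come from parsing the unified diff):

```python
def build_review_payload(summary: str, comments: list[dict]) -> dict:
    """Body for the create-a-review endpoint; COMMENT avoids blocking the PR."""
    return {
        "body": summary,
        "event": "COMMENT",  # or REQUEST_CHANGES / APPROVE
        "comments": comments,
    }

# Each inline comment targets a file path and a line on the RIGHT
# (new) side of the diff.
inline = [{
    "path": "src/app.py",  # placeholder path
    "line": 42,            # placeholder line in the new file version
    "side": "RIGHT",
    "body": "Consider handling the None case here.",
}]

review = build_review_payload("Automated review notes", inline)
```

POST this body with the same authenticated `httpx` pattern used for the other helpers.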

### How do I handle rate limits from the GitHub API?

GitHub allows 5,000 authenticated requests per hour for a personal access token (GitHub Apps can have higher limits). For high-volume repositories, cache API responses and batch operations. Use the `X-RateLimit-Remaining` and `X-RateLimit-Reset` response headers to implement backoff before you hit the limit.

---

Source: https://callsphere.ai/blog/building-github-event-agent-auto-responding-issues-prs-deployments
