Learn Agentic AI

Slack Bot Agent: Building an AI Assistant for Team Communication

Build a production-ready Slack bot agent using the Slack SDK that listens to events, handles slash commands, responds with interactive messages, and integrates LLM-powered reasoning for team support.

Why Slack Is the Perfect Home for AI Agents

Slack is where teams coordinate work. When an AI agent lives inside Slack, it eliminates context-switching. Instead of opening a separate tool, team members ask the agent directly in the channel where the conversation is already happening. The agent can answer questions about internal docs, summarize threads, create tickets, and route requests to the right people.

In this guide, we will build a Slack bot agent using Python's slack_bolt framework. The agent handles events, slash commands, and interactive messages while delegating complex reasoning to an LLM.

Setting Up the Slack App

Before writing code, create a Slack app at api.slack.com/apps. Enable Socket Mode for local development, add the chat:write, app_mentions:read, commands, and im:history scopes, and subscribe to the app_mention and message.im events.
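
Those settings can also be captured in an app manifest (pasteable under "App Manifest" in the app configuration) so the setup is reproducible. A sketch — the app name is a placeholder:

```yaml
display_information:
  name: team-assistant
oauth_config:
  scopes:
    bot:
      - chat:write
      - app_mentions:read
      - commands
      - im:history
settings:
  event_subscriptions:
    bot_events:
      - app_mention
      - message.im
  socket_mode_enabled: true
  interactivity:
    is_enabled: true
```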

Install dependencies:

pip install slack-bolt openai

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
import os

app = App(token=os.environ["SLACK_BOT_TOKEN"])

Handling App Mentions

When someone mentions the bot in a channel, the agent processes the message and responds with LLM-generated content:

from openai import OpenAI

llm = OpenAI()

SYSTEM_PROMPT = (
    "You are a helpful team assistant in Slack. "
    "Answer questions concisely. Use bullet points for lists. "
    "If you do not know the answer, say so clearly."
)

def ask_llm(question: str, context: str = "") -> str:
    """Send a question to the LLM with optional context."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if context:
        messages.append({"role": "user", "content": f"Context:\n{context}"})
    messages.append({"role": "user", "content": question})

    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0.3,
        max_tokens=500,
    )
    return response.choices[0].message.content

import re

@app.event("app_mention")
def handle_mention(event, say, client):
    """Respond to @bot mentions in channels."""
    # Strip the <@UXXXX> mention token so the LLM sees only the question
    user_text = re.sub(r"<@[^>]+>\s*", "", event["text"]).strip()
    thread_ts = event.get("thread_ts", event["ts"])

    # Fetch thread context if replying in a thread
    context = ""
    if event.get("thread_ts"):
        result = client.conversations_replies(
            channel=event["channel"],
            ts=event["thread_ts"],
            limit=10,
        )
        context = "\n".join(
            m["text"] for m in result["messages"][:-1]  # Exclude current message
        )

    response = ask_llm(user_text, context)
    say(text=response, thread_ts=thread_ts)

The agent fetches thread history for context when mentioned inside a thread. This gives the LLM the full conversation to work with rather than just the latest message.
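Long threads can overflow the LLM's context window. A simple guard (my own helper, not part of the Slack SDK) is to keep only the most recent messages that fit a rough character budget before passing them as context:

```python
def truncate_context(messages, max_chars=4000):
    """Keep the most recent messages that fit a rough character budget.

    `messages` is a chronological list of message texts; the newest
    messages are kept and order is preserved in the result.
    """
    kept, total = [], 0
    for text in reversed(messages):  # walk newest-first
        if total + len(text) > max_chars:
            break
        kept.append(text)
        total += len(text)
    return "\n".join(reversed(kept))  # restore chronological order
```

A token-based budget (via a tokenizer) would be more precise, but a character cap is usually close enough for thread snippets.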

Implementing Slash Commands

Slash commands provide structured entry points for specific actions. Here we create a /ask command that accepts a question and a /summarize command that summarizes the current channel's recent messages:


@app.command("/ask")
def handle_ask(ack, command, respond, say):
    """Handle the /ask slash command."""
    ack()  # Acknowledge within 3 seconds
    question = command["text"].strip()
    if not question:
        respond("Usage: `/ask <your question>`")  # respond() is ephemeral by default
        return
    answer = ask_llm(question)
    say(f"*Q:* {question}\n\n{answer}")

@app.command("/summarize")
def handle_summarize(ack, command, client, say):
    """Summarize recent channel messages."""
    ack()
    channel = command["channel_id"]
    result = client.conversations_history(channel=channel, limit=50)
    # History is returned newest-first; reverse it so the LLM reads
    # the conversation in chronological order
    messages = [m["text"] for m in reversed(result["messages"]) if m.get("text")]
    combined = "\n".join(messages)

    summary = ask_llm(
        "Summarize the following Slack messages into 3-5 bullet points. "
        "Focus on decisions, action items, and key topics.",
        context=combined,
    )
    say(f"*Channel Summary (last 50 messages):*\n\n{summary}")

Always call ack() immediately when handling slash commands. Slack requires acknowledgment within three seconds or it shows an error to the user.
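LLM calls can easily exceed that deadline, so a common pattern is to acknowledge first and finish the slow work off the request path. A minimal sketch — `slow_fn` is a stand-in for a call like `ask_llm`, and the thread is returned only so callers (or tests) can wait on it:

```python
import threading

def handle_slow_command(ack, respond, command, slow_fn):
    """Acknowledge immediately, then finish slow work in the background."""
    ack()  # satisfy Slack's 3-second deadline right away

    def work():
        # The response_url behind respond() stays valid long after ack()
        respond(slow_fn(command["text"]))

    t = threading.Thread(target=work, daemon=True)
    t.start()
    return t
```

Bolt also offers listener-level options for deferred processing (e.g. lazy listeners on some deployment targets); a plain worker thread is the simplest portable version.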

Interactive Messages with Buttons

Interactive messages let users take actions directly from the bot's response. Here we add approval buttons to a request workflow:

@app.command("/request")
def handle_request(ack, command, client):
    """Create an approval request with interactive buttons."""
    ack()
    request_text = command["text"]
    requester = command["user_id"]

    client.chat_postMessage(
        channel=os.environ["APPROVAL_CHANNEL"],
        text=f"New request from <@{requester}>",
        blocks=[
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"*New Request from <@{requester}>:*\n{request_text}",
                },
            },
            {
                "type": "actions",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Approve"},
                        "style": "primary",
                        "action_id": "approve_request",
                        "value": f"{requester}|{request_text}",
                    },
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Deny"},
                        "style": "danger",
                        "action_id": "deny_request",
                        "value": f"{requester}|{request_text}",
                    },
                ],
            },
        ],
    )

@app.action("approve_request")
def handle_approve(ack, body, client):
    """Handle approval button click."""
    ack()
    requester, request_text = body["actions"][0]["value"].split("|", 1)
    approver = body["user"]["id"]
    client.chat_postMessage(
        channel=requester,
        text=f"Your request was *approved* by <@{approver}>: {request_text}",
    )
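
The Deny button above references a `deny_request` action that has no handler yet; clicking it would show an error in Slack. A minimal counterpart, with a small helper (my own) for unpacking the `requester|text` payload packed into the button value:

```python
def parse_request_value(value):
    """Split the 'requester|request text' payload from a button value."""
    requester, _, request_text = value.partition("|")
    return requester, request_text

# Register with: app.action("deny_request")(handle_deny)
def handle_deny(ack, body, client):
    """Handle denial button click."""
    ack()
    requester, request_text = parse_request_value(body["actions"][0]["value"])
    denier = body["user"]["id"]
    client.chat_postMessage(
        channel=requester,
        text=f"Your request was *denied* by <@{denier}>: {request_text}",
    )
```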

Running the Agent

Start the bot with Socket Mode for development:

if __name__ == "__main__":
    handler = SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"])
    print("Slack bot agent is running...")
    handler.start()

For production, switch to HTTP mode behind a reverse proxy with proper SSL termination. Use a process manager like systemd or deploy as a container.

FAQ

How do I handle rate limits from the Slack API?

The slack_sdk WebClient that Bolt uses supports pluggable retry handlers; attach a RateLimitErrorRetryHandler so HTTP 429 responses are retried after the server's Retry-After interval. For high-volume bots, also batch operations where possible to stay under the per-method limits.
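The core idea behind those handlers can be shown in plain Python. This is an illustrative sketch, not the SDK's implementation — `RateLimited` here is a stand-in for the SDK's rate-limit error, which carries a Retry-After hint:

```python
import time

class RateLimited(Exception):
    """Stand-in for an SDK rate-limit error carrying a Retry-After hint."""
    def __init__(self, retry_after=None):
        self.retry_after = retry_after

def with_backoff(fn, max_retries=3, base_delay=1.0):
    """Call fn, retrying rate-limited attempts with backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimited as e:
            if attempt == max_retries:
                raise
            # Honor the server's Retry-After hint; otherwise back off exponentially
            delay = e.retry_after or base_delay * (2 ** attempt)
            time.sleep(delay)
```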

Can the bot maintain conversation history across messages?

Store conversation context in a database or Redis keyed by channel ID and thread timestamp. Before each LLM call, retrieve the last N messages from your store to include as context. This gives the agent memory without relying solely on Slack API calls.
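The shape of such a store is simple; here is an in-memory stand-in (swap the dict for Redis in production) keyed by `(channel, thread_ts)` and capped at the last N messages:

```python
from collections import defaultdict, deque

class ConversationStore:
    """In-memory stand-in for a Redis-backed conversation store."""

    def __init__(self, maxlen=10):
        # Each (channel, thread_ts) key keeps only its last `maxlen` messages
        self._store = defaultdict(lambda: deque(maxlen=maxlen))

    def add(self, channel, thread_ts, role, text):
        self._store[(channel, thread_ts)].append({"role": role, "content": text})

    def history(self, channel, thread_ts):
        """Return the stored messages, oldest first, ready for the LLM call."""
        return list(self._store[(channel, thread_ts)])
```

With Redis, the same interface maps naturally onto `LPUSH`/`LTRIM`/`LRANGE` against a key like `conv:{channel}:{thread_ts}`.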

How do I restrict the bot to certain channels?

Check event["channel"] against an allowlist of channel IDs in your event handlers. Return early without processing if the channel is not in the list. You can also use Slack's app-level channel restrictions in the app configuration.
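A minimal version of that check, with the allowlist read from an environment variable (`BOT_ALLOWED_CHANNELS` is a name I chose for this sketch, not a Slack convention):

```python
import os

# Comma-separated channel IDs, e.g. "C0123ABCD,C0456EFGH"
ALLOWED_CHANNELS = set(
    os.environ.get("BOT_ALLOWED_CHANNELS", "").split(",")
) - {""}

def channel_allowed(event):
    """Return True if the event's channel is on the allowlist.

    An empty allowlist means no restriction.
    """
    if not ALLOWED_CHANNELS:
        return True
    return event.get("channel") in ALLOWED_CHANNELS
```

Each event handler can then return early: `if not channel_allowed(event): return`.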


#SlackBot #AIAgents #SlackSDK #WorkflowAutomation #Python #ChatOps #AgenticAI #LearnAI #AIEngineering

Written by

CallSphere Team

Expert insights on AI voice agents and customer communication automation.
