---
title: "Optimistic UI for Agent Interactions: Showing Immediate Feedback Before Server Response"
description: "Learn how to implement optimistic updates in AI agent chat interfaces to provide instant feedback, handle rollbacks on failure, and manage loading states for the best user experience."
canonical: https://callsphere.ai/blog/optimistic-ui-agent-interactions-immediate-feedback-server-response
category: "Learn Agentic AI"
tags: ["Optimistic UI", "React", "UX Patterns", "TypeScript", "Error Handling"]
author: "CallSphere Team"
published: 2026-03-17T00:00:00.000Z
updated: 2026-05-06T01:02:45.172Z
---

# Optimistic UI for Agent Interactions: Showing Immediate Feedback Before Server Response

> Learn how to implement optimistic updates in AI agent chat interfaces to provide instant feedback, handle rollbacks on failure, and manage loading states for the best user experience.

## The Latency Problem in Agent Interfaces

When a user sends a message to an AI agent, the round trip involves network transit, model inference, and response generation. This can take anywhere from 500 milliseconds to 30 seconds depending on the model and task complexity. Without optimistic UI, the user stares at a blank space after hitting send, wondering whether their message was received. Optimistic updates solve this by immediately showing the user's message in the chat and displaying a typing indicator while the agent processes the request.

## The Optimistic Update Pattern

The core idea: update the UI immediately as if the server request succeeded, then reconcile the state when the actual response arrives. If the request fails, roll back the optimistic change and show an error.

```mermaid
flowchart LR
    SEND(["User hits send"])
    OPT["Add optimistic
message to UI"]
    REQ["POST to
agent API"]
    RESULT{"Request
succeeded?"}
    CONFIRM["Confirm message,
append agent reply"]
    ROLLBACK["Mark message
as error"]
    RETRY["Offer retry"]
    SEND --> OPT --> REQ --> RESULT
    RESULT -->|Yes| CONFIRM
    RESULT -->|No| ROLLBACK --> RETRY
    RETRY --> REQ
    style OPT fill:#4f46e5,stroke:#4338ca,color:#fff
    style RESULT fill:#f59e0b,stroke:#d97706,color:#1f2937
    style CONFIRM fill:#059669,stroke:#047857,color:#fff
    style ROLLBACK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
```

```typescript
interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  content: string;
  status: "optimistic" | "confirmed" | "error";
  error?: string;
  timestamp: Date;
}

type ChatAction =
  | { type: "ADD_OPTIMISTIC"; message: ChatMessage }
  | { type: "CONFIRM"; tempId: string; realId: string }
  | { type: "ADD_RESPONSE"; message: ChatMessage }
  | { type: "MARK_ERROR"; id: string; error: string }
  | { type: "RETRY"; id: string };

function chatReducer(
  state: ChatMessage[],
  action: ChatAction
): ChatMessage[] {
  switch (action.type) {
    case "ADD_OPTIMISTIC":
      return [...state, action.message];

    case "CONFIRM":
      return state.map((m) =>
        m.id === action.tempId
          ? { ...m, id: action.realId, status: "confirmed" }
          : m
      );

    case "ADD_RESPONSE":
      return [...state, action.message];

    case "MARK_ERROR":
      return state.map((m) =>
        m.id === action.id
          ? { ...m, status: "error", error: action.error }
          : m
      );

    case "RETRY":
      return state.map((m) =>
        m.id === action.id
          ? { ...m, status: "optimistic", error: undefined }
          : m
      );

    default:
      return state;
  }
}
```

Using a reducer instead of simple `useState` makes state transitions explicit and testable. Each action represents a clear step in the message lifecycle.
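Because the reducer is a pure function, its transitions can be checked without rendering anything. A minimal sketch (reducer trimmed to two actions for brevity; the IDs are illustrative):

```typescript
// Trimmed copy of the reducer so this snippet stands alone.
type Status = "optimistic" | "confirmed" | "error";
interface Msg { id: string; status: Status; content: string }
type Action =
  | { type: "ADD_OPTIMISTIC"; message: Msg }
  | { type: "CONFIRM"; tempId: string; realId: string };

function reducer(state: Msg[], action: Action): Msg[] {
  switch (action.type) {
    case "ADD_OPTIMISTIC":
      return [...state, action.message];
    case "CONFIRM":
      return state.map((m) =>
        m.id === action.tempId
          ? { ...m, id: action.realId, status: "confirmed" }
          : m
      );
  }
}

// Walk a message through the optimistic -> confirmed lifecycle.
let state: Msg[] = [];
state = reducer(state, {
  type: "ADD_OPTIMISTIC",
  message: { id: "tmp-1", status: "optimistic", content: "hi" },
});
state = reducer(state, { type: "CONFIRM", tempId: "tmp-1", realId: "msg-42" });
console.log(state[0].id, state[0].status); // msg-42 confirmed
```

Each assertion maps to one action, which is exactly what makes reducer-based state easy to cover with plain unit tests.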

## Implementing the Send Flow

Wire the reducer into a hook that manages the full send-and-receive lifecycle.

```typescript
import { useReducer, useCallback } from "react";

function useOptimisticChat() {
  const [messages, dispatch] = useReducer(chatReducer, []);

  const sendMessage = useCallback(async (content: string) => {
    const tempId = crypto.randomUUID();
    const optimisticMsg: ChatMessage = {
      id: tempId,
      role: "user",
      content,
      status: "optimistic",
      timestamp: new Date(),
    };

    dispatch({ type: "ADD_OPTIMISTIC", message: optimisticMsg });

    try {
      const res = await fetch("/api/agent/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ message: content }),
      });

      if (!res.ok) throw new Error("Request failed");

      const data = await res.json();
      dispatch({
        type: "CONFIRM",
        tempId,
        realId: data.userMessageId,
      });
      dispatch({
        type: "ADD_RESPONSE",
        message: {
          id: data.agentMessageId,
          role: "assistant",
          content: data.response,
          status: "confirmed",
          timestamp: new Date(),
        },
      });
    } catch {
      dispatch({
        type: "MARK_ERROR",
        id: tempId,
        error: "Failed to send",
      });
    }
  }, []);

  return { messages, sendMessage, dispatch };
}
```
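The hook above assumes a particular response shape from `/api/agent/chat` (`userMessageId`, `agentMessageId`, `response`); those field names are this post's convention, not a standard. Making the contract an explicit type, and narrowing the untyped `fetch` result before dispatching, keeps a malformed payload from corrupting chat state. A sketch:

```typescript
// Hypothetical response contract for POST /api/agent/chat
// (field names follow the hook above; adapt to your backend).
interface AgentChatResponse {
  userMessageId: string;  // server-assigned ID replacing the temp ID
  agentMessageId: string; // ID of the assistant's reply
  response: string;       // the agent's text
}

function parseAgentResponse(data: unknown): AgentChatResponse {
  const d = data as Partial<AgentChatResponse>;
  if (
    typeof d?.userMessageId !== "string" ||
    typeof d?.agentMessageId !== "string" ||
    typeof d?.response !== "string"
  ) {
    throw new Error("Malformed agent response");
  }
  return d as AgentChatResponse;
}

const parsed = parseAgentResponse({
  userMessageId: "msg-1",
  agentMessageId: "msg-2",
  response: "Hello!",
});
console.log(parsed.response); // Hello!
```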

## Visual Feedback for Message States

Each message state requires distinct visual treatment so users understand what is happening.

```typescript
function MessageBubble({ message }: { message: ChatMessage }) {
  const statusStyles: Record<ChatMessage["status"], string> = {
    optimistic: "opacity-70",
    confirmed: "opacity-100",
    error: "opacity-100 border-2 border-red-300",
  };

  return (
    <div className={`rounded-lg p-3 ${statusStyles[message.status]}`}>
      <p>{message.content}</p>

      {message.status === "optimistic" && (
        <span className="text-xs text-gray-500">Sending...</span>
      )}

      {message.status === "error" && (
        <div className="flex items-center gap-2">
          <span className="text-xs text-red-600">Failed to send</span>
          <button
            onClick={() => {/* retry logic */}}
            className="text-xs text-blue-600 underline"
          >
            Retry
          </button>
        </div>
      )}
    </div>
  );
}
```

Optimistic messages render at reduced opacity so users can tell them apart from confirmed messages at a glance. Error messages get a red border and a retry button.

## The Typing Indicator

While waiting for the agent response, show a typing indicator that appears after the user's confirmed message.

```typescript
function TypingIndicator() {
  return (
    <div className="flex items-center gap-2 p-3">
      <div className="flex gap-1">
        {[0, 1, 2].map((i) => (
          <span
            key={i}
            className="h-2 w-2 animate-bounce rounded-full bg-gray-400"
            style={{ animationDelay: `${i * 150}ms` }}
          />
        ))}
      </div>
      <span className="text-xs text-gray-500">Agent is thinking...</span>
    </div>
  );
}
```
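Rather than tracking a separate "is typing" flag, you can derive the indicator's visibility from message state you already have. A sketch, assuming the `ChatMessage` shape from earlier (trimmed here so the snippet stands alone):

```typescript
// Show the indicator while the last message is from the user and
// hasn't failed -- i.e. the agent hasn't replied yet.
type Role = "user" | "assistant";
interface Msg { role: Role; status: "optimistic" | "confirmed" | "error" }

function isAwaitingAgent(messages: Msg[]): boolean {
  const last = messages[messages.length - 1];
  return last?.role === "user" && last.status !== "error";
}

console.log(isAwaitingAgent([{ role: "user", status: "confirmed" }]));      // true
console.log(isAwaitingAgent([{ role: "assistant", status: "confirmed" }])); // false
```

Derived state like this cannot drift out of sync with the message list, which a separate boolean flag easily can.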

## Retry with Exponential Backoff

When a message fails, the retry button should not hammer the server. Implement exponential backoff for automatic retries.

```typescript
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxRetries - 1) throw err;
      // Wait 1s, 2s, 4s, ... between attempts
      const delay = 2 ** attempt * 1000;
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw new Error("Unreachable");
}
```
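To see the backoff behave end to end, here is a self-contained sketch (backoff copied inline, base delay shortened to 10ms so the demo runs fast) driving a function that fails twice before succeeding:

```typescript
// Backoff inlined from above; delay shortened for the demo.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxRetries - 1) throw err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 10));
    }
  }
  throw new Error("Unreachable");
}

// Fails twice, then succeeds -- simulating a transient network error.
let calls = 0;
async function flakySend(): Promise<string> {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "delivered";
}

const result = await retryWithBackoff(flakySend);
console.log(result, calls); // delivered 3
```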

## FAQ

### How do I handle optimistic updates for messages that trigger tool calls?

When the agent uses tools (web search, database queries, code execution), show an intermediate status like "Searching..." or "Running code..." between the user message and the final response. Add a `toolCalls` field to your message type and render each tool call as a collapsible section that shows the tool name, input, and output.
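One way to model this, as a sketch (the field and status names here are illustrative, not from any particular SDK):

```typescript
// Hypothetical tool-call shape attached to an assistant message.
interface ToolCall {
  id: string;
  name: string;                          // e.g. "web_search"
  input: unknown;                        // arguments the agent passed
  output?: unknown;                      // present once the tool finishes
  status: "running" | "done" | "failed";
}

interface AssistantMessage {
  id: string;
  role: "assistant";
  content: string;
  toolCalls?: ToolCall[];
}

// Derive an intermediate label like "Running web_search..." from the
// first still-running call, if any.
function intermediateLabel(msg: AssistantMessage): string | null {
  const running = msg.toolCalls?.find((t) => t.status === "running");
  return running ? `Running ${running.name}...` : null;
}

const msg: AssistantMessage = {
  id: "msg-7",
  role: "assistant",
  content: "",
  toolCalls: [
    { id: "t1", name: "web_search", input: { q: "latest" }, status: "running" },
  ],
};
console.log(intermediateLabel(msg)); // Running web_search...
```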

### Should I use TanStack Query's optimistic update feature instead of a custom reducer?

TanStack Query's `onMutate` / `onError` / `onSettled` pattern works well for CRUD operations with cache invalidation. However, chat messages are append-only and sequential, which makes a reducer more natural. The reducer gives you fine-grained control over the message lifecycle without fighting the cache invalidation model.

### How do I prevent duplicate messages if the user double-clicks the send button?

Disable the send button immediately after the first click by checking the `status` of the last message. If the last message has status `optimistic`, disable the input. Additionally, debounce the submit handler and deduplicate by content hash on the server side.
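The last-message check can live in a small pure helper, sketched here against a trimmed message shape (assumes the `status` field from the `ChatMessage` type earlier in the post):

```typescript
// Disable send while the previous message is still in flight.
interface Msg {
  status: "optimistic" | "confirmed" | "error";
}

function isSendDisabled(messages: Msg[]): boolean {
  const last = messages[messages.length - 1];
  return last?.status === "optimistic";
}

console.log(isSendDisabled([{ status: "optimistic" }])); // true
console.log(isSendDisabled([{ status: "confirmed" }]));  // false
```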


