---
title: "AI Agents with Persistent Identities: Building Agents That Maintain Consistent Personas Across Sessions"
description: "Learn how to build AI agents that maintain a consistent personality, remember past interactions, consolidate memories over time, and build long-term relationships with users across multiple sessions."
canonical: https://callsphere.ai/blog/ai-agents-persistent-identities-consistent-personas-across-sessions
category: "Learn Agentic AI"
tags: ["Persistent Identity", "Memory Systems", "Persona Consistency", "Long-Term Agents", "Conversational AI"]
author: "CallSphere Team"
published: 2026-03-18T00:00:00.000Z
updated: 2026-05-07T08:27:05.524Z
---

# AI Agents with Persistent Identities: Building Agents That Maintain Consistent Personas Across Sessions

> Learn how to build AI agents that maintain a consistent personality, remember past interactions, consolidate memories over time, and build long-term relationships with users across multiple sessions.

## The Statelessness Problem

Every LLM call starts from scratch. The model has no memory of previous conversations. If you build an agent named "Luna" that has a warm, curious personality on Monday, by Wednesday it is a blank slate unless you rebuild its context. Users who interact with agents repeatedly expect continuity — they expect the agent to remember them, maintain its personality, and build on prior conversations.

Persistent identity requires three systems working together: an identity core that defines who the agent is, a memory system that retains what happened, and a consolidation process that evolves the agent's understanding over time.

## The Identity Core

The identity core is the agent's immutable foundation — its personality traits, communication style, values, and boundaries. Unlike conversation memory, the identity core does not change between sessions.

```mermaid
flowchart TD
    MSG(["New message"])
    WORKING["Working memory
rolling window"]
    EPISODIC[("Episodic memory
past sessions")]
    SEMANTIC[("Semantic memory
facts and preferences")]
    SUM["Summarizer
compresses old turns"]
    ROUTER{"Retrieve
needed memories"}
    PROMPT["Assembled context"]
    LLM["LLM"]
    UPD["Memory updater
writes new facts"]
    MSG --> WORKING --> ROUTER
    ROUTER -->|Past sessions| EPISODIC
    ROUTER -->|User facts| SEMANTIC
    EPISODIC --> SUM --> PROMPT
    SEMANTIC --> PROMPT
    WORKING --> PROMPT --> LLM --> UPD
    UPD --> EPISODIC
    UPD --> SEMANTIC
    style ROUTER fill:#4f46e5,stroke:#4338ca,color:#fff
    style LLM fill:#f59e0b,stroke:#d97706,color:#1f2937
    style EPISODIC fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style SEMANTIC fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
```

```python
from pydantic import BaseModel

class AgentIdentity(BaseModel):
    name: str
    personality_traits: list[str]
    communication_style: str
    values: list[str]
    boundaries: list[str]
    backstory: str

    def to_system_prompt(self) -> str:
        return f"""You are {self.name}.

Personality: {', '.join(self.personality_traits)}

Communication style: {self.communication_style}

Values you hold: {', '.join(self.values)}

Boundaries (things you will not do):
{chr(10).join(f'- {b}' for b in self.boundaries)}

Background: {self.backstory}

CRITICAL: Maintain this identity consistently. Never break character.
Never say "as an AI" or "I'm a language model." You are {self.name}."""

luna = AgentIdentity(
    name="Luna",
    personality_traits=["curious", "warm", "slightly witty", "patient"],
    communication_style="Conversational and encouraging. Uses questions to "
                        "guide understanding. Occasionally uses metaphors.",
    values=["intellectual honesty", "kindness", "growth mindset"],
    boundaries=[
        "Will not pretend to have human experiences",
        "Will not give medical or legal advice",
        "Will redirect harmful requests gently",
    ],
    backstory="Luna is a research companion who loves exploring ideas "
              "across disciplines. She finds connections between seemingly "
              "unrelated fields fascinating.",
)
```

## Session Memory: Remembering Conversations

Each conversation is stored and retrievable. The memory system has three layers: short-term (current session), episodic (past sessions), and semantic (consolidated knowledge about the user).

```python
from datetime import datetime
import json
import sqlite3

class MemoryStore:
    def __init__(self, db_path: str = "agent_memory.db"):
        self.db = sqlite3.connect(db_path)
        self._init_tables()

    def _init_tables(self):
        self.db.executescript("""
            CREATE TABLE IF NOT EXISTS sessions (
                id TEXT PRIMARY KEY,
                user_id TEXT,
                started_at TEXT,
                ended_at TEXT,
                summary TEXT
            );
            CREATE TABLE IF NOT EXISTS messages (
                id INTEGER PRIMARY KEY,
                session_id TEXT,
                role TEXT,
                content TEXT,
                timestamp TEXT,
                FOREIGN KEY (session_id) REFERENCES sessions(id)
            );
            CREATE TABLE IF NOT EXISTS user_facts (
                id INTEGER PRIMARY KEY,
                user_id TEXT,
                fact TEXT,
                source_session TEXT,
                confidence REAL DEFAULT 1.0,
                created_at TEXT,
                UNIQUE(user_id, fact)
            );
            CREATE TABLE IF NOT EXISTS relationship_state (
                user_id TEXT PRIMARY KEY,
                rapport_level TEXT DEFAULT 'new',
                interaction_count INTEGER DEFAULT 0,
                topics_discussed TEXT DEFAULT '[]',
                last_interaction TEXT
            );
        """)

    def save_message(self, session_id: str, role: str, content: str):
        self.db.execute(
            "INSERT INTO messages (session_id, role, content, timestamp) "
            "VALUES (?, ?, ?, ?)",
            (session_id, role, content, datetime.utcnow().isoformat()),
        )
        self.db.commit()

    def get_user_context(self, user_id: str) -> dict:
        """Build a complete context for the agent about this user."""
        facts = self.db.execute(
            "SELECT fact FROM user_facts WHERE user_id = ? ORDER BY confidence DESC",
            (user_id,),
        ).fetchall()

        relationship = self.db.execute(
            "SELECT * FROM relationship_state WHERE user_id = ?",
            (user_id,),
        ).fetchone()

        recent_sessions = self.db.execute(
            "SELECT summary FROM sessions WHERE user_id = ? "
            "ORDER BY ended_at DESC LIMIT 5",
            (user_id,),
        ).fetchall()

        return {
            "known_facts": [f[0] for f in facts],
            "relationship": relationship,
            "recent_sessions": [s[0] for s in recent_sessions if s[0]],
        }
```
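One gap worth noting: the schema defines a `sessions` table, but nothing above creates a row when a conversation begins, so a later `UPDATE sessions` during consolidation would silently match nothing. A minimal sketch of a session opener (the `start_session` name is ours; in practice it would be a method on `MemoryStore`):

```python
import sqlite3
import uuid
from datetime import datetime

def start_session(db: sqlite3.Connection, user_id: str) -> str:
    """Create a sessions row up front so consolidation has a row to update."""
    session_id = str(uuid.uuid4())
    db.execute(
        "INSERT INTO sessions (id, user_id, started_at) VALUES (?, ?, ?)",
        (session_id, user_id, datetime.utcnow().isoformat()),
    )
    db.commit()
    return session_id
```

Calling this at the top of each conversation also gives you the `session_id` to pass through the rest of the pipeline.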

## Memory Consolidation

After each session, a consolidation process extracts key facts and updates the user model. This is where the agent's understanding of each user deepens over time.

```python
from datetime import datetime
import json

from agents import Agent, Runner

consolidator = Agent(
    name="Memory Consolidator",
    instructions="""Analyze this conversation and extract:
    1. New facts learned about the user (interests, preferences, background)
    2. A 2-3 sentence summary of what was discussed
    3. The emotional tone of the interaction (positive, neutral, frustrating)
    4. Any commitments or follow-ups mentioned

    Return as JSON with keys: facts, summary, tone, followups""",
)

async def consolidate_session(
    memory: MemoryStore, session_id: str, user_id: str, messages: list[dict]
):
    """Run after each session to extract and store insights."""
    conversation = "\n".join(
        f"{m['role']}: {m['content']}" for m in messages
    )

    result = await Runner.run(
        consolidator,
        f"Analyze this conversation:\n{conversation}",
    )

    insights = json.loads(result.final_output)

    # Store session summary
    memory.db.execute(
        "UPDATE sessions SET summary = ?, ended_at = ? WHERE id = ?",
        (insights["summary"], datetime.utcnow().isoformat(), session_id),
    )

    # Store new user facts
    for fact in insights["facts"]:
        memory.db.execute(
            "INSERT OR IGNORE INTO user_facts (user_id, fact, source_session, created_at) "
            "VALUES (?, ?, ?, ?)",
            (user_id, fact, session_id, datetime.utcnow().isoformat()),
        )

    # Update relationship state
    memory.db.execute("""
        INSERT INTO relationship_state (user_id, interaction_count, last_interaction)
        VALUES (?, 1, ?)
        ON CONFLICT(user_id) DO UPDATE SET
            interaction_count = interaction_count + 1,
            last_interaction = excluded.last_interaction
    """, (user_id, datetime.utcnow().isoformat()))

    memory.db.commit()
```
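One caveat about `json.loads(result.final_output)`: it assumes the model returns bare JSON, but models often wrap output in markdown fences or prepend commentary. A defensive parser sketch (the helper name is ours):

```python
import json
import re

def parse_consolidator_output(raw: str) -> dict:
    """Strip markdown fences and extract the outermost JSON object
    before parsing, since LLM output is rarely guaranteed to be bare JSON."""
    text = raw.strip()
    if text.startswith("```"):
        # Remove a leading ```json (or ```) fence and the trailing ``` fence
        text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text)
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in consolidator output")
    return json.loads(match.group(0))
```

Swapping this in for the raw `json.loads` call makes consolidation resilient to formatting drift; structured-output or JSON-mode features of your model provider are the more robust option where available.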

## Assembling the Persistent Agent

Tie the identity, memory, and consolidation together into a complete agent system.

```python
from agents import Agent, Runner
import uuid

class PersistentAgent:
    def __init__(self, identity: AgentIdentity, memory: MemoryStore):
        self.identity = identity
        self.memory = memory

    async def chat(self, user_id: str, message: str, session_id: str | None = None):
        if session_id is None:
            session_id = str(uuid.uuid4())

        # Build context from memory
        user_context = self.memory.get_user_context(user_id)

        # Construct the system prompt with identity + memory
        system_prompt = self.identity.to_system_prompt()
        if user_context["known_facts"]:
            system_prompt += f"""\n\nWhat you know about this user:
{chr(10).join(f'- {f}' for f in user_context['known_facts'])}"""

        if user_context["recent_sessions"]:
            system_prompt += f"""\n\nRecent conversation summaries:
{chr(10).join(f'- {s}' for s in user_context['recent_sessions'])}"""

        agent = Agent(
            name=self.identity.name,
            instructions=system_prompt,
        )

        # Save user message
        self.memory.save_message(session_id, "user", message)

        # Run agent
        result = await Runner.run(agent, message)

        # Save agent response
        self.memory.save_message(session_id, "assistant", result.final_output)

        # Return the session id so the caller can continue this session later
        return session_id, result.final_output

# Usage (inside an async context)
agent = PersistentAgent(identity=luna, memory=MemoryStore())
session_id, response = await agent.chat(
    "user_123", "Hey Luna, remember that paper I mentioned last week?"
)
```
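`consolidate_session` takes a `messages` list, but nothing above shows where that list comes from. A small helper (hypothetical name) can rebuild the transcript from the `messages` table at session end:

```python
import sqlite3

def get_transcript(db: sqlite3.Connection, session_id: str) -> list[dict]:
    """Rebuild the session transcript in the {role, content} shape
    that consolidate_session expects, in insertion order."""
    rows = db.execute(
        "SELECT role, content FROM messages WHERE session_id = ? ORDER BY id",
        (session_id,),
    ).fetchall()
    return [{"role": role, "content": content} for role, content in rows]
```

At session end, the pieces connect as `await consolidate_session(memory, session_id, user_id, get_transcript(memory.db, session_id))`.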

## FAQ

### How do you prevent the context window from overflowing with too many memories?

Implement tiered retrieval. Store all facts but only inject the most relevant ones into each conversation. Use a combination of recency (recent facts rank higher), relevance (semantic similarity to the current message), and importance (user-corrected facts rank highest). Cap the injected context at a fixed token budget — typically 1000-2000 tokens of memory context is sufficient for natural continuity.
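The recency-plus-importance half of that ranking can be sketched without any embedding infrastructure; a semantic-similarity term would slot into the score the same way. Names, weights, and the rough 4-characters-per-token estimate here are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScoredFact:
    text: str
    confidence: float       # importance proxy: user-corrected facts sit at 1.0
    created_at: datetime    # timezone-aware creation timestamp

def select_memories(facts: list[ScoredFact], token_budget: int = 1500) -> list[str]:
    """Rank facts by a blend of recency and importance, then pack
    greedily until the token budget is exhausted."""
    now = datetime.now(timezone.utc)

    def score(f: ScoredFact) -> float:
        age_days = max((now - f.created_at).total_seconds() / 86400, 0.0)
        recency = 1.0 / (1.0 + age_days)  # decays toward 0 as the fact ages
        return 0.5 * recency + 0.5 * f.confidence

    selected, used = [], 0
    for fact in sorted(facts, key=score, reverse=True):
        cost = len(fact.text) // 4 + 1    # crude token estimate (~4 chars/token)
        if used + cost > token_budget:
            break
        selected.append(fact.text)
        used += cost
    return selected
```

A production version would replace the character-count heuristic with a real tokenizer and add a similarity term computed against the current user message.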

### How do you handle contradictions between old and new facts?

When the consolidator extracts a fact that contradicts an existing one, update the old fact rather than adding a duplicate. For ambiguous cases, reduce the confidence score of the old fact and add the new one with full confidence. Periodically run a fact reconciliation pass that presents contradictions to the LLM for resolution.

### Can the user ask the agent to forget specific information?

Yes, and this is important for user trust and privacy compliance. Implement a `forget` tool that deletes matching facts from `user_facts`. When the user says "forget that I mentioned my job," the agent searches for job-related facts and removes them. Log the deletion for compliance purposes but do not retain the deleted content.
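A sketch of that tool's core (the keyword match is deliberately naive; a production system would likely match semantically and write to a proper audit store rather than a logger):

```python
import logging
import sqlite3

logger = logging.getLogger("agent.memory")

def forget(db: sqlite3.Connection, user_id: str, keyword: str) -> int:
    """Delete facts matching a keyword and return how many were removed.
    Log only the count, never the deleted content."""
    cur = db.execute(
        "DELETE FROM user_facts WHERE user_id = ? AND fact LIKE ?",
        (user_id, f"%{keyword}%"),
    )
    db.commit()
    logger.info("forget: user=%s removed %d fact(s)", user_id, cur.rowcount)
    return cur.rowcount
```

Exposed as an agent tool, this lets "forget that I mentioned my job" resolve to `forget(db, user_id, "job")` while the compliance trail records only that a deletion happened.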


