---
title: "Claude Code Gets /simplify, /batch Commands, Auto-Save Memory, and HTTP Hooks"
description: "February 2026 brings new slash commands, smarter memory handling, HTTP hooks, and shared project configs across git worktrees to Claude Code."
canonical: https://callsphere.ai/blog/claude-code-updates-simplify-batch-commands-memory
category: "AI News"
tags: ["Claude Code", "CLI Updates", "Developer Tools", "Anthropic", "Coding"]
author: "CallSphere Team"
published: 2026-02-19T00:00:00.000Z
updated: 2026-05-08T17:27:37.058Z
---

# Claude Code Gets /simplify, /batch Commands, Auto-Save Memory, and HTTP Hooks

> February 2026 brings new slash commands, smarter memory handling, HTTP hooks, and shared project configs across git worktrees to Claude Code.

## February's Claude Code Changelog

Claude Code received significant updates throughout February 2026, adding new commands, improving memory management, and introducing HTTP hooks.

### New Commands

**`/simplify`** — Reviews changed code for reuse, quality, and efficiency, then automatically fixes issues found. A one-command code quality pass.

**`/batch`** — Runs multiple operations across files simultaneously, enabling batch processing workflows that previously required scripting.

**`/copy`** — Interactive picker for code blocks. When code blocks are present, lets you select individual blocks or the full response for clipboard copy.

### Memory Improvements

- **Auto-save memory** — Claude automatically remembers important context from your sessions
- **Multi-agent memory handling** — Better memory coordination when using agent teams
- **Shared across worktrees** — Project configs and auto memory now shared across git worktrees of the same repository

### HTTP Hooks

A major addition for teams: **HTTP hooks** can POST JSON to a URL and receive JSON back instead of running a shell command. This enables integration with external services, dashboards, and notification systems.
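
The JSON-in/JSON-out contract can be sketched as a minimal receiver function. This is an illustrative sketch only: the field names (`event`, `tool`, `command`, `decision`) are hypothetical placeholders, not Claude Code's actual hook payload schema.

```python
import json

def handle_hook(payload: dict) -> dict:
    """Illustrative JSON-in/JSON-out hook receiver. Field names are
    hypothetical, not Claude Code's actual hook schema."""
    if payload.get("event") == "pre_tool_use" and payload.get("tool") == "Bash":
        # Example policy: block shell commands that touch system config
        if "/etc/" in payload.get("command", ""):
            return {"decision": "block", "reason": "touches /etc"}
    return {"decision": "allow"}

# Round-trip through JSON, as an HTTP hook endpoint would
raw = json.dumps({"event": "pre_tool_use", "tool": "Bash",
                  "command": "cat /etc/passwd"})
print(handle_hook(json.loads(raw)))  # {'decision': 'block', 'reason': 'touches /etc'}
```

The same logic that previously lived in a shell-command hook can sit behind any HTTP endpoint that accepts a POSTed JSON body and returns a JSON decision.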

```mermaid
flowchart TD
    HUB(("February's Claude Code Changelog"))
    HUB --> L0["New Commands"]
    style L0 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    HUB --> L1["Memory Improvements"]
    style L1 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    HUB --> L2["HTTP Hooks"]
    style L2 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    HUB --> L3["Bug Fixes"]
    style L3 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    HUB --> L4["Opt-Out Option"]
    style L4 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    style HUB fill:#4f46e5,stroke:#4338ca,color:#fff
```

### Bug Fixes

- Fixed **memory leak** in git root detection cache (unbounded growth in long sessions)
- Fixed **memory leak** in JSON parsing cache
- Improved **`/model` command** to show currently active model in the menu
- Enhanced **VSCode session** stability

### Opt-Out Option

Added `ENABLE_CLAUDEAI_MCP_SERVERS=false` environment variable to opt out of claude.ai MCP servers.

**Source:** [GitHub - Claude Code CHANGELOG](https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md) | [Releasebot](https://releasebot.io/updates/anthropic/claude-code) | [ClaudeLog](https://claudelog.com/claude-code-changelog/) | [ClaudeFast](https://claudefa.st/blog/guide/changelog)

## The February update from an operator perspective

Behind this round of updates sits a smaller, more useful question: which production constraint just got cheaper to solve — first-token latency, language coverage, structured outputs, or tool-call reliability? On the CallSphere side, the practical filter is simple: would this make a 90-second appointment-booking call faster, cheaper, or more reliable? If the answer is "maybe in a benchmark," it doesn't ship to production.

## What AI news actually moves the needle for SMB call automation

Most AI news is noise. A new benchmark score, a leaderboard reshuffle, a leaked memo — none of it changes whether your AI receptionist books appointments without dropping the call. The handful of things that *do* move production AI voice and chat are concrete:

- **Realtime API stability** — does the WebSocket survive 5+ minutes without a stall?
- **Language coverage** — does it handle 57+ languages with usable accents, or is English the only first-class citizen?
- **Tool-use reliability** — does the model actually call the right function with the right argument types under load?
- **Multi-agent handoffs** — do specialist agents receive structured context, or just transcripts?
- **Latency under load** — is p95 first-token under 800ms when 200 concurrent calls hit the same endpoint?

The CallSphere rule on news: if it doesn't move at least one of those five numbers in a measurable eval, it's a blog post, not a product change. What to track: provider changelogs for realtime endpoints, tool-call schema changes, language-add announcements, and any deprecation that pins your stack to a sunset date. What to ignore: leaderboard wins on tasks that don't map to your call flow, "agentic" benchmarks that don't measure tool latency, and demos that work because the prompt was hand-tuned for the demo. The teams that ship fastest treat AI news the same way ops teams treat CVE feeds — read everything, act on the small fraction that touches your runtime, archive the rest.
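
One of those gating numbers, p95 first-token latency, is cheap to compute from raw samples. A minimal nearest-rank sketch (the 800 ms threshold mirrors the target above; the sample data is simulated):

```python
import math

def p95(samples_ms: list[float]) -> float:
    """p95 via nearest-rank: the smallest value >= 95% of the samples."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(0.95 * len(ordered)))  # 1-indexed nearest rank
    return ordered[rank - 1]

# 100 simulated first-token latencies: 95 fast calls, 5 slow tail calls
latencies = [300.0] * 95 + [1200.0] * 5
print(p95(latencies) <= 800.0)  # gate: p95 first-token under 800 ms -> True
```

Nearest-rank is deliberate here: interpolated percentiles can hide a tail that a hard SLA gate should catch.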

## FAQs

**Q: Does the /simplify, /batch, auto-save memory, and HTTP hooks update actually move p95 latency or tool-call reliability?**

A: Most of the time it doesn't, and that's the right starting assumption. The relevant test is whether it improves at least one of: p95 first-token latency, tool-call argument accuracy on noisy inputs, multi-turn handoff stability, or per-session cost. CallSphere runs 37 specialized AI agents wired to 90+ function tools across 115+ database tables in 6 live verticals.

**Q: What would have to be true before the /simplify, /batch, auto-save memory, and HTTP hooks update ships into production?**

A: The eval gate is unsentimental — a regression suite that simulates real call traffic (noisy ASR, partial inputs, tool-call timeouts) measures four numbers, and a candidate has to win on three of four without losing badly on the fourth. Anything else is treated as a blog post, not a stack change.
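
That three-of-four rule can be sketched as a gate function. The metric names and the 10% regression floor below are illustrative assumptions, not CallSphere's actual thresholds:

```python
def passes_gate(deltas: dict[str, float], regression_floor: float = -0.10) -> bool:
    """Hypothetical eval gate: ship only if the candidate improves on at
    least three of four tracked metrics ("wins") and no metric regresses
    past the floor (here 10% relative). `deltas` maps metric name to
    relative change, where positive means better."""
    wins = sum(1 for d in deltas.values() if d > 0)
    worst = min(deltas.values())
    return wins >= 3 and worst >= regression_floor

# Wins on three metrics, small acceptable regression on cost
print(passes_gate({"p95_latency": 0.12, "tool_accuracy": 0.04,
                   "handoff_stability": 0.02, "cost": -0.05}))  # True
```

The point of encoding the gate is that "losing badly" gets a number attached to it, so the decision stops being a judgment call made per release.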

**Q: Which CallSphere vertical would benefit from the /simplify, /batch, auto-save memory, and HTTP hooks update first?**

A: In a CallSphere deployment, new model and API capabilities land first in the post-call analytics pipeline (lower stakes, async, easy to roll back) and only later in the live realtime path. Today the verticals most likely to absorb new capability first are Salon and Sales, which already run the largest share of production traffic.

## See it live

Want to see after-hours escalation agents handle real traffic? Walk through https://escalation.callsphere.tech or grab 20 minutes with the founder: https://calendly.com/sagar-callsphere/new-meeting.

---

Source: https://callsphere.ai/blog/claude-code-updates-simplify-batch-commands-memory
