---
title: "Claude Opus 4.6 Now Available on Microsoft Foundry — Azure Becomes Only Cloud with Both Claude and GPT"
description: "Azure becomes the only cloud platform offering both Claude and GPT frontier models as Claude Opus 4.6 launches on Microsoft Foundry with MCP support."
canonical: https://callsphere.ai/blog/claude-opus-4-6-available-microsoft-foundry-azure
category: "AI News"
tags: ["Microsoft Foundry", "Azure", "Claude", "Cloud AI", "Anthropic"]
author: "CallSphere Team"
published: 2026-02-10T00:00:00.000Z
updated: 2026-05-08T17:27:36.947Z
---

# Claude Opus 4.6 Now Available on Microsoft Foundry — Azure Becomes Only Cloud with Both Claude and GPT

> Azure becomes the only cloud platform offering both Claude and GPT frontier models as Claude Opus 4.6 launches on Microsoft Foundry with MCP support.

## Azure Gets Both AI Giants

Microsoft Azure has become the **only cloud providing access to both Claude and GPT frontier models** through Microsoft Foundry, with Claude Opus 4.6 now available for deployment.

### Available Models

- Claude Opus 4.6
- Claude Sonnet 4.5
- Claude Haiku 4.5

### Key Integration Features

- **Model Context Protocol (MCP):** Seamlessly connect Claude to data fetchers, pipelines, and external APIs
- **Claude Code Integration:** Developers can use Claude Code directly with Microsoft Foundry
- **Azure Billing:** Works with current Azure agreements (MACC-eligible), eliminating separate vendor approvals
- **Serverless Deployment:** Scale while Anthropic manages the infrastructure
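
Deployments reached through Foundry still speak the Anthropic Messages API shape. A minimal sketch of the request body — the deployment id below is an illustrative assumption; use the one from your own Foundry resource:

```python
import json

# Hypothetical deployment id -- check your Foundry resource for the real value.
FOUNDRY_DEPLOYMENT = "claude-opus-4-6"

def build_messages_payload(prompt: str, max_tokens: int = 1024) -> dict:
    """Build the JSON body for a single-turn Messages API call."""
    return {
        "model": FOUNDRY_DEPLOYMENT,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

print(json.dumps(build_messages_payload("Summarize today's missed calls."), indent=2))
```

Because billing flows through the existing Azure agreement, the only deployment-specific pieces are the deployment id and endpoint; the body itself is standard Messages API.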

### Healthcare Focus

Anthropic has added specific tools for Microsoft Foundry that bring advanced reasoning and agentic workflows purpose-built for healthcare and life sciences industries — extending Claude for Healthcare into the Azure ecosystem.

```mermaid
flowchart TD
    HUB(("Azure Gets Both AI Giants"))
    HUB --> L0["Available Models"]
    style L0 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    HUB --> L1["Key Integration Features"]
    style L1 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    HUB --> L2["Healthcare Focus"]
    style L2 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    HUB --> L3["Enterprise Benefits"]
    style L3 fill:#e0e7ff,stroke:#6366f1,color:#1e293b
    style HUB fill:#4f46e5,stroke:#4338ca,color:#fff
```

### Enterprise Benefits

For enterprises already invested in the Azure ecosystem, Claude through Microsoft Foundry means:

- No new vendor relationships to manage
- Existing billing and compliance frameworks apply
- Access to frontier AI without infrastructure overhead

The integration positions Azure as the premium choice for enterprises wanting optionality across the best AI models.

**Source:** [Microsoft Azure Blog](https://azure.microsoft.com/en-us/blog/claude-opus-4-6-anthropics-powerful-model-for-coding-agents-and-enterprise-workflows-is-now-available-in-microsoft-foundry-on-azure/) | [Anthropic](https://www.anthropic.com/news/claude-in-microsoft-foundry) | [Claude API Docs](https://platform.claude.com/docs/en/build-with-claude/claude-in-microsoft-foundry)

## Claude Opus 4.6 on Microsoft Foundry — the operator perspective

Behind the headline sits a smaller, more useful question: which production constraint just got cheaper to solve — first-token latency, language coverage, structured outputs, or tool-call reliability? The CallSphere stack treats announcements as input to an evals queue, not a product roadmap. Production agents stay pinned; new releases earn their slot only after a regression suite confirms cost, latency, and tool-call reliability move the right way.

## What AI news actually moves the needle for SMB call automation

Most AI news is noise. A new benchmark score, a leaderboard reshuffle, a leaked memo — none of it changes whether your AI receptionist books appointments without dropping the call. The handful of things that *do* move production AI voice and chat are concrete:

- **Realtime API stability:** does the WebSocket survive 5+ minutes without a stall?
- **Language coverage:** does it handle 57+ languages with usable accents, or is English the only first-class citizen?
- **Tool-use reliability:** does the model actually call the right function with the right argument types under load?
- **Multi-agent handoffs:** do specialist agents receive structured context, or just transcripts?
- **Latency under load:** is p95 first-token under 800ms when 200 concurrent calls hit the same endpoint?

The CallSphere rule on news: if it doesn't move at least one of those five numbers in a measurable eval, it's a blog post, not a product change. What to track: provider changelogs for realtime endpoints, tool-call schema changes, language-add announcements, and any deprecation that pins your stack to a sunset date. What to ignore: leaderboard wins on tasks that don't map to your call flow, "agentic" benchmarks that don't measure tool latency, and demos that work because the prompt was hand-tuned for the demo. The teams that ship fastest treat AI news the same way ops teams treat CVE feeds — read everything, act on the small fraction that touches your runtime, archive the rest.
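
That filter can be sketched as a simple membership check. The signal names below are our own labels for the five checks in the text, not a published CallSphere schema:

```python
# The five production signals that make a news item actionable.
TRACKED_SIGNALS = {
    "realtime_stability",    # WebSocket survives 5+ minutes without a stall
    "language_coverage",     # 57+ languages with usable accents
    "tool_use_reliability",  # right function, right argument types, under load
    "multi_agent_handoffs",  # structured context, not just transcripts
    "latency_under_load",    # p95 first-token under 800ms at 200 concurrent calls
}

def is_actionable(news_item: dict) -> bool:
    """A news item earns an eval slot only if it moves a tracked signal."""
    return bool(set(news_item.get("moves", [])) & TRACKED_SIGNALS)

print(is_actionable({"title": "realtime endpoint changelog",
                     "moves": ["latency_under_load"]}))  # True
print(is_actionable({"title": "leaderboard reshuffle", "moves": []}))  # False
```

Everything that returns `False` goes to the archive, not the evals queue.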

## FAQs

**Q: Is Claude Opus 4.6 on Microsoft Foundry ready for the realtime call path, or only for analytics?**

A: Most of the time a new release doesn't earn a spot on the live call path, and that's the right starting assumption. The relevant test is whether it improves at least one of: p95 first-token latency, tool-call argument accuracy on noisy inputs, multi-turn handoff stability, or per-session cost. For scale, CallSphere's Real Estate deployments run 10 specialist agents with 30 tools, including vision-on-photos for listing intake and follow-up.

**Q: What's the cost story for Claude Opus 4.6 on Microsoft Foundry at SMB call volumes?**

A: Cost goes through the same unsentimental eval gate as everything else: a regression suite that simulates real call traffic (noisy ASR, partial inputs, tool-call timeouts) measures four numbers, per-session cost among them, and a candidate has to win on three of four without losing badly on the fourth. Anything that doesn't clear the gate is treated as a blog post, not a stack change.
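
The three-of-four rule can be sketched as a gate over metric deltas. The -10% "losing badly" cutoff is an illustrative assumption, not a documented CallSphere threshold:

```python
def passes_eval_gate(deltas: dict, badly_threshold: float = -0.10) -> bool:
    """Candidate must improve on >= 3 of 4 metrics and regress past
    the 'badly' threshold on none. Deltas are relative improvements,
    positive = better."""
    wins = sum(1 for delta in deltas.values() if delta > 0)
    no_bad_loss = all(delta > badly_threshold for delta in deltas.values())
    return wins >= 3 and no_bad_loss

candidate = {
    "p95_first_token_latency": 0.08,
    "tool_call_accuracy": 0.03,
    "handoff_stability": 0.01,
    "per_session_cost": -0.04,  # slightly worse, but not "badly"
}
print(passes_eval_gate(candidate))  # True: three wins, no bad loss
```

Flip `per_session_cost` to -0.20 and the same candidate fails, even with three wins, because one metric regressed badly.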

**Q: How does CallSphere decide whether to adopt Claude Opus 4.6 via Microsoft Foundry?**

A: In a CallSphere deployment, new model and API capabilities land first in the post-call analytics pipeline (lower stakes, async, easy to roll back) and only later in the live realtime path. Today the verticals most likely to absorb new capability first are Salon and After-Hours Escalation, which already run the largest share of production traffic.

## See it live

Want to see after-hours escalation agents handle real traffic? Walk through https://escalation.callsphere.tech or grab 20 minutes with the founder: https://calendly.com/sagar-callsphere/new-meeting.

---

Source: https://callsphere.ai/blog/claude-opus-4-6-available-microsoft-foundry-azure
