AI Infrastructure

Asterisk AGI and AMI for AI Dialplan Augmentation in 2026: When to Use Each

AGI is the script-per-call hand-off. AMI is the firehose of events and commands. Both still ship in Asterisk 22 and both still matter for AI voice if you know where each one breaks at scale.

AGI is the original AI hook for Asterisk: dialplan calls a script, script reads stdin, writes stdout, and Asterisk obeys. AMI is the panopticon: every channel event, every CDR, every registration, fanned out over TCP. ARI superseded both for new builds, but in 2026 AGI and AMI are still load-bearing in shops with seven years of dialplan they cannot rewrite.

Background

Asterisk Gateway Interface (AGI) was introduced in Asterisk 1.0 in 2004 as a process-spawn protocol. Your dialplan runs AGI(myscript.py), Asterisk forks the script, the script reads channel variables and DTMF from stdin, writes commands like SAY DIGITS, EXEC, GET DATA to stdout, and returns control. FastAGI is the same protocol over TCP so you do not pay the fork cost.
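The protocol itself is plain text, so a process-AGI script needs nothing beyond stdin and stdout. A minimal sketch — the dialplan side would be AGI(lookup.py); the script name and queue are illustrative:

```python
#!/usr/bin/env python3
# Minimal process-AGI script: Asterisk forks this for every call.
import sys

def read_agi_env(stream):
    """Parse the agi_* variable block Asterisk sends first, ended by a blank line."""
    env = {}
    for raw in stream:
        line = raw.strip()
        if not line:
            break
        key, _, value = line.partition(": ")
        env[key] = value
    return env

def agi_command(line, out=sys.stdout, inp=sys.stdin):
    """Write one command; Asterisk executes it and replies '200 result=...'."""
    out.write(line + "\n")
    out.flush()
    return inp.readline().strip()

if __name__ == "__main__":
    env = read_agi_env(sys.stdin)
    agi_command('VERBOSE "handling call from %s" 1' % env.get("agi_callerid", "unknown"))
    agi_command("EXEC Queue support")
```

The same read-env, write-command, read-result loop is what FastAGI runs over TCP instead of pipes.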

Asterisk Manager Interface (AMI) is the long-lived event/command socket on TCP port 5038. Clients log in with a username and shared secret, subscribe to event classes (call, agent, system), and issue Action: commands like Originate, Hangup, Redirect, and ChannelStatus. AMI is the substrate behind virtually every Asterisk GUI — FOP/FOP2, QueueMetrics integrations, and the wallboards built on them.
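On the wire, an AMI session is just CRLF-delimited key/value frames over that socket. A stdlib sketch of a login plus Originate — host, credentials, and the ai-outbound context are placeholders for your own manager.conf and dialplan:

```python
# Speak raw AMI: each frame is "Key: Value" lines ending with a blank line.
import socket

def ami_frame(action, **fields):
    """Serialize one AMI action frame."""
    lines = [f"Action: {action}"] + [f"{k}: {v}" for k, v in fields.items()]
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

def originate_ai_call(host, username, secret, channel, context="ai-outbound"):
    """Log in, originate an outbound call into the dialplan, log off."""
    with socket.create_connection((host, 5038)) as sock:
        sock.recv(1024)  # banner: "Asterisk Call Manager/..."
        sock.sendall(ami_frame("Login", Username=username, Secret=secret))
        sock.sendall(ami_frame("Originate", Channel=channel,
                               Context=context, Exten="s", Priority=1,
                               Async="true"))
        sock.sendall(ami_frame("Logoff"))
```

With Async: true the Originate action returns immediately and the outcome arrives later as an OriginateResponse event, which is why AMI consumers end up being event-stream parsers.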

For AI voice in 2026, both have real limits. AGI's per-call process spawn caps at maybe a few hundred concurrent calls; FastAGI helps but the synchronous request-response model still blocks. AMI is global state, not per-call control, so you cannot use it to inject barge-in audio mid-utterance.


Architecture

graph TD
    A[Inbound SIP] --> B[Asterisk 22 Dialplan]
    B -->|AGI legacy path| C[FastAGI listener / Python]
    B -->|Stasis handoff| D[ARI WebSocket]
    C --> E[OpenAI / Deepgram]
    D --> E
    F[AMI Listener] -->|events| G[Analytics + CRM]
    B -.->|every event| F

The 2026 best practice is hybrid: keep AGI for legacy dialplan branches (IVR menus, time conditions, queue routing decisions) and move real-time AI conversations to ARI with externalMedia or AudioSocket. Use AMI as a read-only event tap for analytics; treat it as do-not-touch for control.
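In dialplan terms the hybrid split looks roughly like this — the context name, FastAGI port, and AI_ROUTE variable are illustrative, not an Asterisk convention:

```ini
; extensions.conf — legacy lookups stay on FastAGI, live AI goes to Stasis
[from-trunk]
exten => _X.,1,AGI(agi://127.0.0.1:4573/ai-handoff)    ; cheap lookups + routing decision
 same => n,GotoIf($["${AI_ROUTE}" = "realtime"]?ai:queue)
 same => n(ai),Stasis(ai-app,realtime)                 ; ARI owns the channel from here
 same => n,Hangup()
 same => n(queue),Queue(support)
```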

# FastAGI hello world for an AI handoff — stdlib sketch of the wire protocol
# (pyst2, starpy, and panoramisk all wrap this same exchange)
import socketserver

class AIHandler(socketserver.StreamRequestHandler):
    def cmd(self, line):  # send one AGI command, read the "200 result=..." reply
        self.wfile.write((line + "\n").encode())
        return self.rfile.readline().decode().strip()

    def handle(self):
        env = {}  # Asterisk sends agi_* variables, then a blank line
        while (line := self.rfile.readline().decode().strip()):
            key, _, value = line.partition(": ")
            env[key] = value
        self.cmd("ANSWER")
        self.cmd(f'SET VARIABLE AI_SESSION_ID "{env.get("agi_uniqueid", "")}"')
        self.cmd("EXEC Stasis ai-app,realtime")

socketserver.ThreadingTCPServer(("0.0.0.0", 4573), AIHandler).serve_forever()

CallSphere implementation

CallSphere terminates every call on Twilio across all six verticals (Healthcare AI, Real Estate AI, Sales Calling AI, Salon AI, IT Helpdesk AI, After-Hours AI). Healthcare AI is a FastAPI service on :8084 that streams audio to OpenAI Realtime; Sales Calling AI fires 5 concurrent outbound calls per tenant; After-Hours AI rings staff with a simultaneous call + SMS and a 120-second timeout. We do not run Asterisk in production, but several of our enterprise prospects in the IT Helpdesk vertical are migrating from on-prem Asterisk PBXes. Our migration playbook keeps their AGI scripts intact for IVR and adds an ARI Stasis branch that hands the call to a Twilio SIP domain via SIP REFER, where our 37 agents and 90+ tools take over inside HIPAA + SOC 2 boundaries on $149/$499/$1499 plans.

Build steps

  1. Audit the existing dialplan: list every AGI() call and what it does. Most are simple lookups (caller-name, do-not-call, queue assignment).
  2. Stand up FastAGI on a long-running Python process so you stop paying fork-and-exec on every call.
  3. For real-time AI conversations, add a Stasis() branch instead of AGI(); ARI gives you bridges, externalMedia, and async events.
  4. AMI listener (panoramisk in Python, asterisk-ami in Node) subscribes to events and writes to your analytics store; do not issue control commands here.
  5. Use AMI Originate to kick off outbound AI calls; the originated leg lands in dialplan and proceeds via Stasis or AGI.
  6. Pin AMI to TLS on port 5039 and rotate the manager.conf secret quarterly.
  7. Benchmark: AGI synchronous call latency should be under 50 ms p99; AMI event lag under 100 ms.
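Step 4 as a library-free sketch — panoramisk and asterisk-ami wrap this same wire format, and the frame parsing is the part worth understanding either way (the sink callback and credentials are placeholders):

```python
# Read-only AMI event tap: parse the event stream and hand dicts to analytics.
import socket

def parse_ami_events(buffer_text):
    """Split a decoded AMI stream into event dicts (frames end with a blank line)."""
    events = []
    for frame in buffer_text.split("\r\n\r\n"):
        fields = {}
        for line in frame.split("\r\n"):
            key, sep, value = line.partition(": ")
            if sep:
                fields[key] = value
        if fields:
            events.append(fields)
    return events

def tap(host, username, secret, sink):
    """Connect, log in, and forward each complete event frame to sink(dict)."""
    with socket.create_connection((host, 5038)) as sock:
        sock.sendall(f"Action: Login\r\nUsername: {username}\r\n"
                     f"Secret: {secret}\r\n\r\n".encode())
        buf = ""
        while chunk := sock.recv(4096).decode(errors="replace"):
            buf += chunk
            *frames, buf = buf.split("\r\n\r\n")  # keep any partial frame in buf
            for event in parse_ami_events("\r\n\r\n".join(frames) + "\r\n\r\n"):
                sink(event)
```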

Pitfalls

  • AGI process-per-call exhaustion: the OOM killer fires before you notice the load average.
  • AGI commands return synchronously; a slow Python AI call hangs the dialplan and the channel.
  • AMI events arrive in submission order but with no guaranteed millisecond ordering across multiple Asterisk nodes.
  • Mixing AGI control with AMI Redirect is a footgun; one will overwrite the other.
  • AMI permissions default to read+write+system; lock down to read for analytics consumers.

FAQ

Is AGI deprecated? No, but it is no longer the recommended path for new AI builds; ARI is. AGI remains supported for the life of the Asterisk 22 LTS branch.

Can AGI handle real-time AI conversations? Poorly. The synchronous model means every AI call blocks the channel thread. AudioSocket plus ARI is the modern equivalent.


What is AsyncAGI? AGI requests dispatched over AMI rather than spawned as processes; better than fork-AGI, worse than ARI. Decent middle ground for legacy modernization.

How do I rate-limit AMI events to my consumer? Filter in manager.conf with read=call,user instead of read=all, and set eventfilter to drop chatty events.
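A sketch of what that looks like in manager.conf — the account name, network ranges, and filtered events are illustrative:

```ini
; manager.conf — read-only analytics consumer, chatty events dropped
[analytics]
secret = rotate-me-quarterly
deny = 0.0.0.0/0.0.0.0
permit = 10.0.0.0/255.0.0.0        ; VPC only
read = call,user                   ; event classes this consumer may see
write =                            ; read-only: no actions accepted
eventfilter = !Event: RTCPSent     ; "!" prefix blacklists matching events
eventfilter = !Event: RTCPReceived
```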

Is AMI safe over the public internet? With TLS and IP allowlisting, yes, but most teams run it inside the VPC. Asterisk 22 supports manager.conf TLS natively.

Start a 14-day trial to migrate your AGI dialplan to managed AI voice, see pricing for the right tier, or contact us about Asterisk-to-Twilio migration playbooks.
