
Clinical Decision Support Agents: Where FDA Draws the Line in 2026

FDA's 2026 guidance on AI-based clinical decision support clarifies what counts as regulated software, and what that means for builders and providers.

What's Regulated and What's Not

The FDA's 2024-2025 guidance on Clinical Decision Support (CDS) software, with 2026 refinements, draws clearer lines between regulated medical devices and unregulated tools. For AI-based CDS, the four-criteria test from the 21st Century Cures Act remains the framework. The 2026 refinement clarified how the test applies to LLM-based CDS specifically.

This piece walks through what the test means in practice and what 2026 deployments are doing on each side of the line.

The Four-Criteria Test

flowchart TD
    A[1. Not for time-critical decisions] --> Pass1
    B[2. Displays supporting evidence] --> Pass1
    C[3. Independent review possible] --> Pass1
    D[4. Used by HCPs] --> Pass1
    Pass1{All four met?}
    Pass1 -->|Yes| NoFDA[Not regulated as a device]
    Pass1 -->|No| FDA[Regulated as medical device software]

If all four criteria are met, the software is not regulated as a medical device; if any one criterion fails, it is regulated as medical device software.

The 2026 refinement clarified that LLMs presenting evidence in summary form may still satisfy criterion 2 if the underlying source material is accessible. Earlier interpretations were stricter.
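One way to make the four-criteria test concrete is a small checklist function. This is a sketch for internal triage only, not FDA terminology; the field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class CDSProfile:
    """Characteristics of a CDS tool relevant to the four-criteria test (illustrative)."""
    time_critical: bool              # criterion 1: informs time-critical decisions?
    shows_evidence: bool             # criterion 2: displays supporting evidence/sources
    independently_reviewable: bool   # criterion 3: clinician can independently review the basis
    hcp_user: bool                   # criterion 4: intended user is a healthcare professional

def is_device(profile: CDSProfile) -> bool:
    """Return True if the tool fails the four-criteria test, i.e. is regulated as a device."""
    meets_all = (
        not profile.time_critical
        and profile.shows_evidence
        and profile.independently_reviewable
        and profile.hcp_user
    )
    return not meets_all
```

A literature summarizer with citations for an MD would pass all four criteria (`is_device` returns `False`); an opaque diagnostic recommender fails criterion 2 and comes back regulated.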

What's Not Regulated (Examples)

  • An LLM that summarizes medical literature for an MD with citations
  • A tool that compiles a patient's chart for the MD to review
  • Reference Q&A on dosing guidelines
  • Documentation drafting (visit notes, discharge summaries) for MD review and signoff
  • Patient-facing scheduling and triage that does not give clinical advice

These are widely deployed in 2026 across health systems.

What Is Regulated (Examples)

  • Software that generates a diagnostic recommendation without exposing the basis to the clinician
  • Software that issues alerts in time-critical scenarios where the clinician cannot independently review
  • Software that recommends specific treatments without supporting evidence
  • Direct-to-patient diagnostic AI

These require FDA clearance (typically 510(k) or De Novo) and are subject to ongoing post-market obligations.


The 2026 LLM-Specific Wrinkle

LLMs raise three specific issues that the 2024 guidance addressed and the 2026 refinement sharpened:

  • Confidence calibration: an LLM giving a confident wrong answer is more dangerous than a low-confidence right one. FDA expects calibration evaluation.
  • Currency of training data: medical knowledge evolves; clinical AI must stay current. FDA has signaled expectations for update cadence.
  • Performance drift: LLMs can degrade as inputs shift. Post-market monitoring is expected for cleared devices.
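The calibration point is measurable. A standard metric is Expected Calibration Error (ECE): bin predictions by stated confidence and compare each bin's confidence to its actual accuracy. A minimal sketch, assuming you have per-answer confidences and correctness labels from an evaluation set:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: weighted average |accuracy - confidence| over bins.

    confidences: model-stated confidence per answer, in (0, 1]
    correct: 1 if the answer was right, 0 otherwise
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # weight each bin by its share of samples
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```

A well-calibrated model scores near zero; a model that says "90% sure" while being right half the time scores around 0.4, which is exactly the confident-wrong failure mode regulators worry about.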

What Deployments Look Like in 2026

flowchart TB
    NoFDA[Non-device deployments] --> Doc[Documentation assistance]
    NoFDA --> Lit[Literature summaries]
    NoFDA --> Adm[Administrative]
    Cleared[FDA-cleared] --> Diag[Specific diagnostic AI]
    Cleared --> Alert[Critical alerting]
    Cleared --> Tri[Triage automation]

Most deployed clinical LLM applications in 2026 fall on the non-device side: documentation help, literature search, administrative workflow. The cleared-device side has specific point solutions (radiology AI, sepsis prediction, etc.) that have gone through formal regulatory review.

A Health System's Decision Framework

For a health system evaluating a clinical LLM application in 2026:

  1. Apply the four-criteria test honestly
  2. If non-device, check vendor-supplied evidence of clinical accuracy
  3. Verify HIPAA, BAA, and data privacy
  4. Confirm physician sign-off workflow before any clinical action
  5. Plan for post-deployment monitoring of accuracy and clinical impact
  6. Have a rollback path if quality degrades

If device-cleared, ensure the deployment matches the cleared indications for use; off-label deployment of cleared software is a regulatory issue.
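The six steps above can be encoded as a deployment gate that names any unmet prerequisite before go-live. A minimal sketch; the check names are invented for illustration, not a standard schema:

```python
def deployment_gate(checks: dict) -> list:
    """Return the unmet prerequisites from the six-step framework.

    An empty list means all prerequisites are satisfied.
    """
    required = [
        "four_criteria_assessed",        # 1. four-criteria test applied honestly
        "clinical_accuracy_evidence",    # 2. vendor evidence of clinical accuracy reviewed
        "hipaa_baa_verified",            # 3. HIPAA, BAA, and data privacy verified
        "physician_signoff_workflow",    # 4. physician sign-off before any clinical action
        "monitoring_plan",               # 5. post-deployment accuracy/impact monitoring
        "rollback_path",                 # 6. rollback path if quality degrades
    ]
    return [step for step in required if not checks.get(step, False)]
```

The useful property is that the gate fails closed: any check that is missing or unconfirmed blocks deployment rather than being silently assumed true.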

Liability Considerations

Even non-device CDS carries liability if it produces wrong recommendations that clinicians rely on. The 2026 best practices:

  • All clinical decisions remain with the clinician of record
  • AI outputs are framed as suggestions with explicit "verify before acting" language
  • Documentation of AI use in the EHR
  • Vendor indemnification for AI errors (negotiated case-by-case)

Patient-Facing Deployments

Direct-to-patient AI in clinical contexts is a different category. The 2026 deployments that are working:

  • Symptom triage with explicit "this is not a diagnosis" framing and clear path to a clinician
  • Medication reminders and adherence support
  • Health-literacy explanations of clinician-prepared diagnoses
  • Appointment scheduling and pre-visit intake

What's not deployed (or not working): autonomous symptom-to-diagnosis flows, medication recommendations without clinician review, autonomous treatment planning. These hit the unregulated/regulated boundary in dangerous ways.

What's Coming

  • Specific FDA pathways for adaptive (continuously updating) AI
  • Clearer expectations on post-market performance monitoring
  • More CDS-cleared LLM applications
  • State-level regulatory variation as some states adopt their own rules
