Clinical Decision Support Agents: Where FDA Draws the Line in 2026
FDA's 2026 guidance on AI-based clinical decision support clarifies which software is regulated as a medical device, and what that means for builders and providers.
What's Regulated and What's Not
The FDA's 2024-2025 guidance on Clinical Decision Support (CDS) software, with 2026 refinements, draws clearer lines between regulated medical devices and unregulated tools. For AI-based CDS, the four-criteria test from the 21st Century Cures Act remains the framework. The 2026 refinement clarified how the test applies to LLM-based CDS specifically.
This piece walks through what the test means in practice and what 2026 deployments are doing on each side of the line.
The Four-Criteria Test
flowchart TD
A[1. Not for time-critical decisions] --> Pass1
B[2. Displays supporting evidence] --> Pass1
C[3. Independent review possible] --> Pass1
D[4. Used by HCPs] --> Pass1
Pass1{All four met?}
Pass1 -->|Yes| NoFDA[Not regulated as a device]
Pass1 -->|No| FDA[Regulated as medical device software]
If all four criteria are met, the software is not regulated as a medical device under FDA's framework. If any one criterion fails, it is.
The 2026 refinement clarified that LLMs presenting evidence in summary form may still satisfy criterion 2 if the underlying source material is accessible. Earlier interpretations were stricter.
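The pass/fail logic of the four-criteria test can be sketched as a simple check. This is an illustrative model only: the `CDSProfile` type and its field names are not from the guidance, and real device-status determinations turn on intended use, not boolean flags.

```python
from dataclasses import dataclass

@dataclass
class CDSProfile:
    """Hypothetical profile of a CDS tool against the four Cures Act criteria."""
    not_time_critical: bool   # 1. not intended for time-critical decisions
    displays_evidence: bool   # 2. displays the basis for its recommendations
    independent_review: bool  # 3. clinician can independently review that basis
    hcp_user: bool            # 4. intended user is a healthcare professional

def is_device(p: CDSProfile) -> bool:
    """Failing any one criterion means the software is regulated as a device."""
    return not (p.not_time_critical and p.displays_evidence
                and p.independent_review and p.hcp_user)

# A literature-summarization LLM with accessible citations, used by an MD:
summarizer = CDSProfile(True, True, True, True)
print(is_device(summarizer))  # False: not regulated as a device
```

Note the asymmetry the flowchart encodes: the test is conjunctive, so a single failed criterion (say, an opaque recommendation that defeats independent review) flips the whole determination.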
What's Not Regulated (Examples)
- An LLM that summarizes medical literature for an MD with citations
- A tool that compiles a patient's chart for the MD to review
- Reference Q&A on dosing guidelines
- Documentation drafting (visit notes, discharge summaries) for MD review and signoff
- Patient-facing scheduling and triage that does not give clinical advice
These are widely deployed in 2026 across health systems.
What Is Regulated (Examples)
- Software that generates a diagnostic recommendation without exposing the basis to the clinician
- Software that issues alerts in time-critical scenarios where the clinician cannot independently review
- Software that recommends specific treatments without supporting evidence
- Direct-to-patient diagnostic AI
These require FDA clearance (typically 510(k) or De Novo) and are subject to ongoing post-market obligations.
The 2026 LLM-Specific Wrinkle
LLMs raise three specific issues that the 2024 guidance addressed and the 2026 refinement sharpened:
- Confidence calibration: an LLM giving a confident wrong answer is more dangerous than a low-confidence right one. FDA expects calibration evaluation.
- Currency of training data: medical knowledge evolves; clinical AI must stay current. FDA has signaled expectations for update cadence.
- Performance drift: LLMs can degrade as inputs shift. Post-market monitoring is expected for cleared devices.
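Calibration evaluation, the first item above, is commonly quantified with expected calibration error (ECE): bin predictions by stated confidence and measure how far accuracy diverges from confidence in each bin. The sketch below is a generic ECE implementation, not a method specified by FDA.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin-size-weighted average of |accuracy - confidence| per
    confidence bin. Lower is better-calibrated; 0 is perfect."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # weight = fraction of samples in this bin
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Toy example: answers stated at 80% confidence, correct 8 times out of 10
conf = [0.8] * 10
hits = [1] * 8 + [0] * 2
print(expected_calibration_error(conf, hits))  # 0.0 (perfectly calibrated)
```

The failure mode the guidance targets is the opposite case: high stated confidence with much lower accuracy, which shows up as a large ECE.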
What Deployments Look Like in 2026
flowchart TB
NoFDA[Non-device deployments] --> Doc[Documentation assistance]
NoFDA --> Lit[Literature summaries]
NoFDA --> Adm[Administrative]
Cleared[FDA-cleared] --> Diag[Specific diagnostic AI]
Cleared --> Alert[Critical alerting]
Cleared --> Tri[Triage automation]
Most deployed clinical LLM applications in 2026 fall on the non-device side: documentation help, literature search, administrative workflow. The cleared-device side has specific point solutions (radiology AI, sepsis prediction, etc.) that have gone through formal regulatory review.
A Health System's Decision Framework
For a health system evaluating a clinical LLM application in 2026:
- Apply the four-criteria test honestly
- If non-device, check vendor-supplied evidence of clinical accuracy
- Verify HIPAA, BAA, and data privacy
- Confirm physician sign-off workflow before any clinical action
- Plan for post-deployment monitoring of accuracy and clinical impact
- Have a rollback path if quality degrades
If device-cleared, ensure the deployment matches the cleared indications for use; off-label deployment of cleared software is a regulatory issue.
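The evaluation steps above amount to a gating checklist: every item must be satisfied before go-live. A minimal sketch, with item names invented for illustration:

```python
def unmet_items(checklist: dict[str, bool]) -> list[str]:
    """Return the checklist items still outstanding; deploy only when empty."""
    return [item for item, done in checklist.items() if not done]

# Hypothetical pre-deployment state for a non-device clinical LLM tool
checklist = {
    "four_criteria_test_applied": True,
    "vendor_accuracy_evidence_reviewed": True,
    "hipaa_baa_verified": True,
    "physician_signoff_workflow_confirmed": True,
    "post_deployment_monitoring_planned": True,
    "rollback_path_defined": False,
}

print(unmet_items(checklist))  # ['rollback_path_defined']
```

Treating the framework as all-or-nothing mirrors the four-criteria test itself: one outstanding item blocks deployment rather than being averaged away.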
Liability Considerations
Even non-device CDS carries liability if it produces wrong recommendations relied upon by clinicians. The 2026 best practice:
- All clinical decisions remain with the clinician of record
- AI outputs are framed as suggestions with explicit "verify before acting" language
- Documentation of AI use in the EHR
- Vendor indemnification for AI errors (negotiated case-by-case)
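The second and third practices above (explicit "verify before acting" framing plus documentation of the AI's basis) can be enforced at the output layer rather than left to prompt behavior. A minimal sketch; the function and field names are hypothetical, not from any vendor API:

```python
def frame_ai_output(suggestion: str, sources: list[str]) -> str:
    """Wrap every AI suggestion bound for the EHR with verify-before-acting
    language, its supporting sources, and the clinician-of-record disclaimer."""
    cited = "\n".join(f"  - {s}" for s in sources)
    return (
        "AI-GENERATED SUGGESTION -- verify before acting.\n"
        f"{suggestion}\n"
        f"Supporting sources:\n{cited}\n"
        "Final clinical decision rests with the clinician of record."
    )

print(frame_ai_output(
    "Consider dose adjustment per renal function.",
    ["Hospital renal dosing guideline, 2025 rev."],
))
```

Framing at the output layer means the disclaimer survives even if a model update changes how the LLM phrases its own answers.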
Patient-Facing Deployments
Direct-to-patient AI in clinical contexts is a different category. The 2026 deployments that are working:
- Symptom triage with explicit "this is not a diagnosis" framing and clear path to a clinician
- Medication reminders and adherence support
- Health-literacy explanations of clinician-prepared diagnoses
- Appointment scheduling and pre-visit intake
What's not deployed (or not working): autonomous symptom-to-diagnosis flows, medication recommendations without clinician review, autonomous treatment planning. These hit the unregulated/regulated boundary in dangerous ways.
What's Coming
- Specific FDA pathways for adaptive (continuously updating) AI
- Clearer expectations on post-market performance monitoring
- More CDS-cleared LLM applications
- State-level regulatory variation as some states adopt their own rules
Sources
- FDA CDS guidance — https://www.fda.gov/regulatory-information/search-fda-guidance-documents
- 21st Century Cures Act — https://www.fda.gov
- FDA AI/ML Action Plan — https://www.fda.gov/medical-devices
- "Predetermined Change Control Plan" FDA — https://www.fda.gov
- AMA AI policy — https://www.ama-assn.org