Americans' Openness to AI in Healthcare Drops 10 Points in Two Years
By Hector Herrera | April 26, 2026 | Health
Only 42% of Americans are now open to AI being used in their healthcare — down from 52% in 2024 — according to a new national survey covered by U.S. News & World Report on April 7. That 10-point drop happened over two years, a period during which hospitals and health systems have been accelerating AI deployment across clinical workflows, not pulling back. The trust gap and the deployment gap are moving in opposite directions.
The disconnect matters because patient trust is not a soft metric in healthcare. It affects whether patients engage honestly with their providers, whether they follow treatment recommendations, and whether they seek care in the first place. An AI-assisted diagnostic system only works if patients show up.
The Trust Numbers in Context
The drop from 52% to 42% is the headline, but the nuance inside the survey data is equally important.
25% of Americans have already used an AI chatbot or tool for health information — as a supplement to, not a replacement for, clinical care. That figure has grown meaningfully from prior years. Americans are using AI for health purposes even as their openness to AI being used on them in clinical settings is falling.
This distinction matters. People are comfortable using AI as a personal health resource — to understand a diagnosis, look up a medication interaction, or research treatment options. They are less comfortable with AI being embedded in the clinical decisions healthcare providers make about them, often with little visibility into when or how it's happening.
The trust erosion appears to be concentrated in the clinical application category: AI in diagnosis, AI in treatment planning, AI in triage. The consumer-tool category — AI assistants, symptom checkers, chatbots — shows more continued adoption.
Why Trust Is Eroding
The survey doesn't isolate a single cause, but several factors are likely contributing:
High-profile AI errors in healthcare settings have accumulated. Missed diagnoses, AI systems that performed differently across demographic groups, and documentation AI tools that generated inaccuracies have all received media coverage. Each incident adds to a mental model of AI in healthcare as unreliable.
Transparency is low. Most patients have no way of knowing when AI is being used in their care. A radiologist reviewing an AI-flagged scan, a clinical decision support tool suggesting a medication, an AI-generated prior authorization — patients rarely know these systems are in the loop. That opacity creates distrust when it's eventually disclosed.
The "AI replacing doctors" narrative persists. Despite industry messaging that AI assists clinicians rather than replacing them, patient concern about AI substitution for human judgment remains a consistent theme in healthcare AI surveys.
What Healthcare Systems Are Actually Deploying
The survey data arrives as hospitals and health systems are investing more in AI, not less. Current AI deployments in clinical settings span:
- Documentation AI: Tools that generate clinical notes from physician-patient conversations, reducing administrative burden. Widespread and generally well-received by clinicians.
- Diagnostic AI: Radiology, pathology, and dermatology AI that flags abnormalities in imaging. FDA-cleared products number in the hundreds.
- Clinical decision support: AI embedded in electronic health records that surfaces drug interaction alerts, care pathway recommendations, and risk scores.
- Prior authorization and administrative AI: Back-office automation that has attracted scrutiny for denial rate increases in some applications.
The gap between where deployment is happening and what patients are aware of is significant. Most AI in healthcare is invisible to patients — embedded in workflows they don't see.
The Implications for Health AI Adoption
The trust gap creates a practical problem for healthcare AI deployment at scale. Patient acceptance — not just clinician adoption and regulatory clearance — is increasingly a requirement for sustained use.
For healthcare systems deploying AI:
- Proactive patient disclosure — communicating when AI is used in care — is an emerging best practice and may become a regulatory requirement
- AI that visibly reduces wait times, improves care coordination, or enhances patient communication is more likely to build trust than AI that is invisible in clinical decision-making
- Demographic disparities in AI performance need to be addressed before broad deployment; published evidence that AI performs differently by race, age, or sex erodes trust across all patient populations
For patients:
- AI health tools used as supplements to care — for information, symptom understanding, and care navigation — carry different risk profiles than AI embedded in clinical decisions
- Questions about AI use in your care are appropriate; healthcare providers should be able to tell you when and how AI is being used
What to Watch
Look for the next wave of national surveys in late 2026 to see whether trust continues to erode or stabilizes. The introduction of federal transparency requirements for clinical AI — which have been discussed at FDA and HHS but not finalized — would likely improve patient visibility and could reverse some of the trust decline.
The trust data also creates pressure on healthcare AI vendors to invest in explainability features that allow both clinicians and patients to understand how AI-assisted decisions are reached.
Source: U.S. News & World Report