Americans Are Using AI Between Doctor Visits. Nobody Knows If That's Safe.
By Hector Herrera | May 5, 2026 | Health
Growing numbers of Americans are turning to AI tools to supplement — and in some cases replace — in-person medical consultations, according to a new Gallup survey released this month. The trend is accelerating fast enough that healthcare systems, insurers, and regulators are now forced to answer a question no one has cleanly resolved: when AI gives a patient the wrong answer about their health, who is accountable?
This is not a fringe behavior. The Gallup data captures a behavioral shift that clinicians have been observing anecdotally for the past 18 months — patients arriving at appointments with AI-generated research, or skipping appointments entirely because an AI told them what they wanted to hear.
What the Survey Found
The Gallup survey found that a statistically significant share of Americans now consult AI tools between medical appointments to interpret symptoms, review diagnoses, or research treatment options. Key patterns from the data:
- Younger Americans (18-34) are disproportionately likely to use AI as a first step before scheduling any appointment at all
- Rural respondents showed higher AI consultation rates — consistent with lower physician density in those regions
- The behavior spans general-purpose chatbots (ChatGPT, Gemini) and a new wave of specialized health AI platforms entering the market
- Almost none of the tools being used carry FDA clearance for diagnostic use
The full percentage breakdowns are available to Gallup subscribers; the public summary establishes direction and statistical significance but withholds specific toplines.
Why People Are Doing This
Healthcare has never been perfectly accessible. Cost, geography, time, and stigma have always pushed some patients toward self-diagnosis: first nurse hotlines, then WebMD, now large language models. Three things have changed: conversational fluency, round-the-clock availability, and the confidence users report placing in AI responses.
An LLM can engage in a back-and-forth about your specific symptoms. It doesn't require an appointment, a copay, or a two-week wait. For someone in rural Montana without a nearby specialist, or an uninsured worker who can't take time off for a clinic visit, the appeal is not irrational. The problem is that the tool's fluency can mask its limitations.
The Real Risks
Diagnostic error. Large language models are not trained diagnosticians. They hallucinate — generating plausible-sounding but factually incorrect medical information. They cannot order bloodwork, examine a patient, or access medical history. A confident-sounding AI answer about chest pain that misses a cardiac warning sign is not a theoretical concern; it is a documented failure mode that emergency physicians are already encountering.
Delayed care. The more insidious risk is patients who wait too long because AI reassured them. Several emergency medicine departments have begun logging cases where AI advice ("this sounds like a muscle strain") preceded delayed presentation for conditions that required urgent intervention. The evidence here is anecdotal and emerging; no systematic study has yet quantified how often AI reassurance delays care.
The fluency trap. A doctor who isn't sure will often say so. LLMs tend toward confident prose. Uncertainty is harder to convey in a conversational AI response, and users tend to read confident prose as reliable information.
The Equity Dimension
The data cuts in two directions simultaneously.
On one hand, rural and underinsured Americans with limited physician access may genuinely benefit from AI health information as a stopgap. If AI can help someone understand that their symptom warrants a visit — versus assuming it doesn't — that is an access benefit.
On the other hand, AI health tools trained predominantly on English-language, majority-population data are known to perform worse for non-English speakers and for populations with different disease prevalence profiles. The gap AI could close may come with an accuracy gap that widens it for the populations who can least afford misdiagnosis.
The equity case for AI health access and the equity case against it are both real. Policy will need to account for both.
The Liability Vacuum
No clear legal framework governs what happens when an AI health recommendation causes patient harm. The FTC has issued warnings about unsubstantiated health claims by AI products. The FDA has cleared specific, narrowly defined AI diagnostic tools. But general-purpose LLMs answering health queries operate in regulatory gray territory — neither prohibited nor meaningfully regulated.
This is not a stable situation. The first major lawsuit attributing patient harm to AI health advice will force courts to assign liability somewhere — to the model developer, the platform, or the user who chose to consult AI instead of a physician. The outcome of that case will reshape how health AI is deployed.
What to Watch
The FDA is expected to release updated guidance on AI health tools in 2026. The critical question is whether that guidance covers general-purpose LLMs used for health queries, or only purpose-built medical AI. Meanwhile, watch for hospital systems and major insurers to release their own "sanctioned" AI health tools as a way to route patients toward clinically validated information while reducing their own liability exposure; Mayo Clinic, Cleveland Clinic, and several major payers already have such tools in development. Whether those tools gain adoption depends on whether they can compete on convenience, the one thing general-purpose AI currently wins on decisively.
Hector Herrera covers AI in healthcare and science for NexChron. Source: Gallup, May 2026.