
65% of U.S. Doctors Are Using This AI Tool — Most Patients Have No Idea

Nearly two-thirds of U.S. physicians consulted OpenEvidence — an AI clinical information platform — during roughly 27 million patient encounters in April 2026. Most patients were never told.


By Hector Herrera | May 14, 2026 | Health

Nearly two-thirds of U.S. physicians consulted OpenEvidence — an AI clinical information platform — during roughly 27 million patient encounters in April 2026. Most patients were never told. That scale of adoption, happening largely below the radar of patients and policymakers alike, is forcing a hard question into the open: when AI becomes a de facto tool in clinical decision-making, what does informed consent actually mean?

This isn't a fringe experiment. At 65% physician reach in a single month, OpenEvidence has achieved something few clinical tools have managed in such a short window: near-ubiquitous adoption among practicing U.S. doctors, at a scale that rivals or exceeds UpToDate, the reference platform that took more than a decade to become standard clinical practice.

What OpenEvidence Is

OpenEvidence is an AI platform built exclusively for licensed medical professionals. Unlike general-purpose AI chatbots, it is trained on peer-reviewed clinical literature, drug databases, treatment protocols, and medical guidelines. Physicians use it primarily to look up drug information, check dosages, identify drug interactions, and reference treatment options — typically in real time during or immediately before a patient encounter.

The platform requires verification of a valid National Provider Identifier (NPI) number — the unique ID issued to all licensed U.S. healthcare providers — before granting access. That gatekeeping mechanism distinguishes it from consumer health AI and positions it as a professional clinical resource. It is, in practical terms, an AI-powered evolution of the medical reference database — faster and more conversational than legacy tools, but serving the same function doctors have always relied on references for.

The April 2026 Numbers

The figures, reported by NBC News, describe a platform that has crossed from early adoption into mainstream clinical use:

  • ~65% of practicing U.S. physicians used OpenEvidence during April 2026
  • Nearly 27 million clinical encounters involved an OpenEvidence consultation
  • Use cases center on drug lookups, dosage verification, and protocol reference

The United States has approximately 1 million active physicians, so a 65% monthly active rate implies roughly 650,000 doctors using the platform; spread across nearly 27 million encounters, that works out to about 40 AI-assisted encounters per user over the month. If that figure holds on closer inspection, it would represent adoption velocity with few precedents in clinical technology. UpToDate, now owned by Wolters Kluwer and considered the gold standard for clinical decision support, was built over more than a decade of structured hospital partnerships. OpenEvidence has moved faster, and it has done so largely without institutional procurement processes.

The Transparency Gap

Here is where it gets more complicated. According to NBC News reporting, physicians are typically using OpenEvidence without notifying patients that an AI system is playing a role in the clinical interaction.

That is not necessarily deceptive. Physicians use reference materials constantly — textbooks, drug package inserts, clinical databases — and have never been expected to narrate every tool consulted. The professional and legal standard has always been that the physician bears responsibility for the clinical judgment, regardless of what resources informed it.

But AI changes the nature of that tool in ways that matter:

  • AI systems can be wrong in non-obvious ways. A database lookup returns a recorded fact. AI inference can produce a plausible-sounding answer that is subtly or significantly incorrect.
  • AI errors are not auditable at the point of care. A physician can immediately spot an outdated textbook entry; an AI hallucination or training cutoff error is far harder to catch without deep knowledge of what the model was trained on.
  • Informed consent frameworks were not written for algorithmic decision support. Most state and federal medical disclosure requirements predate AI clinical tools at this scale by decades.

Patient advocates have argued that when AI contributes meaningfully to a clinical recommendation — especially on dosing, diagnosis selection, or treatment protocol — patients have a legitimate interest in knowing it.

What Physicians Say

The physicians using OpenEvidence describe it the way an earlier generation described UpToDate: a faster, more accessible way to verify clinical information in real time. In a busy clinical setting, pulling up a drug interaction check in seconds versus minutes genuinely matters. The speed advantage is real.

The concern from critics isn't that physicians shouldn't use AI reference tools. It's that the absence of any transparency standard leaves patients with no way to assess what role AI played in their care — and no mechanism to ask for that information in a structured way.

The Regulatory Gap

No current U.S. federal or state regulation specifically requires physicians to disclose the use of AI clinical decision-support tools to patients. The FDA regulates AI and machine learning as medical devices in some contexts — primarily when the AI makes autonomous determinations, such as flagging a radiology image — but general reference tools that physicians consult fall into a grayer zone.

OpenEvidence doesn't diagnose. It doesn't generate orders. But at 27 million encounters per month, its outputs are influencing clinical reasoning at a scale that few regulated medical devices approach.

The question now circulating in policy and bioethics circles: should AI clinical reference tools require the same transparency as other technologies that influence care decisions?

What to Watch

Several state legislatures are exploring AI disclosure requirements in healthcare settings. The American Medical Association has not yet issued formal guidance on whether physicians should disclose AI clinical reference use to patients, though that conversation is underway. OpenEvidence's April numbers will likely accelerate the timeline for formal guidance.

The deeper issue isn't whether OpenEvidence is clinically safe — for the reference use cases physicians are applying it to, it likely performs at or above legacy databases. The issue is that medicine has always operated on informed consent, and the definition of "informed" is now being stress-tested by tools patients cannot see.

Hector Herrera covers AI in health, business, and policy for NexChron.

Key Takeaways

  • Roughly 65% of practicing U.S. physicians used OpenEvidence in April 2026, across nearly 27 million clinical encounters.
  • Most patients were not told an AI tool played a role in their care.
  • AI reference tools can be wrong in non-obvious ways, and those errors are not auditable at the point of care.
  • No current U.S. federal or state regulation requires physicians to disclose AI clinical decision-support use to patients.


Written by

Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
