AI in medicine is already very much “there”

As of February 2026, the debate over whether AI will replace doctors mirrors the earlier algo trading discussion: AI in medicine is already very much “there” — deeply integrated, rapidly scaling, and handling substantial parts of clinical work — but full replacement of human physicians isn’t happening anytime soon, and most experts say it won’t for the foreseeable future.

Instead, the overwhelming consensus from recent sources (including medical journals, major health systems, AMA surveys, and 2025–2026 reports) is augmentation > replacement. AI acts as a powerful co-pilot, teammate, or assistant, not a standalone doctor.

Current Reality in 2026 (What’s Already Happening)

  • Adoption is high and accelerating — ~66% of physicians used healthcare AI in 2024 (up sharply from 38% in 2023), per AMA data. By now, it’s normalized in many systems for tasks like:
      • Generating visit notes, discharge summaries, and care plans.
      • Summarizing research/evidence at point-of-care (tools like OpenEvidence).
      • Diagnostic support (e.g., better detection in radiology, lung function tests, breast cancer screening).
      • Reducing administrative burden (charting, prior authorizations).
      • Initial patient triage/symptom evaluation (e.g., Mass General Brigham’s Care Connect program, where GenAI handles history-taking, provisional diagnosis, and treatment suggestions — a doctor reviews and finalizes).
  • Performance edge in narrow tasks — in controlled studies, AI often outperforms or matches average physicians:
      • Optimal recommendations in ~77% of cases vs. 67% for doctors (recent independent reviews).
      • Fewer diagnostic/treatment errors in large-scale pilots (e.g., the Kenya urgent-care collaboration with OpenAI).
      • Radiology: AI spots more issues without extra false positives, and jobs in the field are still growing (projected +5% in the US through 2034).
  • Bold implementations — some places are experimenting aggressively:
      • AI agents handling first drafts of reasoning and care plans, and even virtual “avatars” for rural/underserved areas (e.g., proposals from CMS leadership).
      • Predictions that 10–25% of clinical work could be automated within the next 5 years.

Why Full Replacement Is Unlikely (Key Barriers)

Most sources emphasize these human elements AI can’t replicate yet (or perhaps ever fully):

  • Empathy, bedside manner, and building trust.
  • Complex ethical judgment, uncertainty handling, and accountability (who’s liable if AI errs?).
  • Physical exams and hands-on procedures (even robotic surgery still requires human oversight).
  • Holistic patient context (social determinants, nuances beyond data).
  • Liability and regulation — Laws (e.g., Arizona’s 2025 bill requiring human review for AI-influenced denials) and FDA shifts keep final decisions with clinicians.

Many physicians feel “existentially threatened” by AI’s diagnostic speed and reasoning, but the trajectory is clear: AI takes over the routine, repetitive parts, while doctors focus on high-level judgment, coordination, and human connection.

Parallels to Algo Trading

Just like “algo trading is already there” (computers dominate volume, but top human quants + better models still win), in medicine:

  • AI is already dominating narrow, data-heavy tasks.
  • The real edge comes from doctors who master AI tools (those who don’t may fall behind).
  • Full “replacement” talk is often hype — the future is hybrid: AI-augmented medicine, not doctor-less hospitals.

Bottom line in 2026: No, AI won’t replace doctors — but it will replace (or drastically reduce) much of the drudgery doctors currently do, making good medicine faster, more accurate, and (hopefully) more humane. Doctors who embrace it will thrive; the field as a whole is transforming, not disappearing.
