
    Human in the Loop or Human in the Way? The Debate Over AI Autonomy at HIMSS 2026

    March 13, 2026

    LAS VEGAS — If 2024 was the year healthcare learned to let AI listen, 2026 may be the year it decides how much it will let it act.

    At the HIMSS 2026 Physicians Community Roundtable, clinical leaders gathered to debate the next phase of healthcare AI: the transition from ambient AI, which documents clinical encounters, to agentic AI, systems capable of taking action within healthcare workflows.

    Moderated by Dr. Ed Lee, Chief Medical Officer at Nabla, the discussion reflected a broader shift in the industry: from asking whether AI can support clinicians to asking how much autonomy it should have.

    From Ambient to Agentic

    Over the past two years, ambient AI tools that generate clinical notes from patient encounters have rapidly gained adoption, helping reduce documentation burden and after-hours charting.

The next frontier, panelists said, is agentic AI: systems designed to complete multi-step tasks across clinical workflows.

    “Agentic AI actually does work,” said Doug McKee, MD, Associate CMIO at Orlando Health. “You assign it a goal, and it carries out the steps needed to complete it.”

    Dr. Bita Behrouzi, an internist at Maine Medical Center, described this evolution as an “autonomy ladder”: AI systems that first read clinical information, then draft notes, stage actions such as orders or follow-ups, and eventually execute tasks within defined guardrails.

For now, most healthcare deployments remain on the early rungs of that ladder: reading and drafting.

    When “Human in the Loop” Becomes a Bottleneck

    But the conversation also surfaced a more provocative question: does keeping humans in the loop always improve outcomes?

Dr. Behrouzi pointed to emerging research suggesting that in some cases, AI systems can outperform human-AI teams. Not because humans are inherently worse decision makers, but because most clinicians have received little training on how to work effectively with AI systems.

    Without that training, the result can be counterintuitive: humans overriding accurate AI suggestions, misinterpreting outputs, or spending additional time navigating unfamiliar tools.

    Behrouzi noted that her organization saw this firsthand when new AI capabilities were introduced. Early adoption actually increased time spent in the chart as clinicians learned how to integrate the technology into their workflow.

    The lesson, she argued, isn’t that humans should be removed from the loop.

It’s that human-AI collaboration requires its own kind of training and operational design, something healthcare has only begun to address.

    The Technology Behind the Shift

    Advances in AI architecture are helping drive interest in this next phase.

    During the discussion, Dr. Lee pointed to the emergence of world models, AI systems designed to reason about real-world environments rather than simply generate text.

    The concept gained attention recently when Nabla partner, Advanced Machine Intelligence (AMI) Labs, chaired by AI pioneer Yann LeCun, announced a $1 billion seed round to develop world model technologies capable of reasoning through complex systems.

    For healthcare, such models could enable AI to better understand how clinical workflows unfold and anticipate next steps across care teams and systems.

    The Real-World Use Cases

    While the idea of autonomous AI can sound futuristic, panelists suggested the most immediate impact may appear in operational areas.

Ryan Sadeghian, MD, CMIO at the University of Toledo, described healthcare’s revenue cycle as an “asymmetric arms race,” with insurers increasingly using AI to analyze claims and deny payments more quickly.

    In response, providers are exploring AI tools that automate prior authorizations, documentation improvement, and administrative workflows.

    Others highlighted patient safety and care coordination use cases, such as AI agents that monitor abnormal lab trends or ensure follow-up appointments are scheduled.

    Governance Becomes the Key Question

    Despite optimism about the technology, panelists emphasized that the central challenge ahead is governance, not capability.

    Healthcare organizations are unlikely to embrace fully autonomous systems in the near term. Instead, most leaders envision a human-in-the-loop model, where AI drafts, stages, or recommends actions while clinicians retain final authority.

    As one panelist summarized, the question facing the industry is no longer whether AI can support clinical workflows.

It’s how much autonomy healthcare organizations are prepared to give it, and how to govern that autonomy safely.

The good news is that healthcare has already begun building the guardrails. The governance, monitoring, and human-in-the-loop oversight developed for ambient AI deployments now offer a roadmap for agentic systems. The next phase won’t be defined by whether AI can act, but by how thoughtfully health systems apply the lessons they’ve already learned.
