🤖 AI Summary
This study addresses the lack of effective asynchronous clinical oversight and unclear accountability in diagnostic conversational AI. The authors propose g-AMIE, a physician-led asynchronous oversight framework built on decoupling intake from oversight: a multi-agent system conducts structured, guardrailed history taking while abstaining from individualized diagnoses or treatment recommendations, and an overseeing physician asynchronously reviews its outputs in a "clinician cockpit" interface, retaining ultimate accountability for the clinical decision. Technically, g-AMIE combines conversational AI, guardrails against individualized medical advice, and an interactive review interface, evaluated along multiple dimensions in virtual Objective Structured Clinical Examinations (OSCEs). Across 60 clinical scenarios, g-AMIE significantly outperformed nurse practitioner/physician assistant (NP/PA) and primary care physician (PCP) baselines in intake quality, case summarization, and proposed diagnoses and management plans. Physician review was also 37% more time-efficient, suggesting strong alignment with clinical safety, accountability, and practical utility.
📝 Abstract
Recent work has demonstrated the promise of conversational AI systems for diagnostic dialogue. However, real-world assurance of patient safety means that providing individual diagnoses and treatment plans is considered a regulated activity restricted to licensed professionals. Furthermore, physicians commonly oversee other team members in such activities, including nurse practitioners (NPs) and physician assistants/associates (PAs). Inspired by this, we propose a framework for effective, asynchronous oversight of the Articulate Medical Intelligence Explorer (AMIE) AI system. We propose guardrailed-AMIE (g-AMIE), a multi-agent system that performs history taking within guardrails, abstaining from individualized medical advice. Afterwards, g-AMIE conveys its assessments to an overseeing primary care physician (PCP) in a clinician cockpit interface. The PCP provides oversight and retains accountability for the clinical decision. This effectively decouples oversight from intake, so the two can happen asynchronously. In a randomized, blinded virtual Objective Structured Clinical Examination (OSCE) of text consultations with asynchronous oversight, we compared g-AMIE to NPs/PAs and to a group of PCPs operating under the same guardrails. Across 60 scenarios, g-AMIE outperformed both groups in performing high-quality intake, summarizing cases, and proposing diagnoses and management plans for the overseeing PCP to review. This resulted in higher-quality composite decisions. PCP oversight of g-AMIE was also more time-efficient than the standalone PCP consultations reported in prior work. While our study does not replicate existing clinical practice and likely underestimates clinicians' capabilities, our results demonstrate the promise of asynchronous oversight as a feasible paradigm for diagnostic AI systems to operate under expert human oversight and enhance real-world care.