🤖 AI Summary
Despite growing interest in large language models (LLMs) for clinical decision support, their real-world reliability in complex, high-stakes healthcare settings remains poorly characterized, particularly regarding systematic failure modes in medication safety review.
Method: We conducted the first systematic evaluation of LLM-driven medication safety review on real-world NHS primary care electronic health records from 277 high-risk patients, employing a multi-model architecture (GPT-4, Claude, Llama) and expert double-blind adjudication.
Contribution/Results: We identify context-aware reasoning deficits, not knowledge gaps, as the primary failure driver, uncovering five failure patterns that are stable across models and patient populations, supported by 45 structured clinical cases. The system achieves 100% sensitivity (95% CI: 98.2–100) but only 83.1% specificity (95% CI: 72.7–90.1); crucially, all medication issues and corresponding interventions were fully and correctly identified for only 46.9% of patients. This work reveals fundamental robustness limitations in LLM reasoning for clinical deployment and establishes a reproducible failure-analysis framework and empirical benchmark for safety-critical medical AI.
📝 Abstract
Large language models (LLMs) often match or exceed clinician-level performance on medical benchmarks, yet very few are evaluated on real clinical data or examined beyond headline metrics. We present, to our knowledge, the first evaluation of an LLM-based medication safety review system on real NHS primary care data, with detailed characterisation of key failure behaviours across varying levels of clinical complexity. In a retrospective study using a population-scale EHR spanning 2,125,549 adults in NHS Cheshire and Merseyside, we strategically sampled patients to capture a broad range of clinical complexity and medication safety risk, yielding 277 patients after data-quality exclusions. An expert clinician reviewed these patients and graded system-identified issues and proposed interventions. Our primary LLM system showed strong performance in recognising when a clinical issue is present (sensitivity 100% [95% CI 98.2--100], specificity 83.1% [95% CI 72.7--90.1]), yet correctly identified all issues and interventions in only 46.9% [95% CI 41.1--52.8] of patients. Failure analysis reveals that, in this setting, failures stem from deficits in contextual reasoning rather than from missing medication knowledge, with five primary patterns: overconfidence under uncertainty, applying standard guidelines without adjusting for patient context, misunderstanding how healthcare is delivered in practice, factual errors, and process blindness. These patterns persisted across patient complexity and demographic strata, and across a range of state-of-the-art models and configurations. We provide 45 detailed vignettes that comprehensively cover all identified failure cases. This work highlights shortcomings that must be addressed before LLM-based clinical AI can be safely deployed, and it motivates larger-scale, prospective evaluations and deeper study of LLM behaviours in clinical contexts.