🤖 AI Summary
Large language models (LLMs) achieve high accuracy on standardized medical knowledge assessments (e.g., GPT-4o identifies the correct condition in 94.9% of test scenarios), yet their real-world effectiveness in public health consultation remains unverified and potentially overstated. Method: We conducted a randomized controlled trial with 1,298 participants, comparing real-time LLM assistance (GPT-4o, Llama 3, Command R+) against self-directed information seeking (control) for identifying common conditions and selecting appropriate management actions. Contribution/Results: With LLM assistance, participants identified the relevant condition in under 34.5% of cases and chose the correct management action in under 44.2%, rates statistically indistinguishable from the control group and far below the models' standalone performance. This is the first empirical demonstration that high performance on static medical benchmarks does not predict effective human–LLM interaction in authentic clinical decision support. The findings expose a critical clinical validity gap in current evaluation paradigms. We advocate human-in-the-loop randomized controlled trials as a prerequisite for deploying LLMs in healthcare applications, shifting assessment from isolated model capability to interactive real-world efficacy.
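As a rough illustration of the model-alone benchmark arm described above, the sketch below sends a vignette to a model and scores whether a gold-standard condition appears in the reply. The scenario text, gold label, keyword-match grading, and use of the OpenAI Python client are assumptions for illustration only, not the study's actual protocol.

```python
# Hypothetical sketch of a "model alone" evaluation: give each vignette to
# the model and check whether the gold-standard condition appears in its
# answer. Scenarios, labels, and the scoring rule are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

scenarios = [
    {"vignette": "Sudden severe headache with neck stiffness and fever...",
     "condition": "meningitis"},
    # ... the real study used ten vignettes
]

correct = 0
for s in scenarios:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Suggest the most likely condition."},
            {"role": "user", "content": s["vignette"]},
        ],
    )
    answer = reply.choices[0].message.content.lower()
    # Crude keyword match; actual grading would use expert review or rubrics.
    correct += s["condition"] in answer

print(f"model-alone condition accuracy: {correct / len(scenarios):.1%}")
```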
📝 Abstract
Global healthcare providers are exploring the use of large language models (LLMs) to provide medical advice to the public. LLMs now achieve nearly perfect scores on medical licensing exams, but this does not necessarily translate to accurate performance in real-world settings. We tested whether LLMs can assist members of the public in identifying underlying conditions and choosing a course of action (disposition) in ten medical scenarios, in a controlled study with 1,298 participants. Participants were randomly assigned to receive assistance from an LLM (GPT-4o, Llama 3, Command R+) or from a source of their choice (control). Tested alone, the LLMs completed the scenarios accurately, correctly identifying conditions in 94.9% of cases and the correct disposition in 56.3% on average. However, participants using the same LLMs identified relevant conditions in less than 34.5% of cases and chose the correct disposition in less than 44.2%, both no better than the control group. We identify user interaction as a key challenge to the deployment of LLMs for medical advice. Standard benchmarks for medical knowledge and simulated patient interactions do not predict the failures we find with human participants. Moving forward, we recommend systematic human user testing to evaluate interactive capabilities before public deployment in healthcare.
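To make the "no better than the control group" comparison concrete, here is a minimal sketch of a two-proportion z-test between an LLM-assisted arm and the control arm. The arm sizes and success counts are invented placeholders, and the normal-approximation test is an assumption; the paper's actual statistical analysis is not described in this abstract.

```python
# Illustrative check of "LLM-assisted accuracy is indistinguishable from
# control": a two-sided, two-proportion z-test. All counts are made-up
# placeholders, not study data.
from math import erf, sqrt

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for equality of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = (1+erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical arm sizes and diagnostic successes: one LLM arm vs. control.
z, p = two_proportion_ztest(x1=112, n1=325, x2=105, n2=324)
print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value means the accuracies
                                    # cannot be distinguished statistically
```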