Evaluating Medical LLMs by Levels of Autonomy: A Survey Moving from Benchmarks to Applications

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the misalignment between benchmark performance and real-world safety/reliability of medical large language models (LLMs) in clinical practice. We propose an autonomy-level assessment framework (L0–L3), inspired by autonomous driving taxonomy, which maps clinical tasks—ranging from information assistance and integration to decision support and agent-based execution—to corresponding risk levels. Each level is rigorously defined by operational boundaries and quantifiable fault-tolerance thresholds. By integrating established benchmarks with clinical risk dimensions, we construct an interpretable, regulatory-aware, hierarchical evaluation standard. Our key contribution is the first evidence-generation pathway bridging laboratory benchmarks to clinically trustworthy LLM deployment: it explicitly links evaluation outcomes to regulatory requirements and risk mitigation strategies, thereby providing a practical, actionable methodology for safe, responsible clinical adoption of medical LLMs.
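The L0–L3 framework described above can be pictured as a mapping from autonomy levels to permitted task types and fault-tolerance thresholds. A minimal sketch follows; the level names track the summary, but the example tasks and threshold values are illustrative assumptions, not figures from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyLevel:
    level: int
    name: str
    example_task: str           # illustrative, not from the paper
    max_error_rate: float       # assumed fault-tolerance threshold

# Hypothetical instantiation of the L0-L3 taxonomy: higher autonomy,
# stricter tolerance.
LEVELS = [
    AutonomyLevel(0, "information assistance", "answering factual queries", 0.05),
    AutonomyLevel(1, "information integration", "summarizing patient records", 0.02),
    AutonomyLevel(2, "decision support", "suggesting differential diagnoses", 0.01),
    AutonomyLevel(3, "agent-based execution", "drafting orders under supervision", 0.001),
]

def permitted_level(observed_error_rate: float) -> int:
    """Highest autonomy level whose (assumed) fault-tolerance threshold
    the observed benchmark error rate still satisfies; -1 if none."""
    ok = [lv.level for lv in LEVELS if observed_error_rate <= lv.max_error_rate]
    return max(ok) if ok else -1

print(permitted_level(0.015))  # tolerated at L0-L1 but not at L2-L3, so prints 1
```

The point of the sketch is the direction of the mapping: a benchmark result does not certify a model in general, it bounds the autonomy level at which deployment could be defensible.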

📝 Abstract
Medical large language models (LLMs) achieve strong scores on standard benchmarks; however, transferring those results into safe and reliable performance in clinical workflows remains a challenge. This survey reframes evaluation through a levels-of-autonomy lens (L0–L3), spanning informational tools, information transformation and aggregation, decision support, and supervised agents. We align existing benchmarks and metrics with the actions permitted at each level and their associated risks, making the evaluation targets explicit. This motivates a level-conditioned blueprint for selecting metrics, assembling evidence, and reporting claims, alongside directions that link evaluation to oversight. By centering autonomy, the survey moves the field beyond score-based claims toward credible, risk-aware evidence for real clinical use.
Problem

Research questions and friction points this paper is trying to address.

Evaluating medical LLMs through autonomy levels for clinical applications
Aligning benchmarks with autonomy levels and associated risks
Moving beyond benchmarks to credible evidence for clinical use
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reframes evaluation through levels-of-autonomy lens
Aligns benchmarks with actions and risks per level
Proposes level-conditioned blueprint for metric selection