🤖 AI Summary
Existing LLM clinical evaluations predominantly rely on simplified QA benchmarks (e.g., MedQA), failing to capture the complexity and multidimensionality of real-world clinical decision-making. Method: We propose a dual-dimensional evaluation paradigm grounded in authentic clinical scenarios, orthogonally modeling tasks along *clinical backgrounds* (e.g., patient demographics, care settings) and *clinical questions* (e.g., diagnostic inference, therapeutic trade-offs), to move beyond the limitations of conventional single-answer QA assessment. Our framework extends quantitative evaluation across accuracy, reasoning efficiency, interpretability, and robustness, and systematically compares model performance under diverse clinical decision paradigms via training-time interventions and test-time enhancements. Contribution/Results: The study delineates the applicability boundaries of mainstream datasets and methods, identifies critical bottlenecks in clinical reasoning, and establishes a standardized, actionable evaluation paradigm for the trustworthy deployment of LLMs in clinical decision support.
📝 Abstract
Large language models (LLMs) show promise for clinical use and are commonly evaluated on datasets such as MedQA. However, such datasets rely on simplified question answering (QA) that underrepresents real-world clinical decision-making. To address this, we propose a unifying paradigm that characterizes clinical decision-making tasks along two dimensions: Clinical Backgrounds and Clinical Questions. As backgrounds and questions approach the real clinical environment, task difficulty increases. We summarize the settings of existing datasets and benchmarks along these two dimensions. We then review methods for clinical decision-making, including training-time and test-time techniques, and summarize when they help. Next, we extend evaluation beyond accuracy to include efficiency and explainability. Finally, we highlight open challenges. Our paradigm clarifies assumptions, standardizes comparisons, and guides the development of clinically meaningful LLMs.
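As a concrete illustration of the two-dimensional characterization, below is a minimal sketch of how a benchmark item might be indexed along the Clinical Backgrounds and Clinical Questions axes. All names here (the enums, the `ClinicalTask` dataclass, the axis levels, and the example items) are hypothetical illustrations, not an implementation from the paper; the axis levels are assumed orderings under the paradigm's claim that difficulty grows as both dimensions approach real clinical practice.

```python
# Hypothetical sketch: indexing benchmark items along the two proposed axes.
# All class, field, and level names are illustrative assumptions.
from dataclasses import dataclass
from enum import IntEnum


class BackgroundRealism(IntEnum):
    """Clinical Backgrounds axis: how close the context is to real practice."""
    NONE = 0           # bare exam-style vignette
    DEMOGRAPHICS = 1   # adds patient demographics
    CARE_SETTING = 2   # adds ward/ICU/outpatient context
    FULL_RECORD = 3    # full longitudinal EHR-style background


class QuestionRealism(IntEnum):
    """Clinical Questions axis: how close the query is to real decisions."""
    SINGLE_ANSWER_QA = 0       # one correct option (MedQA-style)
    DIAGNOSTIC_INFERENCE = 1   # reason toward a diagnosis
    THERAPEUTIC_TRADEOFF = 2   # weigh competing treatment options
    OPEN_ENDED_MANAGEMENT = 3  # free-form management planning


@dataclass
class ClinicalTask:
    name: str
    background: BackgroundRealism
    question: QuestionRealism

    def difficulty(self) -> int:
        # Difficulty grows as both axes approach the real clinical
        # environment; a simple sum is one possible proxy.
        return int(self.background) + int(self.question)


tasks = [
    ClinicalTask("MedQA-style item",
                 BackgroundRealism.NONE, QuestionRealism.SINGLE_ANSWER_QA),
    ClinicalTask("EHR treatment planning",
                 BackgroundRealism.FULL_RECORD, QuestionRealism.THERAPEUTIC_TRADEOFF),
]
for t in sorted(tasks, key=ClinicalTask.difficulty):
    print(f"{t.name}: difficulty {t.difficulty()}")
```

Making the two axes an explicit index in this way is one possible route to the standardized comparisons the abstract describes: existing datasets can be sorted by realism on each dimension, and empty cells in the grid expose task settings no current benchmark covers.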