🤖 AI Summary
Current medical large language model (LLM) benchmarks predominantly rely on licensing examination questions, raising concerns about their validity in assessing core clinical competencies, particularly clinical reasoning.
Method: This work is the first to systematically integrate construct validity theory from psychological testing into medical LLM evaluation, proposing construct validity as the foundational design principle. We develop a verifiable assessment framework driven by real-world clinical data and introduce a multidimensional bias diagnostic methodology.
Contribution/Results: Empirical evaluation across multiple mainstream benchmarks reveals substantial construct validity deficits: model performance frequently stems from memorization or superficial pattern matching rather than the genuine clinical capabilities being targeted. Our approach establishes a methodological foundation and a practical pathway for building high-fidelity, theory-grounded clinical competency assessments.
📝 Abstract
Medical large language model (LLM) research often makes bold claims, from encoding clinical knowledge to reasoning like a physician. These claims are usually backed by evaluation on competitive benchmarks, a tradition inherited from mainstream machine learning. But how do we separate real progress from a leaderboard flex? Medical LLM benchmarks, much like those in other fields, are arbitrarily constructed from medical licensing exam questions. For these benchmarks to truly measure progress, they must accurately capture the real-world tasks they aim to represent. In this position paper, we argue that medical LLM benchmarks should (and indeed can) be empirically evaluated for their construct validity. In the psychological testing literature, "construct validity" refers to the ability of a test to measure an underlying "construct", that is, the actual conceptual target of evaluation. By drawing an analogy between LLM benchmarks and psychological tests, we explain how frameworks from this field can provide empirical foundations for validating benchmarks. To put these ideas into practice, we use real-world clinical data in proof-of-concept experiments to evaluate popular medical LLM benchmarks and report significant gaps in their construct validity. Finally, we outline a vision for a new ecosystem of medical LLM evaluation centered on the creation of valid benchmarks.
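To make the flavor of such a validity check concrete, the sketch below shows one common diagnostic from the shortcut-learning and contamination literature: re-score a multiple-choice benchmark with the question stem deleted. Everything here (the item format, the `ask_model` callable) is a hypothetical illustration rather than the paper's actual framework; the point is that accuracy far above chance on stem-free items signals memorization or option-level pattern matching rather than clinical reasoning.

```python
# Minimal sketch of one construct-validity probe, assuming a benchmark of
# multiple-choice items. `ask_model` is a hypothetical callable standing in
# for any LLM API (prompt -> predicted option label); it is not from the paper.
import random


def options_only_prompt(item: dict) -> str:
    """Build a prompt containing only the answer options, with the question
    stem removed, so clinical reasoning cannot contribute to the answer."""
    lines = [f"{label}. {text}" for label, text in sorted(item["options"].items())]
    return "Choose the best answer:\n" + "\n".join(lines) + "\nAnswer:"


def probe_accuracy(items: list[dict], ask_model) -> float:
    """Accuracy on stem-free items; chance level is 1 / number of options.
    Scores far above chance suggest the benchmark rewards shortcuts."""
    correct = sum(ask_model(options_only_prompt(it)) == it["answer"] for it in items)
    return correct / len(items)


if __name__ == "__main__":
    # Toy item for demonstration; a real probe would run over a full benchmark.
    demo = [{"options": {"A": "Aspirin", "B": "Heparin",
                         "C": "Warfarin", "D": "Alteplase"},
             "answer": "D"}]
    guesser = lambda prompt: random.choice(["A", "B", "C", "D"])  # ~25% expected
    print(f"options-only accuracy: {probe_accuracy(demo, guesser):.2f}")
```

Related perturbations (shuffling option order, swapping in distractors from other items) probe other response biases in the same spirit.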