🤖 AI Summary
This study addresses the lack of rigorous evaluation of higher-order cognitive capabilities of large language models (LLMs) in medicine. Grounded in Bloom's Taxonomy, we propose the first hierarchical and scalable medical evaluation framework, comprising three cognitive levels: preliminary knowledge grasp, comprehensive knowledge application, and scenario-based problem solving. Methodologically, we integrate diverse medical datasets and employ a standardized zero-shot prompting protocol to systematically assess six major LLM families: Llama, Qwen, Gemma, Phi, GPT, and DeepSeek. By bringing cognitive psychology theory into medical LLM evaluation, we find that model size constitutes a critical bottleneck for higher-order reasoning: all models exhibit substantial performance degradation at the scenario-based problem-solving level (an average drop of 32.7%). These findings provide empirical grounding and methodological guidance for optimizing and aligning LLM architectures for clinical applications.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable performance on various medical benchmarks, but their capabilities across different cognitive levels remain underexplored. In this study, inspired by Bloom's Taxonomy, we propose a multi-cognitive-level evaluation framework for assessing LLMs in the medical domain. The framework integrates existing medical datasets and introduces tasks targeting three cognitive levels: preliminary knowledge grasp, comprehensive knowledge application, and scenario-based problem solving. Using this framework, we systematically evaluate state-of-the-art general-purpose and medical LLMs from six prominent families: Llama, Qwen, Gemma, Phi, GPT, and DeepSeek. Our findings reveal a significant performance decline across the evaluated models as cognitive complexity increases, with model size playing a more critical role at higher cognitive levels. Our study highlights the need to strengthen LLMs' medical capabilities at higher cognitive levels and offers insights for developing LLMs suited to real-world medical applications.