Evaluating LLMs Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving

📅 2025-06-10
🤖 AI Summary
This study addresses the lack of rigorous evaluation of the higher-order cognitive capabilities of large language models (LLMs) in medicine. Grounded in Bloom's Taxonomy, we propose a hierarchical, scalable medical evaluation framework comprising three cognitive levels: preliminary knowledge grasp, comprehensive knowledge application, and scenario-based problem solving. Methodologically, we integrate diverse medical datasets and employ a standardized zero-shot prompting protocol to systematically assess six major LLM families: Llama, Qwen, Gemma, Phi, GPT, and DeepSeek. Incorporating cognitive-psychology theory into medical LLM evaluation, we find that parameter count is a critical bottleneck for higher-order reasoning: all models exhibit substantial performance degradation at the scenario-based problem-solving level (an average drop of 32.7%). These findings provide empirical grounding and methodological guidance for clinically oriented LLM architecture optimization and capability alignment.
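The standardized zero-shot protocol amounts to posing each benchmark item to every model with one fixed instruction and no in-context examples, then scoring accuracy separately per cognitive level. Below is a minimal sketch under simple assumptions: a multiple-choice item format, level names borrowed from the abstract, and a hypothetical query_model stub standing in for whatever inference API is actually used; this is an illustration, not the paper's released code.

```python
# Minimal sketch of a zero-shot, per-cognitive-level evaluation loop.
# Assumptions (not from the paper): multiple-choice items with fields
# 'question', 'options', 'answer'; `query_model` is a hypothetical stub
# for whatever chat/completions API serves the model under test.
from collections import defaultdict

LEVELS = [
    "preliminary_knowledge_grasp",
    "comprehensive_knowledge_application",
    "scenario_based_problem_solving",
]

ZERO_SHOT_TEMPLATE = (
    "Answer the following medical question with the letter of the "
    "single best option.\n\n"
    "Question: {question}\nOptions:\n{options}\nAnswer:"
)

def query_model(model: str, prompt: str) -> str:
    """Hypothetical inference call; replace with a real API client."""
    raise NotImplementedError

def evaluate(models, datasets):
    """Score each model on each level; datasets maps level -> list of items."""
    scores = defaultdict(dict)
    for model in models:
        for level in LEVELS:
            items = datasets[level]
            correct = 0
            for item in items:
                prompt = ZERO_SHOT_TEMPLATE.format(
                    question=item["question"],
                    options="\n".join(item["options"]),
                )
                reply = query_model(model, prompt).strip()
                # Accept an answer that begins with the gold option letter.
                correct += reply.upper().startswith(item["answer"].upper())
            scores[model][level] = correct / len(items)
    return scores
```

Comparing per-level accuracies from such a loop is what exposes the drop from knowledge recall to scenario-based problem solving that the paper reports.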

📝 Abstract
Large language models (LLMs) have demonstrated remarkable performance on various medical benchmarks, but their capabilities across different cognitive levels remain underexplored. Inspired by Bloom's Taxonomy, in this study we propose a multi-cognitive-level evaluation framework for assessing LLMs in the medical domain. The framework integrates existing medical datasets and introduces tasks targeting three cognitive levels: preliminary knowledge grasp, comprehensive knowledge application, and scenario-based problem solving. Using this framework, we systematically evaluate state-of-the-art general and medical LLMs from six prominent families: Llama, Qwen, Gemma, Phi, GPT, and DeepSeek. Our findings reveal a significant performance decline as cognitive complexity increases across the evaluated models, with model size playing a more critical role at higher cognitive levels. Our study highlights the need to enhance LLMs' medical capabilities at higher cognitive levels and provides insights for developing LLMs suited to real-world medical applications.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' medical capabilities across cognitive levels
Evaluating performance decline with increasing cognitive complexity
Identifying the need to improve LLMs' medical capabilities at higher cognitive levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-cognitive-level evaluation framework for LLMs
Integration of existing medical datasets with newly introduced cognitive-level tasks
Systematic evaluation across six LLM families
Yuxuan Zhou
Department of Electronic Engineering, Tsinghua University, Beijing, China
Xien Liu
Tsinghua University
Deep Learning, Medical, NLP, Large Language Models
Chenwei Yan
Beijing University of Posts and Telecommunications
Natural Language Processing, Large Language Models
Chen Ning
Department of Electronic Engineering, Tsinghua University, Beijing, China
Xiao Zhang
Department of Electronic Engineering, Tsinghua University, Beijing, China
Boxun Li
Infinigence-AI, Beijing, China
Xiangling Fu
School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China
Shijin Wang
Tongji University
Scheduling, Maintenance
Guoping Hu
iFLYTEK Research, Hefei, China
Yu Wang
Department of Electronic Engineering, Tsinghua University, Beijing, China
Ji Wu
Tsinghua University
Artificial Intelligence, Smart Healthcare, Machine Learning, Pattern Recognition, Speech Recognition