Investigating Large Language Models in Diagnosing Students' Cognitive Skills in Math Problem-solving

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the effectiveness of large language models (LLMs) in diagnosing fine-grained mathematical cognitive skills—such as reasoning, application, and comprehension—moving beyond conventional text-matching automated scoring. We introduce MathCog, the first benchmark dataset for cognitive diagnosis in mathematics, comprising 639 human-annotated student responses. Using zero-shot and few-shot evaluation across 16 open- and closed-source LLMs—and validating predictions via expert teacher cognitive checklists—we find that all models achieve F1 scores below 0.5, indicating severe diagnostic limitations. Performance correlates strongly with parameter count (Spearman’s rₛ = 0.771), yet erroneous predictions frequently exhibit high confidence (rₛ = 0.617), revealing pervasive overconfidence. Our core contributions are: (1) the first systematic empirical demonstration of LLMs’ inadequacy for fine-grained cognitive diagnosis; and (2) a reproducible evaluation framework and publicly available benchmark resource to advance research in AI-assisted educational assessment.

📝 Abstract
Mathematics learning entails mastery of both content knowledge and the cognitive processes of knowing, applying, and reasoning with it. Automated math assessment has primarily focused on grading students' exhibition of content knowledge by finding textual evidence, such as specific numbers, formulas, and statements. Recent advancements in the problem-solving, image recognition, and reasoning capabilities of large language models (LLMs) show promise for nuanced evaluation of students' cognitive skills. Diagnosing cognitive skills requires inferring students' thinking processes beyond textual evidence, an underexplored task in LLM-based automated assessment. In this work, we investigate how state-of-the-art LLMs diagnose students' cognitive skills in mathematics. We constructed MathCog, a novel benchmark dataset comprising 639 student responses to 110 expert-curated middle school math problems, each annotated with detailed teacher diagnoses based on cognitive skill checklists. Using MathCog, we evaluated 16 closed and open LLMs of varying sizes and vendors. Our evaluation reveals that even state-of-the-art LLMs struggle with the task, with all F1 scores below 0.5, and that they tend to exhibit strong false confidence in incorrect cases ($r_s=.617$). We also found that model size positively correlates with diagnosis performance ($r_s=.771$). Finally, we discuss the implications of these findings, the overconfidence issue, and directions for improving automated cognitive skill diagnosis.
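The abstract's two headline statistics are per-model F1 on checklist-item predictions and Spearman's $r_s$ between model size and performance. A minimal pure-Python sketch of both computations is below; the gold labels, predictions, model sizes, and F1 values are invented for illustration and are not taken from MathCog.

```python
def f1_score(gold, pred):
    """Binary F1 over checklist-item predictions (1 = skill issue flagged)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def spearman_rs(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)), assuming no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical teacher annotations vs. one model's predictions
gold = [1, 0, 1, 1, 0, 1, 0, 0]
pred = [1, 1, 0, 1, 0, 0, 0, 1]
print(f"F1 = {f1_score(gold, pred):.3f}")          # F1 = 0.500

# Hypothetical (parameter count in billions, F1) pairs across models
sizes = [7, 13, 34, 70, 175]
f1s = [0.21, 0.31, 0.28, 0.38, 0.45]
print(f"Spearman r_s = {spearman_rs(sizes, f1s):.3f}")  # Spearman r_s = 0.900
```

The no-ties Spearman formula suffices for this sketch; with tied values (e.g. identical F1 scores), average ranks and a Pearson correlation on ranks, as in `scipy.stats.spearmanr`, would be needed.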
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to diagnose students' cognitive skills in math
Assessing LLMs' performance on inferring thinking processes from student responses
Investigating model size impact on cognitive skill diagnosis accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes large language models for cognitive skill diagnosis
Introduces MathCog benchmark dataset for evaluation
Analyzes model size impact on diagnosis performance
Authors

Hyoungwook Jin, University of Michigan
Interests: human-computer interaction, end-learner programming, personalized education
Yoonsu Kim, Ph.D. Student at KAIST
Interests: Human-Computer Interaction, Human-AI Interaction
Dongyun Jung, School of Computing, KAIST
Seungju Kim, School of Computing, KAIST
Kiyoon Choi, AlgorithmLabs
Jinho Son, AlgorithmLabs
Juho Kim, School of Computing, KAIST