Can MLLMs Read Students' Minds? Unpacking Multimodal Error Analysis in Handwritten Math

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Educational NLP research has largely overlooked the multimodal complexity of students' handwritten mathematical scratch work, and mainstream multimodal large language models (MLLMs) struggle to diagnose errors from a teacher's perspective, since they typically generate answers from a test-taker's point of view. To bridge this gap, the authors introduce ScratchMath, the first multimodal benchmark for error analysis grounded in real handwritten math drafts from K–12 students. It defines seven canonical error types and is built through a multi-stage human–AI collaborative annotation pipeline. A systematic evaluation of 16 prominent MLLMs shows that current models lag substantially behind human experts in visual recognition and logical reasoning; closed-source models generally outperform open-source counterparts, while large reasoning models show notable promise on error-explanation tasks.

📝 Abstract
Assessing student handwritten scratchwork is crucial for personalized educational feedback but presents unique challenges due to diverse handwriting, complex layouts, and varied problem-solving approaches. Existing educational NLP primarily focuses on textual responses and neglects the complexity and multimodality inherent in authentic handwritten scratchwork. Current multimodal large language models (MLLMs) excel at visual reasoning but typically adopt an "examinee perspective", prioritizing generating correct answers rather than diagnosing student errors. To bridge these gaps, we introduce ScratchMath, a novel benchmark specifically designed for explaining and classifying errors in authentic handwritten mathematics scratchwork. Our dataset comprises 1,720 mathematics samples from Chinese primary and middle school students, supporting two key tasks: Error Cause Explanation (ECE) and Error Cause Classification (ECC), with seven defined error types. The dataset is meticulously annotated through rigorous human-machine collaborative approaches involving multiple stages of expert labeling, review, and verification. We systematically evaluate 16 leading MLLMs on ScratchMath, revealing significant performance gaps relative to human experts, especially in visual recognition and logical reasoning. Proprietary models notably outperform open-source models, with large reasoning models showing strong potential for error explanation. All evaluation data and frameworks are publicly available to facilitate further research.
Problem

Research questions and friction points this paper is trying to address.

handwritten math
error analysis
multimodal large language models
educational feedback
student errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Large Language Models
Handwritten Math Error Analysis
Error Cause Explanation
Educational AI
ScratchMath Benchmark