Explain with Visual Keypoints Like a Real Mentor! A Benchmark for Multimodal Solution Explanation

๐Ÿ“… 2025-04-04
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
Current large language models (LLMs) lack support for visual explanations in mathematical reasoning, despite the critical role of diagrams, auxiliary lines, and other visual aids in human pedagogy. To address this gap, we introduce *Visual Solution Explanation*โ€”a novel multimodal task requiring joint generation of text explanations and corresponding visual elements (e.g., auxiliary lines, annotations, geometric constructions) that are semantically aligned and mutually reinforcing. We present MathExplain, the first education-oriented multimodal benchmark for this task, comprising 997 high-quality math problems, each annotated with fine-grained visual keypoints and aligned natural-language explanations. Extensive experiments reveal systematic deficiencies in open-source LLMs for visionโ€“language collaborative reasoning, while proprietary multimodal models demonstrate nascent capability. All code and data are publicly released to advance explainable, pedagogically grounded AI for education.


๐Ÿ“ Abstract
With the rapid advancement of mathematical reasoning capabilities in large language models (LLMs), AI systems are increasingly being adopted in educational settings to support students' comprehension of problem-solving processes. However, a critical component remains underexplored in current LLM-generated explanations: visual explanation. In real-world instructional contexts, human tutors routinely employ visual aids, such as diagrams, markings, and highlights, to enhance conceptual clarity. To bridge this gap, we introduce a novel task of visual solution explanation, which requires not only solving problems but also generating explanations that incorporate newly introduced visual elements essential for understanding (e.g., auxiliary lines, annotations, or geometric constructions). To evaluate model performance on this task, we propose MathExplain, a multimodal benchmark consisting of 997 math problems annotated with visual keypoints and corresponding explanatory text that references those elements. Our empirical results show that while some closed-source models demonstrate promising capabilities on visual solution-explaining, current open-source general-purpose models perform inconsistently, particularly in identifying relevant visual components and producing coherent keypoint-based explanations. We expect that visual solution-explaining and the MathExplain dataset will catalyze further research on multimodal LLMs in education and advance their deployment as effective, explanation-oriented AI tutors. Code and data will be released publicly.
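The abstract does not spell out the dataset schema or the scoring protocol, but as an illustrative sketch only, a MathExplain-style record might pair a problem with a set of annotated visual keypoints, and keypoint identification could be scored with set-level F1. All names and the metric below are hypothetical assumptions, not the paper's actual format or official metric.

```python
from dataclasses import dataclass

@dataclass
class ExplanationRecord:
    # Hypothetical schema; the real MathExplain format may differ.
    problem_id: str
    question: str
    keypoints: set          # e.g. {"auxiliary_line_AD", "angle_mark_BAC"}
    explanation: str        # explanatory text that references the keypoints

def keypoint_f1(predicted: set, gold: set) -> float:
    """Set-level F1 between predicted and gold visual keypoints.

    A common way to score identification tasks; not necessarily
    the paper's own evaluation metric.
    """
    if not predicted and not gold:
        return 1.0
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: a model recovers one of two gold keypoints and adds one spurious one.
gold = {"auxiliary_line_AD", "angle_mark_BAC"}
pred = {"auxiliary_line_AD", "midpoint_M"}
print(keypoint_f1(pred, gold))  # 0.5
```

A set-based metric like this rewards identifying the right visual elements independently of how fluently the accompanying text is written, which matches the paper's observation that open-source models struggle specifically with identifying relevant visual components.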
Problem

Research questions and friction points this paper is trying to address.

Addressing the absence of visual explanations in LLM-generated educational content
Introducing a visual solution explanation task that requires generating essential visual elements
Evaluating model performance on multimodal math problem explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces visual solution explanation task
Proposes MathExplain multimodal benchmark
Evaluates models on visual keypoints integration
๐Ÿ‘ฅ Authors
Jaewoo Park (Yonsei University)
Jungyang Park (Yonsei University)
Dongju Jang (Yonsei University)
Jiwan Chung (Yonsei University)
Byungwoo Yoo (Mathpresso)
Jaewoo Shin
Seonjoon Park (Mathpresso)
Taehyeong Kim (Mathpresso)
Youngjae Yu (Yonsei University)