MMTutorBench: The First Multimodal Benchmark for AI Math Tutoring

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) lack fine-grained evaluation criteria for AI-powered mathematics tutoring, particularly for diagnosing student misconceptions and delivering progressive, step-by-step guidance. Method: We introduce MMTutorBench, the first multimodal benchmark tailored to AI mathematics tutoring, comprising 685 problems spanning three pedagogically critical tasks: Insight Discovery, Operation Formulation, and Operation Execution. It supports fine-grained assessment across six dimensions. We propose problem-specific scoring rubrics and an LLM-as-a-Judge framework, combined with OCR-based input variants and few-shot prompting, to enable scalable, high-agreement automated evaluation of the tutoring process. Contribution/Results: Experiments reveal substantial performance gaps between current AI systems and human tutors; closed-source models outperform open-source counterparts; and OCR quality critically affects tutoring efficacy. MMTutorBench establishes a reliable, reproducible evaluation paradigm for assessing and improving MLLMs' mathematical tutoring capabilities.
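To make the evaluation setup concrete, below is a minimal Python sketch of problem-specific, rubric-based LLM-as-a-Judge scoring. This is not the paper's implementation: the `judge_model` callable, the `TutoringRubric` structure, the prompt wording, and the dimension names are all illustrative assumptions; MMTutorBench defines its own per-problem rubrics and six evaluation dimensions.

```python
# Minimal sketch of rubric-based LLM-as-a-Judge scoring (illustrative only).
# `judge_model` stands in for any LLM call; dimension names are placeholders.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class TutoringRubric:
    problem_id: str
    dimensions: Dict[str, str]  # dimension name -> problem-specific criterion


JUDGE_PROMPT = """You are grading an AI tutor's response.
Problem: {problem}
Student work (OCR transcript): {student_work}
Tutor response: {tutor_response}

Criterion ({dimension}): {criterion}
Return only an integer score from 1 to 5."""


def score_response(
    judge_model: Callable[[str], str],  # assumed LLM call: prompt -> text
    rubric: TutoringRubric,
    problem: str,
    student_work: str,
    tutor_response: str,
) -> Dict[str, int]:
    """Score one tutoring response on every rubric dimension separately."""
    scores: Dict[str, int] = {}
    for dimension, criterion in rubric.dimensions.items():
        prompt = JUDGE_PROMPT.format(
            problem=problem,
            student_work=student_work,
            tutor_response=tutor_response,
            dimension=dimension,
            criterion=criterion,
        )
        reply = judge_model(prompt)
        # Tolerate extra text around the integer score in the judge's reply.
        digits = [c for c in reply if c.isdigit()]
        scores[dimension] = int(digits[0]) if digits else 0
    return scores


if __name__ == "__main__":
    rubric = TutoringRubric(
        problem_id="demo-001",
        dimensions={
            "misconception_diagnosis": "Does the tutor correctly identify the student's error?",
            "guidance_granularity": "Does the tutor give one actionable next step rather than the full solution?",
        },
    )
    fake_judge = lambda prompt: "4"  # stand-in for a real judge LLM
    print(score_response(
        fake_judge, rubric,
        problem="Solve 2x + 3 = 11",
        student_work="2x = 14, x = 7",
        tutor_response="Check how you moved the 3 across the equals sign.",
    ))
```

Scoring each dimension with its own prompt keeps judgments fine-grained and makes per-dimension agreement with human raters straightforward to measure.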

📝 Abstract
Effective math tutoring requires not only solving problems but also diagnosing students' difficulties and guiding them step by step. While multimodal large language models (MLLMs) show promise, existing benchmarks largely overlook these tutoring skills. We introduce MMTutorBench, the first benchmark for AI math tutoring, consisting of 685 problems built around pedagogically significant key steps. Each problem is paired with problem-specific rubrics that enable fine-grained evaluation across six dimensions, and structured into three tasks: Insight Discovery, Operation Formulation, and Operation Execution. We evaluate 12 leading MLLMs and find clear performance gaps between proprietary and open-source systems, substantial room for improvement relative to human tutors, and consistent trends across input variants: OCR pipelines degrade tutoring quality, few-shot prompting yields limited gains, and our rubric-based LLM-as-a-Judge proves highly reliable. These results highlight both the difficulty and diagnostic value of MMTutorBench for advancing AI tutoring.
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI's ability to diagnose student math difficulties
Assessing multimodal models' step-by-step math tutoring skills
Measuring performance gaps between AI tutors and humans
Innovation

Methods, ideas, or system contributions that make the work stand out.

First multimodal benchmark for AI math tutoring
Evaluates tutoring skills across six fine-grained dimensions
Uses rubric-based LLM-as-a-Judge for reliable assessment
Tengchao Yang
University of Notre Dame
Sichen Guo
University of Notre Dame
Mengzhao Jia
University of Notre Dame
Jiaming Su
Fudan University
Yuanyang Liu
Nanjing University of Posts and Telecommunications
Zhihan Zhang
University of Notre Dame
Meng Jiang
University of Notre Dame