🤖 AI Summary
Existing multimodal large language model (MLLM) evaluation benchmarks suffer from limited data scale, narrow disciplinary coverage, and coarse-grained knowledge structuring. Method: We introduce the first K–12 examination–grounded, multidisciplinary multimodal reasoning benchmark, covering mathematics, physics, chemistry, biology, geography, and information science with 140K real-world exam items. It features fine-grained knowledge-point annotations, hierarchical difficulty labels, and cross-grade/cross-year splits. We propose an education-informed structured evaluation framework, a knowledge-graph–driven annotation schema, a dynamic templated assessment pipeline, and a prompt-guided bootstrapping strategy that varies question forms, question types, and image styles to mitigate data contamination. Contribution/Results: Extensive experiments expose critical weaknesses of current MLLMs in cross-disciplinary vision–language reasoning. The benchmark enables reproducible, attributable evaluation and provides concrete, actionable directions for improvement.
📝 Abstract
Multimodal reasoning, which integrates language and visual cues into problem solving and decision making, is a fundamental aspect of human intelligence and a crucial step toward artificial general intelligence. However, the evaluation of multimodal reasoning capabilities in Multimodal Large Language Models (MLLMs) remains inadequate. Most existing reasoning benchmarks are constrained by limited data size, narrow domain coverage, and unstructured knowledge distribution. To close these gaps, we introduce MDK12-Bench, a multi-disciplinary benchmark assessing the reasoning capabilities of MLLMs via real-world K-12 examinations. Spanning six disciplines (math, physics, chemistry, biology, geography, and information science), our benchmark comprises 140K reasoning instances across diverse difficulty levels from primary school to 12th grade. It features 6,827 instance-level knowledge point annotations grounded in a well-organized knowledge structure, detailed answer explanations, difficulty labels, and cross-year partitions, providing a robust platform for comprehensive evaluation. Additionally, we present a novel dynamic evaluation framework that mitigates data contamination by bootstrapping question forms, question types, and image styles during evaluation. Extensive experiments on MDK12-Bench reveal significant limitations of current MLLMs in multimodal reasoning. The findings on our benchmark provide insights into the development of next-generation models. Our data and code are available at https://github.com/LanceZPF/MDK12.
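The abstract does not detail how the dynamic evaluation framework bootstraps question forms and types. As a rough, self-contained illustration of the idea (all function and field names here are hypothetical, not from the MDK12 codebase): each run re-renders a stored exam item into a fresh surface form under a per-run seed, so that a model that memorized the verbatim test item no longer sees an exact match.

```python
import random

# Hypothetical sketch: re-render a stored item into a new surface form
# (question form and question type) per evaluation run, keyed by a seed.
QUESTION_FORMS = {
    "direct": "{stem}",
    "contextual": "In the following exam problem, {stem_lower}",
}

def to_multiple_choice(stem, answer, distractors, rng):
    """Convert an open-ended item into a multiple-choice variant."""
    options = distractors + [answer]
    rng.shuffle(options)
    letters = "ABCD"
    lines = [f"{letters[i]}. {opt}" for i, opt in enumerate(options)]
    gold = letters[options.index(answer)]  # letter of the correct option
    return stem + "\n" + "\n".join(lines), gold

def bootstrap_item(item, seed):
    rng = random.Random(seed)  # deterministic variation per run
    form = rng.choice(list(QUESTION_FORMS))
    stem = QUESTION_FORMS[form].format(
        stem=item["stem"],
        stem_lower=item["stem"][0].lower() + item["stem"][1:],
    )
    # Randomly switch question type when distractors are available.
    if item.get("distractors") and rng.random() < 0.5:
        stem, gold = to_multiple_choice(
            stem, item["answer"], list(item["distractors"]), rng
        )
    else:
        gold = item["answer"]
    # An image-style transform (e.g. re-rendering a diagram) would plug in here.
    return {"prompt": stem, "gold": gold, "form": form}

item = {
    "stem": "What is the acceleration of a 2 kg mass under a 6 N net force?",
    "answer": "3 m/s^2",
    "distractors": ["2 m/s^2", "6 m/s^2", "12 m/s^2"],
}
variant = bootstrap_item(item, seed=0)
```

Seeding per run keeps variants reproducible while still differing from the stored originals, which is the contamination-mitigation property the paper targets.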