🤖 AI Summary
Existing multimodal large language models (MLLMs) lack rigorous evaluation in K12 education due to narrow subject coverage, small-scale benchmarks, limited question formats, and overreliance on answer correctness. Method: We introduce K12Vista, the first Chinese K12 multimodal benchmark, comprising 33K problems across five subjects and three question types, and propose a novel reasoning-process evaluation paradigm. Our "answer + process" dual-dimension framework includes the large-scale process-annotated dataset K12-PEM-800K, the lightweight process-evaluation model K12-PEM, and the high-quality human-annotated benchmark K12-PEBench. Leveraging automated pipelines, fine-grained annotation, and rule-model hybrid adjudication, we systematically quantify models' knowledge comprehension and stepwise reasoning capabilities. Results: Experiments expose severe reasoning deficiencies in state-of-the-art MLLMs; K12-PEM significantly improves process evaluation accuracy. All resources are publicly released to advance trustworthy AI assessment in education.
📝 Abstract
Multimodal large language models have demonstrated remarkable reasoning capabilities in various visual tasks. However, their abilities in K12 scenarios remain systematically underexplored. Previous studies suffer from several limitations, including narrow subject coverage, insufficient data scale, lack of diversity in question types, and naive answer-centric evaluation methods, resulting in insufficient exploration of model capabilities. To address these gaps, we propose K12Vista, the most comprehensive multimodal benchmark for Chinese K12 subject knowledge understanding and reasoning to date, featuring 33,000 questions across five core subjects from primary to high school and three question types. Moreover, beyond the final outcome, we are also concerned with the correctness of MLLMs' reasoning processes. For this purpose, we meticulously compile errors from MLLMs' reasoning processes and leverage an automated data pipeline to construct K12-PEM-800K, the largest process evaluation dataset, offering detailed step-by-step judgment annotations for MLLMs' reasoning. Subsequently, we develop K12-PEM, an advanced process evaluation model that integrates an overall assessment of both reasoning-process and answer correctness. We also introduce K12-PEBench, the first high-quality, human-annotated benchmark specifically designed for evaluating reasoning-process evaluation capabilities. Extensive experiments reveal that current MLLMs exhibit significant flaws when reasoning within K12Vista, providing critical insights for the development of more capable MLLMs. We open our resources at https://github.com/lichongod/K12Vista.