🤖 AI Summary
This study addresses the challenge of structured question extraction from real-world high school mathematics exam papers, which are often corrupted by severe visual noise. To this end, the authors construct the first document-level information extraction benchmark tailored to this scenario, introducing a dataset of authentic exam papers that includes non-recognizable samples. They propose a multidimensional evaluation framework and employ state-of-the-art multimodal large language models—such as Qwen3-VL and Gemini-2.5-Pro—for end-to-end extraction. Evaluation metrics encompass question stem accuracy, visual similarity, and the model’s ability to actively abstain from answering when inputs are unreliable. Experimental results demonstrate that while current state-of-the-art models can effectively extract structured content under clear conditions, they generally lack robust refusal mechanisms when confronted with blurry or incomplete inputs, revealing a critical gap in their robustness.
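The three evaluation dimensions described above (stem accuracy, visual similarity, and active refusal) can be sketched as a simple scoring routine. This is a minimal illustration, not the paper's actual implementation: the record fields, the `[REFUSE]` sentinel, and exact-match stem scoring are all assumptions for the sake of the example.

```python
# Hypothetical sketch of MathDoc-style scoring (field names and the refusal
# sentinel are assumed, not taken from the paper). Each record holds the
# model's extracted stem, the gold stem, and whether the source region was
# labeled illegible (where the model is expected to abstain).

REFUSAL_TOKEN = "[REFUSE]"  # assumed marker for an explicit abstention

def score(records):
    clear = [r for r in records if not r["illegible"]]
    noisy = [r for r in records if r["illegible"]]
    # Stem accuracy: exact match on legible questions the model answered.
    answered = [r for r in clear if r["pred"] != REFUSAL_TOKEN]
    stem_acc = sum(r["pred"] == r["gold"] for r in answered) / max(len(answered), 1)
    # Refusal recall: fraction of illegible inputs the model abstained on.
    refusal_recall = sum(r["pred"] == REFUSAL_TOKEN for r in noisy) / max(len(noisy), 1)
    # False-refusal rate: abstentions on perfectly legible questions.
    false_refusal = sum(r["pred"] == REFUSAL_TOKEN for r in clear) / max(len(clear), 1)
    return {"stem_acc": stem_acc,
            "refusal_recall": refusal_recall,
            "false_refusal": false_refusal}

records = [
    {"pred": "Solve x^2 = 4", "gold": "Solve x^2 = 4", "illegible": False},
    {"pred": "Find f(2)",     "gold": "Find f(3)",     "illegible": False},
    {"pred": "Evaluate ...",  "gold": None,            "illegible": True},  # should refuse
]
print(score(records))
# {'stem_acc': 0.5, 'refusal_recall': 0.0, 'false_refusal': 0.0}
```

The refusal-recall term captures the failure mode the paper highlights: a model that confidently transcribes the illegible third record scores 0.0 on that axis even while its stem accuracy on clean inputs looks acceptable.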
📝 Abstract
The automated extraction of structured questions from paper-based mathematics exams is fundamental to intelligent education, yet remains challenging in real-world settings due to severe visual noise. Existing benchmarks mainly focus on clean documents or generic layout analysis, overlooking both the structural integrity of mathematical problems and the ability of models to actively reject incomplete inputs. We introduce MathDoc, the first benchmark for document-level information extraction from authentic high school mathematics exam papers. MathDoc contains 3,609 carefully curated questions with real-world artifacts and explicitly includes unrecognizable samples to evaluate active refusal behavior. We propose a multi-dimensional evaluation framework covering stem accuracy, visual similarity, and refusal capability. Experiments on SOTA MLLMs, including Qwen3-VL and Gemini-2.5-Pro, show that although end-to-end models achieve strong extraction performance, they consistently fail to refuse illegible inputs, instead producing confident but invalid outputs. These results highlight a critical gap in current MLLMs and establish MathDoc as a benchmark for assessing model reliability under degraded document conditions. Our project repository is available at https://github.com/winnk123/papers/tree/master.