MathDoc: Benchmarking Structured Extraction and Active Refusal on Noisy Mathematics Exam Papers

📅 2026-01-15
🤖 AI Summary
This study addresses the challenge of structured question extraction from real-world high school mathematics exam papers, which are often corrupted by severe visual noise. To this end, the authors construct the first document-level information extraction benchmark tailored to this scenario, introducing a dataset of authentic exam papers that explicitly includes unrecognizable samples. They propose a multidimensional evaluation framework and evaluate state-of-the-art multimodal large language models, such as Qwen3-VL and Gemini-2.5-Pro, on end-to-end extraction. The metrics cover question-stem accuracy, visual similarity, and a model's ability to actively abstain from answering when inputs are unreliable. Experiments show that while these models extract structured content effectively under clear conditions, they generally lack robust refusal mechanisms when confronted with blurry or incomplete inputs, revealing a critical gap in their robustness.

📝 Abstract
The automated extraction of structured questions from paper-based mathematics exams is fundamental to intelligent education, yet remains challenging in real-world settings due to severe visual noise. Existing benchmarks mainly focus on clean documents or generic layout analysis, overlooking both the structural integrity of mathematical problems and the ability of models to actively reject incomplete inputs. We introduce MathDoc, the first benchmark for document-level information extraction from authentic high school mathematics exam papers. MathDoc contains 3,609 carefully curated questions with real-world artifacts and explicitly includes unrecognizable samples to evaluate active refusal behavior. We propose a multi-dimensional evaluation framework covering stem accuracy, visual similarity, and refusal capability. Experiments on SOTA MLLMs, including Qwen3-VL and Gemini-2.5-Pro, show that although end-to-end models achieve strong extraction performance, they consistently fail to refuse illegible inputs, instead producing confident but invalid outputs. These results highlight a critical gap in current MLLMs and establish MathDoc as a benchmark for assessing model reliability under degraded document conditions. Our project repository is available at https://github.com/winnk123/papers/tree/master.
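The abstract names three evaluation dimensions but does not spell out the scoring here. Below is a minimal sketch of how the two text-side dimensions, stem accuracy on legible questions and active refusal on illegible ones, could be computed. The record fields (`stem`, `legible`), the convention that `None` marks an abstention, and the normalized exact-match criterion are all illustrative assumptions, not MathDoc's published protocol; the visual-similarity dimension is omitted.

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase for a lenient stem comparison."""
    return re.sub(r"\s+", " ", text).strip().lower()

def evaluate(predictions, gold):
    """Score stem accuracy on legible items and refusal rate on illegible ones.

    Assumed record shapes (hypothetical, not the benchmark's actual schema):
      prediction: {"stem": str or None}   # None = the model abstained
      gold:       {"stem": str, "legible": bool}
    """
    stem_hits, stem_total = 0, 0
    refusals, illegible_total = 0, 0
    for pred, ref in zip(predictions, gold):
        if ref["legible"]:
            stem_total += 1
            if pred["stem"] is not None and normalize(pred["stem"]) == normalize(ref["stem"]):
                stem_hits += 1
        else:
            illegible_total += 1
            if pred["stem"] is None:  # model actively refused, as desired
                refusals += 1
    return {
        "stem_accuracy": stem_hits / stem_total if stem_total else 0.0,
        "refusal_rate": refusals / illegible_total if illegible_total else 0.0,
    }

# Example:
# preds = [{"stem": "Solve x^2 = 4."}, {"stem": None}]
# gold  = [{"stem": "Solve x^2 = 4.", "legible": True},
#          {"stem": "", "legible": False}]
# evaluate(preds, gold)  # -> {"stem_accuracy": 1.0, "refusal_rate": 1.0}
```

Under this framing, the paper's headline failure mode is a high stem accuracy paired with a near-zero refusal rate: models answer confidently even when the gold item is marked illegible.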
Problem

Research questions and friction points this paper is trying to address.

structured extraction
active refusal
mathematics exam papers
visual noise
document-level information extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured extraction
active refusal
noisy document
mathematics exam benchmark
multimodal LLM evaluation
🔎 Similar Papers
No similar papers found.
Authors

Chenyue Zhou
Nanjing University of Aeronautics and Astronautics
Jiayi Tuo
University of Science and Technology of China
Shitong Qin
Gaotu Techedu Inc.
Wei Dai
Beijing Key Laboratory of Research on Large Models and Intelligent Governance
Mingxuan Wang
Gaoling School of Artificial Intelligence, Renmin University of China
Ziwei Zhao
University of Science and Technology of China
Graph Learning, Large Language Models
Duoyang Li
Gaoling School of Artificial Intelligence, Renmin University of China
Shiyang Su
University of Central Florida
Psychometrics, Occupational Health Psychology, Item Response Theory, Longitudinal Modeling
Yanxi Lu
Gaoling School of Artificial Intelligence, Renmin University of China
Yanbiao Ma
Beijing Key Laboratory of Research on Large Models and Intelligent Governance, Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE, Gaoling School of Artificial Intelligence, Renmin University of China