MathClean: A Benchmark for Synthetic Mathematical Data Cleaning

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mathematical synthetic data frequently contains logical, computational, and formatting errors, yet no standardized benchmark exists for evaluating mathematical data cleaning. Method: We propose MathClean—the first dedicated benchmark for mathematical data cleaning—comprising 8,000 human-verified samples with fine-grained, multi-dimensional error annotations (e.g., reasoning leaps, arithmetic errors, symbol misuse), synthesized from GSM8K and MATH via rule- and LLM-guided controllable error injection and augmentation. Contribution/Results: MathClean establishes the first standardized evaluation framework enabling both error-type discrimination and end-to-end cleaning process assessment. Experiments reveal that state-of-the-art models—including GPT-4o and DeepSeek-R1—achieve less than 60% accuracy in error identification, exposing fundamental deficiencies in current LLMs’ mathematical data cleaning capabilities. The full codebase and dataset are publicly released to advance high-quality mathematical training data for LLMs.
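The summary mentions rule-guided controllable error injection over GSM8K- and MATH-derived samples. The paper's actual pipeline is not reproduced here; a minimal illustrative sketch of one rule (perturbing a single numeric value in a worked solution to manufacture an arithmetic error) might look like:

```python
import random
import re

def inject_arithmetic_error(answer: str, seed: int = 0) -> str:
    """Perturb one number in a worked solution to create a controlled
    arithmetic error. Illustrative sketch only; the paper's rule- and
    LLM-guided injection covers many more error types."""
    rng = random.Random(seed)
    numbers = list(re.finditer(r"\d+", answer))
    if not numbers:
        return answer  # nothing to perturb
    target = rng.choice(numbers)
    # Shift the value by a small nonzero delta so the step becomes wrong.
    wrong = int(target.group()) + rng.choice([-2, -1, 1, 2])
    start, end = target.span()
    return answer[:start] + str(wrong) + answer[end:]

clean = "Tom has 3 apples and buys 4 more, so he has 3 + 4 = 7 apples."
corrupted = inject_arithmetic_error(clean)
```

Because the delta is never zero, the corrupted solution always differs from the clean one at exactly one number, giving a controllable, labelable error.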

📝 Abstract
With the rapid development of large language models (LLMs), the quality of training data has become crucial. Among the various types of training data, mathematical data plays a key role in enabling LLMs to acquire strong reasoning abilities. While high-quality open-source data is important, it is often insufficient for pre-training, necessitating the addition of synthetic math problems. However, synthetic math questions and answers can introduce inaccuracies, which may degrade both the training data and web data. Therefore, an effective method for cleaning synthetic math data is essential. In this paper, we propose the MathClean benchmark to evaluate the effectiveness of math data cleaning models. The MathClean benchmark consists of 2,000 correct questions and 2,000 erroneous questions, along with an additional 2,000 correct and 2,000 erroneous answers, sourced from augmented data based on GSM8K and MATH. Moreover, we annotate error types for each question and answer, which allows assessing whether models can correctly identify error categories and guides future improvements. Finally, we present comprehensive evaluations using state-of-the-art (SOTA) models. Our results demonstrate that even strong models like GPT-o1 and DeepSeek-R1 perform poorly on this benchmark, highlighting the utility of MathClean. Our code and data are available at https://github.com/YuYingLi0/MathClean.
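The benchmark's headline metric (the sub-60% accuracy reported for strong models) can be understood as classification accuracy over gold correct/erroneous labels. A minimal scoring sketch, where the field names and the `predict` interface are assumptions rather than the repository's actual schema:

```python
def detection_accuracy(samples, predict):
    """samples: list of dicts with 'text' and a gold 'is_erroneous' flag.
    predict: callable returning True if the model flags the text as erroneous.
    Returns the fraction of samples whose verdict matches the gold label."""
    correct = sum(predict(s["text"]) == s["is_erroneous"] for s in samples)
    return correct / len(samples)

# Toy example with a trivially weak "detector" that flags nothing:
samples = [
    {"text": "2 + 2 = 4", "is_erroneous": False},
    {"text": "2 + 2 = 5", "is_erroneous": True},
]
acc = detection_accuracy(samples, lambda t: False)  # misses the one error
```

Because the benchmark balances correct and erroneous items (2,000 of each), a detector that always answers one way scores 50%, so the reported sub-60% results sit only slightly above chance.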
Problem

Research questions and friction points this paper is trying to address.

Evaluates cleaning of synthetic math data
Identifies errors in math questions and answers
Assesses model performance on error detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

First benchmark dedicated to synthetic math data cleaning
Fine-grained error-type annotation to guide future improvements
Comprehensive evaluation of state-of-the-art models
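The error-type annotation highlighted above pairs each erroneous sample with a category label. A sketch of what such an annotated record might look like, using the categories named in the summary (the paper's full taxonomy may differ; this enumeration is an assumption):

```python
from dataclasses import dataclass
from typing import Optional

# Error categories mentioned in the summary; the repository's actual
# taxonomy is likely richer than this illustrative set.
ERROR_TYPES = {"reasoning_leap", "arithmetic_error", "symbol_misuse"}

@dataclass
class AnnotatedSample:
    text: str
    is_erroneous: bool
    error_type: Optional[str] = None  # required only for erroneous samples

    def __post_init__(self):
        # Enforce that every erroneous sample carries a known category,
        # so error-type discrimination can be scored per category.
        if self.is_erroneous and self.error_type not in ERROR_TYPES:
            raise ValueError(f"unknown error type: {self.error_type}")

sample = AnnotatedSample("2 + 2 = 5", is_erroneous=True,
                         error_type="arithmetic_error")
```

Storing the category alongside the binary label is what lets the benchmark report both overall detection accuracy and per-error-type breakdowns.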
Hao Liang
Peking University, Beijing, China
Meiyi Qiang
Beijing Institute of Technology, Beijing, China
Yuying Li
Cheriton School of Computer Science, University of Waterloo
Zefeng He
Nanjing University, Nanjing, China
Yongzhen Guo
Ant Group, Beijing, China
Zhengzhou Zhu
Peking University, Beijing, China
Wentao Zhang
Institute of Physics, Chinese Academy of Sciences
Bin Cui
Peking University, Beijing, China