🤖 AI Summary
Mathematical synthetic data frequently contains logical, computational, and formatting errors, yet no standardized benchmark exists for evaluating mathematical data cleaning. Method: We propose MathClean—the first dedicated benchmark for mathematical data cleaning—comprising 8,000 human-verified samples with fine-grained, multi-dimensional error annotations (e.g., reasoning leaps, arithmetic errors, symbol misuse), synthesized from GSM8K and MATH via rule- and LLM-guided controllable error injection and augmentation. Contribution/Results: MathClean establishes the first standardized evaluation framework enabling both error-type discrimination and end-to-end cleaning process assessment. Experiments reveal that state-of-the-art models—including GPT-4o and DeepSeek-R1—achieve less than 60% accuracy in error identification, exposing fundamental deficiencies in current LLMs’ mathematical data cleaning capabilities. The full codebase and dataset are publicly released to advance high-quality mathematical training data for LLMs.
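The rule-guided "controllable error injection" mentioned above can be illustrated with a minimal sketch. Everything here is hypothetical: the function name, the perturbation rule (altering one numeric token in a correct solution so its arithmetic no longer holds), and the example text are assumptions for illustration, not the paper's actual pipeline.

```python
import random
import re

def inject_arithmetic_error(answer: str, rng: random.Random) -> str:
    """Create an erroneous variant of a correct solution by perturbing
    one numeric token -- a simple rule-based arithmetic-error injection."""
    numbers = list(re.finditer(r"\d+", answer))
    if not numbers:
        return answer  # nothing to perturb
    target = rng.choice(numbers)
    value = int(target.group())
    # Shift the value by a small nonzero offset so the stated arithmetic breaks.
    perturbed = value + rng.choice([1, 2])
    start, end = target.span()
    return answer[:start] + str(perturbed) + answer[end:]

rng = random.Random(0)
correct = "Tom has 3 apples and buys 4 more, so he has 3 + 4 = 7 apples."
erroneous = inject_arithmetic_error(correct, rng)
```

In a full pipeline, each injected sample would also carry an error-type label (e.g. "arithmetic error"), so models can be scored on identifying the category as well as detecting that something is wrong.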
📝 Abstract
With the rapid development of large language models (LLMs), the quality of training data has become crucial. Among the various types of training data, mathematical data plays a key role in enabling LLMs to acquire strong reasoning abilities. While high-quality open-source data is important, it is often insufficient for pre-training, necessitating the addition of synthetic math problems. However, synthetic math questions and answers can introduce inaccuracies, which may degrade both training data and web data. Therefore, an effective method for cleaning synthetic math data is essential. In this paper, we propose the MathClean benchmark to evaluate the effectiveness of math data cleaning models. The MathClean benchmark consists of 2,000 correct questions and 2,000 erroneous questions, along with 2,000 correct answers and 2,000 erroneous answers, sourced from data augmented from GSM8K and MATH. Moreover, we annotate the error type of each erroneous question or answer, which makes it possible to assess whether models can correctly identify error categories and to guide future improvements. Finally, we present comprehensive evaluations using state-of-the-art (SOTA) models. Our results demonstrate that even strong models like GPT-o1 and DeepSeek-R1 perform poorly on this benchmark, highlighting the utility of MathClean. Our code and data are available at https://github.com/YuYingLi0/MathClean.
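As a concrete illustration of how a benchmark of labeled correct/erroneous samples can be used, the sketch below scores a hypothetical cleaning model on the binary detection task. The `predict` function, the toy stand-in model, and the data layout are invented for illustration; MathClean's actual evaluation protocol is defined in the repository linked above.

```python
from typing import Callable, List, Tuple

# Each sample pairs a math question (or answer) with a gold label:
# True = correct, False = erroneous.
Sample = Tuple[str, bool]

def detection_accuracy(samples: List[Sample],
                       predict: Callable[[str], bool]) -> float:
    """Fraction of samples where the model's correct/erroneous
    judgment matches the gold label."""
    hits = sum(predict(text) == label for text, label in samples)
    return hits / len(samples)

# A toy stand-in "model" that flags any text containing an obviously
# false equation such as "1 + 1 = 3" -- purely for demonstration.
def toy_predict(text: str) -> bool:
    return "1 + 1 = 3" not in text

samples = [
    ("If x = 2, then 2x = 4.", True),
    ("Since 1 + 1 = 3, the total is 3.", False),
]
print(detection_accuracy(samples, toy_predict))  # → 1.0
```

A real evaluation would replace `toy_predict` with an LLM call and would additionally check whether the predicted error category matches the fine-grained annotation.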