🤖 AI Summary
This work investigates the trade-off between data quality and quantity in mathematical reasoning tasks. Motivated by the observation that the reasoning performance of large language models depends heavily on data quality, yet this dependence lacks systematic empirical evaluation, we propose a unified training–evaluation pipeline to comparatively assess mainstream open-source datasets and diverse data synthesis methods, including strong-model distillation and interpretability-aware structural enhancement. Experimental results demonstrate that high-quality, structurally enriched synthetic data substantially outperforms naive scale-up: using only one-third of the data volume, it surpasses the full-scale baseline. Moreover, the distillation-plus-structured-annotation strategy yields consistent accuracy gains of 4.2–7.8 percentage points across major mathematical benchmarks (e.g., MATH, AMC). These findings establish a reproducible, cost-effective, and high-yield paradigm for industrial-grade dataset construction in mathematical reasoning.
📝 Abstract
The reasoning capabilities of Large Language Models (LLMs) play a critical role in many downstream tasks, yet depend strongly on the quality of training data. Although various data construction methods have been proposed, their practical utility in real-world pipelines remains underexplored. In this work, we conduct a comprehensive analysis of open-source datasets and data synthesis techniques for mathematical reasoning, evaluating them under a unified pipeline designed to mirror realistic training and deployment scenarios. We further distill effective data selection strategies and identify practical methods suitable for industrial applications. Our findings highlight that structuring data in more interpretable formats or distilling from stronger models often outweighs simply scaling up data volume. This study provides actionable guidance for curating training data to enhance LLM capabilities, supporting both cost-effective data curation and scalable model enhancement. We hope this work will inspire further research on how to balance "more data" versus "better data" for real-world reasoning tasks.