AI Summary
Existing synthetic data methods fail to effectively enhance large language models' (LLMs) performance on multi-step mathematical and complex reasoning tasks. To address this, we propose MIND, a math-informed synthetic dialogue generation method, and use it to build MIND-OWM, a synthetic dialogue dataset grounded in knowledge-gap modeling. It explicitly encodes structured knowledge disparities between dialogue participants to generate high-quality, multi-step mathematical reasoning dialogues. Methodologically, we abandon naive data concatenation and instead introduce a raw-corpus reconstruction strategy: leveraging OpenWebMath to construct dialogue templates, then applying knowledge-gap injection and format re-orchestration. Evaluated on GSM8K, MATH, and MMLU, MIND-OWM yields improvements of +13.42%, +2.30%, and +4.55%, respectively, substantially strengthening LLMs' mathematical, STEM, and general reasoning capabilities. This work points toward a new paradigm for building reasoning competence during LLM pretraining.
Abstract
The utility of synthetic data to enhance pretraining data quality, and hence to improve downstream task accuracy, has been widely explored in recent large language models (LLMs). Yet these approaches fall short on complex, multi-hop, and mathematical reasoning tasks, as the synthetic data typically fails to add complementary knowledge to the existing raw corpus. In this work, we propose a novel large-scale and diverse Math Informed syNthetic Dialogue (MIND) generation method that improves the mathematical reasoning ability of LLMs. Specifically, using MIND, we generate synthetic conversations based on OpenWebMath (OWM), resulting in a new math corpus, MIND-OWM. Our experiments with different conversational settings reveal that incorporating knowledge gaps between dialogue participants is essential for generating high-quality math data. We further identify an effective way to format and integrate synthetic and raw data during pretraining to maximize the gain in mathematical reasoning, emphasizing the need to restructure raw data rather than use it as-is. Compared to pretraining on raw data alone, a model pretrained on MIND-OWM shows a significant boost in mathematical reasoning (GSM8K: +13.42%, MATH: +2.30%), including superior performance on specialized knowledge (MMLU: +4.55%, MMLU-STEM: +4.28%) and general-purpose reasoning tasks (GENERAL REASONING: +2.51%).
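To make the core idea concrete, here is a minimal sketch of how a raw OpenWebMath passage could be turned into an LLM prompt for dialogue synthesis, where the chosen conversational setting controls the knowledge gap between participants. All names, role descriptions, and the prompt wording are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch of MIND-style dialogue prompting.
# RAW_PASSAGE stands in for a document from OpenWebMath.
RAW_PASSAGE = (
    "The sum of the first n positive integers is n(n+1)/2. "
    "For n = 10, the sum is 55."
)

# Conversational settings differ in the knowledge gap between speakers.
# The role descriptions below are invented for illustration.
STYLES = {
    "two_professors": "Both speakers fully understand the text.",  # no gap
    "teacher_student": (
        "The teacher knows the text; the student does not and asks "
        "clarifying questions at every step."
    ),  # explicit knowledge gap
}

def build_dialogue_prompt(passage: str, style: str) -> str:
    """Compose a prompt asking an LLM to rewrite a raw math passage
    as a multi-turn conversation in the chosen setting."""
    return (
        "Rewrite the following text as a conversation.\n"
        f"Setting: {STYLES[style]}\n"
        "Break every derivation into explicit, numbered steps.\n\n"
        f"Text:\n{passage}\n"
    )

prompt = build_dialogue_prompt(RAW_PASSAGE, "teacher_student")
print(prompt)
```

The generated dialogues would then be formatted and mixed with the restructured raw corpus for pretraining; the paper's finding is that gap-bearing settings like the student/teacher one yield the strongest math data.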