MIND: Math Informed syNthetic Dialogues for Pretraining LLMs

📅 2024-10-15
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing synthetic data methods fail to effectively enhance large language models' (LLMs) performance on multi-step mathematical and complex reasoning tasks, because the synthetic data rarely adds knowledge beyond the raw corpus. To address this, the authors propose MIND, a math-informed synthetic dialogue generation method, and use it to build MIND-OWM, a math corpus in which explicit knowledge gaps between dialogue participants produce high-quality, multi-step mathematical reasoning conversations. Methodologically, rather than using raw data as-is or naively concatenating it with synthetic data, they restructure the raw corpus: OpenWebMath documents are rewritten as dialogues, and the synthetic and raw data are carefully formatted and integrated during pretraining. Evaluated on GSM8K, MATH, and MMLU, MIND-OWM yields improvements of +13.42%, +2.30%, and +4.55%, respectively, substantially strengthening LLMs' mathematical, STEM, and general reasoning capabilities. The results suggest that restructuring raw data into knowledge-gap dialogues is an effective recipe for building reasoning ability during LLM pretraining.

๐Ÿ“ Abstract
The utility of synthetic data to enhance pretraining data quality and hence to improve downstream task accuracy has been widely explored in recent large language models (LLMs). Yet, these approaches fall short in complex, multi-hop and mathematical reasoning tasks as the synthetic data typically fails to add complementary knowledge to the existing raw corpus. In this work, we propose a novel large-scale and diverse Math Informed syNthetic Dialogue (MIND) generation method that improves the mathematical reasoning ability of LLMs. Specifically, using MIND, we generate synthetic conversations based on OpenWebMath (OWM), resulting in a new math corpus, MIND-OWM. Our experiments with different conversational settings reveal that incorporating knowledge gaps between dialogue participants is essential for generating high-quality math data. We further identify an effective way to format and integrate synthetic and raw data during pretraining to maximize the gain in mathematical reasoning, emphasizing the need to restructure raw data rather than use it as-is. Compared to pretraining just on raw data, a model pretrained on MIND-OWM shows a significant boost in mathematical reasoning (GSM8K: +13.42%, MATH: +2.30%), including superior performance in specialized knowledge (MMLU: +4.55%, MMLU-STEM: +4.28%) and general purpose reasoning tasks (GENERAL REASONING: +2.51%).
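The core idea above, rewriting a raw OpenWebMath passage as a conversation between two participants with unequal knowledge, can be sketched as a prompt-construction step. The persona pairs and prompt wording below are hypothetical illustrations, not the authors' released prompts; a minimal sketch assuming the generation itself is done by a separate LLM call:

```python
# Illustrative sketch of MIND-style dialogue generation (hypothetical
# prompt wording; the paper's actual prompts and persona pairs differ).

# Conversational settings with an explicit knowledge gap between the
# two participants -- the paper finds such gaps essential for quality.
PERSONA_PAIRS = {
    "teacher_student": ("a teacher", "a student"),
    "professor_layperson": ("a professor", "a layperson"),
}

def build_dialogue_prompt(raw_text: str, setting: str = "teacher_student") -> str:
    """Compose a prompt asking an LLM to rewrite raw math text as a
    multi-turn dialogue between an expert and a novice, preserving
    every reasoning step from the source passage."""
    expert, novice = PERSONA_PAIRS[setting]
    return (
        f"Rewrite the following math content as a conversation between "
        f"{expert} and {novice}. The {novice} should ask clarifying "
        f"questions, and every step of the reasoning must be preserved "
        f"and explained.\n\n---\n{raw_text}\n---"
    )

prompt = build_dialogue_prompt("The sum of the first n odd numbers is n^2.")
print(prompt)
```

The resulting prompt would then be sent to a generator LLM, and the returned dialogue mixed with the (restructured) raw corpus during pretraining, per the formatting strategy the abstract describes.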
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs' mathematical reasoning with synthetic dialogues
Addressing knowledge gaps in synthetic data for complex reasoning
Optimizing data integration to boost math task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates math-informed synthetic dialogues for LLMs
Incorporates knowledge gaps in dialog participants
Restructures raw data for better pretraining integration