🤖 AI Summary
Large language models (LLMs) underperform on mathematical reasoning when trained on coarse-grained solutions with implicit intermediate steps. Method: this paper introduces a Chain-of-Thought (CoT) expansion framework grounded in the "Fill-in-the-Middle" (FIM) paradigm, originally developed for code modeling and adapted here to mathematical reasoning for the first time. By decomposing solution chains into prefix-suffix pairs and reconstructing the latent intermediate steps, the framework automatically infills and refines fine-grained reasoning traces without relying on stronger external models or costly inference-time augmentation. The authors fine-tune MathFimer-7B on NuminaMath-FIM and use it to automatically enhance datasets such as MathInstruct. Results: models trained on the expanded data achieve significant improvements over baselines on the GSM8K and MATH benchmarks, demonstrating the method's effectiveness, scalability, and cross-dataset generalization. A sketch of the training-data construction appears below.
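To make the FIM decomposition concrete, here is a minimal sketch (not the authors' released code) of how a step-by-step solution might be turned into FIM training examples: each interior step is held out as the target, and the surrounding steps become the prefix/suffix context. The newline step delimiter and the `<prefix>`/`<suffix>` prompt tags are illustrative assumptions, not the paper's exact format.

```python
def make_fim_examples(problem: str, solution: str) -> list[dict]:
    """Split a step-by-step solution into (prefix, suffix) -> middle examples.

    Assumes one reasoning step per line; the prompt template below is a
    hypothetical stand-in for whatever format NuminaMath-FIM actually uses.
    """
    steps = [s for s in solution.split("\n") if s.strip()]
    examples = []
    for i in range(1, len(steps) - 1):  # hold out each interior step
        prefix = "\n".join(steps[:i])
        middle = steps[i]
        suffix = "\n".join(steps[i + 1:])
        examples.append({
            "input": (
                f"Problem: {problem}\n"
                f"<prefix>\n{prefix}\n</prefix>\n"
                f"<suffix>\n{suffix}\n</suffix>"
            ),
            "target": middle,  # the model learns to reconstruct this step
        })
    return examples


if __name__ == "__main__":
    demo = make_fim_examples(
        "What is 3 * (2 + 5)?",
        "Step 1: 2 + 5 = 7\nStep 2: 3 * 7 = 21\nStep 3: The answer is 21.",
    )
    print(demo[0]["target"])  # -> "Step 2: 3 * 7 = 21"
```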
📝 Abstract
Mathematical reasoning represents a critical frontier in advancing large language models (LLMs). While step-by-step approaches have emerged as the dominant paradigm for mathematical problem-solving in LLMs, the quality of reasoning steps in training data fundamentally constrains model performance. Recent studies have demonstrated that more detailed intermediate steps can enhance model performance, yet existing methods for step expansion either require more powerful external models or incur substantial computational costs. In this paper, we introduce MathFimer, a novel framework for mathematical reasoning step expansion inspired by the "Fill-in-the-Middle" task from code completion. By decomposing solution chains into prefix-suffix pairs and training models to reconstruct missing intermediate steps, we develop a specialized model, MathFimer-7B, on our carefully curated NuminaMath-FIM dataset. We then apply this model to enhance existing mathematical reasoning datasets by inserting detailed intermediate steps into their solution chains, creating MathFimer-expanded versions. Through comprehensive experiments on multiple mathematical reasoning datasets, including MathInstruct and MetaMathQA, we demonstrate that models trained on MathFimer-expanded data consistently outperform their counterparts trained on original data across benchmarks such as GSM8K and MATH. Our approach offers a practical, scalable solution for enhancing mathematical reasoning capabilities in LLMs without relying on powerful external models or expensive inference procedures.
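The expansion stage described in the abstract can be sketched as follows: a trained FIM model is queried between every pair of consecutive solution steps, and any generated intermediate step is spliced into the chain. This is a hedged illustration under assumptions; `infill` stands in for the actual MathFimer-7B inference call, whose prompt format and filtering criteria are not specified here.

```python
from typing import Callable


def expand_solution(
    problem: str,
    steps: list[str],
    infill: Callable[[str, str, str], str],  # hypothetical (problem, prefix, suffix) -> step
) -> list[str]:
    """Insert model-generated intermediate steps between consecutive steps.

    `infill` is assumed to wrap a MathFimer-style model; returning an empty
    string signals that no extra step is needed at that position.
    """
    expanded = [steps[0]]
    for nxt in steps[1:]:
        prefix = "\n".join(expanded)          # everything accepted so far
        middle = infill(problem, prefix, nxt).strip()
        if middle:                            # keep only non-empty infills
            expanded.append(middle)
        expanded.append(nxt)
    return expanded
```

In this sketch, a dataset's original solution chain stays intact; the model only adds steps between existing ones, which matches the abstract's claim that expansion does not require a stronger external model to rewrite solutions from scratch.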