MathFimer: Enhancing Mathematical Reasoning by Expanding Reasoning Steps through Fill-in-the-Middle Task

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from suboptimal mathematical reasoning performance due to coarse-grained, implicit intermediate reasoning steps. Method: This paper introduces a Chain-of-Thought (CoT) expansion framework grounded in the “Fill-in-the-Middle” (FIM) paradigm—originally developed for code modeling—and adapts it to mathematical reasoning for the first time. By decomposing problem prompts into prefix-suffix segments and reconstructing latent intermediate steps, the framework automatically infills and refines granular reasoning traces without relying on stronger external models or costly inference-time augmentation. We fine-tune MathFimer-7B on NuminaMath-FIM and automate enhancement of datasets such as MathInstruct. Results: Models trained on the expanded data achieve significant improvements over baselines on GSM8K and MATH benchmarks, demonstrating the method’s effectiveness, scalability, and cross-dataset generalization capability.

📝 Abstract
Mathematical reasoning represents a critical frontier in advancing large language models (LLMs). While step-by-step approaches have emerged as the dominant paradigm for mathematical problem-solving in LLMs, the quality of reasoning steps in training data fundamentally constrains the performance of the models. Recent studies have demonstrated that more detailed intermediate steps can enhance model performance, yet existing methods for step expansion either require more powerful external models or incur substantial computational costs. In this paper, we introduce MathFimer, a novel framework for mathematical reasoning step expansion inspired by the "Fill-in-the-Middle" task from code completion. By decomposing solution chains into prefix-suffix pairs and training models to reconstruct missing intermediate steps, we develop a specialized model, MathFimer-7B, on our carefully curated NuminaMath-FIM dataset. We then apply these models to enhance existing mathematical reasoning datasets by inserting detailed intermediate steps into their solution chains, creating MathFimer-expanded versions. Through comprehensive experiments on multiple mathematical reasoning datasets, including MathInstruct and MetaMathQA, we demonstrate that models trained on MathFimer-expanded data consistently outperform their counterparts trained on original data across various benchmarks such as GSM8K and MATH. Our approach offers a practical, scalable solution for enhancing mathematical reasoning capabilities in LLMs without relying on powerful external models or expensive inference procedures.
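The FIM-style training data described in the abstract can be sketched as follows: each interior step of a step-by-step solution becomes the target, with the surrounding steps serving as prefix and suffix. This is a minimal illustration; the function and prompt format are assumptions, not taken from the paper's released code.

```python
# Hypothetical sketch of FIM-style example construction for math reasoning:
# each interior solution step becomes a reconstruction target, conditioned
# on the steps before (prefix) and after (suffix) it.

def make_fim_examples(problem: str, steps: list[str]) -> list[dict]:
    """For each interior step, emit a (prefix, suffix) -> middle example."""
    examples = []
    for i in range(1, len(steps) - 1):
        examples.append({
            "prompt": (
                f"Problem: {problem}\n"
                f"Prefix: {' '.join(steps[:i])}\n"
                f"Suffix: {' '.join(steps[i + 1:])}\n"
                "Fill in the missing step:"
            ),
            "target": steps[i],
        })
    return examples

solution = [
    "Let x be the number of apples.",
    "Then 2x + 3 = 11, so 2x = 8.",
    "Therefore x = 4.",
]
exs = make_fim_examples("Two times a number plus 3 is 11. Find it.", solution)
print(len(exs))  # one interior step -> one example
```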
Problem

Research questions and friction points this paper is trying to address.

Enhancing mathematical reasoning in LLMs
Expanding reasoning steps efficiently
Reducing dependency on external models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Fill-in-the-Middle task inspiration
Trains models on prefix-suffix pairs
Expands datasets with detailed intermediate steps
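The expansion stage described above can be sketched as splicing a model-generated intermediate step between every pair of adjacent steps in an existing solution chain. The `infill` callable stands in for a call to a trained model such as MathFimer-7B; the stub below is a placeholder so the control flow is runnable, not the paper's actual implementation.

```python
# Illustrative sketch of dataset expansion: between each pair of adjacent
# steps, an infilling model proposes an intermediate step that is spliced in.
from typing import Callable

def expand_solution(steps: list[str],
                    infill: Callable[[list[str], list[str]], str]) -> list[str]:
    """Return a new chain with one model-infilled step between each pair."""
    expanded = [steps[0]]
    for i in range(1, len(steps)):
        middle = infill(steps[:i], steps[i:])  # model sees prefix and suffix
        expanded.append(middle)
        expanded.append(steps[i])
    return expanded

# Placeholder infiller for demonstration only; a real pipeline would query
# the trained FIM model here.
def stub(prefix: list[str], suffix: list[str]) -> str:
    return f"(reasoning between step {len(prefix)} and {len(prefix) + 1})"

chain = ["Set up the equation.", "Solve for x.", "State the answer."]
print(len(expand_solution(chain, stub)))  # 3 original + 2 inserted = 5
```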
👥 Authors
Yuchen Yan
Zhejiang University, Meituan Group
Yongliang Shen
Zhejiang University
Yang Liu
Meituan Group
Jin Jiang
Meituan Group, Peking University
Xin Xu
Hong Kong University of Science and Technology
Mengdi Zhang
Meituan Group
Jian Shao
Zhejiang University
Yueting Zhuang
Zhejiang University