Mind the Gap: Bridging Thought Leap for Improved Chain-of-Thought Tuning

📅 2025-05-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing chain-of-thought (CoT) datasets for mathematical reasoning suffer from "Thought Leaps", critical intermediate steps omitted by experts, which impairs model learning and generalization. This work formally defines and systematically addresses the issue. First, the authors construct ScaleQM+, a high-quality, structured CoT dataset with explicitly validated logical continuity. Second, they propose CoT-Bridge, a plug-and-play module trained via supervised fine-tuning that automatically detects thought leaps and generates the missing intermediate steps, restoring the completeness and logical coherence of CoT. The approach is compatible with prevailing CoT optimization paradigms. Empirical evaluation shows absolute accuracy gains of up to +5.87% on NuminaMath, a +3.02% gain when applied to distilled data, and a +3.1% boost when used to initialize reinforcement learning. The method also improves generalization to out-of-domain logical reasoning tasks, demonstrating transferability beyond the training distribution.
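
To make the bridging task concrete, below is a minimal sketch of how a trained bridging model might be applied to a single CoT solution. The function name, prompt wording, and the "NO_LEAP" convention are illustrative assumptions, not the interface released with CoT-Bridge.

```python
from typing import Callable, List

def bridge_thought_leaps(question: str,
                         steps: List[str],
                         bridge_model: Callable[[str], str]) -> List[str]:
    """Detect thought leaps between adjacent steps and splice in generated bridges.

    `bridge_model` is a hypothetical callable mapping a prompt string to a
    completion string (e.g. a wrapper around any instruction-tuned LLM).
    """
    if not steps:
        return []
    bridged: List[str] = []
    for prev_step, next_step in zip(steps, steps[1:]):
        bridged.append(prev_step)
        prompt = (
            f"Problem: {question}\n"
            f"Previous step: {prev_step}\n"
            f"Next step: {next_step}\n"
            "If the next step skips necessary intermediate reasoning, write the "
            "missing steps, one per line. Otherwise reply with NO_LEAP."
        )
        completion = bridge_model(prompt).strip()
        if completion != "NO_LEAP":
            # Insert the generated intermediate steps between the two originals.
            bridged.extend(line for line in completion.splitlines() if line.strip())
    bridged.append(steps[-1])
    return bridged
```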

📝 Abstract
Large language models (LLMs) have achieved remarkable progress on mathematical tasks through Chain-of-Thought (CoT) reasoning. However, existing mathematical CoT datasets often suffer from Thought Leaps due to experts omitting intermediate steps, which negatively impacts model learning and generalization. We propose the CoT Thought Leap Bridge Task, which aims to automatically detect leaps and generate missing intermediate reasoning steps to restore the completeness and coherence of CoT. To facilitate this, we constructed a specialized training dataset called ScaleQM+, based on the structured ScaleQuestMath dataset, and trained CoT-Bridge to bridge thought leaps. Through comprehensive experiments on mathematical reasoning benchmarks, we demonstrate that models fine-tuned on bridged datasets consistently outperform those trained on original datasets, with improvements of up to +5.87% on NuminaMath. Our approach effectively enhances distilled data (+3.02%) and provides better starting points for reinforcement learning (+3.1%), functioning as a plug-and-play module compatible with existing optimization techniques. Furthermore, CoT-Bridge demonstrates improved generalization to out-of-domain logical reasoning tasks, confirming that enhancing reasoning completeness yields broadly applicable benefits.
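
As a rough illustration of the plug-and-play usage described in the abstract, the same bridging step can be applied to every example of an existing SFT corpus before fine-tuning, distillation, or RL initialization. The sketch below reuses the `bridge_thought_leaps` function from above; the `{"question", "steps"}` record layout and the newline separator are assumptions about the data format, not the paper's released pipeline.

```python
def bridge_sft_corpus(records, bridge_model):
    """Bridge an existing CoT corpus so downstream training pipelines need no changes."""
    bridged_records = []
    for record in records:
        steps = bridge_thought_leaps(record["question"], record["steps"], bridge_model)
        bridged_records.append({
            "question": record["question"],
            # Re-join the possibly expanded step list into one CoT string, so any
            # standard SFT / distillation / RL-initialization setup can consume it.
            "response": "\n".join(steps),
        })
    return bridged_records
```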
Problem

Research questions and friction points this paper is trying to address.

Detect and fill missing steps in Chain-of-Thought reasoning
Improve model learning by reducing thought leaps in CoT
Enhance reasoning completeness for better generalization in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects thought leaps in reasoning steps
Generates missing intermediate reasoning steps
Enhances model performance with bridged datasets
👥 Authors
Haolei Xu (Zhejiang University)
Yuchen Yan (Zhejiang University)
Yongliang Shen (Zhejiang University)
Wenqi Zhang (Zhejiang University) · Language Model, Multimodal Learning, Embodied Agents
Guiyang Hou (Zhejiang University)
Shengpei Jiang (The Chinese University of Hong Kong)
Kaitao Song (Senior Researcher, Microsoft Research) · Natural Language Processing, Large Language Models, Artificial General Intelligence
Weiming Lu (Zhejiang University) · Natural Language Processing, Large Language Models, AGI
Jun Xiao (Zhejiang University)
Yueting Zhuang (Zhejiang University)