🤖 AI Summary
Existing chain-of-thought (CoT) datasets for mathematical reasoning suffer from "thought leaps"—critical intermediate steps omitted by experts—which impairs model learning and generalization. This work formally defines and systematically addresses the issue. First, the authors construct ScaleQM+, a high-quality, structured CoT dataset with explicitly validated logical continuity. Second, they propose CoT-Bridge, a plug-and-play module trained via supervised fine-tuning to automatically detect thought leaps and generate the missing intermediate steps, restoring CoT completeness and logical coherence. The approach is fully compatible with prevailing CoT optimization paradigms. Empirical evaluation on NuminaMath shows a +5.87% absolute accuracy gain, a +3.02% improvement in distilled data quality, and a +3.1% boost in reinforcement learning initialization performance. The method also significantly enhances generalization to out-of-domain logical reasoning, demonstrating robust transferability beyond the training distribution.
📝 Abstract
Large language models (LLMs) have achieved remarkable progress on mathematical tasks through Chain-of-Thought (CoT) reasoning. However, existing mathematical CoT datasets often suffer from Thought Leaps due to experts omitting intermediate steps, which negatively impacts model learning and generalization. We propose the CoT Thought Leap Bridge Task, which aims to automatically detect leaps and generate missing intermediate reasoning steps to restore the completeness and coherence of CoT. To facilitate this, we constructed a specialized training dataset called ScaleQM+, based on the structured ScaleQuestMath dataset, and trained CoT-Bridge to bridge thought leaps. Through comprehensive experiments on mathematical reasoning benchmarks, we demonstrate that models fine-tuned on bridged datasets consistently outperform those trained on original datasets, with improvements of up to +5.87% on NuminaMath. Our approach effectively enhances distilled data (+3.02%) and provides better starting points for reinforcement learning (+3.1%), functioning as a plug-and-play module compatible with existing optimization techniques. Furthermore, CoT-Bridge demonstrates improved generalization to out-of-domain logical reasoning tasks, confirming that enhancing reasoning completeness yields broadly applicable benefits.
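The bridge task described above can be pictured as a simple detect-then-insert pass over a chain of reasoning steps. The sketch below is purely illustrative: `detect_leap` is a toy lexical heuristic and `bridge` is a placeholder for the trained CoT-Bridge model, neither of which appears in the paper in this form.

```python
# Illustrative sketch of the CoT Thought Leap Bridge task.
# detect_leap and bridge are hypothetical stand-ins; the paper trains
# CoT-Bridge (an LLM) to perform both detection and generation.

def detect_leap(prev_step: str, next_step: str) -> bool:
    """Toy heuristic: flag a leap when the next step draws a conclusion
    ("therefore"/"thus"/"hence") while sharing almost no tokens with the
    previous step, suggesting skipped intermediate reasoning."""
    concluding = any(w in next_step.lower() for w in ("therefore", "thus", "hence"))
    shared = set(prev_step.lower().split()) & set(next_step.lower().split())
    return concluding and len(shared) <= 1

def bridge(prev_step: str, next_step: str) -> str:
    """Placeholder for the trained bridge model, which would generate the
    missing intermediate step(s) connecting the two given steps."""
    return f"[bridge: '{prev_step}' -> '{next_step}']"

def bridge_cot(steps: list[str]) -> list[str]:
    """Walk adjacent step pairs, inserting a generated bridging step
    wherever a leap is detected."""
    out = [steps[0]]
    for prev, nxt in zip(steps, steps[1:]):
        if detect_leap(prev, nxt):
            out.append(bridge(prev, nxt))
        out.append(nxt)
    return out

cot = [
    "Let x + 2 = 5.",
    "Therefore the answer is 27.",  # leaps over solving x = 3 and cubing
]
print(bridge_cot(cot))  # a bridging step is inserted between the two
```

In the actual pipeline, the repaired chains would then be used as fine-tuning data in place of the originals, which is what yields the reported gains.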