🤖 AI Summary
This work addresses the inefficiency and optimization challenges in large language models that stem from the absence of a mechanism for quantifying information gain at each reasoning step. To this end, we propose CoT-Flow, a novel framework that, for the first time, integrates continuous probability flows into chain-of-thought reasoning by modeling the discrete sequence of reasoning steps as a probability flow, enabling precise quantification of step-level information gain. Building on this foundation, we introduce a flow-guided decoding strategy and a dense-reward reinforcement learning mechanism that operates without external verifiers. Experimental results demonstrate that our approach achieves a superior balance between reasoning performance and efficiency across multiple challenging benchmarks.
📝 Abstract
High-quality chain-of-thought reasoning has demonstrated strong potential for unlocking the reasoning capabilities of large language models. However, current paradigms typically treat the reasoning process as an indivisible sequence, lacking an intrinsic mechanism to quantify step-wise information gain. This granularity gap manifests in two limitations: inference inefficiency from redundant exploration without explicit guidance, and optimization difficulty due to sparse outcome supervision or costly external verifiers. In this work, we propose CoT-Flow, a framework that reconceptualizes discrete reasoning steps as a continuous probabilistic flow, quantifying the contribution of each step toward the ground-truth answer. Building on this formulation, CoT-Flow enables two complementary methodologies: flow-guided decoding, which employs a greedy flow-based decoding strategy to extract information-efficient reasoning paths, and flow-based reinforcement learning, which constructs a verifier-free dense reward function. Experiments on challenging benchmarks demonstrate that CoT-Flow achieves a superior balance between inference efficiency and reasoning performance.
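The two methodologies share one primitive: a flow value that scores how much a partial reasoning path contributes toward the answer, so that greedy decoding picks the highest-flow next step and the per-step flow increment doubles as a dense reward. A minimal sketch of that loop, assuming hypothetical `flow` and `propose_steps` functions (the names and toy flow function below are illustrative, not the paper's actual formulation):

```python
# Illustrative sketch of flow-guided greedy decoding. flow(state) is
# assumed to score a partial reasoning path's contribution toward the
# ground-truth answer; propose_steps(state) is assumed to return
# candidate extended paths. Both are hypothetical stand-ins.

def greedy_flow_decode(initial_state, propose_steps, flow, max_steps=8):
    """Greedily extend the reasoning path with the candidate step that
    maximizes flow, recording the per-step flow gain as a dense reward."""
    path, state = [], initial_state
    for _ in range(max_steps):
        candidates = propose_steps(state)
        if not candidates:
            break
        # Greedy flow-guided choice: take the highest-flow candidate.
        next_state = max(candidates, key=flow)
        # Verifier-free dense reward: step-level information gain.
        reward = flow(next_state) - flow(state)
        path.append((next_state, reward))
        state = next_state
    return path

# Toy usage: states are strings, flow is just length (a stand-in).
toy_flow = len
toy_propose = lambda s: [s + "a", s + "bb"] if len(s) < 4 else []
result = greedy_flow_decode("", toy_propose, toy_flow)
# Each path entry pairs the chosen state with its dense reward.
```

In an RL setting, the same per-step rewards would supervise every intermediate step instead of only the final outcome, which is the sense in which the reward is dense rather than sparse.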