Efficient Paths and Dense Rewards: Probabilistic Flow Reasoning for Large Language Models

📅 2026-01-14
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses inefficiency and optimization challenges in large language models that stem from the lack of a mechanism for quantifying the information gain of each reasoning step. To this end, we propose CoT-Flow, a framework that, for the first time, models discrete chain-of-thought reasoning steps as a continuous probability flow, enabling precise quantification of step-level information gain. Building on this formulation, we introduce a flow-guided decoding strategy and a dense-reward reinforcement learning mechanism that operates without external verifiers. Experiments demonstrate that our approach achieves a superior balance between reasoning performance and efficiency across multiple challenging benchmarks.

📝 Abstract
High-quality chain-of-thought reasoning has demonstrated strong potential for unlocking the reasoning capabilities of large language models. However, current paradigms typically treat the reasoning process as an indivisible sequence, lacking an intrinsic mechanism to quantify step-wise information gain. This granularity gap manifests in two limitations: inference inefficiency from redundant exploration without explicit guidance, and optimization difficulty due to sparse outcome supervision or costly external verifiers. In this work, we propose CoT-Flow, a framework that reconceptualizes discrete reasoning steps as a continuous probabilistic flow, quantifying the contribution of each step toward the ground-truth answer. Building on this formulation, CoT-Flow enables two complementary methodologies: flow-guided decoding, which employs a greedy flow-based decoding strategy to extract information-efficient reasoning paths, and flow-based reinforcement learning, which constructs a verifier-free dense reward function. Experiments on challenging benchmarks demonstrate that CoT-Flow achieves a superior balance between inference efficiency and reasoning performance.
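The abstract's core ideas can be sketched in a toy form. This is an illustrative assumption, not the paper's implementation: the paper publishes no code, and all names here (`log_p_answer`, the candidate-step labels, the toy evidence table) are hypothetical. The sketch scores each candidate step by the change in log P(answer | steps so far) — one plausible reading of step-level "flow" — greedily decodes the highest-gain path, stops early when no step adds enough information (avoiding redundant exploration), and reuses the per-step gains as a verifier-free dense reward signal.

```python
import math

# Toy stand-in for an LLM's answer likelihood: each step label contributes
# a fixed amount of "evidence" toward the answer. A real system would query
# the model for log P(answer | question, steps) instead.
EVIDENCE = {"restate": 0.1, "decompose": 0.9, "compute": 1.5, "digress": -0.2}

def log_p_answer(steps):
    """Log-probability proxy of the correct answer given the steps taken."""
    logit = sum(EVIDENCE.get(s, 0.0) for s in steps) - 2.0
    return -math.log1p(math.exp(-logit))  # log sigmoid(logit)

def info_gain(prefix, step):
    """Step-level information gain: increase in log P(answer) from one step."""
    return log_p_answer(prefix + [step]) - log_p_answer(prefix)

def flow_guided_decode(candidates, max_steps=3, min_gain=0.05):
    """Greedily pick the step with the largest flow (information gain);
    stop once no remaining candidate adds enough information."""
    path, rewards = [], []
    remaining = list(candidates)
    for _ in range(max_steps):
        if not remaining:
            break
        best = max(remaining, key=lambda s: info_gain(path, s))
        gain = info_gain(path, best)
        if gain < min_gain:          # early exit: prunes redundant exploration
            break
        path.append(best)
        rewards.append(gain)         # dense, verifier-free per-step reward
        remaining.remove(best)
    return path, rewards
```

Under this toy model, decoding over `["restate", "decompose", "compute", "digress"]` selects the high-gain steps and stops before the low-gain "restate" and negative-gain "digress" steps, illustrating how a flow criterion can shorten the reasoning path while supplying a reward at every step rather than only at the final answer.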
Problem

Research questions and friction points this paper is trying to address.

chain-of-thought
reasoning efficiency
sparse rewards
step-wise information gain
inference redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

probabilistic flow
chain-of-thought
dense reward
flow-guided decoding
reasoning efficiency