🤖 AI Summary
Traditional chain-of-thought (CoT) reasoning is constrained by the greedy, deterministic nature of autoregressive decoding, which limits exploration of diverse reasoning paths. To address this, we propose M3PO, a framework that introduces multi-path collaborative reasoning to large language models (LLMs) for the first time. M3PO uses reinforcement learning to generate multiple semantically diverse reasoning paths via parallel rollouts, and adds a lightweight cross-path interaction mechanism that lets each path dynamically refine its decisions based on peer feedback. Crucially, it replaces hard token selection with soft abstract token representations that model uncertainty and enable collective policy updates. On knowledge-intensive and reasoning-intensive benchmarks, M3PO delivers significant gains in accuracy, robustness, and interpretability while preserving inference efficiency. This work establishes a new paradigm for trustworthy, collaborative reasoning in LLMs.
📝 Abstract
Chain-of-Thought (CoT) reasoning has significantly advanced the problem-solving capabilities of Large Language Models (LLMs), yet conventional CoT often exhibits internal determinism during decoding, limiting exploration of plausible alternatives. Recent methods attempt to address this by generating soft abstract tokens to enable reasoning in a continuous semantic space. However, we find that such approaches remain constrained by the greedy nature of autoregressive decoding, which fundamentally isolates the model from alternative reasoning possibilities. In this work, we propose Multi-Path Perception Policy Optimization (M3PO), a novel reinforcement learning framework that explicitly injects collective insights into the reasoning process. M3PO leverages parallel policy rollouts as naturally diverse reasoning sources and integrates cross-path interactions into policy updates through a lightweight collaborative mechanism. This design allows each trajectory to refine its reasoning with peer feedback, thereby cultivating more reliable multi-step reasoning patterns. Empirical results show that M3PO achieves state-of-the-art performance on both knowledge- and reasoning-intensive benchmarks. Models trained with M3PO maintain interpretability and inference efficiency, underscoring the promise of multi-path collaborative learning for robust reasoning.
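The two core ideas in the abstract — soft abstract tokens in place of hard token selection, and a lightweight cross-path interaction among parallel rollouts — can be sketched in a few lines. The sketch below is purely illustrative and not the paper's implementation: `soft_tokens`, `cross_path_mix`, and the mixing weight `alpha` are assumed names; it shows a probability-weighted embedding mixture for each of `K` parallel paths, followed by a simple scaled dot-product attention over peer paths (self excluded) so each path's representation is refined by the others.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_tokens(logits, embed):
    """Soft abstract tokens: a probability-weighted mixture of token
    embeddings instead of a hard argmax pick.
    logits: (K, V) per-path vocab logits; embed: (V, d) embedding table.
    Returns (K, d) soft-token vectors, one per path."""
    probs = softmax(logits)          # (K, V), models per-path uncertainty
    return probs @ embed             # (K, d)

def cross_path_mix(h, alpha=0.5):
    """Lightweight cross-path interaction (illustrative): each path
    attends to its peers' soft tokens and blends in their feedback.
    h: (K, d) soft tokens; alpha: assumed peer-mixing weight."""
    scores = h @ h.T / np.sqrt(h.shape[1])   # (K, K) pairwise similarity
    np.fill_diagonal(scores, -np.inf)        # exclude self-attention
    w = softmax(scores, axis=-1)             # attention over peer paths
    peer = w @ h                             # peer-aggregated feedback
    return (1.0 - alpha) * h + alpha * peer  # refined path representations
```

In a real RL setup the refined representations would feed back into the policy update for all paths jointly; here the point is only that the interaction is a cheap `(K, K)` attention over the `K` rollouts, so the cost grows with the number of paths rather than with sequence length.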