🤖 AI Summary
Current chain-of-thought reinforcement learning (CoT-RL) faces two key bottlenecks: opaque reasoning, where black-box, single-step rewards obscure each step's contribution, and inefficient sequential decoding, whose O(n) time complexity hinders real-time deployment. To address these, we propose a "parallel sub-step generation + cascaded RL" framework. First, we decompose the reasoning chain into modular, parallelizable sub-steps. Second, we design a fine-grained, modular reward function that evaluates each sub-step independently and enables precise error attribution. Third, we introduce Cascaded DRPO, a novel RL algorithm that jointly optimizes sub-steps while preserving their logical dependencies. Our method employs lightweight, domain-specialized models to enable efficient parallelization, significantly improving both inference efficiency and interpretability. Experiments across multiple benchmarks demonstrate state-of-the-art performance, with 3.8× faster inference, 72.4% lower energy consumption, 68% higher throughput, and a 22.7% improvement in interpretability.
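The "parallel sub-step generation" idea above can be sketched minimally: decompose a query into sub-steps handled by lightweight specialized generators, then run them concurrently instead of decoding sequentially. All names here (`make_substep_generator`, the domain labels) are illustrative assumptions, not the paper's API.

```python
from concurrent.futures import ThreadPoolExecutor

def make_substep_generator(domain):
    """Stand-in for a lightweight, domain-specialized model that
    produces one reasoning sub-step independently of the others."""
    def generate(question):
        return f"[{domain}] sub-step for: {question}"
    return generate

def generate_substeps_parallel(question, domains):
    """Generate all sub-steps concurrently, replacing O(n)
    sequential decoding with parallel generation."""
    generators = [make_substep_generator(d) for d in domains]
    with ThreadPoolExecutor(max_workers=len(generators)) as pool:
        futures = [pool.submit(g, question) for g in generators]
        return [f.result() for f in futures]

steps = generate_substeps_parallel("2 + 3 * 4 = ?", ["parse", "compute", "verify"])
```

In a real system each generator would be a separate model invocation, so wall-clock latency approaches the slowest sub-step rather than the sum of all steps.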
📝 Abstract
Existing reinforcement learning methods for Chain-of-Thought reasoning suffer from two critical limitations. First, they operate as monolithic black boxes that provide undifferentiated reward signals, obscuring individual step contributions and hindering error diagnosis. Second, sequential decoding incurs O(n) time complexity, making real-time deployment impractical for complex reasoning tasks. We present DeCoRL (Decoupled Reasoning Chains via Coordinated Reinforcement Learning), a novel framework that transforms reasoning from sequential processing into collaborative modular orchestration. DeCoRL trains lightweight specialized models to generate reasoning sub-steps concurrently, eliminating sequential bottlenecks through parallel processing. To enable precise error attribution, the framework introduces modular reward functions that score each sub-step independently. Cascaded DRPO optimization then coordinates these rewards while preserving inter-step dependencies. Comprehensive evaluation demonstrates state-of-the-art results across RM-Bench, RMB, and RewardBench, outperforming existing methods including large-scale models. DeCoRL delivers 3.8× faster inference while maintaining superior solution quality and offers a 22.7% improvement in interpretability through explicit reward attribution. These advancements, combined with a 72.4% reduction in energy consumption and a 68% increase in throughput, make real-time deployment of complex reasoning systems practical.
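The abstract's combination of independent per-step scoring with dependency-preserving coordination might look roughly like the sketch below. This is a simplified stand-in for Cascaded DRPO under assumed semantics (the paper's actual objective is not given here): each sub-step is scored by its own reward function, and a step's contribution to the combined objective is discounted when the steps it depends on scored poorly.

```python
def modular_rewards(substeps, scorers):
    """Score each sub-step independently, so a low score
    pinpoints exactly which step failed (error attribution)."""
    return [scorer(step) for scorer, step in zip(scorers, substeps)]

def cascaded_objective(rewards, dependency_weight=0.5):
    """Combine per-step rewards while preserving inter-step
    dependencies: downstream rewards are discounted by the
    quality of the upstream steps they build on."""
    total, upstream = 0.0, 1.0
    for r in rewards:
        total += upstream * r
        # A weak step (low r) shrinks the credit available downstream.
        upstream *= dependency_weight + (1 - dependency_weight) * r
    return total

# A perfect chain earns full credit; a failed first step
# discounts the otherwise-correct second step.
# cascaded_objective([1.0, 1.0]) == 2.0
# cascaded_objective([0.0, 1.0]) == 0.5
```

The key property illustrated is that rewards are still computed per step (interpretability) while the objective couples them (logical dependencies), rather than summing them as if the steps were unrelated.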