DeCoRL: Decoupling Reasoning Chains via Parallel Sub-Step Generation and Cascaded Reinforcement for Interpretable and Scalable RLHF

📅 2025-11-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current chain-of-thought reinforcement learning (CoT-RL) faces two key bottlenecks: opaque reasoning—where black-box, single-step rewards obscure individual step contributions—and inefficient sequential decoding—whose O(n) time complexity hinders real-time deployment. To address these, we propose a “parallel sub-step generation + cascaded RL” framework. First, we decompose the reasoning chain into modular, parallelizable sub-steps. Second, we design a fine-grained, modular reward function enabling per-step independent evaluation and precise error attribution. Third, we introduce Cascaded DRPO—a novel RL algorithm that jointly optimizes sub-steps while preserving their logical dependencies. Our method employs lightweight, domain-specialized models to enable efficient parallelization, significantly improving both inference efficiency and interpretability. Experiments across multiple benchmarks demonstrate state-of-the-art performance, with 3.8× faster inference, 72.4% lower energy consumption, 68% higher throughput, and 22.7% improved interpretability.
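As a rough illustration of the first two ideas (not the paper's actual implementation), the sketch below decomposes a problem into sub-steps, generates them concurrently, and scores each one with an independent per-step reward. `generate_substep`, `substep_reward`, and `parallel_reasoning` are hypothetical stand-ins for the lightweight specialized models and modular reward functions the summary describes.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a lightweight, domain-specialized model:
# maps a sub-problem to a candidate reasoning sub-step.
def generate_substep(subproblem: str) -> str:
    return f"step[{subproblem}]"

# Hypothetical modular reward: scores one sub-step in isolation,
# so a failing step can be attributed directly to its producer.
def substep_reward(substep: str) -> float:
    return 1.0 if substep.startswith("step[") else 0.0

def parallel_reasoning(subproblems):
    # Sub-steps are generated concurrently rather than decoded one
    # after another, removing the O(n) sequential bottleneck.
    with ThreadPoolExecutor() as pool:
        substeps = list(pool.map(generate_substep, subproblems))
    return substeps, [substep_reward(s) for s in substeps]

steps, rewards = parallel_reasoning(["decompose", "derive", "verify"])
```

Because `Executor.map` preserves input order, the sub-steps come back in chain order even though they were produced in parallel; the real framework additionally has to reconcile logical dependencies between them, which the cascaded optimization handles.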

📝 Abstract
Existing reinforcement learning methods for Chain-of-Thought reasoning suffer from two critical limitations. First, they operate as monolithic black boxes that provide undifferentiated reward signals, obscuring individual step contributions and hindering error diagnosis. Second, sequential decoding incurs O(n) time complexity, making real-time deployment impractical for complex reasoning tasks. We present DeCoRL (Decoupled Reasoning Chains via Coordinated Reinforcement Learning), a novel framework that transforms reasoning from sequential processing into collaborative modular orchestration. DeCoRL trains lightweight specialized models to generate reasoning sub-steps concurrently, eliminating sequential bottlenecks through parallel processing. To enable precise error attribution, the framework designs modular reward functions that score each sub-step independently. Cascaded DRPO optimization then coordinates these rewards while preserving inter-step dependencies. Comprehensive evaluation demonstrates state-of-the-art results across RM-Bench, RMB, and RewardBench, outperforming existing methods including large-scale models. DeCoRL delivers 3.8 times faster inference while maintaining superior solution quality and offers a 22.7% improvement in interpretability through explicit reward attribution. These advancements, combined with a 72.4% reduction in energy consumption and a 68% increase in throughput, make real-time deployment of complex reasoning systems a reality.
Problem

Research questions and friction points this paper is trying to address.

Overcomes monolithic reward signals hindering error diagnosis in reasoning chains
Addresses sequential decoding bottlenecks with parallel sub-step generation
Enables real-time deployment of complex reasoning through efficiency improvements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel sub-step generation eliminates sequential bottlenecks
Modular reward functions enable precise error attribution
Cascaded DRPO optimization coordinates rewards preserving dependencies
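The paper does not spell out Cascaded DRPO here, but the idea of coordinating independent per-step rewards while preserving logical dependencies can be illustrated with a toy cascade: each step's effective reward mixes its own modular score with the signal from its predecessor, so an error early in the chain also discounts the steps that build on it. `cascaded_rewards` and `dependency_weight` are illustrative names, not the paper's API.

```python
def cascaded_rewards(step_rewards, dependency_weight=0.5):
    """Toy cascade over per-step rewards (illustrative, not DRPO itself).

    Each step's effective reward blends its own independent score with
    its predecessor's effective reward, so a broken upstream step
    propagates a penalty downstream -- preserving chain dependencies
    while keeping per-step attribution intact.
    """
    effective = []
    prev = 1.0  # neutral upstream signal for the first step
    for r in step_rewards:
        e = (1 - dependency_weight) * r + dependency_weight * prev * r
        effective.append(e)
        prev = e
    return effective

# A failure at step 3 leaves steps 1-2 fully credited (precise
# attribution) but discounts step 4, which depends on step 3.
scores = cascaded_rewards([1.0, 1.0, 0.0, 1.0])
```

A fully correct chain keeps every effective reward at its independent value, while a single bad step both scores zero itself and halves the credit of the step built on top of it.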
Ziyuan Gao
University College London
Di Liang
University of Michigan
diode lasers · Si photonics · photonic integrated circuits · nanofabrication
Xianjie Wu
Beihang University
Philippe Morel
University College London
Minlong Peng
Baidu
Machine Learning · Natural Language Processing