AI Summary
This work addresses the vulnerability of existing unsupervised reinforcement learning methods to pseudo-label bias, which often drives models to converge on spurious majority answers and hinders reasoning capability. To overcome this limitation, we propose Dual Consensus Reinforcement Learning (DCRL), a novel two-stage consensus mechanism that operates without external supervision. DCRL first anchors training on the model's own dominant responses and then employs a transient anti-learning strategy to generate diverse exploration signals; a robust training objective is formed from the harmonic mean of these two components. This approach avoids the trap of false consensus and significantly enhances the reasoning performance of large language models in purely unsupervised settings. Experiments demonstrate that DCRL consistently outperforms conventional majority voting across eight benchmark tasks, with improved training stability and scalability.
Abstract
Current label-free RLVR approaches for large language models (LLMs), such as TTRL and Self-reward, have proven effective at improving LLM performance on complex reasoning tasks. However, these methods rely heavily on accurate pseudo-label estimation and tend to converge on spurious yet popular answers, becoming trapped in a dominant mode that limits further improvement. To address this, we propose Dual Consensus Reinforcement Learning (DCRL), a novel self-supervised training method that generates more reliable learning signals through a two-stage consensus mechanism. The model first acts as an anchor, producing dominant responses; it then serves as an explorer, generating diverse auxiliary signals via a temporary unlearning process. The final training target is derived from the harmonic mean of these two signal sets. Notably, the entire process operates without external models or supervision. Across eight benchmarks spanning diverse domains, DCRL consistently improves Pass@1 over majority voting while yielding more stable training dynamics. These results establish DCRL as a scalable path toward stronger reasoning without labels.
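To make the two-stage consensus concrete, the following is a minimal sketch of how the anchor and explorer signals might be combined via a harmonic mean. All function names, the frequency-based scoring, and the epsilon term are illustrative assumptions for exposition; they are not the paper's published formulation.

```python
from collections import Counter

def harmonic_mean(a, b, eps=1e-8):
    # The harmonic mean is high only when BOTH signals are high;
    # if either consensus score is near zero, the reward collapses,
    # which discourages latching onto a spurious majority answer.
    return 2 * a * b / (a + b + eps)

def dcrl_style_reward(anchor_answers, explorer_answers, candidate):
    """Illustrative reward for one candidate final answer.

    anchor_answers:   answers sampled from the current policy (anchor stage).
    explorer_answers: answers sampled after temporary unlearning (explorer stage).
    Both signals are scored here as simple answer frequencies -- an assumption,
    not necessarily the scoring DCRL uses.
    """
    r_anchor = Counter(anchor_answers)[candidate] / len(anchor_answers)
    r_explore = Counter(explorer_answers)[candidate] / len(explorer_answers)
    return harmonic_mean(r_anchor, r_explore)

# A candidate that dominates the anchor samples (3/4) and also survives
# exploration (2/4) earns a reward near the harmonic mean of 0.75 and 0.5.
reward = dcrl_style_reward(["42", "42", "7", "42"],
                           ["42", "13", "42", "9"],
                           "42")
```

Note how an answer that is popular only under the anchor policy but vanishes after unlearning receives essentially zero reward, which is the intuition behind using a harmonic rather than arithmetic mean here.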