Dual Consensus: Escaping from Spurious Majority in Unsupervised RLVR via Two-Stage Vote Mechanism

πŸ“… 2026-03-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the vulnerability of existing unsupervised RLVR methods to pseudo-label bias, which often drives models to converge on spurious majority answers and limits further gains in reasoning capability. To overcome this limitation, the authors propose Dual Consensus Reinforcement Learning (DCRL), a novel two-stage consensus mechanism that operates without external supervision. DCRL first anchors training on the model's own dominant responses and then employs a transient anti-learning strategy to generate diverse exploration signals; a robust training objective is formed from the harmonic mean of these two components. This design avoids the trap of false consensus and markedly improves the reasoning performance of large language models in purely unsupervised settings. Experiments show that DCRL consistently outperforms conventional majority voting across eight benchmark tasks, with improved training stability and scalability.

πŸ“ Abstract
Current label-free RLVR approaches for large language models (LLMs), such as TTRL and Self-reward, have proven effective at improving LLM performance on complex reasoning tasks. However, these methods rely heavily on accurate pseudo-label estimation and tend to converge on spurious yet popular answers, becoming trapped in a dominant mode that limits further improvement. To address this, we propose Dual Consensus Reinforcement Learning (DCRL), a novel self-supervised training method that generates more reliable learning signals through a two-stage consensus mechanism. The model first acts as an anchor, producing dominant responses; it then serves as an explorer, generating diverse auxiliary signals via a temporary unlearning process. The final training target is derived from the harmonic mean of these two signal sets. Notably, the process operates entirely without external models or supervision. Across eight benchmarks spanning diverse domains, DCRL consistently improves Pass@1 over majority voting while yielding more stable training dynamics. These results show that DCRL establishes a scalable path toward stronger reasoning without labels.
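The abstract describes combining two vote-based signals, an anchor vote from the current model and an explorer vote from a temporarily unlearned model, via their harmonic mean. The paper's exact reward formulation is not given here, so the following is only a minimal sketch of the general idea, assuming each stage's signal is the empirical frequency of a response's answer among that stage's samples (the function name and weighting are assumptions, not the authors' implementation):

```python
from collections import Counter

def harmonic_consensus_reward(answer, anchor_answers, explorer_answers):
    """Sketch of a two-stage consensus reward (hypothetical formulation).

    anchor_answers:   answers sampled from the current model (anchor stage).
    explorer_answers: answers sampled after a temporary unlearning step
                      (explorer stage).
    Returns the harmonic mean of the answer's vote share under each stage.
    """
    # Vote share of this answer within each stage's samples.
    r_anchor = Counter(anchor_answers)[answer] / max(len(anchor_answers), 1)
    r_explorer = Counter(explorer_answers)[answer] / max(len(explorer_answers), 1)
    # Harmonic mean is zero whenever either stage gives no support,
    # so an answer must win votes from BOTH stages to be rewarded.
    if r_anchor == 0.0 or r_explorer == 0.0:
        return 0.0
    return 2.0 * r_anchor * r_explorer / (r_anchor + r_explorer)
```

The harmonic mean is the natural choice here because it is dominated by the smaller of the two signals: a spuriously popular answer that the anchor vote favors but the explorer vote rejects receives a reward near zero, which is the failure mode of plain majority voting that the paper targets.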
Problem

Research questions and friction points this paper is trying to address.

unsupervised RLVR
spurious majority
pseudo-label estimation
reasoning tasks
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual Consensus
Unsupervised RLVR
Two-Stage Vote Mechanism
Self-supervised Training
Harmonic Mean Consensus
Kaixuan Du
School of Automation Science and Electrical Engineering, Beihang University
Meng Cao
Postdoc, Carnegie Mellon University
Hang Zhang
Assistant Professor of Computer Science, Indiana University Bloomington
Yukun Wang
China University of Petroleum (Beijing)
Xiangzhou Huang
School of Automation Science and Electrical Engineering, Beihang University
Ni Li
School of Automation Science and Electrical Engineering, Beihang University