🤖 AI Summary
To address confirmation bias and sparse reward issues arising from majority-voting pseudo-labeling in test-time reinforcement learning, this paper proposes SCOPE—a subgroup-specific, step-level confidence-weighted pseudo-labeling framework. Methodologically, SCOPE replaces global majority voting with step-level confidence modeling; introduces a dynamic clustering-based subgroup partitioning mechanism to balance local consensus, inference quality, and exploration diversity; and integrates model confidence estimation, repeated local inference via resampling, and confidence-weighted pseudo-label generation. Evaluated on the AIME 2025 and AMC benchmarks, SCOPE achieves relative performance improvements of 13.1% and 8.1%, respectively, significantly outperforming existing test-time RL approaches. Its core contributions lie in (1) the first formulation of step-level confidence modeling for pseudo-labeling, (2) a dynamic, clustering-driven subgroup decomposition strategy that preserves both reliability and diversity during test-time adaptation, and (3) a unified confidence-aware pseudo-labeling pipeline grounded in robust local inference.
📝 Abstract
Test-time reinforcement learning mitigates the reliance on annotated data by using majority-voting results as pseudo-labels, emerging as a complementary direction to reinforcement learning with verifiable rewards (RLVR) for improving the reasoning ability of large language models (LLMs). However, this voting strategy often induces confirmation bias and suffers from sparse rewards, limiting overall performance. In this work, we propose subgroup-specific step-wise confidence-weighted pseudo-label estimation (SCOPE), a framework integrating model confidence and dynamic subgroup partitioning to address these issues. Specifically, SCOPE integrates the proposed step-wise confidence into pseudo-label deduction, prioritizing high-quality reasoning paths over simple frequency counts. Furthermore, it dynamically partitions the candidate output pool into independent subgroups by balancing reasoning quality against exploration diversity. By deriving local consensus via repeated sampling for each subgroup, SCOPE provides diverse supervision targets that encourage broader exploration. We conduct experiments across various models and benchmarks; the results show that SCOPE consistently outperforms recent baselines. Notably, SCOPE achieves relative improvements of 13.1% on the challenging AIME 2025 and 8.1% on AMC. The code is released at https://github.com/szu-tera/SCOPE.