Learn More with Less: Uncertainty Consistency Guided Query Selection for RLVR

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high annotation cost of existing Reinforcement Learning with Verifiable Reward (RLVR) algorithms, which typically require large labeled query sets. To mitigate this, the authors integrate active learning into the RLVR framework and propose a novel sample selection strategy based on the alignment between subjective and objective uncertainty. Specifically, they measure this alignment offline using the Point-Biserial Correlation Coefficient (PBC) and introduce an online uncertainty consistency metric that combines normalized advantage with subjective uncertainty to guide data sampling dynamically. Theoretical analysis shows that the online metric is strictly negatively correlated with the offline PBC, which explains why conventional active learning approaches fail in RLVR and how the proposed metric avoids that failure. Experiments demonstrate that the method matches full-data training while using only 30% of the labeled samples, significantly outperforming random sampling and classic active learning baselines and substantially reducing annotation cost.
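The offline alignment measure named above, the Point-Biserial Correlation Coefficient, is a standard statistic: the Pearson correlation between a continuous score and a 0/1 outcome. A minimal sketch of how it could be computed here, assuming the continuous score is a model's subjective uncertainty per query and the binary outcome is verifier-checked correctness (the variable names are illustrative, not from the paper):

```python
import numpy as np

def point_biserial(uncertainty, correct):
    """Point-biserial correlation between a continuous score
    (e.g. a model's subjective uncertainty) and a binary outcome
    (e.g. whether the verifier marked the answer correct).
    Mathematically equal to the Pearson correlation with the 0/1 variable."""
    x = np.asarray(uncertainty, dtype=float)
    y = np.asarray(correct, dtype=float)
    p = y.mean()                 # fraction of correct answers
    q = 1.0 - p
    m1 = x[y == 1].mean()        # mean uncertainty on correct answers
    m0 = x[y == 0].mean()        # mean uncertainty on incorrect answers
    s = x.std()                  # population standard deviation of the score
    return (m1 - m0) / s * np.sqrt(p * q)
```

A well-calibrated model should show high uncertainty mainly on questions it gets wrong, so `m1 < m0` and the PBC comes out negative; a PBC near zero would indicate the subjective uncertainty carries little information about objective correctness.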

📝 Abstract
Large Language Models (LLMs) have recently improved mathematical reasoning through Reinforcement Learning with Verifiable Reward (RLVR). However, existing RLVR algorithms require large query budgets, making annotation costly. We investigate whether fewer but more informative queries can yield similar or superior performance, introducing active learning (AL) into RLVR. We identify that classic AL sampling strategies fail to outperform random selection in this setting because they select solely by subjective uncertainty and ignore objective uncertainty. This work proposes an uncertainty consistency metric to evaluate how well subjective uncertainty aligns with objective uncertainty. In the offline setting, this alignment is measured using the Point-Biserial Correlation Coefficient (PBC). For online training, because of limited sampling and dynamically shifting output distributions, PBC estimation is difficult. Therefore, we introduce a new online variant, computed from normalized advantage and subjective uncertainty. Theoretically, we prove that the online variant is strictly negatively correlated with offline PBC and supports better sample selection. Experiments show our method consistently outperforms random and classic AL baselines, achieving full-dataset performance while training on only 30% of the data, effectively reducing the cost of RLVR for reasoning tasks.
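The abstract states only that the online variant is "computed from normalized advantage and subjective uncertainty," without giving the formula. As a purely illustrative sketch of how such a per-query score could be assembled online, assuming GRPO-style group-normalized advantages over G sampled responses and a per-response subjective uncertainty (the mean-product combination below is a guess for illustration, not the authors' actual metric):

```python
import numpy as np

def online_consistency(rewards, uncertainties, eps=1e-8):
    """Hypothetical online uncertainty-consistency score for one query.
    rewards: verifiable 0/1 rewards for G sampled responses.
    uncertainties: the model's subjective uncertainty for each response.
    NOTE: the paper combines normalized advantage with subjective
    uncertainty; this particular combination (mean of their products)
    is an illustrative assumption, not the published formula."""
    r = np.asarray(rewards, dtype=float)
    u = np.asarray(uncertainties, dtype=float)
    adv = (r - r.mean()) / (r.std() + eps)  # group-normalized advantage
    return float((adv * u).mean())
```

Under this sketch, a strongly negative score means high subjective uncertainty coincides with low-advantage (incorrect) responses, i.e. the two uncertainties are consistent, which matches the abstract's claim that the online variant is negatively correlated with the offline PBC.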
Problem

Research questions and friction points this paper is trying to address.

RLVR
query selection
active learning
uncertainty consistency
annotation cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty Consistency
Active Learning
RLVR
Subjective vs Objective Uncertainty
Query Selection
Hao Yi
Renmin University of China, Gaoling School of Artificial Intelligence, Beijing; Amap, Alibaba Group
Yulan Hu
Amap, Alibaba Group
Xin Li
Alibaba Group
Sheng Ouyang
Renmin University of China, Gaoling School of Artificial Intelligence, Beijing; Amap, Alibaba Group
Lizhong Ding
School of Computer Science & Technology, Beijing Institute of Technology
Yong Liu
Renmin University of China, Gaoling School of Artificial Intelligence, Beijing