🤖 AI Summary
To address the high cost of human reward annotation and the late-stage collapse common in unsupervised methods for training Large Reasoning Models (LRMs) via reinforcement learning, this paper proposes TraPO, a semi-supervised variant of reinforcement learning with verifiable rewards (RLVR). Its core idea is a learning-trajectory similarity matching mechanism: a small set of labeled samples guides policy optimization on unlabeled ones, with only those unlabeled samples whose learning trajectories resemble the labeled ones admitted into training. This stabilizes the internal-consistency rewards (e.g., entropy and majority voting) that otherwise drive unsupervised training into collapse. With only 1K labeled and 3K unlabeled samples, TraPO reaches 42.6% average accuracy across six mathematical reasoning and three out-of-distribution (OOD) benchmarks, substantially outperforming the best unsupervised baseline trained on 45K unlabeled samples (38.3%). At a larger scale (4K labeled + 12K unlabeled), TraPO surpasses a fully supervised model trained on all 45K labeled samples on every benchmark, while using roughly 10% of the labeled data.
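To make the failure mode concrete, the sketch below shows what a purely internal-consistency reward looks like: with no ground truth, the majority answer across sampled rollouts acts as a pseudo-label, and the entropy of the answer distribution measures self-consistency. This is a minimal illustration of the unsupervised signal described above, not the paper's implementation; all function and variable names are hypothetical.

```python
import math
from collections import Counter

def majority_vote_reward(answers: list[str]) -> dict[str, float]:
    """Reward 1.0 for answers that match the majority answer, else 0.0.

    With no verifier available, the most frequent answer serves as a
    pseudo-label; rollouts that agree with it are reinforced.
    """
    majority, _ = Counter(answers).most_common(1)[0]
    return {a: 1.0 if a == majority else 0.0 for a in set(answers)}

def answer_entropy(answers: list[str]) -> float:
    """Shannon entropy (in nats) of the empirical answer distribution.

    Low entropy means the model is highly self-consistent on this
    question; high entropy flags unreliable pseudo-labels.
    """
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in Counter(answers).values())

# Example: 8 rollouts for one unlabeled question.
rollouts = ["42", "42", "42", "17", "42", "17", "42", "42"]
print(majority_vote_reward(rollouts))       # {'42': 1.0, '17': 0.0} (key order may vary)
print(round(answer_entropy(rollouts), 3))   # 0.562
```

The weakness the paper targets is visible here: if the majority answer is wrong, this reward reinforces the error, and repeated reinforcement drives the late-stage collapse described above.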
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has proven effective in training large reasoning models (LRMs) by leveraging answer-verifiable signals to guide policy optimization, but it suffers from high annotation costs. To alleviate this problem, recent work has explored unsupervised RLVR methods that derive rewards solely from the model's internal consistency, such as through entropy and majority voting. While promising, these methods often suffer from model collapse in the later stages of training, which may arise from the reinforcement of incorrect reasoning patterns in the absence of external supervision. In this work, we investigate a novel semi-supervised RLVR paradigm that utilizes a small labeled set to guide RLVR training on unlabeled samples. Our key insight is that supervised rewards are essential for stabilizing consistency-based training on unlabeled samples, ensuring that only reasoning patterns verified on labeled instances are incorporated into RL training. Technically, we propose an effective policy optimization algorithm, TraPO, that identifies reliable unlabeled samples by matching their learning trajectory similarity to labeled ones. Building on this, TraPO achieves remarkable data efficiency and strong generalization on six widely used mathematical reasoning benchmarks (AIME24/25, AMC, MATH-500, Minerva, and Olympiad) and three out-of-distribution tasks (ARC-c, GPQA-diamond, and MMLU-pro). With only 1K labeled and 3K unlabeled samples, TraPO reaches 42.6% average accuracy, surpassing the best unsupervised method trained on 45K unlabeled samples (38.3%). Notably, when using 4K labeled and 12K unlabeled samples, TraPO even outperforms the fully supervised model trained on the full 45K labeled samples on all benchmarks, while using only 10% of the labeled data. The code is available at https://github.com/ShenzhiYang2000/TRAPO.
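The abstract does not spell out how trajectory similarity is computed. The sketch below illustrates one plausible reading, in which each sample's "learning trajectory" is a vector of per-step training statistics and an unlabeled sample is kept only if it is close (cosine similarity, with a hypothetical 0.9 threshold) to some labeled sample's trajectory. Everything beyond the matching idea itself, including the features, the similarity measure, and the threshold, is an assumption for illustration.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two trajectory vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def select_reliable(unlabeled_traj: np.ndarray,
                    labeled_traj: np.ndarray,
                    threshold: float = 0.9) -> np.ndarray:
    """Keep unlabeled samples whose learning trajectory resembles a labeled one.

    unlabeled_traj: (U, T) array, one T-step trajectory per unlabeled sample
                    (hypothetically, per-step statistics such as reward or
                    self-consistency recorded during training).
    labeled_traj:   (L, T) array for the labeled (verified) samples.
    Returns the indices of unlabeled samples deemed reliable.
    """
    keep = []
    for i, u in enumerate(unlabeled_traj):
        # A sample is reliable if its best match among labeled
        # trajectories exceeds the similarity threshold.
        best = max(cosine(u, l) for l in labeled_traj)
        if best >= threshold:
            keep.append(i)
    return np.array(keep, dtype=int)

# Toy example: 3 unlabeled and 2 labeled trajectories over 4 steps.
U = np.array([[0.1, 0.3, 0.6, 0.8],   # improves like the labeled ones
              [0.9, 0.5, 0.2, 0.1],   # degrades -> likely unreliable
              [0.2, 0.4, 0.5, 0.9]])
L = np.array([[0.1, 0.4, 0.6, 0.9],
              [0.2, 0.3, 0.7, 0.8]])
print(select_reliable(U, L))  # [0 2]
```

Under this reading, only samples 0 and 2 would feed the consistency-based reward, which matches the paper's stated insight that unlabeled training should admit only reasoning patterns behaving like those verified on labeled instances.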