AI Summary
This work addresses the challenge of aligning small language models via reinforcement learning in local deployment settings, where access to human preference annotations and high-quality reward models is typically limited. The authors propose a Positive-Unlabeled (PU) reinforcement learning distillation method that requires only a single generation from a teacher model and local sampling from the student model. By introducing an anchor-conditioned self-ranking mechanism, the approach constructs preference signals without any human annotation, enabling reward-model-free alignment training on-device. To the best of the authors' knowledge, this is the first method to achieve reinforcement-based alignment for small models without relying on preference data. Experimental results demonstrate that the proposed technique significantly improves alignment performance under low-resource conditions while maintaining training stability and effectiveness.
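The anchor-conditioned self-ranking mechanism described above can be illustrated with a minimal sketch. Everything below is hypothetical: the paper does not specify its ranking criterion, so `anchor_score` uses a toy token-overlap proxy purely to show the data flow from one teacher anchor plus several student samples to (chosen, rejected) preference pairs.

```python
# Hypothetical sketch of anchor-conditioned self-ranking.
# anchor_score is an illustrative stand-in (token overlap with the teacher's
# anchor response), NOT the paper's actual scoring criterion.

def anchor_score(candidate: str, anchor: str) -> float:
    """Toy proxy: fraction of anchor tokens that also appear in the candidate."""
    anchor_tokens = set(anchor.split())
    if not anchor_tokens:
        return 0.0
    return len(anchor_tokens & set(candidate.split())) / len(anchor_tokens)

def induce_preferences(anchor: str, candidates: list[str]) -> list[tuple[str, str]]:
    """Rank locally sampled student candidates against the anchor and emit
    (chosen, rejected) pairs usable for DPO-style preference optimization."""
    ranked = sorted(candidates, key=lambda c: anchor_score(c, anchor), reverse=True)
    # Each higher-ranked candidate is preferred over each lower-ranked one.
    return [(ranked[i], ranked[j])
            for i in range(len(ranked))
            for j in range(i + 1, len(ranked))]

# Example: one teacher anchor, three locally sampled student candidates.
anchor = "the capital of France is Paris"
candidates = [
    "Paris is the capital of France",
    "France has many cities",
    "I do not know",
]
pairs = induce_preferences(anchor, candidates)
print(pairs[0])  # → ('Paris is the capital of France', 'France has many cities')
```

Note that no reward model or human label appears anywhere in this loop: the only external signal is the single teacher generation used as the anchor.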
Abstract
Due to constraints on privacy, cost, and latency, on-premise deployment of small models is increasingly common. However, most practical pipelines stop at supervised fine-tuning (SFT) and fail to reach the reinforcement learning (RL) alignment stage. The main reason is that RL alignment typically requires either expensive human preference annotation or heavy reliance on high-quality reward models with large-scale API usage and ongoing engineering maintenance, both of which are ill-suited to on-premise settings. To bridge this gap, we propose a positive-unlabeled (PU) RL distillation method for on-premise small-model deployment. Without human-labeled preferences or a reward model, our method distills the teacher's preference-optimization capability from black-box generations into a locally trainable student. For each prompt, we query the teacher once to obtain an anchor response, locally sample multiple student candidates, and perform anchor-conditioned self-ranking to induce pairwise or listwise preferences, enabling a fully local training loop via direct preference optimization or group relative policy optimization. Theoretical analysis shows that the preference signal induced by our method is order-consistent and concentrates on near-optimal candidates, supporting its stability for preference optimization. Experiments demonstrate that our method achieves consistently strong performance in low-cost settings.
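Once the induced preference pairs exist, the fully local training loop can plug them into the standard direct preference optimization (DPO) objective. The sketch below computes the standard published DPO loss for a single pair; the specific log-probability values and the `beta` setting are illustrative placeholders, not numbers from the paper.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one (chosen, rejected) pair:
    -log sigmoid(beta * [(logp_c - ref_c) - (logp_r - ref_r)]),
    where logp_* are student log-probs and ref_* are frozen-reference log-probs."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Illustrative values: the student already prefers the chosen response
# relative to the reference, so the loss falls below log(2) (the zero-margin value).
loss = dpo_loss(-1.0, -3.0, -2.0, -2.5, beta=0.5)
print(loss)
```

Because both the chosen and rejected responses come from the student's own samples ranked against the teacher anchor, this update requires no reward-model inference and only one teacher call per prompt.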