🤖 AI Summary
This work proposes a reinforcement learning-based unsupervised knowledge distillation framework that eliminates reliance on ground-truth labels for reward signals. By introducing a large language model (LLM) as a single-token judge, the method generates fine-grained scores for student model outputs, thereby constructing label-free rewards. This approach enables effective knowledge transfer from teacher to student models using large-scale unlabeled data. Experimental results on mathematical reasoning benchmarks demonstrate that, when combined with verifiable reward mechanisms, the proposed framework significantly enhances student model performance, confirming that LLM-based judges can supply effective and scalable training signals without annotated data.
📝 Abstract
Reinforcement Learning (RL) has been shown to substantially improve the reasoning capability of small and large language models (LLMs), but existing approaches typically rely on verifiable rewards, and hence on ground-truth labels. We propose an RL framework that uses rewards from an LLM acting as a judge, which evaluates model outputs over large amounts of unlabeled data, enabling label-free knowledge distillation and replacing the need for ground-truth supervision. Notably, the judge operates with a single-token output, making reward computation efficient. When combined with verifiable rewards, our approach yields substantial performance gains across math reasoning benchmarks. These results suggest that LLM-based evaluators can produce effective training signals for RL fine-tuning.
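To make the single-token judge concrete, here is a minimal sketch of one common way such a reward could be computed. It assumes the judge is prompted to grade a student answer with exactly one token from a fixed score vocabulary (here "1" through "5"), and that we read the judge's logits for those tokens and take the probability-weighted expected score, rescaled to [0, 1], as the scalar RL reward. The score vocabulary and normalization are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Hypothetical score vocabulary the judge is constrained to answer with.
SCORE_TOKENS = ["1", "2", "3", "4", "5"]

def single_token_reward(score_logits):
    """Map the judge's logits over the score tokens to a reward in [0, 1].

    score_logits: one logit per entry of SCORE_TOKENS, taken from the
    judge model's next-token distribution at its single output position.
    """
    # Softmax restricted to the score tokens (numerically stable form).
    m = max(score_logits)
    exps = [math.exp(l - m) for l in score_logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Expected score under the judge's distribution, rescaled to [0, 1].
    expected = sum(p * (i + 1) for i, p in enumerate(probs))
    return (expected - 1.0) / (len(SCORE_TOKENS) - 1)

# Example: judge strongly prefers score "5", so the reward is close to 1.
print(single_token_reward([-2.0, -1.0, 0.0, 1.0, 4.0]))
```

Because only one forward pass and one output position are needed per student sample, this kind of reward is cheap enough to run over large unlabeled corpora, which is the efficiency point the abstract highlights.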