Reinforcement Learning-based Knowledge Distillation with LLM-as-a-Judge

📅 2026-04-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes a reinforcement learning-based unsupervised knowledge distillation framework that removes the reliance on ground-truth labels for reward signals. It introduces a large language model (LLM) as a single-token judge that assigns fine-grained scores to student model outputs, yielding label-free rewards. This enables effective knowledge transfer from teacher to student models over large-scale unlabeled data. Experiments on mathematical reasoning benchmarks show that, when combined with verifiable reward mechanisms, the framework significantly improves student model performance, confirming that LLM-based judges can supply effective and scalable training signals without annotated data.
๐Ÿ“ Abstract
Reinforcement Learning (RL) has been shown to substantially improve the reasoning capability of small and large language models (LLMs), but existing approaches typically rely on verifiable rewards, and hence on ground-truth labels. We propose an RL framework that uses rewards from an LLM acting as a judge evaluating model outputs over large amounts of unlabeled data, enabling label-free knowledge distillation and replacing the need for ground-truth supervision. Notably, the judge operates with a single-token output, making reward computation efficient. When combined with verifiable rewards, our approach yields substantial performance gains across math reasoning benchmarks. These results suggest that LLM-based evaluators can produce effective training signals for RL fine-tuning.
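The single-token judging scheme the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the score scale (1–5), the rescaling to [0, 1], and the judge call itself (mocked here with fixed logits) are all assumptions; a real setup would run one forward pass of the judge LLM and read the logits of the candidate score tokens.

```python
import math

# Assumed score vocabulary: the judge answers with exactly one of these tokens.
SCORE_TOKENS = ["1", "2", "3", "4", "5"]

def mock_judge_logits(question: str, student_answer: str) -> dict:
    # Stand-in for a single forward pass of the judge LLM (hypothetical).
    # Real code would prompt the judge with the question and the student's
    # answer and read the logits of the five score tokens at the next position.
    return {"1": -2.0, "2": -1.0, "3": 0.5, "4": 2.0, "5": 1.0}

def single_token_reward(question: str, student_answer: str) -> float:
    """Turn one judge token's logits into a scalar reward in [0, 1]."""
    logits = mock_judge_logits(question, student_answer)
    # Softmax restricted to the score tokens, then the expected score in
    # [1, 5], rescaled to [0, 1] so it can be mixed with 0/1 verifiable rewards.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    expected_score = sum(int(t) * e / z for t, e in exps.items())
    return (expected_score - 1) / 4

reward = single_token_reward("What is 2 + 2?", "4")
```

Because the judge emits a single token, the reward costs one forward pass per sample, which is what makes scoring large unlabeled corpora practical; the expected-score readout is one plausible way to get a fine-grained signal out of that token, but a hard argmax over the score tokens would also fit the description.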
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Knowledge Distillation
LLM-as-a-Judge
Label-free Learning
Reasoning Capability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Knowledge Distillation
LLM-as-a-Judge
Label-free Learning
Reward Modeling
Yiyang Shen
Department of Computer Science, University of Iowa
Lifu Tu
Weiran Wang
University of Iowa
Machine learning, speech processing