Preference Leakage: A Contamination Problem in LLM-as-a-judge

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a “preference leakage” problem in large language models (LLMs) acting as automatic evaluators: when the evaluator and generator share an origin—e.g., identical architecture, the same model family, or parameter inheritance—the evaluator systematically overestimates the generator’s output quality, inducing assessment bias. The authors formally define this previously unrecognized contamination mechanism and empirically validate its prevalence and stealthiness, showing it is more widespread and harder to detect than known evaluator biases. Using Llama, Qwen, Gemma, and other model baselines across standard benchmarks, they design three classes of correlation-controlled experiments, complemented by statistical analysis and ablation studies. Results demonstrate that preference leakage significantly inflates scores for related generators across diverse tasks, with an average bias of 12.7%. All code and datasets are publicly released.

📝 Abstract
Large Language Models (LLMs) as judges and LLM-based data synthesis have emerged as two fundamental LLM-driven data annotation methods in model development. While their combination significantly enhances the efficiency of model training and evaluation, little attention has been given to the potential contamination introduced by this new model development paradigm. In this work, we expose preference leakage, a contamination problem in LLM-as-a-judge caused by the relatedness between the synthetic data generators and LLM-based evaluators. To study this issue, we first define three common types of relatedness between the data generator LLM and the judge LLM: being the same model, having an inheritance relationship, and belonging to the same model family. Through extensive experiments, we empirically confirm the bias of judges towards their related student models caused by preference leakage across multiple LLM baselines and benchmarks. Further analysis suggests that preference leakage is a pervasive issue that is harder to detect compared to previously identified biases in LLM-as-a-judge scenarios. All of these findings imply that preference leakage is a widespread and challenging problem in the area of LLM-as-a-judge. We release all code and data at: https://github.com/David-Li0406/Preference-Leakage.
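To make the bias concrete, the abstract's core claim can be illustrated as a simple win-rate gap: compare how often a judge prefers a student model it is related to versus how often unrelated judges prefer that same student on the same comparisons. This is a minimal sketch of that idea, not the paper's released code; the function names, the 0/1 judgment encoding, and the toy data are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's metric): estimate preference leakage
# as the gap between the win rate a related judge assigns to its student
# model and the win rate unrelated judges assign to the same model.

def win_rate(judgments):
    """Fraction of pairwise comparisons the student model won (1 = win)."""
    return sum(judgments) / len(judgments) if judgments else 0.0

def leakage_score(related_judge_wins, unrelated_judge_wins):
    """Positive score => the related judge favors its own student model."""
    return win_rate(related_judge_wins) - win_rate(unrelated_judge_wins)

# Toy data over the same 8 comparisons: 1 = student won, 0 = student lost.
related = [1, 1, 1, 0, 1, 1, 0, 1]    # verdicts from a related judge
unrelated = [1, 0, 1, 0, 0, 1, 0, 1]  # verdicts from an unrelated judge

score = leakage_score(related, unrelated)  # 0.75 - 0.50 = 0.25
```

A score near zero would suggest no systematic favoritism; the paper's finding is that related judge–generator pairs show a consistently positive gap (an average bias of 12.7% in their experiments).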
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Bias Leakage
Preference Judgement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preference Leakage
Large Language Models
Bias in AI