The Comparative Trap: Pairwise Comparisons Amplify Biased Preferences of LLM Evaluators

📅 2024-06-18
📈 Citations: 1
Influential: 1
📄 PDF
🤖 AI Summary
Large language models (LLMs) used as evaluators are susceptible to surface-level artifacts such as verbosity and authoritative tone. This work shows empirically that pairwise comparison amplifies these biased preferences: when two outputs are judged side by side, superficial attributes are easily prioritized, whereas pointwise evaluation, which judges each output in isolation, is less affected. To retain the discriminative power of pairwise comparison while reducing its bias, the authors propose PRePair, an evaluation method that elicits independent pointwise reasoning about each output before making the pairwise decision. PRePair mitigates biased preferences on the adversarial benchmark LLMBar while outperforming pointwise evaluation on the standard MT-Bench benchmark.

📝 Abstract
As large language models (LLMs) are increasingly used as evaluators for natural language generation tasks, ensuring unbiased assessments is essential. However, LLM evaluators often display biased preferences, such as favoring verbosity and authoritative tones. Our empirical analysis reveals that these biases are exacerbated in pairwise evaluation, where LLMs directly compare two outputs and easily prioritize superficial attributes. In contrast, pointwise evaluation, which assesses outputs independently, is less susceptible to such bias because each output is judged in isolation. To address the limitations of the pairwise evaluation, we introduce a novel evaluation method, PRePair, which integrates pointwise reasoning within a pairwise framework. PRePair effectively alleviates biased preference, improving performance on the adversarial benchmark (LLMBar) while outperforming pointwise evaluation on the standard benchmark (MT-Bench).
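The two-stage idea described in the abstract can be sketched in Python: first elicit a pointwise assessment of each output in isolation, then make the pairwise verdict conditioned on those assessments. This is a minimal illustration of the structure only; the prompt wording and the `llm` stub are assumptions for the demo, not the paper's actual prompts or a real model call.

```python
def llm(prompt: str) -> str:
    """Stand-in for an LLM call; returns canned responses for this sketch."""
    if "Assess the following response on its own" in prompt:
        # Pointwise step: reason about a single output in isolation,
        # without seeing the competing output.
        return f"Reasoning about: {prompt.splitlines()[-1]}"
    # Pairwise step: pick 'A' or 'B' given both independent assessments.
    return "A"

def prepair_judge(instruction: str, output_a: str, output_b: str) -> str:
    """Two-stage evaluation: pointwise reasoning first, pairwise verdict second."""
    pointwise_template = (
        "Assess the following response on its own merits.\n"
        "Instruction: {inst}\nResponse:\n{out}"
    )
    # Each output is assessed independently, so superficial contrasts
    # (e.g., one answer being longer) cannot dominate the reasoning step.
    reasoning_a = llm(pointwise_template.format(inst=instruction, out=output_a))
    reasoning_b = llm(pointwise_template.format(inst=instruction, out=output_b))

    pairwise_prompt = (
        "Given the independent assessments below, decide which response "
        "better follows the instruction. Answer 'A' or 'B'.\n"
        f"Assessment of A: {reasoning_a}\n"
        f"Assessment of B: {reasoning_b}"
    )
    return llm(pairwise_prompt)

verdict = prepair_judge("Summarize the article.", "Short, correct.", "Long, padded.")
print(verdict)  # the stub always answers 'A' in this demo
```

The design point is that the comparison step only sees the isolated assessments alongside the outputs, so the final pairwise choice is grounded in reasoning produced before the two candidates were ever contrasted.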
Problem

Research questions and friction points this paper is trying to address.

LLM evaluators exhibit biased preferences, e.g., favoring verbose or authoritative outputs
Pairwise comparison amplifies these biases by foregrounding superficial attributes
How to keep the discriminative power of pairwise evaluation while reducing its bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

PRePair integrates pointwise reasoning into a pairwise evaluation framework
Alleviates biased preferences of LLM evaluators on the adversarial LLMBar benchmark
Outperforms pointwise evaluation on the standard MT-Bench benchmark