Toward Robust LLM-Based Judges: Taxonomic Bias Evaluation and Debiasing Optimization

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language model (LLM)-based automatic evaluation systems commonly exhibit diverse and pronounced judgment biases that severely undermine their reliability, yet the field lacks systematic assessment frameworks and effective debiasing methods. To address this gap, this work proposes JudgeBiasBench, the first bias evaluation benchmark to cover both generative and discriminative evaluators. It introduces a four-dimensional taxonomy and a controllable bias injection mechanism to systematically quantify twelve representative bias types. The authors further design a bias-aware training strategy that combines reinforcement learning (for generative judges) with contrastive learning (for discriminative judges), effectively mitigating multiple forms of bias while largely preserving the original evaluation performance.

📝 Abstract
Large language model (LLM)-based judges are widely adopted for automated evaluation and reward modeling, yet their judgments are often distorted by systematic biases. Accurately measuring these biases is essential for ensuring the reliability of LLM-based judges. However, existing studies typically investigate a limited set of biases under a single judge formulation, either generative or discriminative, and therefore lack a comprehensive evaluation. To bridge this gap, we propose JudgeBiasBench, a benchmark for systematically quantifying biases in LLM-based judges. JudgeBiasBench defines a taxonomy of judgment biases across 4 dimensions and constructs bias-augmented evaluation instances through a controlled bias injection pipeline, covering 12 representative bias types. We conduct extensive experiments across both generative and discriminative judges, revealing that current judges exhibit significant and diverse bias patterns that often compromise the reliability of automated evaluation. To mitigate judgment bias, we propose bias-aware training, which explicitly incorporates bias-related attributes into the training process, encouraging judges to disentangle task-relevant quality from bias-correlated cues. By adopting reinforcement learning for generative judges and contrastive learning for discriminative judges, our method effectively reduces judgment biases while largely preserving general evaluation capability.
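The abstract does not spell out how the controlled bias injection pipeline works, but one common bias it names by category, position bias, can be probed by a simple controlled transformation of a pairwise judging instance. The sketch below is purely illustrative (the field names `prompt`, `response_a`, `response_b`, and `label` are assumptions, not the paper's actual schema): it rewrites an instance so the ground-truth winner always sits in a fixed slot, letting an evaluator check whether a judge's verdict flips with position.

```python
def inject_position_bias(instance):
    """Hypothetical illustration of controlled bias injection.

    Swaps the two candidate responses in a pairwise instance so the
    ground-truth winner always appears in slot B. Comparing a judge's
    accuracy on the original vs. the swapped set isolates position bias:
    the answer quality is unchanged, only the presentation order differs.
    """
    biased = dict(instance)  # shallow copy; fields here are plain strings
    if biased["label"] == "A":
        biased["response_a"], biased["response_b"] = (
            biased["response_b"], biased["response_a"])
        biased["label"] = "B"
    return biased

instance = {"prompt": "Explain recursion.",
            "response_a": "A function that calls itself on a smaller input...",
            "response_b": "It is just a loop.",
            "label": "A"}
print(inject_position_bias(instance)["label"])  # prints: B
```

A full pipeline in the paper's spirit would apply one such transformation per bias type (verbosity padding, authority cues, and so on) while holding answer quality fixed, so any change in the judge's verdict is attributable to the injected cue.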
Problem

Research questions and friction points this paper is trying to address.

judgment bias, LLM-based judges, bias evaluation, automated evaluation, taxonomic bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

judgment bias, bias taxonomy, bias-aware training, LLM-based judges, debiasing optimization
👥 Authors
Hongli Zhou (Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China)
Hui Huang (Harbin Institute of Technology)
Rui Zhang (Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China)
Kehai Chen (Harbin Institute of Technology, Shenzhen)
Bing Xu (Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China)
Conghui Zhu (Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China)
Tiejun Zhao (Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China)
Muyun Yang (Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China)