🤖 AI Summary
This work addresses the susceptibility of large language models used as evaluators (LLM-as-judge) to systematic biases, which undermines assessment reliability. The authors propose CyclicJudge, a novel method that introduces the first bias analysis framework based on variance decomposition, disentangling evaluation scores into distinct components attributable to the scenario, the generated response, the judge, and residual noise. By incorporating a round-robin assignment mechanism, CyclicJudge completely eliminates judge-induced bias without increasing the per-evaluation computational cost. Experimental results on MT-Bench demonstrate that the proposed approach effectively removes systematic bias, significantly enhancing both consistency and fairness in model evaluations.
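The component structure described above can be sketched as a crossed random-effects decomposition. The notation below is illustrative (not taken from the paper): \(Y_{sgj}\) denotes the score a judge \(j\) gives to generation \(g\) for scenario \(s\).

```latex
% Hypothetical notation: s = scenario, g = generation, j = judge,
% \varepsilon = residual noise; components assumed independent.
\operatorname{Var}(Y_{sgj})
  = \sigma^2_{\text{scenario}}
  + \sigma^2_{\text{generation}}
  + \sigma^2_{\text{judge}}
  + \sigma^2_{\varepsilon}
```

Under this reading, adding more scenarios or generations shrinks only the first two terms, which is why the judge component must be addressed by the assignment scheme rather than by more sampling.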
📝 Abstract
LLM-as-judge evaluation has become standard practice for open-ended model assessment; however, judges exhibit systematic biases that cannot be eliminated by increasing the number of scenarios or generations. These biases are often comparable in magnitude to the model differences that benchmarks are designed to detect, so single-judge evaluations can produce unreliable rankings. This work introduces a variance decomposition that partitions benchmark score variance into scenario, generation, judge, and residual components. Based on this analysis, CyclicJudge, a round-robin assignment of judges to evaluation items, is shown to be the optimal allocation strategy: it exactly cancels judge-induced bias while using only one judge call per item, matching the cost of single-judge evaluation. Empirical validation on MT-Bench supports all theoretical predictions.
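A minimal simulation can illustrate the round-robin idea. The judge names, bias values, and helper functions below are hypothetical, chosen only to show the mechanism: each judge adds a fixed offset to its scores, and cycling through all judges across items spreads those offsets evenly, so they average to a common constant at single-judge cost.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical per-judge systematic offsets (centered so they sum to 0).
JUDGE_BIAS = {"judge_a": 0.8, "judge_b": -0.5, "judge_c": -0.3}
JUDGES = list(JUDGE_BIAS)

def judge_score(true_quality: float, judge: str) -> float:
    # Observed score = true quality + judge's fixed bias + small noise.
    return true_quality + JUDGE_BIAS[judge] + random.gauss(0, 0.1)

def round_robin_eval(true_qualities: list[float]) -> float:
    # Item i is scored by judge i mod k: still one judge call per item,
    # but every judge appears equally often across the benchmark.
    return mean(
        judge_score(q, JUDGES[i % len(JUDGES)])
        for i, q in enumerate(true_qualities)
    )

def single_judge_eval(true_qualities: list[float], judge: str) -> float:
    # Baseline: one fixed judge scores everything, so its bias persists.
    return mean(judge_score(q, judge) for q in true_qualities)

items = [5.0] * 300  # a model with true quality 5.0 on every item
print(round_robin_eval(items))              # near 5.0: biases cancel
print(single_judge_eval(items, "judge_a"))  # near 5.8: bias survives averaging
```

Because the averaged judge offset is the same constant for every model evaluated under the same cycle, model rankings depend only on true quality differences, which is the fairness property the abstract attributes to the round-robin scheme.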