🤖 AI Summary
This work addresses the critical yet previously underexplored impact of anchor model selection in the LLM-as-a-judge evaluation paradigm. Through large-scale experiments on Arena-Hard-v2.0 with 22 candidate anchors, combined with correlation, power, and statistical analyses, the study demonstrates that models with extreme performance levels are unsuitable as anchors, whereas those with moderate ("mediocre") capabilities yield more stable inter-model rankings that align closely with human judgments. The effect size of anchor choice is shown to be comparable to that of the judge model itself. The authors propose an anchor selection guideline grounded in the principle of mediocrity, reveal that standard benchmark sizes are insufficient for reliable pairwise evaluation, and provide practical recommendations for minimum sample sizes and informative anchor selection, substantially enhancing both the efficiency and reliability of LLM-based evaluations.
📝 Abstract
The "LLM-as-a-judge" paradigm has become a standard method for evaluating open-ended generation. To address the quadratic cost of exhaustive pairwise comparisons, popular benchmarks like Arena-Hard and AlpacaEval compare all models against a single anchor. However, despite its widespread use, the impact of anchor selection on the reliability of the results remains largely unexplored. In this work, we systematically investigate the effect of anchor selection by evaluating 22 different anchors on the Arena-Hard-v2.0 dataset. We find that the choice of anchor is critical: a poor anchor can dramatically reduce correlation with human rankings. We identify that common anchor choices (best-performing and worst-performing models) make poor anchors. Because these extreme anchors are consistently better or worse than all other models, they are seldom indicative of the relative ranking of the models. We further quantify the effect size of anchor selection, showing it is comparable to the selection of a judge model. We conclude with actionable recommendations. First, we conduct a power analysis and compute sufficient benchmark sizes for anchor-based evaluation, finding that standard benchmark sizes are insufficient for pairwise evaluation and fail to reliably distinguish between competitive models. Second, we provide guidelines for selecting informative anchors to ensure reliable and efficient evaluation practices.
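To make the power-analysis claim concrete, the sketch below estimates the minimum number of benchmark prompts needed to distinguish two models by their win rates against a shared anchor. This is not the paper's exact procedure; it is a minimal illustration using the standard two-proportion z-test sample-size formula (normal approximation), with the win rates in the usage example chosen purely as hypothetical values.

```python
import math
from statistics import NormalDist

def min_benchmark_size(p1: float, p2: float,
                       alpha: float = 0.05, power: float = 0.8) -> int:
    """Minimum prompts needed to distinguish two models whose win rates
    against a shared anchor are p1 and p2, using the two-proportion
    z-test sample-size formula (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_power = nd.inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2                 # pooled win rate under H0
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Competitive models with near-identical win rates vs the anchor
# (hypothetical 0.52 vs 0.48) require thousands of prompts, far more
# than typical benchmark sizes, while a large gap needs far fewer:
print(min_benchmark_size(0.52, 0.48))  # -> 2452
print(min_benchmark_size(0.60, 0.40))  # -> 97
```

This also illustrates why anchor choice matters: an extreme anchor pushes every model's win rate toward 0 or 1, shrinking the detectable gap between competitive models and driving the required sample size even higher.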