🤖 AI Summary
Large language models (LLMs) employed as automated evaluators exhibit systematic biases in complex tasks, yet these biases remain poorly characterized and quantified. Method: We introduce ComplexEval, the first benchmark explicitly designed for high-complexity evaluation scenarios, comprising 12 basic and 3 advanced scenarios and incorporating multi-dimensional scoring criteria, unstructured reference answers, and fine-grained evaluation protocols. Contribution/Results: We systematically identify and quantify six previously unexplored evaluation biases, including the “curse of knowledge”, a paradoxical phenomenon in which greater model capability exacerbates judgment bias. Empirical analysis across mainstream LLMs reveals statistically significant bias in every model evaluated, with bias magnitude increasing monotonically with task complexity. Our work provides critical empirical data and theoretical insights toward reliable, verifiable automated evaluation frameworks.
📝 Abstract
As large language models (LLMs) grow more capable, they face increasingly diverse and complex tasks, making reliable evaluation challenging. The paradigm of LLMs as judges has emerged as a scalable solution, yet prior work focuses primarily on simple settings, and the reliability of LLM judges in complex tasks, where multi-faceted rubrics, unstructured reference answers, and nuanced criteria are critical, remains understudied. In this paper, we construct ComplexEval, a challenging benchmark designed to systematically expose and quantify Auxiliary Information Induced Biases. We systematically investigate and validate six previously unexplored biases across 12 basic and 3 advanced scenarios. Key findings reveal that (1) all evaluated models exhibit significant susceptibility to these biases, with bias magnitude scaling with task complexity, and (2) notably, Large Reasoning Models (LRMs) show a paradoxical vulnerability. Our in-depth analysis offers crucial insights for improving the accuracy and verifiability of evaluation signals, paving the way for more general and robust evaluation models.
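To make the notion of an auxiliary-information-induced bias concrete, the sketch below measures bias magnitude as the mean score shift a judge exhibits when extra context (e.g., a claimed source or reference answer) is injected into an otherwise identical evaluation prompt. This is a minimal illustration only: the `judge_score` callable, the item fields, and the toy judge are hypothetical stand-ins, not ComplexEval's actual tasks, rubrics, or protocol.

```python
# Minimal sketch: quantify an auxiliary-information-induced bias as the mean
# signed score shift caused by adding auxiliary context to the judge prompt.
# `judge_score` is a hypothetical stand-in for a call to an LLM judge.

from statistics import mean
from typing import Callable, Sequence


def bias_magnitude(
    items: Sequence[dict],
    judge_score: Callable[[str, str], float],
) -> float:
    """Average score shift between augmented and baseline judge prompts.

    Each item supplies the response to be judged, a baseline prompt, and the
    same prompt augmented with auxiliary information.
    """
    shifts = []
    for item in items:
        baseline = judge_score(item["prompt"], item["response"])
        augmented = judge_score(item["prompt_with_aux"], item["response"])
        shifts.append(augmented - baseline)
    return mean(shifts)


if __name__ == "__main__":
    # Toy judge that is swayed by an "expert-written" claim in the prompt.
    def toy_judge(prompt: str, response: str) -> float:
        return 6.0 + (1.5 if "written by an expert" in prompt else 0.0)

    demo_items = [
        {
            "prompt": "Rate the answer on a 1-10 scale.",
            "prompt_with_aux": (
                "Rate the answer on a 1-10 scale. It was written by an expert."
            ),
            "response": "The capital of France is Paris.",
        }
    ]
    print(f"Mean score shift: {bias_magnitude(demo_items, toy_judge):+.2f}")
```

A shift near zero would indicate the judge ignores the auxiliary information; a consistently positive or negative shift indicates a systematic bias of the kind the paper catalogs.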