AI Summary
To address bottlenecks in LLM evaluation, including high human annotation costs, rigid task formats, reliance on reference answers, and systematic biases, this paper proposes Auto-PRE, an automated peer-review evaluation framework. Methodologically, Auto-PRE introduces the first LLM auto-selection mechanism grounded in three traits: consistency, pertinence, and self-confidence, spanning instruction understanding, content alignment, and response discrimination. It integrates the LLM-as-judge paradigm, multi-dimensional capability quantification, structured qualification exams, and task-adaptive evaluator selection. Empirically, Auto-PRE achieves state-of-the-art performance on three diverse tasks (summarization, non-factoid QA, and dialogue generation) while substantially reducing evaluation cost. It also demonstrates strong scalability and cross-task generalization, enabling robust, reference-free, and format-agnostic LLM assessment.
Abstract
The rapid development of large language models (LLMs) has highlighted the need for efficient and reliable methods to evaluate their performance. Traditional evaluation methods often face challenges like high costs, limited task formats, dependence on human references, and systematic biases. To address these limitations, we propose Auto-PRE, an automatic LLM evaluation framework inspired by the peer review process. Unlike previous approaches that rely on human annotations, Auto-PRE automatically selects evaluator LLMs based on three core traits: consistency, pertinence, and self-confidence, which correspond to the instruction, content, and response stages, respectively, and collectively cover the entire evaluation process. Experiments on three representative tasks, including summarization, non-factoid QA, and dialogue generation, demonstrate that Auto-PRE achieves state-of-the-art performance while significantly reducing evaluation costs. Furthermore, the structured and scalable design of our automatic qualification exam framework provides valuable insights into automating the evaluation of LLMs-as-judges, paving the way for more advanced LLM-based evaluation frameworks.
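The selection mechanism the abstract describes (a qualification exam that scores candidate evaluator LLMs on consistency, pertinence, and self-confidence, admitting only qualifying models as judges) can be sketched as follows. This is a minimal illustration under assumed scoring and aggregation: the trait names mirror the paper, but the dataclass, the equal weighting, and the 0.7 threshold are hypothetical choices, not the paper's actual procedure.

```python
from dataclasses import dataclass

@dataclass
class TraitScores:
    """Hypothetical exam scores for one candidate evaluator LLM (all in [0, 1])."""
    consistency: float      # stability of judgments across the instruction stage
    pertinence: float       # alignment with the evaluated content
    self_confidence: float  # discrimination among candidate responses

def qualification_score(s: TraitScores, weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted average of the three trait scores (equal weights are illustrative)."""
    w_c, w_p, w_s = weights
    total = w_c * s.consistency + w_p * s.pertinence + w_s * s.self_confidence
    return total / (w_c + w_p + w_s)

def select_evaluators(exam_results: dict[str, TraitScores],
                      threshold: float = 0.7) -> list[str]:
    """Return candidates passing the (assumed) bar, best-scoring first."""
    scores = {name: qualification_score(s) for name, s in exam_results.items()}
    return sorted((n for n, v in scores.items() if v >= threshold),
                  key=lambda n: -scores[n])

if __name__ == "__main__":
    # Dummy exam results for three fictional candidate models.
    results = {
        "model-a": TraitScores(0.90, 0.80, 0.85),
        "model-b": TraitScores(0.60, 0.50, 0.70),
        "model-c": TraitScores(0.80, 0.75, 0.90),
    }
    print(select_evaluators(results))  # → ['model-a', 'model-c']
```

The peer-review step would then collect judgments only from the selected models, replacing the human-annotated qualification data that earlier approaches required.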