Re-evaluating Automatic LLM System Ranking for Alignment with Human Preference

πŸ“… 2024-12-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study investigates how consistently automatic LLM evaluation frameworks ("benchers") reproduce human preference rankings, with particular attention to how bencher effectiveness degrades when the models being compared perform similarly. Method: Through controlled experiments, we systematically analyze how the input set, the judgment model, the evaluation type, and the aggregation method (e.g., the Elo rating system) affect ranking robustness. We also empirically measure the instance-level accuracy of judgment models and correlate it with system-level bencher validity. Contribution/Results: We reveal a significant inconsistency between a judgment model's instance-level accuracy and its system-level effectiveness as a bencher component, and we find that existing benchers routinely fail to reliably rank LLMs of comparable capability. Based on these findings, we propose evidence-based guidelines for selecting each component, characterize the performance regime in which benchers break down, and establish a reproducible, multi-dimensional evaluation protocol. Our work provides a methodological benchmark and concrete improvement pathways for building trustworthy, robust automated LLM evaluation systems.
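To make the system-level check concrete, below is a minimal sketch (not the paper's code) of how a bencher's model ranking can be scored against a human-preference ranking with a rank correlation such as Kendall's tau; all model names and scores are hypothetical placeholders.

```python
# Minimal sketch of system-level bencher validation: compare the ranking
# induced by bencher scores against a human-preference ranking.
# All scores below are hypothetical placeholders, not data from the paper.
from scipy.stats import kendalltau

bencher_scores = {"model_a": 1112.0, "model_b": 1045.0, "model_c": 998.0, "model_d": 995.0}
human_scores   = {"model_a": 1100.0, "model_b": 1010.0, "model_c": 1002.0, "model_d": 990.0}

models = sorted(bencher_scores)
tau, p_value = kendalltau(
    [bencher_scores[m] for m in models],
    [human_scores[m] for m in models],
)
print(f"Kendall's tau vs. human ranking: {tau:.3f} (p = {p_value:.3f})")

# Note that model_c and model_d are nearly tied: small instance-level
# judgment errors can flip their order, which is exactly the regime
# where the paper reports bencher performance declining sharply.
```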

πŸ“ Abstract
Evaluating and ranking the capabilities of different LLMs is crucial for understanding their performance and alignment with human preferences. Due to the high cost and time-consuming nature of human evaluations, an automatic LLM bencher (i.e., an automatic evaluation framework that aims to rank LLMs based on their alignment with human preferences) is indispensable. An automatic LLM bencher consists of four components: the input set (e.g., a user instruction), the evaluation model (e.g., an LLM), the evaluation type (e.g., pairwise comparison), and the aggregation method (e.g., the Elo rating system). However, previous work has not thoroughly explored how to select these components or how their different combinations influence the results. In this work, through controlled experiments, we provide a series of recommendations on how to choose each component to better automate the evaluation of LLMs. Furthermore, we discovered that when evaluating LLMs with similar performance, the performance of the automatic LLM bencher declines sharply, underscoring the limitations of current benchers and calling for future work. Lastly, we found that the evaluation models' performance at the instance level (e.g., the accuracy of selecting the best output) does not always align with their effectiveness when used as a component of a bencher, highlighting the importance of dedicated system-level evaluation of benchers.
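Since the abstract names the Elo rating system as an example aggregation method, a minimal sketch of Elo-style aggregation over pairwise judgments may help; the K-factor, model names, and match outcomes below are illustrative assumptions, not details from the paper.

```python
# Minimal Elo-style aggregation sketch: each pairwise judgment from the
# evaluation model updates the two systems' ratings, and the final
# ratings induce the system-level ranking.

def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, outcome: float, k: float = 32.0):
    """outcome: 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (outcome - e_a), r_b + k * ((1.0 - outcome) - (1.0 - e_a))

# Hypothetical pairwise judgments: (model_a, model_b, outcome_for_a).
judgments = [
    ("gpt_x", "llama_y", 1.0),
    ("llama_y", "gpt_x", 0.5),
    ("gpt_x", "llama_y", 1.0),
]

ratings = {"gpt_x": 1000.0, "llama_y": 1000.0}
for a, b, outcome in judgments:
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)
print(ratings)
```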
Problem

Research questions and friction points this paper is trying to address.

Automatic Evaluation
Large Language Models
Human-Preference Consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated Evaluation
Large Language Models
Human Preference Reflection
πŸ”Ž Similar Papers
No similar papers found.