A Statistical Framework for Ranking LLM-Based Chatbots

📅 2024-12-24
🤖 AI Summary
Existing LLM ranking methods struggle to robustly handle ties in human judgments and ignore inter-model capability correlations, resulting in unstable rankings and poor interpretability. To address this, we propose a statistical ranking framework tailored for pairwise comparison data: (1) an interpretable factorized tie model that explicitly characterizes the tie-generation mechanism; (2) incorporation of a covariance structure over competitors’ latent performance scores to uncover implicit capability hierarchies; and (3) identification constraints to resolve parameter non-identifiability inherent in Bradley–Terry-type models. Implemented via maximum likelihood estimation with constrained optimization, our method achieves significant improvements on real-world benchmarks—including Chatbot Arena—yielding a +12.7% gain in goodness-of-fit and a 0.15 increase in Kendall’s τ for ranking stability. We release an open-source Python package, *leaderbot*, to support reproducibility.

📝 Abstract
Large language models (LLMs) have transformed natural language processing, with frameworks like Chatbot Arena providing pioneering platforms for evaluating these models. By facilitating millions of pairwise comparisons based on human judgments, Chatbot Arena has become a cornerstone in LLM evaluation, offering rich datasets for ranking models in open-ended conversational tasks. Building upon this foundation, we propose a statistical framework that incorporates key advancements to address specific challenges in pairwise comparison analysis. First, we introduce a factored tie model that enhances the ability to handle ties -- an integral aspect of human-judged comparisons -- significantly improving the model's fit to observed data. Second, we extend the framework to model covariance between competitors, enabling deeper insights into performance relationships and facilitating intuitive groupings into performance tiers. Third, we resolve optimization challenges arising from parameter non-uniqueness by introducing novel constraints, ensuring stable and interpretable parameter estimation. Through rigorous evaluation and extensive experimentation, our framework demonstrates substantial improvements over existing methods in modeling pairwise comparison data. To support reproducibility and practical adoption, we release leaderbot, an open-source Python package implementing our models and analyses.
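The factored tie model described above builds on Bradley–Terry-type likelihoods extended to three outcomes (win, loss, tie). As a rough illustration only, and not the paper's exact model or the `leaderbot` API, the sketch below fits a Rao–Kupper-style tie model by maximum likelihood, with a hypothetical toy dataset and an identification constraint that pins one model's latent score at zero (the paper's constraints differ in detail):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pairwise counts for illustration:
# (model_i, model_j, wins_i, wins_j, ties)
data = [
    (0, 1, 30, 10, 5),
    (0, 2, 40, 5, 3),
    (1, 2, 25, 15, 8),
]
n_models = 3

def neg_log_lik(params):
    # Identification constraint: fix model 0's latent score at zero,
    # since Bradley-Terry scores are only defined up to a shift.
    s = np.concatenate([[0.0], params[:n_models - 1]])
    theta = 1.0 + np.exp(params[-1])  # tie parameter, constrained > 1
    ll = 0.0
    for i, j, wi, wj, t in data:
        pi, pj = np.exp(s[i]), np.exp(s[j])
        p_i_wins = pi / (pi + theta * pj)   # Rao-Kupper win probability
        p_j_wins = pj / (pj + theta * pi)
        p_tie = 1.0 - p_i_wins - p_j_wins   # equals (theta^2-1)*pi*pj / denom
        ll += wi * np.log(p_i_wins) + wj * np.log(p_j_wins) + t * np.log(p_tie)
    return -ll

# One free score per model beyond the anchor, plus the tie parameter.
res = minimize(neg_log_lik, x0=np.zeros(n_models), method="BFGS")
scores = np.concatenate([[0.0], res.x[:n_models - 1]])
ranking = np.argsort(-scores)  # strongest model first
```

The tie parameter θ controls how much probability mass is diverted from decisive outcomes into ties; the paper's factored formulation and covariance structure go beyond this single-parameter treatment.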
Problem


Statistical Methods
Language Model Evaluation
Chatbot Ranking
Innovation


Statistical Method
Chatbot Performance Evaluation
Open-source Toolkit
S. Ameli
ICSI and Department of Statistics, University of California, Berkeley

Siyuan Zhuang
PhD Student, UC Berkeley
Machine Learning, Distributed Systems

Ion Stoica
Professor of Computer Science, UC Berkeley
Cloud Computing, Networking, Distributed Systems, Big Data

Michael W. Mahoney
ICSI, LBNL, and Department of Statistics, University of California, Berkeley