Drawing Conclusions from Draws: Rethinking Preference Semantics in Arena-Style LLM Evaluation

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper challenges the conventional interpretation of "ties" in LLM arena evaluations as indicative of equal model capability, arguing instead that ties primarily reflect query difficulty rather than balanced model performance. Method: Leveraging three real-world arena datasets, the authors quantify the relationship between tie occurrences and query attributes via risk-ratio analysis, showing that ties are significantly associated with higher query objectivity and lower difficulty. Contribution/Results: They recommend query-aware rating updates that move beyond uniform tie handling in Elo-style rating systems. Experiments show that excluding ties from rating updates improves prediction accuracy for head-to-head outcomes by 1–3% across four Elo-style variants, supporting the reinterpretation of ties as difficulty signals rather than capability equivalences.

📝 Abstract
In arena-style evaluation of large language models (LLMs), two LLMs respond to a user query, and the user chooses the winning response or deems the "battle" a draw, resulting in an adjustment to the ratings of both models. The prevailing approach for modeling these rating dynamics is to view battles as two-player game matches, as in chess, and apply the Elo rating system and its derivatives. In this paper, we critically examine this paradigm. Specifically, we question whether a draw genuinely means that the two models are equal and hence whether their ratings should be equalized. Instead, we conjecture that draws are more indicative of query difficulty: if the query is too easy, then both models are more likely to succeed equally. On three real-world arena datasets, we show that ignoring rating updates for draws yields a 1-3% relative increase in battle outcome prediction accuracy (which includes draws) for all four rating systems studied. Further analyses suggest that draws occur more for queries rated as very easy and those as highly objective, with risk ratios of 1.37 and 1.35, respectively. We recommend future rating systems to reconsider existing draw semantics and to account for query properties in rating updates.
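The modification the abstract describes can be sketched in a few lines: a standard Elo update, except that draws leave both ratings untouched instead of pulling them together. This is a minimal illustrative sketch, not the authors' implementation; the function names and the K-factor of 32 are assumptions.

```python
# Illustrative Elo update that skips rating changes on draws, per the
# paper's finding that ignoring draws improves outcome prediction.
# Function names and K=32 are assumptions, not the authors' code.

def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score of model A against model B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, outcome: str, k: float = 32.0):
    """outcome: 'a' if model A wins, 'b' if model B wins, 'draw' otherwise."""
    if outcome == "draw":
        # Key change: treat the draw as a signal about the query,
        # not the models, and leave both ratings unchanged.
        return r_a, r_b
    s_a = 1.0 if outcome == "a" else 0.0
    e_a = expected_score(r_a, r_b)
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))
```

Under the conventional scheme a draw would be scored as 0.5 for each side, moving the two ratings toward each other; here the draw is simply a no-op on the ratings.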
Problem

Research questions and friction points this paper is trying to address.

Rethinking draw semantics in arena-style LLM evaluation systems
Investigating whether draws indicate model equality or query difficulty
Proposing rating systems that account for query properties
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinterprets draws as query difficulty indicators
Proposes ignoring draws for rating updates
Recommends incorporating query properties in ratings
Raphael Tang
Microsoft
machine learning, natural language processing, multimodality, information retrieval
Crystina Zhang
University of Waterloo
Information Retrieval, Natural Language Processing
Wenyan Li
University of Copenhagen
NLP, Multimodal, IR
Carmen Lai
Independent Researcher
Pontus Stenetorp
Centre for Artificial Intelligence, University College London; Research and Development Center for Large Language Models, National Institute of Informatics
Yao Lu
Centre for Artificial Intelligence, University College London