Is Elo Rating Reliable? A Study Under Model Misspecification

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite violating the Bradley–Terry (BT) model assumptions and lacking parameter stability guarantees, the Elo rating system often outperforms more sophisticated models in win-rate prediction—especially under model misspecification and non-stationary dynamics. Method: We reinterpret Elo through the lens of no-regret online learning, formalizing it as an online gradient descent algorithm operating over sparse pairwise comparisons. This perspective exposes its intrinsic robustness to both BT violations and temporal non-stationarity. Contribution/Results: Through theoretical analysis, synthetic experiments (including strong/weak stochastic transitivity regimes), and joint evaluation of win-rate calibration and ranking consistency, we demonstrate that Elo significantly surpasses advanced alternatives—including mElo—on non-BT and non-stationary data. Crucially, we establish, for the first time, a strong empirical and theoretically grounded correlation between predictive accuracy and ranking quality. These findings provide principled justification for Elo’s widespread robustness in practical settings such as LLM evaluation.
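The summary's central observation, that Elo is an instance of online gradient descent on the Bradley–Terry log loss, can be sketched in a few lines. This is a minimal illustration, not the paper's code: the K-factor of 32 and the 400-point logistic scale are the standard Elo conventions, and the equivalence shown is that the Elo update direction equals the negative gradient of the log loss with K acting as the learning rate.

```python
import math

def expected_score(r_a, r_b, scale=400.0):
    """Bradley-Terry / logistic win probability of player a (base-10 Elo convention)."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / scale))

def elo_update(r_a, r_b, outcome, k=32.0):
    """One Elo step, read as one online-gradient-descent step on the
    log loss of the BT win probability; the K-factor is the learning rate.
    `outcome` is 1.0 if a wins, 0.0 if a loses, 0.5 for a draw."""
    p = expected_score(r_a, r_b)
    grad = outcome - p  # negative gradient of the log loss w.r.t. r_a
    return r_a + k * grad, r_b - k * grad

# Usage: a 1500-rated player upsets a 1700-rated player,
# so the rating transfer is larger than for an expected win.
ra, rb = elo_update(1500.0, 1700.0, 1.0)
```

Because the two players receive equal and opposite updates, the total rating mass is conserved; the no-regret guarantee discussed in the summary applies to exactly this kind of incremental step, even when the data-generating process is not BT or is non-stationary.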

📝 Abstract
Elo rating, widely used for skill assessment across diverse domains ranging from competitive games to large language models, is often understood as an incremental update algorithm for estimating a stationary Bradley-Terry (BT) model. However, our empirical analysis of practical matching datasets reveals two surprising findings: (1) Most games deviate significantly from the assumptions of the BT model and stationarity, raising questions about the reliability of Elo. (2) Despite these deviations, Elo frequently outperforms more complex rating systems, such as mElo and pairwise models, which are specifically designed to account for non-BT components in the data, particularly in terms of win rate prediction. This paper explains this unexpected phenomenon through three key perspectives: (a) We reinterpret Elo as an instance of online gradient descent, which provides no-regret guarantees even in misspecified and non-stationary settings. (b) Through extensive synthetic experiments on data generated from transitive but non-BT models, such as strongly or weakly stochastic transitive models, we show that the "sparsity" of practical matching data is a critical factor behind Elo's superior performance in prediction compared to more complex rating systems. (c) We observe a strong correlation between Elo's predictive accuracy and its ranking performance, further supporting its effectiveness in ranking.
Problem

Research questions and friction points this paper is trying to address.

Assessing the reliability of Elo ratings under model misspecification.
Comparing Elo against more complex rating systems on win-rate prediction.
Explaining Elo's performance on data that violates the Bradley-Terry model.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinterpretation of Elo as online gradient descent with no-regret guarantees
Sparsity of practical matching data as a driver of Elo's predictive advantage
Correlation between predictive accuracy and ranking quality