🤖 AI Summary
Although practical matching data often violates the Bradley–Terry (BT) model's assumptions and stationarity, the Elo rating system frequently outperforms more sophisticated models in win-rate prediction, precisely the regime of model misspecification and non-stationary dynamics where its simplicity might seem a liability.
Method: We reinterpret Elo through the lens of no-regret online learning, formalizing it as an online gradient descent algorithm operating over sparse pairwise comparisons. This perspective exposes its intrinsic robustness to both BT violations and temporal non-stationarity.
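The OGD reading can be made concrete: the classic Elo update r ← r + K·(observed − expected), with a logistic expected score, is exactly one gradient-descent step on the logistic log-loss. A minimal sketch of this correspondence, using a natural-log parameterization of the logistic (a rescaling of the usual base-10/400 Elo formula; names and constants are illustrative, not from the paper):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_loss(r_a, r_b, outcome):
    """Logistic log-loss of predicting `outcome` from the rating gap."""
    p = sigmoid(r_a - r_b)
    return -(outcome * math.log(p) + (1 - outcome) * math.log(1 - p))

def elo_update(r_a, r_b, outcome, k=0.1):
    """One Elo step: move each rating by k * (observed - predicted).
    This is one online-gradient-descent step on log_loss above."""
    p_hat = sigmoid(r_a - r_b)
    return r_a + k * (outcome - p_hat), r_b - k * (outcome - p_hat)

# Numerical check: the Elo increment for player a equals -k times the
# gradient of the log-loss w.r.t. r_a (central finite difference).
r_a, r_b, y, k = 0.3, -0.2, 1.0, 0.1
eps = 1e-6
num_grad = (log_loss(r_a + eps, r_b, y) - log_loss(r_a - eps, r_b, y)) / (2 * eps)
new_a, _ = elo_update(r_a, r_b, y, k)
assert abs((new_a - r_a) - (-k * num_grad)) < 1e-8
```

Because each step only touches the two players who actually played, the update is well-defined on arbitrarily sparse comparison streams, which is the setting the no-regret analysis covers.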
Contribution/Results: Through theoretical analysis, synthetic experiments (including strong and weak stochastic transitivity regimes), and joint evaluation of win-rate calibration and ranking consistency, we show that Elo surpasses more complex alternatives, including mElo, on non-BT and non-stationary data. We further observe a strong correlation between Elo's predictive accuracy and its ranking quality. Together, these findings offer a principled justification for Elo's robustness in practical settings such as LLM evaluation.
📝 Abstract
Elo rating, widely used for skill assessment across diverse domains ranging from competitive games to large language models, is often understood as an incremental update algorithm for estimating a stationary Bradley–Terry (BT) model. However, our empirical analysis of practical matching datasets reveals two surprising findings: (1) Most games deviate significantly from the assumptions of the BT model and stationarity, raising questions about the reliability of Elo. (2) Despite these deviations, Elo frequently outperforms more complex rating systems, such as mElo and pairwise models, which are specifically designed to account for non-BT components in the data, particularly in terms of win rate prediction. This paper explains this unexpected phenomenon through three key perspectives: (a) We reinterpret Elo as an instance of online gradient descent, which provides no-regret guarantees even in misspecified and non-stationary settings. (b) Through extensive synthetic experiments on data generated from transitive but non-BT models, such as strongly or weakly stochastic transitive models, we show that the "sparsity" of practical matching data is a critical factor behind Elo's superior performance in prediction compared to more complex rating systems. (c) We observe a strong correlation between Elo's predictive accuracy and its ranking performance, further supporting its effectiveness in ranking.
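Point (b) can be illustrated with a toy generator (entirely hypothetical, not the paper's experimental setup): outcomes drawn from a strongly stochastic transitive but non-BT matchup model, with Elo fit online over sparse random pairings. The win probability below depends on a capped rank gap rather than on a logistic of any latent score difference, so no BT model fits it exactly, yet Elo still tends to recover the skill ordering:

```python
import math
import random

random.seed(0)

N = 8         # players with true skill order 0 (weakest) .. 7 (strongest)
GAMES = 5000  # sparse pairwise comparisons, sampled at random

def win_prob(i, j):
    # Strongly stochastic transitive but non-BT (illustrative assumption):
    # the better player's win probability is a capped function of the rank
    # gap, not a logistic in any latent score difference.
    if i == j:
        return 0.5
    p = 0.5 + 0.1 * min(abs(i - j), 4)
    return p if i > j else 1.0 - p

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

ratings = [0.0] * N
K = 0.05
for _ in range(GAMES):
    a, b = random.sample(range(N), 2)
    y = 1.0 if random.random() < win_prob(a, b) else 0.0
    step = K * (y - sigmoid(ratings[a] - ratings[b]))  # OGD on logistic loss
    ratings[a] += step
    ratings[b] -= step

# Despite misspecification, the learned ratings broadly track the true order.
print(sorted(range(N), key=ratings.__getitem__))
```

This is only a sketch of the phenomenon, not a reproduction of the paper's benchmarks; the paper's experiments additionally vary sparsity and compare against mElo and pairwise models.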