🤖 AI Summary
Codeforces Elo ratings are widely used to evaluate the programming capabilities of large language models, yet their reliability is compromised by several hidden factors. This study systematically identifies and quantifies three major sources of bias: submission order, contest difficulty selection, and model execution stochasticity. In controlled experiments spanning 37 Codeforces contests and 13,691 generated test cases, multiple models were repeatedly evaluated under varying conditions. The results demonstrate that these factors can induce rating fluctuations of up to 394, 1,122, and 349 points, respectively, substantially undermining the validity of cross-model comparisons. This work exposes critical limitations in the current mainstream evaluation paradigm and calls for standardized assessment protocols to ensure more reliable and reproducible benchmarking of code generation models.
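For context, ratings of this kind trace back to the standard Elo update rule, a minimal sketch of which is shown below; the paper's actual Codeforces-style rating procedure generalizes this to multi-participant contests and may differ in its details.

```latex
% Textbook two-player Elo update (illustrative sketch only; Codeforces
% uses a multi-participant generalization of this idea).
% R_A, R_B: current ratings; S_A: observed result for A (1 win, 0.5 draw, 0 loss);
% K: update step size.
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad
R_A' = R_A + K\,(S_A - E_A)
```

Because the update depends directly on the observed results and is applied sequentially, it is not order-invariant, which is one illustration of why factors such as submission order and contest selection can propagate into the final rating.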
📝 Abstract
As Large Language Models (LLMs) achieve breakthroughs in complex reasoning, Codeforces-based Elo ratings have emerged as a prominent metric for evaluating competitive programming capabilities. However, these ratings are often reported without critical experimental details, leading to significant discrepancies: in recent reports, the score of the same model version fluctuated by nearly 500 points. This paper presents a systematic empirical study of the hidden factors biasing Elo evaluations: (1) the temporal ordering of submissions, (2) contest difficulty selection, and (3) run-to-run stochastic variability of LLMs. Using a controlled benchmark of 37 recent Codeforces contests and 13,691 generated test cases, we demonstrate that Elo scores are highly sensitive to these parameters. Our findings reveal that varying the submission order can shift scores by 394 points, while contest selection can cause differences of up to 1,122 points for the same model. Run-to-run performance exhibits substantial instability, with a maximum difference of 349 points in mean scores across repeated evaluations of identical contests. We conclude that direct Elo comparisons are unreliable and potentially misleading without strict standardization and transparent reporting of experimental settings.
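As an illustration of how run-to-run variability can be quantified, the sketch below repeatedly evaluates the same model on the same contests and summarizes the spread of the resulting ratings. The `evaluate_elo_rating` callable and all identifiers are hypothetical placeholders, not the evaluation harness used in the paper.

```python
import statistics
from typing import Callable, Sequence


def rating_spread(
    model: str,
    contests: Sequence[str],
    evaluate_elo_rating: Callable[..., float],
    n_runs: int = 10,
) -> dict:
    """Evaluate the same model on the same contests n_runs times and
    summarize the spread of the resulting Elo estimates.

    `evaluate_elo_rating(model, contests, seed=...)` is a hypothetical
    callable standing in for a full evaluation pipeline (code generation,
    judging against test cases, and rating computation).
    """
    ratings = [evaluate_elo_rating(model, contests, seed=run) for run in range(n_runs)]
    return {
        "mean": statistics.mean(ratings),
        "stdev": statistics.stdev(ratings),
        "max_minus_min": max(ratings) - min(ratings),
    }
```

A large `max_minus_min` under a protocol like this corresponds to the kind of 349-point run-to-run gap reported in the study; repeating the same evaluation under permuted submission orders or different contest subsets would probe the other two factors in the same way.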