🤖 AI Summary
Conventional LLM evaluation relies on global average metrics (e.g., accuracy or human preference scores), obscuring performance heterogeneity across diverse prompts and user conditions.
Method: We propose Prompt-to-Leaderboard (P2L), a fine-grained, prompt-dependent leaderboard framework. P2L trains an LLM end-to-end to map natural language prompts to vectors of Bradley–Terry coefficients, enabling prompt-level modeling of relative model performance.
Contribution/Results: Empirically, P2L's ability to produce prompt-specific evaluations follows a power-law scaling similar to that observed in LLMs themselves. P2L enables unsupervised task-specific evaluation, optimal routing of queries to models, personalization, and automated analysis of model strengths and weaknesses. Trained on Chatbot Arena human preference data, a router built on P2L reached the #1 spot on the Chatbot Arena leaderboard in January 2025, outperforming global-average baselines and characterizing each model's strengths and weaknesses across prompt categories.
📝 Abstract
Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance. To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces leaderboards specific to a prompt. The core idea is to train an LLM that takes natural language prompts as input and outputs a vector of Bradley-Terry coefficients, which are then used to predict the human preference vote. The resulting prompt-dependent leaderboards allow for unsupervised task-specific evaluation, optimal routing of queries to models, personalization, and automated evaluation of model strengths and weaknesses. Data from Chatbot Arena suggest that P2L captures the nuanced landscape of language model performance better than the averaged leaderboard. Furthermore, our findings suggest that P2L's ability to produce prompt-specific evaluations follows a power-law scaling similar to that observed in LLMs themselves. In January 2025, the router we trained based on this methodology achieved the #1 spot on the Chatbot Arena leaderboard. Our code is available at this GitHub link: https://github.com/lmarena/p2l.
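To make the core idea concrete, here is a minimal sketch of how prompt-dependent Bradley-Terry coefficients could be used once a P2L model has produced them. The coefficient values and model names below are hypothetical placeholders, not outputs of the actual trained model; under the Bradley-Terry model, the probability that model A's response is preferred over model B's is the logistic sigmoid of the coefficient difference, and routing simply picks the model with the highest coefficient for that prompt.

```python
import math

def bt_win_prob(theta_a: float, theta_b: float) -> float:
    """Bradley-Terry probability that model A's response beats model B's,
    given their prompt-dependent coefficients: sigmoid(theta_a - theta_b)."""
    return 1.0 / (1.0 + math.exp(-(theta_a - theta_b)))

def route(coeffs: dict[str, float]) -> str:
    """Route the query to the model with the highest coefficient
    for this prompt (the per-prompt leaderboard winner)."""
    return max(coeffs, key=coeffs.get)

# Hypothetical per-prompt coefficients a trained P2L model might emit
coeffs = {"model-a": 1.2, "model-b": 0.4, "model-c": -0.3}

p = bt_win_prob(coeffs["model-a"], coeffs["model-b"])  # sigmoid(0.8) ~= 0.69
best = route(coeffs)  # "model-a"
```

Note that equal coefficients yield a 50/50 preference prediction, so the averaged leaderboard is recovered as the special case where the coefficient vector is constant across prompts.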