RouterEval: A Comprehensive Benchmark for Routing LLMs to Explore Model-level Scaling Up in LLMs

📅 2025-03-08
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Prior LLM routing research has lacked a comprehensive, open-source benchmark for systematic evaluation. Method: We introduce RouterEval, a large-scale, fully open benchmark for LLM routing, comprising more than 8,500 models, 12 popular evaluation tasks (e.g., knowledge-based QA, commonsense reasoning, and mathematical reasoning), and over 200 million performance records, together with a standardized, reproducible evaluation pipeline. Contribution/Results: We empirically identify a "model-level scaling-up" phenomenon in LLM routing: with a capable router, performance improves significantly as the candidate model pool expands, and can surpass both the best individual model in the pool and most existing strong LLMs. Our evaluations also show that existing routing methods still have substantial room for improvement, highlighting key challenges and opportunities for future work.

📝 Abstract
Routing large language models (LLMs) is a novel paradigm that recommends the most suitable LLM from a pool of candidates to process a given input through a well-designed router. Our comprehensive analysis reveals a model-level scaling-up phenomenon in LLMs, i.e., a capable router can significantly enhance the performance of this paradigm as the number of candidates increases. This improvement can even easily surpass the performance of the best single model in the pool and most existing strong LLMs, making it a highly promising paradigm. However, the lack of comprehensive and open-source benchmarks for Routing LLMs has hindered the development of routers. In this paper, we introduce RouterEval, a benchmark designed specifically for router research, which includes over 200,000,000 performance records for 12 popular LLM evaluations across areas such as knowledge-based Q&A, commonsense reasoning, semantic understanding, mathematical reasoning, and instruction following, based on more than 8,500 LLMs. Using RouterEval, extensive evaluations of existing Routing LLM methods reveal that most still have significant room for improvement. See https://github.com/MilkThink-Lab/RouterEval for all data, code, and tutorials.
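The routing paradigm and the model-level scaling-up phenomenon described in the abstract can be sketched with a toy experiment. The records below are synthetic stand-ins for RouterEval-style performance data, and the `oracle_router` upper bound is an illustrative device, not a method from the paper:

```python
import random

random.seed(0)

# Synthetic stand-in for RouterEval-style performance records:
# pool[model][i] = 1 if that model answers input i correctly, else 0.
NUM_INPUTS = 1000
pool = {}
for k in range(50):
    acc = random.uniform(0.4, 0.7)  # each model's base accuracy
    pool[f"model_{k}"] = [1 if random.random() < acc else 0
                          for _ in range(NUM_INPUTS)]

def best_single(records):
    """Accuracy of the best individual model in the pool."""
    return max(sum(r) / len(r) for r in records.values())

def oracle_router(records):
    """Routing upper bound: send each input to any model that gets it right."""
    rows = list(records.values())
    n = len(rows[0])
    return sum(any(r[i] for r in rows) for i in range(n)) / n

# Model-level scaling-up: the routing ceiling rises as the candidate
# pool grows, while the best single model's accuracy barely moves.
for size in (1, 5, 20, 50):
    sub = dict(list(pool.items())[:size])
    print(f"pool={size:2d}  best single={best_single(sub):.3f}  "
          f"oracle routing={oracle_router(sub):.3f}")
```

A real router, of course, cannot peek at the answers; it must predict which candidate will succeed from the input alone, which is exactly the gap between current methods and this ceiling that the benchmark is designed to measure.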
Problem

Research questions and friction points this paper is trying to address.

Lack of comprehensive, open-source benchmarks for Routing LLMs has hindered router development
No standardized protocol existed for evaluating routers across tasks and candidate pools
Existing routing methods still show significant room for improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

RouterEval: an open benchmark designed specifically for router research
Model-level scaling-up phenomenon: a capable router improves performance as the candidate pool grows
Over 200 million performance records covering 12 LLM evaluations and more than 8,500 LLMs