🤖 AI Summary
Problem: Inference serving for large language models (LLMs) suffers from inefficient instruction-model matching, leading to suboptimal routing decisions and wasted computational resources.
Method: This paper proposes a dynamic routing paradigm based on “capability instructions”—structured prompts that jointly encode a model’s capability representation, the user instruction, and a performance-probing query—enabling routing decisions without executing any candidate model on the instruction. We introduce Model-SAT, an end-to-end framework built around a lightweight model capability encoder that lets an unseen model rapidly self-assess across 50 tasks with only 20 shots per task. The method covers capability instruction construction, capability representation learning, positive/negative sample generation, and probabilistic dynamic routing; an illustrative sketch of the routing step follows.
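The sketch below is a minimal, hypothetical illustration (not the authors' released code) of how a capability instruction might be assembled and scored: `Candidate`, `build_capability_instruction`, `route`, and `score_fn` are assumed names, with `score_fn` standing in for the Model-SAT model that returns a success probability for a capability instruction.

```python
# Illustrative sketch only: compose a "capability instruction" from a model's
# capability representation, the user instruction, and a performance-probing
# query, then route to the candidate with the highest predicted success score.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    name: str
    capability_repr: str  # e.g., serialized aptitude-test results across tasks

# Hypothetical probing query appended to every capability instruction.
PROBE = "Can this model answer the instruction above correctly? Answer yes or no."

def build_capability_instruction(candidate: Candidate, user_instruction: str) -> str:
    """Concatenate capability representation, user instruction, and probe prompt."""
    return (
        f"[Model capability] {candidate.capability_repr}\n"
        f"[Instruction] {user_instruction}\n"
        f"[Probe] {PROBE}"
    )

def route(candidates: List[Candidate],
          user_instruction: str,
          score_fn: Callable[[str], float]) -> Candidate:
    """Pick the candidate whose capability instruction receives the highest
    predicted probability of success; score_fn stands in for Model-SAT."""
    scored = [
        (score_fn(build_capability_instruction(c, user_instruction)), c)
        for c in candidates
    ]
    return max(scored, key=lambda pair: pair[0])[1]
```

Note that no candidate model is run on the user instruction here; only the scoring model evaluates the capability instructions, which is the efficiency argument behind the routing paradigm.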
Contribution/Results: Evaluated in realistic new-model release scenarios, Model-SAT achieves state-of-the-art routing accuracy without running any candidate model at inference time, improving overall serving quality and resource efficiency.
📝 Abstract
Large Language Models (LLMs) have demonstrated human-like instruction-following abilities, particularly those exceeding 100 billion parameters. The combined capabilities of several smaller, resource-friendly LLMs can cover most of the instructions that larger LLMs excel at. In this work, we explore how to route each instruction to the best-performing LLM to achieve better overall performance. We develop a new paradigm that constructs capability instructions, combining a model capability representation, the user instruction, and a performance inquiry prompt, to assess expected performance. To learn from capability instructions, we introduce a new end-to-end framework called Model Selection with Aptitude Test (Model-SAT), which generates positive and negative samples based on which instructions different models handle well or poorly. Model-SAT uses a model capability encoder that extends its model representation to a lightweight LLM. Our experiments show that Model-SAT understands the performance dimensions of candidate models and predicts the probability that each can handle a given instruction. Additionally, during deployment, a new model can quickly infer its aptitude test results across 50 tasks, each with 20 shots. Model-SAT achieves state-of-the-art model routing without candidate inference, including in real-world scenarios where new models are released. The code is available at https://github.com/Now-Join-Us/CIT-LLM-Routing
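As a rough illustration of the deployment-time aptitude test described above, the sketch below assumes a hypothetical interface: `generate` is any candidate model's text-generation function, `tasks` maps 50 task names to (prompt, reference) pairs, and the per-task accuracies over 20 shots are serialized into the textual capability representation that the capability encoder would consume. This is a minimal sketch under those assumptions, not the released implementation.

```python
# Minimal sketch: a newly deployed model takes a small "aptitude test"
# (20 shots on each of 50 tasks); its per-task accuracies form the
# capability profile used as the model's capability representation.
from typing import Callable, Dict, List, Tuple

Example = Tuple[str, str]  # (prompt, reference answer)

def aptitude_test(generate: Callable[[str], str],
                  tasks: Dict[str, List[Example]],
                  shots_per_task: int = 20) -> Dict[str, float]:
    """Return per-task accuracy for a candidate model's `generate` function."""
    profile: Dict[str, float] = {}
    for task_name, examples in tasks.items():
        shots = examples[:shots_per_task]
        correct = sum(generate(prompt).strip() == answer.strip()
                      for prompt, answer in shots)
        profile[task_name] = correct / len(shots) if shots else 0.0
    return profile

def capability_repr(profile: Dict[str, float]) -> str:
    """Serialize the aptitude-test profile into a textual capability description."""
    return "; ".join(f"{task}: {acc:.2f}" for task, acc in sorted(profile.items()))
```

Because the test touches only 50 × 20 = 1,000 examples, a new model's capability representation can be produced quickly at release time, which is what enables routing to models that were never seen during training.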