🤖 AI Summary
This work addresses the limitations of existing prompt routing methods, which struggle to distinguish among large language models with similar performance and rely on manual task categorization, and therefore fail to capture fine-grained capability differences. The authors propose a two-stage routing architecture: the first stage applies graph clustering to automatically discover latent fine-grained task types and trains a classifier that assigns prompts to those tasks; the second stage uses a task-aware mixture-of-experts (MoE) framework to perform prompt-level quality estimation. At inference, predictions from the two stages are fused to balance task-level stability with prompt-level adaptability. The approach is the first to enable annotation-free discovery of fine-grained tasks and to integrate task-level classification with prompt-level scoring, significantly improving routing accuracy and scalability. Experiments across 10 benchmarks and 11 state-of-the-art models demonstrate that the method outperforms the strongest individual model while reducing inference cost by more than 50%.
📝 Abstract
Prompt routing dynamically selects the most appropriate large language model from a pool of candidates for each query, optimizing performance while managing costs. As model pools scale to include dozens of frontier models with narrow performance gaps, existing approaches face significant challenges: manually defined task taxonomies cannot capture fine-grained capability distinctions, while monolithic routers struggle to differentiate subtle differences across diverse tasks. We propose a two-stage routing architecture that addresses these limitations through automated fine-grained task discovery and task-aware quality estimation. Our first stage employs graph-based clustering to discover latent task types and trains a classifier to assign prompts to discovered tasks. The second stage uses a mixture-of-experts architecture with task-specific prediction heads for specialized quality estimates. At inference, we aggregate predictions from both stages to balance task-level stability with prompt-specific adaptability. Evaluated on 10 benchmarks with 11 frontier models, our method consistently outperforms existing baselines and surpasses the strongest individual model while incurring less than half its cost.
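To make the inference-time fusion concrete, here is a minimal sketch of how the two stages described above might be combined. The paper does not publish this code; all names (`route_prompt`, `task_probs`, `head_scores`, `task_priors`, `alpha`) and the linear mixing rule are illustrative assumptions: a task classifier yields a distribution over discovered tasks, per-task MoE heads yield prompt-level quality scores per candidate model, and a task-level prior (e.g. historical average quality per task) supplies the stability term.

```python
import numpy as np

def route_prompt(task_probs, head_scores, task_priors, alpha=0.5):
    """Select a model by fusing task-level and prompt-level estimates.

    task_probs:  (T,)   classifier distribution over T discovered tasks
    head_scores: (T, M) per-task MoE quality estimates for M candidate models
    task_priors: (T, M) task-level prior quality per (task, model), e.g.
                 historical averages (an assumed stability term)
    alpha:       weight on prompt-level scores vs. task-level priors

    Returns the index of the selected model.
    """
    # Prompt-level estimate: expected MoE score under the task distribution.
    prompt_score = task_probs @ head_scores   # shape (M,)
    # Task-level estimate: expected prior quality under the same distribution.
    task_score = task_probs @ task_priors     # shape (M,)
    # Fuse the two stages and pick the highest-scoring model.
    fused = alpha * prompt_score + (1 - alpha) * task_score
    return int(np.argmax(fused))
```

With `alpha = 1` the router trusts the prompt-level MoE heads alone; with `alpha = 0` it falls back entirely to the task-level prior, so the weight controls the stability/adaptability trade-off the abstract mentions. A cost-aware variant could subtract a per-model cost penalty from `fused` before the argmax.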