Efficient Training-Free Online Routing for High-Volume Multi-LLM Serving

📅 2025-09-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
For high-concurrency, low-token-budget online LLM serving, this paper proposes the first training-free online routing algorithm. The method rapidly matches query semantics via lightweight query feature estimation and approximate nearest-neighbor search, and learns a routing policy via a single-shot optimization over an initial query set. Theoretically, it achieves a provably bounded competitive ratio, balancing dynamic adaptability with computational efficiency. Evaluated on three benchmark datasets against eight baselines, the approach improves overall performance by 3.55×, cost efficiency by 1.85×, and throughput by 4.25×, while substantially reducing deployment overhead. To the authors' knowledge, this is the first online LLM routing framework that is simultaneously training-free, theoretically grounded (with provable guarantees), and high-throughput.

📝 Abstract
Increasing demand for Large Language Model (LLM) services imposes substantial deployment and computation costs on providers. LLM routing offers a cost-efficient solution by directing queries to the optimal LLM based on model and query features. However, existing works primarily focus on offline scenarios and struggle to adapt to online settings with high query volume and constrained token budgets. In this work, we introduce the first training-free algorithm for online routing scenarios. Our algorithm leverages approximate nearest neighbor search to efficiently estimate query features and performs a one-time optimization over a small set of initial queries to learn a routing strategy that guides future routing. We provide theoretical guarantees demonstrating that our algorithm achieves a competitive ratio of $1 - o(1)$ under natural assumptions, which is further validated by extensive experiments across 3 benchmark datasets and 8 baselines, showing an average improvement of 3.55$\times$ in overall performance, 1.85$\times$ in cost efficiency, and nearly 4.25$\times$ in throughput.
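The feature-estimation step described in the abstract can be sketched as follows. Everything here is an illustrative stand-in: the embeddings and per-model scores are random placeholders for a real initial query set, and exact cosine search stands in for the approximate nearest-neighbor index the paper uses.

```python
import numpy as np

# Hypothetical reference set: embeddings of past queries and observed
# per-model quality scores (n_queries x n_models). In the paper's setting
# these would come from the small initial query set.
rng = np.random.default_rng(0)
ref_embeddings = rng.normal(size=(100, 16))
ref_embeddings /= np.linalg.norm(ref_embeddings, axis=1, keepdims=True)
ref_scores = rng.uniform(size=(100, 4))  # quality of 4 candidate LLMs

def estimate_query_features(query_emb, k=5):
    """Estimate per-model quality for a new query by averaging the scores
    of its k most semantically similar past queries (exact search stands
    in for an approximate nearest-neighbor index such as HNSW)."""
    query_emb = query_emb / np.linalg.norm(query_emb)
    sims = ref_embeddings @ query_emb        # cosine similarity to past queries
    nearest = np.argsort(-sims)[:k]          # indices of the top-k neighbors
    return ref_scores[nearest].mean(axis=0)  # averaged per-model estimate

est = estimate_query_features(rng.normal(size=16))
best_model = int(np.argmax(est))             # route to the highest estimate
```

Swapping the brute-force search for a real ANN index changes only the `nearest` lookup; the averaging step stays the same.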
Problem

Research questions and friction points this paper is trying to address.

Online routing for high-volume multi-LLM serving
Training-free algorithm for efficient query routing
Optimizing routing under constrained token budgets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free algorithm for online routing
Uses approximate nearest neighbor search
One-time optimization with initial queries
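The one-time optimization over initial queries can be illustrated with a simple shadow-price sketch: binary-search a token price on the initial batch so that budget-aware routing fits the token budget, then reuse that price for all future queries. The data, budget, and pricing rule below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical initial batch: estimated quality and token cost of each of
# 3 candidate models for each of 50 observed queries.
rng = np.random.default_rng(1)
scores = rng.uniform(size=(50, 3))                 # quality estimates
costs = rng.uniform(0.5, 4.0, size=(50, 3))        # token costs
budget_per_query = 2.0                             # average token budget

def fit_price(scores, costs, budget):
    """One-time optimization: binary-search a shadow price lam so that
    routing by score minus priced cost on the initial batch stays
    within the average token budget."""
    lo, hi = 0.0, 10.0
    for _ in range(50):
        lam = (lo + hi) / 2
        choice = np.argmax(scores - lam * costs, axis=1)
        spend = costs[np.arange(len(choice)), choice].mean()
        if spend > budget:
            lo = lam      # over budget: tokens need a higher price
        else:
            hi = lam      # within budget: try a cheaper price
    return hi             # hi is always kept feasible

lam = fit_price(scores, costs, budget_per_query)

def route(score_row, cost_row, lam):
    """Route a future query with the learned price; no retraining needed."""
    return int(np.argmax(score_row - lam * cost_row))
```

This primal-dual style rule is one standard way to trade quality against a token budget online; the paper's optimization and its competitive-ratio analysis are more involved.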