Robust Batch-Level Query Routing for Large Language Models under Cost and Capacity Constraints

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing query-level routing methods, which struggle to control batch-level overhead under constraints of cost, GPU resources, and concurrency—particularly when faced with non-uniform or adversarial query batches. The paper introduces the first batch-oriented, resource-aware robust routing framework that jointly optimizes model assignment within each batch and incorporates offline multi-model instance scheduling. The problem is formulated as an integer program, or solved with heuristic algorithms, under explicit resource constraints; the approach hedges against uncertainty in predicted performance while adhering to cost and capacity limits. Experiments on two multitask LLM benchmarks demonstrate substantial improvements over baselines: the robust variant achieves 1–14% higher accuracy, batch-level routing yields up to 24% gains over query-level methods in adversarial settings, and optimized instance allocation provides an additional boost of up to 3%, all while strictly satisfying resource constraints.
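The batch-level assignment described above can be sketched as a tiny constrained optimization: pick one model per query to maximize total predicted accuracy, subject to a batch cost budget and a per-model concurrency capacity. The sketch below solves a toy instance by exhaustive search (the paper uses an integer program or heuristics at scale); all model names, costs, capacities, and accuracy predictions are illustrative assumptions, not the paper's data.

```python
# Toy batch-level routing: exhaustive search over per-query model
# assignments under a cost budget and per-model capacity limits.
from itertools import product

COST = {"small": 1.0, "large": 4.0}   # assumed per-query inference cost
CAPACITY = {"small": 3, "large": 2}   # assumed concurrency limit per model
BUDGET = 10.0                         # assumed total budget for the batch

# Assumed predicted accuracy of each model on each query in the batch.
PRED_ACC = [
    {"small": 0.60, "large": 0.90},
    {"small": 0.85, "large": 0.88},
    {"small": 0.40, "large": 0.95},
    {"small": 0.70, "large": 0.92},
]

def route_batch(pred_acc, cost, capacity, budget):
    """Return (assignment, score) maximizing total predicted accuracy
    over all feasible per-query model assignments."""
    models = list(cost)
    best, best_score = None, float("-inf")
    for assign in product(models, repeat=len(pred_acc)):
        if sum(cost[m] for m in assign) > budget:
            continue  # violates the batch cost budget
        if any(assign.count(m) > capacity[m] for m in models):
            continue  # violates a model's concurrency capacity
        score = sum(p[m] for p, m in zip(pred_acc, assign))
        if score > best_score:
            best, best_score = assign, score
    return best, best_score

assignment, score = route_batch(PRED_ACC, COST, CAPACITY, BUDGET)
print(assignment, round(score, 2))
```

In this instance the two queries with the largest small-vs-large accuracy gap are routed to the large model and the rest to the small one, which is exactly the batch-level trade-off a per-query router cannot make under a shared budget.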
📝 Abstract
We study the problem of routing queries to large language models (LLMs) under cost, GPU resource, and concurrency constraints. Prior per-query routing methods often fail to control batch-level cost, especially under non-uniform or adversarial batching. To address this, we propose a batch-level, resource-aware routing framework that jointly optimizes model assignment for each batch while respecting cost and model capacity limits. We further introduce a robust variant that accounts for uncertainty in predicted LLM performance, along with an offline instance allocation procedure that balances quality and throughput across multiple models. Experiments on two multi-task LLM benchmarks show that robustness improves accuracy by 1–14% over non-robust counterparts (depending on the performance estimator), batch-level routing outperforms per-query methods by up to 24% under adversarial batching, and optimized instance allocation yields additional gains of up to 3% compared to a non-optimized allocation, all while strictly satisfying cost and GPU resource constraints.
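The offline instance-allocation step in the abstract can likewise be sketched as a small search: split a fixed GPU budget across model types, keeping at least one instance of each model so every routing target stays servable, and maximize an accuracy-weighted throughput utility. The utility function and all numbers below are assumptions for illustration, not the paper's exact objective.

```python
# Toy offline instance allocation: enumerate instance counts per model
# within a GPU budget and maximize accuracy-weighted total throughput.
from itertools import product

GPUS_TOTAL = 8
# (GPUs per instance, queries/sec per instance, mean accuracy) -- assumed
MODELS = {
    "small": (1, 20.0, 0.65),
    "large": (4, 6.0, 0.90),
}

def allocate_instances(models, gpus_total):
    """Enumerate instance counts (>= 1 per model) within the GPU budget
    and return the allocation maximizing accuracy-weighted throughput."""
    names = list(models)
    best, best_util = None, float("-inf")
    max_counts = [gpus_total // models[n][0] for n in names]
    for counts in product(*(range(1, c + 1) for c in max_counts)):
        used = sum(k * models[n][0] for k, n in zip(counts, names))
        if used > gpus_total:
            continue  # exceeds the GPU budget
        util = sum(k * models[n][1] * models[n][2]  # throughput x accuracy
                   for k, n in zip(counts, names))
        if util > best_util:
            best, best_util = dict(zip(names, counts)), util
    return best, best_util

allocation, utility = allocate_instances(MODELS, GPUS_TOTAL)
print(allocation, round(utility, 1))
```

Because this step is offline, enumeration or an integer program is affordable even for realistic fleet sizes; the online batch router then operates within whatever capacities this allocation fixes.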
Problem

Research questions and friction points this paper is trying to address.

query routing
large language models
cost constraints
GPU resources
batch-level optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

batch-level routing
resource-aware optimization
robust query routing
LLM performance uncertainty
instance allocation