🤖 AI Summary
This work addresses a key limitation of existing query-level routing methods: they struggle to control batch-level overhead under cost, GPU resource, and concurrency constraints, particularly when faced with non-uniform or adversarial query batches. The paper introduces the first batch-oriented, resource-aware robust routing framework, which jointly optimizes model assignment within each batch and incorporates offline multi-model instance scheduling. Routing is formulated as an integer program under explicit resource constraints (with heuristic algorithms as faster alternatives), and a robust variant hedges against uncertainty in predicted model performance while adhering to cost and capacity limits. Experiments on two multi-task LLM benchmarks demonstrate substantial improvements over baselines: the robust variant achieves 1–14% higher accuracy, batch-level routing yields up to 24% gains over query-level methods in adversarial settings, and optimized instance allocation provides an additional boost of up to 3%, all while strictly satisfying the resource constraints.
📝 Abstract
We study the problem of routing queries to large language models (LLMs) under cost, GPU resource, and concurrency constraints. Prior per-query routing methods often fail to control batch-level cost, especially under non-uniform or adversarial batching. To address this, we propose a batch-level, resource-aware routing framework that jointly optimizes model assignment for each batch while respecting cost and model capacity limits. We further introduce a robust variant that accounts for uncertainty in predicted LLM performance, along with an offline instance allocation procedure that balances quality and throughput across multiple models. Experiments on two multi-task LLM benchmarks show that robustness improves accuracy by 1–14% over non-robust counterparts (depending on the performance estimator), batch-level routing outperforms per-query methods by up to 24% under adversarial batching, and optimized instance allocation yields additional gains of up to 3% over a non-optimized allocation, all while strictly respecting cost and GPU resource constraints.
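To make the batch-level formulation concrete, here is a minimal sketch of the kind of integer program the abstract describes: binary variables assign each query in a batch to one model, maximizing predicted quality subject to a batch cost budget and per-model capacity (concurrency) limits. All data below (scores, costs, budget, capacities) are hypothetical illustrations, not the paper's benchmarks, and the exact objective and constraints in the paper may differ (e.g. the robust variant replaces point estimates of performance with uncertainty-aware ones).

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical data: 4 queries, 2 models (a cheap model and a strong model).
scores = np.array([[0.6, 0.90],    # predicted accuracy per (query, model)
                   [0.7, 0.95],
                   [0.8, 0.85],
                   [0.5, 0.90]])
costs  = np.array([1.0, 4.0])      # per-query cost of each model
budget = 10.0                      # batch-level cost budget
caps   = np.array([4, 2])          # concurrency capacity per model

Q, M = scores.shape
n = Q * M                          # one binary variable x[q, m] per pair

# Objective: maximize total predicted score -> minimize its negation.
c = -scores.ravel()

# Each query is routed to exactly one model.
assign = LinearConstraint(np.kron(np.eye(Q), np.ones(M)), lb=1, ub=1)

# Total batch cost must stay within the budget.
budget_c = LinearConstraint(np.tile(costs, Q).reshape(1, n), ub=budget)

# Per-model capacity limits: at most caps[m] queries routed to model m.
cap_c = LinearConstraint(np.tile(np.eye(M), Q), ub=caps)

res = milp(c, constraints=[assign, budget_c, cap_c],
           integrality=np.ones(n), bounds=Bounds(0, 1))
x = res.x.reshape(Q, M).round().astype(int)
print(x)  # row q has a 1 in the column of the model chosen for query q
```

With these numbers the solver sends the two queries with the largest quality gap (rows 0 and 3) to the strong model, exhausting both the budget and the strong model's capacity; the heuristic alternatives mentioned in the summary would approximate this assignment without an exact solver.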