🤖 AI Summary
To address resource waste in large language model (LLM) serving caused by capability–cost mismatch, this paper proposes ECCOS, a capability–cost coordinated scheduling framework for multi-LLM serving. Methodologically, it introduces a two-stage scheduling mechanism that jointly optimizes response quality, computational cost, and workload constraints: a multi-objective capability/cost predictor (combining training-based and retrieval-based modeling) paired with a constraint-aware optimizer. It also constructs QAServe, a fine-grained dataset of sample-wise response quality and cost collected by zero-shot prompting different LLMs. Evaluated on knowledge QA and mathematical reasoning tasks, ECCOS achieves a 6.30% improvement in task success rate and a 10.15% reduction in inference cost over state-of-the-art methods, while keeping scheduling overhead below 0.5% of the average LLM response latency.
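One plausible reading of the optimizer's objective (notation ours for illustration; the paper's exact formulation may differ) is a constrained assignment problem: with binary variables $x_{ij}$ assigning query $i$ to model $j$, predicted quality $\hat{q}_{ij}$, and predicted cost $\hat{c}_{ij}$,

$$
\min_{x}\ \sum_{i,j} \hat{c}_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j} x_{ij} = 1 \ \ \forall i, \qquad
\sum_{j} \hat{q}_{ij}\, x_{ij} \ge \tau \ \ \forall i, \qquad
\sum_{i} x_{ij} \le L_j \ \ \forall j,
$$

where $\tau$ is a per-query quality floor and $L_j$ caps the workload of model $j$.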
📝 Abstract
As large language models (LLMs) are increasingly deployed as service endpoints in systems, the surge in query volume creates significant scheduling challenges. Existing scheduling frameworks mainly target latency optimization while neglecting that LLMs differ in their capability to serve queries of different difficulty, which can waste computational resources. This paper addresses this challenge by proposing a capability-cost coordinated scheduling framework, ECCOS, for multi-LLM serving, which explicitly constrains response quality and workload to optimize LLM inference cost. Specifically, it introduces two-stage scheduling by designing a multi-objective predictor and a constrained optimizer. The predictor estimates both model capabilities and computational costs through training-based and retrieval-based approaches, while the optimizer determines cost-optimal assignments under quality and workload constraints. It also introduces QAServe, a dataset of sample-wise response quality and cost collected by zero-shot prompting different LLMs on knowledge QA and mathematical reasoning. Extensive experiments demonstrate that ECCOS improves success rates by 6.30% while reducing costs by 10.15% compared to existing methods, with scheduling consuming less than 0.5% of LLM response time. The code is available at: https://github.com/agiresearch/ECCOS.
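To make the predictor-then-optimizer idea concrete, below is a minimal, hypothetical sketch in Python of such a two-stage scheduler. It is not the ECCOS implementation: the predictor is stubbed with a lookup table, and all names (`ModelProfile`, `quality_floor`, `schedule`) are ours for illustration.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    capacity: int  # workload constraint: max queries this model may take
    load: int = 0  # queries assigned so far

def predict(query: str, model: ModelProfile) -> tuple[float, float]:
    """Stage 1 (stubbed): estimate (quality, cost) for a query-model pair.
    ECCOS combines training-based and retrieval-based predictors; a static
    lookup stands in here so the sketch stays runnable."""
    table = {
        ("easy", "small-llm"): (0.92, 1.0),
        ("easy", "large-llm"): (0.98, 8.0),
        ("hard", "small-llm"): (0.55, 1.0),
        ("hard", "large-llm"): (0.90, 8.0),
    }
    return table[(query, model.name)]

def schedule(query: str, models: list[ModelProfile],
             quality_floor: float = 0.8) -> ModelProfile | None:
    """Stage 2: among models meeting the quality floor and workload cap,
    assign the query to the cheapest one (cost-optimal under constraints)."""
    feasible = []
    for m in models:
        quality, cost = predict(query, m)
        if quality >= quality_floor and m.load < m.capacity:
            feasible.append((cost, m))
    if not feasible:
        return None  # no model satisfies both constraints
    _, best = min(feasible, key=lambda pair: pair[0])
    best.load += 1
    return best

models = [ModelProfile("small-llm", capacity=4), ModelProfile("large-llm", capacity=2)]
print(schedule("easy", models).name)  # small-llm: the cheap model already meets the floor
print(schedule("hard", models).name)  # large-llm: the quality floor forces the costly model
```

The greedy per-query rule above only conveys where the predictor's estimates enter the decision; the paper's optimizer reasons over quality and workload constraints jointly rather than one query at a time.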