🤖 AI Summary
In LLM service scheduling, balancing user fairness—measured by weighted latency and token allocation—with system efficiency—measured by throughput and GPU utilization—remains challenging; moreover, key performance metrics are only observable after execution, creating a scheduling paradox. Method: This paper proposes a Dual-Counter Fair Scheduling framework featuring: (i) a tunable, unified fairness scoring system integrating user-perceived latency and resource utilization; (ii) MoPE, a deterministic mixture-of-experts prediction model that estimates first-token latency and throughput before scheduling decisions are made; and (iii) adaptive batching coupled with non-blocking scheduling. Results: Evaluated on real and synthetic workloads, the framework achieves up to 1.3× higher throughput, 60% lower first-token latency, 13% improved fairness, and 94% GPU utilization compared to VTC, demonstrating strong cross-platform robustness and effectiveness.
📝 Abstract
We address the limitations of current LLM serving with a dual-counter framework separating user and operator perspectives. The User Fairness Counter measures quality of service via weighted tokens and latency; the Resource Fairness Counter measures operational efficiency through throughput and GPU utilization. Since these metrics are only available post-execution, creating a scheduling paradox, we introduce a deterministic Mixture of Prediction Experts (MoPE) framework to predict user-perceived latency, output tokens, throughput, and GPU utilization. These predictions enable calculation of a unified Holistic Fairness score that balances both counters through tunable parameters, enabling proactive fairness-aware scheduling. We implement this in Equinox, an open-source system with further optimizations such as adaptive batching and stall-free scheduling. Evaluations on production traces (ShareGPT, LMSYS) and synthetic workloads demonstrate Equinox achieves up to $1.3\times$ higher throughput, 60% lower time-to-first-token latency, and 13% higher fairness versus VTC while maintaining 94% GPU utilization, proving fairness under bounded discrepancy across heterogeneous platforms.
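To make the dual-counter idea concrete, here is a minimal sketch of how a Holistic Fairness score could blend the two counters. The paper's exact formulas are not reproduced here: the function names, the normalization of each counter, and the single tunable weight `alpha` are all illustrative assumptions, standing in for whatever parameterization Equinox actually uses.

```python
def user_fairness(weighted_tokens: float, latency_s: float) -> float:
    """User Fairness Counter (assumed form): quality of service rises
    with weighted token allocation and falls with perceived latency."""
    return weighted_tokens / (1.0 + latency_s)

def resource_fairness(throughput_tps: float, gpu_util: float) -> float:
    """Resource Fairness Counter (assumed form): operational efficiency
    from normalized throughput and GPU utilization in [0, 1]."""
    return throughput_tps * gpu_util

def holistic_fairness(uf: float, rf: float, alpha: float = 0.5) -> float:
    """Tunable blend of the two counters: alpha = 1 is purely
    user-centric, alpha = 0 purely operator-centric."""
    return alpha * uf + (1.0 - alpha) * rf

# A scheduler would rank pending requests by their predicted score --
# MoPE supplies the predicted latency and throughput before execution --
# and admit the highest-scoring request into the next batch.
score = holistic_fairness(user_fairness(10.0, 0.5),
                          resource_fairness(0.8, 0.94),
                          alpha=0.7)
```

The key point the sketch illustrates is that the score is computable *before* execution only because every input is a MoPE prediction rather than a measured value, which is what resolves the post-execution paradox described above.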