🤖 AI Summary
To address the low resource utilization, inefficient scheduling, and degraded service quality that arise when LLM training and inference workloads are co-located in large-scale AI clusters, this paper proposes an efficient unified scheduling system. The system introduces quantitative metrics—including GPU allocation rate, scheduling occupancy rate, and node fragmentation rate—to make mixed-workload scheduling analyzable and optimizable. It further designs a collaborative strategy combining backfilling with an enhanced bin-packing (E-Binpack) algorithm to minimize communication overhead and resource fragmentation while satisfying SLA guarantees. Empirical evaluation demonstrates stable operation across cluster scales ranging from hundreds to tens of thousands of GPUs, achieving 23–37% higher resource utilization and a 19% reduction in average job completion time. The system has been deployed in multiple AI data centers, supporting both efficient training of billion-parameter models and low-latency real-time inference.
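The metrics named above can be illustrated with a small sketch. The exact formulas are not given in this summary, so the definitions below (in particular the fragmentation rule: free GPUs stranded on nodes too empty to host the smallest pending job) are assumptions for illustration only:

```python
# Illustrative metric calculations. The precise definitions used by the
# paper are not reproduced here; these formulas are plausible assumptions.

def gpu_allocation_ratio(allocated_gpus: int, total_gpus: int) -> float:
    """GAR: fraction of cluster GPUs currently allocated to jobs."""
    return allocated_gpus / total_gpus

def gpu_node_fragmentation_ratio(free_gpus_per_node: list[int],
                                 min_job_gpus: int) -> float:
    """GFR (assumed definition): fraction of free GPUs stranded on nodes
    whose remaining capacity cannot host even the smallest pending job."""
    free_total = sum(free_gpus_per_node)
    if free_total == 0:
        return 0.0
    stranded = sum(f for f in free_gpus_per_node if 0 < f < min_job_gpus)
    return stranded / free_total

# Example: 4 nodes with 8 GPUs each; free GPUs left per node after packing.
nodes_free = [2, 1, 0, 5]                                # 8 free of 32
print(gpu_allocation_ratio(32 - sum(nodes_free), 32))    # 0.75
print(gpu_node_fragmentation_ratio(nodes_free, 4))       # 0.375
```

In this toy example, three of the eight free GPUs sit on nodes with fewer than four free GPUs, so a 4-GPU job could never use them; a bin-packing policy that fills nodes more tightly would push this ratio down.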
📝 Abstract
As AI cluster sizes continue to expand and the demand for large-language-model (LLM) training and inference workloads grows rapidly, traditional scheduling systems face significant challenges in balancing resource utilization, scheduling efficiency, and service quality. This paper presents and evaluates Kant: an efficient unified scheduling platform designed for large-scale AI container clusters, supporting the co-scheduling of both training and inference jobs. Based on the practical implementation of the Kant system, we systematically define a set of key evaluation metrics for AI clusters, including GPU Allocation Ratio (GAR), Scheduling Occupancy Rate (SOR), GPU Node Fragmentation Ratio (GFR), Job Waiting Time Distribution (JWTD), and Job Training Time Estimation Distribution (JTTED), providing a foundation for quantitative performance analysis. Experimental results demonstrate that Kant achieves exceptional performance in clusters ranging from hundreds to tens of thousands of GPUs. By leveraging scheduling strategies such as Backfill and Enhanced Binpack (E-Binpack), the system significantly improves resource utilization and scheduling efficiency, while effectively reducing resource fragmentation and communication overhead in distributed training. The system has been deployed in multiple AI data center clusters, where it stably supports large-scale intelligent computing workloads. This work provides a practical engineering approach for building high-performance, highly available, AI-native scheduling infrastructure.