Kant: An Efficient Unified Scheduling System for Large-Scale AI Clusters

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low resource utilization, inefficient scheduling, and degraded service quality arising from co-located LLM training and inference workloads in large-scale AI clusters, this paper proposes Kant, an efficient unified scheduling system. The system introduces quantitative metrics—including GPU allocation rate, scheduling occupancy rate, and node fragmentation rate—to enable analyzable and optimizable mixed-workload scheduling. It further designs a collaborative strategy combining backfilling and an enhanced bin-packing (E-Binpack) algorithm to minimize communication overhead and resource fragmentation while satisfying SLA guarantees. Empirical evaluation demonstrates stable operation across cluster scales ranging from hundreds to tens of thousands of GPUs, achieving 23–37% higher resource utilization and a 19% reduction in average job completion time. The system has been deployed in multiple AI data centers, supporting both efficient training of billion-parameter models and low-latency real-time inference.

📝 Abstract
As AI cluster sizes continue to expand and the demand for large-language-model (LLM) training and inference workloads grows rapidly, traditional scheduling systems face significant challenges in balancing resource utilization, scheduling efficiency, and service quality. This paper presents and evaluates Kant: an efficient unified scheduling platform designed for large-scale AI container clusters, supporting the co-scheduling of both training and inference jobs. Based on the practical implementation of the Kant system, we systematically define a set of key evaluation metrics for AI clusters, including GPU Allocation Ratio (GAR), Scheduling Occupancy Rate (SOR), GPU Node Fragmentation Ratio (GFR), Job Waiting Time Distribution (JWTD), and Job Training Time Estimation Distribution (JTTED), providing a foundation for quantitative performance analysis. Experimental results demonstrate that Kant achieves exceptional performance in clusters ranging from hundreds to tens of thousands of GPUs. By leveraging scheduling strategies such as Backfill and Enhanced Binpack (E-Binpack), the system significantly improves resource utilization and scheduling efficiency, while effectively reducing resource fragmentation and communication overhead in distributed training. The system has been deployed in multiple AI data center clusters, where it stably supports large-scale intelligent computing workloads. This work provides a practical engineering approach for building high-performance, highly available, AI-native scheduling infrastructure.
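The abstract names several cluster metrics (GAR, SOR, GFR) without giving their formulas. The sketch below shows plausible definitions for two of them, assumed for illustration only; the paper's exact definitions may differ, and the `job_gpu_request` threshold used for fragmentation is a hypothetical parameter.

```python
def gpu_allocation_ratio(allocated_gpus: int, total_gpus: int) -> float:
    """GAR: fraction of the cluster's GPUs currently allocated to jobs."""
    return allocated_gpus / total_gpus

def gpu_node_fragmentation_ratio(free_gpus_per_node, job_gpu_request: int) -> float:
    """GFR (assumed definition): fraction of free GPUs stranded on nodes
    whose remaining capacity is too small to host a typical job request."""
    free = sum(free_gpus_per_node)
    stranded = sum(f for f in free_gpus_per_node if 0 < f < job_gpu_request)
    return stranded / free if free else 0.0
```

Under these assumed definitions, a cluster with many partially occupied nodes can show a high GAR yet also a high GFR, which is exactly the mixed-workload fragmentation problem the paper targets.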
Problem

Research questions and friction points this paper is trying to address.

Addresses resource utilization and scheduling efficiency challenges in large-scale AI clusters
Supports co-scheduling of training and inference jobs in containerized environments
Reduces resource fragmentation and communication overhead in distributed AI workloads
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified scheduling platform for AI container clusters
Leverages Backfill and Enhanced Binpack scheduling strategies
Reduces resource fragmentation and communication overhead
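The combination above can be illustrated with a minimal sketch: best-fit bin packing places each job on the node that leaves the least leftover capacity (reducing fragmentation), while backfill lets later queued jobs run when the head-of-queue job cannot fit. This is a simplified stand-in, not the paper's E-Binpack, whose details are not reproduced here; node capacities and job requests are modeled as plain GPU counts.

```python
def best_fit_node(nodes, request):
    """Index of the node whose free capacity fits `request` most tightly,
    or None if no node can host the job."""
    best, best_left = None, None
    for i, free in enumerate(nodes):
        if free >= request:
            left = free - request
            if best_left is None or left < best_left:
                best, best_left = i, left
    return best

def schedule_with_backfill(nodes, queue):
    """Place queued (job_id, gpu_request) pairs in order; jobs that do not
    fit are skipped so that later, smaller jobs can backfill idle capacity.
    Mutates `nodes` and returns the (job_id, node_index) placements."""
    placed = []
    for job_id, req in queue:
        node = best_fit_node(nodes, req)
        if node is not None:
            nodes[node] -= req
            placed.append((job_id, node))
    return placed
```

For example, with free capacities `[4, 2]` and a queue headed by an 8-GPU job, plain FIFO scheduling would idle the cluster, whereas this backfill pass still places a 2-GPU job behind it.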
Lingling Zeng, ZTE Corporation
Gen Zhang, ZTE Corporation
Jialin Peng, Huaqiao University, China (Image Computing, Machine Learning, Medical Image Analysis)
Xiang Xu, ZTE Corporation
Yuan Xu, ZTE Corporation
Lijun Ma, ZTE Corporation