🤖 AI Summary
Existing distributed GPU inference schedulers make decisions solely from the instantaneous system state, neglecting how task demand and resource availability evolve over time; this leads to low GPU utilization, high task migration overhead, and sluggish responsiveness under dynamic workloads. This paper proposes TORTA (Temporal Optimal Resource scheduling via Two-layer Architecture), a spatiotemporal two-layer scheduling framework: a macro-level scheduler combines long-horizon workload awareness with reinforcement learning and optimal transport to distribute tasks across regions, while a micro-level allocator refines task-to-server assignments within each region under short-horizon execution constraints. TORTA coordinates resources across heterogeneous network topologies, keeping migration costs low while improving responsiveness and load balancing. Experiments show that TORTA reduces average inference latency by up to 15%, improves load balance by roughly 4–5%, and cuts total operational cost by 10–20%, outperforming state-of-the-art approaches.
📝 Abstract
The rapid growth of large language model (LLM) services imposes increasing demands on distributed GPU inference infrastructure. Most existing scheduling systems rely on the current system state to make decisions, without considering how task demand and resource availability evolve over time. This lack of temporal awareness leads to inefficient GPU utilization, high task migration overhead, and poor system responsiveness under dynamic workloads. In this work, we identify the fundamental limitations of these instantaneous-state-only scheduling approaches and propose Temporal Optimal Resource scheduling via Two-layer Architecture (TORTA). TORTA introduces a spatiotemporal scheduling framework that captures both long-term workload patterns and short-term execution constraints. It adopts a two-layer design: a macro-level scheduler leverages reinforcement learning and optimal transport to coordinate inter-region task distribution, while a micro-level allocator refines task-to-server assignments within each region to reduce latency and switching costs. Experimental results across multiple network topologies show that TORTA reduces average inference response time by up to 15%, improves load balance by approximately 4–5%, and cuts total operational cost by 10–20% compared to state-of-the-art baseline methods.
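To make the macro layer's optimal-transport idea concrete, here is a minimal, self-contained sketch of the underlying problem: move inference tasks from regions with spare GPU capacity to regions with demand while minimizing a cross-region cost (e.g., network latency). This is an illustration of the discrete optimal transport formulation only, not TORTA's actual solver; the instance sizes, cost values, and the brute-force search are invented for clarity (real systems would use an LP or Sinkhorn-style solver).

```python
from itertools import product

def min_cost_transport(supply, demand, cost):
    """Brute-force discrete optimal transport for tiny integer instances:
    ship supply[i] units from each source region i to meet demand[j] at
    each sink region j, minimizing total transport cost."""
    n, m = len(supply), len(demand)
    # For each source, enumerate every way to split its units across sinks.
    rows = []
    for s in supply:
        rows.append([r for r in product(range(s + 1), repeat=m) if sum(r) == s])
    best_plan, best_cost = None, float("inf")
    for plan in product(*rows):
        # Keep only plans whose columns meet each sink's demand exactly.
        if all(sum(plan[i][j] for i in range(n)) == demand[j] for j in range(m)):
            total = sum(plan[i][j] * cost[i][j]
                        for i in range(n) for j in range(m))
            if total < best_cost:
                best_plan, best_cost = plan, total
    return best_plan, best_cost

# Hypothetical instance: two regions with free GPUs, two demand hotspots.
# cost[i][j] stands in for cross-region latency (illustrative numbers).
supply = [3, 2]          # spare GPUs per source region
demand = [2, 3]          # pending requests per target region
cost = [[1, 4],          # region 0 is cheap to serve hotspot 0
        [3, 1]]          # region 1 is cheap to serve hotspot 1
plan, total = min_cost_transport(supply, demand, cost)
print(plan, total)       # optimal plan keeps traffic on the cheap edges
```

The optimal plan sends both cheap-edge flows at full capacity and routes only the unavoidable remainder across expensive edges, which is exactly the behavior the paper attributes to transport-based inter-region scheduling: balancing load without paying unnecessary migration cost.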