🤖 AI Summary
Existing GPU sharing approaches force a trade-off: coarse-grained time-division multiplexing causes severe tail-latency spikes for interactive services, while fine-grained spatial sharing requires invasive kernel modifications that compromise behavioral consistency. To overcome this, the authors propose DetShare, a system built around a GPU coroutine abstraction that decouples logical execution contexts from physical resources, enabling transparent, fine-grained, and predictable sharing without any application code changes. DetShare is the first system to simultaneously guarantee semantic determinism (result consistency) and performance determinism (predictable tail latency), leveraging lightweight context migration, workload-aware placement, and a TPOT-First scheduling policy. Experiments show up to 79.2% higher training throughput, 15.1% lower P99 tail latency, 69.1% lower average inference latency, and 21.2% fewer Time-Per-Output-Token (TPOT) SLO violations.
📝 Abstract
GPU sharing is critical for maximizing hardware utilization in modern data centers. However, existing approaches present a stark trade-off: coarse-grained temporal multiplexing incurs severe tail-latency spikes for interactive services, while fine-grained spatial partitioning often necessitates invasive kernel modifications that compromise behavioral equivalence.
We present DetShare, a novel GPU sharing system that prioritizes determinism and transparency. DetShare ensures semantic determinism (unmodified kernels yield identical results) and performance determinism (predictable tail latency), all while maintaining complete transparency (zero code modification). DetShare introduces GPU coroutines, a new abstraction that decouples logical execution contexts from physical GPU resources. This decoupling enables flexible, fine-grained resource allocation via lightweight context migration.
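The decoupling described above can be pictured with a minimal model: logical contexts carry a resource quota but no fixed GPU binding, so the runtime can migrate them when capacity shifts. This is an illustrative sketch only; the class and function names (`GPUCoroutine`, `PhysicalGPU`, `migrate`, `sm_quota`) are hypothetical and not DetShare's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class GPUCoroutine:
    """Logical execution context, decoupled from any physical GPU.
    (Hypothetical model; names are illustrative, not DetShare's API.)"""
    name: str
    sm_quota: int  # share of streaming multiprocessors this context needs

@dataclass
class PhysicalGPU:
    gpu_id: int
    total_sms: int
    contexts: list = field(default_factory=list)

    def free_sms(self) -> int:
        """Capacity not yet claimed by resident logical contexts."""
        return self.total_sms - sum(c.sm_quota for c in self.contexts)

def migrate(ctx: GPUCoroutine, src: PhysicalGPU, dst: PhysicalGPU) -> bool:
    """Lightweight context migration: rebind a logical context to another
    GPU when the destination has spare capacity. Returns True on success."""
    if ctx in src.contexts and dst.free_sms() >= ctx.sm_quota:
        src.contexts.remove(ctx)
        dst.contexts.append(ctx)
        return True
    return False
```

Because the context holds no physical binding of its own, placement decisions reduce to moving entries between `contexts` lists, which is what makes fine-grained reallocation cheap in this toy model.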
Our evaluation demonstrates that DetShare improves training throughput by up to 79.2% compared to temporal sharing. In co-location scenarios, it outperforms state-of-the-art baselines, reducing P99 tail latency by 15.1% without compromising throughput. Furthermore, through workload-aware placement and our TPOT-First scheduling policy, DetShare decreases average inference latency by 69.1% and reduces Time-Per-Output-Token (TPOT) SLO violations by 21.2% relative to default policies.
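A TPOT-First policy, as the name suggests, prioritizes the request closest to missing its per-token latency budget. The sketch below shows one plausible reading of that idea, ordering pending decode steps by next-token deadline (time of the last emitted token plus the TPOT SLO); the function name and request fields are assumptions for illustration, not the paper's exact policy.

```python
def tpot_first_order(requests: list[dict], now: float) -> list[dict]:
    """Order inference requests by urgency under a TPOT SLO.

    Each request dict carries:
      last_token_at: timestamp of its most recent output token
      tpot_slo:      per-token latency budget (seconds)
    The request whose next-token deadline is nearest (smallest slack
    relative to `now`) is scheduled first. Illustrative sketch only.
    """
    def slack(r: dict) -> float:
        deadline = r["last_token_at"] + r["tpot_slo"]
        return deadline - now  # negative slack = already violating the SLO
    return sorted(requests, key=slack)
```

Under this ordering, a request with a tight TPOT budget can preempt one that started earlier but still has slack, which is how such a policy would trade a little average latency for fewer SLO violations.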