AI Summary
To address the high cold-start overhead, limited GPU concurrency, and the queueing delays and resource unfairness caused by dynamic, heterogeneous workloads in serverless GPU function services, this paper proposes MQFQ-Sticky, a novel scheduling mechanism. It integrates fair queuing with GPU memory reuse and incorporates predictive and locality-aware principles from I/O scheduling, enabling black-box function deployment without code modification. By combining container sandboxing, sticky scheduling, and weight-based fair queuing, MQFQ-Sticky achieves efficient and isolated GPU resource sharing. Experimental evaluation demonstrates that, compared to state-of-the-art CPU/GPU queueing policies, MQFQ-Sticky reduces end-to-end function latency by 2–20×, significantly improves throughput, and substantially enhances fairness in GPU resource allocation.
Abstract
Hardware accelerators like GPUs are now ubiquitous in data centers, but are not fully supported by common cloud abstractions such as Functions as a Service (FaaS). Many popular and emerging FaaS applications such as machine learning and scientific computing can benefit from GPU acceleration. However, FaaS frameworks (such as OpenWhisk) are not capable of providing this acceleration because of the impedance mismatch between GPUs and the FaaS programming model, which requires virtualization and sandboxing of each function. The challenges are amplified due to the highly dynamic and heterogeneous FaaS workloads. This paper presents the design and implementation of a FaaS system for providing GPU acceleration in a black-box manner (without modifying function code). Running small functions in containerized sandboxes is challenging due to limited GPU concurrency and high cold-start overheads, resulting in heavy queueing of function invocations. We show how principles from I/O scheduling, such as fair queuing and anticipatory scheduling, can be translated to function scheduling on GPUs. We develop MQFQ-Sticky, an integrated fair queueing and GPU memory management approach, which balances the tradeoffs between locality, fairness, and latency. Empirical evaluation on a range of workloads shows that it reduces function latency by 2x to 20x compared to existing GPU and CPU queueing policies.
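The core idea, fair queuing per function combined with "sticky" preference for functions already warm in GPU memory, can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's actual implementation: the class names, the single `sticky_bonus` knob, and the unit service charge are all illustrative assumptions.

```python
class FlowQueue:
    """One queue per function, as in multi-queue fair queuing (MQFQ)."""
    def __init__(self, fn_name, weight=1.0):
        self.fn_name = fn_name
        self.weight = weight        # fair-queuing weight
        self.vtime = 0.0            # virtual time accrued by this flow
        self.invocations = []       # pending invocations (FIFO)

class StickyFairScheduler:
    """Illustrative MQFQ-Sticky-style dispatcher: pick the next invocation
    by balancing fairness (lowest virtual time first) against locality
    (prefer functions whose state is already resident on the GPU)."""
    def __init__(self, sticky_bonus=0.5):
        self.queues = {}            # fn_name -> FlowQueue
        self.warm_on_gpu = set()    # functions currently warm in GPU memory
        self.sticky_bonus = sticky_bonus

    def submit(self, fn_name, invocation, weight=1.0):
        q = self.queues.setdefault(fn_name, FlowQueue(fn_name, weight))
        q.invocations.append(invocation)

    def dispatch(self):
        """Return (fn_name, invocation) for the next invocation, or None."""
        best, best_key = None, None
        for q in self.queues.values():
            if not q.invocations:
                continue
            # Warm functions get a discount on their effective virtual
            # time, trading a bounded amount of fairness for locality
            # (avoiding cold starts and GPU memory reloads).
            key = q.vtime - (self.sticky_bonus
                             if q.fn_name in self.warm_on_gpu else 0.0)
            if best is None or key < best_key:
                best, best_key = q, key
        if best is None:
            return None
        inv = best.invocations.pop(0)
        best.vtime += 1.0 / best.weight   # charge the flow for one service unit
        self.warm_on_gpu.add(best.fn_name)
        return (best.fn_name, inv)
```

With `sticky_bonus = 0`, this degenerates to plain weighted fair queuing; a larger bonus keeps dispatching a warm function longer before yielding to cold ones, which is the locality/fairness/latency tradeoff the paper's evaluation explores.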