🤖 AI Summary
This work addresses the severe performance and responsiveness degradation that occurs when multiple large-scale machine learning applications run concurrently on consumer-grade GPUs: because each model's working set nearly fills GPU memory, existing sharing mechanisms such as unified virtual memory suffer from memory thrashing and excessive use of CPU pinned memory. To tackle this, the authors propose Nixie, a system that enables transparent temporal multiplexing on commodity GPUs without requiring modifications to applications or drivers. Nixie achieves this through coordinated scheduling of GPU memory allocation and kernel launches, augmented by a lightweight MLFQ-inspired scheduler that prioritizes latency-sensitive interactive tasks. Experimental evaluation in real-world code-completion scenarios demonstrates up to a 3.8× reduction in latency and, at equivalent latency, up to a 66.8% reduction in CPU pinned memory usage.
📝 Abstract
Consumer machines are increasingly running large ML workloads such as large language models (LLMs), text-to-image generation, and interactive image editing. Unlike datacenter GPUs, consumer GPUs serve single-user, rapidly changing workloads, and each model's working set often nearly fills the GPU memory. As a result, existing sharing mechanisms (e.g., NVIDIA Unified Virtual Memory) perform poorly due to memory thrashing and excessive use of CPU pinned memory when multiple applications are active. We design and implement Nixie, a system that enables efficient and transparent temporal multiplexing on consumer GPUs without requiring any application or driver changes. Nixie is a system service that coordinates GPU memory allocation and kernel launch behavior to efficiently utilize the bidirectional CPU-GPU bandwidth and CPU pinned memory. A lightweight scheduler in Nixie further improves responsiveness by automatically prioritizing latency-sensitive interactive jobs using MLFQ-inspired techniques. Our evaluations show that Nixie reduces the latency of real interactive code-completion tasks by up to $3.8\times$ and reduces CPU pinned memory usage by up to 66.8% under the same latency requirement.
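To give a rough intuition for the MLFQ-inspired prioritization the abstract mentions, the sketch below shows a minimal multi-level feedback queue in Python. This is a generic illustration of the classic MLFQ idea, not Nixie's actual scheduler: the class name, level count, and demotion rule are all assumptions. The key property is that newly submitted jobs start at the highest priority, and a job that exhausts its full time slice (typical of long-running batch work) is demoted, so short interactive jobs such as code completion naturally stay at high priority.

```python
from collections import deque

class MLFQScheduler:
    """Minimal MLFQ sketch (hypothetical; not Nixie's implementation).

    New jobs enter the highest-priority queue. A job that uses its full
    time slice is demoted one level, so long-running batch jobs sink
    while short interactive jobs keep running at high priority.
    """

    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def submit(self, job):
        # New arrivals are assumed interactive until proven otherwise.
        self.queues[0].append(job)

    def pick_next(self):
        # Always serve the highest-priority non-empty queue first.
        for level, queue in enumerate(self.queues):
            if queue:
                return queue.popleft(), level
        return None

    def used_full_slice(self, job, level):
        # The job exhausted its quantum: demote it (capped at the
        # lowest level) so it yields to interactive work.
        next_level = min(level + 1, len(self.queues) - 1)
        self.queues[next_level].append(job)
```

A demoted batch job is only scheduled again once all higher-priority queues are empty, which is the mechanism that keeps interactive latency low under contention.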