Prism: Unleashing GPU Sharing for Cost-Efficient Multi-LLM Serving

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-LLM serving systems, static GPU memory sharing fails to simultaneously achieve high resource utilization and strict latency SLO guarantees, especially under long-tailed model popularity, highly dynamic workloads, and prolonged idle periods. Method: a dynamic GPU memory sharing framework featuring (i) a cross-model virtual memory page mapping mechanism and (ii) a two-level adaptive scheduling policy, together enabling fine-grained runtime memory reallocation and inter-model resource coordination. Contribution/Results: evaluated on production traces, the framework achieves more than 2× cost savings and 3.3× SLO attainment compared to state-of-the-art systems, overcoming the limitations of static sharing under dynamic load.

📝 Abstract
Serving large language models (LLMs) is expensive, especially for providers hosting many models, making cost reduction essential. The unique workload patterns of serving multiple LLMs (i.e., multi-LLM serving) create new opportunities and challenges for this task. The long-tail popularity of models and their long idle periods present opportunities to improve utilization through GPU sharing. However, existing GPU sharing systems lack the ability to adjust their resource allocation and sharing policies at runtime, making them ineffective at meeting latency service-level objectives (SLOs) under rapidly fluctuating workloads. This paper presents Prism, a multi-LLM serving system that unleashes the full potential of GPU sharing to achieve both cost efficiency and SLO attainment. At its core, Prism tackles a key limitation of existing systems: the lack of cross-model memory coordination, which is essential for flexibly sharing GPU memory across models under dynamic workloads. Prism achieves this with two key designs. First, it supports on-demand memory allocation by dynamically mapping physical to virtual memory pages, allowing flexible memory redistribution among models that space- and time-share a GPU. Second, it improves memory efficiency through a two-level scheduling policy that dynamically adjusts sharing strategies based on models' runtime demands. Evaluations on real-world traces show that Prism achieves more than 2× cost savings and 3.3× SLO attainment compared to state-of-the-art systems.
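The on-demand allocation idea can be illustrated with a small, hypothetical Python sketch (not Prism's implementation, which operates on GPU virtual-memory page mappings): a shared pool of fixed-size physical pages is mapped into, and unmapped from, per-model address spaces as demand shifts, so an idle model's memory can be redistributed to a busy one.

```python
# Illustrative sketch (assumed semantics, not Prism's actual mechanism):
# a shared pool of fixed-size physical pages redistributed on demand
# among models that share a GPU, mimicking page (re)mapping.

class PagePool:
    def __init__(self, num_pages):
        self.free = list(range(num_pages))  # free physical page ids
        self.mapped = {}                    # model -> set of mapped page ids

    def map_pages(self, model, n):
        """Map up to n free physical pages into `model`'s address space."""
        grant = [self.free.pop() for _ in range(min(n, len(self.free)))]
        self.mapped.setdefault(model, set()).update(grant)
        return len(grant)

    def unmap_pages(self, model, n):
        """Return n of `model`'s pages to the shared pool (e.g. on idle)."""
        pages = self.mapped.get(model, set())
        victims = [pages.pop() for _ in range(min(n, len(pages)))]
        self.free.extend(victims)
        return len(victims)

pool = PagePool(num_pages=8)
pool.map_pages("llama-70b", 6)    # busy model takes most pages
pool.map_pages("mistral-7b", 4)   # only 2 pages remain, so partial grant
pool.unmap_pages("llama-70b", 3)  # model goes idle and releases pages
pool.map_pages("mistral-7b", 3)   # freed pages flow to the other model
```

The model names and page counts above are invented for illustration; the key point is that mappings change at runtime rather than being fixed at deployment.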
Problem

Research questions and friction points this paper is trying to address.

Reducing costs in multi-LLM serving through GPU sharing
Addressing runtime resource allocation for fluctuating workloads
Enhancing cross-model memory coordination for dynamic sharing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic GPU memory allocation for multi-LLM sharing
Two-level scheduling for efficient memory utilization
Cross-model memory coordination under dynamic workloads
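A minimal sketch of the two-level idea (assumed semantics, with invented thresholds and policies; the paper's actual algorithm differs): a cluster-level step decides which models are placed together on each GPU, and a per-GPU step chooses between space-sharing and time-sharing based on whether the co-located models' combined demand fits in memory.

```python
# Hypothetical two-level sharing policy sketch (not the paper's exact
# algorithm): the cluster level packs models onto GPUs, and the GPU
# level picks a sharing mode from current aggregate demand.

def gpu_policy(demands, capacity):
    """Per-GPU level: space-share if combined demand fits, else time-share."""
    return "space-share" if sum(demands.values()) <= capacity else "time-share"

def cluster_policy(model_demand, num_gpus, capacity):
    """Cluster level: greedily place each model on the least-loaded GPU."""
    gpus = [{} for _ in range(num_gpus)]
    for model, demand in sorted(model_demand.items(), key=lambda kv: -kv[1]):
        target = min(gpus, key=lambda g: sum(g.values()))
        target[model] = demand
    return [(g, gpu_policy(g, capacity)) for g in gpus]

# Demands and capacity are arbitrary illustrative numbers.
plan = cluster_policy({"a": 30, "b": 50, "c": 10, "d": 45},
                      num_gpus=2, capacity=70)
```

Re-running both levels as demands change is what makes the sharing strategy adaptive rather than fixed at deployment time.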