Towards Swift Serverless LLM Cold Starts with ParaServe

📅 2025-02-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address high cold-start latency and SLO violations in serverless LLM inference caused by slow model loading, this paper proposes ParaServe. Methodologically, ParaServe introduces three key innovations: (1) a distributed parameter-sharding mechanism that concurrently fetches shards across GPU servers via pipeline parallelism; (2) a two-level coordination strategy—cluster-level, bandwidth-aware parallelism tuning coupled with worker-level overlapping of model fetching, loading, and runtime initialization; and (3) pipeline consolidation, which merges parallel groups back into individual workers to accelerate cold starts while preserving warm-request performance. Experimental evaluation demonstrates that ParaServe reduces cold-start latency by up to 4.7× and improves SLO attainment by up to 1.74× compared to state-of-the-art serverless LLM serving systems, establishing significant gains in both responsiveness and reliability.
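The core insight above — that sharding a model across several GPU servers lets their aggregated network bandwidth fetch it concurrently — can be illustrated with a minimal sketch. All function names and numbers here are hypothetical for illustration; they are not ParaServe's actual API or measurements.

```python
# Hypothetical sketch: fetch disjoint model shards over several servers'
# NICs at once, so aggregate bandwidth scales with the parallelism degree.
from concurrent.futures import ThreadPoolExecutor

def fetch_shard(shard_id, num_shards, model_size_gb, bw_gbps_per_server):
    """Simulate fetching one shard; returns (shard_id, seconds taken)."""
    shard_gb = model_size_gb / num_shards
    return shard_id, shard_gb * 8 / bw_gbps_per_server  # GB -> Gb, then / Gbps

def cold_start_fetch_time(model_size_gb, parallelism, bw_gbps_per_server):
    """Workers fetch their shards concurrently; the slowest shard
    bounds the fetch phase of the cold start."""
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        times = pool.map(
            lambda i: fetch_shard(i, parallelism, model_size_gb,
                                  bw_gbps_per_server),
            range(parallelism),
        )
        return max(t for _, t in times)

# A 26 GB model with 25 Gbps per server: 1 worker vs. 4 workers.
print(cold_start_fetch_time(26, 1, 25))  # ~8.32 s
print(cold_start_fetch_time(26, 4, 25))  # ~2.08 s
```

With four workers the fetch phase shrinks roughly fourfold, which is the mechanism behind the reported cold-start speedups.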

📝 Abstract
With the surge in number of large language models (LLMs), the industry turns to serverless computing for LLM inference serving. However, serverless LLM serving suffers from significant cold start latency and service level objective (SLO) violations due to the substantial model size, which leads to prolonged model fetching time from remote storage. We present ParaServe, a serverless LLM serving system that minimizes cold start latency through the novel use of pipeline parallelism. Our insight is that by distributing model parameters across multiple GPU servers, we can utilize their aggregated network bandwidth to concurrently fetch different parts of the model. ParaServe adopts a two-level hierarchical design. At the cluster level, ParaServe determines the optimal degree of parallelism based on user SLOs and carefully places GPU workers across servers to reduce network interference. At the worker level, ParaServe overlaps model fetching, loading, and runtime initialization to further accelerate cold starts. Additionally, ParaServe introduces pipeline consolidation, which merges parallel groups back to individual workers to maintain optimal performance for warm requests. Our comprehensive evaluations under diverse settings demonstrate that ParaServe reduces the cold start latency by up to 4.7x and improves SLO attainment by up to 1.74x compared to baselines.
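The cluster-level decision the abstract describes — choosing the degree of parallelism from the user's SLO — can be sketched as picking the smallest degree whose estimated cold start fits the latency budget. The cost model and names below are assumptions for illustration, not ParaServe's actual formulas.

```python
# Hypothetical cost model: fetch time shrinks with parallelism, while a
# fixed term stands in for runtime initialization that does not parallelize.
def estimate_cold_start_s(model_size_gb, parallelism, bw_gbps,
                          fixed_overhead_s=2.0):
    fetch_s = (model_size_gb / parallelism) * 8 / bw_gbps
    return fetch_s + fixed_overhead_s

def choose_parallelism(model_size_gb, slo_s, bw_gbps, max_degree=8):
    """Smallest degree meeting the SLO; None if even max_degree misses it."""
    for p in range(1, max_degree + 1):
        if estimate_cold_start_s(model_size_gb, p, bw_gbps) <= slo_s:
            return p
    return None

# A 26 GB model, 5 s SLO, 25 Gbps per server.
print(choose_parallelism(26, 5.0, 25))  # prints 3
```

Preferring the smallest sufficient degree keeps GPU-server usage low while still meeting the SLO, matching the abstract's framing of SLO-driven parallelism tuning.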
Problem

Research questions and friction points this paper is trying to address.

High cold start latency in serverless LLM inference
Prolonged model fetching time from remote storage
Service level objective (SLO) violations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pipeline parallelism aggregates network bandwidth to cut cold start latency
Two-level hierarchical design tunes parallelism degree and GPU worker placement
Pipeline consolidation merges parallel groups to preserve warm-request performance
🔎 Similar Papers
No similar papers found.