PipeBoost: Resilient Pipelined Architecture for Fast Serverless LLM Scaling

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high cold-start latency, GPU resource over-provisioning, and service disruption risks in multi-GPU serverless LLM inference under bursty request workloads, this paper proposes a fault-tolerant pipelined architecture that tightly integrates model loading and inference. It introduces cross-stage pipelining with PCIe-bandwidth-aware dynamic resource orchestration, enabling fine-grained scheduling across memory and compute stages. A multi-GPU collaborative failure recovery mechanism keeps inference services uninterrupted even when failures occur during cold start. Furthermore, LoRA weight sharing and fine-grained scheduling jointly improve GPU utilization and first-token latency. Evaluated on models including OPT-1.3B, the system achieves cold-start latencies as low as a few hundred microseconds and reduces end-to-end inference latency by 31%–49.8% compared to state-of-the-art systems.
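The LoRA weight sharing mentioned above can be illustrated with a minimal sketch (illustrative only, not PipeBoost's actual implementation; all class and variable names here are hypothetical): many serverless inference functions fine-tuned from the same base model keep a single resident copy of the base weights and differ only in their small low-rank adapters.

```python
class BaseModelCache:
    """Keeps one resident copy of each base model, shared across adapters.

    Hypothetical sketch: real systems would load tensors onto the GPU;
    here a plain dict stands in for the loaded weights.
    """

    def __init__(self):
        self._models = {}
        self.loads = 0  # counts how many expensive base-model loads happened

    def get(self, name):
        if name not in self._models:
            self.loads += 1  # simulate the one expensive host->GPU load
            self._models[name] = {"name": name, "weights": object()}
        return self._models[name]


class LoRAFunction:
    """A serverless function = shared base weights + a private LoRA adapter."""

    def __init__(self, cache, base_name, adapter_id):
        self.base = cache.get(base_name)   # shared: loaded at most once
        self.adapter = {"id": adapter_id}  # small, per-function state


cache = BaseModelCache()
f1 = LoRAFunction(cache, "opt-1.3b", "adapter-A")
f2 = LoRAFunction(cache, "opt-1.3b", "adapter-B")

print(f1.base is f2.base)  # both functions reuse the same base weights
print(cache.loads)         # the base model was loaded only once
```

Because the base weights are loaded once and shared, scaling out to a new adapter avoids repeating the dominant cost of a cold start.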

📝 Abstract
This paper presents PipeBoost, a low-latency LLM serving system for multi-GPU (serverless) clusters, which can rapidly launch inference services in response to bursty requests without preemptively over-provisioning GPUs. Many LLM inference tasks rely on the same base model (e.g., LoRA). To leverage this, PipeBoost introduces fault-tolerant pipeline parallelism across both model loading and inference stages. This approach maximizes aggregate PCIe bandwidth and parallel computation across GPUs, enabling faster generation of the first token. PipeBoost also introduces recovery techniques that enable uninterrupted inference services by utilizing the shared advantages of multiple GPUs. Experimental results show that, compared to state-of-the-art low-latency LLM serving systems, PipeBoost reduces inference latency by 31% to 49.8%. For certain models (e.g., OPT-1.3B), PipeBoost achieves cold-start latencies in the range of a few hundred microseconds.
Problem

Research questions and friction points this paper is trying to address.

Enables fast LLM scaling in serverless clusters
Improves fault-tolerant pipeline parallelism for LLM
Reduces inference latency and cold-start time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fault-tolerant pipeline parallelism for model stages
Maximizes PCIe bandwidth and GPU computation
Fast recovery techniques for uninterrupted service
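The cross-stage pipelining idea above can be sketched as follows (a minimal, hedged illustration, not the authors' code: layer counts, timings, and function names are invented, and `time.sleep` stands in for PCIe transfers and GPU compute). The point is that inference begins on already-loaded layers while later layers are still streaming in, so the first token does not wait for the full model load.

```python
import queue
import threading
import time

NUM_LAYERS = 4
LOAD_TIME = 0.05     # simulated per-layer PCIe transfer time (hypothetical)
COMPUTE_TIME = 0.02  # simulated per-layer forward-pass time (hypothetical)


def load_layers(ready: queue.Queue) -> None:
    """Loader stage: stream layers host -> GPU, in order."""
    for i in range(NUM_LAYERS):
        time.sleep(LOAD_TIME)  # pretend DMA over PCIe
        ready.put(i)           # announce that layer i is now resident
    ready.put(None)            # sentinel: loading finished


def pipelined_first_token() -> float:
    """Compute stage consumes layers as soon as the loader delivers them."""
    ready: queue.Queue = queue.Queue()
    start = time.perf_counter()
    threading.Thread(target=load_layers, args=(ready,), daemon=True).start()
    while ready.get() is not None:
        time.sleep(COMPUTE_TIME)  # forward pass through the just-loaded layer
    return time.perf_counter() - start


def sequential_first_token() -> float:
    """Baseline: load the whole model, then run the full forward pass."""
    start = time.perf_counter()
    time.sleep(NUM_LAYERS * LOAD_TIME)     # full model load first
    time.sleep(NUM_LAYERS * COMPUTE_TIME)  # then all layer computations
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"pipelined:  {pipelined_first_token():.3f}s")
    print(f"sequential: {sequential_first_token():.3f}s")
```

In this toy model the pipelined path hides almost all compute behind loading, finishing in roughly `NUM_LAYERS * LOAD_TIME + COMPUTE_TIME` instead of the sequential `NUM_LAYERS * (LOAD_TIME + COMPUTE_TIME)`; PipeBoost additionally spreads the load stage across multiple GPUs to aggregate PCIe bandwidth.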
Chongpeng Liu
Beihang University
Xiaojian Liao
Beihang University
Storage System, AI System
Hancheng Liu
Beihang University
Limin Xiao
FDU
Fiber Optics, Optoelectronics
Jianxin Li
Beihang University