🤖 AI Summary
To address high cold-start latency, GPU over-provisioning, and service-disruption risk in multi-GPU serverless LLM inference under bursty workloads, this paper proposes a fault-tolerant pipelined architecture that tightly couples model loading with inference. It introduces cross-stage pipelining with PCIe-bandwidth-aware dynamic resource orchestration, enabling fine-grained scheduling across the memory-transfer and compute stages. A multi-GPU collaborative failure-recovery mechanism keeps cold starts free of service interruption, while LoRA weight sharing and fine-grained scheduling jointly improve GPU utilization and reduce time-to-first-token. Evaluated on models including OPT-1.3B, the system achieves cold-start latencies as low as a few hundred microseconds and reduces end-to-end inference latency by 31%–49.8% compared to state-of-the-art systems.
📝 Abstract
This paper presents PipeBoost, a low-latency LLM serving system for multi-GPU (serverless) clusters that can rapidly launch inference services in response to bursty requests without preemptively over-provisioning GPUs. Many LLM inference tasks rely on the same base model (e.g., LoRA adapters sharing a common backbone). To leverage this, PipeBoost introduces fault-tolerant pipeline parallelism across both the model-loading and inference stages. This approach maximizes aggregate PCIe bandwidth and parallel computation across GPUs, enabling faster generation of the first token. PipeBoost also introduces recovery techniques that keep inference services uninterrupted by exploiting the state shared across multiple GPUs. Experimental results show that, compared to state-of-the-art low-latency LLM serving systems, PipeBoost reduces inference latency by 31% to 49.8%. For certain models (e.g., OPT-1.3B), PipeBoost achieves cold-start latencies of a few hundred microseconds.
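The core idea, overlapping per-layer weight loading with inference on already-loaded layers instead of waiting for the full model, can be illustrated with a minimal sketch. This is our hypothetical simulation (placeholder strings stand in for weight tensors and activations; `loader`, `compute`, and the layer count are our illustration), not PipeBoost's actual implementation:

```python
import threading
import queue

NUM_LAYERS = 4  # hypothetical small model for illustration

def loader(load_q):
    # Simulate streaming each transformer layer's weights host -> GPU over PCIe.
    for layer in range(NUM_LAYERS):
        weights = f"weights_layer_{layer}"  # placeholder for real tensors
        load_q.put((layer, weights))
    load_q.put(None)  # sentinel: all layers transferred

def compute(load_q, log):
    # Inference begins on layer 0 as soon as its weights arrive, while the
    # loader is still transferring later layers (cross-stage pipelining).
    activations = "input_tokens"
    while True:
        item = load_q.get()
        if item is None:
            break
        layer, _weights = item
        activations = f"out({layer})"  # placeholder forward pass
        log.append(("compute", layer))

log = []
q = queue.Queue(maxsize=2)  # bounded buffer caps in-flight GPU memory
t = threading.Thread(target=loader, args=(q,))
t.start()
compute(q, log)
t.join()
```

In a real system the two stages run on the PCIe DMA engine and the GPU compute streams respectively, so the overlap hides load latency behind computation; the bounded queue models keeping only a few layers' weights in flight.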