DynaServe: Unified and Elastic Tandem-Style Execution for Dynamic Disaggregated LLM Serving

📅 2025-04-12
📈 Citations: 0
Influential: 0
📝 Abstract
Modern large language model (LLM) serving must efficiently handle highly dynamic workloads, where prompt and response lengths vary significantly across requests. Existing systems typically adopt either colocated execution, where prefill and decode stages share the same GPU for high throughput, or disaggregated execution, which decouples the two stages and assigns their tasks to dedicated GPUs to avoid interference. However, both paradigms face critical limitations: colocation suffers from resource contention and prolonged tail latency, whereas disaggregation wastes resources when prefill or decode GPUs are not fully occupied. To address these limitations, we introduce DynaServe, a unified LLM serving framework based on the Tandem Serving model. Under this model, DynaServe elastically decomposes each request into two virtual sub-requests that are collaboratively processed by a pair of GPU instances. The Lead GPU handles the initial prompt and early generation, while the Follow GPU completes decoding, enabling dynamic load balancing, fine-grained batching, and coherent execution across distributed resources. By coordinating computation and memory across the cluster, DynaServe adapts to diverse and bursty workloads while maintaining stringent latency service-level objectives (SLOs). Evaluations on real-world traces show that DynaServe improves end-to-end serving capacity by up to 1.23$\times$, increases overall goodput by 1.15$\times$ to 4.34$\times$, and improves memory utilization by up to 49% compared to state-of-the-art colocated and disaggregated systems.
Problem

Research questions and friction points this paper is trying to address.

Optimizes dynamic LLM workloads with varying prompt and response lengths
Resolves resource contention in colocated execution and waste in disaggregated systems
Enables elastic load balancing and fine-grained batching across distributed GPUs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified tandem serving model with elastic request decomposition
Lead and Follow GPUs for dynamic load balancing and fine-grained batching
Coordinated computation and memory across cluster for diverse workloads
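The elastic decomposition described above can be sketched in a few lines: each request is split into a Lead sub-request (full prefill plus an early slice of decoding) and a Follow sub-request (the remaining decode steps). This is a minimal illustrative sketch, not the paper's implementation; the class names and the `split_ratio` knob are assumptions standing in for whatever scheduler policy DynaServe actually uses to pick the split point.

```python
from dataclasses import dataclass

@dataclass
class Request:
    req_id: str
    prompt_len: int        # tokens to prefill
    max_new_tokens: int    # decode budget

@dataclass
class SubRequest:
    req_id: str
    role: str              # "lead" or "follow"
    prefill_tokens: int
    decode_tokens: int

def split_request(req: Request, split_ratio: float) -> tuple[SubRequest, SubRequest]:
    """Decompose a request into two virtual sub-requests (illustrative).

    The Lead sub-request covers the full prefill plus the first
    `split_ratio` fraction of decode steps; the Follow sub-request
    finishes the rest. In a real system, `split_ratio` would be the
    elastic knob a scheduler tunes per request to balance GPU load.
    """
    lead_decode = int(req.max_new_tokens * split_ratio)
    lead = SubRequest(req.req_id, "lead", req.prompt_len, lead_decode)
    follow = SubRequest(req.req_id, "follow", 0, req.max_new_tokens - lead_decode)
    return lead, follow

# Example: a 512-token prompt with a 100-token decode budget,
# split so the Lead GPU performs 30% of the decoding.
lead, follow = split_request(Request("r1", prompt_len=512, max_new_tokens=100), 0.3)
```

A split ratio near 1.0 degenerates to colocated execution (one GPU does everything), while a ratio near 0.0 approximates classic prefill/decode disaggregation; intermediate values are what let the scheduler absorb bursty, length-skewed workloads.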