Nexus: Taming Throughput-Latency Tradeoff in LLM Serving via Efficient GPU Sharing

📅 2025-07-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language model (LLM) serving faces a fundamental trade-off between throughput and latency, largely due to resource contention between the compute-bound prefill phase and the memory-bound decode phase. Method: This paper proposes the first intra-GPU framework that dynamically decouples prefill and decode resources, eliminating inter-GPU communication overhead. Leveraging an analysis of chunked prefill and the diminishing marginal returns of GPU resources, it introduces a real-time, fine-grained scheduling policy anchored at the resource saturation point to minimize interference in mixed workloads. Contribution/Results: Experiments show 2.2× higher throughput, 20× lower time-to-first-token (TTFT) latency, and 2.5× faster per-token generation versus vLLM; the system achieves superior performance using only half the GPUs required by vLLM's multi-GPU disaggregation strategy, and it also outperforms SGLang. The core innovation is the first adaptive, phase-aware resource partitioning within a single GPU, enabling efficient co-scheduling without cross-GPU synchronization or request-level interference.

📝 Abstract
Current prefill-decode (PD) disaggregation is typically deployed at the level of entire serving engines, assigning separate GPUs to handle prefill and decode phases. While effective at reducing latency, this approach demands more hardware. To improve GPU utilization, Chunked Prefill mixes prefill and decode requests within the same batch, but introduces phase interference between prefill and decode. While existing PD disaggregation solutions separate the phases across GPUs, we ask: can the same decoupling be achieved within a single serving engine? The key challenge lies in managing the conflicting resource requirements of prefill and decode when they share the same hardware. In this paper, we first show that chunked prefill requests cause interference with decode requests due to their distinct requirements for GPU resources. Second, we find that GPU resources exhibit diminishing returns. Beyond a saturation point, increasing GPU allocation yields negligible latency improvements. This insight enables us to split a single GPU's resources and dynamically allocate them to prefill and decode on the fly, effectively disaggregating the two phases within the same GPU. Across a range of models and workloads, our system Nexus achieves up to 2.2x higher throughput, 20x lower TTFT, and 2.5x lower TBT than vLLM. It also outperforms SGLang with up to 2x higher throughput, 2x lower TTFT, and 1.7x lower TBT, and achieves 1.4x higher throughput than vLLM-disaggregation using only half the number of GPUs.
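The saturation-point idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy latency curves, the 5% marginal-improvement threshold, and the policy of giving leftover units to prefill are all assumptions made for the example.

```python
def saturation_point(latency, total_units, eps=0.05):
    """Smallest allocation beyond which one more resource unit
    improves latency by less than `eps` (relative). Illustrative."""
    for units in range(1, total_units):
        cur, nxt = latency(units), latency(units + 1)
        if (cur - nxt) / cur < eps:
            return units
    return total_units

def partition(prefill_latency, decode_latency, total_units):
    """Give each phase its saturation point when both fit;
    otherwise scale both allocations down proportionally."""
    p = saturation_point(prefill_latency, total_units)
    d = saturation_point(decode_latency, total_units)
    if p + d <= total_units:
        # Both phases saturate; spare units go to prefill (assumption).
        return p + (total_units - p - d), d
    # Contended: shrink both shares proportionally.
    scale = total_units / (p + d)
    p_alloc = max(1, int(p * scale))
    return p_alloc, total_units - p_alloc

# Toy latency models (made up): prefill is compute-bound and keeps
# improving with more units; decode is memory-bound and saturates early.
prefill = lambda u: 100.0 / u
decode = lambda u: 20.0 / min(u, 8) + 5.0
```

With these toy curves and 100 resource units, decode saturates after a handful of units while prefill absorbs the rest, mirroring the paper's observation that allocating beyond the saturation point yields negligible latency gains.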
Problem

Research questions and friction points this paper is trying to address.

Balancing the throughput-latency tradeoff in LLM serving
Reducing GPU resource contention between the prefill and decode phases
Improving GPU utilization without additional hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic GPU resource allocation for prefill and decode
Efficient GPU sharing to reduce hardware demand
Chunked Prefill with minimized phase interference