🤖 AI Summary
In long-context LLM serving, dynamic sparse attention (DSA) reduces computation but keeps the unselected KV cache in HBM, creating a capacity bottleneck that caps batch size and throughput. Existing approaches fail to address the challenges introduced by offloading KV cache to DRAM: fragmented memory access, HBM cache contention, and the high HBM demands of hybrid (mixed prefill-decode) batching. This paper proposes SparseServe, the first HBM-DRAM hierarchical KV cache management framework tailored to DSA. It introduces three key innovations: fragmentation-aware data transfer, working-set-driven dynamic batch sizing, and layer-wise segmented prefilling, backed by GPU-direct host-to-device loading (FlashH2D), CPU-assisted device-to-host saving (FlashD2H), and real-time working-set estimation. Experiments demonstrate up to a 9.26× reduction in mean time-to-first-token latency and up to a 3.14× improvement in token generation throughput, effectively alleviating the HBM capacity constraint.
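To see why capacity, rather than bandwidth, becomes the binding constraint, a back-of-envelope footprint calculation helps. The model configuration below (Llama-3-8B-like: 32 layers, 8 KV heads, head dimension 128, FP16) is an illustrative assumption, not a figure from the paper:

```python
# Back-of-envelope KV cache footprint; model config is an assumption
# (Llama-3-8B-like), not taken from the paper.
layers, kv_heads, head_dim, dtype_bytes = 32, 8, 128, 2  # FP16
per_token = 2 * layers * kv_heads * head_dim * dtype_bytes  # x2 for K and V
print(per_token)                       # 131072 B = 128 KiB per token
print(per_token * 128 * 1024 / 2**30)  # 16.0 GiB for a 128K-token context
```

At roughly 16 GiB of resident KV cache per 128K-token request, an 80 GB GPU holds only a few concurrent requests once model weights are loaded, which is exactly the batch-size ceiling described above.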
📝 Abstract
Serving long-context LLMs is costly because attention computation grows linearly with context length. Dynamic sparse attention algorithms (DSAs) mitigate this by attending only to the key-value (KV) cache of critical tokens. With DSAs, however, the main performance bottleneck shifts from HBM bandwidth to HBM capacity: the KV caches of unselected tokens must remain in HBM for low-latency decoding, constraining the parallel batch size and stalling further throughput gains. Offloading these underutilized KV caches to DRAM could free HBM capacity and allow larger parallel batch sizes. Yet such hierarchical HBM-DRAM storage raises new challenges that prior work leaves unresolved: fragmented KV cache access, HBM cache contention, and the high HBM demands of hybrid batching.
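As a concrete illustration of the fragmented-access problem, the sketch below gathers the scattered DRAM blocks selected by a sparse-attention policy into one contiguous pinned staging buffer and issues a single asynchronous host-to-device copy, instead of one small copy per block. This is a generic staging technique, not the paper's FlashH2D kernel (which the abstract describes as GPU-direct); all names, shapes, and block geometry are assumptions:

```python
import torch

BLOCK_TOKENS, KV_HEADS, HEAD_DIM = 16, 8, 128  # illustrative block geometry

# DRAM-resident KV pool; pinned memory enables fast, async H2D copies.
host_pool = torch.empty(4096, 2, BLOCK_TOKENS, KV_HEADS, HEAD_DIM,
                        dtype=torch.float16, pin_memory=True)

# Reusable contiguous staging buffer, also pinned.
staging = torch.empty(256, 2, BLOCK_TOKENS, KV_HEADS, HEAD_DIM,
                      dtype=torch.float16, pin_memory=True)

def load_selected_blocks(block_ids: torch.Tensor) -> torch.Tensor:
    """Fetch the KV blocks chosen by the sparse-attention policy.

    Rather than issuing one small cudaMemcpy per scattered block, gather
    the blocks into the contiguous staging buffer, then launch a single
    async H2D transfer that overlaps with GPU compute.
    """
    n = block_ids.numel()
    torch.index_select(host_pool, 0, block_ids, out=staging[:n])
    return staging[:n].to("cuda", non_blocking=True)
```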
This paper proposes SparseServe, an LLM serving system that unlocks the parallel potential of DSAs through efficient hierarchical HBM-DRAM management. SparseServe introduces three key innovations to address the challenges mentioned above: (1) fragmentation-aware KV cache transfer, which accelerates HBM-DRAM data movement through GPU-direct loading (FlashH2D) and CPU-assisted saving (FlashD2H); (2) working-set-aware batch size control, which adjusts batch sizes based on real-time working-set estimation to minimize HBM cache thrashing; and (3) layer-segmented prefill, which bounds HBM use during prefill to a single layer, enabling efficient execution even for long prompts. Extensive experimental results demonstrate that SparseServe achieves up to 9.26× lower mean time-to-first-token (TTFT) latency and up to 3.14× higher token generation throughput compared to state-of-the-art LLM serving systems.
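A minimal sketch of innovation (2), under assumed details: here the working set is estimated with an exponential moving average of the distinct KV blocks each request touches per decode step, and the scheduler grows the batch only while the summed estimates fit an HBM budget. The estimator, admission rule, and all names are illustrative assumptions; the paper's real-time estimation may differ:

```python
from dataclasses import dataclass

@dataclass
class Request:
    req_id: int
    working_set_blocks: float = 1.0  # running estimate, in KV blocks

def update_working_set(req: Request, selected_block_ids, alpha: float = 0.2) -> None:
    """EWMA over the distinct KV blocks touched in the latest decode step
    (an assumed estimator, not necessarily the paper's)."""
    touched = len(set(selected_block_ids))
    req.working_set_blocks = (1 - alpha) * req.working_set_blocks + alpha * touched

def admit_batch(waiting: list[Request], hbm_budget_bytes: int,
                block_bytes: int) -> list[Request]:
    """Admit requests only while their combined working sets fit in HBM."""
    batch, used = [], 0
    for req in waiting:
        need = int(req.working_set_blocks) * block_bytes
        if used + need > hbm_budget_bytes:
            break  # defer the rest: a smaller batch avoids HBM cache thrashing
        batch.append(req)
        used += need
    return batch
```

The key design point this illustrates is that batch size tracks the *hot* working set that DSA actually touches, not the full per-request KV cache, so the batch can grow well beyond what full-cache residency would permit.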