🤖 AI Summary
This work addresses the high communication overhead and inefficient state caching that hinder the multi-GPU deployment of selective state space models (SSMs). It presents the first efficient tensor parallelism scheme tailored for selective SSMs, significantly reducing synchronization costs through optimized parameter sharding, improved locality of state caching, and the introduction of quantized AllReduce. The proposed method achieves 1.6–4.0× higher batch throughput for models such as Mamba on 2–4 GPUs, with even more pronounced gains in long-context scenarios. Further throughput improvements of 10–18% are obtained by incorporating quantized AllReduce, demonstrating the effectiveness of the approach in scaling selective SSMs across multiple devices.
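The key property behind "improved locality of state caching" is that a selective SSM's diagonal recurrence is elementwise per channel, so slicing channels across GPUs keeps every recurrent update local. A minimal sketch of that idea (all names here — `ssm_scan`, `n_gpu`, the strided sharding — are illustrative, not from the paper):

```python
import numpy as np

def ssm_scan(a, b, x, h0):
    """Elementwise diagonal recurrence h_t = a_t * h_{t-1} + b_t * x_t.

    a, b, x have shape (T, C); each channel evolves independently,
    so a channel-sliced shard needs no cross-device communication
    inside the scan.
    """
    h = h0.copy()
    ys = []
    for t in range(a.shape[0]):
        h = a[t] * h + b[t] * x[t]
        ys.append(h.copy())
    return np.stack(ys)

rng = np.random.default_rng(0)
T, C, n_gpu = 8, 16, 4
a = rng.uniform(0.5, 0.9, (T, C))
b = rng.normal(size=(T, C))
x = rng.normal(size=(T, C))
h0 = np.zeros(C)

# Single-device reference scan.
full = ssm_scan(a, b, x, h0)

# Channel-sharded scans: each shard is entirely local to one "GPU".
shards = [ssm_scan(a[:, s::n_gpu], b[:, s::n_gpu], x[:, s::n_gpu], h0[s::n_gpu])
          for s in range(n_gpu)]

# Reassembling the shards reproduces the full scan exactly.
merged = np.empty_like(full)
for s in range(n_gpu):
    merged[:, s::n_gpu] = shards[s]
assert np.allclose(full, merged)
```

Under this sharding, synchronization is only needed where sharded partial outputs are recombined (e.g. after the output projection), not at every time step of the recurrence.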
📝 Abstract
Selective state space models (SSMs) have rapidly become a compelling backbone for large language models, especially for long-context workloads. Yet in deployment, their inference performance is often bounded by the memory capacity, bandwidth, and latency limits of a single GPU, making multi-GPU execution increasingly necessary. Although tensor parallelism (TP) is widely used to scale Transformer inference, applying it to selective SSM blocks is non-trivial: the SSM mixer couples large projections with a sequence-wise recurrent state update and local mixing whose efficiency depends on preserving locality and keeping synchronization off the critical path. This paper presents a communication-efficient TP design for selective SSM inference that addresses three practical engineering challenges: improving time-to-first-token (TTFT) via an SSM state cache shared across prefill and decode, partitioning the mixer's packed parameter tensor so that recurrent updates remain local while communication is minimized, and reducing TP aggregation overhead with quantized AllReduce. We evaluate on three representative SSM-based LLMs spanning pure-SSM and hybrid architectures (Mamba, Falcon-Mamba, and Zamba) on NVIDIA A6000 and A100 clusters. Our experiments show substantial gains from tensor-parallel SSM inference: batch-request throughput improves by ~1.6–2.1× on 2 GPUs and ~2.6–4.0× on 4 GPUs for Mamba, with the largest benefits at long context lengths, and quantized AllReduce adds a further ~10–18% by lowering synchronization bandwidth overhead.
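The quantized AllReduce idea can be sketched as follows: each rank quantizes its fp32 partial output to int8 with a per-rank scale before the reduction, cutting the bytes on the wire roughly 4×, and the sum is dequantized afterwards. This is a hypothetical simulation over in-process arrays (the function names and the per-tensor scaling scheme are assumptions, not the paper's exact method); a real deployment would use a collective library such as NCCL or `torch.distributed`:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: returns (int8 values, scale)."""
    scale = float(np.abs(x).max()) / 127.0 or 1.0  # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def quantized_allreduce(partials):
    """Simulated all-reduce: sum per-rank partials after int8 quantization.

    Each rank would transmit only the int8 payload plus one fp32 scale,
    ~1/4 the bandwidth of an fp32 all-reduce.
    """
    out = np.zeros_like(partials[0], dtype=np.float32)
    for p in partials:
        q, scale = quantize_int8(p)
        out += q.astype(np.float32) * scale  # dequantize, then reduce
    return out

rng = np.random.default_rng(1)
partials = [rng.normal(size=1024).astype(np.float32) for _ in range(4)]
exact = np.sum(partials, axis=0)
approx = quantized_allreduce(partials)
err = float(np.abs(exact - approx).max())  # small quantization error
```

The trade-off is a bounded quantization error per rank (at most half a quantization step, i.e. `scale / 2`) in exchange for the lower synchronization bandwidth that the abstract's ~10–18% throughput gain is attributed to.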