ZeCO: Zero Communication Overhead Sequence Parallelism for Linear Attention

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sequence parallelism (SP) methods suffer from severe communication overhead when training linear attention models on ultra-long sequences (e.g., >1M tokens). This paper introduces ZeCO, an SP framework with end-to-end near-linear scalability. Its core innovation is the All-Scan collective communication primitive, the first to achieve zero-communication-overhead sequence partitioning and state synchronization, with provably optimal time and space complexity. ZeCO tightly integrates linear attention computation with All-Scan, enabling efficient inter-device state propagation and computational coordination. Experiments demonstrate that ZeCO achieves a 60% speedup over the state-of-the-art method on 256 GPUs with 8M-length sequences; moreover, training a 1M-token sequence on 64 GPUs incurs latency comparable to training a 16K-token sequence on a single GPU, significantly alleviating the distributed training bottleneck.
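The "inter-device state propagation" the summary refers to comes from the chunked form of (non-gated, unnormalized) linear attention, where each chunk's output depends on a carried matrix state. A minimal single-device sketch of that recurrence, with all shapes and names chosen for illustration:

```python
import numpy as np

# Toy causal linear attention, computed chunk by chunk.
# For chunk i with Q_i, K_i, V_i of shape [c, d]:
#   O_i = (Q_i K_i^T * causal_mask) V_i + Q_i S_{i-1}   (intra- + inter-chunk)
#   S_i = S_{i-1} + K_i^T V_i                            (carried state, [d, d])
rng = np.random.default_rng(0)
c, d, n_chunks = 4, 8, 3
Q = rng.standard_normal((n_chunks, c, d))
K = rng.standard_normal((n_chunks, c, d))
V = rng.standard_normal((n_chunks, c, d))

def chunked_linear_attention(Q, K, V):
    d = Q.shape[-1]
    S = np.zeros((d, d))                       # state from earlier chunks
    causal = np.tril(np.ones((Q.shape[1], Q.shape[1])))
    outs = []
    for q, k, v in zip(Q, K, V):
        intra = (q @ k.T * causal) @ v         # within-chunk causal term
        inter = q @ S                          # contribution of all earlier chunks
        outs.append(intra + inter)
        S = S + k.T @ v                        # fold this chunk into the state
    return np.stack(outs), S

O, S_final = chunked_linear_attention(Q, K, V)

# Sanity check against the unchunked causal computation.
Qf, Kf, Vf = (x.reshape(-1, d) for x in (Q, K, V))
mask = np.tril(np.ones((n_chunks * c, n_chunks * c)))
O_ref = (Qf @ Kf.T * mask) @ Vf
assert np.allclose(O.reshape(-1, d), O_ref)
```

Under sequence parallelism, each chunk lives on a different device, so the only cross-device dependency is the running state S; ZeCO's claim is that this dependency can be satisfied with effectively zero communication overhead.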

📝 Abstract
Linear attention mechanisms deliver significant advantages for Large Language Models (LLMs) by providing linear computational complexity, enabling efficient processing of ultra-long sequences (e.g., 1M context). However, existing Sequence Parallelism (SP) methods, essential for distributing these workloads across devices, become the primary bottleneck due to substantial communication overhead. In this paper, we introduce ZeCO (Zero Communication Overhead) sequence parallelism for linear attention models, a new SP method designed to overcome these limitations and achieve end-to-end near-linear scalability for long sequence training. For example, training a model with a 1M sequence length across 64 devices using ZeCO takes roughly the same time as training with a 16K sequence on a single device. At the heart of ZeCO lies All-Scan, a new collective communication primitive. All-Scan provides each SP rank with precisely the initial operator state it requires while maintaining a minimal communication footprint, effectively eliminating communication overhead. Theoretically, we prove the optimality of ZeCO, showing that it introduces only negligible time and space overhead. Empirically, we compare the communication costs of different sequence parallelism strategies and demonstrate that All-Scan achieves the fastest communication in SP scenarios. Specifically, on 256 GPUs with an 8M sequence length, ZeCO achieves a 60% speedup compared to the current state-of-the-art (SOTA) SP method. We believe ZeCO establishes a clear path toward efficiently training next-generation LLMs on previously intractable sequence lengths.
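The abstract says All-Scan "provides each SP rank with precisely the initial operator state it requires". The paper's actual primitive and its ring/pipeline implementation are not spelled out here, but semantically this resembles an exclusive prefix scan over the ranks' local state updates; the sketch below simulates that semantics on one process (all names are illustrative assumptions, not the paper's API):

```python
import numpy as np

# Each SP rank i holds a local state update D_i = K_i^T V_i and needs the
# prefix state S_{i-1} = D_0 + ... + D_{i-1} before processing its chunk.
# That requirement is exactly an exclusive prefix scan across ranks.
def exclusive_scan(local_states):
    """Return, for each rank, the sum of all earlier ranks' states."""
    prefixes, acc = [], np.zeros_like(local_states[0])
    for D in local_states:
        prefixes.append(acc.copy())   # rank i receives S_{i-1}
        acc = acc + D                 # accumulate this rank's update
    return prefixes

rng = np.random.default_rng(1)
states = [rng.standard_normal((4, 4)) for _ in range(8)]
prefixes = exclusive_scan(states)

# Rank 0 starts from the zero state; rank i's prefix sums states[:i].
assert np.allclose(prefixes[0], np.zeros((4, 4)))
assert np.allclose(prefixes[3], sum(states[:3]))
```

In a real distributed setting this scan would be realized as a collective over the interconnect (in the spirit of MPI's `MPI_Exscan`); ZeCO's contribution is making that state synchronization overlap with computation so its cost is effectively hidden.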
Problem

Research questions and friction points this paper is trying to address.

Reducing communication overhead in sequence parallelism
Achieving linear scalability for long sequence training
Enabling efficient training of ultra-long sequence LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero communication overhead sequence parallelism
All-Scan collective communication primitive
Near-linear scalability for long sequences