TASP: Topology-aware Sequence Parallelism

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Ring Attention suffers from suboptimal communication efficiency in long-context large language models due to a mismatch between its ring-based AllGather primitive and the All-to-All topology native to modern accelerators. To address this, we propose TASP, a topology-aware sequence parallelism method. Our core innovation is the first joint decomposition of the accelerator's physical topology and the communication primitive: leveraging Hamiltonian graph theory, we construct orthogonal ring-shaped datapaths that enable multi-path concurrent AllGather transfers. The method is hardware-aware and compatible with NVIDIA H100 and AMD MI300X architectures. Evaluated on single-node and multi-node systems, TASP achieves up to a 3.58× speedup over Ring Attention, significantly improving communication efficiency and training throughput for long-context workloads.
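To make the topology decomposition concrete, here is a minimal sketch, assuming the classical Walecki construction from Hamiltonian graph theory: for an odd device count n it splits the complete directed graph into n − 1 edge-disjoint directed rings and verifies that every directed link is used exactly once. This is illustrative only; real nodes usually have an even GPU count, for which such decompositions also exist for n ≥ 8 (Tillson's theorem) but require a different construction, and the paper's own construction may differ.

```python
# A minimal sketch (not the paper's code): Walecki's Hamiltonian
# decomposition of the complete directed graph on an odd number of
# devices n = 2m + 1 into n - 1 edge-disjoint directed rings.

def walecki_rings(n):
    """Return n - 1 edge-disjoint directed Hamiltonian rings on n vertices."""
    assert n % 2 == 1 and n >= 3, "this construction needs an odd n"
    m = (n - 1) // 2
    hub = n - 1                      # one vertex acts as the hub
    rings = []
    for k in range(m):
        # Zigzag over the 2m non-hub vertices: k, k+1, k-1, k+2, k-2, ...
        path = [k]
        for j in range(1, m + 1):
            path.append((k + j) % (2 * m))
            if j < m:
                path.append((k - j) % (2 * m))
        cycle = [hub] + path         # close the ring through the hub
        rings.append(cycle)          # forward traversal ...
        rings.append(cycle[::-1])    # ... and reverse: two directed rings
    return rings

def directed_edges(cycle):
    return {(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))}

n = 9                                # a hypothetical 9-device example
rings = walecki_rings(n)
edges = [e for ring in rings for e in directed_edges(ring)]
# Edge-disjointness is what lets the rings carry traffic concurrently
# without contention: every directed link belongs to exactly one ring.
assert len(edges) == len(set(edges)) == n * (n - 1)
print(f"{len(rings)} orthogonal directed rings on {n} devices")
```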

📝 Abstract
Long-context large language models (LLMs) face constraints due to the quadratic complexity of the self-attention mechanism. The mainstream sequence parallelism (SP) method, Ring Attention, attempts to address this by partitioning the query into chunks across accelerators and enabling each Q tensor to access all KV tensors from the other accelerators via the Ring AllGather communication primitive. However, it exhibits low communication efficiency, restricting its practical applicability. This inefficiency stems from the mismatch between the Ring AllGather primitive it adopts and the AlltoAll topology of modern accelerators: a Ring AllGather consists of iterations of ring-style data transfers, which can utilize only a small fraction of the links in an AlltoAll topology. Inspired by the Hamiltonian decomposition of complete directed graphs, we identify that a modern accelerator topology can be decomposed into multiple orthogonal ring datapaths that can transfer data concurrently without interference. Based on this, we further observe that the Ring AllGather primitive can likewise be decomposed into the same number of concurrent ring-style data transfers at every iteration. From these insights, we propose TASP, a topology-aware SP method for long-context LLMs that fully utilizes the communication capacity of modern accelerators via topology decomposition and primitive decomposition. Experimental results on single-node and multi-node NVIDIA H100 systems and a single-node AMD MI300X system demonstrate that TASP achieves higher communication efficiency than Ring Attention on these topologies, with up to a 3.58× speedup over Ring Attention and its variant Zigzag-Ring Attention. The code is available at https://github.com/infinigence/HamiltonAttention.
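The following self-contained simulation sketches the primitive decomposition described above: each device's KV block is sliced into per-ring shards, every ring runs its own ring-style AllGather, and all rings advance one hop per iteration over disjoint links. The shift-based schedule (rank i forwards to (i + s) mod n) is an assumption for illustration rather than the paper's Hamiltonian schedule; it keeps only the shifts with gcd(n, s) = 1, which are single Hamiltonian rings (4 of the 7 possible rings on 8 devices), whereas a full Hamiltonian decomposition yields all n − 1.

```python
from math import gcd

def multi_ring_allgather(n):
    """Simulate R concurrent ring AllGathers on n devices.

    Data movement is idealized (each hop forwards the accumulated set of
    shard IDs); a real implementation forwards one chunk per hop.
    """
    # Keep only shift rings that form a single Hamiltonian cycle; their
    # links are pairwise disjoint, so all rings can transfer concurrently.
    steps = [s for s in range(1, n) if gcd(n, s) == 1]
    R = len(steps)
    # have[dev][r]: shard IDs device `dev` holds for ring r's slice.
    have = [[{dev} for _ in range(R)] for dev in range(n)]
    for _ in range(n - 1):           # same iteration count as Ring AllGather
        nxt = [[set(s) for s in row] for row in have]
        for r, step in enumerate(steps):
            for src in range(n):     # one hop on every ring, concurrently
                nxt[(src + step) % n][r] |= have[src][r]
        have = nxt
    return have, R

n = 8
have, R = multi_ring_allgather(n)
# After n - 1 iterations every device holds every device's shard on every
# ring, i.e. the full KV sequence, exactly as a single AllGather would.
assert all(have[d][r] == set(range(n)) for d in range(n) for r in range(R))
print(f"{R} concurrent rings on {n} devices completed the AllGather")
```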
Problem

Research questions and friction points this paper is trying to address.

Addresses inefficient communication in long-context LLM sequence parallelism
Resolves the mismatch between the Ring AllGather primitive and the AlltoAll topology of modern accelerators (quantified in the sketch after this list)
Improves utilization of accelerator communication capacity via joint topology and primitive decomposition
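A back-of-the-envelope calculation (referenced in the list above) quantifies the mismatch; the numbers are illustrative and assume a uniform All-to-All fabric with no protocol overheads.

```python
# Fabric utilization of a single ring on an AlltoAll topology.
n = 8                            # e.g. one 8-GPU node
total_links = n * (n - 1)        # directed links in an AlltoAll fabric
ring_links = n                   # links one ring occupies per iteration
print(f"Ring AllGather drives {ring_links}/{total_links} links "
      f"= {ring_links / total_links:.1%} of the fabric")  # 14.3% for n = 8
# n - 1 edge-disjoint rings could drive every link simultaneously, an
# idealized upper bound of (n - 1)x; the paper measures up to 3.58x.
```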
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes the accelerator topology into orthogonal ring datapaths via Hamiltonian graph theory
Decomposes the Ring AllGather primitive into the same number of concurrent ring-style transfers
Fully utilizes the fabric's communication capacity by driving all rings in parallel (see the communication-step sketch after this list)
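As a rough illustration of the last point, here is a hypothetical sketch of one iteration of concurrent ring transfers issued through PyTorch's batched point-to-point API, so the backend can keep all rings in flight at once over disjoint links. The function name, shift-based schedule, and buffer handling are assumptions for illustration, not the paper's implementation; see the linked repository for the actual code.

```python
import torch
import torch.distributed as dist

def multi_ring_step(shards, steps):
    """One hop of several concurrent ring transfers (illustrative sketch).

    shards[r]: this rank's current KV shard for ring r.
    steps[r]:  hop distance of ring r (rank i forwards to (i + steps[r]) % n).
    Assumes torch.distributed is already initialized (e.g. NCCL backend)
    and that the steps describe edge-disjoint rings with distinct peers.
    """
    rank, n = dist.get_rank(), dist.get_world_size()
    recv_bufs = [torch.empty_like(s) for s in shards]
    ops = []
    for r, step in enumerate(steps):
        ops.append(dist.P2POp(dist.isend, shards[r], (rank + step) % n))
        ops.append(dist.P2POp(dist.irecv, recv_bufs[r], (rank - step) % n))
    # Batching the sends/recvs lets the backend overlap every ring's transfer.
    for work in dist.batch_isend_irecv(ops):
        work.wait()
    return recv_bufs  # each ring's shard has advanced one hop
```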