Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high activation memory overhead of self-attention layers, which limits existing context parallelism approaches when training Transformer models on extremely long sequences. To overcome this, the authors propose UPipe, which introduces fine-grained chunking along the attention head dimension, combining context parallelism with activation memory optimization to substantially reduce intermediate tensor storage. The method achieves up to an 87.5% reduction in attention activation memory for a 32B-parameter model while maintaining training efficiency. Notably, UPipe enables training Llama3-8B on a single 8×H100 node with context lengths up to 5 million tokens, a more than 25% improvement over current state-of-the-art methods.

📝 Abstract
Efficiently processing long sequences with Transformer models usually requires splitting the computation across accelerators via context parallelism. The dominant approaches in this family, such as Ring Attention or DeepSpeed Ulysses, enable scaling over the context dimension but do not focus on memory efficiency, which limits the sequence lengths they can support. More advanced techniques, such as Fully Pipelined Distributed Transformer or activation offloading, can further extend the possible context length at the cost of training throughput. In this paper, we present UPipe, a simple yet effective context parallelism technique that performs fine-grained chunking at the attention head level. This technique significantly reduces the activation memory usage of self-attention, breaking the activation memory barrier and unlocking much longer context lengths. Our approach reduces intermediate tensor memory usage in the attention layer by as much as 87.5% for 32B Transformers, while matching previous context parallelism techniques in training speed. UPipe can support a context length of 5M tokens when training Llama3-8B on a single 8×H100 node, improving upon prior methods by over 25%.
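The core idea in the abstract, chunking attention along the head dimension so that only a few heads' score matrices are materialized at a time, can be illustrated with a minimal single-device numpy sketch. This is not the authors' distributed UPipe implementation; the function names, shapes, and `chunk` parameter here are illustrative assumptions, and real systems would apply the same loop to GPU tensors with recomputation-aware backward passes.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_all_heads(q, k, v):
    # q, k, v: [heads, seq, head_dim].
    # Materializes the [heads, seq, seq] score tensor for ALL heads at once,
    # which dominates activation memory at long sequence lengths.
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = softmax(q @ k.transpose(0, 2, 1) * scale)
    return scores @ v

def attention_headwise(q, k, v, chunk=2):
    # Process `chunk` heads at a time: only a [chunk, seq, seq] score tensor
    # is live per step, shrinking peak score memory by heads/chunk
    # (e.g. 8 heads with chunk=1 -> 87.5% less score memory live at once).
    outs = []
    for h in range(0, q.shape[0], chunk):
        outs.append(attention_all_heads(q[h:h + chunk],
                                        k[h:h + chunk],
                                        v[h:h + chunk]))
    return np.concatenate(outs, axis=0)

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16, 4)) for _ in range(3))
full = attention_all_heads(q, k, v)
chunked = attention_headwise(q, k, v, chunk=2)
```

Because attention heads are computed independently, the chunked loop produces bitwise-equivalent outputs (up to floating-point accumulation order) while trading peak memory for a serial loop over head groups.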
Problem

Research questions and friction points this paper is trying to address.

context parallelism
memory efficiency
long sequence
activation memory
Transformer
Innovation

Methods, ideas, or system contributions that make the work stand out.

context parallelism
headwise chunking
activation memory reduction
long-context training
memory-efficient Transformer