DCP: Addressing Input Dynamism In Long-Context Training via Dynamic Context Parallelism

πŸ“… 2025-10-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing context-parallel methods employ static configurations, making them ill-suited to the variation in sequence length and attention pattern across training samples; the result is excessive communication overhead and imbalanced compute and memory loads across devices. This work proposes Dynamic Context Parallelism (DCP), a framework that enables adaptive parallelization by combining fine-grained, block-level partitioning of data and computation with device-mapping and dynamic scheduling algorithms tailored to causal or sparse attention patterns. Its core innovation is the first integration of dynamic block partitioning into context parallelism, overcoming the limitations of fixed, static partitioning. Micro-benchmarks show a 1.19×–2.45× speedup in attention computation under causal masking and 2.15×–3.77× under sparse attention; end-to-end training achieves up to a 1.46× speedup in sparse-attention scenarios.

πŸ“ Abstract
Context parallelism has emerged as a key technique to support long-context training, a growing trend in generative AI for modern large models. However, existing context parallel methods rely on static parallelization configurations that overlook the dynamic nature of training data, specifically, the variability in sequence lengths and token relationships (i.e., attention patterns) across samples. As a result, these methods often suffer from unnecessary communication overhead and imbalanced computation. In this paper, we present DCP, a dynamic context parallel training framework that introduces fine-grained blockwise partitioning of both data and computation. By enabling flexible mapping of data and computation blocks to devices, DCP can adapt to varying sequence characteristics, effectively reducing communication and improving memory and computation balance. Micro-benchmarks demonstrate that DCP accelerates attention by 1.19x~2.45x under causal masks and 2.15x~3.77x under sparse attention patterns. Additionally, we observe up to 0.94x~1.16x end-to-end training speed-up for causal masks, and 1.00x~1.46x for sparse masks.
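The blockwise partitioning idea from the abstract can be illustrated with a toy sketch. This is not the paper's algorithm: the block size, device count, and the greedy longest-processing-time assignment are all assumptions chosen for illustration. Under a causal mask, off-diagonal attention blocks are fully unmasked while diagonal blocks are only half-populated, so per-block cost varies, which is exactly the imbalance a static contiguous split suffers from.

```python
# Toy sketch (not DCP's actual scheduler): compute per-block work under a
# causal mask, then greedily map blocks to devices to balance load.
import heapq

def causal_block_costs(seq_len, block_size):
    """Cost (non-masked query/key pairs) of each (q_block, k_block) pair."""
    n = seq_len // block_size
    costs = {}
    for q in range(n):
        for k in range(q + 1):  # causal: a query block only sees earlier keys
            if k < q:
                costs[(q, k)] = block_size * block_size          # fully unmasked
            else:
                costs[(q, k)] = block_size * (block_size + 1) // 2  # diagonal half
    return costs

def greedy_map(costs, num_devices):
    """Longest-processing-time-first: heaviest block to least-loaded device."""
    heap = [(0, d) for d in range(num_devices)]
    heapq.heapify(heap)
    assignment = {}
    for blk, c in sorted(costs.items(), key=lambda kv: -kv[1]):
        load, dev = heapq.heappop(heap)
        assignment[blk] = dev
        heapq.heappush(heap, (load + c, dev))
    return assignment

costs = causal_block_costs(seq_len=4096, block_size=512)
assignment = greedy_map(costs, num_devices=4)
loads = [0] * 4
for blk, dev in assignment.items():
    loads[dev] += costs[blk]
print(loads)  # near-equal per-device work, unlike a contiguous row split
```

Because the block costs are recomputed per sample, the same mapping routine adapts when sequence lengths or masks change between batches, which is the flexibility static context-parallel layouts lack.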
Problem

Research questions and friction points this paper is trying to address.

Addressing static parallelization limitations in long-context training
Reducing communication overhead from dynamic sequence length variability
Improving computational balance for varying attention pattern characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic context parallelism adapts to varying sequence lengths
Fine-grained blockwise partitioning of data and computation
Flexible mapping reduces communication and balances computation
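The sparse-attention benefit of block-level scheduling can be sketched in a few lines. The sliding-window mask below is just one example of a sparse pattern (the paper targets sparse attention generally), and the block counts are illustrative assumptions: only blocks that intersect the mask carry work, so a dynamic scheduler can skip the rest, while a static plan sized for the full causal mask would still reserve that compute.

```python
# Toy sketch: enumerate attention blocks that carry work under a
# sliding-window (local) causal mask vs. a full causal mask.
def active_blocks(n_blocks, window_blocks):
    """(q_block, k_block) pairs with non-masked entries for a window mask."""
    return [(q, k)
            for q in range(n_blocks)
            for k in range(max(0, q - window_blocks + 1), q + 1)]

full_causal = active_blocks(8, window_blocks=8)  # window spans everything
sparse = active_blocks(8, window_blocks=2)
print(len(sparse), len(full_causal))  # 15 vs 36 blocks of actual work
```

The gap between the two counts (15 vs. 36 here) is work a pattern-aware block schedule never has to place on any device, which is consistent with the larger speedups the paper reports under sparse masks than under plain causal masks.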
πŸ”Ž Similar Papers
No similar papers found.
Chenyu Jiang
The University of Hong Kong

Zhenkun Cai
Amazon Web Services
Large-scale machine learning systems

Ye Tian
The University of Hong Kong

Zhen Jia
Amazon Web Services

Yida Wang
Amazon Web Services

Chuan Wu
Professor of Computer Science, The University of Hong Kong
Cloud computing, distributed machine learning algorithms and systems