Data-Centric Elastic Pipeline Parallelism for Efficient Long-Context LLM Training

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high communication overhead, GPU memory pressure, and load imbalance caused by fixed-granularity pipeline parallelism (PP) when training large language models (LLMs) on long-context sequences, this paper proposes the Elastic Pipeline Parallelism (EPP) framework and an efficient training system, InfiniPipe. Methodologically, it introduces (1) a hybrid dynamic parallelism mechanism operating at both the token and batch levels, integrating sequence splitting/packing with stage-aware scheduling to achieve resource-aware, fine-grained load balancing; and (2) a distributed gradient checkpointing strategy that jointly optimizes memory footprint and computational efficiency. Experiments on realistic datasets with non-uniform sequence lengths show that InfiniPipe achieves a 1.69× speedup over state-of-the-art systems, significantly improving hardware utilization and training throughput for long-context LLMs.

📝 Abstract
Long-context training is crucial for extending LLMs' context windows. Existing schemes, such as sequence parallelism, incur substantial communication overhead. Pipeline parallelism (PP) reduces this cost, but its effectiveness hinges on partitioning granularity. Batch-level PP, which divides input samples, exhibits high memory consumption in long-context scenarios, whereas token-level PP, which splits sequences into slices, alleviates memory overhead but may under-utilize hardware. This trade-off motivates adaptively selecting the PP granularity to match resource and workload characteristics. Moreover, the sequence-length distribution of real-world datasets is skewed, posing a challenge to PP's workload balance and efficient scheduling. Current static PP scheduling methods overlook this variance in sequence length, leading to suboptimal performance. In this paper, we propose Elastic Pipeline Parallelism (EPP), which orchestrates token-level PP and batch-level PP to adapt to resource and workload heterogeneity. We build InfiniPipe, a distributed training system that unleashes the potential of EPP via (1) a resource-aware, workload-balanced sequence processor that splits long sequences and packs short ones; and (2) a co-optimization methodology that jointly tunes the pipeline schedule and gradient checkpointing via a mechanism named stage-aware chunk-level adaptive checkpointing. Comprehensive experiments demonstrate that InfiniPipe achieves a 1.69× speedup over state-of-the-art systems.
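The split-long/pack-short idea from the abstract can be illustrated with a toy greedy heuristic. This sketch (the function name `split_and_pack`, the first-fit-decreasing packing rule, and the single `chunk_budget` parameter are all illustrative assumptions, not InfiniPipe's actual sequence processor) slices sequences longer than a token budget into token-level chunks and bin-packs the short pieces so every pipeline chunk stays under the budget:

```python
def split_and_pack(seq_lens, chunk_budget):
    """Toy sketch (not the paper's algorithm): split sequences longer
    than chunk_budget into token-level slices, then pack the short
    pieces together so each chunk's total length fits the budget.
    Returns a list of chunks; each chunk is a list of (seq_id, length)."""
    chunks, short = [], []
    for sid, n in enumerate(seq_lens):
        if n > chunk_budget:
            # token-level split: full-budget slices plus a remainder
            whole, rem = divmod(n, chunk_budget)
            chunks.extend([(sid, chunk_budget)] for _ in range(whole))
            if rem:
                short.append((sid, rem))
        else:
            short.append((sid, n))
    # batch-level packing of short pieces: first-fit decreasing
    short.sort(key=lambda p: p[1], reverse=True)
    buckets = []  # each bucket: [pieces, remaining_budget]
    for sid, n in short:
        for bucket in buckets:
            if bucket[1] >= n:
                bucket[0].append((sid, n))
                bucket[1] -= n
                break
        else:
            buckets.append([[(sid, n)], chunk_budget - n])
    chunks.extend(b[0] for b in buckets)
    return chunks
```

For example, with sequence lengths `[10000, 3000, 2500, 1200]` and a 4096-token budget, the 10000-token sequence is sliced into two full chunks plus a remainder, and the shorter pieces are packed together, so no chunk exceeds the budget. The real system would additionally weight chunks by stage placement and attention cost rather than raw token counts.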
Problem

Research questions and friction points this paper is trying to address.

Optimizing pipeline parallelism granularity for efficient long-context LLM training
Addressing workload imbalance due to skewed sequence length distributions
Reducing communication overhead and memory consumption in distributed training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Elastic Pipeline Parallelism adaptively combines token-level and batch-level parallelism
InfiniPipe system uses a resource-aware processor to split and pack sequences
Co-optimization jointly improves pipeline scheduling and gradient checkpointing
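The checkpointing side of the co-optimization can be sketched as a budgeted selection problem: keep activations for the chunks whose recomputation is most expensive per byte of memory, and checkpoint the rest. This greedy toy (the function name `choose_checkpoints`, the `(act_mem, recompute_cost)` chunk representation, and the cost-per-byte ordering are all assumptions for illustration, not the paper's stage-aware mechanism) shows the trade-off:

```python
def choose_checkpoints(chunks, mem_budget):
    """Toy sketch of chunk-level adaptive checkpointing (hypothetical
    interface, not InfiniPipe's actual algorithm). Each chunk is
    (act_mem, recompute_cost). Retain activations for chunks whose
    recomputation is most expensive per byte, within the memory budget;
    the remaining chunks are checkpointed and recomputed in backward."""
    order = sorted(range(len(chunks)),
                   key=lambda i: chunks[i][1] / chunks[i][0],
                   reverse=True)
    keep, used = set(), 0
    for i in order:
        mem = chunks[i][0]
        if used + mem <= mem_budget:
            keep.add(i)
            used += mem
    # total extra backward-pass compute incurred by checkpointing
    recompute = sum(cost for i, (_, cost) in enumerate(chunks)
                    if i not in keep)
    return keep, recompute
```

In the real system this choice would be made jointly with the pipeline schedule, since a stage's free memory varies with its position in the pipeline (earlier stages hold more in-flight activations), which is presumably what "stage-aware" refers to.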