🤖 AI Summary
To address three critical bottlenecks in data loading for multi-source training of Large Foundation Models (LFMs), namely load imbalance, memory overflow, and fault-induced interruptions, this paper proposes a centralized, declarative data plane architecture. Methodologically, it introduces: (1) a decoupled preprocessing framework with role-separated components (Source Loaders and Data Constructors) enabling elastic orchestration across heterogeneous data sources; (2) a fault-tolerance mechanism leveraging shadow loaders and differential snapshot checkpoints; and (3) a unified scheduling strategy integrating long/short-context handling, multimodal data processing, and curriculum learning. Implemented via a distributed Actor model with auto-scaling worker pools, the system achieves a 4.5× end-to-end training throughput improvement and reduces CPU memory footprint by at least 3.6× on multi-thousand-GPU clusters. The approach significantly enhances efficiency, scalability, and robustness in large-scale, multi-source LFM training.
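The role separation and auto-scaling described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not OVERLORD's actual implementation: the class names `SourceLoader`, `DataConstructor`, and the `autoscale` policy are hypothetical stand-ins, with a plain queue in place of a distributed Actor runtime.

```python
import queue

class SourceLoader:
    """Hypothetical sketch: reads and preprocesses samples from ONE data source,
    pushing results onto a shared queue that feeds Data Constructors."""
    def __init__(self, source, out_queue):
        self.source = source        # iterable of raw samples
        self.out_queue = out_queue

    def run(self):
        for raw in self.source:
            self.out_queue.put(self.preprocess(raw))

    def preprocess(self, raw):
        # Stand-in for the real (possibly heavy) per-source transformation.
        return {"tokens": raw}

class DataConstructor:
    """Hypothetical sketch: assembles preprocessed samples from all sources
    into training batches, decoupled from source-side preprocessing."""
    def __init__(self, in_queue, batch_size):
        self.in_queue = in_queue
        self.batch_size = batch_size

    def next_batch(self):
        return [self.in_queue.get() for _ in range(self.batch_size)]

def autoscale(latencies, worker_budget):
    """Toy policy: allocate proportionally more loader workers to sources
    with higher observed preprocessing latency (heavier sources)."""
    total = sum(latencies.values())
    return {src: max(1, round(worker_budget * lat / total))
            for src, lat in latencies.items()}
```

For example, with a budget of 8 workers and a "web" source three times slower to preprocess than a "code" source, `autoscale({"web": 3.0, "code": 1.0}, 8)` assigns 6 workers to "web" and 2 to "code", mirroring the paper's point that heavy sources need larger worker pools.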
📝 Abstract
Modern frameworks for training large foundation models (LFMs) employ data loaders in a data-parallel paradigm. While this design offers implementation simplicity, it introduces two fundamental challenges. First, due to the quadratic computational complexity of the attention operator, the non-uniform sample distribution over data-parallel ranks leads to a significant workload imbalance among loaders, which degrades training efficiency. This paradigm also impedes the implementation of data-mixing algorithms (e.g., curriculum learning) across different datasets. Second, to acquire a broad range of capabilities, LFM training ingests data from diverse sources, each with distinct file access states. Colocating massive datasets within loader instances can easily exceed local pod memory capacity. Additionally, heavy sources with higher transformation latency require larger worker pools, further exacerbating memory consumption. We present OVERLORD, an industrial-grade distributed data-loading architecture with three innovations: (1) a centralized and declarative data plane, which facilitates elastic data orchestration strategies such as long-short context, multimodal, and curriculum learning; (2) disaggregated multi-source preprocessing through role-specific actors, i.e., Source Loaders and Data Constructors, with autoscaling of Source Loaders to match heterogeneous and evolving per-source preprocessing costs; (3) Shadow Loaders with differential checkpointing for uninterrupted fault recovery. Deployed on production clusters scaling to multiple thousands of GPUs, OVERLORD achieves (1) a 4.5× end-to-end training throughput improvement and (2) at least a 3.6× reduction in CPU memory usage, with further improvements to come in later experiments.
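The Shadow Loader idea in innovation (3) can be sketched as a standby replica that tracks the primary loader's read positions via differential snapshots, i.e., only the cursors that changed since the last snapshot, so that on failure it can take over without replaying all sources from scratch. This is a minimal illustrative sketch under assumed semantics; `LoaderState`, `ShadowLoader`, and the cursor-based state model are hypothetical, not OVERLORD's actual interfaces.

```python
class LoaderState:
    """Hypothetical primary-loader state: a read cursor per data source,
    plus a dirty set used to emit differential (delta-only) snapshots."""
    def __init__(self):
        self.cursors = {}    # source name -> samples consumed so far
        self._dirty = set()  # sources touched since the last snapshot

    def advance(self, source, n=1):
        self.cursors[source] = self.cursors.get(source, 0) + n
        self._dirty.add(source)

    def diff_snapshot(self):
        """Return only the cursors changed since the previous snapshot,
        then reset the dirty set. Much smaller than a full-state dump."""
        delta = {s: self.cursors[s] for s in self._dirty}
        self._dirty.clear()
        return delta

class ShadowLoader:
    """Hypothetical standby replica: applies deltas as they arrive and can
    take over with the reconstructed cursors if the primary fails."""
    def __init__(self):
        self.cursors = {}

    def apply(self, delta):
        self.cursors.update(delta)

    def take_over(self):
        # Resume reading each source from its last checkpointed position.
        return dict(self.cursors)
```

Because each snapshot carries only the changed cursors, steady-state checkpoint traffic stays proportional to the number of active sources per interval rather than the total state size, which is what makes recovery effectively uninterrupted.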