Universal Checkpointing: Efficient and Flexible Checkpointing for Large Scale Distributed Training

📅 2024-06-27
🏛️ arXiv.org
📈 Citations: 9
Influential: 0
🤖 AI Summary
In large-scale distributed DNN training, checkpointing is tightly coupled to the model-parallelism strategy and hardware topology of the run, which severely limits fault tolerance and elastic scalability. To address this, the paper proposes a "distributed saving, consolidated loading" paradigm: at save time, model parameters are stored in a distributed representation aligned with the current parallel configuration; at load time, they are reconstructed into a logically consistent view of each parameter. The authors design a universal checkpoint format—combining consolidated parameter representations with mapping metadata—a Universal Checkpoint Language (UCL) for converting distributed checkpoints into that format, and an on-demand state-reconstruction mechanism, fully decoupling checkpointing from the parallel configuration. Evaluated on LLaMA, BLOOM, and other mainstream large models under diverse parallelism strategies—including tensor parallelism (TP), pipeline parallelism (PP), data parallelism (DP), and context parallelism (CP)—the approach reduces post-failure recovery time by 12–28% on average, significantly improving cross-configuration portability and system robustness.

📝 Abstract
Existing checkpointing approaches seem ill-suited for distributed training even though hardware limitations make model parallelism, i.e., sharding model state across multiple accelerators, a requirement for model scaling. Consolidating distributed model state into a single checkpoint unacceptably slows down training, and is impractical at extreme scales. Distributed checkpoints, in contrast, are tightly coupled to the model parallelism and hardware configurations of the training run, and thus unusable on different configurations. To address this problem, we propose Universal Checkpointing, a technique that enables efficient checkpoint creation while providing the flexibility of resuming on arbitrary parallelism strategy and hardware configurations. Universal Checkpointing unlocks unprecedented capabilities for large-scale training such as improved resilience to hardware failures through continued training on remaining healthy hardware, and reduced training time through opportunistic exploitation of elastic capacity. The key insight of Universal Checkpointing is the selection of the optimal representation in each phase of the checkpointing life cycle: distributed representation for saving, and consolidated representation for loading. This is achieved using two key mechanisms. First, the universal checkpoint format, which consists of a consolidated representation of each model parameter and metadata for mapping parameter fragments into training ranks of arbitrary model-parallelism configuration. Second, the universal checkpoint language, a simple but powerful specification language for converting distributed checkpoints into the universal checkpoint format. Our evaluation demonstrates the effectiveness and generality of Universal Checkpointing on state-of-the-art model architectures and a wide range of parallelism techniques.
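The abstract's key insight—distributed representation for saving, consolidated representation for loading—can be illustrated with a minimal sketch. The function and variable names below are illustrative, not the paper's actual API; the sketch assumes a 1-D parameter sharded evenly across tensor-parallel ranks:

```python
# Hedged sketch: consolidate tensor-parallel shards of a 1-D parameter
# into a single logical tensor, then re-slice it for a new TP degree.
# All names are illustrative, not the paper's actual implementation.

def consolidate(shards):
    """Merge per-rank fragments (saved in distributed form) into the
    consolidated representation used at load time."""
    merged = []
    for frag in shards:
        merged.extend(frag)
    return merged

def reshard(merged, new_tp_degree):
    """Re-slice the consolidated parameter for an arbitrary new
    tensor-parallel degree chosen when resuming training."""
    n = len(merged)
    assert n % new_tp_degree == 0, "illustrative: assume even divisibility"
    chunk = n // new_tp_degree
    return [merged[i * chunk:(i + 1) * chunk] for i in range(new_tp_degree)]

# Saved with TP=2, resumed with TP=4:
saved = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
merged = consolidate(saved)
new_shards = reshard(merged, 4)
print(new_shards)  # 4 fragments of 2 elements each
```

Because the consolidated form is configuration-neutral, the same saved checkpoint can be resumed at any parallel degree that divides the parameter, which is the flexibility the paper's evaluation exercises across TP, PP, and DP.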
Problem

Research questions and friction points this paper is trying to address.

Consolidating distributed model state into a single checkpoint unacceptably slows training at scale
Distributed checkpoints are tightly coupled to the parallelism strategy and hardware configuration of the run
Resuming on a different configuration—after failures or when capacity changes—is therefore impractical
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples checkpoint structure from parallelism strategies
Pattern-based reconfiguration pipeline for automatic mapping
Enables flexible parallelism reconfiguration with minimal cost
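The mapping metadata named in the universal checkpoint format can be sketched as a small planning step: for each parameter, record which slice of the consolidated tensor each rank of a target configuration should load. The partitioning rule and field names below are hypothetical, chosen only to make the idea concrete:

```python
# Hypothetical mapping metadata: assign contiguous (offset, length)
# fragments of a 1-D parameter to the ranks of a target TP degree.
# Field names and the even-split rule are illustrative assumptions.

def plan_fragments(global_len, tp_degree):
    """Compute per-rank fragment descriptors for one parameter,
    distributing any remainder to the lowest-numbered ranks."""
    base, rem = divmod(global_len, tp_degree)
    plan, offset = [], 0
    for rank in range(tp_degree):
        length = base + (1 if rank < rem else 0)
        plan.append({"rank": rank, "offset": offset, "length": length})
        offset += length
    return plan

# A parameter of 10 elements mapped onto 4 tensor-parallel ranks:
print(plan_fragments(10, 4))
```

At load time, each rank would read only its `(offset, length)` slice of the consolidated parameter, so no rank ever materializes the full model state.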
Xinyu Lian
University of Illinois at Urbana-Champaign
Sam Ade Jacobs
Microsoft
Lev Kurilenko
Microsoft
Masahiro Tanaka
Microsoft
Stas Bekman
StasoSphere
Olatunji Ruwase
Microsoft Research
Deep Learning · Operating Systems · Programming Languages · Computer Architecture
Minjia Zhang
University of Illinois at Urbana-Champaign
Parallelism · Machine Learning Systems · Model Compression · LLM Application