ByteCheckpoint: A Unified Checkpointing System for Large Foundation Model Development

📅 2024-07-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
Training large foundation models (LFMs) faces significant challenges in checkpoint management, including poor cross-framework compatibility, tight coupling with parallelization strategies, heterogeneous storage backends, and severe I/O bottlenecks. To address these, this work proposes ByteCheckpoint, an industrial-grade unified checkpointing system. Its core contributions are: (1) a parallelism-agnostic checkpoint representation that enables efficient load-time resharding; (2) full-stack I/O optimizations covering saving/loading plan generation, the critical stages of the checkpointing pipeline, and the irregular tensor processing required by resharding, together with abstraction interfaces for multiple training frameworks, asynchronous high-throughput storage adapters, and a distributed I/O monitoring toolchain; and (3) runtime support for cross-parallelism resharding, multiple storage backends, and rapid failure recovery. Experiments demonstrate an average 54.20× reduction in checkpoint-induced training stalls, with peak checkpoint save and load speedups of 9.96× and 8.80×, respectively. The system has been stably deployed in production environments scaling to over one thousand GPUs.
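The headline reduction in blocking time comes from decoupling the fast part of a save (snapshotting state) from the slow part (persisting it to storage). A minimal sketch of that idea, assuming a plain in-memory copy stands in for the device-to-host transfer and a sleep stands in for storage I/O (this is an illustration, not ByteCheckpoint's actual pipeline):

```python
# Hypothetical sketch of asynchronous checkpointing: the training loop blocks
# only for a cheap in-memory snapshot; a background thread performs the slow
# persistence, so storage latency no longer stalls training.
import copy
import threading
import time

def snapshot(state):
    # Blocking stage: cheap copy (stands in for a GPU -> host-memory transfer).
    return copy.deepcopy(state)

def persist(snap, store, done):
    time.sleep(0.05)          # stands in for slow storage-backend I/O
    store.update(snap)        # write the snapshot to the "storage backend"
    done.set()

state = {"step": 100, "weights": [0.1, 0.2, 0.3]}
store, done = {}, threading.Event()

snap = snapshot(state)        # only this line stalls the training loop
t = threading.Thread(target=persist, args=(snap, store, done))
t.start()

state["weights"][0] = 9.9     # training continues; the snapshot is unaffected
done.wait()
t.join()
```

Because the snapshot is taken before training resumes, later parameter updates cannot corrupt the checkpoint being written; `store` ends up holding the pre-update weights.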

📝 Abstract
Checkpointing to preserve training states is crucial during the development of Large Foundation Models (LFMs), for training resumption upon various failures or changes in GPU resources and parallelism configurations. In addition, saved checkpoints are dispatched to evaluation tasks or transferred across different training stages (e.g., from pre-training to post-training). All these scenarios require resharding distributed checkpoints from one parallelism to another. In production, different LFMs are trained with various frameworks and storage backends, depending on model sizes and training scales. A high-performance checkpointing system is needed to enable efficient checkpoint management at scale. This paper presents ByteCheckpoint, an industrial-grade checkpointing system for large-scale LFM training. ByteCheckpoint employs a parallelism-agnostic checkpoint representation that enables efficient load-time checkpoint resharding. ByteCheckpoint advocates a generic checkpoint saving/loading workflow to accommodate multiple training frameworks and support different storage backends. To ensure high I/O efficiency, we take a full-stack approach to optimize saving/loading plan generation, critical stages of checkpointing pipelines, and irregular tensor processing required by resharding. To guarantee the scalability of ByteCheckpoint in large-scale training, we enhance the storage system to efficiently handle high volumes of checkpointing I/O requests, devise communication optimizations within the checkpointing workflow, and introduce a suite of monitoring tools to analyze performance and detect bottlenecks. Compared to existing open-source checkpointing systems [40, 46], ByteCheckpoint significantly reduces runtime checkpoint stalls, achieving an average reduction of 54.20x. For saving and loading times, ByteCheckpoint achieves improvements of up to 9.96x and 8.80x, respectively.
Problem

Research questions and friction points this paper is trying to address.

Efficient checkpoint management for Large Foundation Models at scale.
Support for multiple training frameworks and storage backends.
Reduction of runtime checkpoint stalls and improvement of I/O efficiency.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallelism-agnostic checkpoint representation enabling load-time resharding.
Generic saving/loading workflow supporting multiple frameworks and storage backends.
Full-stack I/O optimizations improving efficiency and scalability.
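The core of a parallelism-agnostic representation is that each saved shard records its position in the *global* tensor, so a loader running under a different parallelism can reassemble exactly the slices it needs. A minimal 1-D sketch of that idea (hypothetical helper names, not the ByteCheckpoint API):

```python
# Hypothetical sketch: shards tagged with global offsets allow a checkpoint
# saved under one sharding to be loaded under another (load-time resharding).

def save_shards(full, num_shards):
    """Split a flat 'global' parameter list into shards tagged with global offsets."""
    base, rem = divmod(len(full), num_shards)
    shards, offset = [], 0
    for i in range(num_shards):
        size = base + (1 if i < rem else 0)
        shards.append({"offset": offset, "data": full[offset:offset + size]})
        offset += size
    return shards

def load_slice(shards, start, stop):
    """Reassemble global range [start, stop) from whichever saved shards overlap it."""
    out = [None] * (stop - start)
    for s in shards:
        lo, hi = s["offset"], s["offset"] + len(s["data"])
        a, b = max(start, lo), min(stop, hi)   # overlap with the requested range
        if a < b:
            out[a - start:b - start] = s["data"][a - lo:b - lo]
    return out

# Saved under 4-way sharding, reloaded as if under 2-way sharding:
full = list(range(10))
shards = save_shards(full, 4)
first_half = load_slice(shards, 0, 5)   # == [0, 1, 2, 3, 4]
```

Because lookups go through global offsets rather than rank IDs, the saver's and loader's parallelism configurations never have to match; the irregular (partially overlapping) reads in `load_slice` are what the paper's full-stack optimizations target.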