🤖 AI Summary
To address the high storage overhead, memory consumption, and fault-tolerance cost of checkpointing in large language model (LLM) training, this paper proposes a dynamic, adaptive checkpoint compression method. The approach combines bit-mask-based structured sparsification with K-means clustering-guided low-bit quantization, augmented by dynamic sparsification and quantization policies that are aware of the training phase and layer structure. Under strict numerical-fidelity constraints, it compresses checkpoints efficiently: lossless sparsification at a 16× compression ratio, and quantization at a 2× compression ratio with negligible accuracy degradation (<0.1%). Extensive experiments across diverse LLM scales demonstrate substantial reductions in storage footprint and GPU memory usage while preserving fault-tolerance recovery efficiency and end-to-end training stability.
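For intuition about the bitmask component, here is a minimal NumPy sketch (an illustration under assumed details, not the paper's implementation; `sparsify` and `densify` are hypothetical names). Storing a 1-bit presence mask plus only the non-zero values is exactly invertible, and for fp16 tensors the mask alone costs 1/16 of the original bytes, so 16× is the bound the ratio approaches as sparsity nears 100%.

```python
import numpy as np

def sparsify(tensor: np.ndarray):
    """Split a tensor into a packed 1-bit mask plus its non-zero values.
    Lossless: densify() reproduces the input exactly."""
    flat = tensor.ravel()
    mask = flat != 0
    packed_mask = np.packbits(mask)   # 8 mask bits per stored byte
    values = flat[mask]               # only non-zero entries are kept
    return packed_mask, values, tensor.shape

def densify(packed_mask, values, shape):
    """Inverse of sparsify(): scatter the stored values back through the mask."""
    n = int(np.prod(shape))
    mask = np.unpackbits(packed_mask, count=n).astype(bool)
    flat = np.zeros(n, dtype=values.dtype)
    flat[mask] = values
    return flat.reshape(shape)

# A 99%-zero fp16 tensor compresses to ~14x here; the 1-bit mask alone
# bounds the ratio at 16x as sparsity approaches 100%.
rng = np.random.default_rng(0)
t = rng.standard_normal((1024, 1024)).astype(np.float16)
t[rng.random(t.shape) < 0.99] = 0
m, v, s = sparsify(t)
assert np.array_equal(densify(m, v, s), t)  # exact round trip
print(f"ratio: {t.nbytes / (m.nbytes + v.nbytes):.1f}x")
```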
📝 Abstract
As large language models (LLMs) continue to grow in size and complexity, efficient checkpoint saving and loading has become crucial for managing storage, memory usage, and fault tolerance in LLM training. Existing work does not comprehensively optimize all of these aspects together. This paper proposes a novel checkpoint sparsification and quantization method that adapts dynamically to different training stages and model architectures. We present a comprehensive analysis of existing lossy and lossless compression techniques, identify their limitations, and introduce an adaptive approach that balances compression ratio, speed, and precision impact throughout the training process. Experiments on LLMs of different sizes demonstrate that our bitmask-based sparsification method achieves a 16x compression ratio without compromising model accuracy, and that our cluster-based quantization method achieves a 2x compression ratio with minimal precision loss.
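To make the cluster-based quantization concrete, here is a hedged sketch using scikit-learn's KMeans (an assumed, simplified stand-in for the paper's method; `kmeans_quantize` and `kmeans_dequantize` are hypothetical names). Clustering fp16 weight values into 256 centroids replaces each 2-byte weight with a 1-byte codebook index, which is where a roughly 2x ratio comes from.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(tensor: np.ndarray):
    """Cluster weight values into 256 centroids; store a 1-byte index per
    weight plus a small fp16 codebook (lossy)."""
    flat = tensor.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=256, n_init=1, random_state=0).fit(flat)
    codebook = km.cluster_centers_.ravel().astype(np.float16)
    indices = km.labels_.astype(np.uint8)  # 256 clusters fit in uint8
    return codebook, indices, tensor.shape

def kmeans_dequantize(codebook, indices, shape):
    """Reconstruct the tensor by looking each index up in the codebook."""
    return codebook[indices].reshape(shape)

# fp16 weights (2 bytes each) -> uint8 indices (1 byte) + a 256-entry
# codebook: roughly a 2x compression ratio.
w = np.random.default_rng(0).standard_normal((256, 256)).astype(np.float16)
cb, idx, shape = kmeans_quantize(w)
w_hat = kmeans_dequantize(cb, idx, shape)
err = np.abs(w.astype(np.float32) - w_hat.astype(np.float32)).max()
print(f"ratio: {w.nbytes / (idx.nbytes + cb.nbytes):.2f}x, max |err|: {err:.4f}")
```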