BitSnap: Checkpoint Sparsification and Quantization in LLM Training

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high storage overhead, memory consumption, and fault-tolerance cost of checkpointing in large language model (LLM) training, this paper proposes a dynamic adaptive checkpoint compression method. Our approach innovatively integrates bit-mask-based structured sparsification with K-means clustering–guided low-bit quantization, augmented by training-phase- and layer-structure-aware dynamic sparsification and quantization policies. Under strict numerical fidelity constraints, it achieves efficient checkpoint compression: lossless sparsification at 16× compression ratio and quantization at 2× compression with negligible accuracy degradation (<0.1%). Extensive experiments across diverse LLM scales demonstrate substantial reductions in storage footprint and GPU memory usage, while preserving fault-tolerance recovery efficiency and end-to-end training stability.
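The bit-mask-based sparsification described above can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: zeros (or near-zeros) in a checkpoint tensor are dropped, and a one-bit-per-element mask records which positions held kept values, allowing exact reconstruction. The `threshold` parameter and function names are assumptions for illustration.

```python
def sparsify_with_bitmask(tensor, threshold=0.0):
    """Split a flat list of weights into (bitmask, kept values).

    Elements with |w| <= threshold are treated as zero; the bitmask
    stores one bit per element marking whether the value was kept.
    NOTE: illustrative sketch, not the paper's actual code.
    """
    bitmask = [abs(w) > threshold for w in tensor]
    values = [w for w, keep in zip(tensor, bitmask) if keep]
    return bitmask, values


def densify(bitmask, values):
    """Losslessly reconstruct the original tensor from (bitmask, values)."""
    it = iter(values)
    return [next(it) if keep else 0.0 for keep in bitmask]
```

For a tensor of n fp32 weights, storage drops from 32n bits to 32k + n bits, where k is the number of kept values, so the achievable ratio depends directly on how sparse the checkpoint state is at that training phase (roughly 97% zeros would be needed for the 16x figure under this simple encoding).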

📝 Abstract
As large language models (LLMs) continue to grow in size and complexity, efficient checkpoint saving and loading has become crucial for managing storage, memory usage, and fault tolerance in LLM training. Existing works do not comprehensively optimize across all of these aspects. This paper proposes a novel checkpoint sparsification and quantization method that adapts dynamically to different training stages and model architectures. We present a comprehensive analysis of existing lossy and lossless compression techniques, identify their current limitations, and introduce an adaptive approach that balances compression ratio, speed, and precision impact throughout the training process. Experiments on LLMs of different sizes demonstrate that our bitmask-based sparsification method achieves a 16x compression ratio without compromising model accuracy, and that our cluster-based quantization method achieves a 2x compression ratio with little precision loss.
Problem

Research questions and friction points this paper is trying to address.

Optimizes storage and memory for LLM training checkpoints
Dynamically adapts compression to training stages and architectures
Balances compression ratio with model accuracy preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive checkpoint sparsification and quantization method
Bitmask-based sparsification achieves 16x compression
Cluster-based quantization achieves 2x compression
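The cluster-based quantization in the list above can be sketched with a simple 1-D K-means codebook: weights are clustered into 2^bits centroids, and each weight is replaced by a low-bit index into that codebook. This is a hedged sketch of the generic technique (the paper's exact clustering and bit-width policy is not shown here); `iters`, `bits`, and all function names are assumptions.

```python
def kmeans_1d(values, k, iters=20):
    """Cluster scalar weights into k centroids (plain Lloyd's algorithm)."""
    lo, hi = min(values), max(values)
    # initialize centroids evenly across the value range
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # assign each value to its nearest centroid
        assign = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # move each centroid to the mean of its assigned values
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    # final assignment against the updated centroids
    assign = [min(range(k), key=lambda c: abs(v - centroids[c]))
              for v in values]
    return centroids, assign


def quantize(values, bits=4):
    """Store a small codebook plus a `bits`-wide index per weight."""
    centroids, codes = kmeans_1d(values, 2 ** bits)
    return centroids, codes


def dequantize(centroids, codes):
    """Approximate reconstruction: each weight becomes its centroid."""
    return [centroids[c] for c in codes]
```

Unlike the bitmask path, this reconstruction is lossy: each weight is snapped to its cluster centroid, which is why quantization is reported with a small precision loss rather than bit-exact recovery.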