🤖 AI Summary
This work addresses the high storage overhead and resource contention caused by periodic full checkpointing in large language model training. The authors propose LLMTailor, a layer-wise pruned checkpoint fusion framework that exploits the non-uniform update patterns across model layers, retaining only significantly updated layers to construct composite checkpoints. The framework provides, for the first time, fine-grained layer-level control over both model weights and optimizer states, supports flexible integration of diverse selection strategies, and incorporates a delta-aware merging mechanism. Evaluated on Llama3.1-8B and Qwen2.5-7B, it reduces checkpoint size by up to 4.3× (Llama3.1-8B) and saving time by up to 2.8× (Qwen2.5-7B) while preserving training convergence and final model quality.
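For intuition, here is a minimal sketch of what delta-aware layer selection could look like in PyTorch. The function names, the relative-L2 change criterion, the threshold, and the checkpoint layout are all illustrative assumptions, not LLMTailor's actual interface.

```python
# Hypothetical sketch of delta-aware layer selection (illustrative, not
# LLMTailor's actual API): save only the layers whose weights changed
# significantly since the last checkpoint, plus their optimizer state.
import torch

def select_updated_layers(model, last_saved, threshold=1e-3):
    """Return names of parameters whose relative update exceeds `threshold`."""
    selected = []
    for name, param in model.named_parameters():
        prev = last_saved[name]  # CPU copy of this parameter at the last save
        # Relative L2 change of this parameter since the last saved copy.
        delta = torch.norm(param.detach().cpu() - prev) / (torch.norm(prev) + 1e-12)
        if delta.item() > threshold:
            selected.append(name)
    return selected

def save_pruned_checkpoint(model, optimizer, last_saved, step, threshold=1e-3):
    names = set(select_updated_layers(model, last_saved, threshold))
    # Keep only the selected weights...
    weights = {n: p.detach().cpu() for n, p in model.named_parameters()
               if n in names}
    # ...and, for fine-grained control, only the optimizer state (e.g. Adam
    # moments) that belongs to those same parameters.
    name_of = {id(p): n for n, p in model.named_parameters()}
    opt_state = {name_of[id(p)]: s for p, s in optimizer.state.items()
                 if name_of.get(id(p)) in names}
    torch.save({"step": step, "weights": weights, "optimizer": opt_state},
               f"ckpt_step{step}.pt")
```

Other selection strategies (e.g., top-k layers by update magnitude, or per-layer schedules) could be swapped in by replacing `select_updated_layers`.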
📝 Abstract
Checkpointing is essential for fault tolerance in training large language models (LLMs). However, existing methods, regardless of their I/O strategies, periodically store the entire model and optimizer states, incurring substantial storage overhead and resource contention. Recent studies reveal that updates across LLM layers are highly non-uniform: across training steps, some layers undergo significant changes while others remain relatively stable or even unchanged. This suggests that selectively checkpointing only the layers with significant updates could reduce overhead without harming training. Implementing such selective strategies requires fine-grained control over both weights and optimizer states, which no current tool provides. To address this gap, we propose LLMTailor, a checkpoint-merging framework that filters and assembles layers from different checkpoints to form a composite checkpoint. Our evaluation shows that LLMTailor works with different selective checkpointing strategies and effectively reduces checkpoint size (e.g., 4.3× smaller for Llama3.1-8B) and checkpoint time (e.g., 2.8× faster for Qwen2.5-7B) while maintaining model quality.
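To illustrate the merging side, the following is a hypothetical sketch of how a full baseline checkpoint and subsequent layer-wise pruned checkpoints could be fused back into one composite checkpoint. The file layout and all names are assumptions for exposition, not the paper's implementation.

```python
# Hypothetical reconstruction sketch (file layout and names are assumptions,
# not the paper's implementation): fuse a full baseline checkpoint with newer
# layer-wise pruned checkpoints so the most recent copy of each layer wins.
import torch

def assemble_composite(baseline_path, pruned_paths):
    """Rebuild a complete state from a baseline plus pruned checkpoints.

    `pruned_paths` must be ordered oldest-first so that newer layer copies
    overwrite older ones.
    """
    ckpt = torch.load(baseline_path, map_location="cpu")
    weights, opt_state = dict(ckpt["weights"]), dict(ckpt["optimizer"])
    step = ckpt["step"]
    for path in pruned_paths:
        partial = torch.load(path, map_location="cpu")
        weights.update(partial["weights"])      # newer layers replace stale ones
        opt_state.update(partial["optimizer"])  # same for optimizer state
        step = partial["step"]
    return {"step": step, "weights": weights, "optimizer": opt_state}

# Usage: resume training from the fused state as if a full checkpoint existed.
# state = assemble_composite("ckpt_full.pt", ["ckpt_step100.pt", "ckpt_step200.pt"])
```

Because the fused result contains every layer's most recent weights and optimizer state, training can resume from it exactly as from a conventional full checkpoint.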