🤖 AI Summary
During large language model (LLM) training, attention mechanisms are prone to numerical failures—such as INF/NaN values—that cause abrupt interruptions and severely degrade training efficiency. To address this, we propose the first algorithm-level fault-tolerant method specifically designed for Transformer attention modules. Our approach pioneers the deep adaptation of Algorithm-Based Fault Tolerance (ABFT) to scaled dot-product attention, leveraging error propagation analysis to devise a lightweight matrix checksum encoding scheme and a dynamic error detection-and-recovery mechanism. The method ensures high reliability while achieving system-level performance optimization: it incurs only 7% average training overhead across four mainstream LLMs, achieves 100% detection and correction of extreme numerical errors, and reduces fault recovery cost by up to 49× compared to conventional checkpoint-based recovery.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable performance in various natural language processing tasks. However, training these models is computationally intensive and susceptible to faults, particularly in the attention mechanism, a critical component of Transformer-based LLMs. In this paper, we investigate the impact of faults on LLM training through systematic fault-injection experiments, focusing on INF, NaN, and near-INF values in the computation results. We observe the propagation patterns of these errors, which can trigger non-trainable states in the model and disrupt training, forcing the procedure to reload from checkpoints. To mitigate the impact of these faults, we propose ATTNChecker, the first Algorithm-Based Fault Tolerance (ABFT) technique tailored to the attention mechanism in LLMs. ATTNChecker is designed around the fault propagation patterns of LLMs and incorporates performance optimizations that adapt to both system reliability and model vulnerability, providing lightweight protection for fast LLM training. Evaluations on four LLMs show that ATTNChecker incurs on average 7% training overhead while detecting and correcting all extreme errors. Compared with the state-of-the-art checkpoint/restore approach, ATTNChecker reduces recovery overhead by up to 49×.
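To make the checksum idea concrete, here is a minimal sketch of classic ABFT for a single matrix multiplication, the building block of scaled dot-product attention. This is an illustration of the general encoding principle, not the paper's exact scheme: the function names (`abft_matmul`, `verify`) and the use of NumPy are assumptions for the example. A checksum row is appended to the left operand and a checksum column to the right operand; after the multiply, recomputed sums are compared against the encoded ones, so any corrupted element (including INF/NaN) causes a mismatch.

```python
import numpy as np

def abft_matmul(A, B):
    """Multiply with ABFT encoding: the returned matrix Cf holds
    C = A @ B in Cf[:-1, :-1], with its column checksums in the
    last row and its row checksums in the last column."""
    Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum column
    return Ac @ Br

def verify(Cf, rtol=1e-6):
    """Recompute the checksums of the data block and compare them with
    the encoded ones; a mismatch (or any INF/NaN) signals a fault."""
    C = Cf[:-1, :-1]
    return (np.allclose(Cf[-1, :-1], C.sum(axis=0), rtol=rtol)
            and np.allclose(Cf[:-1, -1], C.sum(axis=1), rtol=rtol))
```

In an attention layer, the same encoding would wrap products such as QKᵀ and the attention-weight/value multiply; when `verify` fails, the faulty element can be located from which row and column checksums disagree and corrected from the checksum difference, avoiding a full checkpoint rollback.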