🤖 AI Summary
Pretraining large transformer models with low-rank bottleneck architectures scales poorly under standard tensor parallelism, suffering from high communication overhead and low GPU utilization. This paper proposes BOOST, the first efficient distributed training framework designed specifically for bottleneck structures. Its core innovations are: (1) bottleneck-aware tensor parallelism, adapted to the skewed weight distribution of low-rank layers; (2) online RMSNorm, which eliminates the storage of intermediate normalized activations; (3) grouped linear-layer computation; (4) low-rank activation checkpointing; and (5) communication optimizations. On identical hardware, BOOST achieves a 1.46–1.91× speedup over full-rank baselines and a 1.87–2.27× speedup over naive 3D-parallel low-rank implementations, while significantly improving GPU utilization and reducing inter-GPU communication volume.
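To make the efficiency argument concrete, here is a minimal sketch of a low-rank bottleneck linear layer, not the paper's exact architecture: a full-rank weight `W` of shape `d_out × d_in` is replaced by two skinny factors `U` and `V` with rank `r` much smaller than the layer width, which cuts both parameters and matmul FLOPs. All dimensions below are illustrative.

```python
import numpy as np

# Illustrative sketch of a low-rank bottleneck layer (not the paper's code):
# W (d_out x d_in) is approximated as U @ V with U (d_out x r), V (r x d_in),
# where the rank r is much smaller than the layer width.
d_in, d_out, r = 4096, 4096, 256

rng = np.random.default_rng(0)
U = rng.standard_normal((d_out, r))
V = rng.standard_normal((r, d_in))
x = rng.standard_normal(d_in)

# Bottleneck forward pass: two skinny matmuls replace one square matmul.
y = U @ (V @ x)

full_params = d_in * d_out              # parameters of the full-rank layer
low_rank_params = r * (d_in + d_out)    # parameters of the two factors
print(f"parameter reduction: {full_params / low_rank_params:.1f}x")
# prints "parameter reduction: 8.0x"
```

The factor shapes (tall `U`, wide `V`) are exactly the skewed weight layout that the paper's bottleneck-aware tensor parallelism is designed around.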
📝 Abstract
The scale of transformer model pre-training is constrained by increasing computation and communication costs. Low-rank bottleneck architectures offer a promising way to significantly reduce training time and memory footprint with minimal impact on accuracy. Despite their algorithmic efficiency, bottleneck architectures scale poorly under standard tensor parallelism: simply applying 3D parallelism designed for full-rank models leads to excessive communication and poor GPU utilization. To address this limitation, we propose BOOST, an efficient training framework tailored for large-scale low-rank bottleneck architectures. BOOST introduces a novel Bottleneck-aware Tensor Parallelism and combines optimizations such as online RMSNorm, linear-layer grouping, and low-rank activation checkpointing to achieve end-to-end training speedup. Evaluations on different low-rank bottleneck architectures demonstrate that BOOST achieves 1.46-1.91$\times$ speedup over full-rank model baselines and 1.87-2.27$\times$ speedup over low-rank models with naively integrated 3D parallelism, with improved GPU utilization and reduced communication overhead.
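The online-RMSNorm idea can be illustrated with a small sketch, assuming (as the summary suggests) that the point is to cache only the per-token RMS scalar and recompute the normalized tensor on demand rather than materializing it; this is a hedged interpretation, not the paper's implementation.

```python
import numpy as np

# Sketch of the memory-saving idea behind an "online" RMSNorm (assumed
# interpretation): keep only the per-token rms scalar, since the full
# normalized activation tensor can be recomputed from x and rms.
def rmsnorm(x, gain, eps=1e-6):
    # rms has one scalar per token; x and y have the full hidden width.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * gain, rms

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))   # 2 tokens, hidden size 8 (illustrative)
g = np.ones(8)
y, rms = rmsnorm(x, g)

# Recomputing the output from the cached scalar matches the stored output,
# so only rms (shape (2, 1)) needs to be kept, not y (shape (2, 8)).
assert np.allclose(y, (x / rms) * g)
```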