BOOST: BOttleneck-Optimized Scalable Training Framework for Low-Rank Large Language Models

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pre-training large transformer models with low-rank bottleneck architectures suffers from poor scalability, high communication overhead, and low GPU utilization under standard tensor parallelism. This paper proposes the first efficient distributed training framework designed specifically for bottleneck structures. Its core innovations include: (1) bottleneck-aware tensor parallelism, adapted to the skewed weight shapes of low-rank layers; (2) online RMSNorm, which eliminates the storage of intermediate normalization results; (3) grouped linear-layer computation; (4) low-rank activation checkpointing; and (5) communication optimization. Experiments demonstrate that, on identical hardware, the method achieves a 1.46–1.91× speedup over full-rank baselines and a 1.87–2.27× speedup over naive 3D-parallel low-rank implementations, while significantly improving GPU utilization and reducing inter-GPU communication volume.
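To make the tensor-parallelism discussion concrete, here is a toy numpy simulation of one plausible way to shard a low-rank layer W = U·V across devices: split the inner rank r, so each "device" holds a slice of U's columns and V's rows and contributes a partial output. This is an illustrative assumption, not necessarily BOOST's bottleneck-aware partitioning; `bottleneck_tp` and the rank-dimension split are hypothetical names/choices.

```python
import numpy as np

def bottleneck_tp(x, U, V, num_devices):
    """Simulate tensor parallelism for a low-rank layer W = U @ V.

    Toy scheme (an assumption for illustration, not the paper's exact
    partitioning): split the inner rank r across devices, so device i
    holds U[:, s_i] and V[s_i, :] and computes a partial output. Summing
    the partials stands in for the all-reduce between GPUs.
    """
    r = U.shape[1]
    shards = np.array_split(np.arange(r), num_devices)
    partials = [(x @ U[:, s]) @ V[s, :] for s in shards]
    return sum(partials)

rng = np.random.default_rng(0)
d, r = 256, 16
x = rng.standard_normal((2, d))
U = rng.standard_normal((d, r))
V = rng.standard_normal((r, d))
out = bottleneck_tp(x, U, V, num_devices=4)
print(np.allclose(out, (x @ U) @ V))  # True
```

Because the shards partition the rank dimension, the summed partials reproduce the unsharded product exactly; the interesting systems question (which BOOST addresses) is how much data that reduction moves between GPUs.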

📝 Abstract
The scale of transformer model pre-training is constrained by increasing computation and communication costs. Low-rank bottleneck architectures offer a promising way to significantly reduce training time and memory footprint with minimal impact on accuracy. Despite their algorithmic efficiency, bottleneck architectures scale poorly under standard tensor parallelism: simply applying 3D parallelism designed for full-rank models leads to excessive communication and poor GPU utilization. To address this limitation, we propose BOOST, an efficient training framework tailored for large-scale low-rank bottleneck architectures. BOOST introduces a novel Bottleneck-aware Tensor Parallelism and combines optimizations such as online-RMSNorm, linear layer grouping, and low-rank activation checkpointing to achieve end-to-end training speedup. Evaluations on different low-rank bottleneck architectures demonstrate that BOOST achieves 1.46–1.91× speedup over full-rank model baselines and 1.87–2.27× speedup over low-rank models with naively integrated 3D parallelism, with improved GPU utilization and reduced communication overhead.
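As background for the abstract's claims, a low-rank bottleneck layer replaces a full-rank weight W ∈ R^{d×d} with a factorization U·V of inner rank r ≪ d. A minimal numpy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def full_rank_linear(x, W):
    # Standard linear layer: d_in * d_out parameters and FLOPs per token.
    return x @ W

def bottleneck_linear(x, U, V):
    # Low-rank bottleneck: W is factored as U @ V with inner rank r << d,
    # cutting parameters and FLOPs from d_in*d_out to r*(d_in + d_out).
    return (x @ U) @ V

d, r = 1024, 64
print(d * d)        # full-rank parameter count: 1048576
print(r * (d + d))  # bottleneck parameter count: 131072
```

The narrow rank-r activation between the two factors is what makes naive tensor parallelism awkward: the two weight matrices are highly non-square, so sharding schemes tuned for square full-rank layers waste compute and communication.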
Problem

Research questions and friction points this paper is trying to address.

Optimizes training for low-rank LLMs to reduce computation and communication costs
Addresses poor scalability of bottleneck architectures under standard tensor parallelism
Enhances GPU utilization and reduces overhead in large-scale low-rank model training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bottleneck-aware Tensor Parallelism for low-rank models
Online-RMSNorm and linear layer grouping optimizations
Low-rank activation checkpointing for end-to-end speedup
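The online-RMSNorm idea, described above as eliminating intermediate normalization storage, can be illustrated with a minimal numpy sketch. The memory-light variant here is an illustrative assumption (keep only the per-row rms scalar and recompute the normalized tensor when needed); BOOST's exact scheme may differ.

```python
import numpy as np

def rmsnorm(x, g, eps=1e-6):
    # RMSNorm: rescale each row by its root-mean-square, then apply gain g.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * g

def rmsnorm_forward_light(x, g, eps=1e-6):
    # Memory-light variant (illustrative assumption): save only the per-row
    # rms scalar for the backward pass and recompute x / rms on the fly,
    # instead of caching the full normalized activation tensor.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * g, rms

x = np.ones((2, 4))
g = np.ones(4)
print(rmsnorm(x, g))  # ≈ all ones (rms of a row of ones is 1)
```

Saving one scalar per token instead of a d-dimensional vector is the kind of trade (tiny recompute for large activation-memory savings) that also motivates the low-rank activation checkpointing bullet above.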