🤖 AI Summary
To address the prohibitive computational and memory overhead of training large language models (LLMs) from scratch, this paper proposes LOST, a novel method that jointly models low-rank and sparse structures for efficient pretraining. LOST decomposes model weights via singular value decomposition (SVD), using the dominant singular vectors as a low-rank basis while incorporating channel-wise sparse residual terms; both components are optimized end-to-end. This co-design mitigates the information loss inherent in conventional low-rank approximations, preserving representational capacity while improving efficiency. LOST enables scalable pretraining across model sizes ranging from 60M to 7B parameters. Empirically, it matches or surpasses full-rank baselines across multi-scale downstream tasks, reduces GPU memory consumption by 38%–52%, and cuts FLOPs by 29%–47%. The implementation is publicly available.
📄 Abstract
While large language models (LLMs) have achieved remarkable performance across a wide range of tasks, their massive scale incurs prohibitive computational and memory costs for pre-training from scratch. Recent studies have investigated low-rank parameterization as a means of reducing model size and training cost. In this context, sparsity is often employed as a complementary technique to recover important information lost in low-rank compression by capturing salient features in the residual space. However, existing approaches typically combine low-rank and sparse components in a simplistic or ad hoc manner, often resulting in undesirable performance degradation compared to full-rank training. In this paper, we propose **LO**w-rank and **S**parse pre-**T**raining (**LOST**) for LLMs, a novel method that integrates low-rank and sparse structures to enable effective training of LLMs from scratch under strict efficiency constraints. LOST applies singular value decomposition to weight matrices, preserving the dominant low-rank components, while allocating the remaining singular values to construct channel-wise sparse components that complement the expressiveness of low-rank training. We evaluate LOST on LLM pretraining ranging from 60M to 7B parameters. Our experiments show that LOST achieves competitive or superior performance compared to full-rank models, while significantly reducing both memory and compute overhead. Code is available at [LOST Repo](https://github.com/JiaxiLi1/LOST-Low-rank-and-Sparse-Training-for-Large-Language-Models).
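The decomposition described above can be sketched in NumPy. This is only an illustrative initialization of the two components, not the paper's training procedure (which optimizes both parts end-to-end); the function name, the choice of selecting output channels by residual energy, and the `rank`/`sparse_channels` parameters are assumptions for the sake of the example.

```python
import numpy as np

def lost_decompose(W, rank, sparse_channels):
    """Split a weight matrix into a low-rank part plus a
    channel-wise sparse residual (illustrative sketch)."""
    # SVD of the weight matrix: W = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(W, full_matrices=False)

    # Low-rank component built from the top-`rank` singular triplets
    low_rank = U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank, :]

    # Residual information discarded by the low-rank truncation
    residual = W - low_rank

    # Channel-wise sparsity: keep only the output channels (rows)
    # carrying the most residual energy, zero out the rest
    energy = np.linalg.norm(residual, axis=1)
    keep = np.argsort(energy)[-sparse_channels:]
    sparse = np.zeros_like(W)
    sparse[keep] = residual[keep]

    return low_rank, sparse
```

Adding the sparse residual can only reduce the reconstruction error relative to the low-rank truncation alone, which is the intuition behind pairing the two components.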