🤖 AI Summary
During large language model (LLM) training, the activations saved for the backward pass are the dominant contributor to peak GPU memory consumption; existing approaches, which mostly target optimizer state or the number of trained parameters, do not address this bottleneck. This work introduces a low-rank compression scheme designed specifically for activations: the forward pass saves activations compressed with random projection matrices, which need not themselves be stored, and the backward pass computes gradients from the compressed representations. Experiments demonstrate GPU memory reductions of 25–30% during pretraining and about 50% during fine-tuning while substantially improving existing compute-performance tradeoffs, and the savings are expected to grow with model size.
📝 Abstract
We introduce CompAct, a technique that reduces peak GPU memory utilization by 25-30% for pretraining and by 50% for fine-tuning of LLMs. Peak device memory is a major limiting factor in training LLMs, and various recent works aim to reduce model memory. However, most do not target the largest component of allocated memory during training: the model's compute graph, which is stored for the backward pass. By storing low-rank, compressed activations for use in the backward pass, we greatly reduce the required memory, unlike previous methods which only reduce optimizer overheads or the number of trained parameters. Our compression uses random projection matrices, thereby avoiding additional memory overheads. Comparisons with previous techniques for both pretraining and fine-tuning show that CompAct substantially improves existing compute-performance tradeoffs. We expect CompAct's savings to scale even higher for larger models.
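The core idea can be illustrated with a small NumPy sketch. This is not the authors' implementation; it is a hedged toy example of the general mechanism the abstract describes: for a linear layer, save the activation projected to a low rank by a random matrix (which can be regenerated from a seed rather than stored), then form an unbiased low-rank estimate of the weight gradient from the compressed activation. All variable names and the specific estimator below are illustrative assumptions.

```python
# Illustrative sketch, NOT CompAct's actual code: random-projection
# compression of a linear layer's saved activation.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out, rank = 64, 512, 256, 64   # rank << d_in

x = rng.standard_normal((batch, d_in))        # activation entering the layer
W = rng.standard_normal((d_in, d_out)) * 0.02
y = x @ W                                     # forward pass

# Random projection P has entries ~ N(0, 1/rank), so E[P P^T] = I.
# In practice P can be regenerated from a seed in the backward pass,
# so only the (batch x rank) compressed activation is kept in memory.
P = rng.standard_normal((d_in, rank)) / np.sqrt(rank)
x_compressed = x @ P                          # stored instead of x

grad_y = rng.standard_normal((batch, d_out))  # incoming gradient

# Exact weight gradient needs the full activation:
grad_W_exact = x.T @ grad_y
# With the compressed activation we get an unbiased low-rank estimate:
#   P (x P)^T grad_y = P P^T x^T grad_y  ~  x^T grad_y  in expectation.
grad_W_approx = P @ (x_compressed.T @ grad_y)

savings = 1 - x_compressed.size / x.size      # activation memory saved
rel_err = (np.linalg.norm(grad_W_approx - grad_W_exact)
           / np.linalg.norm(grad_W_exact))
print(f"memory saved: {savings:.0%}, relative gradient error: {rel_err:.2f}")
```

At this toy scale the projection saves 87.5% of the activation memory at the cost of a noisy gradient estimate; the paper's contribution is showing that, applied across a transformer with appropriately chosen ranks, this tradeoff preserves training quality while cutting peak memory by the percentages quoted above.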