CompAct: Compressed Activations for Memory-Efficient LLM Training

📅 2024-10-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
During large language model (LLM) training, the activations stored for the backward pass are the dominant contributor to peak GPU memory consumption, yet most existing approaches target optimizer state or parameter memory and therefore leave this bottleneck largely intact. This work introduces a low-rank compression scheme designed specifically for activations: each layer's saved activations are compressed with random projections, and the compressed tensors are used directly when forming gradients in the backward pass. Because the projection matrices are random, they need not be stored or trained, so the method adds negligible memory and computational overhead. Experiments show GPU memory reductions of roughly 25-30% during pretraining and about 50% during fine-tuning with little loss in model quality, improving memory scalability and hardware utilization for large-model training.
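The core idea — saving a random low-rank projection of each activation and using it in place of the full activation when computing weight gradients — can be sketched in a few lines. The following is an illustrative NumPy sketch, not the paper's implementation; the function names, shapes, and the way the reconstruction enters the gradient are assumptions made for clarity.

```python
import numpy as np

def projection(d, rank, seed):
    # The projection matrix is regenerated from a seed on demand,
    # so it never has to be stored alongside the activations.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((d, rank)) / np.sqrt(rank)

def forward(x, W, rank, seed):
    """Linear layer y = x @ W that saves only a compressed copy of x."""
    P = projection(x.shape[1], rank, seed)
    c = x @ P                     # shape (batch, rank) instead of (batch, d)
    return x @ W, c

def backward(grad_y, c, W, rank, seed):
    """Backward pass using the compressed activation c in place of x."""
    P = projection(W.shape[0], rank, seed)
    x_hat = c @ P.T               # approximate reconstruction (E[P P^T] = I)
    grad_W = x_hat.T @ grad_y     # approximate weight gradient
    grad_x = grad_y @ W.T         # input gradient does not need x at all
    return grad_W, grad_x
```

In a real training stack this logic would live inside a custom autograd function; the explicit reconstruction step above is the simplest possible choice, made here for readability.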

📝 Abstract
We introduce CompAct, a technique that reduces peak memory utilization on GPU by 25-30% for pretraining and 50% for fine-tuning of LLMs. Peak device memory is a major limiting factor in training LLMs, with various recent works aiming to reduce model memory. However, most works don't target the largest component of allocated memory during training: the model's compute graph, which is stored for the backward pass. By storing low-rank, compressed activations to be used in the backward pass we greatly reduce the required memory, unlike previous methods which only reduce optimizer overheads or the number of trained parameters. Our compression uses random projection matrices, thus avoiding additional memory overheads. Comparisons with previous techniques for either pretraining or fine-tuning show that CompAct substantially improves existing compute-performance tradeoffs. We expect CompAct's savings to scale even higher for larger models.
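To see where the savings come from: the activation a linear layer must save scales with the hidden dimension d, while CompAct saves its rank-r projection, shrinking per-layer activation memory by roughly a factor of r/d. A back-of-the-envelope calculation with hypothetical sizes (none of these numbers come from the paper):

```python
# Hypothetical sizes for illustration only (not from the paper).
batch_tokens = 4096        # tokens in a micro-batch
hidden = 4096              # hidden dimension d
rank = 1024                # compression rank r
bytes_per_value = 2        # bf16

full = batch_tokens * hidden * bytes_per_value       # uncompressed activation
compressed = batch_tokens * rank * bytes_per_value   # what gets saved instead
print(full // 2**20, "MiB vs", compressed // 2**20, "MiB per layer")
# → 32 MiB vs 8 MiB per layer
```

The per-layer ratio is exactly r/d (here 1/4); the end-to-end savings reported in the abstract are smaller because other buffers (parameters, optimizer state, uncompressible activations) still occupy memory.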
Problem

Research questions and friction points this paper is trying to address.

Peak GPU memory limits the scale of LLM training
Activations stored for the backward pass dominate training memory, yet prior methods target optimizer state or parameters instead
Existing memory-reduction techniques offer poor compute-performance tradeoffs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compresses activations for memory efficiency
Uses random projection matrices
Reduces peak GPU memory usage