PRAC: Principal-Random Subspace for LLM Activation Compression and Memory-Efficient Training

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high activation memory cost that constitutes a major bottleneck in large-batch training of large language models. Existing compression methods are limited by their neglect of the spectral structure of activations. To overcome this, we propose a principal–random subspace decomposition approach: it preserves critical information in the principal component subspace via singular value decomposition (SVD) while approximating the tail components through unbiased random sampling in the orthogonal complement subspace. Our method establishes, for the first time, a theoretical link between subspace projection and rapid convergence, and introduces an exact scaling factor to minimize the variance of gradient estimates. Experiments demonstrate up to 36% activation memory savings during both pretraining and fine-tuning, with negligible performance degradation and minimal computational overhead.

📝 Abstract
Activations have become the primary memory bottleneck in large-batch LLM training. However, existing compression methods fail to exploit the spectral structure of activations, resulting in slow convergence or limited compression. To address this, we establish the relationship between an algorithm's fast convergence and the requirements on subspace projection, and show that an effective compression should yield an unbiased, low-variance estimate of the original activation. We propose Principal-Random Subspace for LLM Activation Compression (PRAC), which decomposes activations into two components: a principal subspace captured via SVD to retain the dominant information, and a random subspace sampled from the orthogonal complement to approximate the tail. By introducing a precise scaling factor, we prove that PRAC yields an unbiased gradient estimator with minimum variance under certain conditions. Extensive experiments on pre-training and fine-tuning tasks demonstrate that PRAC achieves up to 36% total memory reduction with negligible performance degradation and minimal computational cost.
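The principal-random decomposition described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation: the function names are hypothetical, the principal subspace is taken from a per-matrix SVD, and the complement directions are sampled as a Haar-random orthonormal frame.

```python
import numpy as np

def prac_compress(A, r, k, rng):
    """Compress activations A (n x d) into r principal + k random coefficients.

    Sketch of a PRAC-style compressor (assumption: principal subspace = top-r
    right singular vectors; random subspace = k Haar-random orthonormal
    directions inside the orthogonal complement)."""
    n, d = A.shape
    # Principal subspace: top-r right singular vectors of the activations.
    _, _, Vt = np.linalg.svd(A, full_matrices=True)
    V_r = Vt[:r].T                    # (d, r) principal basis
    V_perp = Vt[r:].T                 # (d, d - r) orthogonal-complement basis
    # Random subspace: QR of a Gaussian matrix gives a random k-frame.
    G = rng.standard_normal((d - r, k))
    Q, _ = np.linalg.qr(G)            # (d - r, k), orthonormal columns
    R = V_perp @ Q                    # (d, k) random basis in the complement
    # Only the low-dimensional coefficients (and bases) are stored.
    return A @ V_r, V_r, A @ R, R

def prac_reconstruct(C_p, V_r, C_r, R):
    """Reconstruct an unbiased estimate of A from the stored factors."""
    d, r = V_r.shape
    k = R.shape[1]
    # The (d - r) / k scaling makes the random term an unbiased estimate of
    # the tail: E[Q Q^T] = (k / (d - r)) * I for a Haar-random k-frame.
    return C_p @ V_r.T + ((d - r) / k) * (C_r @ R.T)
```

With k = d - r the random basis spans the whole complement and the reconstruction is exact; with k < d - r the scaling factor keeps the estimator unbiased at the cost of added variance, which is the bias-variance trade-off the paper's scaling factor is designed to control.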
Problem

Research questions and friction points this paper is trying to address.

activation compression
memory bottleneck
large language models
spectral structure
LLM training
Innovation

Methods, ideas, or system contributions that make the work stand out.

activation compression
principal-random subspace
unbiased gradient estimator
memory-efficient training
large language models
Yanyi Li
School of Intelligence Science and Technology, Peking University, China
Yimu Zhang
School of Intelligence Science and Technology, Peking University, China
Cong Fang
Peking University
machine learning · optimization · statistics