🤖 AI Summary
To address the high approximation error that low-rank compression incurs on large language models (LLMs), caused by the large variance of activation distributions and by pronounced sensitivity disparities across layers, this paper proposes ASVD, a training-free post-training compression method. Methodologically: (1) it introduces an activation-distribution-driven weight pre-transformation that absorbs outlier activations into the weight matrix before decomposition; (2) it employs a layer-adaptive iterative calibration strategy to handle the uneven sensitivity across layers; and (3) it extends low-rank decomposition to channel-wise compression of the KV cache. Technically, ASVD combines activation statistics, an activation-weighted singular value decomposition (SVD), and channel-wise dimensionality reduction. Experiments show that, without any fine-tuning or training, ASVD compresses network weights by 10%–30% and reduces KV cache memory by 50% with no performance drop, enabling plug-and-play deployment.
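The weight pre-transformation in step (1) can be sketched as follows: scale each input channel of the weight matrix by an activation statistic so that outlier channels are prioritized by the truncated SVD, then fold the inverse scale into one of the factors. This is a minimal NumPy sketch under assumed details; the per-channel mean-absolute-activation scale and the name `asvd_compress` are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def asvd_compress(W, X, rank):
    """Low-rank factors for W (out x in), given calibration activations X (n x in)."""
    # Per-input-channel scale from activation statistics (illustrative choice):
    # outlier channels get larger scales, so the SVD preserves them better.
    s = np.abs(X).mean(axis=0) + 1e-6            # shape (in,)
    # Absorb the scales into the weight: decompose W @ diag(s) instead of W.
    U, sigma, Vt = np.linalg.svd(W * s, full_matrices=False)
    A = U[:, :rank] * sigma[:rank]               # (out, rank)
    B = Vt[:rank] / s[None, :]                   # (rank, in), undo the scaling
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))
X = rng.normal(size=(256, 128))
X[:, 0] *= 50.0                                  # simulate one outlier channel

A, B = asvd_compress(W, X, rank=32)

# Compare output error (on the calibration activations) against plain truncated SVD.
err_asvd = np.linalg.norm(X @ (A @ B).T - X @ W.T)
U, sg, Vt = np.linalg.svd(W, full_matrices=False)
W_svd = (U[:, :32] * sg[:32]) @ Vt[:32]
err_svd = np.linalg.norm(X @ W_svd.T - X @ W.T)
print(err_asvd, err_svd)
```

With a strong outlier channel in the calibration data, the activation-weighted factorization yields a noticeably lower output error than plain truncated SVD at the same rank, which is the intuition behind absorbing outliers into the weights.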
📝 Abstract
In this paper, we introduce a new post-training compression paradigm for Large Language Models (LLMs) to facilitate their wider adoption. We delve into low-rank decomposition of LLM weights and find that the challenges of this task stem from the distribution variance of LLM activations and the sensitivity differences among various kinds of layers. To address these issues, we propose a training-free approach called Activation-aware Singular Value Decomposition (ASVD). Specifically, ASVD manages activation outliers by transforming the weight matrix based on the activation distribution. This transformation allows the outliers in the activation matrix to be absorbed into the transformed weight matrix, thereby improving decomposition accuracy. Additionally, we propose an efficient iterative calibration process that optimizes the per-layer decomposition by accounting for the varying sensitivity of different LLM layers. In this way, ASVD can compress a network by 10%–30%. Building on the successful low-rank decomposition of the projection matrices in the self-attention module, we further apply ASVD to compress the KV cache. By reducing the channel dimension of KV activations, the memory required for the KV cache can be substantially reduced: ASVD achieves a 50% KV cache reduction without a performance drop, in a training-free manner. Code is anonymously available in the supplementary materials.
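As a minimal sketch of why a low-rank projection matrix shrinks the KV cache: if the key projection `W_k` factors into `A @ B` with rank `r`, the `r`-dimensional latent `X @ B.T` can be cached per token instead of the full `d`-dimensional keys, which are rebuilt on demand. The single-head NumPy setting and all names here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

d, r, T = 64, 32, 10                  # head dim, reduced channel dim, tokens
rng = np.random.default_rng(1)
A = rng.normal(size=(d, r))           # factors of a low-rank key projection
B = rng.normal(size=(r, d))           # W_k ≈ A @ B, rank r = d / 2

X = rng.normal(size=(T, d))           # token hidden states
latent_cache = X @ B.T                # (T, r): what actually gets stored
K = latent_cache @ A.T                # (T, d): keys reconstructed at attention time

K_full = X @ (A @ B).T                # keys from applying the full projection
print(latent_cache.nbytes, K_full.nbytes)  # cached bytes vs. full-key bytes
```

Since `latent_cache @ A.T == X @ (A @ B).T` by associativity, caching the latent loses nothing relative to the factored projection, while halving cache memory at `r = d/2`; the same trick applies to the value projection.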