ResSVD: Residual Compensated SVD for Large Language Model Compression

📅 2025-05-26
🤖 AI Summary
To address the error that accumulates when singular value decomposition (SVD) truncation residuals are ignored, and the sharp performance degradation caused by uniformly compressing every layer, this work proposes a residual-compensation and critical-layer selective compression framework for large language model (LLM) compression. First, it explicitly models the SVD truncation residual as a learnable compensation term to mitigate approximation error. Second, guided by layer-importance analysis, it compresses only the last few transformer layers under a fixed overall compression ratio, suppressing forward propagation of reconstruction errors. The method integrates post-training SVD compression, residual-matrix modeling, and importance-aware layer selection. Extensive evaluation across diverse LLMs (Llama-2/3, Qwen) and benchmarks (GLUE, MMLU, CMMLU) demonstrates substantial improvements over existing SVD-based approaches: at high compression ratios (e.g., removing 50% of parameters), it suffers markedly less accuracy degradation, averaging a +2.1% absolute accuracy gain, thereby balancing efficiency and fidelity.
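The summary above centers on the SVD truncation residual. A minimal numpy sketch of what that residual is (an illustration of the quantity being compensated, not the paper's actual implementation; all names here are hypothetical):

```python
import numpy as np

def svd_truncate(W: np.ndarray, rank: int) -> np.ndarray:
    """Best rank-`rank` approximation of W via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))   # stand-in for a weight matrix

W_r = svd_truncate(W, rank=16)
residual = W - W_r                    # what plain SVD methods discard

# The truncation loss is exactly the norm of this residual; modeling
# the residual rather than dropping it is the compensation idea
# described above.
loss = np.linalg.norm(residual)
```

By construction `W_r + residual` reconstructs `W` exactly, so any information retained from the residual directly reduces the approximation error that plain truncation incurs.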

📝 Abstract
Large language models (LLMs) have demonstrated impressive capabilities in a wide range of downstream natural language processing tasks. Nevertheless, their considerable sizes and memory demands hinder practical deployment, underscoring the importance of developing efficient compression strategies. Singular value decomposition (SVD) decomposes a matrix into orthogonal components, enabling efficient low-rank approximation. This is particularly suitable for LLM compression, where weight matrices often exhibit significant redundancy. However, current SVD-based methods neglect the residual matrix from truncation, resulting in significant truncation loss. Additionally, compressing all layers of the model results in severe performance degradation. To overcome these limitations, we propose ResSVD, a new post-training SVD-based LLM compression method. Specifically, we leverage the residual matrix generated during the truncation process to reduce truncation loss. Moreover, under a fixed overall compression ratio, we selectively compress the last few layers of the model, which mitigates error propagation and significantly improves the performance of compressed models. Comprehensive evaluations of ResSVD on diverse LLM families and multiple benchmark datasets indicate that ResSVD consistently achieves superior performance over existing counterpart methods, demonstrating its practical effectiveness.
Problem

Research questions and friction points this paper is trying to address.

Existing SVD-based LLM compression discards the truncation residual, incurring large truncation loss
Uniformly compressing all layers causes severe performance degradation
Compressed LLMs lag their full-size counterparts in accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

ResSVD leverages residual matrix for truncation loss reduction
Selective compression of last few layers mitigates error
Post-training SVD method improves LLM compression performance
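The second point, compressing only the last few layers under a fixed overall ratio, implies each compressed layer must shed more parameters than the global target. A hedged sketch of that bookkeeping, assuming (as a simplification the paper does not necessarily make) that all layers hold equal parameter counts:

```python
def per_layer_ratio(num_layers: int, num_compressed: int,
                    overall_ratio: float) -> float:
    """Compression ratio (fraction of parameters removed) that each of
    the last `num_compressed` layers must take so the whole model meets
    `overall_ratio`, assuming uniform layer sizes. Hypothetical helper
    for illustration, not the paper's code."""
    ratio = overall_ratio * num_layers / num_compressed
    if ratio >= 1.0:
        raise ValueError("too few layers to reach the target ratio")
    return ratio

# e.g. a 20% overall reduction taken only on the last 8 of 32 layers
# requires removing 80% of each compressed layer's parameters.
```

The trade-off this exposes is the one the paper navigates: concentrating compression on fewer (late, less error-propagating) layers spares the early layers but forces a more aggressive per-layer ratio on the compressed ones.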
👥 Authors
Haolei Bai
Westlake University, Nanyang Technological University
Siyong Jian
Westlake University, Nanjing University
Tuo Liang
Case Western Reserve University
Yu Yin
Case Western Reserve University
Huan Wang
Westlake University