🤖 AI Summary
This work addresses a critical limitation in existing SVD-based large language model compression methods, which process layers independently and neglect the propagation and accumulation of reconstruction errors across deep networks, leading to degraded global performance. To mitigate this issue, the authors propose SAES-SVD, a novel framework that explicitly models and compensates for cross-layer cumulative error under a fixed-rank constraint. The approach integrates Cumulative Error-Aware Layer Compression (CEALC) with Adaptive Collaborative Error Suppression (ACES), leveraging second-order activation statistics to derive a closed-form low-rank solution. It further employs Frobenius norm ratios and an adaptive weighting strategy to dynamically optimize the compression objective. Evaluated across diverse large language models and tasks, SAES-SVD consistently outperforms prior methods without requiring fine-tuning or mixed-rank strategies, effectively alleviating error accumulation.
📝 Abstract
The rapid growth in the parameter scale of large language models (LLMs) has created a high demand for efficient compression techniques. As a hardware-agnostic and highly compatible technique, low-rank compression has been widely adopted. However, existing methods typically compress each layer independently by minimizing per-layer reconstruction error, overlooking a critical limitation: the reconstruction error propagates and accumulates through the network, leading to amplified global deviations from the full-precision baseline. To address this, we propose Self-Adaptive Error Suppression SVD (SAES-SVD), an LLM compression framework that jointly optimizes intra-layer reconstruction and inter-layer error compensation. SAES-SVD is composed of two novel components: (1) Cumulative Error-Aware Layer Compression (CEALC), which formulates the compression objective as a combination of local reconstruction and weighted cumulative error compensation. Building on this objective, we derive a closed-form low-rank solution based on second-order activation statistics, which explicitly aligns each layer's output with its full-precision counterpart to compensate for accumulated errors. (2) Adaptive Collaborative Error Suppression (ACES), which automatically adjusts the weighting coefficient to enhance the low-rank structure of the compression objective in CEALC. Specifically, the coefficient is optimized to maximize the ratio between the Frobenius norm of the compressed layer's output and that of the compression objective under a fixed rank, thus ensuring that the rank budget is utilized effectively. Extensive experiments across multiple LLM architectures and tasks show that, without fine-tuning or mixed-rank strategies, SAES-SVD consistently improves post-compression performance.
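To make the two components concrete, here is a minimal per-layer sketch in NumPy. It is an illustration under assumptions, not the authors' exact formulation: the compensation target `T = W + λ · E X⁺` (folding the cumulative error `E` back into weight space via a pseudoinverse), the candidate grid `lambdas`, and the helper names `whitened_lowrank` / `saes_svd_layer` are all hypothetical. What it does reflect from the abstract: a closed-form rank-r solution from second-order activation statistics (Cholesky whitening of `X Xᵀ`), an objective mixing local reconstruction with weighted cumulative-error compensation, and an adaptive coefficient chosen to maximize the Frobenius-norm energy captured at the fixed rank.

```python
import numpy as np

def whitened_lowrank(M, L, L_inv, r):
    """Rank-r approximation of M under the metric induced by the whitener L.

    Returns the approximation and the fraction of Frobenius-norm energy
    that rank r captures (the ratio ACES-style selection would maximize).
    """
    U, s, Vt = np.linalg.svd(M @ L, full_matrices=False)
    ratio = np.sqrt(np.sum(s[:r] ** 2) / np.sum(s ** 2))
    M_hat = (U[:, :r] * s[:r]) @ Vt[:r] @ L_inv  # map back out of whitened space
    return M_hat, ratio

def saes_svd_layer(W, X_c, Y_full, r, lambdas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Hypothetical sketch of one-layer compression with error compensation.

    W      : (d_out, d_in) full-precision weight
    X_c    : (d_in, n) calibration activations arriving from compressed layers
    Y_full : (d_out, n) this layer's full-precision output on clean inputs
    r      : fixed rank budget
    """
    d_in = X_c.shape[0]
    # Second-order activation statistics, damped for numerical stability
    S = X_c @ X_c.T + 1e-6 * np.eye(d_in)
    L = np.linalg.cholesky(S)
    L_inv = np.linalg.inv(L)
    # Cumulative error: gap between the full-precision output and what W
    # produces on the (already perturbed) compressed activations
    E = Y_full - W @ X_c
    X_pinv = np.linalg.pinv(X_c)
    best = None
    for lam in lambdas:
        # Objective: local reconstruction + weighted cumulative-error term,
        # folded into weight space (an assumed, illustrative formulation)
        T = W + lam * (E @ X_pinv)
        W_hat, ratio = whitened_lowrank(T, L, L_inv, r)
        if best is None or ratio > best[0]:
            best = (ratio, lam, W_hat)
    ratio, lam, W_hat = best
    return W_hat, lam, ratio

# Toy usage on random data
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))
X_c = rng.standard_normal((6, 32))
Y_full = W @ X_c + 0.1 * rng.standard_normal((8, 32))  # simulated upstream drift
W_hat, lam, ratio = saes_svd_layer(W, X_c, Y_full, r=3)
```

Selecting the coefficient by captured-energy ratio (rather than raw reconstruction error) matches the abstract's rationale: a target matrix whose spectrum concentrates in the top r singular values wastes less of the fixed rank budget.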