AA-SVD: Anchored and Adaptive SVD for Large Language Model Compression

📅 2026-04-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing SVD-based compression methods for large language models struggle to simultaneously preserve the fidelity of the original outputs and account for the input distribution shifts induced by upstream compression, which leads to error accumulation and performance degradation. This work proposes a fast, training-free low-rank compression framework that, within each single-layer decomposition, jointly anchors the original output and models the shift in the input distribution. It further optimizes entire Transformer blocks end-to-end to minimize block-level output distortion, combining output anchoring, adaptive input modeling, and block-level joint optimization. The method consistently outperforms existing SVD-based baselines across compression ratios and remains stable even under aggressive compression, avoiding the catastrophic performance collapse observed in other approaches.
๐Ÿ“ Abstract
We introduce a fast low-rank factorization-based framework for compressing large language models that enables rapid compression of billion-parameter models without retraining. Unlike existing factorization-based approaches that optimize only on the original inputs, ignoring distribution shifts from upstream compression and thus propagating errors forward, or those that rely only on shifted inputs and risk drifting away from the original outputs, our approach accounts for both. Beyond individual layer compression, we further refine each transformer block end-to-end, minimizing block-level output distortion and allowing compressed layers to jointly compensate for accumulated errors. By anchoring each compressed layer to the original outputs while explicitly modeling input distribution shifts, our method finds a low-rank approximation that maintains functional equivalence with the original model. Experiments on large language models show that our method consistently outperforms existing SVD-based baselines across compression ratios, with the advantage becoming increasingly pronounced at aggressive compression budgets, where competing methods degrade substantially or collapse entirely, offering a practical solution for efficient, large-scale model deployment.
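The core idea of the abstract — factorize a layer's weight while weighting the decomposition by the inputs the compressed layer will actually see — can be illustrated with a small activation-aware truncated SVD. This is a hedged sketch in the spirit of the approach, not the authors' exact algorithm; the function name, the jitter constant, and the shapes are illustrative assumptions.

```python
import numpy as np

def lowrank_compress(W, X_shift, rank):
    """Rank-r factorization of W, weighted by the second moment of the
    (shifted) calibration inputs, in the spirit of activation-aware SVD.
    Names and API here are illustrative, not the paper's."""
    # Second moment of the inputs the compressed layer will actually see.
    G = X_shift @ X_shift.T / X_shift.shape[1]
    # Cholesky factor acts as a whitening transform (jitter for stability).
    S = np.linalg.cholesky(G + 1e-6 * np.eye(G.shape[0]))
    # SVD of the whitened weight; truncation is then optimal in the
    # input-weighted norm rather than the plain Frobenius norm.
    U, s, Vt = np.linalg.svd(W @ S, full_matrices=False)
    B = U[:, :rank] * s[:rank]          # (d_out, r)
    A = Vt[:rank] @ np.linalg.inv(S)    # (r, d_in)
    return B, A

# Usage: compress a 64x64 layer to rank 16 on random calibration data.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
X = rng.standard_normal((64, 256))      # stand-in for shifted inputs
B, A = lowrank_compress(W, X, rank=16)
err = np.linalg.norm(W @ X - B @ (A @ X)) / np.linalg.norm(W @ X)
```

The paper's contribution goes beyond this single-layer step: it additionally anchors each compressed layer to the original (pre-compression) outputs and refines whole Transformer blocks end-to-end, which this sketch does not attempt.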
Problem

Research questions and friction points this paper is trying to address.

large language model compression
low-rank factorization
distribution shift
error propagation
functional equivalence
Innovation

Methods, ideas, or system contributions that make the work stand out.

low-rank factorization
model compression
distribution shift
functional equivalence
end-to-end block refinement
Atul Kumar Sinha
University of Geneva, Geneva, Switzerland
François Fleuret
University of Geneva
machine learning