WSVD: Weighted Low-Rank Approximation for Fast and Efficient Execution of Low-Precision Vision-Language Models

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing SVD-based model compression methods struggle to simultaneously achieve inference acceleration and accuracy preservation in vision-language models. This work proposes Weighted Singular Value Decomposition (WSVD), which introduces a fine-grained, importance-aware mechanism that adaptively assigns weights during low-rank approximation and combines the factorization with joint weight and activation quantization. By prioritizing critical components through this weighting strategy, WSVD maintains model accuracy while delivering over 1.8× faster decoding, significantly outperforming current low-rank compression approaches.
📝 Abstract
Singular Value Decomposition (SVD) has become an important technique for reducing the computational burden of Vision-Language Models (VLMs), which play a central role in tasks such as image captioning and visual question answering. Although multiple prior works have proposed efficient SVD variants to enable low-rank operations, we find that in practice it remains difficult to achieve substantial latency reduction during model execution. To address this limitation, we introduce a new computational pattern and apply SVD at a finer granularity, enabling real and measurable improvements in execution latency. Furthermore, recognizing that weight elements differ in their relative importance, we adaptively allocate relative importance to each element during the SVD process to better preserve accuracy, then extend this framework with quantization applied to both weights and activations, resulting in a highly efficient VLM. Collectively, we introduce~\textit{Weighted SVD} (WSVD), which outperforms other approaches by achieving over $1.8\times$ decoding speedup while preserving accuracy. We open source our code at: \href{https://github.com/SAI-Lab-NYU/WSVD}{\texttt{https://github.com/SAI-Lab-NYU/WSVD}}
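To make the core idea concrete, here is a minimal sketch of one common importance-weighted low-rank scheme: scale each input channel of a weight matrix by an importance score before the truncated SVD, then fold the inverse scaling back into the factors. The function name `weighted_lowrank` and the choice of per-channel scores are illustrative assumptions, not the paper's exact WSVD formulation.

```python
import numpy as np

def weighted_lowrank(W, s, rank):
    """Importance-weighted rank-r approximation of W.

    Sketch of a generic importance-weighted SVD (not necessarily
    WSVD itself): scale the columns of W by per-input importance
    scores s, truncate the SVD in the scaled space, then undo the
    scaling so that A @ B approximates W.
    """
    S = np.diag(s)
    U, sigma, Vt = np.linalg.svd(W @ S, full_matrices=False)
    Ur, sr, Vtr = U[:, :rank], sigma[:rank], Vt[:rank]
    A = Ur * sr                   # (out, rank): U_r diag(sigma_r)
    B = Vtr @ np.linalg.inv(S)    # (rank, in): unscale the V factor
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
s = 1.0 + rng.random(64)          # hypothetical per-channel importance
A, B = weighted_lowrank(W, s, rank=16)
# Weighted reconstruction error ||(W - AB) diag(s)||_F; by
# Eckart-Young this is optimal among rank-16 approximations.
err = np.linalg.norm((W - A @ B) * s)
```

Because the scaling is invertible, truncating the SVD of `W @ S` is exactly the rank-`r` minimizer of the weighted error, so important columns are reconstructed more faithfully than with a plain truncated SVD of `W`.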
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Low-Rank Approximation
Model Latency
Singular Value Decomposition
Efficient Inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weighted SVD
Low-Rank Approximation
Vision-Language Models
Model Quantization
Efficient Inference
Haiyu Wang
Tandon School of Engineering, New York University
Yutong Wang
Tandon School of Engineering, New York University
Jack Jiang
Courant Institute of Mathematical Sciences, New York University
Sai Qian Zhang
New York University