FLRQ: Faster LLM Quantization with Flexible Low-Rank Matrix Sketching

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses two limitations of existing low-rank post-training quantization (PTQ) methods: they rely on a globally uniform rank that cannot accommodate layer-wise data heterogeneity in large language models, and they incur high computational costs from singular value decomposition (SVD). To overcome these challenges, the authors propose FLRQ, a framework that uses R1-Sketch to construct low-rank approximations efficiently and enables outlier-aware, layer-adaptive rank selection. FLRQ further integrates a scaling-and-clipping strategy with iterative optimization based on Best Low-rank Approximation under Clipping (BLC) to suppress quantization error. Experiments show that FLRQ achieves state-of-the-art quantization accuracy across multiple models while maintaining high computational efficiency, significantly outperforming current low-rank PTQ approaches.
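The summary does not give FLRQ's exact formulation, but low-rank PTQ methods of this family commonly quantize the residual that remains after subtracting a low-rank component from the weight matrix, keeping the small low-rank factors in full precision. A minimal sketch of that idea, assuming symmetric round-to-nearest quantization and a plain SVD for the low-rank part (all function names here are illustrative, not the paper's API):

```python
import numpy as np

def quantize_sym(w, bits=4):
    """Symmetric uniform quantization (round-to-nearest) with a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale  # dequantized weights

def low_rank_ptq(w, rank, bits=4):
    """Quantize w after removing a rank-`rank` component, which stays full precision.
    Reconstruction: W_hat = Q(W - U S V^T) + U S V^T."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    lr = u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]
    return quantize_sym(w - lr, bits) + lr

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
err_plain = np.linalg.norm(quantize_sym(w, bits=4) - w)
err_lr = np.linalg.norm(low_rank_ptq(w, rank=8, bits=4) - w)
```

Because the low-rank part absorbs the dominant (and often outlier-heavy) directions, the quantizer typically sees a smaller residual and the overall reconstruction error shrinks; FLRQ's contribution, per the summary, is choosing that rank per layer and avoiding the SVD used here.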

📝 Abstract
Traditional post-training quantization (PTQ) is considered an effective approach to reduce model size and accelerate inference of large-scale language models (LLMs). However, existing low-rank PTQ methods require costly fine-tuning to determine a compromise rank for diverse data and layers in large models, failing to exploit their full potential. Additionally, the current SVD-based low-rank approximation compounds the computational overhead. In this work, we thoroughly analyze the varying effectiveness of low-rank approximation across different layers in representative models. Accordingly, we introduce Flexible Low-Rank Quantization (FLRQ), a novel solution designed to quickly identify the accuracy-optimal ranks and aggregate them to achieve minimal storage combinations. FLRQ comprises two powerful components, Rank1-Sketch-based Flexible Rank Selection (R1-FLR) and Best Low-rank Approximation under Clipping (BLC). R1-FLR applies the R1-Sketch with Gaussian projection for the fast low-rank approximation, enabling outlier-aware rank extraction for each layer. Meanwhile, BLC aims at minimizing the low-rank quantization error under the scaling and clipping strategy through an iterative method. FLRQ demonstrates strong effectiveness and robustness in comprehensive experiments, achieving state-of-the-art performance in both quantization quality and algorithm efficiency.
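The abstract does not spell out the R1-Sketch algorithm, only that it uses a Gaussian projection for fast low-rank approximation. The sketch below shows the standard Gaussian-sketch randomized range finder (in the style of Halko et al.) that such methods build on to avoid a full SVD of the large matrix; every name and parameter here is illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_sketch_low_rank(a, rank, oversample=5, seed=None):
    """Randomized rank-`rank` approximation of `a` via a Gaussian sketch.
    Only a small (rank+oversample)-wide matrix ever sees an SVD."""
    rng = np.random.default_rng(seed)
    m, n = a.shape
    omega = rng.standard_normal((n, rank + oversample))  # Gaussian test matrix
    y = a @ omega                       # sample the range of a
    q, _ = np.linalg.qr(y)              # orthonormal basis for that range
    b = q.T @ a                         # small (rank+oversample) x n matrix
    ub, s, vt = np.linalg.svd(b, full_matrices=False)   # cheap SVD on the sketch
    u = q @ ub
    return u[:, :rank], s[:rank], vt[:rank, :]

rng = np.random.default_rng(1)
# synthetic matrix with a fast-decaying spectrum plus small noise
a = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 120)) \
    + 0.01 * rng.standard_normal((200, 120))
u, s, vt = gaussian_sketch_low_rank(a, rank=10, seed=2)
approx = u @ np.diag(s) @ vt
rel_err = np.linalg.norm(a - approx) / np.linalg.norm(a)
```

The cost is dominated by two passes over `a` (the products with `omega` and `q.T`), which is why sketching is attractive for per-layer rank search where a full SVD per candidate rank would be prohibitive.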
Problem

Research questions and friction points this paper is trying to address.

post-training quantization
low-rank approximation
large language models
computational overhead
rank selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank quantization
Post-training quantization
R1-Sketch
Flexible rank selection
Large language models
Authors
Hongyaoxing Gu — Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Lijuan Hu — Institute of Software, Chinese Academy of Sciences
Shuzi Niu — Institute of Software, Chinese Academy of Sciences
Fangfang Liu — Institute of Software, Chinese Academy of Sciences; Key Laboratory of System Software (Chinese Academy of Sciences)