GlowQ: Group-Shared LOw-Rank Approximation for Quantized LLMs

📅 2026-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes GlowQ, a novel low-bit quantization method that addresses the performance degradation of large language models at ultra-low bitwidths (e.g., 4-bit) while mitigating the latency and memory overhead of existing low-rank correction approaches. GlowQ introduces the first group-shared low-rank approximation mechanism, which caches a single shared right factor per input group, reuses high-precision projections, and applies corrections only to the most beneficial layers or groups, thereby substantially reducing parameter and computational redundancy. Its lightweight variant, GlowQ-S, further activates correction modules selectively based on their efficacy. Experiments show that GlowQ reduces time-to-first-byte (TTFB) by 5.6%, increases throughput by 9.6%, lowers WikiText-2 perplexity by 0.17%, and improves downstream task accuracy by 0.42 percentage points; GlowQ-S achieves a 23.4% reduction in TTFB and 37.4% higher throughput with less than 0.2 percentage points of accuracy loss.

📝 Abstract
Quantization techniques such as BitsAndBytes, AWQ, and GPTQ are widely used as standard methods for deploying large language models, but they often degrade accuracy at low-bit representations, e.g., 4 bits. Low-rank correction methods (e.g., LQER, QERA, ASER) have been proposed to mitigate this issue; however, they restore all layers and insert error-correction modules into every decoder block, which increases latency and memory overhead. To address this limitation, we propose GlowQ, a group-shared low-rank approximation for quantized LLMs that caches a single shared right factor per input-sharing group and restores only the groups or layers that yield the highest accuracy benefit. GlowQ computes the high-precision projection once per input-sharing group and reuses it across the group's modules, reducing parameter and memory overhead while retaining the expressivity of layer-specific corrections. We also propose a selective variant, GlowQ-S, that applies the cached shared module only where it provides the largest benefit. Compared with strong baselines, our approach reduces TTFB by 5.6% and increases throughput by 9.6% on average, while reducing perplexity on WikiText-2 by 0.17% and increasing downstream accuracy by 0.42 percentage points. The selective variant GlowQ-S further reduces latency, cutting TTFB by 23.4% and increasing throughput by 37.4%, while maintaining accuracy within 0.2 percentage points on average.
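The group-sharing idea in the abstract can be sketched in a few lines: modules that consume the same input (e.g., the Q/K/V projections of one attention block) share a single right factor, so the high-precision projection is computed once per group and reused by each module's per-layer left factor. This is a minimal illustration, not the paper's implementation; the rank, group layout, uniform fake-quantizer, and the choice of fitting the shared right factor via an SVD of the stacked quantization errors are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hidden size and correction rank (illustrative values)

def fake_quantize(W, bits=4):
    """Uniform symmetric fake-quantization, for illustration only."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

# Three modules in one input-sharing group (e.g., Q, K, V projections).
Ws = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]
Wqs = [fake_quantize(W) for W in Ws]
Es = [W - Wq for W, Wq in zip(Ws, Wqs)]  # per-module quantization errors

# Shared right factor: top-r right singular vectors of the stacked errors
# (an assumed fitting recipe, not necessarily the paper's).
_, _, Vt = np.linalg.svd(np.vstack(Es), full_matrices=False)
V_shared = Vt[:r]                       # (r, d), orthonormal rows

# Per-module left factors; since V_shared has orthonormal rows,
# the least-squares fit of E ≈ U @ V_shared is U = E @ V_shared.T.
Us = [E @ V_shared.T for E in Es]       # each (d, r)

x = rng.standard_normal(d)
p = V_shared @ x                        # computed ONCE per group, then reused

# Corrected quantized outputs: W_q x + U_i (V_shared x).
outs = [Wq @ x + U @ p for Wq, U in zip(Wqs, Us)]
```

With a layer-specific correction, each of the three modules would carry its own rank-r right factor and recompute its own projection; here the group stores one `V_shared` and computes `p` once, which is the source of the parameter and latency savings the abstract describes.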
Problem

Research questions and friction points this paper is trying to address.

quantization
low-rank approximation
large language models
accuracy degradation
memory overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

group-shared low-rank approximation
quantized LLMs
latency reduction
memory efficiency
selective error correction
Selim An
Department of Artificial Intelligence, DGIST, Korea
Ilhong Suh
COGA robotics, Korea
Yeseong Kim
Associate and Distinguished Professor, DGIST
Brain-inspired HD Computing · Lightweight AI · System/Architecture Design for AI and IoT ecosystems