SEPTQ: A Simple and Effective Post-Training Quantization Paradigm for Large Language Models

📅 2026-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation and procedural complexity common in low-bit post-training quantization of large language models. The authors propose a concise and efficient two-step quantization paradigm: first, static global importance scoring identifies critical weight positions; then, a mask matrix guides column-wise quantization and updating of the weights. This approach substantially streamlines the quantization pipeline while consistently outperforming strong existing baselines across multiple models and datasets. It is particularly effective at preserving model performance under extremely low-bit settings (e.g., 2–4 bits), offering a practical route to deploying compact yet accurate language models.
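The first of the two steps above, static global importance scoring, can be sketched as follows. The scoring function (plain weight magnitude) and the keep ratio are illustrative assumptions for this sketch; the paper's actual scoring criterion is not reproduced here.

```python
import numpy as np

def importance_mask(W, keep_ratio=0.05):
    """Hypothetical static global importance scoring: score every weight
    by magnitude and flag the globally top-scoring fraction as 'important'.
    (The paper's actual scoring function may differ.)"""
    scores = np.abs(W)                       # simple magnitude-based score
    k = max(1, int(keep_ratio * W.size))     # number of positions to protect
    thresh = np.partition(scores.ravel(), -k)[-k]  # k-th largest score
    return scores >= thresh                  # boolean mask of important positions

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
mask = importance_mask(W, keep_ratio=0.05)
```

Because the threshold is computed over the whole matrix at once ("static global manner"), the mask is fixed before any quantization takes place.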

📝 Abstract
Large language models (LLMs) have shown remarkable performance across various domains, but they are constrained by massive computational and storage costs. Quantization, an effective technique for compressing models to fit resource-limited devices while preserving generative quality, encompasses two primary approaches: quantization-aware training (QAT) and post-training quantization (PTQ). QAT requires additional retraining or fine-tuning, which incurs high training cost and makes it impractical for LLMs. Consequently, PTQ has become the focus of recent quantization research. However, existing PTQ methods usually rely on complex computation procedures and suffer considerable performance degradation under low-bit quantization settings. To alleviate these issues, we propose a simple and effective post-training quantization paradigm for LLMs, named SEPTQ. Specifically, SEPTQ first calculates an importance score for each element of the weight matrix and determines the quantization locations in a static, global manner. It then uses the mask matrix, which marks the important locations, to quantize and update the associated weights column by column until the appropriate quantized weight matrix is obtained. Compared with previous methods, SEPTQ simplifies the post-training quantization procedure to only two steps while addressing effectiveness and efficiency simultaneously. Experimental results on various datasets, across a suite of models ranging from millions to billions of parameters and at different quantization bit-levels, demonstrate that SEPTQ significantly outperforms other strong baselines, especially in low-bit quantization scenarios.
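The mask-guided, column-by-column step described in the abstract might look roughly like the sketch below. This is a minimal illustration under stated assumptions: uniform symmetric round-to-nearest quantization per column, with mask-flagged weights kept at full precision; the paper's actual column update rule (in particular, how quantization error propagates to later columns) is not reproduced here.

```python
import numpy as np

def quantize_column(col, bits=3):
    """Uniform symmetric round-to-nearest quantization of one column."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(col).max() / qmax or 1.0        # guard all-zero columns
    q = np.clip(np.round(col / scale), -qmax - 1, qmax)
    return q * scale

def septq_like(W, mask, bits=3):
    """Hedged sketch of a mask-guided, column-by-column PTQ loop.
    Positions flagged in `mask` keep full precision; all other weights
    are quantized one column at a time."""
    Wq = W.copy()
    for j in range(W.shape[1]):
        col = W[:, j]
        qcol = quantize_column(col, bits)
        Wq[:, j] = np.where(mask[:, j], col, qcol)  # protect important weights
    return Wq

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
mask = np.abs(W) >= np.quantile(np.abs(W), 0.95)    # protect top-5% magnitudes
Wq = septq_like(W, mask, bits=3)
```

Keeping a small set of important weights unquantized while rounding the rest column by column is one plausible reading of the two-step pipeline; the real method presumably also compensates for accumulated quantization error when updating later columns.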
Problem

Research questions and friction points this paper is trying to address.

post-training quantization
large language models
low-bit quantization
performance degradation
quantization complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

post-training quantization
large language models
low-bit quantization
importance scoring
mask-based quantization