MatGPTQ: Accurate and Efficient Post-Training Matryoshka Quantization

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes MatGPTQ, the first post-training Matryoshka quantization method that enables single-pass calibration to produce sliceable multi-bit models without costly quantization-aware training. Addressing the lack of efficient post-training solutions and open-source support in traditional Matryoshka quantization, MatGPTQ jointly optimizes multiple precision targets using a small calibration set. Its key innovations include cross-bit error compensation, budget-aware heterogeneous per-layer bit-width search, and a customized mixed-precision inference kernel. Experiments demonstrate that MatGPTQ significantly improves low-bit performance on standard large language models while preserving high-bit accuracy, establishing a new state of the art in post-training Matryoshka quantization. The authors release their code to facilitate practical adoption and further research.

📝 Abstract
Matryoshka Quantization (MatQuant) is a recent quantization approach showing that a single integer-quantized model can be served across multiple precisions, by slicing the most significant bits (MSB) at inference time. This enables a single checkpoint to cover a wide range of memory and latency budgets, but renders quantization much more challenging. In particular, the initial MatQuant relies on expensive quantization-aware training (QAT) variants, rather than fast one-shot post-training quantization (PTQ), and lacks open-source and kernel support. We address all of these limitations by introducing Post-Training Matryoshka Quantization (MatGPTQ), a new PTQ pipeline that produces a single parent model jointly optimized for multiple target precisions in one shot, based on a small calibration set. MatGPTQ casts Matryoshka quantization as a multi-precision objective with bit-slicing and cross-bit error compensation, resulting in an algorithm that produces a multi-bit-width, "sliceable" model in a single pass. We also incorporate a new budget-aware search for heterogeneous per-layer bit-widths and provide efficient kernels that implement slicing and mixed-precision execution. Across standard LLMs and benchmarks, MatGPTQ preserves high-bit accuracy while substantially improving performance at low-bit-width settings. Overall, we establish a new state of the art for Matryoshka-style post-training quantization and make single-checkpoint, multi-precision deployment open and practical. Code is available at https://github.com/IST-DASLab/MatGPTQ.
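To make the bit-slicing idea concrete, here is a minimal sketch (not the authors' released code) of how an int8-quantized weight tensor can be served at 4-bit precision by keeping only its most significant bits; the function name `slice_msb` and the bit-widths are illustrative assumptions:

```python
import numpy as np

def slice_msb(q, src_bits=8, dst_bits=4):
    """Keep the dst_bits most significant bits of src_bits integer codes.

    For signed codes this is an arithmetic right shift; the dequantization
    scale of the sliced model is rescaled by 2**(src_bits - dst_bits).
    """
    shift = src_bits - dst_bits
    return q >> shift  # NumPy's >> is arithmetic for signed dtypes

# Example: int8 codes sliced down to the int4 range [-8, 7].
q8 = np.array([-120, -5, 0, 7, 127], dtype=np.int8)
q4 = slice_msb(q8)  # -> [-8, -1, 0, 0, 7]
```

The point of the Matryoshka setup is that the *same* stored codes serve every precision: lower-bit variants are obtained at load or inference time by this shift, without storing separate checkpoints.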
Problem

Research questions and friction points this paper is trying to address.

Matryoshka Quantization
Post-Training Quantization
Multi-Precision Deployment
Model Compression
Efficient Inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matryoshka Quantization
Post-Training Quantization
Bit-Slicing
Multi-Precision Optimization
Efficient Kernels
Maximilian Kleinegger
Vienna University of Technology, Vienna, Austria; Institute of Science & Technology Austria (ISTA), Vienna, Austria
Elvir Crnčević
Red Hat AI, Boston, USA
Dan Alistarh
Professor at IST Austria
Machine Learning · Algorithms · Distributed Computing