🤖 AI Summary
This work addresses the key challenges in post-training quantization (PTQ) of autoregressive vision generation (ARVG) models, namely channel-wise outliers, highly dynamic token-level activations, and mismatched sample distributions. It presents the first systematic analysis of these issues and introduces PTQ4ARVG, a training-free quantization framework comprising three components: Gain-Projected Scaling (GPS), Static Token-Wise Quantization (STWQ), and Distribution-Guided Calibration (DGC), built respectively on a Taylor-expansion analysis of quantization loss, static per-token scaling, and an entropy-driven sampling strategy. Without requiring any fine-tuning, PTQ4ARVG compresses diverse ARVG models to 8-bit and even 6-bit precision, substantially reducing model size and inference latency while preserving generation quality, thereby achieving notable improvements in quantization accuracy and generalization.
📝 Abstract
AutoRegressive Visual Generation (ARVG) models retain an architecture compatible with language models while achieving performance comparable to diffusion-based models. Quantization is commonly employed in neural networks to reduce model size and computational latency. However, applying quantization to ARVG remains largely underexplored, and existing quantization methods fail to generalize effectively to ARVG models. In this paper, we explore this issue and identify three key challenges: (1) severe outliers at the channel-wise level, (2) highly dynamic activations at the token-wise level, and (3) mismatched distribution information at the sample-wise level. To address these challenges, we propose PTQ4ARVG, a training-free post-training quantization (PTQ) framework consisting of: (1) Gain-Projected Scaling (GPS), which mitigates channel-wise outliers by expanding the quantization loss as a Taylor series to quantify the gain of scaling for activation-weight quantization, and derives the optimal scaling factor through differentiation; (2) Static Token-Wise Quantization (STWQ), which leverages two inherent properties of ARVG, fixed token length and position-invariant distributions across samples, to address token-wise variance without incurring dynamic calibration overhead; and (3) Distribution-Guided Calibration (DGC), which selects the samples that contribute most to distributional entropy, eliminating the sample-wise distribution mismatch. Extensive experiments show that PTQ4ARVG can effectively quantize the ARVG family of models to 8-bit and 6-bit while maintaining competitive performance. Code is available at http://github.com/BienLuky/PTQ4ARVG.
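To make the activation-weight scaling idea concrete, here is a minimal NumPy sketch of channel-wise scaling in the SmoothQuant style, where a per-channel factor migrates outlier magnitude from activations into weights while keeping the matrix product unchanged. Note the closed-form migration-strength rule used below is an assumption for illustration; the paper's GPS instead derives its optimal factor analytically by differentiating a Taylor expansion of the quantization loss, whose details are not reproduced here.

```python
import numpy as np

def channel_scale(X, W, alpha=0.5):
    """Per-input-channel scaling for activation-weight quantization.

    X: activations of shape (tokens, C_in)
    W: weights of shape (C_in, C_out)
    alpha: migration strength (hypothetical heuristic, not the GPS derivation).
    """
    act_max = np.abs(X).max(axis=0)          # per-channel activation range
    w_max = np.abs(W).max(axis=1)            # per-channel weight range
    # Balance the two ranges; outlier channels get s > 1, shrinking X.
    return act_max ** alpha / w_max ** (1.0 - alpha)

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))
X[:, 3] *= 50.0                              # inject a channel-wise outlier
W = rng.normal(size=(8, 4))

s = channel_scale(X, W)
X_s, W_s = X / s, W * s[:, None]             # mathematically equivalent product
```

After scaling, `X_s @ W_s` equals `X @ W` up to floating-point error, but the outlier channel of `X_s` is compressed, so uniform quantization of `X_s` loses far less precision.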
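The static token-wise idea can also be sketched directly: because ARVG generates a fixed number of tokens and each position's activation distribution is stable across samples, one quantization scale per token position can be precomputed offline from calibration data, avoiding any runtime (dynamic) calibration. The shapes and the max-based calibration rule below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def calibrate_token_scales(calib_acts, n_bits=8):
    """Precompute one static scale per token position.

    calib_acts: calibration activations of shape (num_samples, T, C),
    where T is the fixed ARVG token length (assumption: simple absmax
    calibration; the paper's STWQ calibration rule may differ).
    """
    qmax = 2 ** (n_bits - 1) - 1
    amax = np.abs(calib_acts).max(axis=(0, 2))   # range per token position
    return amax / qmax                            # shape (T,)

def fake_quant(x, scales):
    """Simulate symmetric token-wise quantization of one sample (T, C)."""
    q = np.clip(np.round(x / scales[:, None]), -128, 127)
    return q * scales[:, None]

rng = np.random.default_rng(1)
acts = rng.normal(size=(4, 10, 6))               # 4 samples, T=10, C=6
scales = calibrate_token_scales(acts)
dq = fake_quant(acts[0], scales)
```

At inference no per-sample statistics are gathered; the fixed `scales` vector is simply reused, which is what removes the dynamic-quantization overhead.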
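Finally, the calibration-selection idea can be illustrated with a greedy entropy criterion: repeatedly add the sample whose activation histogram most increases the entropy of the pooled calibration distribution. This greedy scheme and the histogram parameters are hypothetical stand-ins; the paper's DGC may score and select samples differently.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (zero bins ignored)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def select_calibration(samples, k, bins=32, lo=-4.0, hi=4.0):
    """Greedily pick k samples maximizing pooled-histogram entropy.

    samples: list of 1-D activation arrays (a hypothetical flattening of
    per-sample activations). Returns the chosen sample indices.
    """
    chosen, pooled = [], np.zeros(bins)
    for _ in range(k):
        best_i, best_e = None, -1.0
        for i, s in enumerate(samples):
            if i in chosen:
                continue
            h, _ = np.histogram(s, bins=bins, range=(lo, hi))
            cand = pooled + h
            e = entropy(cand / cand.sum())
            if e > best_e:
                best_i, best_e = i, e
        chosen.append(best_i)
        h, _ = np.histogram(samples[best_i], bins=bins, range=(lo, hi))
        pooled += h
    return chosen

rng = np.random.default_rng(2)
pool = [rng.normal(scale=0.2 + 0.3 * i, size=256) for i in range(8)]
picked = select_calibration(pool, k=3)
```

The intuition matches the abstract: a small calibration set whose pooled distribution has high entropy covers the activation range seen at inference, avoiding the sample-wise distribution mismatch of naive random calibration.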