🤖 AI Summary
This work addresses the limited effectiveness of block-wise rotation in suppressing outliers during post-training quantization and the previously unclear relationship between such rotation and input geometry. The authors provide the first non-asymptotic theoretical analysis of outlier suppression under block rotation. They propose permuting activations prior to Hadamard rotation to balance the ℓ₁ norms across blocks, thereby improving quantization accuracy. A greedy activation norm diffusion algorithm computes the permutation, and by exploiting the permutation equivariance inherent in Transformers, the permutation is fused into the weights to avoid inference overhead. Evaluated on INT4 quantization of Llama3-1B with block size 16, the method improves perplexity recovery from 46% to 90%, with consistent gains observed across all block sizes.
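The core idea of the greedy step is a balanced-partition heuristic: assign channels, heaviest expected |activation| first, to whichever block currently has the least ℓ₁ mass and still has room. A minimal sketch (the function name, tie-breaking, and capacity rule are our assumptions, not the paper's exact algorithm):

```python
import numpy as np

def greedy_balance_permutation(channel_mass, block_size):
    """Hypothetical sketch of greedy blockwise l1-mass balancing.

    channel_mass[i] is the calibrated expected |activation| of channel i.
    Channels are assigned, heaviest first, to the lightest block with free
    capacity; the returned permutation lists channels block by block.
    """
    n = len(channel_mass)
    assert n % block_size == 0, "channel count must divide into blocks"
    n_blocks = n // block_size

    order = np.argsort(channel_mass)[::-1]          # heaviest channel first
    block_sums = np.zeros(n_blocks)                 # running l1 mass per block
    block_members = [[] for _ in range(n_blocks)]

    for ch in order:
        # among blocks with free slots, pick the one with least mass so far
        open_blocks = [b for b in range(n_blocks)
                       if len(block_members[b]) < block_size]
        b = min(open_blocks, key=lambda i: block_sums[i])
        block_members[b].append(int(ch))
        block_sums[b] += channel_mass[ch]

    perm = np.concatenate([np.array(m, dtype=int) for m in block_members])
    return perm, block_sums
```

Because the permutation only reorders channels, it can in principle be folded into the preceding weight matrix offline, which is how the paper avoids runtime cost.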
📝 Abstract
Recent post-training quantization (PTQ) methods have adopted block rotations to diffuse outliers prior to rounding. While this reduces the overhead of full-vector rotations, the effect of block structure on outlier suppression remains poorly understood. To fill this gap, we present the first systematic, non-asymptotic analysis of outlier suppression for block Hadamard rotations. Our analysis reveals that outlier suppression is fundamentally limited by the geometry of the input vector. In particular, post-rotation outliers are deterministically minimized when the pre-rotation $\ell_1$ norm mass is evenly distributed across blocks. Guided by these insights, we introduce MixQuant, a block rotation-aware PTQ framework that redistributes activation mass via permutations prior to rotation. We propose a greedy mass diffusion algorithm to calibrate permutations by equalizing the expected blockwise $\ell_1$ norms. To avoid adding inference overhead, we identify permutation-equivariant regions in transformer architectures to merge the resulting permutations into model weights before deployment. Experiments show that MixQuant consistently improves accuracy across all block sizes, recovering up to 90% of the full-vector rotation perplexity when quantizing Llama3 1B to INT4 with block size 16, compared to 46% without permutations.
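To make the geometric claim concrete: a block Hadamard rotation spreads each block's mass across that block only, so an outlier can only be diluted within its own block. A small numerical sketch (Sylvester-construction Hadamard; all names here are illustrative, not the paper's code):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def block_hadamard_rotate(x, block_size):
    """Apply an orthonormal Hadamard rotation independently to each block."""
    H = hadamard(block_size) / np.sqrt(block_size)  # orthonormal rows
    return (x.reshape(-1, block_size) @ H.T).reshape(-1)

# One large outlier confined to a single block: after rotation, its
# magnitude is spread only over that block's entries, so the post-rotation
# maximum is governed by the block's l1 mass, not the whole vector's.
x = np.zeros(16)
x[3] = 8.0
y = block_hadamard_rotate(x, 4)
```

Here the outlier of magnitude 8 becomes four entries of magnitude 4 within its block; evening out blockwise ℓ₁ mass before rotating, as MixQuant does via permutation, lowers this worst-case peak further.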